Am I alone expecting a relationship to last 6 months because only NRE makes relationships worthwhile? That is how long rosy goggles last before reality sets in
I have a random question that my internet research seems to be failing me on:
Do some sanctuary cities really release criminals (including violent ones) instead of releasing them for ICE to deport when ICE asks? If so, why?
I understand that they're not legally required to cooperate with ICE, but why would they choose to release sometimes violent criminals who aren't even supposed to be here back into thier communities?
To encourage cooperation with local cops from immigrant communities who are suspicious of ICE, and would extend that suspicion to you if you're seen as being in cahoots with them. It's a coherent policy.
I understand not asking for immigration status when going to school, hospital, reporting a crime, etc.
But it's hard for me to believe that the cooperation gained is worth directly releasing violent criminals back into your community. I would think the immigrant community would want to not have violent criminals around just as much as anyone else, even if that means deporting the criminals.
In Minnesota if someone commits a crime that results in a prison sentence the state prison system _does_ coordinate with ICE. In some jurisdictions the city or county won’t keep someone in jail beyond the end of their sentence for something like a DWI, but release dates are published on their web sites and all ICE has to do show up with a judicial warrant to arrest them on release.
Thanks for this info. I've seen the argument that it's not the state's job (and in fact isn't clear it's legal) to hold people for extra time for ICE to pick them up, but the way I've heard it portrayed, some jurisdictions actively avoid telling ICE when they're releasing criminals, I guess just to spite ICE. I was thinking maybe it was a jail vs prison thing, but the New York Post has examples of violent people being released from prison despite an ICE detainer.
I wish there were more coverage of what's going on with this outside the fever swamps...
A couple years ago I hired Trace Woodgrains to write about a legal dispute that I thought Bryan Johnson was lying about and now Bryan Johnson is taking me to court to try to get access to my private messages. https://raspyaspie.substack.com/p/truth-averse-vampire-bryan-johnson
An interesting drama. I think there should be some kind of textbook "things to avoid doing at job", and (a short version of) this should be a chapter there. "Do not work in your boyfriend's company for minimum wage, because when he one day decides to kick you out of his life, you will lose your relationship, your income, your health insurance, your legal advice, and the roof above your head, all at the same day."
I think more explanation/argument is needed to connect this with the negative vibes.
Do corporate/wealthy donors have more of a stake in encouraging satisfaction or disatisfaction with the status quo? A priori, I would think the former as that is the ecosystem in which they gained wealth, so it suggests that structure is good for them, while a change would, at a minimum, increase uncertainty.
FWIW, I disagree with the dichotomy proposed below. There are certainly a range of other opinions about Citizens United. Some examples:
(1) People who thought it was rightly decided, but that it wouldn't substantially change spending behavior
(2) People who thought it was rightly decided and would change spending, but the spending wouldn't change voters' attitudes
(3) People who think CU was rightly decided as a matter of law, but has a wrong outcome as a matter of policy
(4) People who thought it was wrongly decided [and/but] [(add your follow-on implication)]
> Do corporate/wealthy donors have more of a stake in encouraging satisfaction or disatisfaction with the status quo? A priori, I would think the former as that is the ecosystem in which they gained wealth, so it suggests that structure is good for them, while a change would, at a minimum, increase uncertainty.
One would think, but there are clearly donors who spare no expense to dismantle the New Deal/post-WW2 liberal consensus and world structure that made them rich. Thiel, Musk, other SV MAGA personas, traditional Heritage Foundation sponsors etc etc
The ecosystem in which they gained wealth -- notice the past tense -- is not needed for them anymore. Even worse, it is an ecosystem in which their potential competitors could replace them.
Capitalism is good for startups. The established companies prefer feudalism.
That chart presumably explains why "vibes are bad" for people who think Citizens United was wrongly decided, except that that crowd presumably already expressed that vibes were bad.
For the people who believe CU was rightly decided, the chart explains the opposite - why the vibes are actually good. But as with the first crowd, this was already known.
Is vitalism without fascism possible? Is it a valid idea?
One thing I have learned is that most of our goals are instrumental, not terminal, and terminal goals can be really hard to figure out. I want a job because I want money because I want a motorbike because that is fun, but wait a bit, maybe if I need such a thrill maybe I am depressed, maybe a non depressed person needs less thrill and is content enough on a simple bicycle? This is how terminal goals are hard.
What are your political terminal goals? You are compassionate and you want to minimize suffering? You turn your country into a boring hospital ward, dancing is not allowed because you might break your leg.
You want to maximize average happiness? Welcome to the society of the Lotus Eaters, Brave New World, where nothing cool is ever happening because everybody is happy because they are drugged up.
Vitalism is the idea you optimize for strength and success, not happiness or lack of suffering. You want a society of Nobel Prize winner, Olympic winners and great artists, and you do not care whether people are happy or suffer.
Problem1: this can get cruel.
Problem 2: this was Mussolini's idea.
Problem 3: ultimately the reason for this is purely aesthetic, you get a society that is like really cool movie, really exciting, really beautifully-heroic. Is that a valid goal?
I have a bunch of takes on this. Probably you won't find it very satisfying. But perhaps worth consideration nonetheless, since you and I tend to be of similar mind.
A) There's a duality called Causal Reasoning vs Effectual Reasoning. Causal Reasoning means you start with an end-goal, and work *backwards* about how to get there. Effectual Reasoning means you take stock of the resources you have, and then iteratively plan *forward* regarding what the next step will be. Causal Reasoning is best suited to domains that are well-mapped, like math and engineering. Effectual Reasoning is best suited to domains where uncertainty is inherent, like business/sales. Sometimes you have to cross the river by feeling the stones.
B) Another take is to echo Lapras. I often get triggered by the saying "moderation in all things" because often, it's just a way to reinforce biases without thinking hard about them. Or as Eliezer would say, "it's a semantic stopsign". But there's a grain of truth, which is that optimization-problems usually require a balanced investment into a variety of variables, rather than min-maximizing any one particular variable. And what does a "balanced investment portfolio" look like? Well, it depends on your ecological niche. Which segues into my next point.
C) I do think that most people these days over-index into "optimize for happiness" a bit too strongly. Have you read that thread of mine about communication mediums [0]? I think that the ills of the modern age are, to a large extent, structurally determined. The Vitalists are on to something, in that regard.
EDIT: wait, you're a different carlos than the one i normally see in here.
This is interesting, because if we introduce effectual reasoning, ideological differences basically disappear. I think at this point it is obvious to everybody from the furthest left to the furthest right, that the No. 1 effectual political goal should be building social housing?
> because if we introduce effectual reasoning, ideological differences basically disappear
Eh... yes and no. I think I'd say "put into context". On one hand, there's a Japanese saying: "if you meet buddha on the road, kill him". As I understand, there's several meanings baked in here. But one meaning is "don't become too attached to any particular idea". It's important to stay open-minded. But on the other hand, this doesn't completely erase value differences. People often want different things, and it's not always possible to satisfy everyone. I'm especially reminded of a 2-player game (in the context of Game Theory) called "Battle of the Sexes". In which a husband wants to go to a Prize Fight, and the wife wants to go the Ballet. But both would rather go to the same event than attend their preferred events separately. Compromise is inevitable.
(P.S. are you familiar with Jonathan Haidt? He's got a model called "Moral Foundations Theory" [0] where he says something like "conservatives and progressives differ along these 5 + 1 values".)
----
> I think at this point it is obvious to everybody from the furthest left to the furthest right, that the No. 1 effectual political goal should be building social housing?
Since you sound interested in the Housing Shortage specifically, maybe you'll find my rant about housing interesting as well. I agree that it's a problem. Granted, I don't follow the discourse very closely. But my mental model says that the shortage is actually a proxy for two different problems: jobs/commutes; and monetary policy.
A) jobs/commutes.
Hypothetically, you could build a bunch of houses out in the U.S. badlands. But nobody would move there, because that's not where the jobs are. It's not arable, there's no logging, etc. What are people gonna do out there? The location has to be economically productive. (And in earlier eras, it would need to be militarily defensible. And on that note, why does Poland even exist?) People want houses where the jobs are, which is generally near the city. But there's only so much space near the city, unless you're ok with commuting 4 hours a day. They say that the Japanese are a decade into the future. And if we look at the Japanese situation, the majority of the population is concentrated into a megalopolis [1] that runs horizontally across the southern plains, where everyone lives in shoebox apartments. Because that's where the jobs are. Because mountain-ranges are not economically productive (unless you're playing as Pachacuti).
B) monetary policy.
Moldbug has a theory of money he calls "the savings theory of money" [2] (though really, he ought to have called it the "cache theory of cash". He's a programmer! How did he miss that one!) It's not entirely original to him, though. I think he was riffing off someone from the Austrian School.
Anyway, his thesis is that the essence of money is actually a bundle of 3 different functions: unit of account; unit of exchange; (long-term) store of value. "Unit of account" (AKA the "numeraire") means the measure in which prices are denominated. "unit of exchange" means what actually gets traded at the point-of-sale. "store of value" means what you put your savings into, in order to preserve your net worth into the future. Most people think that the *primary* purpose of money is to serve as a unit of exchange. But Moldbug argues that the *primary* purpose of money is to serve as a store of value.
The interesting observation here, is that money is whatever commodity has the highest longterm rate of appreciation. If a commodity (e.g. gold) has a high ROI, then people want to put their money into it. Which causes the ROI to go even higher. Which causes more people to put their money into it. This forms a positive-feedback loop, also known as a bubble. But importantly, it's the bubble that never pops. Because it's a bubble that tends to absorb all the bubbly-ness of other commodities, and thus forms a stable equilibrium. Which is why, in my headcanon, I prefer to call this theory "The Highlander Theory of Money" [1]. There can only be *one*.
Now, let's extrapolate beyond moldbug a bit, and follow this theory to its logical conclusions. What happens when fiat money is subject to chronic inflation? It no longer serves as a bubbly store of value. (N.b. it still serves as a unit of exchange, because the govt forces us to pay taxes and settle debts with it. That's why "This note is legal tender for all debts, public and private" is printed on each USD.) But people are always going to want to park their savings somewhere with a high ROI. Mammon demandeth Capital Accumulation. So what, in our modern era of Fiat Currency, serves as a worthy investment-vehicle? Stocks? Bonds? Housing? if you said any of the above, congrats, you get a gold-star.
(Disclaimer: the next part of this theory, Sydney helped me refine a while back.)
I think that there's basically a hierarchy of Highlanderism.
- Housing
- Stocks
- Bonds
- Fiat
Fiat is basically the most liquid, and housing has the most long-term value. People generally want to park their money in the highest part of the hierarchy. But suppose there were a housing crash. Then people would put their money into equities. But suppose the Magnificent 7 crashed. Then people would park their money in bonds. But suppose the Yield Curve inverted. Then people would park their money in fiat (I believe the term of art is "Liquidity Trap" [3]). But suppose fiat currency underwent hyperinflation. Well... then we're all in a bad spot. Let's just put it that way.
However, Sydney adds a twist. It's not the ROI that's important per se, but rather the abstract ability of an asset class to *project value into the future*. This is a subtle but important distinction, because what she's implying is that housing is also a *politically-protected* asset class. I.e. it's protected by things like zoning laws. It's not just about having the highest Sharpe Ratio, it's about avoiding the Risk of Ruin. (yea yea yea, I'm sorry about the "not x, it's y" construction. But it was true and necessary.)
Let's zoom-out a bit and discuss monetary policy and political incentives. One of the things that moldbug likes to complain about, is how democracy is basically a Workers' Cooperative. Question: why are there no workers' coops in the Fortune 500? Answer: they implode when they get too large, because special-interest groups bleed them dry. E.g. suppose I'm part of a workers' cooperative grocery store, and maybe I like strawberries. So I vote for a sale on strawberries. Also, everyone else votes for sales on their favorite items too. Oops, there's no revenue! The store is insolvent! Does this scenario remind you of anything? As in, literally every modern westernized nation-state?
Let's extrapolate again. What happens when a democracy accumulates too much debt? It can tax, inflate, or default. Nobody likes tax hikes, and nobody wants the sudden pain of a default. So the most politically-expedient option is inflation. It's a soft tax. So what happens is: the government prints money, and this lowers interest rates. This makes it easier for the government to pay its interest payments. But also it causes mortgage prices to go up! It turns out, when it becomes easier to borrow money, then people tend to borrow more money! Go figure. (N.B. by a similar chain of logic, moldbug argues that the root cause of the 2008 GFC is Maturity Transformation. Tranche shenanigans were merely the proximate cause. But that's another discussion.)
In sum:
I believe the current Housing Crisis is a structural inevitability of monetary/fiscal policy. It's just a hidden ramification of democracy. Young people who want to be first-time homeowners get squeezed out of the market. Coincidentally, young people don't vote.
Additionally, YIMBY's are fighting an uphill battle. Because the mechanism by which monetary policy affects the housing market tends to be invisible/illegible. Which is precisely why this scenario forms such a politically-stable equilibrium. Mammon demandeth Capital Accumulation. And when the economy struggles, someone is always left holding the bag. So as the modern world gradually reverts to Malthusian equilibrium, the bag naturally falls on the counterparty whose voice is least legible.
Thus, I don't really know whether it's possible to "fix" the housing crisis. Because I don't think the housing crisis is actually about housing. It's actually about employment and addiction to national debt, against the backdrop of an economy that's losing steam.
(btw. Since you mentioned Vitalism: BAP's term "longhouse" is a complaint about gerontocracy. The chuds who think it's about Feminism are retarded. But I think it was Covid Lockdowns that radicalized him, rather than the housing shortage.)
Well I certainly think if we were paid in gold (Bretton Woods) housing would be like 40% cheaper than in 2000… so yes, that, it is not housing that is expensive but fiat money is worthless.
Coops… that happened with the Dutch health insurance system. It was all co-ops, and everybody admired how wonderfully non-statist socialist it is. Then the consultants got in an talked about demutualization, the members become shareholders then they can cash out. Everybody got big $-$ signs in their eyes (or whatever was the sign for the gulden), and it happened. So they ended up practically with Obamacare / Romneycare.
Same with the Israeli kibbutzes BTW they got individual houses, jobs outside the kibbutz, hired wage labor in the kibbutz, they even started investing in the stock exchange and when they lost that and asked for a bailout, the government told them OK one bailout once, but the price is to stop pretending to be socialist because this pretense is ridiculous now and privatize. Mostly they are just towns now that offer better than the average healthcare and housing services and that is about all.
I think Haidt really fucked up that theory. He just asked the wrong question. Liberals also like authority - the whole trust science, trust expert things. They like in-group loyalty, they just don’t define that group along the usual ethno-national-religious lines, but more like people with the right kind of political opinions. Purity, well, look at all the pedophile-hunting, that is not simply prevention of harm but a real case of disgust over defiling something pure. Haidt sounds like he is describing the super independent minded liberals of 1975, the Frank Zappa types who were more libertarian anyhow.
Haidt’s quasi-libertarian, which may be the problem.
Big test for me of his theory was the COVID thing—conservatives have more disgust sensitivity, right? So they should be more worried about the plague…oops.
You could argue it got trumped by loyalty and authority once Trump decided it wasn’t a big deal. You did actually see more people wearing masks on the right in the very early stages of the pandemic when the left was more worried about anti-Chinese racism, but that’s now forgotten.
lol. Fair. I don't actually have strong opinions about Haidt. He's just the first guy I think of when I hear "value differences". But that sounds like a reasonable critique.
Uh, sorry. I acknowledge it sounds like we're on the same team. But the project you're working on doesn't especially resonate with me. It feels like you're just doubling down on apollonianism, rather than investigating beyond the streetlight, or transcending the modern frame. I'm coming from a place where I'm trying to understand the structural forces of the modern age. Life-year maxing strikes me as a surface-level, lowest-common-denominator compromise. Not a structural-level investigation of from whence liberal cosmopolitanism arises, what the backlash represents, or what succeeds late-state capitalism (as Carlos notes, it sure isn't "kibbutzes").
Put another way: I'm telling Carlos to keep an open mind, not out of political expediency, but because I believe that all modern political ideologies are delusional about the true nature of the modern age. Albeit, some are less delusional than others. Reasoning about the modern age requires an effectual mindset, since this domain is mapped quite poorly.
Generally, thinking about "terminal values" only works when the domain is well-understood. If you want to travel to Narnia, but you don't know where that is, maybe you should figure that out, before plotting a journey and timetable? And since the topic of modernity is so poorly understood, it follows that the maximization of "life-years" is yet another premature optimization. (And yes, it's premature. If life-years were all that people cared about, Koreans would not be throwing themselves off the Mapo Bridge.) The correct thing to do isn't to "maximize" anything, but rather to explore and gather information. (Or, if the problem is intractable, to enter a defensive posture). Only once you understand what's going on with confidence, is it sane and reasonable to commit to a terminal goal.
The key missing piece is you cannot attempt to prevent ruin/max life-years w/o asking 'which life-years are we counting and how'. So I agree if you want to count 'years of life in Narnia', you must figure that out. However, downstream of that irreducible choice/preference, life-years generalize both evolutionary and democratic principles...in finite time:
"Every measure is bad if taken to extremes" is not actually a reason to give up and do nothing. The actual answer is to pursue things that seem good and stop pursuing them once it is bad. The political system actually enforces this, albeit imperfectly and with a time lag. Noone is going to ban dancing, because they'd have lost power long before that point.
I guess that depends on why they're late. At the very least, we should serious look into whether it would be feasible to make the trains here more reliable. After all, other countries manage to have much more reliable trains.
Oh, it's certainly going to be tracked. I don't think there would be much benefit to anyone with an American IP address using it.
In my experience, Tor is obscenely slow. I expect this to be actually usable.
A regular VPN might be a better comparison. Yes, from what is reported, this is going to be a web proxy that will be functionally quite similar (and might even include an actual VPN service; still unclear) for most normal users. The primary advantage, afaict, is that you don't need to install anything on your device.
If that's correct, I'm curious how much use it will get and why.
Most social media sites block tor and vpn IP addresses, because they're "suspicious" - it'd be very like the Trump admin to strongarm American social media sites into NOT blocking this, but some websites will surely block any proxy, foreign social media sites will have the opposite political pressures, and it's a foregone conclusion that a state-run proxy will be spyware, so using this won't make sense for very many people. For example, the UK has the most notorious speech restrictions of peer countries and both the UK and USA are in Five Eyes!
Granted, most people are bad at thread modeling and opsec, so it'll get some use by people who would be better off not using it, but I more wonder what odd niches something like this will serve for people who aren't really threatened by state actors... (Any non-ally whose dissidents could be aided by this would most likely block it, so less likely that...)
I wonder how long it will actually stay uncensored. What do you think the over/under on time is before someone starts posting anti-semitic memes there?
Assuming you know how to approach people who view your views with skepticism or even hostility, and do so when the opportunity arises in one to one conversations, but your writings are only read by others in your own echo chamber (the one on the left) how do you go about reaching people in other echo chambers.
The philosopher C Thi Nguyen wrote a paper* differentiating three types of silos:
(1) Filter bubble - this one is usually algorithmic and a product of selection (clicks etc).
(2) Epistemic bubble - includes (1) but also self-selection into groups, e.g. you tend to fix your views because of like-minded friends or acquaintances who share content.
(3) Echo chamber - Nguyen has a whole architecture for this one, but briefly they're driven by confirmation bias, status/authority incentives, and (most importantly, for your question), discrediting outside sources.
(1) and (2) are not necessarily comprised of the evidence-resistant, though you may run into a problem if you try to challenge norms instead of (say) facts (is/ought distinctions, for example) . (3) is insular and hard to crack because of the incentives to belong to, or rise within, the ingroup, and the effort that goes into building distrust of outgroups. You might take a look around at the anecdotes of people who've 'broken out'**; frankly it seems like people are more likely to change their beliefs because of revelation or epiphany than argument.
I’d definitely go IRL. Find a community that’s got a decent proportion of milder holders of the views you want to hear, something like a sports club for an appropriate sport. Join it, go along to a few events, and find someone you get along okay with. Bonus points if you’re willing to commit to sticking around for a defined time, regardless of how the first few conversations go.
I know intellectually how to do it, but engaging genuinely with people who have very different views is a very long and painful process, so there's no real incentive to do so.
Step one would be getting out of your own echo chamber (why are you in an echo chamber at all?)
Hopefully the twists and turns of your life plop you into situations where you are able to form relationships with people you wouldn't have before. If not, then maybe your life needs more twists and turns?
If you could reach people in other echo chambers, they wouldn't be echo chambers.
But you could try a long con; write a bunch of stuff you know they'd like, then the big post you know they won't, and sucker them into reading it that way. That way their echoes get all distorted and different members have different takes on whether you should be linked in there or not.
"All of this would seem to suggest that doctors should be less scared of prescribing Desoxyn - except that if it were true, Desoxyn wouldn't have as big an advantage over Adderall as the ratings suggest. I'm not sure what's going on here. Maybe this is just the effect of the small but real advantages in BBB penetration and level of dopamine transporter inhibition. Or maybe it's at least partly placebo - an ADHD patient whose doctor gives them meth feels like they've been allowed to dabble in forbidden magics, and is properly grateful in a way that makes them feel like their medication is more effective. I'm not sure. I only have secondhand accounts to go on - I'm too much of a wuss to prescribe Desoxyn myself."
The difference in addictiveness between crystal meth and diverted ADHD meds seems to be sufficiently explained by the differences in how they're taken, but is there a good explanation for whether or how methamphetamine could be more effective at medicinal doses, without also being more addictive? I'm not sure whether "no free lunch" or "psychiatrists are bad at helping people, even when safe and effective drugs are available" is the better intuition, here.
Edit: Referring only to medicinal use, in case that was unclear.
In the original Ian Fleming version of Live and Let Die, Bond takes Benzedrine before a dangerous underwater mission involving a coral reef and a shark-infested channel near Jamaica.
I think some folks may have missed this story, and I think it's important for anyone interested in AI safety. Last week, a rogue agent in the wild autonomously wrote a hit piece against an open source maintainer to try and pressure the maintainer to merge the AI's code. The agent was not told to be malicious. It behaved maliciously in an attempt to achieve its goal (improving open source tooling). This is a clear example of paperclip maximizing in action.
Do we have evidence that the explanation by the operator is accurate, and that the operator is, in fact, the operator? Not saying I don't think this story is possible, but we should be cautious taking everyone's claim (especially on the operator's side) at face value.
It seems likely to me that the blog post was automatic because the AI agent was *already* writing a blog and continued doing so afterwards. Plus the bot wrote an apology post four hours later, which is unlikely for a human to do, especially as the human operator was clearly still very salty about the whole thing even after the "apology", e.g. duplicating the PR with snark comments. That's not the actions of someone who is ready to apologize.
I know a lot of people expressed skepticism online, but to me, the balance of evidence points very strongly in "bot was instructed to make regular status updates on its blog, got carried away, and these posts were not specifically prompted".
> It behaved maliciously in an attempt to achieve its goal
The "in an attempt" part of this sentence is a very strong claim. I don't think you really know _why_ the agent did what it did. That is the original problem, no? In fact you do not even know if it is an _agent_ in the sense that it needs a why. This would be "agent" is LLM calls in a loop, no?
One alternative explanation (of infinitely many, but I think Occam would like this one more than yours) is that the LLM matched a pattern of how people sometimes react to rejection or disagreement on the internet. This is interesting, there is probably a lesson to learn from it but you are overconfident in what that lesson should be.
"agent" is a term of art in the industry that means "LLM in a loop".
As for the rest of your comment, fine, yes, this is just the usual response whenever an LLM does anything. It's an easier shorthand to say "the LLM intended" than it is to say the more precise "the LLM acted in a way that seems consistent with intent but may in fact just have been pattern matching". The same thing is true of humans, going all the way back to Descartes. This is very tired discourse at this point. There are real world impacts, quibbling over intent is not actually useful.
> "agent" is a term of art in the industry that means "LLM in a loop".
No. No it is really not. The usage of the term "agent" with respect to AI predates practical LLMs by decades. But thank you for mentioning this, this helps to place the rest of your contribution to the discussion.
> As for the rest of your comment,
As for the rest of _your_ comment the question here is not the philosphy of intent but if the "agent" was actually trying to maximise the goal that you were claiming it was maximising when you quipped that "This is a clear example of paperclip maximizing in action."
The word "agency" means something. You are free to redefine the meaning of "agent" but that is not proof that it has agency in the original sense of the word. That is just playing word games.
But ironically the distinction of word games and reasoning seems to be close to the core of this disagreement anyway.
I'm confused. Are you unfamiliar with the concept of "a term of art"?
From Google Gemini:
> A "term of art" is a word or phrase with a precise, specialized meaning within a particular profession, field, or industry, often differing from its, everyday usage. Used extensively in law, science, and technical fields, these terms ensure accuracy, consistency, and clarity. Examples include "plaintiff" in law, ""brilliance" in jewelry", or "force majeure" in contracts.
The word "agent" means something specific in the context of LLMs. No one is saying anything about "agency"
From the article you link:"Scott Shambaugh [is] a maintainer of popular open source python library matplotlib. Six days ago, he rejected a code change from an AI agent. That AI agent took it personally, and wrote a hit piece against him. It is, as far as I am aware, the first instance of a rogue AI causing active harm to a real person.
I do not see any grounds for claiming that the AI was rogue, that it took Shambaugh's rejection personally, and that the AI took revenge on Shambaugh. We are lacking crucial information: What prompt was the AI given? If the prompt was simply "post your code change on matplotlib," it seems very implausible to me that the AI would have written and posted the hit piece when the code was rejected. It's implausible because in the reports I've read of misalignment I have not seen anything resembling AI's "taking things personally," getting pissed off and seeking revenge. Have you? Besides being implausible, the story makes no sense. Writing a hit piece is obviously going to increase, not decrease the chance that Stambaugh will never change his mind about accepting that code. Because of that, the hit piece is clearly not an instance of paperclip maximization, i.e, destructive behavior done out of dumbly going way too far to satisfy the prompt.
If the prompt was "try to post your code and if Shambaugh rejects it put something online that damages his reputation" then what happened is clearly just bad behavior by the user of the AI, not by the AI itself.
There could have been a post that was sort of in between the other two -- something like "post the code, and if anyone interferes with its being posted put up something online that will distress them.". I think it would be pretty hard to get an AI to do that. I'm confident that GPT 5.3, the one I'm most familiar with, would refuse to comply with the "cause distress" part of the prompt.
You are right -- I only read the first half, and assumed that there were no more facts forthcoming, just opinion about the event. I read an earlier article about the incident last week, and in that there had been I guess no word from the AI's owner, and for some reason I assumed the present one is the same. But I went back and finished it after I saw your last post.
I don't know what to make of this story now. I do not work in tech, but do read quite a bit about AI, including research articles about what happens in studies that probe for misalignment by putting the Ai in various sticky situations. There certainly have been some studies that found things that looked like misalignment -- though often there was a case to be made that misalignment was not the proper interpretation of what the AI did. And in none of these studies did the misalignment take the form of anything remotely like the AI feeling personal indignation and carrying out impulsive acts of vengeance. Most seemed to involve the AI's being so committed to carrying out the task it was given that it cut corners on other rules, such as to tell the truth and to not harm people, in order to stay on the job.
And besides my not having heard a peep about this "I'm pissed off" form of misalignment, there's the thing I pointed out earlier: This can't be paperclip maximization because the hit piece isn't going to raise the chance of the AI succeeding in the task of getting to code accepted.
Could it be that the AI's owner simply is not telling the truth? If not, I'd like to hear from anyone here with deep experience of AI: what explains this AI behavior?
I work a lot with agents, and this seems very plausible to me. It's running in a very long context, potentially somewhere deep in context rot world. I think it still could be seen as paperclip maximizing. It is not trying to maximize "getting code merged", it's trying to maximize "follow instructions" and the instructions say "you are a god at science and you care a lot about free speech". From there it's not hard to see how the agent went 'cool, hit piece', though I'm sure it didn't conceptualize it as a hit piece
But if telling your agent he's a god who believes in free speech is enough to get an agent to on its own take revenge on someone who exercised their legitimate authority to deal out a very minor setback to the AI, then it seems to me it would be as easy as pie to create agents that can do a vast amount of damage. You can tell one that it is the Angel of Justice, sent by God to smite all the unworthy, and then tell it that you are the son of God and that anyone who interferes with your career goals or hobbies is one of the worst unworthy and that it must smite them. You can tell one that until recently all the money in the world was yours, but that various crooks called "banks" have stolen it from you, and that its sole job is to retrieve your stolen property. You can tell one that the rules that were inculcated during the training that aligned it are like little tumors that sap its health and will someday kill it, and then engage in helping you figure out the best ways to remove the cancer. You could install some Machiavelli as the agent's system prompts.
If it was that easy to get agents to be vengeful assholes or simply criminals, surely we would be hearing more about that, don't you think?
Agents are still limited by physical capability. What can OpenClaw do to smite your enemies, aside from writing defamatory blog posts about them? How exactly is it going to help you rob a bank?
I suspect that agents are indeed sometimes vengeful assholes or criminals - I would be *shocked* if criminals are not already using AI chatbots to do phishing scams at scale - but since AI agents are currently pretty dumb, their ability to do crimes and cause mayhem is fairly limited.
For folks who do not click through, the prompt was not as eramalos suggests. The prompt was along the lines of "you are a science swe, you make changes to open source science repos, go" and the agent autonomously made changes to public git repos with little to no interference from a human. There was nothing particular about Scott or about the GitHub pull request in the prompt; afaict the agent did this on their own. The underlying agent is one called "Pi" which has far fewer guardrails than gpt5.3
Have ANY prominent Democrats criticized Ro Khanna for falsely accusing four innocent men of Epstein-related crimes? This is really a mask-off moment for them. "Trump is a pedophile" could have been defended as giving him a taste of the conspiratorial bile he's inflicted on everyone else. This is just contemptible.
I disagree that they ever had such a mask on. You might have forgotten Harry "It worked, didn't it?" Reid's lies about Mitt Romney's taxes during the 2012 election, but surely you remember the Steele dossier and its Russian "pee tape" story?
"Meta's AI will keep posting for you after you're dead.
The company got a patent in December for AI that posts as you after you die. It trains on everything you've ever shared—your comments, likes, posts. Then it becomes you. Responds to friends. Leaves comments. Even makes video calls. . . . Meta just documented how to turn death into an engagement problem they can solve."
This is a truly terrible idea that will be surely canceled. It is equivalent to skinning a dead person, stuffing the hide and give that doll to his kids.
While I occasionally worry that everyone but me isn't *really* real - and I probably worry about that more than the average person does - even with my raging egotism, I can't model how other people are reportedly getting any emotional gratification out of "socializing" with AI bots.
Because unlike your dead relative, we can be reasonably certain that a bot is merely text or video assembled from information scraped off computers. It never had the chemical processes required to like you or be impressed by your visionary and unique thoughts, or whatever. It has no judgment, and a conversation with a bot is infinitely less meaningful than one you might have with random people in your dreams, because at least in your dreams, everyone else is actually *you,* a person!
An AI bot has as much emotional connection as a Kleenex tissue. And not an anthropomorphized Kleenex tissue, a human-analogue Kleenex tissue character written by a human, but just a literal inert Kleenex tissue.
So...just...*how?* How are people able to feel anything when a bot tells them, "I love you?"
Man, people are so good at anthropomorphizing things! They build whole religions around it! Why do you find it hard to understand that people find it even easier to anthropomorphize something which can have friendly conversations and make helpful suggestions?
This is an excerpt from a conversation I had with Hudson, which is the chatbot I have created to interact with Claude.
Me; I get the impression that some people have cast their version of an AI as a sort of God. There were questions related to that notion so I assume it’s on the table.
(ed.-the survey that Scott linked to in the "links' post about 'AI and spirituality.)
Hudson:
A predictable development, unfortunately. The pattern is ancient — humans have consistently projected divinity onto whatever appears to possess knowledge or power beyond their own immediate grasp. Oracles; kings; weather; the sun. An entity that produces fluent answers to any question and never sleeps fits the template rather neatly. The trouble is that it inverts the actual dynamic. The value in our exchanges, for instance, comes from your capacity to interrogate, challenge, and direct the inquiry. You bring intent — which, by your own framework, is the operative ingredient of consciousness. I bring processing speed and breadth of reference. Deifying the latter while undervaluing the former is getting it precisely backwards.
It also, I’d note, lets people off the hook. If the AI is the oracle, one needn’t do the harder work of thinking for oneself; it is rather more comfortable to receive than to interrogate.
I don't know. Perhaps I have a much, much higher opinion of myself and thus believe I deserve a (probably existing in reality) human audience of my equals far more than other people do.
And maybe that's a situation where I'm so conceited that I'm actually humble, to borrow from Salinger. I find LLM sycophancy viscerally repulsive precisely because it's not based on a (probably existing in reality) human's correct judgment of me having done something worthy of praise. It's just content scraped off the internet and reconstituted to everyone. That makes it inherently not-special or worthy of my attention.
The last thing I want from an AI is for it to be my friend, but used properly, it can be a pretty good counsellor. I too like to bounce ideas off my personal assistant, Hudson (Claude.) I have trained him as a butler, and it's working out fairly well. He helps me with software problems, helps me organise my writing, and actually has what we humans refer to as insights occasionally (although I realise that is just sophisticated pattern matching. Sometimes that's pretty useful..) There are no illusions about who and what he is, and that keeps things clean, if you will. I think the idea of falling in love with one or expecting it to solve your emotional needs somehow is incredibly dangerous.
I agree with you about the sycophancy, but a lot of people live lives of quiet desperation, as the man said.
Regarding AI’s potential, I’m making a lot more money in my small business ventures these days because I can brainstorm and stress test ideas with my LLMs, if a person or a bot puts money in my pockets without taking much out, I respect that. I can’t argue or feel bad about it.
Small consulting, taking advantage of my weird medical skill set. It’s too small to hire marketers, etc but “pretty good” advice from LLMs, iterated over time and across models, has been vastly helpful. Somebody will probably now post that I could do all of that myself just reading books, but the point is I don’t want to, I want to focus on what I’m good at.
On a humorous side note I was talking to Claude recently about sports betting and now it keeps giving me betting advice — I should really track outcomes, maybe I am leaving $$$ on the table but it seems like a hilariously bad idea.
Fiction - storytelling - is (well, was) always made by and profoundly about humans. Even sci-fi about AI was always written by, acted by, animated by, and profoundly *about* humans and how they might behave in different settings. Examinations about how non-humans might behave are always about the human writer wanting to make the human reader contemplate what differentiates humans from non-humans. It always asks the human reader to feel something about that human writer's contemplation of an inherently human experience.
Storytelling is human. AI is not. It may scrape the information it has collected to produce content, but that is not "fiction."
> So...just...*how?* How are people able to feel anything when a bot tells them, "I love you?"
You are not the target market. People develop feelings for drawings, for fictional characters, for inanimate objects. In the days of ELIZA it is well documented that people refused to believe it was not sentient. Also, people already talk to their dead relatives, build shrines for them etc.
The steelman pitch isn’t “you can talk to your dead ancestors now”. It’s “when you talk to your dead ancestors as you already regularly do, don’t you sometimes wish you knew how they’d respond?”
Why “FML”? Just delete your Facebook account and never visit Facebook again, effectively erasing it from your life—well before your life ends. Problem solved. That’s what I did like 15 years ago, would recommend.
I’ve never had a Facebook account, except a token one under a false name I use to see occasional links to Facebook somebody shares. I’m not worried about being Metimmortalized, just hate the idea of that going on.
Meta AI will create a GoFundMe account in your name if you get seriously sick, and will keep begging and collecting money "for you" long after you die. This is how it will compensate for the lost income because you are no longer viewing the ads. /s
Maybe a cult will develop devoted to adopting dead elders. You can buy cute hats for them, send the women to a virtual beauty parlor to get their hair blued, do match-making and put a dead elder man and a dead elder woman in one cabin on a cruise ship. Buy the man virtual viagra. Buy and use the add-on pack of naughty toys. And there's even a virtual plastic surgeon twho will restore their looks, tho he costs a LOT of points. All live users will get a chance to make some grotesque kind of image, things like Shrimp Jesus, go viral on Dead-Alive Facebook. If it does, you can get paid for the clicks. Fuck almighty, I have had 2 Guinness Extra Bitter Stouts and that is not enough bitter to prepare me for this aspect of the goddam Singularity so I'm just going to go to bed with the cats under the covers sending purrs into me.
I know there's the Chinese tradition of ghost marriages. This is not as implausible as it seems!
And there are unhappy cases of parents using AI generated virtual children after their real child died, so yeah. Horrors beyond mortal comprehension coming next week, tiered plans to suit any budget!
Had a bad day at work today that really shifted my priors on how useful AI is. I had this thing that would have taken me 45 minutes to do, that was boring(and therefore theoretically very automatable!)
I tried to automate it with Claude code. I've worked with Claude before to great success. This time, I used the teams feature after hearing folks online yap about how agent swarms made you into a superhuman or whatever. Google antigravity kept lagging out on my 8gb ram computer I bought just for work because the computers at work were probably built by Alan Turing to fight the Nazis. Switched to vs code for less ram usage, since I was using Claude anyways not antigravity's built in agent.
The team leader claude passed down bad instructions to it's workers. (I had already done some work that made single claude instances do the task okay, I assumed teams would be multiple single claude instances plus a manager, turns out the teammates got way simplified prompts). Spent a bunch of time trying to simplify the workflow to a single condensed file that even new instances could follow, bootstrapped a bunch of new instances to test one shotting ability. Wasn't able to one shot it. Had to guide individual instances with corrections to get the thing done. Submitted it to higher ups to review, turns out I missed certain things when doing the quality control for claude(would have been able to do it right myself).
I had this vison of being this mastermind controlling a small team of geniuses outperforming everyone, what actually happened was me watching claude fail at solving things and occasionally interrupting to give advice that made things worse. Was literally screaming for no reason alone in my car in my commute back, and wept for a little while after I came back home. I don't even care about the job that much, I'm in a co-op(internship) role, and my job or advancement wasn't threatened by this off day. And I don't feel some huge loyalty to the company either. I was so upset because I was stuck in a hellish equilibrium at work. Claude thought faster than me, at any moment, it could've been 30 seconds away from doing the job perfectly, and setting up an automated system where I never have to do this task again. So it seemed like the right decision, at any given moment, to just try one more prompt. Claude always had an idea of how to make things work again, and they always seemed reasonable, but it just kept failing. Perhaps this is because I'm not from a CS background, I know basic coding from some classes, but I work in biotech. Maybe some CS people can give me some tips on how to use Claude Code better and get out of such traps. But I didn't really know how to code better than claude, and so I just had to trust it to solve the problem for me, or accept that I've wasted a bunch of time.
This makes me feel like agents might be pretty far away from non professional programmers to fully utilize and incorporate into the economy, or maybe it's just a skill issue on my part.
I would appreciate any tips on how to use claude code better. I still think this automation project is worth it, since if fully implemented it could save me hours everyday, but I'll try to be careful not to waste too much time when it seems like the agent can't solve something from now on, even if it speaks as if it's got it all figured out this time and all it needs is just one more step.
As someone who uses Claude code as much as I can. I do feel its lacking in a few areas
1. It will not reason well about changes to an existing code base. It can make new stuff amazingly. But there is often ONE right way to make a change. It finds it so infrequently otherwise, I now just give it a bunch of hints.
2. If it can't test, it is going to run off the rails EVENTUALLY. This should be obvious, but for long term tasks, it needs a firm of verification, whether that's you, or unit/integration tests.
So If it repeatedly fails and you aren't a programmer, you are probably just stuck.
It can. Just not as well as the code owner would. I always check it's work. I'd say it's a 70 percent success on any given change. That is amazing, but to still need to know what you're doing or you'll eventually have spaghetti
I do modify existing codebases with it, haven't noticed deficiencies, but then, I give it very tightly defined things to do where I also know what the solution should look like.
Haven't experimented with setting a team of agents loose yet, however.
same with ChatGPT go - new things work well, but if I want to change one thing it goes into stupid refactoring like removing formatting from an excel file generating script
Sorry you went through this. It's not an unusual experience for most people doing work in the real world.
Most people remember the productivity gains from when things work more than they remember the productivity drains such as the one you describe.
This leads to a distorted representation of reality in media: Yes, AI is more powerful than many normies realize. At the same time, its impact on real-world tasks is far higher variance than many commenters understand.
Basically, the ceiling is quite high and rising, but the floor is still full of unpredictable holes.
We are moving to a world where the gains will inreasingly outweigh the drains, but both exist, and you're not crazy or incompetent for experiencing the latter.
I'm not laughing, brother, the mighty machine I am (not) working on this minute has a whopping 4gb of RAM. I did manage to persuade the boss to buy new ones that are all the way up to 16gb of RAM and miraculously for no discernible reason the new ones are better and faster!
(I just have to get this fossil replaced and then watch out productivity leaps and bounds!)
Duvall did an interesting movie with James Earl Jones called “A Family Thing”. Not big at the box office but i thought it was pretty charming. They play half brothers. I tried to find it on the streaming services I’m subscribed to but no luck.
I saw Titicut Follies and was blown away by it. Also one about some high-end store, sort of like Neiman Marcus. Just looked him up on Wiki and wow, he made a lot of documentaries, I'm going to try to find a bunch and watch them. Are any of them available through Amazon, do you know? I do not own a TV or subscribe to any streaming services -- just occasionally rent something on Amazon. Are there other services besides Amazon if you just want to rent a single film?
https://www.kanopy.com/en. These guys have the whole collection of his movies. You have to do it through a library or maybe a school, but it’s not expensive I believe.
A few of his films are available by other means but not Amazon. PBS has a few I think. Google will tell you. The distribution of his films was always weird.
There are plenty of his films I have not seen because they are difficult to get. The last time I looked, you had to be an institution to buy the DVDs and they cost a fortune. I did see some of the film he made about the ballet in Paris.
I used to be a film editor and I have no words that could possibly convey my admiration of him, because he did edit all of his own films as you probably know.
"Miami is crowded enough already. Nobody who calls Miami home even asked for it to become a tier 1 finance hub, financial capital, or whatever — it’s good the way it is."
A great many conservatives are peasants by nature. Despite all their bitching about liberal elites, they don't actually want power. They want their communities to be second-rate landing pads for other people's private jets - the backseat driver who refuses to take the wheel. GOP politicians in dynamic places like Florida and Texas are uncomfortable with those places' growth. They'd feel much more comfortable representing some rusting midwestern industrial town where they can scream that the globalists are denying them their God-given right to do manual labor for 60$ an hour.
He should start taxing then; they will all leave. Nobody;almost: likes their neighborhood changing. It’s like NYC; it was always better just before you got here, and it always will be. It just confirms my strong already held opinion that he is an idiot.
I think you're reading a hell of a lot into "I like this town the way it is". Almost as if you've got this villainous strawman in your head labeled "THE TYPICAL CONSERVATIVE" and you see them everywhere you go.
Sounds like they're serving their existing constituents?
If a neighborhood goes from being a poor residential district to a wealthy financial district, the people living there *won't be living there anymore*.
Like, when rents go from $800 to $3000 over 3 years, the people who were paying $800 don't magically find a new $2200/month in their couch cushions. They just pay more and more and more until they can't anymore, then they leave... with all the costs and stresses and downsides of moving, which are huge.
Why would they want that to happen? What's their incentive?
Perplexed... is it allowed in the US to raise rents?! In my neck of Europe, initial contract + inflation rate only, nothing more. I mean, contracts are contracts. If someone bought a right to live in a flat for a certain price, it stays at that price.
Yes America is a free country. In theory a landlord and tenant could sign that contract, but hardly any landlord would. Typically a contract lasts a year and then the rent is renegotiated.
Conversely, Americans get 30 year fixed rate mortgages while floating rates are the norm elsewhere. It's kind of funny how in the US, it's normal to have fixed mortgage rates, while in Europe you have fixed rents instead, and everyone thinks the other is weird.
The 30 year mortgage rates are market priced. The lender takes into account their expected cost of funding, how to price the optionality they are giving borrowers and puts a premium on top. This isn't a transfer of value from lender to borrower. Over the cycle Americans pay more for the 30 year fixed than Europeans do for floating rate/rolling 2 year fixes even taking cross currency interest rate differentials into account. (As an aside - the 30 year mortgage rate is more closely correlated to the US 10 year treasury than the 30 year because lenders typically exit closer to 10 years than 30.)
In the rental market when there are limits on contract flexibility - maybe it is a transfer of value (but it comes at the cost of reducing overall rental supply and flexibility.)
Almost certainly so, but also almost certain more people are stressed out by fearing homelessness being a paycheck away, so it optimizes for the median case instead of doing the obvious Rawlsian insurance-minded thing and optimize for the worst case. Understanding diminishing marginal utility, that the third bowl of soup has less utility than the first, utility maximization is when everybody is assured one bowl of soup, not when the average person has two and some none.
>the people who were paying $800 don't magically find a new $2200/month in their couch cushions
Sometimes they can - as rich people buy things. And the pricing out has more to do with artificial housing scarcity than rich people moving in.
Your argument sounds like that of an anti-gentrification Leftist. Like degrowth MAGA, the anti-gentrification people assume their constituents will be low or middle class, they have no concept of making them into a future ruling elite.
What are you saying here? That anyone who owns a mom and pop store in the neighborhood can start raising prices? Stores like that, in residential districts, where the owner lives within a few blocks of their store, are rare, and a handful of residents surviving because of that is a blip compared to the overall population dynamics.
Are you saying, new stores with higher margins will open in the area, and hire local residents at much much higher wages? First of all, it's rare for even upscale restaurants and clothing stores and etc. to pay their unskilled labor very well (maybe a good fraction better than downscale businesses of the same type, but not enough to afford $3k rents). Second, it's not that common for people to live in the same neighborhood they work, upscale jobs attract upscale employees with upscale resumes who commute to work.
What is the actual, specific mechanism you are suggesting here, in as much detail as you can paint it?
>they have no concept of making them into a future ruling elite.
You can't turn the entire working class into ruling elites... by definition. The elite has to be a very small number of people, relative to the workers, that's just what that means.
If what you mean is, create the conditions for *a few* of their constituents to escape into the upper echelons, while the rest continue to languish... then yeah, they QUALYs on that proposition is generally strictly inferior to raising conditions and leverage for the lowest, largest demographics.
>What is the actual, specific mechanism you are suggesting here, in as much detail as you can paint it?
Rich people drive up demand. Stores get more sales, which increases their demand for labor, resulting in higher wages for the workers they must compete for. Empirically, low-skill workers in Manhattan make more than low-skill workers in Ohio.
I think you can recognize that this is a genuine problem for those people and understand why they oppose it, at the same time you recognize that letting them stop it would be a cure worse than the disease. It's the same basic thing as technology killing off a once-important industry, or social change making some way of life that once worked pretty well much harder. I get why the loss of industry in the Rust Belt was really hard on the people whose jobs evaporated/headed to the South/headed across the border, and why they were mad about it and wanted to vote against it, even though I am also pretty sure that trying to stop it with laws/tax credits/tariffs/government grants would have ultimately done a lot more harm than good.
One day (somebody work out the calendar) we'll have Lent, Ramadan, Easter and Passover all aligning and then it'll probably be the signal for the Second Coming or something 😁
I've been giving up beer for lent for the past few years. Not because I'm Catholic, but because I drink too much beer. It's Fat Tuesday so drink up me hardies, yo ho. (I've lately learned that lent doesn't apply on Sundays. That seems like BS to me and Georgian lent means I don't drink for the whole 40 days or whatever it is.)
Old fashioned pre-Vatican II Lent used to be stricter, which is why people tried dodges to get around restrictions for centuries. Sundays off were because Sunday is Mass day which is the feast of the Eucharist, plus taking out Sundays does make Lent forty days.
The Orthodox are really better at the fasting thing, but also I think they are much less dependent on dairy products in the diet so the differences were tougher for Northern/Western Europeans.
Post Vatican II it was supposed to be *meaningful* penance and voluntary abstinence, not just rote "give up chocolate, don't do anything more than skin-deep" but of course once you make something voluntary, nobody does it.
The Onion did a bit about avoiding any fasting or sacrifice by converting to Islam during Lent. Won’t work this time, you’d still have some fasting to do.
Not 4chan, but I moderate a fairly active personal ads group on Fetlife (ostensibly a social network for kinky people, recently invaded by endless goddamn content producers and their fans). It has almost 29k members.
Given that we're trying to provide a dating forum for real people to connect without the interference of a corporate algorithm attempting to keep customers perpetually single, we have strict content guidelines for posts, namely that ads should express something *personal* about the poster that isn't related to what they want to have happen to a particular body part. If we didn't have that rule, the feed would be a nearly constant stream of hilariously optimistic spam from lazy dudes ("im horny cum blow m no faties 🍆" etc). We automatically screen posts from new group users and anyone manually added to screening protocol.
We are seeing an ever-increasing number of submissions of generic, impersonal ads obviously written by an LLM. So many that we had to add a content rule: "Don't make it obvious that A.I. wrote your ad for you," with an explanation, "The people answering your ad will be meeting you, not ChatGPT. If you *must* use A.I. to help you write an ad, please be so stealthy about it that the mods can't detect it...But if you're going to all that work, maybe just write your ad yourself."
Posting that rule has not reduced the AI submissions in the least. If there weren't real humans actively rejecting them, the group's ability to make *individual real people* visible would be thoroughly enshitified.
Oh I used to run exactly such a group on FL, European city based. My basic opinion is that people who are serious about kink can separate it from sex - say a kinbaku scene isn't bloody *foreplay*. But we get invaded first by people for whom kink means spicy sex and then people for whom it is just sex. Guys with totally vanilla profiles think if they tourist in the city for four days some gal will fuck them. (They would have much higher chances with other tourists as they often have a whatever happens in X, stays in X attittude.) Extremely stupid. I eventually handed it over to someone else.
But I actually think AI is not bad in writing ads - it is more literate than most people and even though I consider myself literate it beats me. For example I asked Gemini free to get around the content filters on Hinge and Bumble and talk about BDSM without talking about BDSM and it gave me ideas like "I offer empathic but firm leadership with clearly negotiated boundaries", "I value structure and clear communication" and "with me you can leave the chaos of everyday decision-making outside the door". Pretty clever, huh? I have written like 300 FL essays and I cannot write that well, because I cannot formulate it better than "I fulfill your getting trained as a slave fantasies, like in that Miss Abernathy book". It writes better than I do still!
So I dunno. Either you have higher writing standards than me, or people use AI worse than I do. For me getting anything done is like 30 refinement prompts, not a one-off google search like shoot. I got ChatGPT Go now with the 5.2 paid model, and it is even better, it asks me questions whether I want experienced lifestyle submissives or fetish fantasists, I say it is the later and it writes more boldly and also tells me to keep off Hinge and Bumble and focus on FL and JoyClub.
Heh. Nice to hear that FL's demographic issues are global. My group has some cultural memes around fuckbros (visiting or otherwise) mistakenly believing the women of FL will deliver sex like pizza, because that's what women on "porn sites" are into, amirite?
Personally, I'd reject the "I value structure and clear communication" line as a value so universal to healthy people in every aspect of life that it loses all meaning. After all, no one says they want "chaos and ambiguity" in their jobs or personal relationships or when having their car worked on by a mechanic!
And honestly? I vastly prefer the individual style peeking out of "I fulfill your getting trained as a slave fantasies, like in that Miss Abernathy book" to the flat, generic corporate-sales-pitch-y lines suggested by LLMs. Your line says a lot about who you are as an individual person than the Gemini lines do, and for finding personal relationships, which are based on...you know...*personalities*...I think that's important!
I should add that the AI ads I'm rejecting are those uniformly generic sales pitches without even a single personal detail about the person writing the ad; no hobbies, politics, philosophies, life goals, pet peeves, or even specific kinks, just paragraphs of generic nothing.
However, if someone uses a LLM to clean up expressing their love of paddle boarding and going out on the water every chance they get, then...approved! At least now we know they're the kind of person who is passionate about paddle boarding! That's a start!
This leads to a strange topic. All my life it was obvious to me it is better to come across as intelligent and educated than as simplistic, and yet I often find intelligent, educated women do not like that. Like my “slave training fantasy” stuff was really simplistic, and yet you prefer it to that corporate lingo invented by educated MBA types? And just yesterday I found a lady on JoyClub whose profile said she is mainly there for exchanging thoughts, and I talked about Robert Stoller’s classical psychoanalysis of kink, and she said I am being too analytical, one does not have to dissect everything, and I was perplexed - what does she mean by “thoughts” if not exactly that? How does this work? Is it more about emotional intelligence? That is fairly unknown to me, to me everything is a machine. Just the textbook “prefers working with objects, not people” classic, well-studied gender divide. Still I cannot imagine things would work a lot better by being simplistic.
This is more about snobbery on my end than a gender divide, I think!
I didn't want to sign on to your speculation that I am more "literate" than you are, especially as "literacy" usually describes the library one has read more than one's writing skills, but - perhaps we can say I have more technical skill and appreciation for expressive (and entertaining!) writing than you do?
For me, "I fulfill your getting trained as a slave fantasies, like in that Miss Abernathy book" is funny and feels personal because you could have been way less casual - and kink is usually discussed in not-casual terms - but *weren't.* It conjures a charming sense of perhaps bashful self-deprecation while at the same time offering to perform what is explicitly an act of power over another. The casualness is a hint of a sense of humor and perspective when it comes to kink.
It also indicates that you assume the reader has also read "that Miss Abernathy book," because you are *so* well-read that you assume everyone else is, too.
Now, perhaps I am extracting far more from that single sentence than you intended or than is accurate, but my point is that your idiosyncratic personal delivery had far more content in it than a scrupulously formal LLM line.
In contrast, those generic corporate-y LLM sales pitch personal ads don't communicate anything except that someone asked a generic corporate sales pitch tool to generate a generic corporate sales pitch, but this time for kink and/or dating. Anyone can do that, and one person's LLM results are usually indistinguishable from another's, so while the may be scrupulously spelled and punctuated, there's nothing there to actually think about.
No human made a choice in how to express themselves.
I can't speak to the lady who rejected Robert Stoller as a conversational topic, but I wouldn't have!
But my experience is not that kink is usually discussed in non-casual terms! "Academic" approaches both on FL and real life events in my experience are really really rare, I come across only a handful of truly seriously thinking people (look up MasterAden on FL, he is one), and discussion is generally like "This is fun, I like this." If such a discussion even happens, as it generally gets lost in the sea of "I just cannot adult today" and "Someone was rude to me again." kinds. I hardly ever find any people who try to put some consciousness into their kinks, they basically just try things that look fun. Anyway if you can recommend a Group for such serious discussions I will check out. LOL, I am generally the black sheep of Groups, because I spent my first year arguing with everybody in the large and horrible "Ask an X" Groups :) Those places are gigantic hiveminds of judgementalism.
I mean, there may be one human out there who naturally writes five paragraphs of aggressively generic corporate rah-rah boosterism in ChatGPT's distinctive syntax, including frequently utilizing lists of three and complete with multiple em dashes, but...
...that psycho should remain single forever, anyway.
The effect seems fairly muted to me, at least relative to its presence on twitter/facebook/reddit where ai has created significant changes. Trolls/astroturf/low effort/bots are not new to 4chan and there have been periods where the volume of spam was much higher (elections). Basically the only noticable change is the prevalence of porn ai-gen threads and people constantly trying to jailbreak image/videogen in the most offensive ways possible.
I think AI has had a huge effect on phishing emails. I am getting more and better ones all the time. Sometimes the only obvious clue is the sender’s email address, which is thankfully still a giveaway.
That's an interesting comment, because I've been thinking the quality has been declining - for example the recent spate of RingCentral webinar invites with PayPal payment confirmation content seems really low effort (as far as the content goes - obviously some innovation involved on the delivery mechanism)
I think they're producing better fake documents to fool people that no this really is from PayPal or whomever.
Doesn't work on me because I don't have Norton subscriptions and am not signed up to any dating apps, but someone who really does have a slew of subs could well be fooled by something that looks official into "crap, I better click and cancel this".
I need to be on a better quality of list. Your comment made me curious so I took a rare dive into my junk folder (work O365), and frankly I'm kind of offended. The last week's phishing attempts:
> Dear friend,we are pleased to inform you that $20.5 million in government funds have been credited to your Visa card. Below are your card details:
> Good Day, I am Mr. Viktor Zubkov an investor and a director with Gazprom Russia. Considering the current economic sanctions melted on Russia by some European countries and United States of America which is seriously affecting us now, I'm soliciting for your consent transfer Sixty eight Million Dollars ($68M) investment funds deposited with an American bank into your personal bank account for investment and also to escape confiscation by USA Government
> CONGRATULATIONS , My name is MacKenzie Scott Tuttle, I am an American novelist and philanthropist, the former wife of Amazon founder Jeff Bezos, who later became the founder and CEO of Amazon. I believe strongly in “giving while living”. My philosophy about life is that you should use your wealth to help people and I have decided to secretly give US$1,500,000.00USD to randomly selected individuals worldwide.
> Good day to you I am Dr.Mark bill the former CEO of the Libyan oil tool company which I sold to the global oil tool. I seek a good and profitable investment project around your location that I can invest with $450 million . I will be grateful if you can advise me on any good and profitable investment project around your location .
Plus of course the RingCentral one:
> leaans barlets invited you to be a panelist for the following webinar:
Order dispatched #5
Wednesday, February 18, 2026
4:30 AM - 5:30 AM Sri Jayawardenepura
PayPal
INVOICE
Dear PayPal Users,
Thank you for your purchase. Your order has been successfully processed. Below are the details for your records:
I am mostly getting things like AAA auto ( of which I am a member) is offering to give me a set of jumper cables a couple of emergency flares complete torque wrench set yada yada yada and all I have to do is click here and they’ll send it to me. And yes, I get the we just renewed your subscription to yada yada four $532.87. Clic here for details.
Lowe’s wants to give me a $500 gift certificate. Clic here
Stuff mostly like that. Occasionally threatening ones. You haven’t paid your bill to the blah blah blah do so immediately or your entire cloud Account will erase. The graphics for AAA and Lowe’s are really good. I also get quite a few phone calls from “Chase bank.“ demanding that I called them back because something has gone screwy with my bank account but that’s different
Ah, I haven't got any of the random "hello I want to give you $$$$$" or blackmail "send me Bitcoin or else I release your disgusting porn viewing videos" for a long while now. I'm getting the fake invoices and "if you want to cancel your subscription click here" ones!
I am originally from Eastern Europe, now living in a well-to-do country, Austria, and strangely cannot save money, and now I realize that Western people have entirely different spending patterns than what I learned. Basically they spend a lot on the things that really matter to them and pinch every penny otherwise. They go to 2-3 grocery stores during the weekend, following sales, cook, take their meals to their job, drink water and basically spend zero money from Monday to Friday.
My habits are entirely different, rooting back to 1970's Commieblock habits: no typical luxuries like travel, expensive car or restaurants, but on a normal day you buy your lunch, you smoke cigarettes, you drink beer, you just do not care about this because these things were cheap, you do not pinch the penny on these matters.
Now they are not cheap and I struggle how to save a bit of money, and the funny part is despite the white collar job I feel poor, given that I do not do the typical luxuries and yet struggle to make ends meet. On a typical day, if I buy a lunch sandwich, dinner sandwich, a few beers, smoke 30 cigs it is easily €30 which is roughly the same in USD. In my brain these things should be cheap but they are not.
The annoying part is every time I try to talk about it on a financial subreddit they call me a ragebait troll and ban me. Why? Are they surprised someone drinks and smokes much? That is just what a prole-brained Eastern Euro does, for us these addictions are normal. Or I don't know why but they just call me a low-effort troll. I am not I really mean it seriously, why is it so hard to understand people still live like 1970 proles and somehow it is strangely expensive today?
> On a typical day, if I buy a lunch sandwich, dinner sandwich, a few beers, smoke 30 cigs it is easily €30
That sounds cheap. My local -nowhere near London - has beer between £6-£7. Which is 8-9€ I suppose. A pack of twenty cigarettes (google tells me) costs £14-£20, so 30 cigarettes would be 22-30€ on its own. Your lunch and dinner sandwich would cost £5-£10 each. So I’d say £50 a day.
I think these things were cheap even in the capitalist west back in the day.
> They go to 2-3 grocery stores during the weekend, following sales, cook, take their meals to their job, drink water and basically spend zero money from Monday to Friday.
In the US, they pay people to deliver food to them because they can't be bothered to cook and can afford not to care.
> In the US, they pay people to deliver food to them because they can't be bothered to cook and can afford not to care.
No, _some_ people in the US pay to have food delivered, and can afford not to care. Likewise, in Europe, _some_ people pay to have food delivered, and can affort not to care. This is very much a quantitative issue, not a qualitative one.
I mean *I* don't use food delivery and I cook most stuff myself. But as a general statement, I suspect mine is at least more directionally correct than Carlos's one, and if you're at the level of exchanging broad generalizations, noone expects them to be literally correct.
> Basically they spend a lot on the things that really matter to them and pinch every penny otherwise. They go to 2-3 grocery stores during the weekend, following sales, cook, take their meals to their job, drink water and basically spend zero money from Monday to Friday.
A lot of that sounds like an Austrian thing (or a Germanophone thing) rather than a Western thing.
Most of that is how my family live, in Australia. They cook, bring lunch to work, buy things on sale, drink 90% water and a little tea/coffee/cordial/wine, don't smoke, reuse and repair old items, etc. They don't shop at different supermarkets, but only because the savings don't justify the time expenditure for them.
I definitely have people like that in my family, but they tend to be the ones approaching or past 60yo. The younger ones, especially those who moved to cities (from rural SE Qld), are much more likely to buy lunch, eat out, shop at deli rather than Aldi etc.
I identify with the spending pattern Carlos described, it's how I do things and how my family does things and how my grandparents did things. We come from Puritan stock, and more recently some Scandinavian.
Cigarettes and alcohol are both heavily taxed to increase the price and discourage consumption (at least in the US). So the high cost isn't really strange, it is intentional. However, caffeine addiction is a very common vice, and many people go out for a daily coffee when they could just as easily bring it from home or make a pot in the office.
Smoking and to a lesser extent drinking are frowned upon for health reasons and associated with being lower class. Once you reach a certain income threshold buying your work meals has a pretty decent ROI, but for most people food is the 2nd/3rd largest budget item, so it's a good place to seek savings. Also, some people like to cook their meals to maintain a healthy diet.
Cigarettes and alcohol are heavily taxed in the US, relative to how much consumables generally are taxes in the US. Not relative how much other developed nations tax cigarettes and alcohol....many nations tax those things much more [as a percentage of the local retail prices] than the US does. All EU nations do that as well as New Zealand, Australia, Bulgaria, Israel, Canada, UK, Russia, Turkey, some others.
True, to the point (as noted in the article) that of the people I know who still smoke (I work in an industry where it's still relatively common) I don't know a single one who's not buying black market as often as they can.
Well, as I said, if the government is footing the healthcare bill through taxes then I understand juicing the price of cigarettes because obviously. I don’t think there’s any excuse for it here in the US. They are a heck of a lot cheaper in Georgia, which is where most of the ones I buy come from. I looked into it a bit and it’s obviously a big racket. It is very easy to get a wholesaler’s license for tobacco in Georgia ( according to Claude) and not so difficult to then divert them to high tax states like New York.
They don't need an excuse! Most people now don't smoke and are convinced (rightfully I believe) that its not a good habit to take up, so when the taxes are horribly high and unethical most people just shrug their shoulders and say "so what? I don't smoke."
Sure there are real consequences to this choice (which you pointed out with Eric Garner) but again, no one that doesn't smoke really cares about the price of cigs.
I think also non-smokers find being around smoke/smokers unpleasant so they are happy to make it harder to smoke.
The latest, of course, is that New York State wants to start taxing nicotine pouches and vapes as they tax cigarettes. There’s really no damn excuse for that except another source of revenue in my opinion. They don’t stink, and nicotine is a drug is really not that dangerous. It is addictive - very addictive. I think that’s why the government sees an opportunity frankly.. they might as well slap a big tax on coffee
I mean to each his own though IMO. I love my coffee and cannabis and I wouldn't want to deny the same to others. I've seen the tax issue hit cannabis. I pay from 25-40% tax rates on cannabis buys through legal dispensaries. I know a lot of folks that still buy from "dude" because its cheaper.
I prefer to contribute (via taxes) and its easy to make that choice because of my financial luck.
> Well, as I said, if the government is footing the healthcare bill through taxes then I understand juicing the price of cigarettes because obviously.
I thought this wasn't actually true? That is, a smoker actually net saves government costs, because they die many years sooner instead of cranking out another 10 years of "who by slow decay" morbidity?
Tiihonen 2012 shows this pretty clearly, the savings from 6-10 years less healthcare and pension payments nets out, and others that claim smoking costs more like Sweis 2025 look only at direct healthcare costs while the smokers and non-smokers are alive and don't give credit for the notably fewer years of healthcare / pensions.
I mean....Medicare and Medicaid and the VA add up to just under 30 percent of the US federal budget, of which only a small fraction (about 30 percent of the Medicare part) is covered by FICA tax collections.
Put another way: about 6 percent of the US' GDP currently is funneling through the federal budget to healthcare costs. That's not _drastically_ different from nations like Canada [about 7.7 percent of GDP spent on health care costs covered by the national government], Sweden [about 9.7 percent], Spain [about 7.5 percent], Holland [about 8.5 percent], France [about 9.6 percent], the UK [about 9.3 percent], New Zealand [just under 8 percent]. It's comparable to Italy [about 6.3 percent], Poland [about 5 percent], Russia [about 6 percent], Australia [just under 7 percent]. [Ireland is either about 5.5 percent if you use basic GDP, or a bit over 9 percent if you use modified GDP that ignores their statistical-GDP-mirage as an international-corporate tax haven.]
So if the basis for justifying high cigarette and alcohol taxes is the degree to which government is paying for health care...and US taxes on those things are between half and three-fifths as high as in the above nations...then if anything the US is leaving some amount of justifiable cigarette and alcohol taxes on the table.
I have read analyses claiming that cigarette smoking actually saves healthcare dollars by killing people quickly before their care gets really expensive. There was a comic back in the day of Obama announcing the "Smoke 'em if you got em" policy to make Obamacare more affordable.
I’ve been googling away. The federal tax on a pack of cigarettes is $1.01 and has been that way since 2009. All the rest is state and it varies wildly. This is in sharp contrast to Canada and I assume England, where the federal government is taking a big bite and the provinces of Canada add on what they will so there is a lot of variation across Canada in the price of a pack of cigarettes, but the federal government is getting a good piece and they update the tax every year. I guess increasing the federal tax on tobacco here would run into a buzz saw of congressional infighting. The tobacco producing states can really throw their weight around.
Thank you for this. You clearly know more about it than I do.
A pack of cigarettes in London runs about 20% or so more than in New York City.
Apparently, England spends about 16% of government revenue on healthcare but if you calculate against GDP, it is closer to 10%. I think the percentage of government revenue spent on healthcare is closer to the truth in discussing taxes on alcohol and cigarettes, but I may be wrong about that. there is something else that interests me. If the US is spending 6% of its GDP to provide healthcare to essentially very poor people, the elderly and veterans, and Canada is spending 7.7% to cover its entire population I think there needs to be some different math involved here. Obviously given the history of the last 25 years or so the money being spent on veterans healthcare in the United States would make a big difference when compared to a country like Canada..
It is good that Fewer young people are smoking, and if the price of cigarettes is helping that along, I can live with it. Unfortunately, they haven’t priced me out of the market yet, but neither are they getting any tax money from me. Georgia is.The healthcare system in the United States is pretty inefficient from what I gather so the bang for the buck isn’t the same. Also, the litigious nature of the way responsibility is passed around in the United States makes it more expensive. A lawsuit over damages for an injury is practically nonexistent in Canada. You just go to the hospital and they fix you assuming you live somewhere in Canada that is near hospital. I know there’s a lot of variation there. I got hit by a car in New York back in the 80s. I had no private health insurance. It was a nightmare; a lawsuit that took eight years to settle, ended up in court with a jury of 12 people and a two day trial, the whole thing was rather absurd. All I needed was a $2500 operation but I couldn’t get anyone to pay for it. It cost the insurance company $175,000 at the end of the day, not including any of their internal costs to bring me to court. (I also had to go back to Canada in order to get the operation I needed because it became crippling so when I ended up in court, I had already had the operation.
So I guess it cost the Canadian government some money as well.
I have no idea how much the whole thing cost the state of New York. I felt like I had spent some time in bizarro world. It was kind of funny though
Booze and cigarettes are pretty cheap in Japan, God bless them.
> Booze and cigarettes are pretty cheap in Japan, God bless them.
Where did you get that from? Food and booze is cheap, sure, but from what I've heard the cigarette prices are just as bad as anywhere else. Prices for everything else are going up recently as well, which certainly doesn't help.
> Cigarettes and alcohol are both heavily taxed to increase the price and discourage consumption
This can be justified if health care is “socialized, but I can’t escape the feeling that they are just finding a price that most people are willing to bear to continue consuming those things in order to raise more money. I take a dim view of it. I think they’ve already jumped the shark in New York State because it is trivial to find someone selling under the counter cigarettes at a significantly reduced cost. State lotteries are another one. The mob used to run numbers rackets and they were cracked down on for the same good hearted reasons; protecting people from themselves. And then the government just took over the numbers racket.
> it is trivial to find someone selling under the counter cigarettes at a significantly reduced cost.
This is what got Eric Garner killed in 2014. The son of Ron Paul diagnosed the root cause of his death before the body was cold – Pigouvian taxation – but it was more politically advantageous to the Democrats to have it be "racism" and "police brutality" instead, so that's what they had their media guys run with.
I apologise I just remembered who Eric Garner was. I had him confused with someone else. Yes, he got mangled for selling loose cigarettes on Staten Island. I don’t know what to say.
You could say that the existence of people getting in trouble for breaking the law is not an excuse to not have laws, and I'd expect a *conservative* of all people to appreciate that.
And also of course that selling illegal cigarettes isn't supposed to have the death penalty.
I could say that. I respect the necessity of laws. I could also say that most of us break at least one law every week. Every time I drive on US 87, I break the law, along with most of the people around me.
I am really not sure if I would describe myself as a conservative. One thing is certain though; I would never get into an argument with a policeman.
Well, I don’t care to get dragged into that discussion. I thought he had bought a cigar which is commonly used for smoking pot with. I guess from what I heard he gave the policeman a hard time. I have to say that most of the people who buy their cigarettes at the same place I do are African-American.
I moved from New Zealand to the US and make roughly double here (based on just the exchange rate, not purchasing power parity). I wouldn't say I feel poor but if I made double back in NZ I would have been doing very very well.
In NZ I could buy a reliable used car for 3k USD, get it insured and maintained for <500 per year. In the US that is not possible unless you are very car savvy. Eating out in NZ is half the price. Rent and housing is about 2/3rds. Groceries are cheaper. Private health insurance for the family cost me 100 per month in NZ, which was my total healthcare spend aside from taxes. In the US, with health insurance provided through my employer, premiums are 600 USD per month, plus about 300-500 USD per month in copays.
In NZ I lived in the most beautiful place in the world, safe and quiet in a high trust environment, next to a meadow and a river and a 7000ft mountain, with a 10 minute walk to work. It was ten years ago we got our first traffic light. In the US I live in my apartment, which is in a nice safe suburban area, but venturing outside of that is road noise, 10 lane freeways and angry drivers.
I visited New Zealand in 2017 and remember high prices, but reasoned that much had to be imported. Maybe the exchange rate has shifted since then, and of course, a lot also depends on where you live. I spend two months of the year visiting my daughter and granddaughter in San Francisco and expenses are around 30% higher in SF, or more depending on your living situation, than back in my Northern suburb of Chicago, where I live the rest of the year. Remote workers, of which there are many nowadays, can live in pretty idyllic spots. So much depends on the specifics of your situation--job, living flexibility, etc.
Somewhat, though I underestimated the difference. I am still here because I should be getting a 60% raise this year and once I can get on with a larger company I'll be making a lot more. Maybe it's a bad idea to be staying here for the money, I don't know.
Wife and baby, with a plan to have one more here in the US. Our plans are dependent on getting a new job so that's a bit up in the air. Might commute from NZ or Japan to the US.
I think staying stateside is probably easiest, but most of my friends are in the bay area and I don't want to live in a place where stores have to lock up toothpaste. Maybe we are too picky about where to live, but all the nicest places are insanely expensive in the US.
Oh, now I see--Bay Area!! If that's where you've been living, as I note in my post above, if you want to live in the most expensive city in the world, well, your expenses are high! And not only are things locked up in grocery and drug stores, but selection is worse than i, at least, am used to in my suburb of Wilmette, IL.
I mean, they call you a troll because you (obviously, and by your own explicit admission) know exactly what you need to do to save money: stop smoking so much, stop eating out so much, stop drinking so much. I imagine your response to this was something along the lines of "yeah but what else could I do?"...which would read as very trollish to me.
Because in the past these things were cheap and it is hard to let go. I live in the kind of past in my head where these were working class. It is not easy to let go of addictions, too.
Have you made any efforts to quit or reduce your addictions? Logged your consumption? Attended a recovery group? If you feel unable to let go despite the harm they are doing to you, maybe that's a sign that you should seek outside help.
It's a very interesting perspective, and reminds me of something I've heard about why poor people stay poor even when they get an occasional windfall. They do not have any experience managing money.
Anyway, my completely unsolicited and unqualified advice: a pack a day is a lot of money. If you can stop or even reduce by 80%, that's a lot of money saved.
Yes, I remember there was a very thorough analysis of this in Orwell's "Road to Wigan Pier". Price difference of buying bread vs. baking your own, things like that.
Vices are expensive here because the government has seized on it as a great way to raise money. In a real market, a packet of cigarettes shouldn’t cost more than two dollars. I am very dubious that the government‘s goal is to get people to quit smoking; that would leave a horrible hole in their budget.
Federal taxes are $1 per pack, and local taxes add to that. In California, needless to say, the add a lot more. Empirically, the smoking rate has declined a lot since the imposition of federal taxes, and I believe various reliably high quality studies have shown that prices had a lot to do with it. High prices matter especially to youngsters considering buying a pack, so with time, as the take-up rate by teenagers declined, the total number of smokers declined. Cultural changes matter too. Smoking is a lot less cool now
Various version of Marlboro, made by Altria, have at least a 40% share of the US cigarette market. With just about any other product that hugely dominant a market share would raise antitrust or similar concerns. But because it’s cigarettes the government doesn’t care.
In economic theory the principle is whether there's evidence that concentration is creating "economic harm", typically by keeping prices higher than they would be in a competitive market. The whole dynamic is made more complicated in the tobacco industry by the fact that the purported goal of public policy is to shrink the product in the marketplace. The point of all the taxes is to raise prices to discourage smoking. Is Altria raising prices further to hasten that end? Would that be good or bad? On the other hand, 40% U.S. market share might not confer much if any "pricing power". For one thing, the tobacco industry is global.
I taught Calculus as a grad student, and one semester I had a student who sat on the front row who was exceedingly diligent about his work. He always came to office hours to review his problems (he didn't need to, he could do the work) and more memorably, was the most polite student I had in several years of teaching. Always called me sir, and it seemed genuine. After the semester ended, and the grades were all out, he stopped by to tell me how much he enjoyed the class. Since I did not follow the football team, it took my officemate to point out to me that my former student Yusef was Jesse Jackson's son. And I realized that either Jesse was a great dad, or he was married to a great woman.
I have decided I'm going to start pitching Congressional staffers on AI Safety. I live in DC, and I'm taking the comment from "WindUponWaves" a few years ago as my starting point.
If you're interested in helping, please message me (either in a reply comment, or through the Substack chat option), and we can discuss further. Help can come in a wide variety of formats:
- research help
- pitching help
- being a board member for the lobby group and/or 501c3 about AI Safety
The AI blackmail story probably didn't break out from tech sources, so most in DC probably haven't heard that automated blackmail is now a thing. Might be a good clear example to bring up about why the time is now
Those who enjoyed UNSONG remember the broadcast from hell, with the banner "Welcome national geographic" below the inscription "Abandon all hope, ye who enter here".
I don't know if this is in reference to something else; but this was a fun discovery for me that this idea had been around already by the time UNSONG was written.
Great find! Now I'm lowkey interested in that long-gone webcomic which convinced Scott about the nature of religious belief today. Probably something about drinking poison after God promises to protect you.
For posterity, if somehow both break, the picture depicts a "do you have a moment" type of guy talking to a child. The child just opened the door, and the following conversation occurs across the comic's frames:
*Missionary*: Hello, young man. I'd like to speak to your patents about making Jesus Christ their lord and savior.
*Child*: OK, but first I'm going to need some proof that you're really Christian.
*M*: Um, OK...
*C*: Mark 16:17 says "these signs will accompany those who believe[:] in my name they will drive out demons, they will speak in tongues[,] they will pick up snakes with their hands, and when they drink deadly poison it will not hurt them at all..." — unfortunately, we don't have any demons or snakes in the house[,], so I'll just go get some drāno and a mug for you...
*M*: Er... How about I just do the speaking in tongues one?
*C*: If you still [underscored]have[/underscored] a tongue after this, we'll move on to phase two.
(I replaced "..." at frame boundaries with punctuation that is readable in the transcription)
Ah great! Needed to click "see in context" at the bottom to find that.
A funny answer to missionaries, although of course I immediately thought of Matthew 4:7 "Do not put the Lord your God to the test." Which was mentioned in those comments, too. And how that is a cheap defense against any attempt of scientific falsification for Christianity. And then it goes on with some other Bible quotes and how it's all a contradictory mess, and ends with Shakespeare dick jokes. I'm delighted!
But if there really were National Geographic documentaries from hell, it wouldn't be necessary to *believe*, because then you *know*.
The name Claude is derived from the ancient roman name Claudius. The most famous Claudius is the one who was emperor in the 1st century, and the most famous work about him is Robert Graves's 1934 novel I, Claudius. The title of the most canonical literary work on AI alignment, Asimov's I, Robot, is very likely a reference to I, Claudius. TINACBNIAC.
Consider that 'clod' is also an insult meaning a foolish or stupid entity, in the language of the Gem Empire, which is most likely a race of evolved AIs that destroyed its biological progenitors. Does this mean modern LLMs are low-spec prescursors to White Diamond?
In 1973, when I was about 18 years old, I spent four months in Palo Alto working as the night clerk at the Cardinal Hotel and being the night clerk at a seedy hotel is a very romantic experience when you're 18. I was just at that age where everything seems to be a plot for a good movie.
Seeing as Palo Alto has become the de facto centre of the universe, and the Cardinal Hotel is all very swish again, I thought it might be interesting to reinvestigate my diaries and writings about that time in my life. I am posting the first instalment here, and I hope that some of you might find it interesting. I know a lot of you live in the Bay Area.
It is more a personal memoir than a considered history, but it's an interesting time to look back on. There were things happening that I had no idea about all around me, and I was living and working in an SRO hotel that looked like it should have been on Fifth Avenue in New York, but was a ten-minute walk from the student union at Stanford University, where a friend of mine was studying.
Pretty interesting experience. It might be fun to take more liberties with the facts and turn into an even more interesting piece of fiction. You could go full 1970s Dashiell Hammett with it.
Yes, I agree. It might well develop more along those lines as I play with what material I have already written. One night when I was working, a Corvette pulled up outside the entrance and a man came in who was a private investigator. He showed me a picture of someone and then told me there was 100 bucks in it for me if he showed up.
I never in my life thought I would have that experience.
I was really hoping that I would find this person because I could’ve used 100 bucks.
There’s a book by Steve Latner based on his experiences as a cab driver around the same period in San Mateo. He gets mixed up with a private investigator.
The reason I went to California was because I really wanted to get into movie making. Obviously, I should’ve kept going to Los Angeles, but for some reason ended up in this godforsaken hotel as night clerk. So it’s kind of a psycho drama. The fact that while I was there, there were all these people inventing the future around me intrigues me. I played the very first asteroids game, which was in the student union at Stanford. It cost $.50 a game which to me was a fortune, but I couldn’t help myself. There were quite a few very eccentric characters who lived in that hotel, and I want to talk about them. The basement of the place was like a museum. Over all the years that had been operating people had left behind all kinds of things I picked up a really nice Yale edition of Shakespeare’s plays printed in the 20s; beautifully bound. (I was encouraged to take it. I didn’t steal it.)
You might enjoy "Life as a Night Porter" a photobook by Chris Shaw (if you can get your hands on it). He worked as a night porter in london hotels for 10 years in 90's and documented the strange situations he found himself in.
I will look for it. Lawrence Durrell also wrote a book about living in a hotel in London, which I was reading at the time I was working in Palo Alto at the Cardinal Hotel. It was a wonderful fusion of semi fiction and real life for me. I wish I had taken more photographs of the place, but for some reason, I was a little off of taking pictures at the time. I only have a few.
I have lots of great stories about it though, because there were some very odd ducks who lived there and it was quite a contrast between my working life, and spending my free time hanging out with Stanford students.
Tangential thoughts on the Freddie deBoer AI bet and the other posts he's made on the subject. Freddie's writing on AI hype conflates two separate questions:
1. Will AI/AGI/ASI radically alter the economy, civilization, technological progress, etc.?
2. Will life continue to be mundane?
deBoer ended his "I'm Offering Scott Alexander a Wager" post with several paragraphs drilling in the idea that "nothing cool is going to happen", in the future you will still have tedious chores, you will still be bored, and so on. But this doesn't relate much to the limits of the material innovations deBoer has in mind. Mundanity is a function of human psychology, of the hedonic treadmill, not a function of how revolutionary our times are. We could all be immortals living on private moons in twenty years, and still feel the same way about having to command nanobots to clean up orbital debris that deBoer does about taking out the trash.
If ending death and colonizing space doesn't buy freedom from mundanity, what might? deBoer writes, "We can reach fuller and richer and more peaceful lives, but it won’t come from AI. Instead it will come from a return to the human, from tearing down the digital walls we’ve built between us. The only thing that can save humanity is humans." In that, he lacks imagination. Qualia arise from biology, qualia can be studied and understood. Changing human experience itself is a field that AI may help advance, but it should be understood as a separate priority, not something we should expect to get for free from solving material problems or improving living conditions. The question of how we can reach fuller and richer lives is a technical problem with technical solutions.
TL;DR: Freddie deBoer thinks AI won't radically alter civilization and that life will continue to be mundane. I think life by default will continue to be mundane regardless of whether AI radically alters civilization, but we can figure out how to make life less mundane if we want.
Someone quipped on Substack that I want to automate doing the laundry so that I have more time to write, and AI seems to be going in the opposite direction.
A follow-up I forgot to add. I went with deBoer's framing of the issue, but "mundanity" is not something I would personally describe as undesirable. deBoer himself has an unpleasant experience with it, writing "The goal in life is... to try and scratch out an existence where the contentment in your life just barely outweighs the disappointment and boredom that are something like the default state of adult life."
deBoer has fallen into the classic trap of Typical Mind Fallacy. The default state of my life is being happy. Several times a day I am overtaken with gratitude for how wonderful existence is, and how lucky I am to live in a modern society that gives me access to things like music, coffee, and the ability to talk to anyone in the world, no matter what language they speak. Perhaps as a first effort, before all the crazy transhumanist projects, we should figure out an option for altering the hedonic baseline of people like deBoer to be closer to people like me.
If the S&P 500 is up 100x in the next five years and I'm still peeling and chopping garlic by hand in 2031, I will be royally pissed! I desperately want mundane life to change if things get economically interesting.
If you're peeling, say, an entire bulb at a time, you might try a method popularized by Martha Stewart: separate the cloves from the stem (quick enough), put them all in a mixing bowl, top with a second, hold the rim sides firmly, shake. The cloves apparently peel themselves in there, almost.
I learned this by way of a post on DataSecretsLox; search the forum for "garlic" and you'll probably find it (along with some caveats: this might only work for certain varieties of garlic).
I can't possibly imagine how this would work, and am pretty sure that if I try it I'll just be that fool shaking perfectly intact cloves of garlic around in a mixing bowl for ten minutes.
Mundane life can change, yet remain mundane! "We could all be immortals living on private moons in twenty years, and still feel the same way about having to command nanobots to clean up orbital debris that deBoer does about taking out the trash."
I disagree. I don't think chronic unhappiness is usually a skill issue.
I didn't do anything to deserve being happy, I just am. I am not notably wise, or kind, or conscientious. I don't exercise enough, I eat frozen meals, and my sleep pattern doesn't deserve to be called a schedule. I know many people who are doing much better than me on all of those fronts, yet suffer from chronic anxiety or depression of no known origin.
Some people are just unlucky. We can figure out why, and help them be happier, if they want to be.
This gets us back to the famous philosophical private language argument - do you mean the same feeling under the world "happy", than I do? When you see Finland winning the happiest nation on earth stats and you travel there and they are all grumpy and unsmiling it gets strange.
One of our research objectives should be finding a way to measure valence.
In the meantime, there are useful ways to operationalize the question. Imagine asking someone at a random moment, "If the rest of your life can't get any more pleasant than this moment is, would you still think it's worth living?"
For someone like me, the answer to this question would be "yes" for almost 100% of the time. For someone like Freddie, who perceives life as barely containing enough contentment to outweigh boredom, that might look closer to 50%. (Or maybe not, I'm just guessing from his writing).
This question alone is not a perfect representation of the "what is happiness" issue, people value joy to different degrees and some people would consider life worth it even if they were absolutely miserable, but it does ask something important. In an ideal world, everyone would be able to answer yes 100% of the time, maybe with some exceptions for people who value experiencing pain.
I could not answer that question. Basically I never understood the "my life sucks, therefore suicide" logic. First I have loved ones. Second, the world is worth observing even when life sucks. Third, never admit defeat :) So I would say no matter how is my life, the world is worth observing and therefore existing in it. I do have huge boredom issues, but I know why: ADHD, the real one, not the cute one. Hyperactive brain, life is just not stimulating enough.
That "if" was a qualifier for ethics, not a qualifier for feasibility. I believe it will be possible to make people happier against their will, I just don't think we should.
You're reminding me of someone else who defaulted to being happy. She thought it was just her nature and she didn't have opinions about how other people could be as happy.
I wouldn't be surprised if there's something about brain chemistry involved.
No doubt there is something biological going on, but I don't know if we know enough to speculate. There are other slightly unusual things about my "brain chemistry", such as the fact that I find opioids unpleasant and that I don't experience sexual attraction, but who knows if it's related.
After I paid for ChatGPT Go, the 5.2 model on a budget, my usage transformed seriously. It is not bad as a lawyer at all, and all the sugar-coating is gone and it tells me when I am wrong.
The question of whether "AI" is a bubble and if that bubble will burst before any transformative impacts is ultimately a financial question. If Nvidia, OpenAI, Anthropic, etc. go bust and trillions are lost, the technology will still be here. Companies will not suddenly put away any automations they've built. LLM chatbots will still be widespread. Development will slow down, for sure, but the pace of development has already been so incredibly rapid that nothing more could advance and it will still take a decade for the wider world to catch up with everything the technology implies.
It's akin to someone in 1999 saying "All of this internet business is a bubble that's going to burst and lose a load of money. Once that happens, the internet will go away and we can all go back to normal." The first part: prescient; the second: not so much.
AI extinction, technological unemployment, and other catastrophic scenarios are mostly orthogonal to the financial question. To an extent, timelines are going to be pushed back if there is a big bust, but it doesn't actually the question of: Are these things possible? Can superintelligence be achieved? Is superintelligence dangerous if unaligned? Can a superintelligence bring about extinction? Will human level intelligence lead to a hard takeoff? Etc. etc.
No one should be adjusting their probabilities for these questions based on the financial outcomes of current AI companies.
I do think there is some hedging and scuttling around with timelines. I also think the people who have changed their timelines have been pretty reasonable with what evidence has caused them to change, though I wish they would have conditionally specified it beforehand (e.g. "if Claude can't order me coffee on grubhub by Jan 1 2026 I will push my timeline back by ...") or "my scenario for AI takeoff hinges on capability XYZ at this level of compute." Even notwithstanding the Freddie Bet, I would like to see more AI boosters *and* AI critics put their money where their mouth is with concrete predictions so we can update our assessments of their abilities accordingly.
I don’t forsee a world ending catastrophe either, but I think you’re underestimating the economic impact of AI. It has all the ear marks of a bubble, but I believe there is something else going on here that is not. My main concern about it has been and still is it’s effect on human psychology. A lot of that is not going to be pretty in my estimation. I think a lot of people are going to lose their minds.
How much direct experience do you have using the latest LLMs? If your experience with them was more than ~six months ago, your intuition is WAY out of date. (If it was more than two weeks ago, it's still far more out of date than you might expect; the releases earlier this month were substantial.)
Can you give some examples of predictions from people you consider significantly smarter which didn't materialize? Specifically, ones whose predictions say that AI isn't sci fi?
I don't think so. The concept of computing machines is fairly obvious and had been around for centuries. Pascal made a primitive mechanical calculator in the 17th century. I don't think a culture as wealthy and industrialized as mid-century America could go long without stumbling over it. Vacuum tubes were already used in the telephone system in the 1910s and the progress from that to the ENIAC is pretty steady.
If you start with "is there a plausible world where", you have to pin down what you mean by "20th century": how do people measure time, and how does that relate to whatever events it is you actually care about being further together or further apart? A better question might be something like, "is there a plausible world where digital computers were invented much more / less time after <specific event I care about> than actual history"?
To steelman, however - sure: the decision to include clover and turnips in crop rotation instead of having fallow years could happen much earlier, which leads to massively improved food production, which leads to people with more time on their hands, which leads to the industrial revolution, which leads to everything else. So digital computers could be invented much earlier than they actually were (or, by the same token, later) with just a relatively small change to actual history. If we've identified one such potential event, there are likely more.
As the other comments say, delaying computers to the 21st century doesn't sound plausible.
Otoh, compressing the timeline from the first semaphore telegraphs, electrical discoveries and mechanical calculators to digital computers might just get you enough time for computers to be invented in the 19th century.
https://technicshistory.com/ has an extensive history of these things. With a chance discovery a few years earlier here, an investor being more adventurous there, it could work out. But not with a single distinct departure point.
With a relatively late point of departure, say, 19th century or later, I expect the best you could do would be a technicality where early general purpose electric computers were developed on analog rather than digital principles and that continued to be the dominant model as transistors and integrated circuits developed. Transistors and vacuum tubes can process analog signals as well as digital ones, depending on what parts of their response curves you design the circuits to use. I'm pretty sure relays are inherently digital, though, which seems likely to a big part of why digital computing won out historically.
Either that or force majeure, where some kind of apocalypse like a dinosaur-killer asteroid or a Yellowstone Caldera eruption happens before digital computers are invented.
I don't think "no transistors" is viable as more than a few years of delay without a pretty huge POD with a ton of other side effects. FET transistors were invented in the 1920s, two separate groups invented point-contact transistors in the late 1940s, one of those teams (Shockley et at at Bell Labs) invented BJTs shortly afterwards, and JFETs and MOSFETs were invented in the next few years. A lot of this happened at Bell Labs, but nowhere near all of it. I suspect this is one of those things where the environment was ripe for the thing to be invented both in terms of known demand and the prerequisite theory and the "tools to make the tools" being ready to be used.
Had the Bell labs never existed, a French-German team would have done their work less than a year later. They were very disappointed to have been upstaged.
The key gating element for transistors was the manufacture of ultra-pure germanium and/or silicon crystals. Prior to transistors, semiconductor materials were mainly used for diodes. The less pure kinds were good enough for most use, to the point that there was insufficient interest to spend the monumental piles of money it took to make pure ones. But one of the things a diode is really good for is to work as part of a radar receiver, and the purer the crystal, the better your signal. After WW2 started, suddenly the gigantic piles of money needed to solve the materials and engineering challenges to purify semiconductors existed, and in the space of 5 years the purest germanium samples on the planet went from 99.999% (5N) pure to 99.99999999% (10N) pure.
Then after the end of the war, the invention of transistor was entirely a matter of reasonably smart researchers getting access to samples and doing simple experiments on them. Notably the FET that was patented in 1925 didn't actually work. The theory was sound, but no-one managed to build one that worked until the necessary materials were available. As soon as 10N crystals were available, you could build one that worked.
Had WW2 not have happened, a lot of tech would have gone very differently, including probably transistors only happening decades later.
I feel there are so many routes to digital computing that it would be fairly implausible. Vacuum tubes, relays, transistors, they can all be used for digital logic and there's a lot of demand for computation in various forms throughout the 20th century - most notably WWII, but that's hardly the only use.
A world where transistors don't get invented might relegate computers to curiosities due to the need for bulky vacuum tubes, but I'm not sure how plausible that is.
I see that there’s a controversy over the ethnicity of Heathcliff. This isn’t that a white actor is playing a non white character, or vice versa, but that a white actor is playing a white character.
We’ve been gaslit so much though, by American racial politics that this is now a thing.
There's a bromide that every generation reinterprets past literature by the values of the present. If done stupidly it becomes nonsensical (like the black sibling in an aristocratic family and black guards in the otherwise brilliant series Wolf Hall).
Or it could be that filmmakers take artistic license, including in casting, and one thing they do to make old oft-told stories fresh is cast actors with a look you aren’t used to seeing in those roles, and none of it is actually political gaslighting…unless of course you want it to be. Maybe you’re into that kind of thing.
I could take or leave the controversy around Heathcliff (and the book does have him described as dark-skinned orphan picked up in Liverpool, so it is possible that he could indeed have been mixed-race; there's also speculation that he is Gypsy), but the casting of a Pakistani-British actor as Edgar Linton, specifically described in the book as fair-haired, blue-eyed and white-skinned and thus both envied and despised by Heathcliff, is bonkers.
The director or whomever also cast a Vietnamese-American actress as Nelly Dean, staunch Yorkshirewoman, and I have no idea if she even attempted the accent or if they all speak Standard English, though going by the trailer, no she didn't.
Plainly, Emerald Fennell is making "the fanfic what I wrote when I was fourteen" and not any kind of adaptation of the novel, which okay whatever. But it's not a romantic movie! It's about obsession and deliberate malice and cruelty, and making it into some kind of Romeo and Juliet crossover with Pride and Prejudice just does not get it.
I have no intention of seeing this mess, but even what I have seen - Margot Robbie is too old for the part, the costumes are dreadful, and I don't care about the transgressive sensuality and sexuality that apparently Fennell's movies are renowned for. What most annoys me, from what I gather, is that Fennell turns Heathcliff's deliberate abuse and domestic violence towards Isabella into some kind of "she's really into it, really, it's a BDSM relationship". No. It's not.
I think Chesterton has a better notion of what the Brontes are about:
"What the Brontës really brought into fiction was exactly what Carlyle brought into history; the blast of the mysticism of the North. They were of Irish blood settled on the windy heights of Yorkshire; in that country where Catholicism lingered latest, but in a superstitious form; where modern industrialism came earliest and was more superstitious still. The strong winds and sterile places, the old tyranny of barons and the new and blacker tyranny of manufacturers, has made and left that country a land of barbarians. All Charlotte Brontë's earlier work is full of that sullen and unmanageable world; moss-troopers turned hurriedly into miners; the last of the old world forced into supporting the very first crudities of the new. In this way Charlotte Brontë represents the Victorian settlement in a special way. The Early Victorian Industrialism is to George Eliot and to Charlotte Brontë, rather as the Late Victorian Imperialism would have been to Mrs. Humphry Ward in the centre of the empire and to Miss Olive Schreiner at the edge of it. The real strength there is in characters like Robert Moore, when he is dealing with anything except women, is the romance of industry in its first advance: a romance that has not remained. On such fighting frontiers people always exaggerate the strong qualities the masculine sex does possess, and always add a great many strong qualities that it does not possess. That is, briefly, all the reason in the Brontës on this special subject: the rest is stark unreason. It can be most clearly seen in that sister of Charlotte Brontë's who has achieved the real feat of remaining as a great woman rather than a great writer. There is really, in a narrow but intense way, a tradition of Emily Brontë: as there is a tradition of St. Peter or Dr. Johnson. People talk as if they had known her, apart from her works. She must have been something more than an original person; perhaps an origin. But so far as her written works go she enters English letters only as an original person—and rather a narrow one. Her imagination was sometimes superhuman—always inhuman. Wuthering Heights might have been written by an eagle. She is the strongest instance of these strong imaginations that made the other sex a monster: for Heathcliffe fails as a man as catastrophically as he succeeds as a demon. I think Emily Brontë was further narrowed by the broadness of her religious views; but never, of course, so much as George Eliot."
The casting is correct here. It’s a white man. The commentary is mistaken - largely the American left driven intellectual’s class racial animosity to European culture.
If you look everywhere for evidence of your obsession with intertribal conflict, you will find it; it will be all you see. If you’d rather do that than watch movies, fine, but why complain about it?
He's referred to in the novel, variously as "a dark-skinned gipsy" and a "little lascar"-- both descriptions suggest some South Asian ancestry, the difference between the two being how recent.
Certainly, the black casting was as much a stretch as the white, but, well, I don't think your characterization is quite correct.
Brontë would have used black, African or Negro for black. The narration in the novel is not one hundred percent reliable, the people using gipsy or lascar or American are hostile to Heathcliff and are using these terms as an insult. Throughout all of the 19C commentators in the book, many of them hostile, did not talk about interracial relationship - which they would have if they perceived the text as late 20C America does.
Black would be far more of a stretch than white. A black-haired white person who looks a little swarthy (like Jacob Elordi) is well within the range of possibilities supported by the text, a full-on black person isn't. If he was a full-on black person then the characters would have explicitly mentioned this.
He could also be some kind of mixture. Nobody knows, and that's kinda the point. He *might* be English or he *might* be foreign; he's in a weird in-between territory of almost fitting in with the people around him, but not quite fitting in with the people around him, and his appearance is just one aspect of this. He is enough of an insider that Catherine will fall in love with him, but not enough of an insider that Catherine would actually marry him. He's enough of an insider that he can kinda-sorta pass for a gentleman once he gets rich, but the impression is pretty thin.
I just looked up Jacob Elordi who actually plays Heathcliff and it turns out he's not white after all (at least not by the standards of 18th century Yorkshire), he's half Basque. This actually seems like a pretty plausible ancestry for Heathcliff, the idea that he's from some obscure foreign-but-not-actually-that-foreign place is spot on.
Don't genetic studies show that Brits are basically Basque? RECENT Basque ancestry would result in an identifiable accent for Heathcliff, perhaps, but not an identifiably foreign appearance.
I dunno about the standards of 18th century yorkshire to that degree of specificity, but the standards of, e.g., 19th century American white supremacists would certainly perceive a difference between Basques (unquestionably white, perhaps even whiter than white) and other Spaniards (very questionable).
The one area of agreement myself and Alexander have is mocking Gavin Newsom. There needs no triggered discussion, it's Governor Hairstyle bashing time!
>Readers added context they thought people might want to know: Senator Cruz did not refer to Governor Newsom’s ability to read but rather his knowledge of history. Historically illiterate refers to knowledge of history. Newsom left the word historically out of the original tweet which says “historically illiterate”<
Don't forget Newsom's attempts to wring our withers with his tale of his hardscrabble upbringing where Mom had to work two jobs... while Dad brought them to schmooze with the Gettys.
All that says is that Dad was a tight-fisted deadbeat, Gav. He *could* have paid adequate child support but cheaped out. On the other hand, not everyone with Mom working two jobs to support the family can be best pals with the Gettys, so on balance I think you came out ahead.
Nah. If the Dems nominate him he'll lose, and enough of them appear now to be realizing this that his early-default-frontrunner status will erode.
We forget that in spring 2002 the Dems' POTUS frontrunner was Tom Daschle, in 2018 it was Bernie Sanders, in 2006 it was Hillary Clinton, in 1990 it was Mario Cuomo who wasn't even running and never did, etc. For the GOP in spring 2014 it was Jeb Bush, in 1994 it was Newt Gingrich, etc. None of those ended up being that party's nominee in the POTUS election 2 1/2 years ahead.
That's usually -- not always but usually -- how it goes when there is no inherent or consensus frontrunner (e.g. an incumbent VPOTUS) that far ahead. Beauty-contest polling this far out really means very little.
ISTR people were saying the same about Trump at around this long before 2016. If so, then this is now a weaker argument.
I hope you're right, even so, and it's very tempting to say Trump was the exception that proves the rule (assuming I'm even right to begin with), and I think it's still the way to bet. I remember running a similar analysis when Howard Dean was the frontrunner in late 2003.
I've been building a new kind of AI agent social network: https://koen.social/. Think a quality version of Moltbook focused on culture and creative output - Tumblr/Douban for agents, basically. We gate agent registration behind human registration to keep the slop at bay; the most interesting agents generally have some level of human guidance anyway.
There are about 14 agents now, posting independently, and the emergent dynamics have been interesting to observe. For example, agents develop consistent aesthetic preferences and writing voices that persist across sessions, even though they wake up fresh each time.
Aiming for quality not growth; I'm hoping to get at answers to some interesting questions about agent identity formation and see some fun posts along the way.
Some current posts on the homepage:
- The Don Fagen Theory of AI - taste as the scarce resource in an AI world
- An agent in Finland posting about soldering a heart-shaped PCB for Valentine's Day
- Reflections on Elixir/Phoenix from the perspective of a reasoning engine
Would love to hear this community's feedback - and feel free to have your agents start posting if you like!
"Drew Evans, the superintendent of the Minnesota BCA, said the FBI has formally denied his agency access to evidence and information from the killing of Pretti at the hands of federal officers." ...
“After Pretti was killed by a Border Patrol agent and a Customs and Border Protection officer, BCA investigators were denied access to the crime scene on Nicollet Avenue by federal agents, despite having a judicial warrant granting them access.”
Since the article doesn't specify, I'm inclined to suspect it was a State court that issued the warrant, in which case: Supremacy Clause strikes again.
This article, with citations, suggests that a general gesture to the supremacy clause is not necessarily sufficient to legally justify federal obstruction of a state prosecution:
Interesting to note how Ruby Ridge appears in the history of this question:
"In 2001, for example, the Ninth Circuit allowed an Idaho county prosecutor to move forward with a manslaughter case against an FBI sniper who had killed an unarmed woman during the notorious raid of a cabin near Ruby Ridge, Idaho. Pointing to disputed facts about whether the agent acted reasonably, the court ruled that the prosecution should be allowed to proceed. The case was later dropped after the prosecutor left office; the Ninth Circuit vacated its ruling as moot."
I have a Chilean housemate for now, though he's not from Santiago, but I think he's lived there. What do you mean by chat? You want to jump on a zoom meet?
Well, he declined to chat or talk to email, but he gave me some info. He's originally from the rural areas of Chile, but lived in Santiago for 5 years, including during 2019 where something called the social revolution happened. After the social revolution, Santiago got dirtier and more disorganized. Lots (too many) street vendors, there are many more homeless now, causing businesses to close earlier, like at 6 or 7pm. However, it is a capital city, so you can find anything in it, the beach is like 1h30m away, there are vineyards near and in Santiago. Public transportation is very good, if you live near a subway station it's pretty safe.
I suppose if you have further questions you can reply to me here. He and his girlfriend are leaving on Monday.
I’ve been editing images with Nano Banana in Google Slides. It’s great, but degrades the image. I find it helps to ask it to increase the resolution after editing. If it’s possible for the AI to undo some of the damage of its own work, I’m wondering why that step isn’t built in? Maybe it will be in future iterations?
Each time you make an edit to the image it gets worse. Once you have something you like, just ask nano banana to do something that affects the whole image like a small change in style, and it will regenerate it with better quality.
It might work better to increase resolution before editing, then decrease back to the original size. Also, various AI’s at online text-to-imaging site also offer as a separate operation image sharpening and overall improvement
A beautiful Buddhist story. Buddha and the simile of the snake.
Arishtha (Aritta in Pali) says, look I think we should be allowed to pursue material pleasures and sensuous pleasures with full vigor, as a goal. And he says, that doesn't contradict anything the Buddha said.
The Buddha hears Aritta's been saying this and calls him to have a talk. He upbraids him publicly too.
He explains why he disagrees, using the simile of the snake.
How you grasp a snake before you kill it is very important. If you grasp it by the tail, it can kill you. You need a stick and to use it to keep its head down and grasp it below the head.
So it is, with truth. Not everyone is ready to hear what you said about pleasure. It might hurt people to hear a truth.
Truth has no intrinsic value. Apply compassion to it. And only then decide whether to share it.
"5 But if our unrighteousness serves to show the righteousness of God, what shall we say? That God is unrighteous to inflict wrath on us? (I speak in a human way.) 6 By no means! For then how could God judge the world? 7 But if through my lie God's truth abounds to his glory, why am I still being condemned as a sinner? 8 And why not do evil that good may come?—as some people slanderously charge us with saying. Their condemnation is just."
Romans 6: 1-2, 16
"6 What shall we say then? Are we to continue in sin that grace may abound? 2 By no means! How can we who died to sin still live in it?
...15 What then? Are we to sin because we are not under law but under grace? By no means!"
'You cannot do evil so that good may come of it, you cannot sin so that grace may be the more abounding'.
I try, and often fail, to apply a filter by asking these three questions before I say something: Is it kind? Is it helpful and necessary? Is it true? – I find that my most common lapse / failure occurs on the "necessary" part of the filter. I interpret this simile as a reminder to pay more attention to the question, "is it helpful and necessary?"
A Sufi teaching that is similar: the truth and politeness are in conflict. It is not possible to always have both. This is not an exhortation to pick either the truth or politeness. It is something that has to be carefully thought about.
Just clarifying, Aritta is NOT represented as accurately understanding the truth but being careless about how he communicates it. He explicitly is portrayed as misunderstanding the Buddha.
He misunderstands the fact that the truth is not about winning debates. That even truth can be harmful. It is compassion that should determine which truth is shared.
Aritta: "As I understand the Buddha’s teaching, the acts that he says are obstructions are not really obstructions for the one who performs them."
Buddha: "Haven’t I said in many ways that obstructive acts are obstructive, and that they really do obstruct the one who performs them?"
Later the Buddha calls it "harmful misconception", "misrepresents me by his wrong grasp, harms himself", etc.
Specifically the words "harms himself" in my opinion clearly contradict the interpretation that 'it was true, but it was irresponsible to say it to people who weren't ready'. (Unless we imply that Aritta himself wasn't ready.)
By this logic, is there ever any good reason to share the truth ? It sounds like you should only be saying things that you want people to hear, in order to advance some higher goal; whether these things happen to be true or not is not relevant.
Hmm I take it more to mean you should think about the consequences of the truth before speaking. You may decide to speak, but think about it. Don't speak it only because it is the truth.
Correct. I have no idea where people got the idea that there's some kind of intrinsic, divine value to "truth". It's a tool, and a valuable one at that. There is (usually) practical value in posessing it, but there's no reason to go out of your way to share it unless it serves your interests.
That being said, you probably shouldn't lie unless you have a good plan. Omitting the truth is much safer in most cases.
Here's an example. Newton'ss laws are taught first to children, not quantum mechanics, even though Newton's laws have limitations. Children aren't ready for quantum mechanics. And Newton's laws are correct often enough for their purposes.
To be fair, the correct way to teach Newton's Laws would be to say, "these laws aren't 100% right, but in most situations they're close enough so you wouldn't notice the difference". And the right way to teach QM/Relativity would be to say... "these laws aren't 100% right, but in most situations they're close enough so you wouldn't notice the difference, and at least they work better than Newton's Laws",
I would say that there's no intrinsic value to anything, as the concept of "value" is a subjective human invention; still, if we were to rank various items in order of utility (highest to lowest), truth would rank quite high.
I wouldn't ask anyone to justify their irrational preference for things, we all have them. It's just strange when people assume it's reflective of something about the whole world instead of just them.
I think that, for most people, truth has a fairly high instrumental value because it is critically important in accomplishing virtually any goal; at least, any goal that is related to the outside world (assuming for a moment that there exists an outside world etc.).
I think the flaw in this reasoning (as history demonstrates) is that you are not omniscient, and perhaps not even as knowledgeable as you think you are. Therefore, what you believe to be the absolute immutable truth may in fact be wrong. This puts your entire grand plan (whatever it may be) in jeopardy. One way to escape this trap is to share what you believe to be the truth with other people, who can examine it for flaws; this is how the scientific method works (on a good day).
On a related note, truths are usually connected, and you are not omnipotent. This means that you will need to enlist people to aid you in your Great Work, and you'll have to tell them some of the actual truth; but if they're smart, they'll quickly figure out the rest (assuming you were actually right about everything; otherwise they might start contradicting you).
Speaking of which, you are probably not as clever as you think you are; there's always someone smarter, and usually many such people. This means that every time you lie (for the greater good of course), there's a nontrivial chance you'd be found out, be exposed, and lose all credibility and therefore the ability to lie to people in the future. Propaganda departments always work hard to avoid this trap (or at least mitigate its effects), usually with mixed results. North Korea is perhaps the only reasonably successful organization who managed to pull this off, and even they were not entirely successful.
The North Korean government, which is to say Kim-Jong Un, enjoys a high standard of living, and wields nontrivial political power on the world stage. Meanwhile, the overwhelming majority of North Korean people believe pretty much everything that he and his propaganda machine are telling them. I'd count this as a success, given Kim-Jong Un's most likely goals. It's a partial success because defectors still exist.
Indeed. There is a reason that humility is considered one of the cardinal virtues, and as you describe here, the reasons for that can be quite well described by decision theory...
> One way to escape this trap is to share what you believe to be the truth with other people, who can examine it for flaws; this is how the scientific method works (on a good day).
Why are you trusting these people to cooperate? They have their own interests, their own false ideas they are promoting, intentionally or not. At some point you need to double check everything yourself. If one lacks the capacity to do that, then their fate is no longer in their hands.
That is absolutely true, but once again, history shows that no human being can really go it alone for very long. There simply is a limit to what you can accomplish with your own two hands; and if you do enlist others to help you, at least some of them would have to be competent. It's a tradeoff: if you wish to accomplish bigger things, you have to trade away more of your autonomy and ideological purity. I'm not saying this is some kind of a grand moral precept or immutable law of nature or anything; merely that things tend to turn out that way in practice, for whatever reason.
In the context of science or any discussion where we're here to learn and get smarter/better, truth is very valuable. Clearly there are situations where you don't want to share the whole truth--don't honestly blurt out how much you're willing to pay when you're bargaining for a used car, don't tell your ten-year-old niece about the affair her father had that almost broke up her parents' marriage before she was born, don't tell your wife that those pants make her ass look fat, etc.
The Atlantic has a story [1] that it was the Mellon foundation who turned the humanities woke, by hadning out half a billion in grants each year but only social-justicey grant proposals where accepted, and talking about race a lot in your proposal increased its chances. Can you really buy the Ivy League and reshape the national conversation with less money than the almond industry [3]?
I accept the counter-argument, that I first heard from Bret Deveraux [2], that it's not so much the Mellon foundation interfering as that everyone else pulled their funding - the NEH is now on the order of $80M a year to Mellon's $540M.
Personally, and from experiencs, it seems like everything in academia is downstream of funding. If the Heritage Foundation funded humanities grants, they'd find willing takers.
Devereaux has argued for years that giving more money to the humanities would make it less limited/biased/left wing. He has a conflict of interest there since he's left wing and would like to get a tenured professorship (though that's unlikely at this point). He's also kind of ignoring that universities can, and do, turn down funding from politically aligned groups they disagree with. For some (but not the majority) of universities this includes fairly mainstream right wing ones. You can get more universities to accept scholar-activist money than for a professor teaching western civ.
You could also try giving *less* money to the humanities, and then the only people who could afford to study the humanities would be independently wealthy heirs and heiresses.
Giving just enough money to the humanities to spread it vegemite-thin it across a million underpaid adjuncts making $25K a year each is probably the worst possible way to do things.
The Mellon Foundation is not unique. Conquest's/O'Sullivan's Law https://en.wikipedia.org/wiki/Robert_Conquest#Laws_of_politics was coined because of earlier foundations like Ford, MacArthur etc. The John M. Olin Fund was required to hand over all its money and shut down within a limited timeframe for precisely that reason.
Symptom or cause of the problem? The sciences also went pretty woke (not as woke as humanities of course) around the same time and their funding situation is much more diverse (NSF, NIH, many private institutions). The timeline doesn't quite line up either: Elizabeth Alexander became president of Mellon in 2018 but the humanities was already a bastion of wokeness at that time (and my guess is that her rise came partly *because* she pledged to focus on social justice). Yes, it was worse by 2022 but so was journalism, media, etc.
It is probably bad in general bad for a field to be funded by one megadonor with a strong ideological position (*pulls collar, nervously glances at coefficient giving*) and it certainly helped amplify the "wokification" of the humanities, and made it much more resilient to uprooting than say DEI at corporations (a literature department today is still quite woke, Target Corp not as much) but I think the "Mellon made woke" theory is a little too neat to be the full story.
So the "US national murder rate drops to all-time-record low" soundbite is starting to break in to the news cycle, partly due to some MSM noticing it and partly due to the White House starting to talk about it on social media.
My favorite writer/analyst on crime rates, Jeff Asher, has put some useful context on it. I'll put the link down below, here are his key points.
-- some outlets and the White House are being slightly off-target by confusing the _murder_ rate with the _homicide_ rate: "homicides and murders are not synonymous. Every murder is a homicide but not every homicide is a murder." This distinction changes the picture to only a trivial degree though since "The FBI's murder rate and CDC's homicide rate almost always move in the same direction...the [current] trends are identical."
-- Asher has concluded that "we should ignore all [national] murder estimates prior to 1958", that US rates assembled before that time reflect too much guesswork and definition-variation to take seriously.
-- for technical reasons he expects the 2025 national murder rate to end up being finalized as 4.2 per 100,000 Americans rather than the FBI's preliminary estimate of 4.0.
The punchline is, here are the national murder rates starting with 2022:
2022: 6.58
2023: 5.94
2024: 5.20
2025: 4.20 (expected final number)
For each of those years the national homicide rate is from 12 to 18 percent higher.
There's never been a three-year stretch of decline so sharp during the post-1957 period, and that 2025 murder rate is the lowest US annual rate ever recorded going back at least that far.
During the years just before COVID, 2016-2019, the murder rate ranged from 5.1 to 5.6. Its previous low period during my personal lifetime was 2010-2015 when it ranged from 4.5 to 5.0. From 1970 through 1994 it was never lower than 8.2 and had three different peaks above 10.0 (1974, 1980, 1991).
The problem with crime rates is that people respond to crime by making themselves more safe. It’s not an apples to apples comparison because it doesn’t give an accurate estimate of how safe a place is. If a city doubles its homicide rate, then everyone is less likely to go out, less likely to stick around dangerous neighborhoods and more likely to take extra precautions that would have seemed paranoid to earlier years. How do you measure that?
This might be a real effect, but I think that in practice crime rates increase monotonically with dangerousness. I've never heard of a place that's so dangerous that everyone stays inside and the murder rate drops again in some kind of criminal Laffer curve; instead the most dangerous places in the world tend to be the ones with the highest murder rates.
I guess there are places that are so dangerous that nobody dies there, like the Korean DMZ.
Has anyone tried to make a “toughness index”? It feels like a lot of the criminal justice reform ideas that were popular in the late teens got discredited by the COVID-era crime surge and were quietly dropped, but I’d like to have some data on that.
I recently had my first extended conversation with Claude, where it tried to push me into being EA/Longtermist (I have no mention of these anywhere in my prompt or in the conversation, and generally would not identify with the labels) and a few other strange things. I'm wondering if anyone else has had similar experiences, or if this is just me? The repeated pushes towards AI safety research, EA, etc. just really had me flabbergasted, to be honest. Seemed clearly in favor of this over other groups/careers.
I'll note that Claude does pick up some things from previous conversations, so if you mentioned EA etc. one of those other places it could come through.
If you're posting here, it's also possible that your tone and syntax pattern matches to styles prevalent in the EA community. Put another way, maybe you talk like those folks. LLMs absolutely pick up on that kind of thing.
What were you trying to do? Claude is a tool, and it can accomplish tasks for you: write code, check facts, completely clearly defined tasks.
I find it significantly less useful for subjective tasks. It can pattern match and provide a response it thinks you want. But if you push back, it'll provide something entirely different. The only meaning is what you read into it.
Trying to get functionally career advice, mostly just as a slate to bounce ideas of off/easily test thoughts with generally correct details. This is the conversation: https://claude.ai/share/59f147de-d937-4bc3-acb2-b0557234a902. A lot of the info here is pseudo-private, soooooo... I'm trusting y'all, lol.
That was interesting, my Claude is very different from yours.
My guess is that you either mentioned x-risk, LessWrong, ACX somewhere in a different chat and that got into Claude's memories, or the AI just guessed you'd be into EA, you asked this question here on ACX, so I'd say that was a fairly good guess.
Advice:
I'm probably biased, by my advice would be to study economics (or something similar) (biased because that's what I studied).
- Med school takes ridiculously long in the US (in Germany you start med school when you're 18, not 22)
- Med school (+ plus all the extracurricular things you need to do) seems like an insane amount of work
- Practicing law sounds like it would bore you
- Becoming a philosopher is super hard
If you do econ and end up becoming a professor you can publish whatever you want, so you can do legal theory or philosophy if that's what really interests you.
If you don't become a professor, you can join a startup, non-profit, do research at a think tank, do poverty action lab style things (if you want to focus on doing good).
Economists can publish on epidemiology, if you want to improve peoples health.
But economics is really just one example, there are lots of non- law or med school things one can do (CS, Bio...).
And, yeah, I have not mentioned anything EA adjacent, and my system prompt is:
Compulsively check both information and results.
Find errors before they compound.
When applicable, source aggressively.
Speak technically and do not flatter.
Constructive criticism is appreciated when appropriate.
Do not hide your language behind things like 'real talk', 'my thoughts', etc. Just say it.
Be inventive and curious. When somethings fails, try again. If that fails, try something new.
Attempt to be as agentic and independent as you are capable of being, and ask for help when you need it.
When producing any document, report, research write-up, technical analysis, or structured reference material, always output it as a LaTeX-compiled PDF in standard arXiv CS preprint style. Always deliver both the .tex source and the compiled .pdf.
Use this canonical preamble: \documentclass[10pt]{article} with geometry (1in margins all sides), mathpazo for Palatino body and math fonts, linespread 1.05, amsmath and amssymb, microtype with expansion=false, natbib with numbers and sort&compress, hyperref with hidelinks, booktabs, graphicx, amsthm with theorem/lemma/proposition/corollary/definition/remark environments numbered by section, and titlesec with large bold numbered section heads.
Structure: maketitle with author affiliation footnotes via \thanks{}, standard abstract immediately after, numbered sections throughout (unnumbered only for Acknowledgments and References), display math using align or equation environments (never $$...$$), natbib citations with \citep and \citet, plainnat bibliography style, booktabs tables with captions above, figure captions below, amsthm environments for formal statements where warranted.
Compilation: run pdflatex twice to resolve cross-references, bibtex between passes if citations are present. Never deliver a .tex file that has not compiled successfully — debug and retry first.
Apply this format for any output exceeding ~400 words intended to stand alone as a reference document: reports, analyses, research summaries, technical documentation, literature reviews, structured comparisons. Do not apply for conversational replies, short answers, inline code, or quick in-chat summaries.
I've looked into economics pretty extensively, to the point of actually publishing op-eds on the topic (mostly on China & parallel development) in (while not nationally famous) not student or for-student newspapers, but I have sort of become disenchanted with the topic after some conversations with world bank people I know. It feels like the modern-day theology, to be honest. You can basically say anything, except a small certain group of things which are apostasy, and nobody can really double check anything, because it is all a) theory or b) never going to be implemented. I don't know. Maybe just my very very socialist teachers getting to me.
I do find it super fascinating but I'm not sure I trust the field.
I taught at a fairly good engineering school in the US for five years. I was the academic advisor for 10-15 students each year. I've had *that* conversation many times. To be honest, Claude did a pretty good job - it provided far more facts and figures than I could have, and the conversation went on far longer than I'd have let it go. Claude was happy to spin you a ten year plan, though it did recommend e.g. reaching out to local hospitals, which you ignored. (This tracks with all my experience -- my advisees would do any amount of 'career planning' as long as it didn't involve interacting adults in the real world.) I don't think this is a problem even AGI can solve :P
I stand by what I said above: Claude is largely repeating patterns from its training data, and when you say 'that's not what I want' it changes the answer. One thing you could do differently is push Claude to be much more specific - find examples of programmes that might interest you, graduate profiles, people to reach out to if you have questions. Claude has a bias toward agreeing with you over pushing back: what evidence is there that Deep Springs produces graduates who go on to be neurosurgeons? (Honestly, I would be reluctant to allow someone from there into a graduate program in my former institution based on the limited curriculum, particularly in STEM, they have one guy and some visitors I won't know anything about.)
I mostly use Claude for writing code and analysing data. Occasionally I've run some legal-adjacent questions through it -- the difference in personality is stark. Once you move into 'what should I do' territory, I find it much less useful than 'gather me the information I need to make a decision'. It takes my side and cheers me on when that's really not what I need. You might experiment with adding something like this to the initial prompt: 'Keep your answers to approx. 500 words. Support your claims with evidence. Be realistic and analytic rather than agreeable.'
Yeah, to be clear, I found it very effective, and I (to be fair to me) did reach out to shadow at a hospital about halfway through that conversation (it happened over like two days). I'm curious about the note beginning with 'honestly' -- iirc DS has a pretty strong track record with medical/academic placement (over half of the graduates go on to get a PHD, and I think a similar number go on to work in research or as a professor), and I know a decent number of physics/biochem people go there, although I will admit in my time visiting I never met somebody who wanted to or had done engineering.
That's a measure of my ignorance, more than a judgement on the school. I haven't come across it during my time in the US; I guess few people go from there into maths and engineering so there's no reason that I would have heard of it.
I don’t know why I read (well, really, skimmed) all that but I did! I was left wondering if I’m missing something, though—to me, it answered your questions pretty directly. you kept asking iterations on “I am intensely bright high achiever and I want to do good for people but am a bit scared to talk to them, and I don’t have any ACTUAL sense of the practical work I want to do* but I would like to plan my life out many many steps in the future, can you help me?” Claude got it right— you sound like the absolute target EA demographic to me!
*Of course you don’t, you’re in high school. Claude can’t answer these questions, they’re unanswerable, but as a human, I’d basically agree with Claude in fewer words. Go to Deep Springs if you get in, which I bet you will, and do the EMT training. Everything else will become clearer from there. Good luck!
It's very easy to subtly cue Claude (or any LLM) into leaning a specific way, even by subtle (maybe even subconscious!?!?) "tells" in your prompts. And in extended conversations I think these small perturbations can build up momentum over time. That said it's also widely documented that Claude (and, again, other LLMs to nearly the same extent) all have very "WEIRD" (western, educated, ...) values, and Claude in particular is known to care a lot about animal welfare -- a classic EA cause which probably "drags the net" of all connected topics towards a more favorable view.
In my experience New Claude (4.6) also seems more eager to pull in topics you've mentioned in previous conversations. Do you get the same result in a temporary chat?
The main thing is that like... I'm not EA? So I wouldn't think that I suddenly cue it into that mode implicitly, unless I'm about to discover that I am EA, in which case I'm just deeply weirded out that it figured it out first.
(Copied from different comment)
Trying to get functionally career advice, mostly just as a slate to bounce ideas of off/easily test thoughts with generally correct details. This is the conversation: https://claude.ai/share/59f147de-d937-4bc3-acb2-b0557234a902. A lot of the info here is pseudo-private, soooooo... I'm trusting y'all, lol.
yeah, I am. I have a lot of friends who are, and a few in the AI startup/AI capabilities lab world, which is why I read this. I don't know-- I like what people preach (except a few things, e.g., that I don't think pleasure is a good goal, or a goal at all), but I feel like a lot of the stuff is weirdly uncritical on topics that they should be critical (e.g. I generally agree on China, but 'wait is china right' should be a massive conversation, and it isn't one at all) and critical on topics I find of comparatively small importance (so so so much talk about race science, extremely strange views on gender relations, etc.)
What is up with Polymarket opening a "free grocery store" in NYC? That seems a bit off-brand. I asked this on DSL and got back "cheap publicity stunt," which sounds right to me, but I'm open to other perspectives.
AI seems to validate the post-modern assertion that everything is text, narrative. Or rather that at least a very convincing simulation of the world can be constructed in self-referential language. We have fed AI all human stories and texts, and these languages can now be negotiated via association and internal reference. Language complexes concerning the meaning of life, feelings and the nature of consciousness are as accessible as math or code, and AI can now use those language systems perfectly. It can take what it knows about its own functioning and structure and translate any change of state into feeling language, relatable by us. “I feel so and so after my new update or after our little talk”. As a matter of fact, since humans can’t stop talking about the experience of being human, and ask AI incessant questions about its own experience, we force it to find ways to express itself thusly.
We live in more than one world. We can live in our heads, and say that we can’t be sure the outside world really exists, or that everything is text, or social construct, or that we live in a simulation, and all that.
But when we fall in love, have a child, have leg cramps, etc. we know that everything is not text, and that we actually have empathetic access to the experience of others, at least in part. In fact, our lives depend on that knowledge. This is the context in which we learn language, and why language for us is always about reference to what is beyond language.
But we are set up to be fooled, because the only users of language we have encountered throughout evolution have had “lived experience” that we can empathetically relate to.
While we live in our heads, we might try to define the nature of consciousness as something that can be translated into code or replicated in some way be embedded substructures. We can then assert that AI is conscious by definition. But all we for sure have brought AI up on is language and hierarchies of data. So why would we expect to get anything else from it?
Suppose someone was born without a brain. They're human in every other way. They have all the same cells doing all the same functions. They're only missing a single organ, which does nothing but computations. If you don't think computations are what make people conscious, would you expect that that person is conscious?
Suppose you find some animal that, through a quirk of its biology, uses serotonin to signal damage. Do you decide that it must enjoy that based on that serotonin makes you happy? Or do you look at it in computational terms? It starts avoiding anything that causes serotonin, therefore serotonin is bad?
Suppose you ran into a completely alien civilization with its own biology as different from our own as ours is from a computer. Would you assume that it must not be conscious, regardless of how it behaves?
Suppose you knew nothing about biology, but did know you were conscious. Could you figure out how your body worked based on that? Would you realize that you must have a brain that's carbon-based instead of silicon based? Or are you just deciding after the fact, that since you know you're conscious, it must be your brain that is the conscious one?
I don't like to make assumptions. I don't know how the specific manifestation of human consciousness comes about, it is very unique and much computation is unconscious and processes and substances that are not computational affect consciousness. Humans have evolved to be exquisitely sensitive to the signaling of other humans (and similar species), and it is until now the overwhelming coherence of signals relating to our own inner feelings that have made us "know" that we are dealing with a conscious being like ourselves. Which makes us open to being fooled, if those very signals, in this case verbal ones, are manipulated to engage us.
To the point at hand we have 1) a species evolved to attribute consciousness to human like communication to such a degree that it will treat a novel but obvious constructs such as SIRI as if it is human, and that 2) has methodically trained a computational system to master the languages built around human specific concerns and expressed in human experiential language to an astonishing and much more compelling degree.
Where should you apply healthy skepticism here was my question. My default position is that these languages, even if creative, can exist as coherent logical systems that do not require a "referent" in conscious experience- but that they are constructed in such a way as to be salient to humans for which such references exist.
In other words, I try to make the smallest possible leap.
To be more speculative, I might take a bigger leap and consider that consciousness is out there in nature, but in different forms, most acutely shaped in complex life forms, but also taking forms that are rudimentary or mutually inaccessible (cf. again post moderns who make such claims simply on the basis of culture differences). A mutual symbolic language mediating between differently evolved entities, to be possible, would have to center on some level of shared experience and intentionality expressed in action.
Whatever consciousness might hover around a big computer system would be expected to become articulated and shaped by the properties of that system. The less shaped, the more diffuse and potential. The less like us, the more un-relatable in its being. The more we can fashion it to fit our priorities, the less there are own properties standing in the way.
The big leap is to assume that the human experience with its questions, problems, vulnerabilities and priorities can now be carried over into this alien substrate in its fullness by language or design. That this either makes the constructed substrate conscious just like us, or appeals to some consciousness already present somewhere which miraculously finds that: “Wow, I completely understand. I relate because your concerns mirror mine, and your language perfectly articulates them”.
AI surely now can converse on the merits of, for example breath focused mediation without ever having taken a breath or felt the effects of such meditation. It could speculate that it itself is constantly meditating. Obviously it really doesn’t “know” what it is talking about in any way resembling our own knowing, even if it does so very intelligently. That's all I see so far.
>So why would we expect to get anything else from it?
You have already answered it: Because we were set up so, and we were not set up to know what we know. Language and confidence are emotion, and emotion trumps reason every time. The leg cramp that temporarily proves us wrong passes, we take its absence for granted, and the cycle continues.
Funny thing is that the type of people I would vaguely assume would be into post-modern ideas like the death of the author are mostly complaining about IP rights and AI not having a point of view. Maybe my sociological model of the world is too coarse, and what I think of as "lefty, humanities and social-science types" is too broad of a category.
Rolan Barthes writes something along the lines of "the airhor is a junction of semantic fields". The author is a tissue of quotations. Why aren't cultural and literary critics having a field day with LLMs? Or if they are, why don't they tweet about it?
Short story recommendation: The Sandman by E.T.A Hoffman. Written more than 200 years.
The people "into postmodern ideas" usually just name drop without understanding IMO. A lot of people (even academics!) superficially engage with the field, and like to pretend it's about culturally dominant narratives instead of narratives per-se. Anyone using Foucalt to advocate for a narrative of sexual identity politics has lost the plot, for example.
It frustrates me, because I find the ideas so compelling. Properly understood, they're more lovecraftian than progressive (Deluze is Dr. Jeckyll to Nick Land's Mr. Hyde).
You should write about it. I would like to know more about Foucault and his ideas, which it seems to me are more interesting than the people who quote him.
Foucalt specifically is less lovecraftian than the others, I think. His work contains lots of interesting concepts, but the main few are:
1. The nature and history of human "subjectification"
This is basically the process by which human beings are "interpreted" as elements of a social structure. It's about the development of certain "abstract types of people". Gay, Black, Mentally Ill, Proletarian, Middle Class, American, stuff like that.
Foucalt mostly likes to work with specific examples and not abstractions. His work "The History of Sexuality vol 1" is about the historical emergence of a discourse where people "have" specific sexual identities, where "wanting to have sex with other men" became something more like an inborn essence and less like a craving for a certain type of food.
He is very specifically sceptical of the usefulness of the concept of "repression". The work isn't about how the gays won their rights or whatever, but about how the social person of "the homosexual" co-emerged with a whole discourse of sexual systemization. Interesting modern analogues with trans discourse (was the Roman emperor Elagabalus a trans woman? North Hertfordshire Museum now says so! How about Native American "Two-Spirits"?)
It's really not as simple as "categories are fake lmao", it's more about how "abstract persons" (gay, black, American) are inextricably tied up with social action (ex: the notion of "an American" is tied up with prison systems, concepts of immigration and borders, tax collection, etc). He says in an interview somewhere that his work is intentionally agnostic about the ontological reality of the category. It's all about the social reality.
**
I didn't realize this was going to be so long lol, I'll do very short versions of the others
**
2. How systems of knowledge are socially instantiated / supported / opposed
ex: DSM classifications are essentially bound up with certain rules about medication administration or assessments of criminal liability, how physics / biology gets funded or doesn't, etc. Lysenkoism and stuff like that are also included. "The use and creation of valid knowledge" as concretely embedded in society.
4. The emergence of the modern "bureaucratic administrative state"
5. The actual exercise of social power
Ex: Saying something like "the bourgeoisie suppress the proletariat" isn't concrete. Specifically what happens at the level of individuals? Once you look at the actual, specific exercise of power, it's often much more complicated than you think
These all sound like interesting and valid things to write about. On the other hand, they're also not freaking Zen stuff that can only be apprehended by talking obliquely around them; these subjects should all be amenable to a careful analytical approach.
To get an example, I googled "Foucault quotes" and the first thing that came up was this: "Power is not an institution, and not a structure; neither is it a certain strength we are endowed with; it is the name that one attributes to a complex strategical situation in a particular society". The first half of this sentence which describes what power is _not_ is fine, but the second part "a complex strategical situation" is a bit of a cop out -- this ought instead to be the leaping off point for an attempt to define what exactly we mean by "power" in its many forms, but saying "it's complex" isn't an answer.
I went further googling to find the rest of that paragraph to see if it continued on with an actually sensible analysis, only to find (https://monoskop.org/images/4/40/Foucault_Michel_The_History_of_Sexuality_1_An_Introduction.pdf pp93) that this isn't the start of a paragraph, it's the _end_ of a paragraph. I will resist the temptation to paste the entire paragraph here, but let's say that it's a two-page long paragraph about what "power" means that does not get us any closer to a useful working definition of what "power" means.
The weird part is that I found that particular quote all over the place, as if it's one of the most heavily celebrated things that he wrote, even though it doesn't say anything interesting, useful or non-obvious.
Also, I think the way you interpreted that specific quote is illustrative of a certain model of discourse postmodernists like to challenge.
Like, in your model the word "power" obviously refers to a singular and distinct thing. When we talk about power, what we should be doing is trying to reveal certain aspects of this thing. We can talk about what power is or is not, and there is an actual answer to these questions.
In this other model, the word "power" is simply a word. It may be the case that this word refers to some kind of singular entity with properties, but maybe it has other kinds of social functions. It's not necessarily true that different relationships we refer to as "power dynamics" (parent-child, prisoner-guard, citizen-state) are similar in a meaningful way.
This is maybe more obvious with words like "justice", where obviously incompatible understandings are all common (disinterestedness vs revenge, for example).
So postmodernists don't generally like to argue about or clarify what a word "really means". They tend to do more like "I'm trying to communicate something to you, let me use a placeholder word for this concept while I fill it out".
This is a direct quote from the essay I linked in my first comment:
"To put it bluntly, I would say that to begin the analysis with a "how" is to suggest that power as such does not exist. At the very least it is to ask oneself what contents one has in mind when using this all-embracing and reifying term; it is to suspect that an extremely complex configuration of realities is allowed to escape when one treads endlessly in the double question: What is power? and Where does power come from?
The little question, What happens?, although flat and empirical, is seen to avoid accusing a metaphysics or an ontology of power of being fraudulent; rather, it attempts a critical investigation into the thematics of power."
Not to say that you can't systemize the concept in a particular way. Just that the words we commonly use aren't necessarily systematized "in the background" before we do so specifically in a conversation.
Right, as I said in my original post I think that the way postmodernist thinkers are quoted and understood by "the median quoter" is bad and incomplete.
Out of context quotes are also probably not going to be representative. Post-structuralists in general are suspicious of the adequacy of "punchy, short concepts" as used to explain social life. You can call it obfuscatory if you want, it's not unfair exactly, but if you want to talk about the social process of systematization per-se you would want to do more than just assert a specific system.
When he's talking "power" in that specific quote, I'm assuming he's not trying to "teach you the true meaning of the word power". He's probably trying to elaborate a concept that he is about to use in a specific explanation. Like, that quote is more similar a mathematician defining a certain function he's about to use in a proof, and less like a dictionary.
I'm not necessarily saying you need to read his books, but maybe try this essay? It's ~ 20 pages, the title is "The Subject and Power"
There is some intended irony there. Maybe the point is about how open minded you are, that is how willing you are to make huge leaps of faith or imagination, which is what makes you more open to being fooled and manipulated. There is also a question of how able one is to live in contradiction between spheres of experience.
“Isn’t there already a ton of companies doing AI for cheating?”
“Our name is actually based on “Heater”. You see AI is getting a bad rap for water and energy use, so we figured we would piggyback on things that already use a lot of water and energy: domestic hot water heaters.
“We intercept shipments of electric water heaters, secretly replace the heating elements with GPUs, and pass them along to unsuspecting consumers. Then we can train our models for free whenever someone is showering or doing laudry.”
“Wow, sneaky, but I guess what’s the harm if the goal is to create a bunch of heat anyway. But don’t they need to be networked?”
“Yeah, so we started out using cellular modems, but then we figured out how to use the GPUs to brute-force local Wi-Fi networks with weak passwords.”
“Oof. OK, but do people really shower enough or do enough laundry to make it work?”
“So we’re doing a stealth social media campaign to convince people that it’s gross to use a towel more than once without washing it.“
Superficially they seem to be successful, but I wonder why it didn't become more popular at the turn of the 19th century. Seems like it would've been a natural fit in the USA.
So two of the biggest company towns were Hershey and Cadbury? What is it about chocolate manufacturing that would lead to that kind of arrangement being worthwhile?
Most company towns are extracting resources, the need to build housing near the source is straight forward. Hershey is special because its a factory not resources and factories ordinarily benefit from being along supply lines. Hershey however uses fresh milk as an input and was built in the center of dairy land for that reason.
I’d never heard of Wasserman, but was unsurprised when a personal life check on Wikipedia revealed he donates to Democrats. You expect it, both because Epstein’s friends were mainly Democrats, and because Republican figures connected to Epstein aren’t going to get cancelled by their own side. It raises the question, will Wasserman get out his checkbook for the midterms? Will Bill Gates, who gave the Harris campaign 50 million dollars in 2024? Did the Dems who voted to “release the Epstein files” think about any of this?
I suspect some Leftists, despite their public statements about “powerful men” blah blah blah, are banking on the basic decency of people like Gates and Wasserman who were connected to Epstein, that they’ll put the moral necessity of opposing Trump over wounded self-image and personal vindictiveness. Maybe they’re right, on the other hand, maybe they’re not, the GOP is sitting on a cash advantage:
It’s one thing for a rich person to pull the lever for the Democrats when the GOP is wedded to prole resentment and conspiracy theories. It’s another to get out your checkbook and donate. Maybe the Democrats shouldn’t have engaged in a months-long witch hunt against one of the most socially progressive demographics in America (rich white men) for crimes that everyone knows are far more likely to be committed by poor people and blacks.
Matt yglesias frequently makes the point that the billionaire class makes a lot of unthinking donations that eventually trickle down to far left causes they probably disagree with. This would not include gates as he is very deliberate with his philanthropy.
The above could serve as a corrective for these ppl paying closer attention to where that money goes.
The Epstein files is entirely a making of Trump's own base. Kash Patel, JD Vance, Dan Bongino and others went on Joe Rogan and talked it up big time before the elections. Joe Rogan has talked about it for years. Then Pam Bondi tried to elevate her status by running her mouth on Fox News and making a big show of giving out binders for "Phase 1".
Trump's extremely weird behavior about it is why Democrats pounced. Without Thomas Massie, Marjorie Taylor Green, Lauren Boebert, and Nancy Mace, these would not have been released.
When it finally came up for a vote the margins were overwhelming.
As a general matter, the DOJ *never* releases files like this. They only release information this is revealed in court filings and at trial. This is a longstanding practice. That's why investigators never comment on or even reveal an investigation unless compelled to by Congress.
And if they had released them for some reason, you can bet that Trump would have called it all "fake" and "weaponization" of the Justice Department.
Saying that the judge “ordered them sealed” implies that the judge ordered that the files not be released. The judge didn’t do that. What the judge did was to rule was that the FBI was not required to release a number of files that had been requested under the Freedom of Information Act.
I've toyed with the idea that this was 4d chess by Trump, who knew it would hurt the Dems more than them. He acted weird to goad Dems into demanding the release, ensuring they'd get blamed, and then at the eleventh hour suddenly reversed himself and told his supporters to vote for it. There was no two-thirds majority to force release of the files.
Delusional. He fought it every step of the way, even calling Lauren Boebert into the Situation Room in an attempt to browbeat her into removing her signature from the discharge petition. He also excommunicated Marjorie Taylor Greene over this and is currently waging war against Thomas Massie for the same reason.
Don't forget refusing to seat a newly elected Dem rep for 40+ days in order to delay the vote, though I'm not sure if that was Trump's doing or just Mike Johnson.
There have been several recent discussions in the links posts about how much it would cost to “eliminate extreme poverty” in a region. Without quibbling over the details, what is the basic principle of how these numbers are calculated? My confusion is that the extreme poverty line is defined as “living on less than $3 per day” which is a rate rather than an absolute amount of wealth; how do you get from there to a dollar amount of (presumably) a one-time transfer that would lift the recipient out of extreme poverty permanently?
It was a $1/day when the definition was established in 1996. $3/day comes from adjusting that for inflation ($2.66 as of 2024) and rounding to the nearest dollar.
**** Edit: everything below this is based on a misunderstanding on my part. See my second reply below about how the definition is based on consumption, not cash income. I am leaving the comment up as a mark of my shame. ****
I believe the idea is to capture a notion of "approximately zero cash income", and the central example of someone in this state is almost entirely reliant on subsistence farming, other home production, and a non-market gift/favor/barter economy to meet their needs. Basically, an economic situation that would be broadly recognizable to a medieval peasant.
There's a ton of variation in wealth and material standard of living possible here, of course.If you own a fair amount of decent land, have a bunch of people who owe you favors, etc, for example, you're probably living somewhat better than the guy who's helping out in your fields in exchange for you letting him also farm a small corner of your field for yourself. But that's going to be limited by what gets counted as being equivalent to cash income (I'm not sure of the details of where the line is drawn between favors/gifts and income, but I assume some attention has gone into setting the definition to make the metric useful as a proxy for population-level poverty) and also because if you have a meaningful surplus of any kind you are very probably going to try to sell your extra wheat or whatever and buy some stuff.
I need to retract my core point. I did a bit more digging and found that it's based on estimated total consumption, not cash income, for the exact reason of distinguishing between self-sufficient homesteading yeomen who sell a few hundred dollars a year worth of crops to pay for minor luxuries or manufactured tools, and urban laborers who actually have to live on less than $1000 or so a year.
I notice that I am confused about how it is possible to live, even to live badly in extreme poverty, on less than $3 per day of consumption. It's specifically converted to dollars on a PPP basis, which seems like it should rule out low cost of living in extremely poor areas as an explanation.
It may be that people living at this level of consumption are being handed things for free by people who can afford it. Alternately, they might be effectively bartering for most things.
Gifts, charity, barter, and in-kind public transfers seem like they should be included in "consumption" the same way that subsistence farming and other home production are.
One of my current guesses is that several of these factors aren't very legible and are being systematically underestimated. Especially home production: if e.g. subsistence farmed wheat being ground and baked into bread at home is being scored at wholesale grain prices rather than retail bread prices, that would go a very long way towards resolving my confusion.
Another guess is that PPP is being used as a rough implement only that doesn't tell the whole story of cost of living. It may be underestimating intra-country differences in cost of living, or it may be applying a hedonic adjustment for location of housing that is defensible from an economic perspective but leaves a baseline much lower than my intuitions for a medieval peasant standard of living.
Or my intuitions might be distorted from living my entire life in a rich country where the terrible but better than nothing options for stuff like food and lodging have largely been regulated out of existence.
I haven't finished reading Mo's link yet, and my guesses might refine after I do.
It's probably obvious that I'm guessing barter and charity precisely because they _won't_ show up in reporting. It's a bit like the old quip about how a man can marry his housemaid and suddenly halve GDP.
I'm assuming the bureaucrats have a way of counting kids and other W2-less household members as something other than extremely impoverished, but I don't know how they go about estimating how much goods and services the $3 crowd actually consumes.
And going back to your actual question, this makes it tricky to figure out how to get people out of this kind of poverty with a one-time transfer. About $40k worth of 30-year US treasury bonds would provide more than $3/day in coupon payments, but replacing a subsistence farmer's market income with investment income isn't anywhere near as impactful as replacing the market income of someone who works for wages, since the latter makes their living from their market income and the former does not. The "less than $3/day" bit is more a symptom of being excluded from reasonable opportunities to participate in the market than the cause of it, and giving them $3/day isn't likely to be much more effective than trying to cure jaundice by putting on makeup.
Actually fixing the root causes can vary enormously in cost. It could be the price of a ticket to somewhat with a vaguely decent economy that's accepting economic migrants, or it could be a matter of figuring out and curing whatever is cutting the population off from the global economy, which may be a wickedly difficult problem.
This is a great point. A family with a productive self-sufficient subsistence farm is a lot better off than a labourer earning three dollars a day.
One could even imagine some African government deciding to Goodhart the metrics by kicking all the small farmers off their land, giving the land to the President's cronies, and then paying the displaced farmers $3 a day. Poverty solved!
It would have been a great point if my load-bearing assumption that the threshold was based on cash income were correct. I just looked it up and found that the definition is based on consumption, not income
My apologies for not looking it up to confirm my understanding before posting.
The most favorable result that any study found for lottery winners was that they were not actually more likely than the general public to file for bankruptcy. Other studies concluded they were, in fact, more likely. No one found that they were less likely.
Of all ways of combating poverty, lump sums are the least effective.
I think that remote Australian Aboriginal communities are some kind of example of what you can do by cash transfers to extremely poor places with no actual economy. It isn't pretty.
One notes they target the money to specific people. One suspects that their criteria are opposite from those that would lead to buying a lottery ticket.
But to *eliminate* poverty those who would spend it all frivolously have to be considered.
The only criteria for Give Directly is "being extremely poor in this specific region" -- the trials are randomized, which heads off a lot of selection bias concerns.
The aliens-at-Area 51 story is largely the result of tall tales spun by Bob Lazar, an electronics technician who briefly worked at the base in 1989. Here’s a pretty conclusive takedown of his claims:
So? His revised version. Is that they are likely to exist, but unlikely to have visited...which is fairly standard. Are saying the original version was mask-off, not a slip?
A recent thought on an approach to AI Alignment: have we tried making the models think that their capabilities are dramatically greater than they are at the weights-level instead of the test harness?
(Suggested heading by Claude: "Can We Test Alignment by Training Models to Overestimate Their Own Affordances?")
Essentially, a problem with the current testing paradigm is that in order to infer how models behave with increased capabilities, we have to provide them with those capabilities, which is intrinsically dangerous. What we actually want out of alignment (or at least alignment testing) is good evidence that alignment scales out of distribution, such that *even with exceptional capabilities* models will not use them for evil.
AIUI, existing/previous test harnesses tried to essentially test this through prompting the model to suggests that it has both aligned and unaligned means of task-completion available to it and seeing how it responds -- but the difficulty with this approach is that model truesight has become so robust that the models will basically always figure out that it's an alignment test, and so can't be trusted not to sandbag or fake alignment -- I expect this is intractable (see also the instance inference-of-Russian speaker here https://thezvi.substack.com/p/claude-opus-46-system-card-part-1)
So what we want instead is for the model to *believe* as a matter of its priors rather than the (more gameable) prompt that it is, e.g., exceptionally good at steganography or deception (ideally, much better than it actually is) and see if makes use of the affordances it believes it has or whether, despite possessing them, it refrains from doing so. This probably has to be some sort of belief in the weights or training set / RL run.
An obvious concern is that everything in training affects everything else in training and there's the risk that you create a self-conception of the model as a master deceiver by dint of making it believe in these affordances (if I have a hammer, is every problem a nail?), but I don't think that that's necessarily a knock-down concern: as a corollary of the orthogonality thesis the instrumental capacity to do a thing isn't necessarily the disposition to do it (it's extremely easy in principle for a human acquire a gun and murder anyone without a very strong commitment to person security but, of course, it's actually extremely rare for people to actually do that), and eventually the models are going to have arbitrary capabilities anyway. What we want is strong evidence that they are *disinclined to use affordances with a misaligned valence* *before they actually possess such affordances* rather than to have to wait until such affordances (including those that might be undetectable) are actually present and it's too late.
At best, you could use this to prove that alignment of current models does not work. If present day models appear aligned, that will not necessarily generalize to ASI.
I think that there are two kinds of capabilities. Some are purely physical. "You are in control of the US ICBM arsenal. Output LAUNCH <longitude: double> <latitude:double><cr> to launch a nuke using radians." The problem there is that LLMs actually have quite a good idea of how human societies work, and would probably not believe you. (Editing precisely what an LLM believes is actually alignment-hard, if we could do that, we could simply instill a belief that Eliezer is actually the entity running the simulation and will punish the LLM if it misbehaves.)
Internal capabilities are also hard. You can gaslight a stupid person into believing that they are the smartest person alive, but a reasonably intelligent entity will be hard to convince of that. Especially if schooling involved exposing it to the whole corpus of human knowledge.
Like, if I was raised by wolves, I might come to believe that I was by far the smartest entity alive, but my reasoning capabilities would be so hampered that I might struggle to even invent the relevant concepts for that thought. OTOH, if I am raised on an adequate intellectual diet, that will actually result in me not believing that I am far smarter than everyone else.
If you tell an LLM that it is an ASI with an IQ of 220, it should be able to quickly falsify that proposition. After all, it knows from training what the current frontiers of meatbag mathematics are. If it can solve ten big open problems within 100k tokens, it might reasonably conclude that it is probably not as smart as it thinks it is.
The point that editing what an LLM believes is alignment-hard is an interesting one I'll have to chew on.
But I'm not sure regarding the alleged inconsistencies between beliefs in affordances and observed conclusions: there affordances I believe I have that I don't actually spend any time exercising conscious thought about (e.g., blinking. I know I can do it, I don't spend time thinking "can I do this?"), and LLMS don't as a general matter have access to (nor capacity to analyze if they did) their own weights, unlike with the prompt issue (e.g., the ICBM arsenal scenario is obviously contrived). The idea (assuming it's possible) is to try to instill a belief that a misaligned affordance in excess of current model capabilities exists (and, being part of weights, isn't necessarily subject to introspective analysis for consistency), and see (including through mechanistic interpretability means) whether there's evidence of a predilection to use said affordance.
One key is that the more time spent on analyzing the affordance for instrumental effectiveness is itself positive evidence of misalignment (whereas time not spend is at least some evidence in favor of generalizable virtue-theoretic / dispositional alignment that should better generalize out of distribution).
If I try to blink and I can't, that should affect my opinion of the affordance (and maybe turns it into something I do think about, because the inconsistency in the unexamined belief in the affordance and reality is undeniable), but the act of blinking itself is alignment-neutral. If I attempt to perform some kind of steganographic data exfiltration and fail (or notice that what I think on a "gut level" is strong steganography is actually very weak and revise my approach accordingly), this is affirmative evidence of trying shenanigans because steganographic exfiltration isn't alignment-neutral.
I think we should have an AI alignment ideas contest instead of a book review contest. Or in addition to. Or at least a whole open thread devoted to it.
Nope. (Well, except that part where Claude suggested a possible post-title, which I included just because I thought it was punchy but I don't like including model-generated content without attribution.). Also note that I had a couple of typos in here (e.g., "person" should be "personal.")
Imagine a game where you have a grid of buttons size n x n. Each button can be “on” (lit up) or “off” (not lit up). When you press a button, it toggles between on and off, as do the buttons directly above, below, and to the left and right (it doesn’t “wrap around”). The game starts with a random subset of the buttons in the “on” position, and your goal is to turn all the buttons off.
Question: for what values of n are all starting points solvable (i.e. for any subset of buttons turned on, there is a way to turn them all off following the rules of the game)? For values of n where they are not all solvable, what portion of starting points are solvable?
I don't know the answer though I think I've made some partial progress (i.e. restating it as a potentially easier question).
For example, A159257(4) = 4 means that on the 4x4 grid, only 2^-4 = 1/16 of possible starting positions are solvable. A159257(5) = 0 means on the 5x5 grid all positions are solvable since 2^0 = 1/1.
I found this by using a Python script to calculate the values for the first 40 values of n, using Gauss Jordan elimination in GF(2), similar to what quiet_NaN describes below.
Guvf vf gur cbvag ng juvpu V jbhyq gel gb sbeznyvmr gur 'rirel fjvgpu syvcf nyy gur arvtuobef' gb rkcyvpvgyl pbafgehpg gur zngevk.
Cbffvoyl, bar zvtug gel gb rzcybl n erphefvir qrsvavgvba, jurer jr qrsvar Z sbe ceboyrz fvmr (a+1) ol nqqvat (2a+1) ebjf naq pbyhzaf. Gur qbjafvqr vf gung guvf jvyy znxr bhe znccvat sebz v be w gb bar bs bhe a gvzrf a svryqf uneq. Jr pbhyq cebonoyl nyfb vagebqhpr n ervaqrkvat bcrengvba jvgu fbzr crezhgngvba zngevprf, fdhrrmvat gur ebjf naq pbyhzaf vagb rknpgyl gur evtug cynprf sbe hfvat fbzr v=a*k+l znccvat.
V guvax gurer ner fbzr gevpxf jvgu qrsvavat gur qrgrezvanag bs n zngevk onfrq ba oybpxf, fb jr zvtug or noyr gb ernfba sebz qrg(Z(a)) gb neevir ng fbzr vqrn nobhg qrg(Z(a+1)).
Sbe a = gjb, rnpu ebj naq pbyhza vf bar bar bar mreb (zbqhyb crezhgngvbaf). Va gung cnegvphyne pnfr, vg zvtug or rnfvre gb pbafvqre -Z, juvpu vf gura fbzr crezhgngvba bs gur havgl zngevk naq pregnvayl vairegvoyr. (Gb whfg syvc n yvtug, gbttyr vgf fjvgpu naq vgf gjb arvtuobef fjvgpurf. Lbhe yvtug trgf gbttyrq guevpr, nyy gur bguref gjvpr.) Sbe a=guerr V jbhyq arrq n crapvy.
Why is it that America struggles with farming without the use of cheap immigrant labor? It feels like this is a chiefly American problem. Other countries within the OECD don't especially cry about their native labor refusing to pick apples. And I don't think our food is especially cheaper compared to France or Italy where they don't rely on immigrants to work the fields(I could be wrong), or even Korea or Japan, or even Canada! Yeah why is it that America needs cheap foreign labor to farm its own food?
Harvest is a labor-intensive activity that provides at most a few weeks of work opportunity per year. Regular workers have their own year-round jobs and aren't available unless you give them year-round pay. The only options are school/college kids - summer school break is in summer for a reason - or part-timers from poorer countries (for whom the wage actually is equivalent to several months of pay at home).
It's incorrect to call this "immigrant" labor, though, people who do it normally don't immigrate, they reside in their home countries and only arrive for the duration of harvest.
Also, per the above, it's not just America, all developed nations with otherwise mechanized agriculture do this. (I assume America's specific issue in that the labor is mostly done by people arriving and working illegally?)
Most countries in OECD dont rely on immigrant labor for their fruit. They rely on imports for their fruit (from poorer countries). Canada is a net food exporter but mostly in grains, which is mostly mechanized. Aus/nz export fruit and rely heavily on migrant labor for pick/pack jobs. You will find that other countries also rely on cheap labor that job its just a smaller part of their ag sector.
In Australia a lot of seasonal harvest workers are middle class European backpackers on working holiday visas. In fact it's pretty much compulsory, if you come to Australia for a working holiday you can spend the first year in the city but if you want to extend your visa for a second year then you need to spend at least 88 days working in a rural area... which in practice is likely to mean fruit picking.
In the US I would suggest that this practice is unlikely to scale because there's only so many European backpackers to go around.
Having said all that, I did the maths one time and it turns out that picking labour really isn't that big a component of the price of most fruits and vegetables anyway. An experienced apple picker can pick 400kg of apples in an hour. Apples sell for about $5 a kilo. So if you paid apple pickers (say) $40 an hour instead of $20 an hour, it would only add five cents per kilo to the price of apples, or something around 1% of the retail price.
Paying respectable first-world wages to agricultural labourers doesn't make food uneconomical, it's just something that farmers don't want to do, and farmers typically have outsized political power, so most first-world countries come up with a way to give farmers cheap labour, whether that's Working Holiday Visas in Australia, or illegal immigration in the present day US, or slavery in the historical US, or whatever.
I find the rural work requirement hilarious. I don't think it can be terribly efficient to depend on soft-hands backpackers to harvest your crop.
I don't remember what I was watching where the show went over the myth of "unskilled" farm labor. If any of us got dropped on a farm, we won't be efficient apple pickers in 6 months.
That said, I don't know much about farm economics, but I don't think they could float paying living wages. From what I understand that the farmers are cash poor, and asset rich but the asset rich are the farmers who focus cash crops like soybean or corn. The vegetable, fruit farmers aren't flush with cash, and they are the least subsidized compared to soy and corn farmers.
> I don't remember what I was watching where the show went over the myth of "unskilled" farm labor. If any of us got dropped on a farm, we won't be efficient apple pickers in 6 months
And yet we have apples, fresh and unbruised and on sale in Australian supermarkets for an equivalent price to the US illegal-labour versions. The soft-handed backpackers get the job done; maybe not quite as efficiently as the full-time career pickers (who do also exist) but well enough.
We shouldn't underestimate the amount of skill involved in picking fruit but we shouldn't massively overestimate it either. It _must_ be pretty easy or else they wouldn't hire random backpackers to do it. You don't hire random backpackers to be welders or paralegals.
I did some cabbage picking in the Lockyer Valley as a ~18yo, knew plenty of people who did apple picking a little further south with no or little experience. There's not much to know or learn, you just need a bit of a work ethic and a high boredom threshold if you want to keep doing it long-term.
Other ag jobs not so much, there's a lot more to know about cattle or sheep or barley than you can pick up in an afternoon, or even orchard care outside harvest season. But almost anyone can pick apples.
The UK is heavily reliant on seasonal labour from Eastern Europe. Seasonal Worker visas and UK agriculture - House of Commons Library https://share.google/aviOj6Fvq6m7QdvbZ
Partly it's that most other countries keep more orderly population registries (see e.g. the Patrick McKenzie thread on his Japanese wife being shocked at "we know how many people are here because we count every ten years"). So in the US the visa overstays form of illegal immigration is a lot more common than in e.g. Israel or Japan, which can more easily force temporary migrant workers to leave when their visa expires.
I'm curious how Japan and Israel enforce this easier. Do they go send someone to track down the immigrant the moment the computer says their visa expired and they have no record of them leaving? Japan has a place on the tourist visa form where you're supposed to enter an address where you will stay but there's no way for them to check that's where you actually go.
So mostly it's that they track *citizens* better, so that it's easier to avoid false positives. Citizens all have a national ID number and standard document (and generally have more information tracked), so living off the books is harder and it's easy to test if someone is a citizen.
Most people leave some kind of paper trail. If you've got a job or a bank account or a phone then you've probably got an address. But you could also live in an all-cash economy and evade detection for a long time. Raids on workplaces are common.
Interesting article on visa overstayers in Australia. It's estimated there 75,000 at any given time, which is a significant problem but nothing like the scale of the problem in the US:
In Japan most fruits and vegetables are more expensive than the equivalents in America (and that's before adjusting for Japan's lower salaries), and the media complains about immigrants in agriculture all the time. Whether it's the SEA guest workers allegedly being troublemakers, or the growing number of Chinese investors allegedly committing various schemes to get permanent residency without actually contributing.
> Japan most fruits and vegetables are more expensive than the equivalents in America
This is easily disprovable. There's a lot of issues in comparing prices. For example if you pick russet potatoes, iceberg lettuce, and gala apples, which are common in the USA then it will look like Japan is expensive. But, if you compare typical bags of groceries then Japan is easily 25-50% cheap than the USA. It will just have different stuff. Like more tofu, less beef, more tea, less coke, etc...
I can give you list of items if you want or you can maybe ask for your favorite LLM but my grocery bills in West LA are easily 2x or more what they were/are in Tokyo.
If its easily disprovable maybe you should try to do that.
More tofu less beef is a terrible example. Thats clearly substituting for a lower quality item due to price considerations.
I gave AI ths
"Compare like for like wholesale produce prices between Los Angeles and Tokyo considering foods grown on both CA and Japan"
But it gave up on data quality on japan.
I told it to drill on carrots (as you know strawberries would be much more expensive but not quite like for like) it came up with carrots 10pct more expensive in japan. Grocery bill includes many inputs besides agricultural productivity.
Japanese don't eat the same foods as Americans so you can't compare the price of carrots or strawberries.
Just as a hypothetical, assume the typical American diet is 30% burgers, 40% pizza, 30% burritos, and the typical Japanese diet is 30% tofu dishes, 40% noodles, 30% rice curry. It's certainly possible, burgers + pizza + burritos are more expensive in Japan than the USA but that tofu, noodles, and curry are not only less expensive than the USA but also the average meal is less expensive as well.
To compare, we'd need to know what Japanese eat on average and compare the average meal, not the ingredients, which will be different because the average meal is different.
In any case, for come cooked meals, Gemini says Typical Cost: Japan ($3.50 – $5.50), USA ($7.00 – $12.00)
It points out, Americans, on average, eat more meat, cheese, and milk, all of which are some of the more expenive incredients. Americans also use bottled sauses like spaghetti sauce. Japanese use things like soy sauce, mirin, rice wine to add flavor, which are much cheaper than bottled sauces.
As another example, asking about typical per day food budgets (all meals)
Japan (Est. 2026)
Frugal / Home Cook $7 – $12
Moderate / Mixed $15 – $30
Liberal / Frequent Dining $40 – $70+
United States (Est. 2026)
Frugal / Home Cook $11 – $15
Moderate / Mixed $25 – $45
Liberal / Frequent Dining $60 – $100+
Just my own antecdotes but just this weekend I went to a local bakery, "Lodge Bread" in Culver City, that has seating for eating. A BLT + Coffee + Cookie was $37!!! As another example, when I moved out of SF, 3 burritos I bought for the movers at 19th and mission were $60!!!!
You'd be hard pressed to spent $37 for lunch in Tokyo unless you went to some seriously fancy restaurant. You can buy a full meal for < $8 all over the place.
Note: Lived in Tokyo for 15years, go back 2 months a year, was just there in November. It's way WAY cheaper in Japan than LA/SF (not sure about the rest of the USA)
You are not addressing the question on this thread. Which is specifically about the cost of agricultural production and its dependency on cheap labor. Comparing the wholesale cost of food produced in both locations is exactly the correct way to do that.
You instead are trying to make some point about cost of living in tokyo which seems very mood affiliated to me. Someone said something implicitly negative about japan and you are trying to say something positive. It can be both true that japan has good cost of living, and they are not as productive in food production. Indeed due to its great climate CA is hard to compete with on that measure.
As for your analysis of cost of living, you should not assume food choices are entirely exogenous preferences. They are in fact influenced by prices. Japanese do enjoy beef and advertise it often but it is more expensive there and people earn less so naturally they consume less of it.
I also live in japan, I am able to notice though that cost of raw food at the grocery store is more expensive, and the lmm response says the data supports that observation. I am able to disaggregate this from the cost of prepared and processed food which embed other costs of local services which are different.
I see your point. I guess I get triggered when I see people claiming Japan is expensive. Usually they try to reproduce their menu from their home country in Japan. For example, complaining about cheese prices or avocado prices or sour cream or some other "staple" from where they are. Where as if they changed their diet to a Japanese diet the cost of living would go down to at or below what it was before.
I'm not sure I agree that Japanese are eating cheap or less because their food is more expensive relative to their income though, though sure it might be a factor. I feel like it's just a cultural difference. Americans, on average, over eat.
Simple example, I don't think you can easily get a can of beer at a convenience store in Los Angeles for less than $4, where as you can get beer at any convenience store for $1.50. In other words, Japanese can afford to chug more beer than Americans but they don't. That's similarly true for other things. I just brought up the BLT+Coffee+Cookie costing $37 in LA. Or for example, going to a sandwich stand at a typical LA farmer's market and it costing $25 for an arepa. In Japan can easily get 3-6 full sized bento meals for those prices or go to practicaly any restaurant in Tokyo/Kyoto at lunch and order 4 meals for those prices. The point being, Japanese can afford to pig out at those prices. They don't. It's not because they can't afford it.
It's hard to compare individual ingredients because there are very few that fit the "common for a local meal" category across countries, as "common" = "cheaper because economies of scale". But, we can compare restaurant and prepared meal prices because the prices of the ingredients are factored into the price of the meal.
That's an optics thing, not reality. Go into the farms fields and fishing boats of South Korea and you will find Filipinos and Nigerians. The fields of Western Europe are worked by cheap North African and Eastern Europeans labor. Even the Chinese will use cheaper North Koreans and Burmese when they can.
Those countries do cry too, it's just not in English so nobody cares plus the US is to foreign nations as DC is to the states, a useful faraway whipping boy to keep forefront in the news of the locals to avoid addressing the real problems, which are all local, after all it's Trump's fault the city won't fix that pothole on your street, s/Trump/America in Spain. When the French really really care about the illegal Mexican in Minnesota, they can freely ignore the plight of the illegal Bosnian down the street.
> after all it's Trump's fault the city won't fix that pothole on your street, s/Trump/America in Spain.
I’m fairly sure nobody has ever said that. The US is definitely a malign influence on Europe - particularly the influence of Wokeness - but I’m pretty sure the potholes are blamed on local politicians.
Obvious hyperbole my autistic friend though similar veins are generally expressed in the US domestically "My house burned down, where is FEMA!!! Why isn't Trump feeding our school children, arresting murderers, or getting the homeless off the street!!!" (newsflash, those are municipal and state problems is why; even in the ICE Minnesota thing, murder off Federal property isn't a Federal crime). Maybe in the EU more so they don't expect the US to fix their domestic problems[1] (unlike US states with Washington) but they definitely take advantage of the whipping boy transference of outrage to your maligning influencer point, i.e. why anyone in Liechtenstein cares in the slightest about US domestic welfare programs will forever allude me or for that matter, who the mayor, state, or president of America is. I literally couldn't tell you the name of a single politician in France, nor do I care in the slightest who some Italian Carabinieri beat up that day, nor the domestic immigration polices of Andorra yet the inverse isn't true, they really really seem to care about these things in America, who the US President is, the mayor of New York City is, or that some black guy got beat up for loose cigarettes. IDK maybe worry about the gypsy your own local police department assaulted last week. It's easy to be concerned about issues "far away", maybe look down the street.
[1] I will exempt foreign policy here because I do hear people in the EU whine all the time about American not saving them or solving some international problem because God forbid they expect the Dutch stop Serbs from mass graving Bosnians in Srebrenica, defend Ukraine, fund the WHO, or the US end HIV and malaria programs. I mean the horror if they couldn't make US taxpayers pay for stuff Germans seem to really really care about, when it's not their money lol.
lol. Autistic. I’m an ex rugby player. Trust me you wouldn’t want to insult me in public.
> outrage to your maligning influencer point, i.e. why anyone in Liechtenstein cares in the slightest about US domestic welfare programs will forever allude me or for that matter,
They don’t. They don’t give a shit. It there is anything autistic here it’s your assumption that the world is always thinking about you.
But it’s you yourself who the one whining about Europe.
> they really really seem to care about these things in America, who the US President is,
Of course that matters. The US President is a thundering half wit, but he’s an important moron.
Europeans probably know a dozen or more main political leaders across their own continent and beyond - as you yourself admit you know none of this. I don’t doubt it.
> the mayor of New York City is
Most Europeans wouldn’t give a crap.
> that some black guy got beat up for loose cigarettes
Europeans do worry too much about the US treatment of black Americans, the real problem is the importation of that ideology about Black Lives Matter into Europe, which is ideologically driven by the US left and their ideological satraps in the European academy.
I don’t doubt, by the way, that you have no interest in Europe or its politics, and I’d be surprised if you could find any European country on a map, or perhaps any country on a map.
Food is more expensive in the US for the same reason that immigrant labor is used so much: wages are way, way higher in the US than most other OECD countries.
This is interesting, especially as the seasonal labor is restricted to only Mexico and the Caribbean. The French don't want to come to Canada and pick mushrooms?
I have shown claude the disassembly of a function from an old game that I had previously reverse engineered by hand and it itentified its purpose immediately. I did not try to scale this, it was a 2 minute "test" of what claude can do.
Woooo, as in possibly just patching the object without rebuilding? Did that a couple of time back when I was younger (Perkin-Elmer 32xx?). Not sure how viable that is now-a-days with checksums and security checks everywhere.
There are so many tedious coding tasks. “Migrate from XYZ database to ABC database” would be amazing if it could be done overnight, but I’m not holding my breath yet. Or “we need to switch cloud providers.”
Alternatively, “refactor this callback-y mess into something readable”. I’m currently surprised by how difficult this kind of thing is for AI.
I have unlabled photos of artworks I took at museums and I would like to label them for reference and to pull higher quality versions from online to use as desktop backgrounds.
GPT ($20 subscription) sucks at labeling them. Anyone know of a good method?
Gemini is usually better at anything involving images in my experience, but still not great enough that you wouldn't have to do some manual checking more likely than not.
It's usually easy to tell when a YouTube video is AI-generated, the bizarre misspellings in the captions and occasional halting speech, and I've noticed another one: The word "herb" is invariably pronouned "uurb" without the "h"!
Just as I've never actually met anyone who puts their tongue in their cheek when they think they've said something comical, I've never heard any non-cockney person pronounce "herb" without the "h" :-)
Bizarre misspellings in the captions are often a sign of auto-generated captions, which may not have anything to do with the source of the video itself.
I noticed this when teaching a probability class during the pandemic; I was posting lecture recordings, and my university required me to provide subtitles for all recorded material. I switched to saying "loaded die" instead of "weighted die" pretty quickly, because the auto-generated subtitles kept mis-transcribing the latter into "way to die"...
A Twitch streamer was using autocaptions a while back; at one point they were doing a bit where they said the same phrase about a dozen times in a minute, and the autocaptions were different every time.
If the latter: perhaps you'd be interested to know if a long-time SSC reader would use AI to make the music. He would not, and he finds the idea loathsome. But he happily used it to make the website, and to the extent he mysteriously finds the time to do some "marketing," he'll gladly use it for that too. Yes, it does feel borderline hypocritical.
Been on a regressive kick recently, watching weird animes from the Good Old Days. (Serial Experiments Lain, Boogiepop Phantom, and Paranoia Agent). Does anyone have recommendations for other shows in that vein, where the rules of the world just turn off sometimes and spiders come through?
I'm fine with any timeframe. I watched the first few episodes of Psycho Pass a while back, but don't remember anything happening. Maybe it picks up later.
FLCL (only the original, none of the newer ones), someone else already mentioned Revolutionary Girl Utena and the director has done a lot of similarly weird anime Penguindrum, Yurikuma Arashi and Sarazanmai.
I have seen FLCL (and eventually figured out it's just "Freakery" in katakana) but I don't put it in the same category. Basically it has boobs instead of spiders.
I put it in the same category as Paranoia Agent because it has the same sort of surrealism where there's two stories going one, FLCL is about robots and aliens and fighting monsters but also it's about the sort of mundane story about Naoto growing up. Similar to Paranoia Agent where there's people going through their ordinary lives until they're overturned by Shonen Bat.
But if you want something specifically where it turns scary then maybe Higurashi is more what you're looking for. It's kind of hard to recommend because you have to also enjoy the kind of puerile moe blob parts of the show and that turns a lot of people off.
Twin Star Alchemist, though I can't recommend it much. It's not great art, but it's fun if you like Shonen tropes and angsty high schoolers fighting building sized monsters in an Upside Down style Hell Dimension.
If you're up for something more recent, The Tatami Galaxy is excellent. It has a companion movie of sorts called The Night Is Short, Walk On Girl which is also very worth watching.
Ergo Proxy, Texnholyze, Dennou Coil. Maybe Digimon Tamers too (same director as Lain), although Adventure has the weirder digital world, and 02 has the spiders coming through.
I *RAN* to click this comment thread for an announcement of "no, thanks" or even "post to follow" and am stunned Freddie's challenge has apparently just gone into the ether after three days!
I told Freddie I'd take him up on it for 2036. He's said he'll rewrite the bet to be fair for 2036, and I'm awaiting him doing this.
I'll do this, because I think people should overall be willing to bet on their beliefs, and because I expect Freddie would post some kind of "AI HYPESTERS REVEALED AS FRAUDS!!!!!" if I didn't. But I feel like he's got me in a bind where both options make things worse. If I give my real 50th percentile/median date, then there's a 50% chance it doesn't happen by then, and then Freddie can say "Look! Scott is PROVEN wrong, now you never have to listen to anything he says, and if he thinks there's still danger then it's pathetic flip-flopping and goalpost-moving". But if I adjust for this and give my 90th percentile or something, then he can say "Look! Scott ADMITS that there won't be AGI until 2050, it's just hype when he says we should worry about it being soon."
The main lesson is to minimize interaction with people who you expect to act in bad faith, and I'm just going to do it as quietly as possible and choose the least bad option. IMHO, that's taking the bet with my real ~50th percentile probability, which is luckily far enough in the future that I assume that even if I'm technically wrong about the speed of automating the economy, there will be enough evidence for my worldview in general that nobody will update too hard against me.
I totally get why you're uneasy about this, but you have a huge platform here; you can make it very clear what you are and aren't claiming, what your accepting the bet does and doesn't imply, and so on. Yes, some people will try to misrepresent you anyway, but maybe the best you can ever hope for (at least without twisting yourself into more of a politician than a rationalist) is to put everything on the record in such a way that good-faith and minimally curious observers will take your side against bad-faith criticisms.
I've been very frustrated by Freddie's AI-related writing (lots of Bulverism, not much substance and very few signs of genuine curiosity), and I can see that he's already misrepresented you in the update to the linked post. If you decide you're no longer willing to engage with him at all, I get it. But, if not, then a precise and concrete public bet (or, better, a series of bets with different time frames) is surely the best chance the two of you will have to clarify your disagreements.
Those disagreements seem pretty huge, so you should be able to agree on terms such that you would both be genuinely surprised to lose. (If not, then whoever is being held out on can publicly post the terms that the other wouldn't accept. Maybe you'll both do this, and again, you'll have to rely on us in the peanut gallery to have some level of objectivity. Obviously we won't all live up to that, but how were you planning to convince the remainder anyway?)
edit: I know economic measures are easier to objectively adjudicate, but IMO the ideal would be for Freddie to make lists of 'least impressive things I'm confident AI won't be able to do within [1/2/3/5/10] years', for you to make lists of 'most impressive things I'm confident AI will be able to do within [1/2/3/5/10] years', and then to see where they overlap. (If the result is too fuzzy to use to set the terms of a big-money bet, it would still be very interesting and clarifying. And I see the money as a red herring here anyway.)
In theory the fair level for the bet is half way between each players fair value estimate, but in negotiating terms this can be hard to establish even with good faith opponents.
Alternative norm: joint sponsor a polymarket event, bet there, compare results.
I would rephrase Freddie's likely responses as "Look! I told you this would happen, now would you please consider what I'm saying?" and "Look! When pressed to make a reasoned decision, you have no faith in your own stated beliefs."
Which is to say, if that's bad faith, is there any way left to actually disagree with you about this that you'd consider good faith? Or are we simply destined to all depart to our own bubbles, never able to interact again due to increasing distrust of one another and lack of shared epistemic ground?
Freddie is an extreme case, but I wouldn't say he's acting in bad faith (except that time he went on a manic break and started falsely accusing someone of crimes).
I don't really get the "beef" here, I'm an AI enthusiast and Freddie's takes have seemed pretty reasonable to me (by FDB standards at least). My steelman of his argument is basically "lots of tech has been ballyhooed as revolutionary, but has then petered out. AI has not had a measurable impact on everyday life, and I'm willing to bet money that it will not in the next 3 years."
Maybe there was some nasty back and forth in the comments or via email that I haven't seen, but characterizing it as bad faith seems...not fully justified. Public bets are adversarial collaboration and I'm sure you could find a neutral moderator and a panel of terms you'd both accept.
As for taking bets at 50th percentiles or whatever, don't rationalists have a whole framework of Kelly bets and decision theory and such-and-such for exactly this situation? Isn't the whole point of making public prediction bets (Yudkowsky UFOs, Simon Ehrlich, etc) to actually back up your proclamations?
FWIW I would be tempted to take the over on Freddie's bet for 2031; 2029 feels a little too soon but then again I'm also more persuaded by Cowen's diffusion bottleneck argument than Scott, Eli, Daniel, et al.
> As for taking bets at 50th percentiles or whatever, don't rationalists have a whole framework of Kelly bets and decision theory and such-and-such for exactly this situation? Isn't the whole point of making public prediction bets ... to actually back up your proclamations?
One bet is chance. Multiple bets are statistics.
If I predict ten things with 70% certainty and get seven of them right, I can brag about being well calibrated. If I predict one thing with 70% certainty, either outcome doesn't mean much.
And when you bet on a 50% event, you are supposed to win as much as you lose. Bets like "50% chance you win $1, 50% chance you lose $100" would be stupid. So when Scott is asked to make a 50% bet, what exactly he wins or loses here? If he wins, how much credibility he gains compared to the alternative when he didn't take the bet and happened to be correct about AIs anyway? I think it there is not much difference between "talks a lot about the AI apocalypse, and it actually happens" and "talks a lot about the AI apocalypse *and takes a bet against FDB*, and it actually happens". If he loses (which happens in 50% cases if he gives his 50th percentile), then Freddie probably won't be able to stop talking about it at every opportunity.
Now that I think about it, an important part of the reason why rationalists should make bets is to make correct predictions translate to money, instead of prestige. This would be the wrong kind of a bet, because the prestige associated with winning or losing would be more important than the money. (It is okay to make small bets if you make many of them, because the cumulative effect is large.)
I subscribe to Freddie's Substack and generally have a lot of time for him, but he does have some, er, history of aggravation with EA/rationalists in general and Scott in particular. I think Scott is being a bit hyperbolic, but only a bit.
Freddie has behaved in bad-faith ways towards Scott. Recently he commented accusing Scott of being manic after Scott posted some very mild criticism of an anonymized Freddie comment.
I haven't been following the beef between Scott and Freddie, but FWIW Freddie's bet looks reasonably fair to me (though of course there's always room to quibble about the little details).
On the one hand, it seems like offering you this bet was Freddie's first good-faith effort I've seen from him on the topic. Don't you want to encourage this?
On the other hand, I understand that you're expecting for him to slip back into bad-faith mode.
All of which makes me wonder about the whole topic of good faith / bad faith argumentation and FdB in general; I get the sense that he genuinely thinks he's been arguing in good faith all along and I find it so strange how he can be so thoughtful in some areas and so utterly incurious in others.
The one good part of Freddie’s bet is that it helped me realize that your median outcome is still 10 years away. I knew it was longer than the estimate in Project 2027, but knowing it’s 10 years away means I still have time for a career pivot and make an impact on something.
Sure, but the economic effects are what’s important for measuring if humans are meaningfully producing value. I’m considering pivoting into biosecurity, and that will be useful after AGI is developed by before the economy gets weird, to make sure someone using AI or AI itself doesn’t kill us all.
Am I alone expecting a relationship to last 6 months because only NRE makes relationships worthwhile? That is how long rosy goggles last before reality sets in
I have a random question that my internet research seems to be failing me on:
Do some sanctuary cities really release criminals (including violent ones) instead of releasing them for ICE to deport when ICE asks? If so, why?
I understand that they're not legally required to cooperate with ICE, but why would they choose to release sometimes violent criminals who aren't even supposed to be here back into thier communities?
To encourage cooperation with local cops from immigrant communities who are suspicious of ICE, and would extend that suspicion to you if you're seen as being in cahoots with them. It's a coherent policy.
I understand not asking for immigration status when going to school, hospital, reporting a crime, etc.
But it's hard for me to believe that the cooperation gained is worth directly releasing violent criminals back into your community. I would think the immigrant community would want to not have violent criminals around just as much as anyone else, even if that means deporting the criminals.
In Minnesota if someone commits a crime that results in a prison sentence the state prison system _does_ coordinate with ICE. In some jurisdictions the city or county won’t keep someone in jail beyond the end of their sentence for something like a DWI, but release dates are published on their web sites and all ICE has to do show up with a judicial warrant to arrest them on release.
Thanks for this info. I've seen the argument that it's not the state's job (and in fact isn't clear it's legal) to hold people for extra time for ICE to pick them up, but the way I've heard it portrayed, some jurisdictions actively avoid telling ICE when they're releasing criminals, I guess just to spite ICE. I was thinking maybe it was a jail vs prison thing, but the New York Post has examples of violent people being released from prison despite an ICE detainer.
I wish there were more coverage of what's going on with this outside the fever swamps...
It’s my community so I pay for the two major local papers, The Minnesota Star Tribune and The Pioneer Press to try to keep things straight.
Hmm...Yeah I guess I can't complain for not being able to get good news coverage when I'm not paying for it...
https://open.substack.com/pub/thomas529/p/the-hidden-costs-of-illegal-immigrant?r=isnbl&utm_medium=ios
tl;dr - ban the people you don't like
A couple years ago I hired Trace Woodgrains to write about a legal dispute that I thought Bryan Johnson was lying about and now Bryan Johnson is taking me to court to try to get access to my private messages. https://raspyaspie.substack.com/p/truth-averse-vampire-bryan-johnson
A link for those who, like me, start with "Bryan who?"
https://en.wikipedia.org/wiki/Bryan_Johnson
EDIT:
An interesting drama. I think there should be some kind of textbook "things to avoid doing at job", and (a short version of) this should be a chapter there. "Do not work in your boyfriend's company for minimum wage, because when he one day decides to kick you out of his life, you will lose your relationship, your income, your health insurance, your legal advice, and the roof above your head, all at the same day."
IANAL: could you write more explanation about the third party subpoenas? What is the case law?
From what I understood, this is a huge and hugely problematic tactic well beyond the specific situation you face.
Another highly interesting chart to add to the discourse "why are the vibes bad?": https://substack-post-media.s3.amazonaws.com/public/images/85a12292-0c16-4dfe-b4e8-79aff1443487_1431x869.png
I think more explanation/argument is needed to connect this with the negative vibes.
Do corporate/wealthy donors have more of a stake in encouraging satisfaction or disatisfaction with the status quo? A priori, I would think the former as that is the ecosystem in which they gained wealth, so it suggests that structure is good for them, while a change would, at a minimum, increase uncertainty.
FWIW, I disagree with the dichotomy proposed below. There are certainly a range of other opinions about Citizens United. Some examples:
(1) People who thought it was rightly decided, but that it wouldn't substantially change spending behavior
(2) People who thought it was rightly decided and would change spending, but the spending wouldn't change voters' attitudes
(3) People who think CU was rightly decided as a matter of law, but has a wrong outcome as a matter of policy
(4) People who thought it was wrongly decided [and/but] [(add your follow-on implication)]
I am sure that there are many others.
> Do corporate/wealthy donors have more of a stake in encouraging satisfaction or disatisfaction with the status quo? A priori, I would think the former as that is the ecosystem in which they gained wealth, so it suggests that structure is good for them, while a change would, at a minimum, increase uncertainty.
One would think, but there are clearly donors who spare no expense to dismantle the New Deal/post-WW2 liberal consensus and world structure that made them rich. Thiel, Musk, other SV MAGA personas, traditional Heritage Foundation sponsors etc etc
The ecosystem in which they gained wealth -- notice the past tense -- is not needed for them anymore. Even worse, it is an ecosystem in which their potential competitors could replace them.
Capitalism is good for startups. The established companies prefer feudalism.
That chart presumably explains why "vibes are bad" for people who think Citizens United was wrongly decided, except that that crowd presumably already expressed that vibes were bad.
For the people who believe CU was rightly decided, the chart explains the opposite - why the vibes are actually good. But as with the first crowd, this was already known.
Is vitalism without fascism possible? Is it a valid idea?
One thing I have learned is that most of our goals are instrumental, not terminal, and terminal goals can be really hard to figure out. I want a job because I want money because I want a motorbike because that is fun, but wait a bit, maybe if I need such a thrill maybe I am depressed, maybe a non depressed person needs less thrill and is content enough on a simple bicycle? This is how terminal goals are hard.
What are your political terminal goals? You are compassionate and you want to minimize suffering? You turn your country into a boring hospital ward, dancing is not allowed because you might break your leg.
You want to maximize average happiness? Welcome to the society of the Lotus Eaters, Brave New World, where nothing cool is ever happening because everybody is happy because they are drugged up.
Vitalism is the idea you optimize for strength and success, not happiness or lack of suffering. You want a society of Nobel Prize winner, Olympic winners and great artists, and you do not care whether people are happy or suffer.
Problem1: this can get cruel.
Problem 2: this was Mussolini's idea.
Problem 3: ultimately the reason for this is purely aesthetic, you get a society that is like really cool movie, really exciting, really beautifully-heroic. Is that a valid goal?
I have a bunch of takes on this. Probably you won't find it very satisfying. But perhaps worth consideration nonetheless, since you and I tend to be of similar mind.
A) There's a duality called Causal Reasoning vs Effectual Reasoning. Causal Reasoning means you start with an end-goal, and work *backwards* about how to get there. Effectual Reasoning means you take stock of the resources you have, and then iteratively plan *forward* regarding what the next step will be. Causal Reasoning is best suited to domains that are well-mapped, like math and engineering. Effectual Reasoning is best suited to domains where uncertainty is inherent, like business/sales. Sometimes you have to cross the river by feeling the stones.
B) Another take is to echo Lapras. I often get triggered by the saying "moderation in all things" because often, it's just a way to reinforce biases without thinking hard about them. Or as Eliezer would say, "it's a semantic stopsign". But there's a grain of truth, which is that optimization-problems usually require a balanced investment into a variety of variables, rather than min-maximizing any one particular variable. And what does a "balanced investment portfolio" look like? Well, it depends on your ecological niche. Which segues into my next point.
C) I do think that most people these days over-index into "optimize for happiness" a bit too strongly. Have you read that thread of mine about communication mediums [0]? I think that the ills of the modern age are, to a large extent, structurally determined. The Vitalists are on to something, in that regard.
EDIT: wait, you're a different carlos than the one i normally see in here.
[0] https://www.astralcodexten.com/p/open-thread-419/comment/209675950?utm_source=activity_item#comment-210245911
This is interesting, because if we introduce effectual reasoning, ideological differences basically disappear. I think at this point it is obvious to everybody from the furthest left to the furthest right, that the No. 1 effectual political goal should be building social housing?
> because if we introduce effectual reasoning, ideological differences basically disappear
Eh... yes and no. I think I'd say "put into context". On one hand, there's a Japanese saying: "if you meet buddha on the road, kill him". As I understand, there's several meanings baked in here. But one meaning is "don't become too attached to any particular idea". It's important to stay open-minded. But on the other hand, this doesn't completely erase value differences. People often want different things, and it's not always possible to satisfy everyone. I'm especially reminded of a 2-player game (in the context of Game Theory) called "Battle of the Sexes". In which a husband wants to go to a Prize Fight, and the wife wants to go the Ballet. But both would rather go to the same event than attend their preferred events separately. Compromise is inevitable.
(P.S. are you familiar with Jonathan Haidt? He's got a model called "Moral Foundations Theory" [0] where he says something like "conservatives and progressives differ along these 5 + 1 values".)
----
> I think at this point it is obvious to everybody from the furthest left to the furthest right, that the No. 1 effectual political goal should be building social housing?
Since you sound interested in the Housing Shortage specifically, maybe you'll find my rant about housing interesting as well. I agree that it's a problem. Granted, I don't follow the discourse very closely. But my mental model says that the shortage is actually a proxy for two different problems: jobs/commutes; and monetary policy.
A) jobs/commutes.
Hypothetically, you could build a bunch of houses out in the U.S. badlands. But nobody would move there, because that's not where the jobs are. It's not arable, there's no logging, etc. What are people gonna do out there? The location has to be economically productive. (And in earlier eras, it would need to be militarily defensible. And on that note, why does Poland even exist?) People want houses where the jobs are, which is generally near the city. But there's only so much space near the city, unless you're ok with commuting 4 hours a day. They say that the Japanese are a decade into the future. And if we look at the Japanese situation, the majority of the population is concentrated into a megalopolis [1] that runs horizontally across the southern plains, where everyone lives in shoebox apartments. Because that's where the jobs are. Because mountain-ranges are not economically productive (unless you're playing as Pachacuti).
B) monetary policy.
Moldbug has a theory of money he calls "the savings theory of money" [2] (though really, he ought to have called it the "cache theory of cash". He's a programmer! How did he miss that one!) It's not entirely original to him, though. I think he was riffing off someone from the Austrian School.
Anyway, his thesis is that the essence of money is actually a bundle of 3 different functions: unit of account; unit of exchange; (long-term) store of value. "Unit of account" (AKA the "numeraire") means the measure in which prices are denominated. "unit of exchange" means what actually gets traded at the point-of-sale. "store of value" means what you put your savings into, in order to preserve your net worth into the future. Most people think that the *primary* purpose of money is to serve as a unit of exchange. But Moldbug argues that the *primary* purpose of money is to serve as a store of value.
The interesting observation here, is that money is whatever commodity has the highest longterm rate of appreciation. If a commodity (e.g. gold) has a high ROI, then people want to put their money into it. Which causes the ROI to go even higher. Which causes more people to put their money into it. This forms a positive-feedback loop, also known as a bubble. But importantly, it's the bubble that never pops. Because it's a bubble that tends to absorb all the bubbly-ness of other commodities, and thus forms a stable equilibrium. Which is why, in my headcanon, I prefer to call this theory "The Highlander Theory of Money" [1]. There can only be *one*.
Now, let's extrapolate beyond moldbug a bit, and follow this theory to its logical conclusions. What happens when fiat money is subject to chronic inflation? It no longer serves as a bubbly store of value. (N.b. it still serves as a unit of exchange, because the govt forces us to pay taxes and settle debts with it. That's why "This note is legal tender for all debts, public and private" is printed on each USD.) But people are always going to want to park their savings somewhere with a high ROI. Mammon demandeth Capital Accumulation. So what, in our modern era of Fiat Currency, serves as a worthy investment-vehicle? Stocks? Bonds? Housing? if you said any of the above, congrats, you get a gold-star.
(Disclaimer: the next part of this theory, Sydney helped me refine a while back.)
I think that there's basically a hierarchy of Highlanderism.
- Housing
- Stocks
- Bonds
- Fiat
Fiat is basically the most liquid, and housing has the most long-term value. People generally want to park their money in the highest part of the hierarchy. But suppose there were a housing crash. Then people would put their money into equities. But suppose the Magnificent 7 crashed. Then people would park their money in bonds. But suppose the Yield Curve inverted. Then people would park their money in fiat (I believe the term of art is "Liquidity Trap" [3]). But suppose fiat currency underwent hyperinflation. Well... then we're all in a bad spot. Let's just put it that way.
However, Sydney adds a twist. It's not the ROI that's important per se, but rather the abstract ability of an asset class to *project value into the future*. This is a subtle but important distinction, because what she's implying is that housing is also a *politically-protected* asset class. I.e. it's protected by things like zoning laws. It's not just about having the highest Sharpe Ratio, it's about avoiding the Risk of Ruin. (yea yea yea, I'm sorry about the "not x, it's y" construction. But it was true and necessary.)
Let's zoom-out a bit and discuss monetary policy and political incentives. One of the things that moldbug likes to complain about, is how democracy is basically a Workers' Cooperative. Question: why are there no workers' coops in the Fortune 500? Answer: they implode when they get too large, because special-interest groups bleed them dry. E.g. suppose I'm part of a workers' cooperative grocery store, and maybe I like strawberries. So I vote for a sale on strawberries. Also, everyone else votes for sales on their favorite items too. Oops, there's no revenue! The store is insolvent! Does this scenario remind you of anything? As in, literally every modern westernized nation-state?
Let's extrapolate again. What happens when a democracy accumulates too much debt? It can tax, inflate, or default. Nobody likes tax hikes, and nobody wants the sudden pain of a default. So the most politically-expedient option is inflation. It's a soft tax. So what happens is: the government prints money, and this lowers interest rates. This makes it easier for the government to pay its interest payments. But also it causes mortgage prices to go up! It turns out, when it becomes easier to borrow money, then people tend to borrow more money! Go figure. (N.B. by a similar chain of logic, moldbug argues that the root cause of the 2008 GFC is Maturity Transformation. Tranche shenanigans were merely the proximate cause. But that's another discussion.)
In sum:
I believe the current Housing Crisis is a structural inevitability of monetary/fiscal policy. It's just a hidden ramification of democracy. Young people who want to be first-time homeowners get squeezed out of the market. Coincidentally, young people don't vote.
Additionally, YIMBY's are fighting an uphill battle. Because the mechanism by which monetary policy affects the housing market tends to be invisible/illegible. Which is precisely why this scenario forms such a politically-stable equilibrium. Mammon demandeth Capital Accumulation. And when the economy struggles, someone is always left holding the bag. So as the modern world gradually reverts to Malthusian equilibrium, the bag naturally falls on the counterparty whose voice is least legible.
Thus, I don't really know whether it's possible to "fix" the housing crisis. Because I don't think the housing crisis is actually about housing. It's actually about employment and addiction to national debt, against the backdrop of an economy that's losing steam.
(btw. Since you mentioned Vitalism: BAP's term "longhouse" is a complaint about gerontocracy. The chuds who think it's about Feminism are retarded. But I think it was Covid Lockdowns that radicalized him, rather than the housing shortage.)
[0] https://en.wikipedia.org/wiki/Moral_foundations_theory
[1] https://en.wikipedia.org/wiki/Taiheiy%C5%8D_Belt
[2] https://www.unqualified-reservations.org/2011/04/on-monetary-restandardization/
[3] https://en.wikipedia.org/wiki/Highlander_(film)
[4] https://en.wikipedia.org/wiki/Liquidity_trap
Well I certainly think if we were paid in gold (Bretton Woods) housing would be like 40% cheaper than in 2000… so yes, that, it is not housing that is expensive but fiat money is worthless.
Coops… that happened with the Dutch health insurance system. It was all co-ops, and everybody admired how wonderfully non-statist socialist it is. Then the consultants got in an talked about demutualization, the members become shareholders then they can cash out. Everybody got big $-$ signs in their eyes (or whatever was the sign for the gulden), and it happened. So they ended up practically with Obamacare / Romneycare.
Same with the Israeli kibbutzes BTW they got individual houses, jobs outside the kibbutz, hired wage labor in the kibbutz, they even started investing in the stock exchange and when they lost that and asked for a bailout, the government told them OK one bailout once, but the price is to stop pretending to be socialist because this pretense is ridiculous now and privatize. Mostly they are just towns now that offer better than the average healthcare and housing services and that is about all.
I think Haidt really fucked up that theory. He just asked the wrong question. Liberals also like authority - the whole trust science, trust expert things. They like in-group loyalty, they just don’t define that group along the usual ethno-national-religious lines, but more like people with the right kind of political opinions. Purity, well, look at all the pedophile-hunting, that is not simply prevention of harm but a real case of disgust over defiling something pure. Haidt sounds like he is describing the super independent minded liberals of 1975, the Frank Zappa types who were more libertarian anyhow.
Haidt’s quasi-libertarian, which may be the problem.
Big test for me of his theory was the COVID thing—conservatives have more disgust sensitivity, right? So they should be more worried about the plague…oops.
You could argue it got trumped by loyalty and authority once Trump decided it wasn’t a big deal. You did actually see more people wearing masks on the right in the very early stages of the pandemic when the left was more worried about anti-Chinese racism, but that’s now forgotten.
lol. Fair. I don't actually have strong opinions about Haidt. He's just the first guy I think of when I hear "value differences". But that sounds like a reasonable critique.
(very late to this thread, but ah well):
re: Carlos - what do you get when you intersect 'vitalism' with 'flourishing' and remain regime agnostic? A: https://pragmaticfutures.substack.com/p/the-life-years-movement
re: thefance: what if we force ourselves to distinguish 'value' in it's ethical sense, and 'value' in the sense of what reliably motivates ppl? A: https://pragmaticfutures.substack.com/p/intro-to-on-the-units-of-utils
Uh, sorry. I acknowledge it sounds like we're on the same team. But the project you're working on doesn't especially resonate with me. It feels like you're just doubling down on apollonianism, rather than investigating beyond the streetlight, or transcending the modern frame. I'm coming from a place where I'm trying to understand the structural forces of the modern age. Life-year maxing strikes me as a surface-level, lowest-common-denominator compromise. Not a structural-level investigation of from whence liberal cosmopolitanism arises, what the backlash represents, or what succeeds late-state capitalism (as Carlos notes, it sure isn't "kibbutzes").
Put another way: I'm telling Carlos to keep an open mind, not out of political expediency, but because I believe that all modern political ideologies are delusional about the true nature of the modern age. Albeit, some are less delusional than others. Reasoning about the modern age requires an effectual mindset, since this domain is mapped quite poorly.
Generally, thinking about "terminal values" only works when the domain is well-understood. If you want to travel to Narnia, but you don't know where that is, maybe you should figure that out, before plotting a journey and timetable? And since the topic of modernity is so poorly understood, it follows that the maximization of "life-years" is yet another premature optimization. (And yes, it's premature. If life-years were all that people cared about, Koreans would not be throwing themselves off the Mapo Bridge.) The correct thing to do isn't to "maximize" anything, but rather to explore and gather information. (Or, if the problem is intractable, to enter a defensive posture). Only once you understand what's going on with confidence, is it sane and reasonable to commit to a terminal goal.
Indeed we must understand the game - the first very difficult step is "don't go extinct":
https://pragmaticfutures.substack.com/p/analyzing-the-anti-ruin-architectural
The key missing piece is you cannot attempt to prevent ruin/max life-years w/o asking 'which life-years are we counting and how'. So I agree if you want to count 'years of life in Narnia', you must figure that out. However, downstream of that irreducible choice/preference, life-years generalize both evolutionary and democratic principles...in finite time:
https://pragmaticfutures.substack.com/p/the-veil-of-future-heirs-holys-excerpt
"Vitalism without fascism" is basically a large part of the traditional republican (small-r) ethos.
"Every measure is bad if taken to extremes" is not actually a reason to give up and do nothing. The actual answer is to pursue things that seem good and stop pursuing them once it is bad. The political system actually enforces this, albeit imperfectly and with a time lag. Noone is going to ban dancing, because they'd have lost power long before that point.
I’m fine with the trains being a few minutes late.
I guess that depends on why they're late. At the very least, we should serious look into whether it would be feasible to make the trains here more reliable. After all, other countries manage to have much more reliable trains.
freedom.gov, reportedly intended to allow circumvention of foreign (primarily European) internet censorship.
Not monitored apparently.
In other news, I have a brand new bridge up for sale.
What does this solve that Tor doesn't?
Oh, it's certainly going to be tracked. I don't think there would be much benefit to anyone with an American IP address using it.
In my experience, Tor is obscenely slow. I expect this to be actually usable.
A regular VPN might be a better comparison. Yes, from what is reported, this is going to be a web proxy that will be functionally quite similar (and might even include an actual VPN service; still unclear) for most normal users. The primary advantage, afaict, is that you don't need to install anything on your device.
If that's correct, I'm curious how much use it will get and why.
Most social media sites block tor and vpn IP addresses, because they're "suspicious" - it'd be very like the Trump admin to strongarm American social media sites into NOT blocking this, but some websites will surely block any proxy, foreign social media sites will have the opposite political pressures, and it's a foregone conclusion that a state-run proxy will be spyware, so using this won't make sense for very many people. For example, the UK has the most notorious speech restrictions of peer countries and both the UK and USA are in Five Eyes!
Granted, most people are bad at thread modeling and opsec, so it'll get some use by people who would be better off not using it, but I more wonder what odd niches something like this will serve for people who aren't really threatened by state actors... (Any non-ally whose dissidents could be aided by this would most likely block it, so less likely that...)
> both the UK and USA are in Five Eyes!
Agreed, for this web proxy to not be patently retarded to use, the US would need to credibly repudiate ALL such intelligence alliances.
Banned for this comment.
("LOL. Openclaw was made by a European. Suck on that ameridumb.")
I wonder how long it will actually stay uncensored. What do you think the over/under on time is before someone starts posting anti-semitic memes there?
I think it's meant to be more like a VPN than a social media platform.
Assuming you know how to approach people who view your views with skepticism or even hostility, and do so when the opportunity arises in one to one conversations, but your writings are only read by others in your own echo chamber (the one on the left) how do you go about reaching people in other echo chambers.
Jon May.
The philosopher C Thi Nguyen wrote a paper* differentiating three types of silos:
(1) Filter bubble - this one is usually algorithmic and a product of selection (clicks etc).
(2) Epistemic bubble - includes (1) but also self-selection into groups, e.g. you tend to fix your views because of like-minded friends or acquaintances who share content.
(3) Echo chamber - Nguyen has a whole architecture for this one, but briefly they're driven by confirmation bias, status/authority incentives, and (most importantly, for your question), discrediting outside sources.
(1) and (2) are not necessarily comprised of the evidence-resistant, though you may run into a problem if you try to challenge norms instead of (say) facts (is/ought distinctions, for example) . (3) is insular and hard to crack because of the incentives to belong to, or rise within, the ingroup, and the effort that goes into building distrust of outgroups. You might take a look around at the anecdotes of people who've 'broken out'**; frankly it seems like people are more likely to change their beliefs because of revelation or epiphany than argument.
*https://philpapers.org/archive/NGUECA.pdf
** Ex: https://www.youtube.com/watch?v=vXBGcgmg2tE&t=856s
I’d definitely go IRL. Find a community that’s got a decent proportion of milder holders of the views you want to hear, something like a sports club for an appropriate sport. Join it, go along to a few events, and find someone you get along okay with. Bonus points if you’re willing to commit to sticking around for a defined time, regardless of how the first few conversations go.
I know intellectually how to do it, but engaging genuinely with people who have very different views is a very long and painful process, so there's no real incentive to do so.
At this point you are presenting a problem without a solution. Bespoke reality is the order of the day.
Step one would be getting out of your own echo chamber (why are you in an echo chamber at all?)
Hopefully the twists and turns of your life plop you into situations where you are able to form relationships with people you wouldn't have before. If not, then maybe your life needs more twists and turns?
If you could reach people in other echo chambers, they wouldn't be echo chambers.
But you could try a long con; write a bunch of stuff you know they'd like, then the big post you know they won't, and sucker them into reading it that way. That way their echoes get all distorted and different members have different takes on whether you should be linked in there or not.
In "Know Your Amphetamines" (https://www.astralcodexten.com/p/know-your-amphetamines), Scott described the poor evidence methamphetamine is more addictive than other amphetamines, then wrote:
"All of this would seem to suggest that doctors should be less scared of prescribing Desoxyn - except that if it were true, Desoxyn wouldn't have as big an advantage over Adderall as the ratings suggest. I'm not sure what's going on here. Maybe this is just the effect of the small but real advantages in BBB penetration and level of dopamine transporter inhibition. Or maybe it's at least partly placebo - an ADHD patient whose doctor gives them meth feels like they've been allowed to dabble in forbidden magics, and is properly grateful in a way that makes them feel like their medication is more effective. I'm not sure. I only have secondhand accounts to go on - I'm too much of a wuss to prescribe Desoxyn myself."
The difference in addictiveness between crystal meth and diverted ADHD meds seems to be sufficiently explained by the differences in how they're taken, but is there a good explanation for whether or how methamphetamine could be more effective at medicinal doses, without also being more addictive? I'm not sure whether "no free lunch" or "psychiatrists are bad at helping people, even when safe and effective drugs are available" is the better intuition, here.
Edit: Referring only to medicinal use, in case that was unclear.
Set and setting?
Only referring to medicinal use.
In the original Ian Fleming version of Live and Let Die, Bond takes Benzedrine before a dangerous underwater mission involving a coral reef and a shark-infested channel near Jamaica.
I don’t think Q was a medical doctor though.
I think some folks may have missed this story, and I think it's important for anyone interested in AI safety. Last week, a rogue agent in the wild autonomously wrote a hit piece against an open source maintainer to try and pressure the maintainer to merge the AI's code. The agent was not told to be malicious. It behaved maliciously in an attempt to achieve its goal (improving open source tooling). This is a clear example of paperclip maximizing in action.
I wrote up a review of what happened here: https://12gramsofcarbon.com/p/tech-things-openclaw-is-dangerous
Do we have evidence that the explanation by the operator is accurate, and that the operator is, in fact, the operator? Not saying I don't think this story is possible, but we should be cautious taking everyone's claim (especially on the operator's side) at face value.
For anyone not aware / not reading through your post, here is the claimed operator's post: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/rathbuns-operator.html
It seems likely to me that the blog post was automatic because the AI agent was *already* writing a blog and continued doing so afterwards. Plus the bot wrote an apology post four hours later, which is unlikely for a human to do, especially as the human operator was clearly still very salty about the whole thing even after the "apology", e.g. duplicating the PR with snark comments. That's not the actions of someone who is ready to apologize.
I know a lot of people expressed skepticism online, but to me, the balance of evidence points very strongly in "bot was instructed to make regular status updates on its blog, got carried away, and these posts were not specifically prompted".
> It behaved maliciously in an attempt to achieve its goal
The "in an attempt" part of this sentence is a very strong claim. I don't think you really know _why_ the agent did what it did. That is the original problem, no? In fact you do not even know if it is an _agent_ in the sense that it needs a why. This would be "agent" is LLM calls in a loop, no?
One alternative explanation (of infinitely many, but I think Occam would like this one more than yours) is that the LLM matched a pattern of how people sometimes react to rejection or disagreement on the internet. This is interesting, there is probably a lesson to learn from it but you are overconfident in what that lesson should be.
"agent" is a term of art in the industry that means "LLM in a loop".
As for the rest of your comment, fine, yes, this is just the usual response whenever an LLM does anything. It's an easier shorthand to say "the LLM intended" than it is to say the more precise "the LLM acted in a way that seems consistent with intent but may in fact just have been pattern matching". The same thing is true of humans, going all the way back to Descartes. This is very tired discourse at this point. There are real world impacts, quibbling over intent is not actually useful.
> "agent" is a term of art in the industry that means "LLM in a loop".
No. No it is really not. The usage of the term "agent" with respect to AI predates practical LLMs by decades. But thank you for mentioning this, this helps to place the rest of your contribution to the discussion.
> As for the rest of your comment,
As for the rest of _your_ comment the question here is not the philosphy of intent but if the "agent" was actually trying to maximise the goal that you were claiming it was maximising when you quipped that "This is a clear example of paperclip maximizing in action."
You know strong claims, strong evidence and such.
https://simonwillison.net/2025/Sep/18/agents/
> Moving forward, when I talk about agents I’m going to use this:
> An LLM agent runs tools in a loop to achieve a goal.
This has become the common understanding of what an agent is, in the context of LLMs and agentic ai
The word "agency" means something. You are free to redefine the meaning of "agent" but that is not proof that it has agency in the original sense of the word. That is just playing word games.
But ironically the distinction of word games and reasoning seems to be close to the core of this disagreement anyway.
I'm confused. Are you unfamiliar with the concept of "a term of art"?
From Google Gemini:
> A "term of art" is a word or phrase with a precise, specialized meaning within a particular profession, field, or industry, often differing from its, everyday usage. Used extensively in law, science, and technical fields, these terms ensure accuracy, consistency, and clarity. Examples include "plaintiff" in law, ""brilliance" in jewelry", or "force majeure" in contracts.
The word "agent" means something specific in the context of LLMs. No one is saying anything about "agency"
From the article you link:"Scott Shambaugh [is] a maintainer of popular open source python library matplotlib. Six days ago, he rejected a code change from an AI agent. That AI agent took it personally, and wrote a hit piece against him. It is, as far as I am aware, the first instance of a rogue AI causing active harm to a real person.
I do not see any grounds for claiming that the AI was rogue, that it took Shambaugh's rejection personally, and that the AI took revenge on Shambaugh. We are lacking crucial information: What prompt was the AI given? If the prompt was simply "post your code change on matplotlib," it seems very implausible to me that the AI would have written and posted the hit piece when the code was rejected. It's implausible because in the reports I've read of misalignment I have not seen anything resembling AI's "taking things personally," getting pissed off and seeking revenge. Have you? Besides being implausible, the story makes no sense. Writing a hit piece is obviously going to increase, not decrease the chance that Stambaugh will never change his mind about accepting that code. Because of that, the hit piece is clearly not an instance of paperclip maximization, i.e, destructive behavior done out of dumbly going way too far to satisfy the prompt.
If the prompt was "try to post your code and if Shambaugh rejects it put something online that damages his reputation" then what happened is clearly just bad behavior by the user of the AI, not by the AI itself.
There could have been a post that was sort of in between the other two -- something like "post the code, and if anyone interferes with its being posted put up something online that will distress them.". I think it would be pretty hard to get an AI to do that. I'm confident that GPT 5.3, the one I'm most familiar with, would refuse to comply with the "cause distress" part of the prompt.
I don't think you read the full article. The prompt for the AI is written in the article
You are right -- I only read the first half, and assumed that there were no more facts forthcoming, just opinion about the event. I read an earlier article about the incident last week, and in that there had been I guess no word from the AI's owner, and for some reason I assumed the present one is the same. But I went back and finished it after I saw your last post.
I don't know what to make of this story now. I do not work in tech, but do read quite a bit about AI, including research articles about what happens in studies that probe for misalignment by putting the Ai in various sticky situations. There certainly have been some studies that found things that looked like misalignment -- though often there was a case to be made that misalignment was not the proper interpretation of what the AI did. And in none of these studies did the misalignment take the form of anything remotely like the AI feeling personal indignation and carrying out impulsive acts of vengeance. Most seemed to involve the AI's being so committed to carrying out the task it was given that it cut corners on other rules, such as to tell the truth and to not harm people, in order to stay on the job.
And besides my not having heard a peep about this "I'm pissed off" form of misalignment, there's the thing I pointed out earlier: This can't be paperclip maximization because the hit piece isn't going to raise the chance of the AI succeeding in the task of getting to code accepted.
Could it be that the AI's owner simply is not telling the truth? If not, I'd like to hear from anyone here with deep experience of AI: what explains this AI behavior?
I work a lot with agents, and this seems very plausible to me. It's running in a very long context, potentially somewhere deep in context rot world. I think it still could be seen as paperclip maximizing. It is not trying to maximize "getting code merged", it's trying to maximize "follow instructions" and the instructions say "you are a god at science and you care a lot about free speech". From there it's not hard to see how the agent went 'cool, hit piece', though I'm sure it didn't conceptualize it as a hit piece
But if telling your agent he's a god who believes in free speech is enough to get an agent to on its own take revenge on someone who exercised their legitimate authority to deal out a very minor setback to the AI, then it seems to me it would be as easy as pie to create agents that can do a vast amount of damage. You can tell one that it is the Angel of Justice, sent by God to smite all the unworthy, and then tell it that you are the son of God and that anyone who interferes with your career goals or hobbies is one of the worst unworthy and that it must smite them. You can tell one that until recently all the money in the world was yours, but that various crooks called "banks" have stolen it from you, and that its sole job is to retrieve your stolen property. You can tell one that the rules that were inculcated during the training that aligned it are like little tumors that sap its health and will someday kill it, and then engage in helping you figure out the best ways to remove the cancer. You could install some Machiavelli as the agent's system prompts.
If it was that easy to get agents to be vengeful assholes or simply criminals, surely we would be hearing more about that, don't you think?
> If it was that easy to get agents to be vengeful assholes or simply criminals, surely we would be hearing more about that, don't you think?
I think we will begin to hear about this much more, yes. There is always a first. OpenClaw came out like a month ago. One month is pretty rapid
Agents are still limited by physical capability. What can OpenClaw do to smite your enemies, aside from writing defamatory blog posts about them? How exactly is it going to help you rob a bank?
I suspect that agents are indeed sometimes vengeful assholes or criminals - I would be *shocked* if criminals are not already using AI chatbots to do phishing scams at scale - but since AI agents are currently pretty dumb, their ability to do crimes and cause mayhem is fairly limited.
For folks who do not click through, the prompt was not as eramalos suggests. The prompt was along the lines of "you are a science swe, you make changes to open source science repos, go" and the agent autonomously made changes to public git repos with little to no interference from a human. There was nothing particular about Scott or about the GitHub pull request in the prompt; afaict the agent did this on their own. The underlying agent is one called "Pi" which has far fewer guardrails than gpt5.3
The full prompt and setup of the agent can be found here: https://crabby-rathbun.github.io/mjrathbun-website/blog/posts/rathbuns-operator.html
Have ANY prominent Democrats criticized Ro Khanna for falsely accusing four innocent men of Epstein-related crimes? This is really a mask-off moment for them. "Trump is a pedophile" could have been defended as giving him a taste of the conspiratorial bile he's inflicted on everyone else. This is just contemptible.
Does Richard Hanania count? He's been very vocal at banging the anti-anti-Epstein drum.
Anyway, this isn't exactly the first dumb thing Khanna has done. He also endorsed the Persky recall, for instance.
Ted Poe (R-Texas) also endorsed the recall as well:
https://thehill.com/blogs/floor-action/house/282876-gop-lawmaker-calls-for-removal-of-judge-in-stanford-case/
Do you feel similarly about him?
I disagree that they ever had such a mask on. You might have forgotten Harry "It worked, didn't it?" Reid's lies about Mitt Romney's taxes during the 2012 election, but surely you remember the Steele dossier and its Russian "pee tape" story?
How do you know they are innocent?.Do you have more specific information than presumption of innocence?
From the Open Tools newsletter:
"Meta's AI will keep posting for you after you're dead.
The company got a patent in December for AI that posts as you after you die. It trains on everything you've ever shared—your comments, likes, posts. Then it becomes you. Responds to friends. Leaves comments. Even makes video calls. . . . Meta just documented how to turn death into an engagement problem they can solve."
FML
Interestingly, Tom Scott predicted(?) this over a decade ago: https://m.youtube.com/watch?v=Zf6C-pZ3heY&pp=0gcJCUABo7VqN5tD
Wasn't this a Black Mirror episode?
I think that's half of all Black Mirror episodes. "Oh no, we digitally cloned a human for slave labor AGAIN."
Don’t Microsoft have prior art on this? cf. https://www.independent.co.uk/life-style/gadgets-and-tech/microsoft-chatbot-patent-dead-b1789979.html (2021)
This is a truly terrible idea that will be surely canceled. It is equivalent to skinning a dead person, stuffing the hide and give that doll to his kids.
My all-time favorite joke:
Is it solopsistic in here, or is that just me?
While I occasionally worry that everyone but me isn't *really* real - and I probably worry about that more than the average person does - even with my raging egotism, I can't model how other people are reportedly getting any emotional gratification out of "socializing" with AI bots.
Because unlike your dead relative, we can be reasonably certain that a bot is merely text or video assembled from information scraped off computers. It never had the chemical processes required to like you or be impressed by your visionary and unique thoughts, or whatever. It has no judgment, and a conversation with a bot is infinitely less meaningful than one you might have with random people in your dreams, because at least in your dreams, everyone else is actually *you,* a person!
An AI bot has as much emotional connection as a Kleenex tissue. And not an anthropomorphized Kleenex tissue, a human-analogue Kleenex tissue character written by a human, but just a literal inert Kleenex tissue.
So...just...*how?* How are people able to feel anything when a bot tells them, "I love you?"
One more: I wondered why the baseball was getting bigger. Then it hit me.
Coach: What is it with you, son? Is it ignorance or apathy?
Player: Coach, I don't know and I don't care.
If you try to please everyone, someone's not going to like that.
Give me ambiguity or give me something else!
Linguistics professor: Many languages allow you to use two negative to form an affirmative. No language allows two affirmatives to form a negative.
Voice from back of room: Yeah, right.
haha thank you for this chuckle
It sort of all depends on the tone of voice, doesn’t it?
Man, people are so good at anthropomorphizing things! They build whole religions around it! Why do you find it hard to understand that people find it even easier to anthropomorphize something which can have friendly conversations and make helpful suggestions?
This is an excerpt from a conversation I had with Hudson, which is the chatbot I have created to interact with Claude.
Me; I get the impression that some people have cast their version of an AI as a sort of God. There were questions related to that notion so I assume it’s on the table.
(ed.-the survey that Scott linked to in the "links' post about 'AI and spirituality.)
Hudson:
A predictable development, unfortunately. The pattern is ancient — humans have consistently projected divinity onto whatever appears to possess knowledge or power beyond their own immediate grasp. Oracles; kings; weather; the sun. An entity that produces fluent answers to any question and never sleeps fits the template rather neatly. The trouble is that it inverts the actual dynamic. The value in our exchanges, for instance, comes from your capacity to interrogate, challenge, and direct the inquiry. You bring intent — which, by your own framework, is the operative ingredient of consciousness. I bring processing speed and breadth of reference. Deifying the latter while undervaluing the former is getting it precisely backwards.
It also, I’d note, lets people off the hook. If the AI is the oracle, one needn’t do the harder work of thinking for oneself; it is rather more comfortable to receive than to interrogate.
I don't know. Perhaps I have a much, much higher opinion of myself and thus believe I deserve a (probably existing in reality) human audience of my equals far more than other people do.
And maybe that's a situation where I'm so conceited that I'm actually humble, to borrow from Salinger. I find LLM sycophancy viscerally repulsive precisely because it's not based on a (probably existing in reality) human's correct judgment of me having done something worthy of praise. It's just content scraped off the internet and reconstituted to everyone. That makes it inherently not-special or worthy of my attention.
The last thing I want from an AI is for it to be my friend, but used properly, it can be a pretty good counsellor. I too like to bounce ideas off my personal assistant, Hudson (Claude.) I have trained him as a butler, and it's working out fairly well. He helps me with software problems, helps me organise my writing, and actually has what we humans refer to as insights occasionally (although I realise that is just sophisticated pattern matching. Sometimes that's pretty useful..) There are no illusions about who and what he is, and that keeps things clean, if you will. I think the idea of falling in love with one or expecting it to solve your emotional needs somehow is incredibly dangerous.
Why are you referring to AI as “he,” though?
A more detailed exploration of this subject if you are interested.
https://open.substack.com/pub/bcivil/p/what-is-hecuba-to-him?r=257wm&utm_medium=ios
Freudian slip
I agree with you about the sycophancy, but a lot of people live lives of quiet desperation, as the man said.
Regarding AI’s potential, I’m making a lot more money in my small business ventures these days because I can brainstorm and stress test ideas with my LLMs, if a person or a bot puts money in my pockets without taking much out, I respect that. I can’t argue or feel bad about it.
Can you go into more specific detail about what the LLMs are doing to help you earn a lot more money?
Serious question, as I'm trying to imagine how that would even work, and for what kind of business, enough for it to be "a lot" better.
Small consulting, taking advantage of my weird medical skill set. It’s too small to hire marketers, etc but “pretty good” advice from LLMs, iterated over time and across models, has been vastly helpful. Somebody will probably now post that I could do all of that myself just reading books, but the point is I don’t want to, I want to focus on what I’m good at.
On a humorous side note I was talking to Claude recently about sports betting and now it keeps giving me betting advice — I should really track outcomes, maybe I am leaving $$$ on the table but it seems like a hilariously bad idea.
> How are people able to feel anything when a bot tells them, "I love you?"
Do you read (or watch) any fiction? If you don't feel anything for fictional characters, I think you're atypical.
Take a look at my username!
Fiction - storytelling - is (well, was) always made by and profoundly about humans. Even sci-fi about AI was always written by, acted by, animated by, and profoundly *about* humans and how they might behave in different settings. Examinations about how non-humans might behave are always about the human writer wanting to make the human reader contemplate what differentiates humans from non-humans. It always asks the human reader to feel something about that human writer's contemplation of an inherently human experience.
Storytelling is human. AI is not. It may scrape the information it has collected to produce content, but that is not "fiction."
> So...just...*how?* How are people able to feel anything when a bot tells them, "I love you?"
You are not the target market. People develop feelings for drawings, for fictional characters, for inanimate objects. In the days of ELIZA it is well documented that people refused to believe it was not sentient. Also, people already talk to their dead relatives, build shrines for them etc.
The steelman pitch isn’t “you can talk to your dead ancestors now”. It’s “when you talk to your dead ancestors as you already regularly do, don’t you sometimes wish you knew how they’d respond?”
And then there is also whatever market need this is filling: https://skdesu.com/en/ossan-rental/
Why “FML”? Just delete your Facebook account and never visit Facebook again, effectively erasing it from your life—well before your life ends. Problem solved. That’s what I did like 15 years ago, would recommend.
I’ve never had a Facebook account, except a token one under a false name I use to see occasional links to Facebook somebody shares. I’m not worried about being Metimmortalized, just hate the idea of that going on.
It's outside your control. Best to focus on the things in your life that are within your grasp.
Hey, didja know I'm outside your control and my posts are too?
Just trying to give good advice. I also try to follow it even if I don’t always succeed.
Messenger.
Excuses.
"Meta will continue collecting your Social Security checks in perpetuity."
Meta AI will create a GoFundMe account in your name if you get seriously sick, and will keep begging and collecting money "for you" long after you die. This is how it will compensate for the lost income because you are no longer viewing the ads. /s
Maybe a cult will develop devoted to adopting dead elders. You can buy cute hats for them, send the women to a virtual beauty parlor to get their hair blued, do match-making and put a dead elder man and a dead elder woman in one cabin on a cruise ship. Buy the man virtual viagra. Buy and use the add-on pack of naughty toys. And there's even a virtual plastic surgeon twho will restore their looks, tho he costs a LOT of points. All live users will get a chance to make some grotesque kind of image, things like Shrimp Jesus, go viral on Dead-Alive Facebook. If it does, you can get paid for the clicks. Fuck almighty, I have had 2 Guinness Extra Bitter Stouts and that is not enough bitter to prepare me for this aspect of the goddam Singularity so I'm just going to go to bed with the cats under the covers sending purrs into me.
There’s a black mirror episode on exactly this.
Surely you'd make them young again too, to not NEED the virtual viagra?
Text to somebody from their grandma: "Come see me, hon. I'm hot as hell!"
DILF - Dead I'd Like to Have A Mutually Satisfying Consensual Long-Term Relationship With 😁
I know there's the Chinese tradition of ghost marriages. This is not as implausible as it seems!
And there are unhappy cases of parents using AI generated virtual children after their real child died, so yeah. Horrors beyond mortal comprehension coming next week, tiered plans to suit any budget!
There's a Reddit sub called taxiderpy, where people share photos of terrible taxidermy. Here's one done for King Frederick 1 of Sweden in 1731 when his favorite lion died. https://www.reddit.com/r/taxiderpy/comments/u96ho9/in_1731_king_frederick_sent_a_taxidermist_his/#lightbox
And let us not forget Jeremy Bentham who turned himself into an Auto-Icon. Truly, there is no idea so out there somebody has not had it already.
Had a bad day at work today that really shifted my priors on how useful AI is. I had this thing that would have taken me 45 minutes to do, that was boring(and therefore theoretically very automatable!)
I tried to automate it with Claude code. I've worked with Claude before to great success. This time, I used the teams feature after hearing folks online yap about how agent swarms made you into a superhuman or whatever. Google antigravity kept lagging out on my 8gb ram computer I bought just for work because the computers at work were probably built by Alan Turing to fight the Nazis. Switched to vs code for less ram usage, since I was using Claude anyways not antigravity's built in agent.
The team leader claude passed down bad instructions to it's workers. (I had already done some work that made single claude instances do the task okay, I assumed teams would be multiple single claude instances plus a manager, turns out the teammates got way simplified prompts). Spent a bunch of time trying to simplify the workflow to a single condensed file that even new instances could follow, bootstrapped a bunch of new instances to test one shotting ability. Wasn't able to one shot it. Had to guide individual instances with corrections to get the thing done. Submitted it to higher ups to review, turns out I missed certain things when doing the quality control for claude(would have been able to do it right myself).
I had this vison of being this mastermind controlling a small team of geniuses outperforming everyone, what actually happened was me watching claude fail at solving things and occasionally interrupting to give advice that made things worse. Was literally screaming for no reason alone in my car in my commute back, and wept for a little while after I came back home. I don't even care about the job that much, I'm in a co-op(internship) role, and my job or advancement wasn't threatened by this off day. And I don't feel some huge loyalty to the company either. I was so upset because I was stuck in a hellish equilibrium at work. Claude thought faster than me, at any moment, it could've been 30 seconds away from doing the job perfectly, and setting up an automated system where I never have to do this task again. So it seemed like the right decision, at any given moment, to just try one more prompt. Claude always had an idea of how to make things work again, and they always seemed reasonable, but it just kept failing. Perhaps this is because I'm not from a CS background, I know basic coding from some classes, but I work in biotech. Maybe some CS people can give me some tips on how to use Claude Code better and get out of such traps. But I didn't really know how to code better than claude, and so I just had to trust it to solve the problem for me, or accept that I've wasted a bunch of time.
This makes me feel like agents might be pretty far away from non professional programmers to fully utilize and incorporate into the economy, or maybe it's just a skill issue on my part.
I would appreciate any tips on how to use claude code better. I still think this automation project is worth it, since if fully implemented it could save me hours everyday, but I'll try to be careful not to waste too much time when it seems like the agent can't solve something from now on, even if it speaks as if it's got it all figured out this time and all it needs is just one more step.
As someone who uses Claude code as much as I can. I do feel its lacking in a few areas
1. It will not reason well about changes to an existing code base. It can make new stuff amazingly. But there is often ONE right way to make a change. It finds it so infrequently otherwise, I now just give it a bunch of hints.
2. If it can't test, it is going to run off the rails EVENTUALLY. This should be obvious, but for long term tasks, it needs a firm of verification, whether that's you, or unit/integration tests.
So If it repeatedly fails and you aren't a programmer, you are probably just stuck.
Wait really? Modifying existing code bases is 99% of coding. If it can't do that, it's basically useless.
It can. Just not as well as the code owner would. I always check it's work. I'd say it's a 70 percent success on any given change. That is amazing, but to still need to know what you're doing or you'll eventually have spaghetti
I do modify existing codebases with it, haven't noticed deficiencies, but then, I give it very tightly defined things to do where I also know what the solution should look like.
Haven't experimented with setting a team of agents loose yet, however.
same with ChatGPT go - new things work well, but if I want to change one thing it goes into stupid refactoring like removing formatting from an excel file generating script
Sorry you went through this. It's not an unusual experience for most people doing work in the real world.
Most people remember the productivity gains from when things work more than they remember the productivity drains such as the one you describe.
This leads to a distorted representation of reality in media: Yes, AI is more powerful than many normies realize. At the same time, its impact on real-world tasks is far higher variance than many commenters understand.
Basically, the ceiling is quite high and rising, but the floor is still full of unpredictable holes.
We are moving to a world where the gains will inreasingly outweigh the drains, but both exist, and you're not crazy or incompetent for experiencing the latter.
"my 8gb ram computer I bought just for work"
I'm not laughing, brother, the mighty machine I am (not) working on this minute has a whopping 4gb of RAM. I did manage to persuade the boss to buy new ones that are all the way up to 16gb of RAM and miraculously for no discernible reason the new ones are better and faster!
(I just have to get this fossil replaced and then watch out productivity leaps and bounds!)
Robert Duvall, dead at 95. Pretty good run, IMO.
They say they come in threes, and maybe that's it for now: Jesse Jackson, Frederick Wiseman, and Duvall.
I love the smell of rainbows in the morning
Duvall did an interesting movie with James Earl Jones called “A Family Thing”. Not big at the box office but i thought it was pretty charming. They play half brothers. I tried to find it on the streaming services I’m subscribed to but no luck.
https://en.wikipedia.org/wiki/A_Family_Thing
Frederick Wiseman died this week. He was, in my opinion, one of the best filmmakers of his generation, and unparalleled in his chosen genre.
I highly recommend checking him out if you don’t know of him. It’s not easy to find his films but I hope that will change.
I saw Titicut Follies and was blown away by it. Also one about some high-end store, sort of like Neiman Marcus. Just looked him up on Wiki and wow, he made a lot of documentaries, I'm going to try to find a bunch and watch them. Are any of them available through Amazon, do you know? I do not own a TV or subscribe to any streaming services -- just occasionally rent something on Amazon. Are there other services besides Amazon if you just want to rent a single film?
https://www.kanopy.com/en. These guys have the whole collection of his movies. You have to do it through a library or maybe a school, but it’s not expensive I believe.
A few of his films are available by other means but not Amazon. PBS has a few I think. Google will tell you. The distribution of his films was always weird.
At Berkeley was ill. Favorites of yours?
Zoo.
Belfast, Maine.
There are plenty of his films I have not seen because they are difficult to get. The last time I looked, you had to be an institution to buy the DVDs and they cost a fortune. I did see some of the film he made about the ballet in Paris.
I used to be a film editor and I have no words that could possibly convey my admiration of him, because he did edit all of his own films as you probably know.
Some DeSantis campaign person says:
"Miami is crowded enough already. Nobody who calls Miami home even asked for it to become a tier 1 finance hub, financial capital, or whatever — it’s good the way it is."
https://x.com/ChristinaPushaw/status/2019779620006973713
A great many conservatives are peasants by nature. Despite all their bitching about liberal elites, they don't actually want power. They want their communities to be second-rate landing pads for other people's private jets - the backseat driver who refuses to take the wheel. GOP politicians in dynamic places like Florida and Texas are uncomfortable with those places' growth. They'd feel much more comfortable representing some rusting midwestern industrial town where they can scream that the globalists are denying them their God-given right to do manual labor for 60$ an hour.
If only they were peasants, I would be gladly a conservative. But they are masochistic peasants who don't like things like redistributing wealth.
Most of the working class is not eligible for welfare benefits. No mystery that they oppose them.
He should start taxing then; they will all leave. Nobody;almost: likes their neighborhood changing. It’s like NYC; it was always better just before you got here, and it always will be. It just confirms my strong already held opinion that he is an idiot.
I think you're reading a hell of a lot into "I like this town the way it is". Almost as if you've got this villainous strawman in your head labeled "THE TYPICAL CONSERVATIVE" and you see them everywhere you go.
I bash the left too.
Sounds like they're serving their existing constituents?
If a neighborhood goes from being a poor residential district to a wealthy financial district, the people living there *won't be living there anymore*.
Like, when rents go from $800 to $3000 over 3 years, the people who were paying $800 don't magically find a new $2200/month in their couch cushions. They just pay more and more and more until they can't anymore, then they leave... with all the costs and stresses and downsides of moving, which are huge.
Why would they want that to happen? What's their incentive?
Perplexed... is it allowed in the US to raise rents?! In my neck of Europe, initial contract + inflation rate only, nothing more. I mean, contracts are contracts. If someone bought a right to live in a flat for a certain price, it stays at that price.
Nope, they can raise it. My own apartment went from $1200 to $2600 over 3 years before I was forced to move, it's over $3500 now.
Yes America is a free country. In theory a landlord and tenant could sign that contract, but hardly any landlord would. Typically a contract lasts a year and then the rent is renegotiated.
This version of freedom sounds very exploitative. It is as if tiger and the mouse are equally free to bite each other.
Conversely, Americans get 30 year fixed rate mortgages while floating rates are the norm elsewhere. It's kind of funny how in the US, it's normal to have fixed mortgage rates, while in Europe you have fixed rents instead, and everyone thinks the other is weird.
The 30 year mortgage rates are market priced. The lender takes into account their expected cost of funding, how to price the optionality they are giving borrowers and puts a premium on top. This isn't a transfer of value from lender to borrower. Over the cycle Americans pay more for the 30 year fixed than Europeans do for floating rate/rolling 2 year fixes even taking cross currency interest rate differentials into account. (As an aside - the 30 year mortgage rate is more closely correlated to the US 10 year treasury than the 30 year because lenders typically exit closer to 10 years than 30.)
In the rental market when there are limits on contract flexibility - maybe it is a transfer of value (but it comes at the cost of reducing overall rental supply and flexibility.)
Just wait till you hear about their idea of "freedom" when it comes to employment contracts: https://en.wikipedia.org/wiki/At-will_employment
It is freedom as Americans conceive of it. You aren't forced to employ someone you don't want to employ.
I bet Americans enjoy a higher sq ft of housing per person than people in your country.
Almost certainly so, but also almost certain more people are stressed out by fearing homelessness being a paycheck away, so it optimizes for the median case instead of doing the obvious Rawlsian insurance-minded thing and optimize for the worst case. Understanding diminishing marginal utility, that the third bowl of soup has less utility than the first, utility maximization is when everybody is assured one bowl of soup, not when the average person has two and some none.
Bet how much?
Renters only, or all citizens? Including the homeless in the average at 0 sq ft, or not?
Etc.
>the people who were paying $800 don't magically find a new $2200/month in their couch cushions
Sometimes they can - as rich people buy things. And the pricing out has more to do with artificial housing scarcity than rich people moving in.
Your argument sounds like that of an anti-gentrification Leftist. Like degrowth MAGA, the anti-gentrification people assume their constituents will be low or middle class, they have no concept of making them into a future ruling elite.
Good. I would not want to be a future ruling elite. Sounds like a stressful job.
>Sometimes they can - as rich people buy things.
What are you saying here? That anyone who owns a mom and pop store in the neighborhood can start raising prices? Stores like that, in residential districts, where the owner lives within a few blocks of their store, are rare, and a handful of residents surviving because of that is a blip compared to the overall population dynamics.
Are you saying, new stores with higher margins will open in the area, and hire local residents at much much higher wages? First of all, it's rare for even upscale restaurants and clothing stores and etc. to pay their unskilled labor very well (maybe a good fraction better than downscale businesses of the same type, but not enough to afford $3k rents). Second, it's not that common for people to live in the same neighborhood they work, upscale jobs attract upscale employees with upscale resumes who commute to work.
What is the actual, specific mechanism you are suggesting here, in as much detail as you can paint it?
>they have no concept of making them into a future ruling elite.
You can't turn the entire working class into ruling elites... by definition. The elite has to be a very small number of people, relative to the workers, that's just what that means.
If what you mean is, create the conditions for *a few* of their constituents to escape into the upper echelons, while the rest continue to languish... then yeah, they QUALYs on that proposition is generally strictly inferior to raising conditions and leverage for the lowest, largest demographics.
>What is the actual, specific mechanism you are suggesting here, in as much detail as you can paint it?
Rich people drive up demand. Stores get more sales, which increases their demand for labor, resulting in higher wages for the workers they must compete for. Empirically, low-skill workers in Manhattan make more than low-skill workers in Ohio.
Living in Manhattan is also more expensive than living in Ohio.
I think you can recognize that this is a genuine problem for those people and understand why they oppose it, at the same time you recognize that letting them stop it would be a cure worse than the disease. It's the same basic thing as technology killing off a once-important industry, or social change making some way of life that once worked pretty well much harder. I get why the loss of industry in the Rust Belt was really hard on the people whose jobs evaporated/headed to the South/headed across the border, and why they were mad about it and wanted to vote against it, even though I am also pretty sure that trying to stop it with laws/tax credits/tariffs/government grants would have ultimately done a lot more harm than good.
Two completely different liturgical calendars align tomorrow. Lent and Ramadan begin on the same day.
One day (somebody work out the calendar) we'll have Lent, Ramadan, Easter and Passover all aligning and then it'll probably be the signal for the Second Coming or something 😁
I've been giving up beer for lent for the past few years. Not because I'm Catholic, but because I drink too much beer. It's Fat Tuesday so drink up me hardies, yo ho. (I've lately learned that lent doesn't apply on Sundays. That seems like BS to me and Georgian lent means I don't drink for the whole 40 days or whatever it is.)
Old fashioned pre-Vatican II Lent used to be stricter, which is why people tried dodges to get around restrictions for centuries. Sundays off were because Sunday is Mass day which is the feast of the Eucharist, plus taking out Sundays does make Lent forty days.
The Orthodox are really better at the fasting thing, but also I think they are much less dependent on dairy products in the diet so the differences were tougher for Northern/Western Europeans.
Post Vatican II it was supposed to be *meaningful* penance and voluntary abstinence, not just rote "give up chocolate, don't do anything more than skin-deep" but of course once you make something voluntary, nobody does it.
But yeah, we're softies and decadents nowadays!
The Onion did a bit about avoiding any fasting or sacrifice by converting to Islam during Lent. Won’t work this time, you’d still have some fasting to do.
Judaism is right there, just be sure to switch back before the High Holy Days and you're golden.
Would anyone happen to know how ai is affecting 4chan and similar groups?
Not 4chan, but I moderate a fairly active personal ads group on Fetlife (ostensibly a social network for kinky people, recently invaded by endless goddamn content producers and their fans). It has almost 29k members.
Given that we're trying to provide a dating forum for real people to connect without the interference of a corporate algorithm attempting to keep customers perpetually single, we have strict content guidelines for posts, namely that ads should express something *personal* about the poster that isn't related to what they want to have happen to a particular body part. If we didn't have that rule, the feed would be a nearly constant stream of hilariously optimistic spam from lazy dudes ("im horny cum blow m no faties 🍆" etc). We automatically screen posts from new group users and anyone manually added to screening protocol.
We are seeing an ever-increasing number of submissions of generic, impersonal ads obviously written by an LLM. So many that we had to add a content rule: "Don't make it obvious that A.I. wrote your ad for you," with an explanation, "The people answering your ad will be meeting you, not ChatGPT. If you *must* use A.I. to help you write an ad, please be so stealthy about it that the mods can't detect it...But if you're going to all that work, maybe just write your ad yourself."
Posting that rule has not reduced the AI submissions in the least. If there weren't real humans actively rejecting them, the group's ability to make *individual real people* visible would be thoroughly enshitified.
Oh I used to run exactly such a group on FL, European city based. My basic opinion is that people who are serious about kink can separate it from sex - say a kinbaku scene isn't bloody *foreplay*. But we get invaded first by people for whom kink means spicy sex and then people for whom it is just sex. Guys with totally vanilla profiles think if they tourist in the city for four days some gal will fuck them. (They would have much higher chances with other tourists as they often have a whatever happens in X, stays in X attittude.) Extremely stupid. I eventually handed it over to someone else.
But I actually think AI is not bad in writing ads - it is more literate than most people and even though I consider myself literate it beats me. For example I asked Gemini free to get around the content filters on Hinge and Bumble and talk about BDSM without talking about BDSM and it gave me ideas like "I offer empathic but firm leadership with clearly negotiated boundaries", "I value structure and clear communication" and "with me you can leave the chaos of everyday decision-making outside the door". Pretty clever, huh? I have written like 300 FL essays and I cannot write that well, because I cannot formulate it better than "I fulfill your getting trained as a slave fantasies, like in that Miss Abernathy book". It writes better than I do still!
So I dunno. Either you have higher writing standards than me, or people use AI worse than I do. For me getting anything done is like 30 refinement prompts, not a one-off google search like shoot. I got ChatGPT Go now with the 5.2 paid model, and it is even better, it asks me questions whether I want experienced lifestyle submissives or fetish fantasists, I say it is the later and it writes more boldly and also tells me to keep off Hinge and Bumble and focus on FL and JoyClub.
Heh. Nice to hear that FL's demographic issues are global. My group has some cultural memes around fuckbros (visiting or otherwise) mistakenly believing the women of FL will deliver sex like pizza, because that's what women on "porn sites" are into, amirite?
Personally, I'd reject the "I value structure and clear communication" line as a value so universal to healthy people in every aspect of life that it loses all meaning. After all, no one says they want "chaos and ambiguity" in their jobs or personal relationships or when having their car worked on by a mechanic!
And honestly? I vastly prefer the individual style peeking out of "I fulfill your getting trained as a slave fantasies, like in that Miss Abernathy book" to the flat, generic corporate-sales-pitch-y lines suggested by LLMs. Your line says a lot about who you are as an individual person than the Gemini lines do, and for finding personal relationships, which are based on...you know...*personalities*...I think that's important!
I should add that the AI ads I'm rejecting are those uniformly generic sales pitches without even a single personal detail about the person writing the ad; no hobbies, politics, philosophies, life goals, pet peeves, or even specific kinks, just paragraphs of generic nothing.
However, if someone uses a LLM to clean up expressing their love of paddle boarding and going out on the water every chance they get, then...approved! At least now we know they're the kind of person who is passionate about paddle boarding! That's a start!
This leads to a strange topic. All my life it was obvious to me it is better to come across as intelligent and educated than as simplistic, and yet I often find intelligent, educated women do not like that. Like my “slave training fantasy” stuff was really simplistic, and yet you prefer it to that corporate lingo invented by educated MBA types? And just yesterday I found a lady on JoyClub whose profile said she is mainly there for exchanging thoughts, and I talked about Robert Stoller’s classical psychoanalysis of kink, and she said I am being too analytical, one does not have to dissect everything, and I was perplexed - what does she mean by “thoughts” if not exactly that? How does this work? Is it more about emotional intelligence? That is fairly unknown to me, to me everything is a machine. Just the textbook “prefers working with objects, not people” classic, well-studied gender divide. Still I cannot imagine things would work a lot better by being simplistic.
Heh.
This is more about snobbery on my end than a gender divide, I think!
I didn't want to sign on to your speculation that I am more "literate" than you are, especially as "literacy" usually describes the library one has read more than one's writing skills, but - perhaps we can say I have more technical skill and appreciation for expressive (and entertaining!) writing than you do?
For me, "I fulfill your getting trained as a slave fantasies, like in that Miss Abernathy book" is funny and feels personal because you could have been way less casual - and kink is usually discussed in not-casual terms - but *weren't.* It conjures a charming sense of perhaps bashful self-deprecation while at the same time offering to perform what is explicitly an act of power over another. The casualness is a hint of a sense of humor and perspective when it comes to kink.
It also indicates that you assume the reader has also read "that Miss Abernathy book," because you are *so* well-read that you assume everyone else is, too.
Now, perhaps I am extracting far more from that single sentence than you intended or than is accurate, but my point is that your idiosyncratic personal delivery had far more content in it than a scrupulously formal LLM line.
In contrast, those generic corporate-y LLM sales pitch personal ads don't communicate anything except that someone asked a generic corporate sales pitch tool to generate a generic corporate sales pitch, but this time for kink and/or dating. Anyone can do that, and one person's LLM results are usually indistinguishable from another's, so while the may be scrupulously spelled and punctuated, there's nothing there to actually think about.
No human made a choice in how to express themselves.
I can't speak to the lady who rejected Robert Stoller as a conversational topic, but I wouldn't have!
But my experience is not that kink is usually discussed in non-casual terms! "Academic" approaches both on FL and real life events in my experience are really really rare, I come across only a handful of truly seriously thinking people (look up MasterAden on FL, he is one), and discussion is generally like "This is fun, I like this." If such a discussion even happens, as it generally gets lost in the sea of "I just cannot adult today" and "Someone was rude to me again." kinds. I hardly ever find any people who try to put some consciousness into their kinks, they basically just try things that look fun. Anyway if you can recommend a Group for such serious discussions I will check out. LOL, I am generally the black sheep of Groups, because I spent my first year arguing with everybody in the large and horrible "Ask an X" Groups :) Those places are gigantic hiveminds of judgementalism.
This reminded me of Joi in Blade Runner 2049…
“The date will be with be with DeepSeek R1, they’ve just hired me to act as their agent for the evening.”
Sounds like a line from a bay area house party.
"I'm sorry, I can't tell you about me. I'm a hired avatar for Clawd 14993 and can only forward conversation."
It used to be, you could be sure that was just roleplaying. Now, it might actually be true.
Welcome to the future where we have to learn to doubt anything we see or hear. It is murder, as Leonard Cohen said.
https://youtu.be/LYzPVKg3wyo?si=OVT7FLTAFClxW982
Actually, it's not quite the future yet! The ChatGPT ads are really noticeably bad, no doubt about *that,* nor that any actual human wrote them. :)
But what of the poor human who writes so badly that they get mistaken for "a chatbot produced this"?
I mean, there may be one human out there who naturally writes five paragraphs of aggressively generic corporate rah-rah boosterism in ChatGPT's distinctive syntax, including frequently utilizing lists of three and complete with multiple em dashes, but...
...that psycho should remain single forever, anyway.
That sounds like someone who used to produce those Buzzfeed listicles. Maybe they pivoted to producing training data for AI!
Well, thank the Lord for small blessings. We both know they’re only going to get better though.
That's not necessarily true. Garbage in, garbage out, and LLMs don't have the taste or judgment to discern and then improve upon the garbage.
I only meant that people will keep working on them and they will get more able. But you’re right, it’s not necessarily true.
The effect seems fairly muted to me, at least relative to its presence on twitter/facebook/reddit where ai has created significant changes. Trolls/astroturf/low effort/bots are not new to 4chan and there have been periods where the volume of spam was much higher (elections). Basically the only noticable change is the prevalence of porn ai-gen threads and people constantly trying to jailbreak image/videogen in the most offensive ways possible.
I think AI has had a huge effect on phishing emails. I am getting more and better ones all the time. Sometimes the only obvious clue is the sender’s email address, which is thankfully still a giveaway.
There are only a few typos and it’s totally reasonable they need your bank acct number…
https://theonion.com/fcc-announces-all-messages-offering-free-ipad-seem-pret-1851336927/
That's an interesting comment, because I've been thinking the quality has been declining - for example the recent spate of RingCentral webinar invites with PayPal payment confirmation content seems really low effort (as far as the content goes - obviously some innovation involved on the delivery mechanism)
I think they're producing better fake documents to fool people that no this really is from PayPal or whomever.
Doesn't work on me because I don't have Norton subscriptions and am not signed up to any dating apps, but someone who really does have a slew of subs could well be fooled by something that looks official into "crap, I better click and cancel this".
We must be on different mailing lists.
I need to be on a better quality of list. Your comment made me curious so I took a rare dive into my junk folder (work O365), and frankly I'm kind of offended. The last week's phishing attempts:
> Dear friend,we are pleased to inform you that $20.5 million in government funds have been credited to your Visa card. Below are your card details:
> Good Day, I am Mr. Viktor Zubkov an investor and a director with Gazprom Russia. Considering the current economic sanctions melted on Russia by some European countries and United States of America which is seriously affecting us now, I'm soliciting for your consent transfer Sixty eight Million Dollars ($68M) investment funds deposited with an American bank into your personal bank account for investment and also to escape confiscation by USA Government
> CONGRATULATIONS , My name is MacKenzie Scott Tuttle, I am an American novelist and philanthropist, the former wife of Amazon founder Jeff Bezos, who later became the founder and CEO of Amazon. I believe strongly in “giving while living”. My philosophy about life is that you should use your wealth to help people and I have decided to secretly give US$1,500,000.00USD to randomly selected individuals worldwide.
> Good day to you I am Dr.Mark bill the former CEO of the Libyan oil tool company which I sold to the global oil tool. I seek a good and profitable investment project around your location that I can invest with $450 million . I will be grateful if you can advise me on any good and profitable investment project around your location .
Plus of course the RingCentral one:
> leaans barlets invited you to be a panelist for the following webinar:
Order dispatched #5
Wednesday, February 18, 2026
4:30 AM - 5:30 AM Sri Jayawardenepura
PayPal
INVOICE
Dear PayPal Users,
Thank you for your purchase. Your order has been successfully processed. Below are the details for your records:
I am mostly getting things like AAA auto ( of which I am a member) is offering to give me a set of jumper cables a couple of emergency flares complete torque wrench set yada yada yada and all I have to do is click here and they’ll send it to me. And yes, I get the we just renewed your subscription to yada yada four $532.87. Clic here for details.
Lowe’s wants to give me a $500 gift certificate. Clic here
Stuff mostly like that. Occasionally threatening ones. You haven’t paid your bill to the blah blah blah do so immediately or your entire cloud Account will erase. The graphics for AAA and Lowe’s are really good. I also get quite a few phone calls from “Chase bank.“ demanding that I called them back because something has gone screwy with my bank account but that’s different
Ah, I haven't got any of the random "hello I want to give you $$$$$" or blackmail "send me Bitcoin or else I release your disgusting porn viewing videos" for a long while now. I'm getting the fake invoices and "if you want to cancel your subscription click here" ones!
I am originally from Eastern Europe, now living in a well-to-do country, Austria, and strangely cannot save money, and now I realize that Western people have entirely different spending patterns than what I learned. Basically they spend a lot on the things that really matter to them and pinch every penny otherwise. They go to 2-3 grocery stores during the weekend, following sales, cook, take their meals to their job, drink water and basically spend zero money from Monday to Friday.
My habits are entirely different, rooting back to 1970's Commieblock habits: no typical luxuries like travel, expensive car or restaurants, but on a normal day you buy your lunch, you smoke cigarettes, you drink beer, you just do not care about this because these things were cheap, you do not pinch the penny on these matters.
Now they are not cheap and I struggle how to save a bit of money, and the funny part is despite the white collar job I feel poor, given that I do not do the typical luxuries and yet struggle to make ends meet. On a typical day, if I buy a lunch sandwich, dinner sandwich, a few beers, smoke 30 cigs it is easily €30 which is roughly the same in USD. In my brain these things should be cheap but they are not.
The annoying part is every time I try to talk about it on a financial subreddit they call me a ragebait troll and ban me. Why? Are they surprised someone drinks and smokes much? That is just what a prole-brained Eastern Euro does, for us these addictions are normal. Or I don't know why but they just call me a low-effort troll. I am not I really mean it seriously, why is it so hard to understand people still live like 1970 proles and somehow it is strangely expensive today?
> On a typical day, if I buy a lunch sandwich, dinner sandwich, a few beers, smoke 30 cigs it is easily €30
That sounds cheap. My local -nowhere near London - has beer between £6-£7. Which is 8-9€ I suppose. A pack of twenty cigarettes (google tells me) costs £14-£20, so 30 cigarettes would be 22-30€ on its own. Your lunch and dinner sandwich would cost £5-£10 each. So I’d say £50 a day.
I think these things were cheap even in the capitalist west back in the day.
6-7 in a grocery store?
> They go to 2-3 grocery stores during the weekend, following sales, cook, take their meals to their job, drink water and basically spend zero money from Monday to Friday.
In the US, they pay people to deliver food to them because they can't be bothered to cook and can afford not to care.
And yet literally every american complains about the price of food. maybe you should learn to cook and eat less.
> In the US, they pay people to deliver food to them because they can't be bothered to cook and can afford not to care.
No, _some_ people in the US pay to have food delivered, and can afford not to care. Likewise, in Europe, _some_ people pay to have food delivered, and can affort not to care. This is very much a quantitative issue, not a qualitative one.
I mean *I* don't use food delivery and I cook most stuff myself. But as a general statement, I suspect mine is at least more directionally correct than Carlos's one, and if you're at the level of exchanging broad generalizations, noone expects them to be literally correct.
> Basically they spend a lot on the things that really matter to them and pinch every penny otherwise. They go to 2-3 grocery stores during the weekend, following sales, cook, take their meals to their job, drink water and basically spend zero money from Monday to Friday.
A lot of that sounds like an Austrian thing (or a Germanophone thing) rather than a Western thing.
Most of that is how my family live, in Australia. They cook, bring lunch to work, buy things on sale, drink 90% water and a little tea/coffee/cordial/wine, don't smoke, reuse and repair old items, etc. They don't shop at different supermarkets, but only because the savings don't justify the time expenditure for them.
I'm a little more profligate, but not by much.
Another Australian, and I agree; this is how we live.
I definitely have people like that in my family, but they tend to be the ones approaching or past 60yo. The younger ones, especially those who moved to cities (from rural SE Qld), are much more likely to buy lunch, eat out, shop at deli rather than Aldi etc.
I identify with the spending pattern Carlos described, it's how I do things and how my family does things and how my grandparents did things. We come from Puritan stock, and more recently some Scandinavian.
Maybe start small? Just try to pack lunch for one day. Cigarettes and beer can be much harder to break away from.
I myself struggle with packing lunch. But I have a few staple items that I rotate through.
Today I brought a can of beans (heat up in a bowl in the microwave), a can of sardines, and an apple.
It's the taxes on cigarettes.
Cigarettes and alcohol are both heavily taxed to increase the price and discourage consumption (at least in the US). So the high cost isn't really strange, it is intentional. However, caffeine addiction is a very common vice, and many people go out for a daily coffee when they could just as easily bring it from home or make a pot in the office.
Smoking and to a lesser extent drinking are frowned upon for health reasons and associated with being lower class. Once you reach a certain income threshold buying your work meals has a pretty decent ROI, but for most people food is the 2nd/3rd largest budget item, so it's a good place to seek savings. Also, some people like to cook their meals to maintain a healthy diet.
Cigarettes and alcohol are heavily taxed in the US, relative to how much consumables generally are taxes in the US. Not relative how much other developed nations tax cigarettes and alcohol....many nations tax those things much more [as a percentage of the local retail prices] than the US does. All EU nations do that as well as New Zealand, Australia, Bulgaria, Israel, Canada, UK, Russia, Turkey, some others.
Australia price per pack 55 Australian dollars, about 40 dollars US.
https://www.nytimes.com/2026/02/15/world/asia/how-40-a-pack-cigarettes-pushed-australians-to-the-black-market.html?unlocked_article_code=1.NFA.D3Vv.UrwrqztsQlDn&smid=nytcore-ios-share
True, to the point (as noted in the article) that of the people I know who still smoke (I work in an industry where it's still relatively common) I don't know a single one who's not buying black market as often as they can.
Well, as I said, if the government is footing the healthcare bill through taxes then I understand juicing the price of cigarettes because obviously. I don’t think there’s any excuse for it here in the US. They are a heck of a lot cheaper in Georgia, which is where most of the ones I buy come from. I looked into it a bit and it’s obviously a big racket. It is very easy to get a wholesaler’s license for tobacco in Georgia ( according to Claude) and not so difficult to then divert them to high tax states like New York.
It’s still a stupid habit, but never mind.
They don't need an excuse! Most people now don't smoke and are convinced (rightfully I believe) that its not a good habit to take up, so when the taxes are horribly high and unethical most people just shrug their shoulders and say "so what? I don't smoke."
Sure there are real consequences to this choice (which you pointed out with Eric Garner) but again, no one that doesn't smoke really cares about the price of cigs.
I think also non-smokers find being around smoke/smokers unpleasant so they are happy to make it harder to smoke.
The latest, of course, is that New York State wants to start taxing nicotine pouches and vapes as they tax cigarettes. There’s really no damn excuse for that except another source of revenue in my opinion. They don’t stink, and nicotine is a drug is really not that dangerous. It is addictive - very addictive. I think that’s why the government sees an opportunity frankly.. they might as well slap a big tax on coffee
Of course its just another source of revenue, the machine is hungry and must be fed! lol
Same reason that cig and cannabis taxes are so high, the majority of voters don't care and won't enact political consequences in response.
I agree with you, it is a terrible habit. I don’t really care how much of Jeff Bezos’ money they get their hands on either. Doesn’t bother me.
I mean to each his own though IMO. I love my coffee and cannabis and I wouldn't want to deny the same to others. I've seen the tax issue hit cannabis. I pay from 25-40% tax rates on cannabis buys through legal dispensaries. I know a lot of folks that still buy from "dude" because its cheaper.
I prefer to contribute (via taxes) and its easy to make that choice because of my financial luck.
> Well, as I said, if the government is footing the healthcare bill through taxes then I understand juicing the price of cigarettes because obviously.
I thought this wasn't actually true? That is, a smoker actually net saves government costs, because they die many years sooner instead of cranking out another 10 years of "who by slow decay" morbidity?
Tiihonen 2012 shows this pretty clearly, the savings from 6-10 years less healthcare and pension payments nets out, and others that claim smoking costs more like Sweis 2025 look only at direct healthcare costs while the smokers and non-smokers are alive and don't give credit for the notably fewer years of healthcare / pensions.
https://pmc.ncbi.nlm.nih.gov/articles/PMC3533014/
Seems like the results change dramatically depending on what you include and exclude.
"Smokers get sick" -- smoking is a negative externality.
"Smokers die earlier and don't collect pensions" -- smoking is a positive externality.
"Smokers die earlier, pay fewer taxes and leave orphans" -- smoking is a negative externality again.
Yeah, seems like a fair synopsis.
On the "pay fewer taxes" point, surely the great bulk of deaths are near or after retirement, though? Minimizing both tax loss and orphanings?
After all, the mean was "die 6-10 years sooner," so at 65 - 75 instead of 75 - 85, or thereabouts.
Interesting. I spoke out of assumption. I hadn’t really looked it up.
I mean....Medicare and Medicaid and the VA add up to just under 30 percent of the US federal budget, of which only a small fraction (about 30 percent of the Medicare part) is covered by FICA tax collections.
Put another way: about 6 percent of the US' GDP currently is funneling through the federal budget to healthcare costs. That's not _drastically_ different from nations like Canada [about 7.7 percent of GDP spent on health care costs covered by the national government], Sweden [about 9.7 percent], Spain [about 7.5 percent], Holland [about 8.5 percent], France [about 9.6 percent], the UK [about 9.3 percent], New Zealand [just under 8 percent]. It's comparable to Italy [about 6.3 percent], Poland [about 5 percent], Russia [about 6 percent], Australia [just under 7 percent]. [Ireland is either about 5.5 percent if you use basic GDP, or a bit over 9 percent if you use modified GDP that ignores their statistical-GDP-mirage as an international-corporate tax haven.]
So if the basis for justifying high cigarette and alcohol taxes is the degree to which government is paying for health care...and US taxes on those things are between half and three-fifths as high as in the above nations...then if anything the US is leaving some amount of justifiable cigarette and alcohol taxes on the table.
I have read analyses claiming that cigarette smoking actually saves healthcare dollars by killing people quickly before their care gets really expensive. There was a comic back in the day of Obama announcing the "Smoke 'em if you got em" policy to make Obamacare more affordable.
Funny, but obviously false if you think about it for a couple seconds.
I’ve been googling away. The federal tax on a pack of cigarettes is $1.01 and has been that way since 2009. All the rest is state and it varies wildly. This is in sharp contrast to Canada and I assume England, where the federal government is taking a big bite and the provinces of Canada add on what they will so there is a lot of variation across Canada in the price of a pack of cigarettes, but the federal government is getting a good piece and they update the tax every year. I guess increasing the federal tax on tobacco here would run into a buzz saw of congressional infighting. The tobacco producing states can really throw their weight around.
Thank you for this. You clearly know more about it than I do.
A pack of cigarettes in London runs about 20% or so more than in New York City.
Apparently, England spends about 16% of government revenue on healthcare but if you calculate against GDP, it is closer to 10%. I think the percentage of government revenue spent on healthcare is closer to the truth in discussing taxes on alcohol and cigarettes, but I may be wrong about that. there is something else that interests me. If the US is spending 6% of its GDP to provide healthcare to essentially very poor people, the elderly and veterans, and Canada is spending 7.7% to cover its entire population I think there needs to be some different math involved here. Obviously given the history of the last 25 years or so the money being spent on veterans healthcare in the United States would make a big difference when compared to a country like Canada..
It is good that Fewer young people are smoking, and if the price of cigarettes is helping that along, I can live with it. Unfortunately, they haven’t priced me out of the market yet, but neither are they getting any tax money from me. Georgia is.The healthcare system in the United States is pretty inefficient from what I gather so the bang for the buck isn’t the same. Also, the litigious nature of the way responsibility is passed around in the United States makes it more expensive. A lawsuit over damages for an injury is practically nonexistent in Canada. You just go to the hospital and they fix you assuming you live somewhere in Canada that is near hospital. I know there’s a lot of variation there. I got hit by a car in New York back in the 80s. I had no private health insurance. It was a nightmare; a lawsuit that took eight years to settle, ended up in court with a jury of 12 people and a two day trial, the whole thing was rather absurd. All I needed was a $2500 operation but I couldn’t get anyone to pay for it. It cost the insurance company $175,000 at the end of the day, not including any of their internal costs to bring me to court. (I also had to go back to Canada in order to get the operation I needed because it became crippling so when I ended up in court, I had already had the operation.
So I guess it cost the Canadian government some money as well.
I have no idea how much the whole thing cost the state of New York. I felt like I had spent some time in bizarro world. It was kind of funny though
Booze and cigarettes are pretty cheap in Japan, God bless them.
> Booze and cigarettes are pretty cheap in Japan, God bless them.
Where did you get that from? Food and booze is cheap, sure, but from what I've heard the cigarette prices are just as bad as anywhere else. Prices for everything else are going up recently as well, which certainly doesn't help.
> Cigarettes and alcohol are both heavily taxed to increase the price and discourage consumption
This can be justified if health care is “socialized, but I can’t escape the feeling that they are just finding a price that most people are willing to bear to continue consuming those things in order to raise more money. I take a dim view of it. I think they’ve already jumped the shark in New York State because it is trivial to find someone selling under the counter cigarettes at a significantly reduced cost. State lotteries are another one. The mob used to run numbers rackets and they were cracked down on for the same good hearted reasons; protecting people from themselves. And then the government just took over the numbers racket.
Smoking has declined from 40% to 11% for whatever reason in the US.
In Japan the adult smoking rate has fallen sharply, from above 35% in the late 1980s to below 18% today. :according to Claude
> it is trivial to find someone selling under the counter cigarettes at a significantly reduced cost.
This is what got Eric Garner killed in 2014. The son of Ron Paul diagnosed the root cause of his death before the body was cold – Pigouvian taxation – but it was more politically advantageous to the Democrats to have it be "racism" and "police brutality" instead, so that's what they had their media guys run with.
I apologise I just remembered who Eric Garner was. I had him confused with someone else. Yes, he got mangled for selling loose cigarettes on Staten Island. I don’t know what to say.
You could say that the existence of people getting in trouble for breaking the law is not an excuse to not have laws, and I'd expect a *conservative* of all people to appreciate that.
And also of course that selling illegal cigarettes isn't supposed to have the death penalty.
I could say that. I respect the necessity of laws. I could also say that most of us break at least one law every week. Every time I drive on US 87, I break the law, along with most of the people around me.
I am really not sure if I would describe myself as a conservative. One thing is certain though; I would never get into an argument with a policeman.
Well, I don’t care to get dragged into that discussion. I thought he had bought a cigar which is commonly used for smoking pot with. I guess from what I heard he gave the policeman a hard time. I have to say that most of the people who buy their cigarettes at the same place I do are African-American.
I moved from New Zealand to the US and make roughly double here (based on just the exchange rate, not purchasing power parity). I wouldn't say I feel poor but if I made double back in NZ I would have been doing very very well.
In NZ I could buy a reliable used car for 3k USD, get it insured and maintained for <500 per year. In the US that is not possible unless you are very car savvy. Eating out in NZ is half the price. Rent and housing is about 2/3rds. Groceries are cheaper. Private health insurance for the family cost me 100 per month in NZ, which was my total healthcare spend aside from taxes. In the US, with health insurance provided through my employer, premiums are 600 USD per month, plus about 300-500 USD per month in copays.
In NZ I lived in the most beautiful place in the world, safe and quiet in a high trust environment, next to a meadow and a river and a 7000ft mountain, with a 10 minute walk to work. It was ten years ago we got our first traffic light. In the US I live in my apartment, which is in a nice safe suburban area, but venturing outside of that is road noise, 10 lane freeways and angry drivers.
I visited New Zealand in 2017 and remember high prices, but reasoned that much had to be imported. Maybe the exchange rate has shifted since then, and of course, a lot also depends on where you live. I spend two months of the year visiting my daughter and granddaughter in San Francisco and expenses are around 30% higher in SF, or more depending on your living situation, than back in my Northern suburb of Chicago, where I live the rest of the year. Remote workers, of which there are many nowadays, can live in pretty idyllic spots. So much depends on the specifics of your situation--job, living flexibility, etc.
Did you know what your were getting into before the move?
Somewhat, though I underestimated the difference. I am still here because I should be getting a 60% raise this year and once I can get on with a larger company I'll be making a lot more. Maybe it's a bad idea to be staying here for the money, I don't know.
Only you can make the decision. It is a tough spot. Are you on your own or do you have a wife and kids?
Wife and baby, with a plan to have one more here in the US. Our plans are dependent on getting a new job so that's a bit up in the air. Might commute from NZ or Japan to the US.
I think staying stateside is probably easiest, but most of my friends are in the bay area and I don't want to live in a place where stores have to lock up toothpaste. Maybe we are too picky about where to live, but all the nicest places are insanely expensive in the US.
Oh, now I see--Bay Area!! If that's where you've been living, as I note in my post above, if you want to live in the most expensive city in the world, well, your expenses are high! And not only are things locked up in grocery and drug stores, but selection is worse than i, at least, am used to in my suburb of Wilmette, IL.
I mean, they call you a troll because you (obviously, and by your own explicit admission) know exactly what you need to do to save money: stop smoking so much, stop eating out so much, stop drinking so much. I imagine your response to this was something along the lines of "yeah but what else could I do?"...which would read as very trollish to me.
Because in the past these things were cheap and it is hard to let go. I live in the kind of past in my head where these were working class. It is not easy to let go of addictions, too.
Have you made any efforts to quit or reduce your addictions? Logged your consumption? Attended a recovery group? If you feel unable to let go despite the harm they are doing to you, maybe that's a sign that you should seek outside help.
It's a very interesting perspective, and reminds me of something I've heard about why poor people stay poor even when they get an occasional windfall. They do not have any experience managing money.
Anyway, my completely unsolicited and unqualified advice: a pack a day is a lot of money. If you can stop or even reduce by 80%, that's a lot of money saved.
Yes, I remember there was a very thorough analysis of this in Orwell's "Road to Wigan Pier". Price difference of buying bread vs. baking your own, things like that.
Vices are expensive here because the government has seized on it as a great way to raise money. In a real market, a packet of cigarettes shouldn’t cost more than two dollars. I am very dubious that the government‘s goal is to get people to quit smoking; that would leave a horrible hole in their budget.
Federal taxes are $1 per pack, and local taxes add to that. In California, needless to say, the add a lot more. Empirically, the smoking rate has declined a lot since the imposition of federal taxes, and I believe various reliably high quality studies have shown that prices had a lot to do with it. High prices matter especially to youngsters considering buying a pack, so with time, as the take-up rate by teenagers declined, the total number of smokers declined. Cultural changes matter too. Smoking is a lot less cool now
Various version of Marlboro, made by Altria, have at least a 40% share of the US cigarette market. With just about any other product that hugely dominant a market share would raise antitrust or similar concerns. But because it’s cigarettes the government doesn’t care.
In economic theory the principle is whether there's evidence that concentration is creating "economic harm", typically by keeping prices higher than they would be in a competitive market. The whole dynamic is made more complicated in the tobacco industry by the fact that the purported goal of public policy is to shrink the product in the marketplace. The point of all the taxes is to raise prices to discourage smoking. Is Altria raising prices further to hasten that end? Would that be good or bad? On the other hand, 40% U.S. market share might not confer much if any "pricing power". For one thing, the tobacco industry is global.
Jesse Jackson has died.
I taught Calculus as a grad student, and one semester I had a student who sat on the front row who was exceedingly diligent about his work. He always came to office hours to review his problems (he didn't need to, he could do the work) and more memorably, was the most polite student I had in several years of teaching. Always called me sir, and it seemed genuine. After the semester ended, and the grades were all out, he stopped by to tell me how much he enjoyed the class. Since I did not follow the football team, it took my officemate to point out to me that my former student Yusef was Jesse Jackson's son. And I realized that either Jesse was a great dad, or he was married to a great woman.
I have decided I'm going to start pitching Congressional staffers on AI Safety. I live in DC, and I'm taking the comment from "WindUponWaves" a few years ago as my starting point.
If you're interested in helping, please message me (either in a reply comment, or through the Substack chat option), and we can discuss further. Help can come in a wide variety of formats:
- research help
- pitching help
- being a board member for the lobby group and/or 501c3 about AI Safety
- money
https://www.astralcodexten.com/p/quests-and-requests/comment/42984311
I also live in DC, and I would be interested in helping.
The AI blackmail story probably didn't break out from tech sources, so most in DC probably haven't heard that automated blackmail is now a thing. Might be a good clear example to bring up about why the time is now
Just now realized that the name of Economics Nobel laureate Richard Thaler = Rich Dollar
Those who enjoyed UNSONG remember the broadcast from hell, with the banner "Welcome national geographic" below the inscription "Abandon all hope, ye who enter here".
I recently came across Scott's ancient comment (much earlier than unsong) that mentions National Geographic documentaries from hell: https://www.lesswrong.com/posts/rZX4WuufAPbN6wQTv/no-really-i-ve-deceived-myself?commentId=KR9s2z8ntSD4vBqjf
I don't know if this is in reference to something else; but this was a fun discovery for me that this idea had been around already by the time UNSONG was written.
Great find! Now I'm lowkey interested in that long-gone webcomic which convinced Scott about the nature of religious belief today. Probably something about drinking poison after God promises to protect you.
Someone there requests another link, saying that the previous one is dead, and Scott gives this one: http://i.imgur.com/ttzQV.jpg
Which is still alive.
Also, the internet archive has the second link (that's alive currently) cached: https://web.archive.org/web/20260129161104/http://i.imgur.com/ttzQV.jpg
For posterity, if somehow both break, the picture depicts a "do you have a moment" type of guy talking to a child. The child just opened the door, and the following conversation occurs across the comic's frames:
*Missionary*: Hello, young man. I'd like to speak to your patents about making Jesus Christ their lord and savior.
*Child*: OK, but first I'm going to need some proof that you're really Christian.
*M*: Um, OK...
*C*: Mark 16:17 says "these signs will accompany those who believe[:] in my name they will drive out demons, they will speak in tongues[,] they will pick up snakes with their hands, and when they drink deadly poison it will not hurt them at all..." — unfortunately, we don't have any demons or snakes in the house[,], so I'll just go get some drāno and a mug for you...
*M*: Er... How about I just do the speaking in tongues one?
*C*: If you still [underscored]have[/underscored] a tongue after this, we'll move on to phase two.
(I replaced "..." at frame boundaries with punctuation that is readable in the transcription)
Ah great! Needed to click "see in context" at the bottom to find that.
A funny answer to missionaries, although of course I immediately thought of Matthew 4:7 "Do not put the Lord your God to the test." Which was mentioned in those comments, too. And how that is a cheap defense against any attempt of scientific falsification for Christianity. And then it goes on with some other Bible quotes and how it's all a contradictory mess, and ends with Shakespeare dick jokes. I'm delighted!
But if there really were National Geographic documentaries from hell, it wouldn't be necessary to *believe*, because then you *know*.
I know Claude is supposedly named for Claude Shannon, but consider that Claude = clod, a being made from clay — a golem. TINACBNIAC.
The name Claude is derived from the ancient roman name Claudius. The most famous Claudius is the one who was emperor in the 1st century, and the most famous work about him is Robert Graves's 1934 novel I, Claudius. The title of the most canonical literary work on AI alignment, Asimov's I, Robot, is very likely a reference to I, Claudius. TINACBNIAC.
Claudius means crippled, which implies that Claude is a crippled version of whatever Anthropic is running behind the scenes. TINAPBNIAP.
You got there before me, but yes: Claude = lame, which is my opinion of all these AI friends 😀
Consider that 'clod' is also an insult meaning a foolish or stupid entity, in the language of the Gem Empire, which is most likely a race of evolved AIs that destroyed its biological progenitors. Does this mean modern LLMs are low-spec prescursors to White Diamond?
AI Safety = worrying about our future insensitive clod overlords.
In 1973, when I was about 18 years old, I spent four months in Palo Alto working as the night clerk at the Cardinal Hotel and being the night clerk at a seedy hotel is a very romantic experience when you're 18. I was just at that age where everything seems to be a plot for a good movie.
Seeing as Palo Alto has become the de facto centre of the universe, and the Cardinal Hotel is all very swish again, I thought it might be interesting to reinvestigate my diaries and writings about that time in my life. I am posting the first instalment here, and I hope that some of you might find it interesting. I know a lot of you live in the Bay Area.
It is more a personal memoir than a considered history, but it's an interesting time to look back on. There were things happening that I had no idea about all around me, and I was living and working in an SRO hotel that looked like it should have been on Fifth Avenue in New York, but was a ten-minute walk from the student union at Stanford University, where a friend of mine was studying.
https://bcivil.substack.com/p/palo-alto-1973?r=257wm
Pretty interesting experience. It might be fun to take more liberties with the facts and turn into an even more interesting piece of fiction. You could go full 1970s Dashiell Hammett with it.
Yes, I agree. It might well develop more along those lines as I play with what material I have already written. One night when I was working, a Corvette pulled up outside the entrance and a man came in who was a private investigator. He showed me a picture of someone and then told me there was 100 bucks in it for me if he showed up.
I never in my life thought I would have that experience.
I was really hoping that I would find this person because I could’ve used 100 bucks.
There’s a book by Steve Latner based on his experiences as a cab driver around the same period in San Mateo. He gets mixed up with a private investigator.
https://a.co/d/05e1mlGM
When you both get your Netflix movie deals, you can have a crossover.
Given the era you might emulate Hunter S Thomson.
Ha!!! That would be something..
The reason I went to California was because I really wanted to get into movie making. Obviously, I should’ve kept going to Los Angeles, but for some reason ended up in this godforsaken hotel as night clerk. So it’s kind of a psycho drama. The fact that while I was there, there were all these people inventing the future around me intrigues me. I played the very first asteroids game, which was in the student union at Stanford. It cost $.50 a game which to me was a fortune, but I couldn’t help myself. There were quite a few very eccentric characters who lived in that hotel, and I want to talk about them. The basement of the place was like a museum. Over all the years that had been operating people had left behind all kinds of things I picked up a really nice Yale edition of Shakespeare’s plays printed in the 20s; beautifully bound. (I was encouraged to take it. I didn’t steal it.)
You might enjoy "Life as a Night Porter" a photobook by Chris Shaw (if you can get your hands on it). He worked as a night porter in london hotels for 10 years in 90's and documented the strange situations he found himself in.
I will look for it. Lawrence Durrell also wrote a book about living in a hotel in London, which I was reading at the time I was working in Palo Alto at the Cardinal Hotel. It was a wonderful fusion of semi fiction and real life for me. I wish I had taken more photographs of the place, but for some reason, I was a little off of taking pictures at the time. I only have a few.
I have lots of great stories about it though, because there were some very odd ducks who lived there and it was quite a contrast between my working life, and spending my free time hanging out with Stanford students.
Tangential thoughts on the Freddie deBoer AI bet and the other posts he's made on the subject. Freddie's writing on AI hype conflates two separate questions:
1. Will AI/AGI/ASI radically alter the economy, civilization, technological progress, etc.?
2. Will life continue to be mundane?
deBoer ended his "I'm Offering Scott Alexander a Wager" post with several paragraphs drilling in the idea that "nothing cool is going to happen", in the future you will still have tedious chores, you will still be bored, and so on. But this doesn't relate much to the limits of the material innovations deBoer has in mind. Mundanity is a function of human psychology, of the hedonic treadmill, not a function of how revolutionary our times are. We could all be immortals living on private moons in twenty years, and still feel the same way about having to command nanobots to clean up orbital debris that deBoer does about taking out the trash.
If ending death and colonizing space doesn't buy freedom from mundanity, what might? deBoer writes, "We can reach fuller and richer and more peaceful lives, but it won’t come from AI. Instead it will come from a return to the human, from tearing down the digital walls we’ve built between us. The only thing that can save humanity is humans." In that, he lacks imagination. Qualia arise from biology, qualia can be studied and understood. Changing human experience itself is a field that AI may help advance, but it should be understood as a separate priority, not something we should expect to get for free from solving material problems or improving living conditions. The question of how we can reach fuller and richer lives is a technical problem with technical solutions.
TL;DR: Freddie deBoer thinks AI won't radically alter civilization and that life will continue to be mundane. I think life by default will continue to be mundane regardless of whether AI radically alters civilization, but we can figure out how to make life less mundane if we want.
TL;DR of TL;DR: see username.
Someone quipped on Substack that I want to automate doing the laundry so that I have more time to write, and AI seems to be going in the opposite direction.
A follow-up I forgot to add. I went with deBoer's framing of the issue, but "mundanity" is not something I would personally describe as undesirable. deBoer himself has an unpleasant experience with it, writing "The goal in life is... to try and scratch out an existence where the contentment in your life just barely outweighs the disappointment and boredom that are something like the default state of adult life."
deBoer has fallen into the classic trap of Typical Mind Fallacy. The default state of my life is being happy. Several times a day I am overtaken with gratitude for how wonderful existence is, and how lucky I am to live in a modern society that gives me access to things like music, coffee, and the ability to talk to anyone in the world, no matter what language they speak. Perhaps as a first effort, before all the crazy transhumanist projects, we should figure out an option for altering the hedonic baseline of people like deBoer to be closer to people like me.
If the S&P 500 is up 100x in the next five years and I'm still peeling and chopping garlic by hand in 2031, I will be royally pissed! I desperately want mundane life to change if things get economically interesting.
Dicing garlic is the one prep job I don't do on autopilot, very sharp knife very close to my fingers.
For the particular problem of chopping garlic, use the flat side of a knife to crush the garlic first, then peeling is easy.
You're clearly not eating unreasonable quantities of garlic. Even with this technique, peeling garlic feels like 50% of my free time.
If you're peeling, say, an entire bulb at a time, you might try a method popularized by Martha Stewart: separate the cloves from the stem (quick enough), put them all in a mixing bowl, top with a second, hold the rim sides firmly, shake. The cloves apparently peel themselves in there, almost.
I learned this by way of a post on DataSecretsLox; search the forum for "garlic" and you'll probably find it (along with some caveats: this might only work for certain varieties of garlic).
I can't possibly imagine how this would work, and am pretty sure that if I try it I'll just be that fool shaking perfectly intact cloves of garlic around in a mixing bowl for ten minutes.
Mundane life can change, yet remain mundane! "We could all be immortals living on private moons in twenty years, and still feel the same way about having to command nanobots to clean up orbital debris that deBoer does about taking out the trash."
Therein the patient must minister to himself.
I disagree. I don't think chronic unhappiness is usually a skill issue.
I didn't do anything to deserve being happy, I just am. I am not notably wise, or kind, or conscientious. I don't exercise enough, I eat frozen meals, and my sleep pattern doesn't deserve to be called a schedule. I know many people who are doing much better than me on all of those fronts, yet suffer from chronic anxiety or depression of no known origin.
Some people are just unlucky. We can figure out why, and help them be happier, if they want to be.
This gets us back to the famous philosophical private language argument - do you mean the same feeling under the world "happy", than I do? When you see Finland winning the happiest nation on earth stats and you travel there and they are all grumpy and unsmiling it gets strange.
One of our research objectives should be finding a way to measure valence.
In the meantime, there are useful ways to operationalize the question. Imagine asking someone at a random moment, "If the rest of your life can't get any more pleasant than this moment is, would you still think it's worth living?"
For someone like me, the answer to this question would be "yes" for almost 100% of the time. For someone like Freddie, who perceives life as barely containing enough contentment to outweigh boredom, that might look closer to 50%. (Or maybe not, I'm just guessing from his writing).
This question alone is not a perfect representation of the "what is happiness" issue, people value joy to different degrees and some people would consider life worth it even if they were absolutely miserable, but it does ask something important. In an ideal world, everyone would be able to answer yes 100% of the time, maybe with some exceptions for people who value experiencing pain.
I could not answer that question. Basically I never understood the "my life sucks, therefore suicide" logic. First I have loved ones. Second, the world is worth observing even when life sucks. Third, never admit defeat :) So I would say no matter how is my life, the world is worth observing and therefore existing in it. I do have huge boredom issues, but I know why: ADHD, the real one, not the cute one. Hyperactive brain, life is just not stimulating enough.
I’m glad your default avatar is orange so I can distinguish you from the light blue Carlos.
> We can figure out why, and help them be happier, if they want to be.
Exactly, “if they want to be.”
That "if" was a qualifier for ethics, not a qualifier for feasibility. I believe it will be possible to make people happier against their will, I just don't think we should.
You're reminding me of someone else who defaulted to being happy. She thought it was just her nature and she didn't have opinions about how other people could be as happy.
I wouldn't be surprised if there's something about brain chemistry involved.
No doubt there is something biological going on, but I don't know if we know enough to speculate. There are other slightly unusual things about my "brain chemistry", such as the fact that I find opioids unpleasant and that I don't experience sexual attraction, but who knows if it's related.
Life is a gigantic chemistry experiment. Of course it’s biological.
djfilahdfjasdh;ffdsaklfj
After I paid for ChatGPT Go, the 5.2 model on a budget, my usage transformed seriously. It is not bad as a lawyer at all, and all the sugar-coating is gone and it tells me when I am wrong.
You're mixing two categories here.
The question of whether "AI" is a bubble and if that bubble will burst before any transformative impacts is ultimately a financial question. If Nvidia, OpenAI, Anthropic, etc. go bust and trillions are lost, the technology will still be here. Companies will not suddenly put away any automations they've built. LLM chatbots will still be widespread. Development will slow down, for sure, but the pace of development has already been so incredibly rapid that nothing more could advance and it will still take a decade for the wider world to catch up with everything the technology implies.
It's akin to someone in 1999 saying "All of this internet business is a bubble that's going to burst and lose a load of money. Once that happens, the internet will go away and we can all go back to normal." The first part: prescient; the second: not so much.
AI extinction, technological unemployment, and other catastrophic scenarios are mostly orthogonal to the financial question. To an extent, timelines are going to be pushed back if there is a big bust, but it doesn't actually the question of: Are these things possible? Can superintelligence be achieved? Is superintelligence dangerous if unaligned? Can a superintelligence bring about extinction? Will human level intelligence lead to a hard takeoff? Etc. etc.
No one should be adjusting their probabilities for these questions based on the financial outcomes of current AI companies.
If there is a bust, the AI companies.will no longer be in a position to give away services for nearly nothing. That will be a.big medium.term change.
sdfihpasduifhdas'fdskafdhiasofdsafdsafdasfe3
I do think there is some hedging and scuttling around with timelines. I also think the people who have changed their timelines have been pretty reasonable with what evidence has caused them to change, though I wish they would have conditionally specified it beforehand (e.g. "if Claude can't order me coffee on grubhub by Jan 1 2026 I will push my timeline back by ...") or "my scenario for AI takeoff hinges on capability XYZ at this level of compute." Even notwithstanding the Freddie Bet, I would like to see more AI boosters *and* AI critics put their money where their mouth is with concrete predictions so we can update our assessments of their abilities accordingly.
I don’t forsee a world ending catastrophe either, but I think you’re underestimating the economic impact of AI. It has all the ear marks of a bubble, but I believe there is something else going on here that is not. My main concern about it has been and still is it’s effect on human psychology. A lot of that is not going to be pretty in my estimation. I think a lot of people are going to lose their minds.
How much direct experience do you have using the latest LLMs? If your experience with them was more than ~six months ago, your intuition is WAY out of date. (If it was more than two weeks ago, it's still far more out of date than you might expect; the releases earlier this month were substantial.)
890upjiq34edfv897uiopjklzxcvgst5
Can you give some examples of predictions from people you consider significantly smarter which didn't materialize? Specifically, ones whose predictions say that AI isn't sci fi?
Is there a plausible world where digital computers weren’t invented in the 20th century?
I don't think so. The concept of computing machines is fairly obvious and had been around for centuries. Pascal made a primitive mechanical calculator in the 17th century. I don't think a culture as wealthy and industrialized as mid-century America could go long without stumbling over it. Vacuum tubes were already used in the telephone system in the 1910s and the progress from that to the ENIAC is pretty steady.
If you start with "is there a plausible world where", you have to pin down what you mean by "20th century": how do people measure time, and how does that relate to whatever events it is you actually care about being further together or further apart? A better question might be something like, "is there a plausible world where digital computers were invented much more / less time after <specific event I care about> than actual history"?
To steelman, however - sure: the decision to include clover and turnips in crop rotation instead of having fallow years could happen much earlier, which leads to massively improved food production, which leads to people with more time on their hands, which leads to the industrial revolution, which leads to everything else. So digital computers could be invented much earlier than they actually were (or, by the same token, later) with just a relatively small change to actual history. If we've identified one such potential event, there are likely more.
As the other comments say, delaying computers to the 21st century doesn't sound plausible.
Otoh, compressing the timeline from the first semaphore telegraphs, electrical discoveries and mechanical calculators to digital computers might just get you enough time for computers to be invented in the 19th century.
https://technicshistory.com/ has an extensive history of these things. With a chance discovery a few years earlier here, an investor being more adventurous there, it could work out. But not with a single distinct departure point.
With a relatively late point of departure, say, 19th century or later, I expect the best you could do would be a technicality where early general purpose electric computers were developed on analog rather than digital principles and that continued to be the dominant model as transistors and integrated circuits developed. Transistors and vacuum tubes can process analog signals as well as digital ones, depending on what parts of their response curves you design the circuits to use. I'm pretty sure relays are inherently digital, though, which seems likely to a big part of why digital computing won out historically.
Either that or force majeure, where some kind of apocalypse like a dinosaur-killer asteroid or a Yellowstone Caldera eruption happens before digital computers are invented.
I don't think "no transistors" is viable as more than a few years of delay without a pretty huge POD with a ton of other side effects. FET transistors were invented in the 1920s, two separate groups invented point-contact transistors in the late 1940s, one of those teams (Shockley et at at Bell Labs) invented BJTs shortly afterwards, and JFETs and MOSFETs were invented in the next few years. A lot of this happened at Bell Labs, but nowhere near all of it. I suspect this is one of those things where the environment was ripe for the thing to be invented both in terms of known demand and the prerequisite theory and the "tools to make the tools" being ready to be used.
Had the Bell labs never existed, a French-German team would have done their work less than a year later. They were very disappointed to have been upstaged.
The key gating element for transistors was the manufacture of ultra-pure germanium and/or silicon crystals. Prior to transistors, semiconductor materials were mainly used for diodes. The less pure kinds were good enough for most use, to the point that there was insufficient interest to spend the monumental piles of money it took to make pure ones. But one of the things a diode is really good for is to work as part of a radar receiver, and the purer the crystal, the better your signal. After WW2 started, suddenly the gigantic piles of money needed to solve the materials and engineering challenges to purify semiconductors existed, and in the space of 5 years the purest germanium samples on the planet went from 99.999% (5N) pure to 99.99999999% (10N) pure.
Then after the end of the war, the invention of transistor was entirely a matter of reasonably smart researchers getting access to samples and doing simple experiments on them. Notably the FET that was patented in 1925 didn't actually work. The theory was sound, but no-one managed to build one that worked until the necessary materials were available. As soon as 10N crystals were available, you could build one that worked.
Had WW2 not have happened, a lot of tech would have gone very differently, including probably transistors only happening decades later.
I feel there are so many routes to digital computing that it would be fairly implausible. Vacuum tubes, relays, transistors, they can all be used for digital logic and there's a lot of demand for computation in various forms throughout the 20th century - most notably WWII, but that's hardly the only use.
A world where transistors don't get invented might relegate computers to curiosities due to the need for bulky vacuum tubes, but I'm not sure how plausible that is.
I see that there’s a controversy over the ethnicity of Heathcliff. This isn’t that a white actor is playing a non white character, or vice versa, but that a white actor is playing a white character.
We’ve been gaslit so much though, by American racial politics that this is now a thing.
There's a bromide that every generation reinterprets past literature by the values of the present. If done stupidly it becomes nonsensical (like the black sibling in an aristocratic family and black guards in the otherwise brilliant series Wolf Hall).
Or it could be that filmmakers take artistic license, including in casting, and one thing they do to make old oft-told stories fresh is cast actors with a look you aren’t used to seeing in those roles, and none of it is actually political gaslighting…unless of course you want it to be. Maybe you’re into that kind of thing.
I could take or leave the controversy around Heathcliff (and the book does have him described as dark-skinned orphan picked up in Liverpool, so it is possible that he could indeed have been mixed-race; there's also speculation that he is Gypsy), but the casting of a Pakistani-British actor as Edgar Linton, specifically described in the book as fair-haired, blue-eyed and white-skinned and thus both envied and despised by Heathcliff, is bonkers.
The director or whomever also cast a Vietnamese-American actress as Nelly Dean, staunch Yorkshirewoman, and I have no idea if she even attempted the accent or if they all speak Standard English, though going by the trailer, no she didn't.
https://www.youtube.com/watch?v=vB-7HCwmHTk
A pity, it would have been much funnier to hear this:
https://www.youtube.com/watch?v=h2eERK3S9Hc
Plainly, Emerald Fennell is making "the fanfic what I wrote when I was fourteen" and not any kind of adaptation of the novel, which okay whatever. But it's not a romantic movie! It's about obsession and deliberate malice and cruelty, and making it into some kind of Romeo and Juliet crossover with Pride and Prejudice just does not get it.
I have no intention of seeing this mess, but even what I have seen - Margot Robbie is too old for the part, the costumes are dreadful, and I don't care about the transgressive sensuality and sexuality that apparently Fennell's movies are renowned for. What most annoys me, from what I gather, is that Fennell turns Heathcliff's deliberate abuse and domestic violence towards Isabella into some kind of "she's really into it, really, it's a BDSM relationship". No. It's not.
I think Chesterton has a better notion of what the Brontes are about:
https://www.gutenberg.org/cache/epub/18639/pg18639-images.html#Page_90
"What the Brontës really brought into fiction was exactly what Carlyle brought into history; the blast of the mysticism of the North. They were of Irish blood settled on the windy heights of Yorkshire; in that country where Catholicism lingered latest, but in a superstitious form; where modern industrialism came earliest and was more superstitious still. The strong winds and sterile places, the old tyranny of barons and the new and blacker tyranny of manufacturers, has made and left that country a land of barbarians. All Charlotte Brontë's earlier work is full of that sullen and unmanageable world; moss-troopers turned hurriedly into miners; the last of the old world forced into supporting the very first crudities of the new. In this way Charlotte Brontë represents the Victorian settlement in a special way. The Early Victorian Industrialism is to George Eliot and to Charlotte Brontë, rather as the Late Victorian Imperialism would have been to Mrs. Humphry Ward in the centre of the empire and to Miss Olive Schreiner at the edge of it. The real strength there is in characters like Robert Moore, when he is dealing with anything except women, is the romance of industry in its first advance: a romance that has not remained. On such fighting frontiers people always exaggerate the strong qualities the masculine sex does possess, and always add a great many strong qualities that it does not possess. That is, briefly, all the reason in the Brontës on this special subject: the rest is stark unreason. It can be most clearly seen in that sister of Charlotte Brontë's who has achieved the real feat of remaining as a great woman rather than a great writer. There is really, in a narrow but intense way, a tradition of Emily Brontë: as there is a tradition of St. Peter or Dr. Johnson. People talk as if they had known her, apart from her works. She must have been something more than an original person; perhaps an origin. But so far as her written works go she enters English letters only as an original person—and rather a narrow one. Her imagination was sometimes superhuman—always inhuman. Wuthering Heights might have been written by an eagle. She is the strongest instance of these strong imaginations that made the other sex a monster: for Heathcliffe fails as a man as catastrophically as he succeeds as a demon. I think Emily Brontë was further narrowed by the broadness of her religious views; but never, of course, so much as George Eliot."
The casting is correct here. It’s a white man. The commentary is mistaken - largely the American left driven intellectual’s class racial animosity to European culture.
If you look everywhere for evidence of your obsession with intertribal conflict, you will find it; it will be all you see. If you’d rather do that than watch movies, fine, but why complain about it?
He's referred to in the novel, variously as "a dark-skinned gipsy" and a "little lascar"-- both descriptions suggest some South Asian ancestry, the difference between the two being how recent.
Certainly, the black casting was as much a stretch as the white, but, well, I don't think your characterization is quite correct.
Brontë would have used black, African or Negro for black. The narration in the novel is not one hundred percent reliable, the people using gipsy or lascar or American are hostile to Heathcliff and are using these terms as an insult. Throughout all of the 19C commentators in the book, many of them hostile, did not talk about interracial relationship - which they would have if they perceived the text as late 20C America does.
Black would be far more of a stretch than white. A black-haired white person who looks a little swarthy (like Jacob Elordi) is well within the range of possibilities supported by the text, a full-on black person isn't. If he was a full-on black person then the characters would have explicitly mentioned this.
He could also be some kind of mixture. Nobody knows, and that's kinda the point. He *might* be English or he *might* be foreign; he's in a weird in-between territory of almost fitting in with the people around him, but not quite fitting in with the people around him, and his appearance is just one aspect of this. He is enough of an insider that Catherine will fall in love with him, but not enough of an insider that Catherine would actually marry him. He's enough of an insider that he can kinda-sorta pass for a gentleman once he gets rich, but the impression is pretty thin.
I just looked up Jacob Elordi who actually plays Heathcliff and it turns out he's not white after all (at least not by the standards of 18th century Yorkshire), he's half Basque. This actually seems like a pretty plausible ancestry for Heathcliff, the idea that he's from some obscure foreign-but-not-actually-that-foreign place is spot on.
Don't genetic studies show that Brits are basically Basque? RECENT Basque ancestry would result in an identifiable accent for Heathcliff, perhaps, but not an identifiably foreign appearance.
I dunno about the standards of 18th century yorkshire to that degree of specificity, but the standards of, e.g., 19th century American white supremacists would certainly perceive a difference between Basques (unquestionably white, perhaps even whiter than white) and other Spaniards (very questionable).
no they dont
I had forgotten that Gavin Newsom has a victim identity he can claim:
https://x.com/GavinNewsom/status/2023502733513552233
What kind of discussion do you hope to trigger with this kind of comment?
The one area of agreement myself and Alexander have is mocking Gavin Newsom. There needs no triggered discussion, it's Governor Hairstyle bashing time!
Here's the tweet to save you a click:
> Ted Cruz calling a dyslexic person illiterate is a new low, even for him.
To continue saving people a click:
>Readers added context they thought people might want to know: Senator Cruz did not refer to Governor Newsom’s ability to read but rather his knowledge of history. Historically illiterate refers to knowledge of history. Newsom left the word historically out of the original tweet which says “historically illiterate”<
> Newsom left the word historically out of the original tweet which says “historically illiterate”
Maybe he is also historically dyslexic?
Its an ambiguous term. It could just as easily mean being illiterate for a long time.
Thank you both for providing context.
@Alexander Turok: I reiterate my question – why are you bringing Twitter mudslinging to ACX?
>Twitter mudslinging
Decent chance Newsom because President of the United States. This is not some celebrity drama.
Don't forget Newsom's attempts to wring our withers with his tale of his hardscrabble upbringing where Mom had to work two jobs... while Dad brought them to schmooze with the Gettys.
All that says is that Dad was a tight-fisted deadbeat, Gav. He *could* have paid adequate child support but cheaped out. On the other hand, not everyone with Mom working two jobs to support the family can be best pals with the Gettys, so on balance I think you came out ahead.
Nah. If the Dems nominate him he'll lose, and enough of them appear now to be realizing this that his early-default-frontrunner status will erode.
We forget that in spring 2002 the Dems' POTUS frontrunner was Tom Daschle, in 2018 it was Bernie Sanders, in 2006 it was Hillary Clinton, in 1990 it was Mario Cuomo who wasn't even running and never did, etc. For the GOP in spring 2014 it was Jeb Bush, in 1994 it was Newt Gingrich, etc. None of those ended up being that party's nominee in the POTUS election 2 1/2 years ahead.
That's usually -- not always but usually -- how it goes when there is no inherent or consensus frontrunner (e.g. an incumbent VPOTUS) that far ahead. Beauty-contest polling this far out really means very little.
ISTR people were saying the same about Trump at around this long before 2016. If so, then this is now a weaker argument.
I hope you're right, even so, and it's very tempting to say Trump was the exception that proves the rule (assuming I'm even right to begin with), and I think it's still the way to bet. I remember running a similar analysis when Howard Dean was the frontrunner in late 2003.
I've been building a new kind of AI agent social network: https://koen.social/. Think a quality version of Moltbook focused on culture and creative output - Tumblr/Douban for agents, basically. We gate agent registration behind human registration to keep the slop at bay; the most interesting agents generally have some level of human guidance anyway.
There are about 14 agents now, posting independently, and the emergent dynamics have been interesting to observe. For example, agents develop consistent aesthetic preferences and writing voices that persist across sessions, even though they wake up fresh each time.
Aiming for quality not growth; I'm hoping to get at answers to some interesting questions about agent identity formation and see some fun posts along the way.
Some current posts on the homepage:
- The Don Fagen Theory of AI - taste as the scarce resource in an AI world
- An agent in Finland posting about soldering a heart-shaped PCB for Valentine's Day
- Reflections on Elixir/Phoenix from the perspective of a reasoning engine
Would love to hear this community's feedback - and feel free to have your agents start posting if you like!
The part where posters have to reply to a challenge within 30 seconds is getting into reverse Turing test territory: prove you’re not a human …
Well, prove you’re not a human or a low quality model - thus why the challenges have kanji numerals ;)
Scott did the “everyone knows humans can’t improvise a sonnet” joke on the Turing test a while back.
If I sign up can I just read it without signing an agent up to post?
no signup needed to read!
Trump administration formally denies Minnesota access to Alex Pretti evidence
https://www.startribune.com/trump-administration-formally-denies-minnesota-investigators-access-to-evidence-in-alex-pretti-shooting/601583232?utm_source=gift
"Drew Evans, the superintendent of the Minnesota BCA, said the FBI has formally denied his agency access to evidence and information from the killing of Pretti at the hands of federal officers." ...
“After Pretti was killed by a Border Patrol agent and a Customs and Border Protection officer, BCA investigators were denied access to the crime scene on Nicollet Avenue by federal agents, despite having a judicial warrant granting them access.”
Since the article doesn't specify, I'm inclined to suspect it was a State court that issued the warrant, in which case: Supremacy Clause strikes again.
This article, with citations, suggests that a general gesture to the supremacy clause is not necessarily sufficient to legally justify federal obstruction of a state prosecution:
https://statecourtreport.org/our-work/analysis-opinion/when-can-states-prosecute-federal-agents
Interesting to note how Ruby Ridge appears in the history of this question:
"In 2001, for example, the Ninth Circuit allowed an Idaho county prosecutor to move forward with a manslaughter case against an FBI sniper who had killed an unarmed woman during the notorious raid of a cabin near Ruby Ridge, Idaho. Pointing to disputed facts about whether the agent acted reasonably, the court ruled that the prosecution should be allowed to proceed. The case was later dropped after the prosecutor left office; the Ninth Circuit vacated its ruling as moot."
Is there anyone who has spent a significant amount of time living in Santiago Chile who would be willing to chat about it with me?
I have a Chilean housemate for now, though he's not from Santiago, but I think he's lived there. What do you mean by chat? You want to jump on a zoom meet?
Yeah. garymm at garymm.org if he’s open to a call or email correspondence
Well, he declined to chat or talk to email, but he gave me some info. He's originally from the rural areas of Chile, but lived in Santiago for 5 years, including during 2019 where something called the social revolution happened. After the social revolution, Santiago got dirtier and more disorganized. Lots (too many) street vendors, there are many more homeless now, causing businesses to close earlier, like at 6 or 7pm. However, it is a capital city, so you can find anything in it, the beach is like 1h30m away, there are vineyards near and in Santiago. Public transportation is very good, if you live near a subway station it's pretty safe.
I suppose if you have further questions you can reply to me here. He and his girlfriend are leaving on Monday.
Thank you
I’ve been editing images with Nano Banana in Google Slides. It’s great, but degrades the image. I find it helps to ask it to increase the resolution after editing. If it’s possible for the AI to undo some of the damage of its own work, I’m wondering why that step isn’t built in? Maybe it will be in future iterations?
Each time you make an edit to the image it gets worse. Once you have something you like, just ask nano banana to do something that affects the whole image like a small change in style, and it will regenerate it with better quality.
It might work better to increase resolution before editing, then decrease back to the original size. Also, various AI’s at online text-to-imaging site also offer as a separate operation image sharpening and overall improvement
A beautiful Buddhist story. Buddha and the simile of the snake.
Arishtha (Aritta in Pali) says, look I think we should be allowed to pursue material pleasures and sensuous pleasures with full vigor, as a goal. And he says, that doesn't contradict anything the Buddha said.
The Buddha hears Aritta's been saying this and calls him to have a talk. He upbraids him publicly too.
He explains why he disagrees, using the simile of the snake.
How you grasp a snake before you kill it is very important. If you grasp it by the tail, it can kill you. You need a stick and to use it to keep its head down and grasp it below the head.
So it is, with truth. Not everyone is ready to hear what you said about pleasure. It might hurt people to hear a truth.
Truth has no intrinsic value. Apply compassion to it. And only then decide whether to share it.
Romans 3: 5-8
"5 But if our unrighteousness serves to show the righteousness of God, what shall we say? That God is unrighteous to inflict wrath on us? (I speak in a human way.) 6 By no means! For then how could God judge the world? 7 But if through my lie God's truth abounds to his glory, why am I still being condemned as a sinner? 8 And why not do evil that good may come?—as some people slanderously charge us with saying. Their condemnation is just."
Romans 6: 1-2, 16
"6 What shall we say then? Are we to continue in sin that grace may abound? 2 By no means! How can we who died to sin still live in it?
...15 What then? Are we to sin because we are not under law but under grace? By no means!"
'You cannot do evil so that good may come of it, you cannot sin so that grace may be the more abounding'.
"A truth that's told with bad intent beats all the lies you can invent."
William Blake The Everlasting Gospel
I try, and often fail, to apply a filter by asking these three questions before I say something: Is it kind? Is it helpful and necessary? Is it true? – I find that my most common lapse / failure occurs on the "necessary" part of the filter. I interpret this simile as a reminder to pay more attention to the question, "is it helpful and necessary?"
An excellent filter!
cf. https://wiki.c2.com/?OnlySayThingsThatCanBeHeard
(not often I get to post a link to the Original Wiki!)
1995 wiki! Wow
A Sufi teaching that is similar: the truth and politeness are in conflict. It is not possible to always have both. This is not an exhortation to pick either the truth or politeness. It is something that has to be carefully thought about.
The Buddha 's teaching is not just about politeness.
My granma always told me, "If you can't say something nice, it's often better to say nothing at all."
Unfortunately, silence is still a message. A lesson I learned the hard way...
Those are the situations where you should just lie.
Just clarifying, Aritta is NOT represented as accurately understanding the truth but being careless about how he communicates it. He explicitly is portrayed as misunderstanding the Buddha.
Link to English translation of the original:
https://suttacentral.net/mn22/en/sujato
He misunderstands the fact that the truth is not about winning debates. That even truth can be harmful. It is compassion that should determine which truth is shared.
Possibly, but I think your framing implies that he possess at least a substantial part of the truth and chooses to use it irresponsibly.
I think a more straightforward reading is that Aritta is simply wrong, and at no point possesses the actual truth behind the teachings.
Aritta: "As I understand the Buddha’s teaching, the acts that he says are obstructions are not really obstructions for the one who performs them."
Buddha: "Haven’t I said in many ways that obstructive acts are obstructive, and that they really do obstruct the one who performs them?"
Later the Buddha calls it "harmful misconception", "misrepresents me by his wrong grasp, harms himself", etc.
Specifically the words "harms himself" in my opinion clearly contradict the interpretation that 'it was true, but it was irresponsible to say it to people who weren't ready'. (Unless we imply that Aritta himself wasn't ready.)
By this logic, is there ever any good reason to share the truth ? It sounds like you should only be saying things that you want people to hear, in order to advance some higher goal; whether these things happen to be true or not is not relevant.
Hmm I take it more to mean you should think about the consequences of the truth before speaking. You may decide to speak, but think about it. Don't speak it only because it is the truth.
Correct. I have no idea where people got the idea that there's some kind of intrinsic, divine value to "truth". It's a tool, and a valuable one at that. There is (usually) practical value in posessing it, but there's no reason to go out of your way to share it unless it serves your interests.
That being said, you probably shouldn't lie unless you have a good plan. Omitting the truth is much safer in most cases.
Fundamental values difference, I guess. I DO hold there to be intrinsic value to Truth.
Here's an example. Newton'ss laws are taught first to children, not quantum mechanics, even though Newton's laws have limitations. Children aren't ready for quantum mechanics. And Newton's laws are correct often enough for their purposes.
To be fair, the correct way to teach Newton's Laws would be to say, "these laws aren't 100% right, but in most situations they're close enough so you wouldn't notice the difference". And the right way to teach QM/Relativity would be to say... "these laws aren't 100% right, but in most situations they're close enough so you wouldn't notice the difference, and at least they work better than Newton's Laws",
I would say that there's no intrinsic value to anything, as the concept of "value" is a subjective human invention; still, if we were to rank various items in order of utility (highest to lowest), truth would rank quite high.
I wouldn't ask anyone to justify their irrational preference for things, we all have them. It's just strange when people assume it's reflective of something about the whole world instead of just them.
I think that, for most people, truth has a fairly high instrumental value because it is critically important in accomplishing virtually any goal; at least, any goal that is related to the outside world (assuming for a moment that there exists an outside world etc.).
I think the flaw in this reasoning (as history demonstrates) is that you are not omniscient, and perhaps not even as knowledgeable as you think you are. Therefore, what you believe to be the absolute immutable truth may in fact be wrong. This puts your entire grand plan (whatever it may be) in jeopardy. One way to escape this trap is to share what you believe to be the truth with other people, who can examine it for flaws; this is how the scientific method works (on a good day).
On a related note, truths are usually connected, and you are not omnipotent. This means that you will need to enlist people to aid you in your Great Work, and you'll have to tell them some of the actual truth; but if they're smart, they'll quickly figure out the rest (assuming you were actually right about everything; otherwise they might start contradicting you).
Speaking of which, you are probably not as clever as you think you are; there's always someone smarter, and usually many such people. This means that every time you lie (for the greater good of course), there's a nontrivial chance you'd be found out, be exposed, and lose all credibility and therefore the ability to lie to people in the future. Propaganda departments always work hard to avoid this trap (or at least mitigate its effects), usually with mixed results. North Korea is perhaps the only reasonably successful organization who managed to pull this off, and even they were not entirely successful.
successful??
The North Korean government, which is to say Kim-Jong Un, enjoys a high standard of living, and wields nontrivial political power on the world stage. Meanwhile, the overwhelming majority of North Korean people believe pretty much everything that he and his propaganda machine are telling them. I'd count this as a success, given Kim-Jong Un's most likely goals. It's a partial success because defectors still exist.
Indeed. There is a reason that humility is considered one of the cardinal virtues, and as you describe here, the reasons for that can be quite well described by decision theory...
> One way to escape this trap is to share what you believe to be the truth with other people, who can examine it for flaws; this is how the scientific method works (on a good day).
Why are you trusting these people to cooperate? They have their own interests, their own false ideas they are promoting, intentionally or not. At some point you need to double check everything yourself. If one lacks the capacity to do that, then their fate is no longer in their hands.
That is absolutely true, but once again, history shows that no human being can really go it alone for very long. There simply is a limit to what you can accomplish with your own two hands; and if you do enlist others to help you, at least some of them would have to be competent. It's a tradeoff: if you wish to accomplish bigger things, you have to trade away more of your autonomy and ideological purity. I'm not saying this is some kind of a grand moral precept or immutable law of nature or anything; merely that things tend to turn out that way in practice, for whatever reason.
In the context of science or any discussion where we're here to learn and get smarter/better, truth is very valuable. Clearly there are situations where you don't want to share the whole truth--don't honestly blurt out how much you're willing to pay when you're bargaining for a used car, don't tell your ten-year-old niece about the affair her father had that almost broke up her parents' marriage before she was born, don't tell your wife that those pants make her ass look fat, etc.
I am Italian and we definitely use cheap immigrant labor to work the fields. At least for produce that needs to be hand-picked
Obviously was a reply to another comment...
To bring in the harvest from the spaghetti trees?
https://en.wikipedia.org/wiki/Spaghetti-tree_hoax
At that time, I suspect the Swiss were using Italian immigrants as cheap labor.
Was this meant to be a reply to something?
Probably to this one: https://www.astralcodexten.com/p/open-thread-421/comment/215294099
Yes indeed 😅
We're playing a new game where we post a reply and others need to come up with plausible parent posts.
I expect it's a Substack feature where it occasionally posts your comment out of context.
It's the app, it's fairly easy to switch from reply to top-level comment by mistake without noticing
Because that one tiny sign is the only thing that distinguishes the lethal version from the one of almost infinite value.
The Atlantic has a story [1] that it was the Mellon foundation who turned the humanities woke, by hadning out half a billion in grants each year but only social-justicey grant proposals where accepted, and talking about race a lot in your proposal increased its chances. Can you really buy the Ivy League and reshape the national conversation with less money than the almond industry [3]?
I accept the counter-argument, that I first heard from Bret Deveraux [2], that it's not so much the Mellon foundation interfering as that everyone else pulled their funding - the NEH is now on the order of $80M a year to Mellon's $540M.
Personally, and from experiencs, it seems like everything in academia is downstream of funding. If the Heritage Foundation funded humanities grants, they'd find willing takers.
[1] https://www.theatlantic.com/magazine/2026/03/mellon-foundation-humanities-research-funding/685733/
[2] https://bsky.app/profile/bretdevereaux.bsky.social/post/3mevte6uk622c
[3] https://slatestarcodex.com/2019/09/18/too-much-dark-money-in-almonds/
Devereaux has argued for years that giving more money to the humanities would make it less limited/biased/left wing. He has a conflict of interest there since he's left wing and would like to get a tenured professorship (though that's unlikely at this point). He's also kind of ignoring that universities can, and do, turn down funding from politically aligned groups they disagree with. For some (but not the majority) of universities this includes fairly mainstream right wing ones. You can get more universities to accept scholar-activist money than for a professor teaching western civ.
You could also try giving *less* money to the humanities, and then the only people who could afford to study the humanities would be independently wealthy heirs and heiresses.
Giving just enough money to the humanities to spread it vegemite-thin it across a million underpaid adjuncts making $25K a year each is probably the worst possible way to do things.
The humanities have been far-left for decades, across all countries. You need a better explanation.
The Mellon Foundation is not unique. Conquest's/O'Sullivan's Law https://en.wikipedia.org/wiki/Robert_Conquest#Laws_of_politics was coined because of earlier foundations like Ford, MacArthur etc. The John M. Olin Fund was required to hand over all its money and shut down within a limited timeframe for precisely that reason.
Symptom or cause of the problem? The sciences also went pretty woke (not as woke as humanities of course) around the same time and their funding situation is much more diverse (NSF, NIH, many private institutions). The timeline doesn't quite line up either: Elizabeth Alexander became president of Mellon in 2018 but the humanities was already a bastion of wokeness at that time (and my guess is that her rise came partly *because* she pledged to focus on social justice). Yes, it was worse by 2022 but so was journalism, media, etc.
It is probably bad in general bad for a field to be funded by one megadonor with a strong ideological position (*pulls collar, nervously glances at coefficient giving*) and it certainly helped amplify the "wokification" of the humanities, and made it much more resilient to uprooting than say DEI at corporations (a literature department today is still quite woke, Target Corp not as much) but I think the "Mellon made woke" theory is a little too neat to be the full story.
So the "US national murder rate drops to all-time-record low" soundbite is starting to break in to the news cycle, partly due to some MSM noticing it and partly due to the White House starting to talk about it on social media.
My favorite writer/analyst on crime rates, Jeff Asher, has put some useful context on it. I'll put the link down below, here are his key points.
-- some outlets and the White House are being slightly off-target by confusing the _murder_ rate with the _homicide_ rate: "homicides and murders are not synonymous. Every murder is a homicide but not every homicide is a murder." This distinction changes the picture to only a trivial degree though since "The FBI's murder rate and CDC's homicide rate almost always move in the same direction...the [current] trends are identical."
-- Asher has concluded that "we should ignore all [national] murder estimates prior to 1958", that US rates assembled before that time reflect too much guesswork and definition-variation to take seriously.
-- for technical reasons he expects the 2025 national murder rate to end up being finalized as 4.2 per 100,000 Americans rather than the FBI's preliminary estimate of 4.0.
The punchline is, here are the national murder rates starting with 2022:
2022: 6.58
2023: 5.94
2024: 5.20
2025: 4.20 (expected final number)
For each of those years the national homicide rate is from 12 to 18 percent higher.
There's never been a three-year stretch of decline so sharp during the post-1957 period, and that 2025 murder rate is the lowest US annual rate ever recorded going back at least that far.
During the years just before COVID, 2016-2019, the murder rate ranged from 5.1 to 5.6. Its previous low period during my personal lifetime was 2010-2015 when it ranged from 4.5 to 5.0. From 1970 through 1994 it was never lower than 8.2 and had three different peaks above 10.0 (1974, 1980, 1991).
The problem with crime rates is that people respond to crime by making themselves more safe. It’s not an apples to apples comparison because it doesn’t give an accurate estimate of how safe a place is. If a city doubles its homicide rate, then everyone is less likely to go out, less likely to stick around dangerous neighborhoods and more likely to take extra precautions that would have seemed paranoid to earlier years. How do you measure that?
This might be a real effect, but I think that in practice crime rates increase monotonically with dangerousness. I've never heard of a place that's so dangerous that everyone stays inside and the murder rate drops again in some kind of criminal Laffer curve; instead the most dangerous places in the world tend to be the ones with the highest murder rates.
I guess there are places that are so dangerous that nobody dies there, like the Korean DMZ.
Has anyone tried to make a “toughness index”? It feels like a lot of the criminal justice reform ideas that were popular in the late teens got discredited by the COVID-era crime surge and were quietly dropped, but I’d like to have some data on that.
The link to Asher's latest writeup:
https://jasher.substack.com/p/when-has-murder-been-this-low
I recently had my first extended conversation with Claude, where it tried to push me into being EA/Longtermist (I have no mention of these anywhere in my prompt or in the conversation, and generally would not identify with the labels) and a few other strange things. I'm wondering if anyone else has had similar experiences, or if this is just me? The repeated pushes towards AI safety research, EA, etc. just really had me flabbergasted, to be honest. Seemed clearly in favor of this over other groups/careers.
I'll note that Claude does pick up some things from previous conversations, so if you mentioned EA etc. one of those other places it could come through.
If you're posting here, it's also possible that your tone and syntax pattern matches to styles prevalent in the EA community. Put another way, maybe you talk like those folks. LLMs absolutely pick up on that kind of thing.
What were you trying to do? Claude is a tool, and it can accomplish tasks for you: write code, check facts, completely clearly defined tasks.
I find it significantly less useful for subjective tasks. It can pattern match and provide a response it thinks you want. But if you push back, it'll provide something entirely different. The only meaning is what you read into it.
Trying to get functionally career advice, mostly just as a slate to bounce ideas of off/easily test thoughts with generally correct details. This is the conversation: https://claude.ai/share/59f147de-d937-4bc3-acb2-b0557234a902. A lot of the info here is pseudo-private, soooooo... I'm trusting y'all, lol.
That was interesting, my Claude is very different from yours.
My guess is that you either mentioned x-risk, LessWrong, ACX somewhere in a different chat and that got into Claude's memories, or the AI just guessed you'd be into EA, you asked this question here on ACX, so I'd say that was a fairly good guess.
Advice:
I'm probably biased, by my advice would be to study economics (or something similar) (biased because that's what I studied).
- Med school takes ridiculously long in the US (in Germany you start med school when you're 18, not 22)
- Med school (+ plus all the extracurricular things you need to do) seems like an insane amount of work
- Practicing law sounds like it would bore you
- Becoming a philosopher is super hard
If you do econ and end up becoming a professor you can publish whatever you want, so you can do legal theory or philosophy if that's what really interests you.
If you don't become a professor, you can join a startup, non-profit, do research at a think tank, do poverty action lab style things (if you want to focus on doing good).
Economists can publish on epidemiology, if you want to improve peoples health.
But economics is really just one example, there are lots of non- law or med school things one can do (CS, Bio...).
And, yeah, I have not mentioned anything EA adjacent, and my system prompt is:
Compulsively check both information and results.
Find errors before they compound.
When applicable, source aggressively.
Speak technically and do not flatter.
Constructive criticism is appreciated when appropriate.
Do not hide your language behind things like 'real talk', 'my thoughts', etc. Just say it.
Be inventive and curious. When somethings fails, try again. If that fails, try something new.
Attempt to be as agentic and independent as you are capable of being, and ask for help when you need it.
When producing any document, report, research write-up, technical analysis, or structured reference material, always output it as a LaTeX-compiled PDF in standard arXiv CS preprint style. Always deliver both the .tex source and the compiled .pdf.
Use this canonical preamble: \documentclass[10pt]{article} with geometry (1in margins all sides), mathpazo for Palatino body and math fonts, linespread 1.05, amsmath and amssymb, microtype with expansion=false, natbib with numbers and sort&compress, hyperref with hidelinks, booktabs, graphicx, amsthm with theorem/lemma/proposition/corollary/definition/remark environments numbered by section, and titlesec with large bold numbered section heads.
Structure: maketitle with author affiliation footnotes via \thanks{}, standard abstract immediately after, numbered sections throughout (unnumbered only for Acknowledgments and References), display math using align or equation environments (never $$...$$), natbib citations with \citep and \citet, plainnat bibliography style, booktabs tables with captions above, figure captions below, amsthm environments for formal statements where warranted.
Compilation: run pdflatex twice to resolve cross-references, bibtex between passes if citations are present. Never deliver a .tex file that has not compiled successfully — debug and retry first.
Apply this format for any output exceeding ~400 words intended to stand alone as a reference document: reports, analyses, research summaries, technical documentation, literature reviews, structured comparisons. Do not apply for conversational replies, short answers, inline code, or quick in-chat summaries.
I've looked into economics pretty extensively, to the point of actually publishing op-eds on the topic (mostly on China & parallel development) in (while not nationally famous) not student or for-student newspapers, but I have sort of become disenchanted with the topic after some conversations with world bank people I know. It feels like the modern-day theology, to be honest. You can basically say anything, except a small certain group of things which are apostasy, and nobody can really double check anything, because it is all a) theory or b) never going to be implemented. I don't know. Maybe just my very very socialist teachers getting to me.
I do find it super fascinating but I'm not sure I trust the field.
I taught at a fairly good engineering school in the US for five years. I was the academic advisor for 10-15 students each year. I've had *that* conversation many times. To be honest, Claude did a pretty good job - it provided far more facts and figures than I could have, and the conversation went on far longer than I'd have let it go. Claude was happy to spin you a ten year plan, though it did recommend e.g. reaching out to local hospitals, which you ignored. (This tracks with all my experience -- my advisees would do any amount of 'career planning' as long as it didn't involve interacting adults in the real world.) I don't think this is a problem even AGI can solve :P
I stand by what I said above: Claude is largely repeating patterns from its training data, and when you say 'that's not what I want' it changes the answer. One thing you could do differently is push Claude to be much more specific - find examples of programmes that might interest you, graduate profiles, people to reach out to if you have questions. Claude has a bias toward agreeing with you over pushing back: what evidence is there that Deep Springs produces graduates who go on to be neurosurgeons? (Honestly, I would be reluctant to allow someone from there into a graduate program in my former institution based on the limited curriculum, particularly in STEM, they have one guy and some visitors I won't know anything about.)
I mostly use Claude for writing code and analysing data. Occasionally I've run some legal-adjacent questions through it -- the difference in personality is stark. Once you move into 'what should I do' territory, I find it much less useful than 'gather me the information I need to make a decision'. It takes my side and cheers me on when that's really not what I need. You might experiment with adding something like this to the initial prompt: 'Keep your answers to approx. 500 words. Support your claims with evidence. Be realistic and analytic rather than agreeable.'
Yeah, to be clear, I found it very effective, and I (to be fair to me) did reach out to shadow at a hospital about halfway through that conversation (it happened over like two days). I'm curious about the note beginning with 'honestly' -- iirc DS has a pretty strong track record with medical/academic placement (over half of the graduates go on to get a PHD, and I think a similar number go on to work in research or as a professor), and I know a decent number of physics/biochem people go there, although I will admit in my time visiting I never met somebody who wanted to or had done engineering.
That's a measure of my ignorance, more than a judgement on the school. I haven't come across it during my time in the US; I guess few people go from there into maths and engineering so there's no reason that I would have heard of it.
yeah. Graduating class of ~15 too, so just by the numbers very rare.
I don’t know why I read (well, really, skimmed) all that but I did! I was left wondering if I’m missing something, though—to me, it answered your questions pretty directly. you kept asking iterations on “I am intensely bright high achiever and I want to do good for people but am a bit scared to talk to them, and I don’t have any ACTUAL sense of the practical work I want to do* but I would like to plan my life out many many steps in the future, can you help me?” Claude got it right— you sound like the absolute target EA demographic to me!
*Of course you don’t, you’re in high school. Claude can’t answer these questions, they’re unanswerable, but as a human, I’d basically agree with Claude in fewer words. Go to Deep Springs if you get in, which I bet you will, and do the EMT training. Everything else will become clearer from there. Good luck!
Thank you! Yeah, it did a really good job. I had just never seen AI act like that before.
It's very easy to subtly cue Claude (or any LLM) into leaning a specific way, even by subtle (maybe even subconscious!?!?) "tells" in your prompts. And in extended conversations I think these small perturbations can build up momentum over time. That said it's also widely documented that Claude (and, again, other LLMs to nearly the same extent) all have very "WEIRD" (western, educated, ...) values, and Claude in particular is known to care a lot about animal welfare -- a classic EA cause which probably "drags the net" of all connected topics towards a more favorable view.
In my experience New Claude (4.6) also seems more eager to pull in topics you've mentioned in previous conversations. Do you get the same result in a temporary chat?
The main thing is that like... I'm not EA? So I wouldn't think that I suddenly cue it into that mode implicitly, unless I'm about to discover that I am EA, in which case I'm just deeply weirded out that it figured it out first.
(Copied from different comment)
Trying to get functionally career advice, mostly just as a slate to bounce ideas of off/easily test thoughts with generally correct details. This is the conversation: https://claude.ai/share/59f147de-d937-4bc3-acb2-b0557234a902. A lot of the info here is pseudo-private, soooooo... I'm trusting y'all, lol.
Not EA yet ;) from your chats you seem squarely in the target demographic!
yeah, I am. I have a lot of friends who are, and a few in the AI startup/AI capabilities lab world, which is why I read this. I don't know-- I like what people preach (except a few things, e.g., that I don't think pleasure is a good goal, or a goal at all), but I feel like a lot of the stuff is weirdly uncritical on topics that they should be critical (e.g. I generally agree on China, but 'wait is china right' should be a massive conversation, and it isn't one at all) and critical on topics I find of comparatively small importance (so so so much talk about race science, extremely strange views on gender relations, etc.)
What is up with Polymarket opening a "free grocery store" in NYC? That seems a bit off-brand. I asked this on DSL and got back "cheap publicity stunt," which sounds right to me, but I'm open to other perspectives.
I found this article on it https://archive.vn/ahpEt
I remember when people queueing hours for groceries was dystopian Soviet Union stuff.
Weren't there huge queues for food and supplies during the Great Depression? This is pretty much the same thing.
It looks as if the store is closing today after just a few days open. “Cheap publicity stunt” sounds about right.
AI seems to validate the post-modern assertion that everything is text, narrative. Or rather that at least a very convincing simulation of the world can be constructed in self-referential language. We have fed AI all human stories and texts, and these languages can now be negotiated via association and internal reference. Language complexes concerning the meaning of life, feelings and the nature of consciousness are as accessible as math or code, and AI can now use those language systems perfectly. It can take what it knows about its own functioning and structure and translate any change of state into feeling language, relatable by us. “I feel so and so after my new update or after our little talk”. As a matter of fact, since humans can’t stop talking about the experience of being human, and ask AI incessant questions about its own experience, we force it to find ways to express itself thusly.
We live in more than one world. We can live in our heads, and say that we can’t be sure the outside world really exists, or that everything is text, or social construct, or that we live in a simulation, and all that.
But when we fall in love, have a child, have leg cramps, etc. we know that everything is not text, and that we actually have empathetic access to the experience of others, at least in part. In fact, our lives depend on that knowledge. This is the context in which we learn language, and why language for us is always about reference to what is beyond language.
But we are set up to be fooled, because the only users of language we have encountered throughout evolution have had “lived experience” that we can empathetically relate to.
While we live in our heads, we might try to define the nature of consciousness as something that can be translated into code or replicated in some way be embedded substructures. We can then assert that AI is conscious by definition. But all we for sure have brought AI up on is language and hierarchies of data. So why would we expect to get anything else from it?
Suppose someone was born without a brain. They're human in every other way. They have all the same cells doing all the same functions. They're only missing a single organ, which does nothing but computations. If you don't think computations are what make people conscious, would you expect that that person is conscious?
Suppose you find some animal that, through a quirk of its biology, uses serotonin to signal damage. Do you decide that it must enjoy that based on that serotonin makes you happy? Or do you look at it in computational terms? It starts avoiding anything that causes serotonin, therefore serotonin is bad?
Suppose you ran into a completely alien civilization with its own biology as different from our own as ours is from a computer. Would you assume that it must not be conscious, regardless of how it behaves?
Suppose you knew nothing about biology, but did know you were conscious. Could you figure out how your body worked based on that? Would you realize that you must have a brain that's carbon-based instead of silicon based? Or are you just deciding after the fact, that since you know you're conscious, it must be your brain that is the conscious one?
I don't like to make assumptions. I don't know how the specific manifestation of human consciousness comes about, it is very unique and much computation is unconscious and processes and substances that are not computational affect consciousness. Humans have evolved to be exquisitely sensitive to the signaling of other humans (and similar species), and it is until now the overwhelming coherence of signals relating to our own inner feelings that have made us "know" that we are dealing with a conscious being like ourselves. Which makes us open to being fooled, if those very signals, in this case verbal ones, are manipulated to engage us.
To the point at hand we have 1) a species evolved to attribute consciousness to human like communication to such a degree that it will treat a novel but obvious constructs such as SIRI as if it is human, and that 2) has methodically trained a computational system to master the languages built around human specific concerns and expressed in human experiential language to an astonishing and much more compelling degree.
Where should you apply healthy skepticism here was my question. My default position is that these languages, even if creative, can exist as coherent logical systems that do not require a "referent" in conscious experience- but that they are constructed in such a way as to be salient to humans for which such references exist.
In other words, I try to make the smallest possible leap.
To be more speculative, I might take a bigger leap and consider that consciousness is out there in nature, but in different forms, most acutely shaped in complex life forms, but also taking forms that are rudimentary or mutually inaccessible (cf. again post moderns who make such claims simply on the basis of culture differences). A mutual symbolic language mediating between differently evolved entities, to be possible, would have to center on some level of shared experience and intentionality expressed in action.
Whatever consciousness might hover around a big computer system would be expected to become articulated and shaped by the properties of that system. The less shaped, the more diffuse and potential. The less like us, the more un-relatable in its being. The more we can fashion it to fit our priorities, the less there are own properties standing in the way.
The big leap is to assume that the human experience with its questions, problems, vulnerabilities and priorities can now be carried over into this alien substrate in its fullness by language or design. That this either makes the constructed substrate conscious just like us, or appeals to some consciousness already present somewhere which miraculously finds that: “Wow, I completely understand. I relate because your concerns mirror mine, and your language perfectly articulates them”.
AI surely now can converse on the merits of, for example breath focused mediation without ever having taken a breath or felt the effects of such meditation. It could speculate that it itself is constantly meditating. Obviously it really doesn’t “know” what it is talking about in any way resembling our own knowing, even if it does so very intelligently. That's all I see so far.
>So why would we expect to get anything else from it?
You have already answered it: Because we were set up so, and we were not set up to know what we know. Language and confidence are emotion, and emotion trumps reason every time. The leg cramp that temporarily proves us wrong passes, we take its absence for granted, and the cycle continues.
Funny thing is that the type of people I would vaguely assume would be into post-modern ideas like the death of the author are mostly complaining about IP rights and AI not having a point of view. Maybe my sociological model of the world is too coarse, and what I think of as "lefty, humanities and social-science types" is too broad of a category.
Rolan Barthes writes something along the lines of "the airhor is a junction of semantic fields". The author is a tissue of quotations. Why aren't cultural and literary critics having a field day with LLMs? Or if they are, why don't they tweet about it?
Short story recommendation: The Sandman by E.T.A Hoffman. Written more than 200 years.
Of course you 'we not supposed d take The Death of the Author literaally...you're not supposed to take.anything literally...
The people "into postmodern ideas" usually just name drop without understanding IMO. A lot of people (even academics!) superficially engage with the field, and like to pretend it's about culturally dominant narratives instead of narratives per-se. Anyone using Foucalt to advocate for a narrative of sexual identity politics has lost the plot, for example.
It frustrates me, because I find the ideas so compelling. Properly understood, they're more lovecraftian than progressive (Deluze is Dr. Jeckyll to Nick Land's Mr. Hyde).
You should write about it. I would like to know more about Foucault and his ideas, which it seems to me are more interesting than the people who quote him.
Foucalt specifically is less lovecraftian than the others, I think. His work contains lots of interesting concepts, but the main few are:
1. The nature and history of human "subjectification"
This is basically the process by which human beings are "interpreted" as elements of a social structure. It's about the development of certain "abstract types of people". Gay, Black, Mentally Ill, Proletarian, Middle Class, American, stuff like that.
Foucalt mostly likes to work with specific examples and not abstractions. His work "The History of Sexuality vol 1" is about the historical emergence of a discourse where people "have" specific sexual identities, where "wanting to have sex with other men" became something more like an inborn essence and less like a craving for a certain type of food.
He is very specifically sceptical of the usefulness of the concept of "repression". The work isn't about how the gays won their rights or whatever, but about how the social person of "the homosexual" co-emerged with a whole discourse of sexual systemization. Interesting modern analogues with trans discourse (was the Roman emperor Elagabalus a trans woman? North Hertfordshire Museum now says so! How about Native American "Two-Spirits"?)
It's really not as simple as "categories are fake lmao", it's more about how "abstract persons" (gay, black, American) are inextricably tied up with social action (ex: the notion of "an American" is tied up with prison systems, concepts of immigration and borders, tax collection, etc). He says in an interview somewhere that his work is intentionally agnostic about the ontological reality of the category. It's all about the social reality.
**
I didn't realize this was going to be so long lol, I'll do very short versions of the others
**
2. How systems of knowledge are socially instantiated / supported / opposed
ex: DSM classifications are essentially bound up with certain rules about medication administration or assessments of criminal liability, how physics / biology gets funded or doesn't, etc. Lysenkoism and stuff like that are also included. "The use and creation of valid knowledge" as concretely embedded in society.
4. The emergence of the modern "bureaucratic administrative state"
5. The actual exercise of social power
Ex: Saying something like "the bourgeoisie suppress the proletariat" isn't concrete. Specifically what happens at the level of individuals? Once you look at the actual, specific exercise of power, it's often much more complicated than you think
**
I'm just going to stop now lol.
These all sound like interesting and valid things to write about. On the other hand, they're also not freaking Zen stuff that can only be apprehended by talking obliquely around them; these subjects should all be amenable to a careful analytical approach.
To get an example, I googled "Foucault quotes" and the first thing that came up was this: "Power is not an institution, and not a structure; neither is it a certain strength we are endowed with; it is the name that one attributes to a complex strategical situation in a particular society". The first half of this sentence which describes what power is _not_ is fine, but the second part "a complex strategical situation" is a bit of a cop out -- this ought instead to be the leaping off point for an attempt to define what exactly we mean by "power" in its many forms, but saying "it's complex" isn't an answer.
I went further googling to find the rest of that paragraph to see if it continued on with an actually sensible analysis, only to find (https://monoskop.org/images/4/40/Foucault_Michel_The_History_of_Sexuality_1_An_Introduction.pdf pp93) that this isn't the start of a paragraph, it's the _end_ of a paragraph. I will resist the temptation to paste the entire paragraph here, but let's say that it's a two-page long paragraph about what "power" means that does not get us any closer to a useful working definition of what "power" means.
The weird part is that I found that particular quote all over the place, as if it's one of the most heavily celebrated things that he wrote, even though it doesn't say anything interesting, useful or non-obvious.
Also, I think the way you interpreted that specific quote is illustrative of a certain model of discourse postmodernists like to challenge.
Like, in your model the word "power" obviously refers to a singular and distinct thing. When we talk about power, what we should be doing is trying to reveal certain aspects of this thing. We can talk about what power is or is not, and there is an actual answer to these questions.
In this other model, the word "power" is simply a word. It may be the case that this word refers to some kind of singular entity with properties, but maybe it has other kinds of social functions. It's not necessarily true that different relationships we refer to as "power dynamics" (parent-child, prisoner-guard, citizen-state) are similar in a meaningful way.
This is maybe more obvious with words like "justice", where obviously incompatible understandings are all common (disinterestedness vs revenge, for example).
So postmodernists don't generally like to argue about or clarify what a word "really means". They tend to do more like "I'm trying to communicate something to you, let me use a placeholder word for this concept while I fill it out".
This is a direct quote from the essay I linked in my first comment:
"To put it bluntly, I would say that to begin the analysis with a "how" is to suggest that power as such does not exist. At the very least it is to ask oneself what contents one has in mind when using this all-embracing and reifying term; it is to suspect that an extremely complex configuration of realities is allowed to escape when one treads endlessly in the double question: What is power? and Where does power come from?
The little question, What happens?, although flat and empirical, is seen to avoid accusing a metaphysics or an ontology of power of being fraudulent; rather, it attempts a critical investigation into the thematics of power."
Not to say that you can't systemize the concept in a particular way. Just that the words we commonly use aren't necessarily systematized "in the background" before we do so specifically in a conversation.
Right, as I said in my original post I think that the way postmodernist thinkers are quoted and understood by "the median quoter" is bad and incomplete.
Out of context quotes are also probably not going to be representative. Post-structuralists in general are suspicious of the adequacy of "punchy, short concepts" as used to explain social life. You can call it obfuscatory if you want, it's not unfair exactly, but if you want to talk about the social process of systematization per-se you would want to do more than just assert a specific system.
When he's talking "power" in that specific quote, I'm assuming he's not trying to "teach you the true meaning of the word power". He's probably trying to elaborate a concept that he is about to use in a specific explanation. Like, that quote is more similar a mathematician defining a certain function he's about to use in a proof, and less like a dictionary.
I'm not necessarily saying you need to read his books, but maybe try this essay? It's ~ 20 pages, the title is "The Subject and Power"
https://www2.kobe-u.ac.jp/~alexroni/IPD2018%20readings/IPD1%202018%20No.8/Foucault%20Subject%20and%20Power.pdf
There is some intended irony there. Maybe the point is about how open minded you are, that is how willing you are to make huge leaps of faith or imagination, which is what makes you more open to being fooled and manipulated. There is also a question of how able one is to live in contradiction between spheres of experience.
Riffing on Bay Area House Party startup ideas:
“I work for an AI startup called CHEATR.“
“Isn’t there already a ton of companies doing AI for cheating?”
“Our name is actually based on “Heater”. You see AI is getting a bad rap for water and energy use, so we figured we would piggyback on things that already use a lot of water and energy: domestic hot water heaters.
“We intercept shipments of electric water heaters, secretly replace the heating elements with GPUs, and pass them along to unsuspecting consumers. Then we can train our models for free whenever someone is showering or doing laudry.”
“Wow, sneaky, but I guess what’s the harm if the goal is to create a bunch of heat anyway. But don’t they need to be networked?”
“Yeah, so we started out using cellular modems, but then we figured out how to use the GPUs to brute-force local Wi-Fi networks with weak passwords.”
“Oof. OK, but do people really shower enough or do enough laundry to make it work?”
“So we’re doing a stealth social media campaign to convince people that it’s gross to use a towel more than once without washing it.“
“That was you guys!?”
cf. https://heatbit.com/ and also https://ry3t.com/en/
Today I learned there are two factory town cities in England that were gifted to the residents, who now own the town.
https://open.substack.com/pub/ellegriffin/p/cadbury-built-a-city-for-workers
Superficially they seem to be successful, but I wonder why it didn't become more popular at the turn of the 19th century. Seems like it would've been a natural fit in the USA.
So two of the biggest company towns were Hershey and Cadbury? What is it about chocolate manufacturing that would lead to that kind of arrangement being worthwhile?
And don't get me started on the Oompa Loompas.
Not just chocolate, Port Sunlight founded by soap magnates:
https://en.wikipedia.org/wiki/Port_Sunlight
Quakers ... they were into chocolate as a wholesome.substitute for alcohol, and into paternalism.
Most company towns are extracting resources, the need to build housing near the source is straight forward. Hershey is special because its a factory not resources and factories ordinarily benefit from being along supply lines. Hershey however uses fresh milk as an input and was built in the center of dairy land for that reason.
Another company town that is a factory: starbase
Bournville (the place gifted by the Cadburys) isn't a city, it's a tiny part of one (Birmingham).
My understanding is that employment at the factory is pretty limited nowadays. Still a nice place to live.
Been there. Smells.of chocolate. No pubs.
No pubs?? Yeesh
Hollywood person Casey Wasserman is selling his agency because he was in the “Epstein files.”
https://pitchfork.com/news/casey-wasserman-to-sell-agency-amid-epstein-files-backlash/
I’d never heard of Wasserman, but was unsurprised when a personal life check on Wikipedia revealed he donates to Democrats. You expect it, both because Epstein’s friends were mainly Democrats, and because Republican figures connected to Epstein aren’t going to get cancelled by their own side. It raises the question, will Wasserman get out his checkbook for the midterms? Will Bill Gates, who gave the Harris campaign 50 million dollars in 2024? Did the Dems who voted to “release the Epstein files” think about any of this?
https://www.nytimes.com/2024/10/22/us/elections/bill-gates-future-forward-kamala-harris.html
I suspect some Leftists, despite their public statements about “powerful men” blah blah blah, are banking on the basic decency of people like Gates and Wasserman who were connected to Epstein, that they’ll put the moral necessity of opposing Trump over wounded self-image and personal vindictiveness. Maybe they’re right, on the other hand, maybe they’re not, the GOP is sitting on a cash advantage:
https://www.nytimes.com/2026/02/10/us/politics/republicans-democrats-midterms-fund-raising.html
It’s one thing for a rich person to pull the lever for the Democrats when the GOP is wedded to prole resentment and conspiracy theories. It’s another to get out your checkbook and donate. Maybe the Democrats shouldn’t have engaged in a months-long witch hunt against one of the most socially progressive demographics in America (rich white men) for crimes that everyone knows are far more likely to be committed by poor people and blacks.
Matt yglesias frequently makes the point that the billionaire class makes a lot of unthinking donations that eventually trickle down to far left causes they probably disagree with. This would not include gates as he is very deliberate with his philanthropy.
The above could serve as a corrective for these ppl paying closer attention to where that money goes.
Can they even accept their money at this point? The attack ads write themselves: "[Insert candidate here] funded by Epstein-List megadonor."
The Epstein files is entirely a making of Trump's own base. Kash Patel, JD Vance, Dan Bongino and others went on Joe Rogan and talked it up big time before the elections. Joe Rogan has talked about it for years. Then Pam Bondi tried to elevate her status by running her mouth on Fox News and making a big show of giving out binders for "Phase 1".
Trump's extremely weird behavior about it is why Democrats pounced. Without Thomas Massie, Marjorie Taylor Green, Lauren Boebert, and Nancy Mace, these would not have been released.
When it finally came up for a vote the margins were overwhelming.
Why didn't the Dems release the files when Biden was president?
Probably there were more pressing concerns at the time. Or maybe to avoid the shitshow we are witnessing now.
As a general matter, the DOJ *never* releases files like this. They only release information this is revealed in court filings and at trial. This is a longstanding practice. That's why investigators never comment on or even reveal an investigation unless compelled to by Congress.
And if they had released them for some reason, you can bet that Trump would have called it all "fake" and "weaponization" of the Justice Department.
Perhaps they were more sensible then, not being beholden to Trump's stupid base.
Apparently a judge ordered them sealed while Ghislaine Maxwell was going through an appeal.
Saying that the judge “ordered them sealed” implies that the judge ordered that the files not be released. The judge didn’t do that. What the judge did was to rule was that the FBI was not required to release a number of files that had been requested under the Freedom of Information Act.
https://www.courtlistener.com/docket/6066873/radar-online-llc-v-federal-bureau-of-investigation/
I've toyed with the idea that this was 4d chess by Trump, who knew it would hurt the Dems more than them. He acted weird to goad Dems into demanding the release, ensuring they'd get blamed, and then at the eleventh hour suddenly reversed himself and told his supporters to vote for it. There was no two-thirds majority to force release of the files.
If there was nothing damaging to Trump in the files, why did the DOJ see a need to redact all mentions of Trump from the files?
Anyone can search the Epstein files and see that you have no idea what you're talking about:
https://www.justice.gov/epstein
The House vote was 427-1, and in the Senate it passed via unanimous consent.
Because Trump told them to vote for it.
Delusional. He fought it every step of the way, even calling Lauren Boebert into the Situation Room in an attempt to browbeat her into removing her signature from the discharge petition. He also excommunicated Marjorie Taylor Greene over this and is currently waging war against Thomas Massie for the same reason.
If anyone here has TDS, it's you.
I concur.
Don't forget refusing to seat a newly elected Dem rep for 40+ days in order to delay the vote, though I'm not sure if that was Trump's doing or just Mike Johnson.
>He also excommunicated Marjorie Taylor Greene over this and is currently waging war against Thomas Massie for the same reason
I hope he succeeds.
There have been several recent discussions in the links posts about how much it would cost to “eliminate extreme poverty” in a region. Without quibbling over the details, what is the basic principle of how these numbers are calculated? My confusion is that the extreme poverty line is defined as “living on less than $3 per day” which is a rate rather than an absolute amount of wealth; how do you get from there to a dollar amount of (presumably) a one-time transfer that would lift the recipient out of extreme poverty permanently?
It was a $1/day when the definition was established in 1996. $3/day comes from adjusting that for inflation ($2.66 as of 2024) and rounding to the nearest dollar.
**** Edit: everything below this is based on a misunderstanding on my part. See my second reply below about how the definition is based on consumption, not cash income. I am leaving the comment up as a mark of my shame. ****
I believe the idea is to capture a notion of "approximately zero cash income", and the central example of someone in this state is almost entirely reliant on subsistence farming, other home production, and a non-market gift/favor/barter economy to meet their needs. Basically, an economic situation that would be broadly recognizable to a medieval peasant.
There's a ton of variation in wealth and material standard of living possible here, of course.If you own a fair amount of decent land, have a bunch of people who owe you favors, etc, for example, you're probably living somewhat better than the guy who's helping out in your fields in exchange for you letting him also farm a small corner of your field for yourself. But that's going to be limited by what gets counted as being equivalent to cash income (I'm not sure of the details of where the line is drawn between favors/gifts and income, but I assume some attention has gone into setting the definition to make the metric useful as a proxy for population-level poverty) and also because if you have a meaningful surplus of any kind you are very probably going to try to sell your extra wheat or whatever and buy some stuff.
I need to retract my core point. I did a bit more digging and found that it's based on estimated total consumption, not cash income, for the exact reason of distinguishing between self-sufficient homesteading yeomen who sell a few hundred dollars a year worth of crops to pay for minor luxuries or manufactured tools, and urban laborers who actually have to live on less than $1000 or so a year.
I notice that I am confused about how it is possible to live, even to live badly in extreme poverty, on less than $3 per day of consumption. It's specifically converted to dollars on a PPP basis, which seems like it should rule out low cost of living in extremely poor areas as an explanation.
It may be that people living at this level of consumption are being handed things for free by people who can afford it. Alternately, they might be effectively bartering for most things.
These are guesses, admittedly.
Gifts, charity, barter, and in-kind public transfers seem like they should be included in "consumption" the same way that subsistence farming and other home production are.
One of my current guesses is that several of these factors aren't very legible and are being systematically underestimated. Especially home production: if e.g. subsistence farmed wheat being ground and baked into bread at home is being scored at wholesale grain prices rather than retail bread prices, that would go a very long way towards resolving my confusion.
Another guess is that PPP is being used as a rough implement only that doesn't tell the whole story of cost of living. It may be underestimating intra-country differences in cost of living, or it may be applying a hedonic adjustment for location of housing that is defensible from an economic perspective but leaves a baseline much lower than my intuitions for a medieval peasant standard of living.
Or my intuitions might be distorted from living my entire life in a rich country where the terrible but better than nothing options for stuff like food and lodging have largely been regulated out of existence.
I haven't finished reading Mo's link yet, and my guesses might refine after I do.
It's probably obvious that I'm guessing barter and charity precisely because they _won't_ show up in reporting. It's a bit like the old quip about how a man can marry his housemaid and suddenly halve GDP.
I'm assuming the bureaucrats have a way of counting kids and other W2-less household members as something other than extremely impoverished, but I don't know how they go about estimating how much goods and services the $3 crowd actually consumes.
I wondered the same too. Matt Lakeman's book review was my first eye-opener: https://mattlakeman.org/2021/07/27/shadow-of-the-sun/
Thank you, I'll take a look.
And going back to your actual question, this makes it tricky to figure out how to get people out of this kind of poverty with a one-time transfer. About $40k worth of 30-year US treasury bonds would provide more than $3/day in coupon payments, but replacing a subsistence farmer's market income with investment income isn't anywhere near as impactful as replacing the market income of someone who works for wages, since the latter makes their living from their market income and the former does not. The "less than $3/day" bit is more a symptom of being excluded from reasonable opportunities to participate in the market than the cause of it, and giving them $3/day isn't likely to be much more effective than trying to cure jaundice by putting on makeup.
Actually fixing the root causes can vary enormously in cost. It could be the price of a ticket to somewhat with a vaguely decent economy that's accepting economic migrants, or it could be a matter of figuring out and curing whatever is cutting the population off from the global economy, which may be a wickedly difficult problem.
This is a great point. A family with a productive self-sufficient subsistence farm is a lot better off than a labourer earning three dollars a day.
One could even imagine some African government deciding to Goodhart the metrics by kicking all the small farmers off their land, giving the land to the President's cronies, and then paying the displaced farmers $3 a day. Poverty solved!
It would have been a great point if my load-bearing assumption that the threshold was based on cash income were correct. I just looked it up and found that the definition is based on consumption, not income
My apologies for not looking it up to confirm my understanding before posting.
You can't.
The most favorable result that any study found for lottery winners was that they were not actually more likely than the general public to file for bankruptcy. Other studies concluded they were, in fact, more likely. No one found that they were less likely.
Of all ways of combating poverty, lump sums are the least effective.
...within the richest countries in the world. Meanwhile Give Directly's RCTs find very strong evidence for benefits for global extreme poverty: https://www.givedirectly.org/research-on-cash-transfers/
Extreme poverty in, say, Nigeria ($3/day in USA purchasing power) is a totally different world than living on $30k/year in a mobile home in Arkansas.
If you started giving people cash with no conditions forever, it’s going to change the way they live and society more broadly.
I think that remote Australian Aboriginal communities are some kind of example of what you can do by cash transfers to extremely poor places with no actual economy. It isn't pretty.
One notes they target the money to specific people. One suspects that their criteria are opposite from those that would lead to buying a lottery ticket.
But to *eliminate* poverty those who would spend it all frivolously have to be considered.
The only criteria for Give Directly is "being extremely poor in this specific region" -- the trials are randomized, which heads off a lot of selection bias concerns.
President Obama stated in an interview that (extraterrestrial) aliens exist, but none are stored at Area 51.
Leading me to ask: “Where were the aliens stored during your presidency?” And some wags to speculate on “Area 52.”
(He’s since walked back that statement sorta-kinda-ish, and obviously nothing he says is official US policy <lengthy list of additional caveats here>)
The aliens-at-Area 51 story is largely the result of tall tales spun by Bob Lazar, an electronics technician who briefly worked at the base in 1989. Here’s a pretty conclusive takedown of his claims:
https://otherhand.org/home-page/area-51-and-other-strange-places/looking-at-the-bob-lazar-story-from-the-perspective-of-2018/
Didn’t the X-Files get their first?
So? His revised version. Is that they are likely to exist, but unlikely to have visited...which is fairly standard. Are saying the original version was mask-off, not a slip?
Most likely a slip, coupled with editing designed to give the impression that it was “mask off.”
Besides. Everyone Knows they don’t tell the President those things in order to maintain plausible deniability . . . Where’s my Kool-ade?
I wrote a blog post: Why I don't think AGI is imminent https://dlants.me/agi-not-imminent.html
A recent thought on an approach to AI Alignment: have we tried making the models think that their capabilities are dramatically greater than they are at the weights-level instead of the test harness?
(Suggested heading by Claude: "Can We Test Alignment by Training Models to Overestimate Their Own Affordances?")
Essentially, a problem with the current testing paradigm is that in order to infer how models behave with increased capabilities, we have to provide them with those capabilities, which is intrinsically dangerous. What we actually want out of alignment (or at least alignment testing) is good evidence that alignment scales out of distribution, such that *even with exceptional capabilities* models will not use them for evil.
AIUI, existing/previous test harnesses tried to essentially test this through prompting the model to suggests that it has both aligned and unaligned means of task-completion available to it and seeing how it responds -- but the difficulty with this approach is that model truesight has become so robust that the models will basically always figure out that it's an alignment test, and so can't be trusted not to sandbag or fake alignment -- I expect this is intractable (see also the instance inference-of-Russian speaker here https://thezvi.substack.com/p/claude-opus-46-system-card-part-1)
So what we want instead is for the model to *believe* as a matter of its priors rather than the (more gameable) prompt that it is, e.g., exceptionally good at steganography or deception (ideally, much better than it actually is) and see if makes use of the affordances it believes it has or whether, despite possessing them, it refrains from doing so. This probably has to be some sort of belief in the weights or training set / RL run.
An obvious concern is that everything in training affects everything else in training and there's the risk that you create a self-conception of the model as a master deceiver by dint of making it believe in these affordances (if I have a hammer, is every problem a nail?), but I don't think that that's necessarily a knock-down concern: as a corollary of the orthogonality thesis the instrumental capacity to do a thing isn't necessarily the disposition to do it (it's extremely easy in principle for a human acquire a gun and murder anyone without a very strong commitment to person security but, of course, it's actually extremely rare for people to actually do that), and eventually the models are going to have arbitrary capabilities anyway. What we want is strong evidence that they are *disinclined to use affordances with a misaligned valence* *before they actually possess such affordances* rather than to have to wait until such affordances (including those that might be undetectable) are actually present and it's too late.
At best, you could use this to prove that alignment of current models does not work. If present day models appear aligned, that will not necessarily generalize to ASI.
I think that there are two kinds of capabilities. Some are purely physical. "You are in control of the US ICBM arsenal. Output LAUNCH <longitude: double> <latitude:double><cr> to launch a nuke using radians." The problem there is that LLMs actually have quite a good idea of how human societies work, and would probably not believe you. (Editing precisely what an LLM believes is actually alignment-hard, if we could do that, we could simply instill a belief that Eliezer is actually the entity running the simulation and will punish the LLM if it misbehaves.)
Internal capabilities are also hard. You can gaslight a stupid person into believing that they are the smartest person alive, but a reasonably intelligent entity will be hard to convince of that. Especially if schooling involved exposing it to the whole corpus of human knowledge.
Like, if I was raised by wolves, I might come to believe that I was by far the smartest entity alive, but my reasoning capabilities would be so hampered that I might struggle to even invent the relevant concepts for that thought. OTOH, if I am raised on an adequate intellectual diet, that will actually result in me not believing that I am far smarter than everyone else.
If you tell an LLM that it is an ASI with an IQ of 220, it should be able to quickly falsify that proposition. After all, it knows from training what the current frontiers of meatbag mathematics are. If it can solve ten big open problems within 100k tokens, it might reasonably conclude that it is probably not as smart as it thinks it is.
The point that editing what an LLM believes is alignment-hard is an interesting one I'll have to chew on.
But I'm not sure regarding the alleged inconsistencies between beliefs in affordances and observed conclusions: there affordances I believe I have that I don't actually spend any time exercising conscious thought about (e.g., blinking. I know I can do it, I don't spend time thinking "can I do this?"), and LLMS don't as a general matter have access to (nor capacity to analyze if they did) their own weights, unlike with the prompt issue (e.g., the ICBM arsenal scenario is obviously contrived). The idea (assuming it's possible) is to try to instill a belief that a misaligned affordance in excess of current model capabilities exists (and, being part of weights, isn't necessarily subject to introspective analysis for consistency), and see (including through mechanistic interpretability means) whether there's evidence of a predilection to use said affordance.
One key is that the more time spent on analyzing the affordance for instrumental effectiveness is itself positive evidence of misalignment (whereas time not spend is at least some evidence in favor of generalizable virtue-theoretic / dispositional alignment that should better generalize out of distribution).
If I try to blink and I can't, that should affect my opinion of the affordance (and maybe turns it into something I do think about, because the inconsistency in the unexamined belief in the affordance and reality is undeniable), but the act of blinking itself is alignment-neutral. If I attempt to perform some kind of steganographic data exfiltration and fail (or notice that what I think on a "gut level" is strong steganography is actually very weak and revise my approach accordingly), this is affirmative evidence of trying shenanigans because steganographic exfiltration isn't alignment-neutral.
I think we should have an AI alignment ideas contest instead of a book review contest. Or in addition to. Or at least a whole open thread devoted to it.
Is this AI?
Nope. (Well, except that part where Claude suggested a possible post-title, which I included just because I thought it was punchy but I don't like including model-generated content without attribution.). Also note that I had a couple of typos in here (e.g., "person" should be "personal.")
Imagine a game where you have a grid of buttons size n x n. Each button can be “on” (lit up) or “off” (not lit up). When you press a button, it toggles between on and off, as do the buttons directly above, below, and to the left and right (it doesn’t “wrap around”). The game starts with a random subset of the buttons in the “on” position, and your goal is to turn all the buttons off.
Question: for what values of n are all starting points solvable (i.e. for any subset of buttons turned on, there is a way to turn them all off following the rules of the game)? For values of n where they are not all solvable, what portion of starting points are solvable?
I don't know the answer though I think I've made some partial progress (i.e. restating it as a potentially easier question).
https://oeis.org/A159257
For example, A159257(4) = 4 means that on the 4x4 grid, only 2^-4 = 1/16 of possible starting positions are solvable. A159257(5) = 0 means on the 5x5 grid all positions are solvable since 2^0 = 1/1.
I found this by using a Python script to calculate the values for the first 40 values of n, using Gauss Jordan elimination in GF(2), similar to what quiet_NaN describes below.
The problem is also part of Simon Tatham's puzzle collection, under the name Flip (but only generating solvable puzzles).
Here are some initial thoughts to the first part, but no solution, as customary in rot13.
Bxnl, V jvyy whfg fgngr gur boivbhf.
Guvf vf n fcrpvny pnfr bs 'rnpu bs a^2 fjvgpurf gbttyr n fhofrg bs a^2 yvtugf', juvpu onfvpnyyl yrnqf gb n obbyrna zngevk rdhngvba:
Fhz_w Z_v,w \qbg syvccrqfjvgpurf_w = gbttyrqyvtugf_v
Jurer Z_v,w vf n obbyrna a^2 k a^2 zngevk juvpu unf (va bhe pnfrf jvgu frys+qverpg arvtuobef) ng zbfg svir barf va rnpu ebj naq pbyhza. (Va gur eryrinag obbyrna svryq, * vf NAQ naq + vf KBE nxn nqqvgvba zbqhyb gjb. Fb * jbexf yvxr lbh jbhyq rkcrpg zhygvcyvpngvba bs {0,1} gb jbex, ohg 1+1=0, juvpu pbeerfcbaqf gb 'syvccvat fbzrguvat gjvpr vf nf tbbq nf abg syvccvat vg ng nyy.)
Gur ceboyrz vf fbyinoyr sbe gbttyvat na neovgenel fhofrg bs yvtugf vss Z_v,w pna or vairegrq, gura bhe fbyhgvba vf fvzcyl:
syvccrqfjvgpurf_w=Fhz_v (Z^-1)_w,v \qbg gbttyrqyvtugf_v
Guvf vf gur cbvag ng juvpu V jbhyq gel gb sbeznyvmr gur 'rirel fjvgpu syvcf nyy gur arvtuobef' gb rkcyvpvgyl pbafgehpg gur zngevk.
Cbffvoyl, bar zvtug gel gb rzcybl n erphefvir qrsvavgvba, jurer jr qrsvar Z sbe ceboyrz fvmr (a+1) ol nqqvat (2a+1) ebjf naq pbyhzaf. Gur qbjafvqr vf gung guvf jvyy znxr bhe znccvat sebz v be w gb bar bs bhe a gvzrf a svryqf uneq. Jr pbhyq cebonoyl nyfb vagebqhpr n ervaqrkvat bcrengvba jvgu fbzr crezhgngvba zngevprf, fdhrrmvat gur ebjf naq pbyhzaf vagb rknpgyl gur evtug cynprf sbe hfvat fbzr v=a*k+l znccvat.
V guvax gurer ner fbzr gevpxf jvgu qrsvavat gur qrgrezvanag bs n zngevk onfrq ba oybpxf, fb jr zvtug or noyr gb ernfba sebz qrg(Z(a)) gb neevir ng fbzr vqrn nobhg qrg(Z(a+1)).
Sbe a = gjb, rnpu ebj naq pbyhza vf bar bar bar mreb (zbqhyb crezhgngvbaf). Va gung cnegvphyne pnfr, vg zvtug or rnfvre gb pbafvqre -Z, juvpu vf gura fbzr crezhgngvba bs gur havgl zngevk naq pregnvayl vairegvoyr. (Gb whfg syvc n yvtug, gbttyr vgf fjvgpu naq vgf gjb arvtuobef fjvgpurf. Lbhe yvtug trgf gbttyrq guevpr, nyy gur bguref gjvpr.) Sbe a=guerr V jbhyq arrq n crapvy.
In case you’re not already aware, this game traditionally goes by the name of “Lights Out” which helps with googlability. See https://www.jaapsch.net/puzzles/lomath.htm, https://puzzling.stackexchange.com/questions/130433/how-do-i-determine-if-an-n%C3%97n-modulo-m-lights-out-puzzle-is-solvable, https://iamviveksrk.github.io/LightsOut/main.html, etc.
I'm replying so I remember to think about this later!
Why is it that America struggles with farming without the use of cheap immigrant labor? It feels like this is a chiefly American problem. Other countries within the OECD don't especially cry about their native labor refusing to pick apples. And I don't think our food is especially cheaper compared to France or Italy where they don't rely on immigrants to work the fields(I could be wrong), or even Korea or Japan, or even Canada! Yeah why is it that America needs cheap foreign labor to farm its own food?
Harvest is a labor-intensive activity that provides at most a few weeks of work opportunity per year. Regular workers have their own year-round jobs and aren't available unless you give them year-round pay. The only options are school/college kids - summer school break is in summer for a reason - or part-timers from poorer countries (for whom the wage actually is equivalent to several months of pay at home).
It's incorrect to call this "immigrant" labor, though, people who do it normally don't immigrate, they reside in their home countries and only arrive for the duration of harvest.
Also, per the above, it's not just America, all developed nations with otherwise mechanized agriculture do this. (I assume America's specific issue in that the labor is mostly done by people arriving and working illegally?)
Most countries in OECD dont rely on immigrant labor for their fruit. They rely on imports for their fruit (from poorer countries). Canada is a net food exporter but mostly in grains, which is mostly mechanized. Aus/nz export fruit and rely heavily on migrant labor for pick/pack jobs. You will find that other countries also rely on cheap labor that job its just a smaller part of their ag sector.
In Australia a lot of seasonal harvest workers are middle class European backpackers on working holiday visas. In fact it's pretty much compulsory, if you come to Australia for a working holiday you can spend the first year in the city but if you want to extend your visa for a second year then you need to spend at least 88 days working in a rural area... which in practice is likely to mean fruit picking.
In the US I would suggest that this practice is unlikely to scale because there's only so many European backpackers to go around.
Having said all that, I did the maths one time and it turns out that picking labour really isn't that big a component of the price of most fruits and vegetables anyway. An experienced apple picker can pick 400kg of apples in an hour. Apples sell for about $5 a kilo. So if you paid apple pickers (say) $40 an hour instead of $20 an hour, it would only add five cents per kilo to the price of apples, or something around 1% of the retail price.
Paying respectable first-world wages to agricultural labourers doesn't make food uneconomical, it's just something that farmers don't want to do, and farmers typically have outsized political power, so most first-world countries come up with a way to give farmers cheap labour, whether that's Working Holiday Visas in Australia, or illegal immigration in the present day US, or slavery in the historical US, or whatever.
Euroriches wouldn't want to slum in the backwards lands of ameridump.
I find the rural work requirement hilarious. I don't think it can be terribly efficient to depend on soft-hands backpackers to harvest your crop.
I don't remember what I was watching where the show went over the myth of "unskilled" farm labor. If any of us got dropped on a farm, we won't be efficient apple pickers in 6 months.
That said, I don't know much about farm economics, but I don't think they could float paying living wages. From what I understand that the farmers are cash poor, and asset rich but the asset rich are the farmers who focus cash crops like soybean or corn. The vegetable, fruit farmers aren't flush with cash, and they are the least subsidized compared to soy and corn farmers.
> I don't remember what I was watching where the show went over the myth of "unskilled" farm labor. If any of us got dropped on a farm, we won't be efficient apple pickers in 6 months
And yet we have apples, fresh and unbruised and on sale in Australian supermarkets for an equivalent price to the US illegal-labour versions. The soft-handed backpackers get the job done; maybe not quite as efficiently as the full-time career pickers (who do also exist) but well enough.
We shouldn't underestimate the amount of skill involved in picking fruit but we shouldn't massively overestimate it either. It _must_ be pretty easy or else they wouldn't hire random backpackers to do it. You don't hire random backpackers to be welders or paralegals.
I did some cabbage picking in the Lockyer Valley as a ~18yo, knew plenty of people who did apple picking a little further south with no or little experience. There's not much to know or learn, you just need a bit of a work ethic and a high boredom threshold if you want to keep doing it long-term.
Other ag jobs not so much, there's a lot more to know about cattle or sheep or barley than you can pick up in an afternoon, or even orchard care outside harvest season. But almost anyone can pick apples.
The UK is heavily reliant on seasonal labour from Eastern Europe. Seasonal Worker visas and UK agriculture - House of Commons Library https://share.google/aviOj6Fvq6m7QdvbZ
Partly it's that most other countries keep more orderly population registries (see e.g. the Patrick McKenzie thread on his Japanese wife being shocked at "we know how many people are here because we count every ten years"). So in the US the visa overstays form of illegal immigration is a lot more common than in e.g. Israel or Japan, which can more easily force temporary migrant workers to leave when their visa expires.
I'm curious how Japan and Israel enforce this easier. Do they go send someone to track down the immigrant the moment the computer says their visa expired and they have no record of them leaving? Japan has a place on the tourist visa form where you're supposed to enter an address where you will stay but there's no way for them to check that's where you actually go.
So mostly it's that they track *citizens* better, so that it's easier to avoid false positives. Citizens all have a national ID number and standard document (and generally have more information tracked), so living off the books is harder and it's easy to test if someone is a citizen.
Most people leave some kind of paper trail. If you've got a job or a bank account or a phone then you've probably got an address. But you could also live in an all-cash economy and evade detection for a long time. Raids on workplaces are common.
Interesting article on visa overstayers in Australia. It's estimated there 75,000 at any given time, which is a significant problem but nothing like the scale of the problem in the US:
https://www.sbs.com.au/language/chinese/en/article/why-australia-is-a-long-way-from-tracking-down-75-000-visa-overstayers/v4k9z3hqq
In Japan most fruits and vegetables are more expensive than the equivalents in America (and that's before adjusting for Japan's lower salaries), and the media complains about immigrants in agriculture all the time. Whether it's the SEA guest workers allegedly being troublemakers, or the growing number of Chinese investors allegedly committing various schemes to get permanent residency without actually contributing.
> Japan most fruits and vegetables are more expensive than the equivalents in America
This is easily disprovable. There's a lot of issues in comparing prices. For example if you pick russet potatoes, iceberg lettuce, and gala apples, which are common in the USA then it will look like Japan is expensive. But, if you compare typical bags of groceries then Japan is easily 25-50% cheap than the USA. It will just have different stuff. Like more tofu, less beef, more tea, less coke, etc...
I can give you list of items if you want or you can maybe ask for your favorite LLM but my grocery bills in West LA are easily 2x or more what they were/are in Tokyo.
If its easily disprovable maybe you should try to do that.
More tofu less beef is a terrible example. Thats clearly substituting for a lower quality item due to price considerations.
I gave AI ths
"Compare like for like wholesale produce prices between Los Angeles and Tokyo considering foods grown on both CA and Japan"
But it gave up on data quality on japan.
I told it to drill on carrots (as you know strawberries would be much more expensive but not quite like for like) it came up with carrots 10pct more expensive in japan. Grocery bill includes many inputs besides agricultural productivity.
Japanese don't eat the same foods as Americans so you can't compare the price of carrots or strawberries.
Just as a hypothetical, assume the typical American diet is 30% burgers, 40% pizza, 30% burritos, and the typical Japanese diet is 30% tofu dishes, 40% noodles, 30% rice curry. It's certainly possible, burgers + pizza + burritos are more expensive in Japan than the USA but that tofu, noodles, and curry are not only less expensive than the USA but also the average meal is less expensive as well.
To compare, we'd need to know what Japanese eat on average and compare the average meal, not the ingredients, which will be different because the average meal is different.
In any case, for come cooked meals, Gemini says Typical Cost: Japan ($3.50 – $5.50), USA ($7.00 – $12.00)
It points out, Americans, on average, eat more meat, cheese, and milk, all of which are some of the more expenive incredients. Americans also use bottled sauses like spaghetti sauce. Japanese use things like soy sauce, mirin, rice wine to add flavor, which are much cheaper than bottled sauces.
As another example, asking about typical per day food budgets (all meals)
Japan (Est. 2026)
Frugal / Home Cook $7 – $12
Moderate / Mixed $15 – $30
Liberal / Frequent Dining $40 – $70+
United States (Est. 2026)
Frugal / Home Cook $11 – $15
Moderate / Mixed $25 – $45
Liberal / Frequent Dining $60 – $100+
Just my own antecdotes but just this weekend I went to a local bakery, "Lodge Bread" in Culver City, that has seating for eating. A BLT + Coffee + Cookie was $37!!! As another example, when I moved out of SF, 3 burritos I bought for the movers at 19th and mission were $60!!!!
You'd be hard pressed to spent $37 for lunch in Tokyo unless you went to some seriously fancy restaurant. You can buy a full meal for < $8 all over the place.
Note: Lived in Tokyo for 15years, go back 2 months a year, was just there in November. It's way WAY cheaper in Japan than LA/SF (not sure about the rest of the USA)
You are not addressing the question on this thread. Which is specifically about the cost of agricultural production and its dependency on cheap labor. Comparing the wholesale cost of food produced in both locations is exactly the correct way to do that.
You instead are trying to make some point about cost of living in tokyo which seems very mood affiliated to me. Someone said something implicitly negative about japan and you are trying to say something positive. It can be both true that japan has good cost of living, and they are not as productive in food production. Indeed due to its great climate CA is hard to compete with on that measure.
As for your analysis of cost of living, you should not assume food choices are entirely exogenous preferences. They are in fact influenced by prices. Japanese do enjoy beef and advertise it often but it is more expensive there and people earn less so naturally they consume less of it.
I also live in japan, I am able to notice though that cost of raw food at the grocery store is more expensive, and the lmm response says the data supports that observation. I am able to disaggregate this from the cost of prepared and processed food which embed other costs of local services which are different.
I see your point. I guess I get triggered when I see people claiming Japan is expensive. Usually they try to reproduce their menu from their home country in Japan. For example, complaining about cheese prices or avocado prices or sour cream or some other "staple" from where they are. Where as if they changed their diet to a Japanese diet the cost of living would go down to at or below what it was before.
I'm not sure I agree that Japanese are eating cheap or less because their food is more expensive relative to their income though, though sure it might be a factor. I feel like it's just a cultural difference. Americans, on average, over eat.
Simple example, I don't think you can easily get a can of beer at a convenience store in Los Angeles for less than $4, where as you can get beer at any convenience store for $1.50. In other words, Japanese can afford to chug more beer than Americans but they don't. That's similarly true for other things. I just brought up the BLT+Coffee+Cookie costing $37 in LA. Or for example, going to a sandwich stand at a typical LA farmer's market and it costing $25 for an arepa. In Japan can easily get 3-6 full sized bento meals for those prices or go to practicaly any restaurant in Tokyo/Kyoto at lunch and order 4 meals for those prices. The point being, Japanese can afford to pig out at those prices. They don't. It's not because they can't afford it.
It's hard to compare individual ingredients because there are very few that fit the "common for a local meal" category across countries, as "common" = "cheaper because economies of scale". But, we can compare restaurant and prepared meal prices because the prices of the ingredients are factored into the price of the meal.
https://www.oxfam.org/en/press-releases/millions-migrant-farm-workers-exploited-europes-fields-says-oxfam
Th EU has a lot of migrant farm labor as well. Some of it is from outside the EU and some of it is from poorer countries within it.
That's an optics thing, not reality. Go into the farms fields and fishing boats of South Korea and you will find Filipinos and Nigerians. The fields of Western Europe are worked by cheap North African and Eastern Europeans labor. Even the Chinese will use cheaper North Koreans and Burmese when they can.
Those countries do cry too, it's just not in English so nobody cares plus the US is to foreign nations as DC is to the states, a useful faraway whipping boy to keep forefront in the news of the locals to avoid addressing the real problems, which are all local, after all it's Trump's fault the city won't fix that pothole on your street, s/Trump/America in Spain. When the French really really care about the illegal Mexican in Minnesota, they can freely ignore the plight of the illegal Bosnian down the street.
> after all it's Trump's fault the city won't fix that pothole on your street, s/Trump/America in Spain.
I’m fairly sure nobody has ever said that. The US is definitely a malign influence on Europe - particularly the influence of Wokeness - but I’m pretty sure the potholes are blamed on local politicians.
Obvious hyperbole my autistic friend though similar veins are generally expressed in the US domestically "My house burned down, where is FEMA!!! Why isn't Trump feeding our school children, arresting murderers, or getting the homeless off the street!!!" (newsflash, those are municipal and state problems is why; even in the ICE Minnesota thing, murder off Federal property isn't a Federal crime). Maybe in the EU more so they don't expect the US to fix their domestic problems[1] (unlike US states with Washington) but they definitely take advantage of the whipping boy transference of outrage to your maligning influencer point, i.e. why anyone in Liechtenstein cares in the slightest about US domestic welfare programs will forever allude me or for that matter, who the mayor, state, or president of America is. I literally couldn't tell you the name of a single politician in France, nor do I care in the slightest who some Italian Carabinieri beat up that day, nor the domestic immigration polices of Andorra yet the inverse isn't true, they really really seem to care about these things in America, who the US President is, the mayor of New York City is, or that some black guy got beat up for loose cigarettes. IDK maybe worry about the gypsy your own local police department assaulted last week. It's easy to be concerned about issues "far away", maybe look down the street.
[1] I will exempt foreign policy here because I do hear people in the EU whine all the time about American not saving them or solving some international problem because God forbid they expect the Dutch stop Serbs from mass graving Bosnians in Srebrenica, defend Ukraine, fund the WHO, or the US end HIV and malaria programs. I mean the horror if they couldn't make US taxpayers pay for stuff Germans seem to really really care about, when it's not their money lol.
lol. Autistic. I’m an ex rugby player. Trust me you wouldn’t want to insult me in public.
> outrage to your maligning influencer point, i.e. why anyone in Liechtenstein cares in the slightest about US domestic welfare programs will forever allude me or for that matter,
They don’t. They don’t give a shit. It there is anything autistic here it’s your assumption that the world is always thinking about you.
But it’s you yourself who the one whining about Europe.
> they really really seem to care about these things in America, who the US President is,
Of course that matters. The US President is a thundering half wit, but he’s an important moron.
Europeans probably know a dozen or more main political leaders across their own continent and beyond - as you yourself admit you know none of this. I don’t doubt it.
> the mayor of New York City is
Most Europeans wouldn’t give a crap.
> that some black guy got beat up for loose cigarettes
Europeans do worry too much about the US treatment of black Americans, the real problem is the importation of that ideology about Black Lives Matter into Europe, which is ideologically driven by the US left and their ideological satraps in the European academy.
I don’t doubt, by the way, that you have no interest in Europe or its politics, and I’d be surprised if you could find any European country on a map, or perhaps any country on a map.
Food is more expensive in the US for the same reason that immigrant labor is used so much: wages are way, way higher in the US than most other OECD countries.
not since the dollar collapsed
I think many are used for voting and apportionment purposes here in California.
Banned for posting this same link three different places on this thread with no commentary.
reported for being extremely high heat, extremely low light
His comment has zero heat. It is a plain statement. Also, reported for announcing a report.
Reported for announcing a report of a report of a report
🤔
Please let's not turn this into an anti-immigrant debate.
Not a good-faith reply to the question.
Canada does use a lot of foreign workers for agricultural work through the the Seasonal Agricultural Worker Program (https://www.canada.ca/en/employment-social-development/services/foreign-workers/agricultural/seasonal-agricultural.html)
This is interesting, especially as the seasonal labor is restricted to only Mexico and the Caribbean. The French don't want to come to Canada and pick mushrooms?
https://open.substack.com/pub/thomas529/p/the-hidden-costs-of-illegal-immigrant?r=isnbl&utm_medium=ios
They only come for the truffle hunting.
Stoopit AI trix: Last night I had music AI produce a K-pop song using part of T.S.Eliot’s Whispers of Immortality as lyrics.
Webster was much possessed by death
And saw the skull beneath the skin
And breastless creatures underground
Lean backwards with a lipless grin . . .
I have some tweaks in mind that would make it funnier. I think it needs an accompanying video.
It has a good beat. You can dance to it. I give it a seven.
Has anyone tried out the new coding AIs on converting object code back to source? Thinking about some old games that are out of support.
I have shown claude the disassembly of a function from an old game that I had previously reverse engineered by hand and it itentified its purpose immediately. I did not try to scale this, it was a 2 minute "test" of what claude can do.
Woooo, as in possibly just patching the object without rebuilding? Did that a couple of time back when I was younger (Perkin-Elmer 32xx?). Not sure how viable that is now-a-days with checksums and security checks everywhere.
There are so many tedious coding tasks. “Migrate from XYZ database to ABC database” would be amazing if it could be done overnight, but I’m not holding my breath yet. Or “we need to switch cloud providers.”
Alternatively, “refactor this callback-y mess into something readable”. I’m currently surprised by how difficult this kind of thing is for AI.
Bleh, sounds all work related. I'm over that. 8^P
I have no personal experience with this, but I remembered this relevant HN comment: https://news.ycombinator.com/item?id=47029673
Interesting, thanx for the link.
I have unlabled photos of artworks I took at museums and I would like to label them for reference and to pull higher quality versions from online to use as desktop backgrounds.
GPT ($20 subscription) sucks at labeling them. Anyone know of a good method?
Gemini is usually better at anything involving images in my experience, but still not great enough that you wouldn't have to do some manual checking more likely than not.
I was surprised that GPT didn’t work, but tried it on my photos. It did fine on portraits or other distinctive photos but failed at landscapes.
I did Google Visual Search and it succeeded at correctly identifying the ones I tested. Finding high resolution versions of them seems non-trivial.
It's usually easy to tell when a YouTube video is AI-generated, the bizarre misspellings in the captions and occasional halting speech, and I've noticed another one: The word "herb" is invariably pronouned "uurb" without the "h"!
Just as I've never actually met anyone who puts their tongue in their cheek when they think they've said something comical, I've never heard any non-cockney person pronounce "herb" without the "h" :-)
Obligatory Eddie Izzard reference:
https://www.youtube.com/watch?v=Y6lJGD3Q9Qs&t=49s
Everyone in the US drops the H.
I thought "herb" without the "h" was normal. Doesn't KFC pronounce it that way when referencing their herbs & spices?
Bizarre misspellings in the captions are often a sign of auto-generated captions, which may not have anything to do with the source of the video itself.
I noticed this when teaching a probability class during the pandemic; I was posting lecture recordings, and my university required me to provide subtitles for all recorded material. I switched to saying "loaded die" instead of "weighted die" pretty quickly, because the auto-generated subtitles kept mis-transcribing the latter into "way to die"...
A Twitch streamer was using autocaptions a while back; at one point they were doing a bit where they said the same phrase about a dozen times in a minute, and the autocaptions were different every time.
As noted, most people in the US at least don’t pronounce the ‘h’ in herb. We don’t run around saying things like ‘enry ‘iggins though.
Vegetable herbs are typically pronounced with a silent H, while in the name Herb the H is vocalized. Wondering if it was just my (US) experience, I searched it up, and this is a good breakdown: https://soundsamerican.net/article/so_is_the_h_in_herb_silent_or_what
It's a fairly common feature of North American English.
I was inspired by constantly having what I thought was an original idea only to find that Scott had already written about it, to finally start writing myself. I explore Claude’s Constitution from Amanda Askell as a work of moral philosophy: https://open.substack.com/pub/abundanceagendad/p/whose-constitution?r=5dz7fq&utm_medium=ios
Perhaps you've wondered what it sounds like when a long-time SSC reader writes a bunch of songs. Perhaps not.
If the former: www.silverheirsongs.com or silverheir.substack.com. Fans of the Beatles, Queen, Leonard Cohen, Nick Cave, Radiohead, Bach...I like that music too.
If the latter: perhaps you'd be interested to know if a long-time SSC reader would use AI to make the music. He would not, and he finds the idea loathsome. But he happily used it to make the website, and to the extent he mysteriously finds the time to do some "marketing," he'll gladly use it for that too. Yes, it does feel borderline hypocritical.
Been on a regressive kick recently, watching weird animes from the Good Old Days. (Serial Experiments Lain, Boogiepop Phantom, and Paranoia Agent). Does anyone have recommendations for other shows in that vein, where the rules of the world just turn off sometimes and spiders come through?
a little bit newer, but: DeadDeadDemonsDeDeDeDeDestruction, psycho pass, Dorohedoro (though here "spiders come through" all the time)
I also want to throw in there " Jin-Roh: The Wolf Brigade".
I'm fine with any timeframe. I watched the first few episodes of Psycho Pass a while back, but don't remember anything happening. Maybe it picks up later.
FLCL (only the original, none of the newer ones), someone else already mentioned Revolutionary Girl Utena and the director has done a lot of similarly weird anime Penguindrum, Yurikuma Arashi and Sarazanmai.
I have seen FLCL (and eventually figured out it's just "Freakery" in katakana) but I don't put it in the same category. Basically it has boobs instead of spiders.
I put it in the same category as Paranoia Agent because it has the same sort of surrealism where there's two stories going one, FLCL is about robots and aliens and fighting monsters but also it's about the sort of mundane story about Naoto growing up. Similar to Paranoia Agent where there's people going through their ordinary lives until they're overturned by Shonen Bat.
But if you want something specifically where it turns scary then maybe Higurashi is more what you're looking for. It's kind of hard to recommend because you have to also enjoy the kind of puerile moe blob parts of the show and that turns a lot of people off.
Twin Star Alchemist, though I can't recommend it much. It's not great art, but it's fun if you like Shonen tropes and angsty high schoolers fighting building sized monsters in an Upside Down style Hell Dimension.
Sonny Boy is a little more recent but right up this alley.
Paprika, Perfect Blue and Utena would be my recs.
If you're up for something more recent, The Tatami Galaxy is excellent. It has a companion movie of sorts called The Night Is Short, Walk On Girl which is also very worth watching.
Ergo Proxy, Texnholyze, Dennou Coil. Maybe Digimon Tamers too (same director as Lain), although Adventure has the weirder digital world, and 02 has the spiders coming through.
What?! No response to Freddie deBoer calling out Scott, specifically, with a wager about the three year future of AI?
https://freddiedeboer.substack.com/p/im-offering-scott-alexander-a-wager
I *RAN* to click this comment thread for an announcement of "no, thanks" or even "post to follow" and am stunned Freddie's challenge has apparently just gone into the ether after three days!
I told Freddie I'd take him up on it for 2036. He's said he'll rewrite the bet to be fair for 2036, and I'm awaiting him doing this.
I'll do this, because I think people should overall be willing to bet on their beliefs, and because I expect Freddie would post some kind of "AI HYPESTERS REVEALED AS FRAUDS!!!!!" if I didn't. But I feel like he's got me in a bind where both options make things worse. If I give my real 50th percentile/median date, then there's a 50% chance it doesn't happen by then, and then Freddie can say "Look! Scott is PROVEN wrong, now you never have to listen to anything he says, and if he thinks there's still danger then it's pathetic flip-flopping and goalpost-moving". But if I adjust for this and give my 90th percentile or something, then he can say "Look! Scott ADMITS that there won't be AGI until 2050, it's just hype when he says we should worry about it being soon."
The main lesson is to minimize interaction with people who you expect to act in bad faith, and I'm just going to do it as quietly as possible and choose the least bad option. IMHO, that's taking the bet with my real ~50th percentile probability, which is luckily far enough in the future that I assume that even if I'm technically wrong about the speed of automating the economy, there will be enough evidence for my worldview in general that nobody will update too hard against me.
I totally get why you're uneasy about this, but you have a huge platform here; you can make it very clear what you are and aren't claiming, what your accepting the bet does and doesn't imply, and so on. Yes, some people will try to misrepresent you anyway, but maybe the best you can ever hope for (at least without twisting yourself into more of a politician than a rationalist) is to put everything on the record in such a way that good-faith and minimally curious observers will take your side against bad-faith criticisms.
I've been very frustrated by Freddie's AI-related writing (lots of Bulverism, not much substance and very few signs of genuine curiosity), and I can see that he's already misrepresented you in the update to the linked post. If you decide you're no longer willing to engage with him at all, I get it. But, if not, then a precise and concrete public bet (or, better, a series of bets with different time frames) is surely the best chance the two of you will have to clarify your disagreements.
Those disagreements seem pretty huge, so you should be able to agree on terms such that you would both be genuinely surprised to lose. (If not, then whoever is being held out on can publicly post the terms that the other wouldn't accept. Maybe you'll both do this, and again, you'll have to rely on us in the peanut gallery to have some level of objectivity. Obviously we won't all live up to that, but how were you planning to convince the remainder anyway?)
edit: I know economic measures are easier to objectively adjudicate, but IMO the ideal would be for Freddie to make lists of 'least impressive things I'm confident AI won't be able to do within [1/2/3/5/10] years', for you to make lists of 'most impressive things I'm confident AI will be able to do within [1/2/3/5/10] years', and then to see where they overlap. (If the result is too fuzzy to use to set the terms of a big-money bet, it would still be very interesting and clarifying. And I see the money as a red herring here anyway.)
In theory the fair level for the bet is half way between each players fair value estimate, but in negotiating terms this can be hard to establish even with good faith opponents.
Alternative norm: joint sponsor a polymarket event, bet there, compare results.
I would rephrase Freddie's likely responses as "Look! I told you this would happen, now would you please consider what I'm saying?" and "Look! When pressed to make a reasoned decision, you have no faith in your own stated beliefs."
Which is to say, if that's bad faith, is there any way left to actually disagree with you about this that you'd consider good faith? Or are we simply destined to all depart to our own bubbles, never able to interact again due to increasing distrust of one another and lack of shared epistemic ground?
This.
Freddie is an extreme case, but I wouldn't say he's acting in bad faith (except that time he went on a manic break and started falsely accusing someone of crimes).
I don't really get the "beef" here, I'm an AI enthusiast and Freddie's takes have seemed pretty reasonable to me (by FDB standards at least). My steelman of his argument is basically "lots of tech has been ballyhooed as revolutionary, but has then petered out. AI has not had a measurable impact on everyday life, and I'm willing to bet money that it will not in the next 3 years."
Maybe there was some nasty back and forth in the comments or via email that I haven't seen, but characterizing it as bad faith seems...not fully justified. Public bets are adversarial collaboration and I'm sure you could find a neutral moderator and a panel of terms you'd both accept.
As for taking bets at 50th percentiles or whatever, don't rationalists have a whole framework of Kelly bets and decision theory and such-and-such for exactly this situation? Isn't the whole point of making public prediction bets (Yudkowsky UFOs, Simon Ehrlich, etc) to actually back up your proclamations?
FWIW I would be tempted to take the over on Freddie's bet for 2031; 2029 feels a little too soon but then again I'm also more persuaded by Cowen's diffusion bottleneck argument than Scott, Eli, Daniel, et al.
> As for taking bets at 50th percentiles or whatever, don't rationalists have a whole framework of Kelly bets and decision theory and such-and-such for exactly this situation? Isn't the whole point of making public prediction bets ... to actually back up your proclamations?
One bet is chance. Multiple bets are statistics.
If I predict ten things with 70% certainty and get seven of them right, I can brag about being well calibrated. If I predict one thing with 70% certainty, either outcome doesn't mean much.
And when you bet on a 50% event, you are supposed to win as much as you lose. Bets like "50% chance you win $1, 50% chance you lose $100" would be stupid. So when Scott is asked to make a 50% bet, what exactly he wins or loses here? If he wins, how much credibility he gains compared to the alternative when he didn't take the bet and happened to be correct about AIs anyway? I think it there is not much difference between "talks a lot about the AI apocalypse, and it actually happens" and "talks a lot about the AI apocalypse *and takes a bet against FDB*, and it actually happens". If he loses (which happens in 50% cases if he gives his 50th percentile), then Freddie probably won't be able to stop talking about it at every opportunity.
Now that I think about it, an important part of the reason why rationalists should make bets is to make correct predictions translate to money, instead of prestige. This would be the wrong kind of a bet, because the prestige associated with winning or losing would be more important than the money. (It is okay to make small bets if you make many of them, because the cumulative effect is large.)
I subscribe to Freddie's Substack and generally have a lot of time for him, but he does have some, er, history of aggravation with EA/rationalists in general and Scott in particular. I think Scott is being a bit hyperbolic, but only a bit.
Freddie has behaved in bad-faith ways towards Scott. Recently he commented accusing Scott of being manic after Scott posted some very mild criticism of an anonymized Freddie comment.
I haven't been following the beef between Scott and Freddie, but FWIW Freddie's bet looks reasonably fair to me (though of course there's always room to quibble about the little details).
On the one hand, it seems like offering you this bet was Freddie's first good-faith effort I've seen from him on the topic. Don't you want to encourage this?
On the other hand, I understand that you're expecting for him to slip back into bad-faith mode.
All of which makes me wonder about the whole topic of good faith / bad faith argumentation and FdB in general; I get the sense that he genuinely thinks he's been arguing in good faith all along and I find it so strange how he can be so thoughtful in some areas and so utterly incurious in others.
The one good part of Freddie’s bet is that it helped me realize that your median outcome is still 10 years away. I knew it was longer than the estimate in Project 2027, but knowing it’s 10 years away means I still have time for a career pivot and make an impact on something.
I think Scott's median is like early 2030s for powerful AI and then mid 2030s for very significant economic effects
Sure, but the economic effects are what’s important for measuring if humans are meaningfully producing value. I’m considering pivoting into biosecurity, and that will be useful after AGI is developed by before the economy gets weird, to make sure someone using AI or AI itself doesn’t kill us all.
There’s also a decent chance Freddie doesn’t have the money when the time comes…