457 Comments
Comment deleted
Expand full comment
Thomas Kehrenberg's avatar

I'm not really worried about any of that. I just don't want us to be snuffed out by a more intelligent species.

Expand full comment
Scott's avatar

Think of them as our descendants.

Expand full comment
James's avatar

I don't think anyone wants to be snuffed out by their descendants either.

Expand full comment
Scott's avatar

Well, sure; no snuffing in the entertainment algorithm; eventual replacement, is all. Everyone gets *replaced* by their descendants....

Expand full comment
James's avatar

To the extent the replacement is nonviolent or involves some form of voluntary merging, it's not the primary concern of AI safety advocates

Expand full comment
B Civil's avatar

>Oppression and colonialism became much more possible, because industrial technology allowed small numbers of people to dominate much larger numbers.

I would propose that it had more to do with the explosive demand for raw materials, and not much to do with small numbers of people being able to dominate larger numbers

Expand full comment
Clutzy's avatar

In what universe were workers in industrializing countries worse off? They were actively fleeing farms which, in that era, were death and poverty traps.

How were workers in poor countries worse off? Without industrialization they would have kept being dirt poor and dying of malaria.

Colonization helped drag many an impoverished country forward; it was the end of it in places like Rhodesia that was the true sabotage of those natives.

I guess the Communism point is fair. It is a big issue. But it's only a problem because we have any pie to split now. Pre-industry there wasn't anything to divvy up because you just died when your horse kicked you or you had a bad crop.

Expand full comment
Mo Nastri's avatar

> The best analogy for AI was the Industrial Revolution

Not necessarily, and beware of over-anchoring to a single analogy. See Nate Silver's technological Richter scale chart https://www.lesswrong.com/posts/oAy72fcqDHsCvLBKz/ai-and-the-technological-richter-scale

Expand full comment
Deiseach's avatar

"Hendrycks has gotten a reputation for being incorruptible (he gave away ~$20 million in AI company equity after trolls tried to turn it into a "conflict of interest" and use it to discredit his lobbying)"

Here's where my hackles rise. Maybe they were trolls, or maybe they were pointing out that yes, there is a conflict of interest here. I'm sensitive to this because (1) too much "you only criticise this thing (piece of media, proposal, vast social change where we all replace our brain with peanuts) because you're a troll. Troll, troll, Trolly McTrollface!" and no, it's because your thing is rubbish and (2) too many politicians and big-wigs who push for something that surprise, surprise, turns out they have a vested interest in.

Merely being a vehement and even vitriolically opposed critic of someone or something is not enough to be a troll, and I'd like to see a shred more evidence that "let's just stir up trouble for the lulz" was the motivation here, in order to label anyone or any group as trolls.

" Their leader is already being called "the Greta Thunberg of AI".

Call me a troll, but this is the kind of thing that makes me sink my face into my hands while groaning "Oh, God". I suppose I should have known from the name "Encode Justice" that it was a bunch of teenagers, and it is to be expected that teenagers are idealistic, impatient, and believe that you can bring about utopia by yelling slogans really hard (let's encode JUSTICE into it so that everything will be just! and equitable! and non-problematic! and better!) This is very nice, and I wish them well, but it'll go where all these kinds of enthusiastic youthful pie-in-the-sky efforts go.

I realise I'm being very gloomy here, but I have deep cynicism about committees to set up quangos to produce green papers about investigating potential working groups on item A, Z or epsilon. I do think regulation is necessary, but the entire project already sounds like it'll be yet another pet project of a senator to produce red-tape but not much action, and instead will be added layers of complexity in trying to apply it to "so what does this mean in practice when some megacorp is trying to create Robot God?" "Well the AI lobbyists and the Committee of Public Safety for Encoded Action on Economic Security had a very agreeable lunch meeting where they worked on setting up a seminar where interested bodies and experts in the field could give a presentation on the projected benefits and downsides, it should be happening around 2026ish?" "But MegaMega is planning to unveil Robot God in the third quarter of 2025" "Hey, this is *fast* for government, you know!"

Expand full comment
Nematophy's avatar

The most credulous I've seen Scott, maybe ever - yet also somehow the most Machiavellian.

Nepo babies pushing nonsense to pad their college apps. Of course, when you're pushing a culty religion, grifters are known to make the best bedfellows.

Expand full comment
Viliam's avatar

Who specifically is supposed to be the nepo baby here? Hendrycks from Center for AI Safety? Sneha Revanur from Encode Justice? Both?

(Greta Thunberg, sure. But this article is not about her.)

I am not contradicting you here; I don't know any of these people. Just asking for clarification and evidence.

Expand full comment
quiet_NaN's avatar

> " Their leader is already being called "the Greta Thunberg of AI".

Note that the place where she is called that is the website of the project, https://encodejustice.org/

> Interview with Sneha Revanur, “the Greta Thunberg of AI”

There is exactly one person who could label another thus in a way that the labelee could use that label for themself without it being extremely cringe. That person is Greta Thunberg. (Of course, saying "I am Greta, but that woman is the Greta of AI" would in turn be extremely cringe of her.)

I tried to find more about the aims of that group, and after a bit of clicking I found this letter here: https://drive.google.com/file/d/17hbhguumlKpYlxLhRlzmhyhJiuJJ_sZg/view

> [...]While we acknowledge the potential of AI to drive positive impact in areas like healthcare and education, advances in AI also pose risks. AI has quietly disrupted criminal justice, healthcare, housing, hiring, and almost every other zone of public life. [...]

AI has quietly disrupted healthcare and housing, with the implication being that "disrupted" here means "affected in a bad way"? I was in a hospital last month, and little did I see of AI-caused disruption. Perhaps this is a problem unique to California? Likewise, the housing problem is that the rents in cities are too damn high. Of course landlords will try to use machine learning to squeeze even more profit out of their properties, but this is no more disruptive to the housing market than when they adopted spreadsheets.

> [...] Our existing work on the issues of today, like algorithmic discrimination, data privacy, and children’s safety, is deeply important to us—as is our commitment to future-proofing for the issues of tomorrow, like the possibility of transformative AI.

Discrimination in the front. On brand for an org with justice in its name.

> [...] We must elevate young voices. Our own coalition of young leaders stands ready to support oversight and advisory boards in the public sector and at leading AI companies.

This seems mercenary. "You know Paul Christiano (Berkeley PhD, Eliezer's Less Wrong sparring partner regarding AI x-risk for a decade)? Instead of putting him in charge of AI safety, why don't you select one of our 'young leaders'?" I am not a big fan of credentialism, but this feels like the prompt was "write a letter to get me a cushy board job". There is a lot which I dislike about the actual Greta Thunberg (i.e. wokism), but at least her central demand was "stop burning fossil fuels" instead of "put me on an advisory committee".

> [...] We must build governance structures to audit AI products and manage risk, like an independent FDA-style regulatory agency that conducts impact assessments, while simultaneously stressing a proactive approach to corporate accountability.

Ok, if the wokism did not get Scott on board, the call for an 'FDA style regulatory agency' surely did.

That line of "future-proofing for the issues of tomorrow, like the possibility of transformative AI" I quoted earlier is also notable because it is the closest that letter comes to acknowledging the x-risk, which is presumably what Scott really cares about.

Necessity makes strange bedfellows. Sometimes you find yourself fighting on the same side as Stalin to take the Nazis down a notch.

Scott writes of Encode Justice: "I cannot even imagine how good all of this is going to look on their college applications."

This is the gentlest mocking only. As a rationalist, I would phrase it like this: "We might be travelling in the same direction, but we are nothing alike. Our people were warning about AI x-risk long before OpenAI was founded; EJ co-opted general AI concerns as the hype picked up. The foundations of our movement are thousands of pages, their foundations are default wokism. Their way of viewing the world and our way of viewing the world are fundamentally incompatible. We mention when our people convince a state senator, they upload photos of their leader meeting a vice president."

Expand full comment
JaziTricks's avatar

The fawning Greta mention made me jump.

Greta as a positive role model?

Expand full comment
Sergey Alexashenko's avatar

Possibly the least charitable, most tribal post by Scott ever. I'm a bit sad.

Expand full comment
Citizen Penrose's avatar

Which views did you think he could have been more charitable towards?

Expand full comment
Sergey Alexashenko's avatar

The opponents of the bill, obviously. This story, as told, divides the world into:

1. The supporters of the bill who are right and smart and special and who just want to save the world (and coincidentally are Scott's closest ingroup)

2. The opponents of the bill who are bad and can only be possibly motivated by corrupt self-interest

That is... not how the world works. It's struggle session-tier thinking, not Slate Star Codex-tier thinking.

Expand full comment
Citizen Penrose's avatar

Were there any arguments in particular? Sometimes positions with no good arguments for them do get promoted because of corruption and self-interest.

Expand full comment
jumpingjacksplash's avatar

The obvious argument is that somewhere is going to be the cutting edge for AI, and California is one of the better candidates compared to China/Nevada/Fort Meade/oil rigs. California has a history of regulatory bloat, and assuming that an American law will just do what it says it will and no more isn’t realistic. Any regulatory framework could do to AI what’s happened to house-building. If it does, then industry decamping followed by Texas- or Florida-regulated AI is the next best alternative, but losing the race to someone worse is possible.

Expand full comment
polscistoic's avatar

"losing the race to someone worse"...and you mean Texas or Florida, not China or Russia??

Ok...

That reminds me of a remark by a representative of Coca-Cola in the late 1980s, trying to clinch a monopoly deal during an international sports event in Moscow: "The Russians were not the problem. They were playing the game. The one who played dirty was Pepsi".

Expand full comment
jumpingjacksplash's avatar

Other way round, Texas or Florida are the next best alternative.

Expand full comment
rebecca's avatar

But this is the central example of a bad argument against SB1047 getting promoted because of corruption and self-interest. Almost none of the bill relates to AI companies having their headquarters in California, so moving their headquarters would do approximately nothing. Excluding the whistleblower protections, a minor part of the bill, the requirements in the bill were about companies wanting to have *users* in California, as the 5th largest economy in the world. Yes, you could argue that OpenAI and Meta would be willing to forgo access to the 5th largest economy in the world to avoid having to comply with the bill, but that's a harder argument to make, so instead the various self-interested parties made factually incorrect but more plausible-sounding arguments, and ignored attempts to correct them.

Expand full comment
Keith's avatar

Avoiding what the EU did and becoming an AI wasteland. The regulatory state will only grow, and this is a beachhead.

Expand full comment
isak melhus's avatar

I mean, sometimes there is a side that is right and a side that is wrong.

Not to toot my own horn. SB1047 seems to me like a complete no-brainer, but I have enough humility that I've been going around asking people to present their best case against SB1047, assuming that since a non-negligible number of people oppose it, there should be somewhat decent arguments against it.

But to be 100% honest, I haven't found arguments markedly better than the anti-anti-SB1047 caricatures. People say it is regulatory capture by big tech, which seems just obviously 100% false? Or that it will kill open source, which is also obviously 100% false? Or that it's a plot by doomers based on science fiction to cement political influence, which is obviously very silly and false? And even if it were true, it is not a good argument?

But obviously I'd like to be proven wrong. Are there good object-level arguments against SB1047? Feel free to link me articles if you don't want to write something yourself.

Expand full comment
Melvin's avatar

Well for starters, what's the good argument for this kind of regulation existing at the level of US states?

Ideally we should regulate it at a species-wide level, but since that's obviously not going to happen we should at least start with national-level regulation and maybe try to get some international agreements going beyond that.

I know that SB1047 claims to apply to all companies doing business in California, but is that constitutional? Can every random-ass state apply arbitrary rules to the out-of-state or foreign activities of out-of-state or foreign businesses as a precondition of doing business in that state? Isn't it a genuinely-legit usage of the Interstate Commerce Clause to say that they can't?

Expand full comment
Dmitrii Zelenskii's avatar

If a federal law is going to exist, it will just supersede state laws. But it is a reasonably reliable prediction to say that it won't, because of the general dysfunction of Congress on politicized topics.

Expand full comment
sean pan's avatar

The national level is completely borked, so since it does not seem like we will get national laws, state level is an excellent argument. If you were concerned about states superseding federal law, then just put in a sunset clause indicating that the national law will supersede it.

Expand full comment
TGGP's avatar

Why not regulate at the level of states/localities? It seems much more sensible to experiment at low levels, and only regulate nationally when something CAN'T be done at a lower level.

Expand full comment
jumpingjacksplash's avatar

Because the case for regulation is, “These things might end all human life.” Sounds like a federal issue, given the role of humanity in interstate commerce.

If it’s worth regulating them, it’s worth regulating them nationally.

Expand full comment
Sebastian's avatar

> People say it is regulatory capture by big tech which seems just obviously 100% false?

How so? AI companies being opposed is a data point, but they could just not be afraid of competition from startups (i.e. because they think training data and cost are a sufficient moat, or because their headway is big enough) and deem the regulation too much of a cost for little benefit. So this does not "100% prove" anything.

> Or that it will kill open source, which is also obviously 100% false?

There is no denying that the regulation would increase costs for AI training, which is hard on traditionally less well-funded open-source alternatives.

Now, you might argue that these points are minor and may even be right in doing so, but calling the other side "100% wrong" is exactly the type of thinking being called out here.

Expand full comment
Dweomite's avatar

I'm really confused by your first argument. You seem to be arguing something like "maybe they didn't like THIS bill, but they could still hope to get some other bill that favors them at some point in the future". But that's an argument about whether AI companies WANT regulatory capture in the future, not an argument about whether THIS BILL was regulatory capture. Have I completely misunderstood you?

(Your second point seems to be basically ignoring the fact that the bill targets big frontier models and doesn't apply to smaller models.)

Expand full comment
Sebastian's avatar

No, you're completely right, I actually got the sign on the argument wrong and argued for the wrong side.

As to:

> (Your second point seems to be basically ignoring the fact that the bill targets big frontier models and doesn't apply to smaller models.)

It does have some hard limits on computation in there, and those might be hit even by smaller models at some point. It also does increase training costs and adds legal liability - yes, there are exclusionary clauses, but being sued for millions is a scary thought and people like to stay well clear of that.

But whether you think this would have killed the open source AI ecosystem is beside the point. The point I was trying to convey is that this is a political debate about a completely new thing in a large society; calling some opinion on what effects the legislation might cause "100% wrong" is just too much. You might think it's wrong, but it's not that hard to imagine someone genuinely believing the arguments against the bill.

Expand full comment
anomie's avatar

> sometimes there is a side that is right and a side that is wrong

And that's just your opinion, isn't it? The world doesn't revolve around your sense of morality. Everyone is just working towards their own goals, and it is perfectly possible to provide a neutral perspective on that.

Expand full comment
isak melhus's avatar

I don't understand. Care to elaborate? If I say the earth is made of cheese, do you think that is as reasonable a statement as saying the earth is bigger than a tennis ball is?

In some ultra-reductive sense, it is still just my opinion. But that doesn't seem like a productive level to analyse stuff at.

Expand full comment
Scott Alexander's avatar

There were definitely some reasonable opponents, including Dean. But I don't think I can accurately describe the political history of SB 1047 without mentioning that there were a bunch of people outright lying really hard about the contents of the bill.

Expand full comment
Rob L'Heureux's avatar

Plenty of EAs said open source would be fine, but even if it's not fine, they would prefer open source get annihilated in the name of safety anyways. So both don't worry about it and if it happens it was for the better. It's fitting the socialists thought they were in good company.

Expand full comment
Scott Alexander's avatar

This doesn't seem like a lie or even a contradiction. I don't think SB 1047 will give Gavin Newsom a paper cut, but if it does, that's fine. Totally non-contradictory. Can you explain more how you think this is deceptive?

...but I also haven't seen anyone say this. Can you provide a link?

Expand full comment
Rob L'Heureux's avatar

It's deceptive to say "open source will still be possible as a result of this legislation" while also saying "eh, it could die and I don't care". I want to engage with your request for receipts, but most of my experience with EAs was drawn from conversations in person and I don't like putting people on blast. Spending my evening trawling for others making the point doesn't sound like a lot of fun, so I would simply ask that you acknowledge that these perspectives exist in the EA community, and that researchers in the open source community like Andrew Ng engaged sincerely with the regulation and the bill failed to give them the confidence that it would protect the open source community.

Expand full comment
Tremor's avatar

This is the position of Zvi Mowshowitz. You can search for "open source ai is unsafe and nothing can fix this" on his blog for his arguments for this second half.

Expand full comment
Victor Chang's avatar

Zvi has a pretty good breakdown of the bill that does make it seem like many opponents of the bill weren't acting in good faith. In particular, Newsom's reasoning for vetoing the bill doesn't really stand up to scrutiny if you look closer. Also, in general, I'm more inclined to trust Anthropic (whose whole thing is AI safety and supported the bill) than OpenAI which is trying to transform itself into a for-profit company.

Expand full comment
Gary Mindlin Miguel's avatar

Anthropic started as a for-profit company.

Expand full comment
Rob L'Heureux's avatar

I just got to that section, and it's so bad. Scott is not going to mention Andrew Ng or Yann LeCun's objections? You're not going to mention Elon's blood feud with the state of California or OpenAI, and how xAI might benefit by crippling companies in CA when it's operating in his home of Texas? I can tell Scott's emotional about this issue because he'd tear apart anyone that makes these kinds of omissions. It makes me think less of EAs, who think they have the one true model of the world, and it makes me more convinced that SBF is the rule, not the exception, in how good EAs are at actually predicting and addressing systemic risk.

Expand full comment
Neel Nanda's avatar

The bill applies to companies who do business in California, not just those based there. Everyone does business in California.

More importantly, xAI is based in California: https://en.wikipedia.org/wiki/XAI_(company)

I think Musk just sincerely cares about AI safety and thought it was a good bill. He has a feud with Scott Wiener as well, so it's hardly clear that supporting the bill is good vengeance

Expand full comment
Rob L'Heureux's avatar

It may not be possible for you to believe, but people would stop doing business in California if they make it illegal. See: the EU. People are dynamic like that. xAI is based in California...right now. Musk has already moved SpaceX and Tesla HQ to Texas, and it would be easier to get the employees to go if CA made it illegal.

Expand full comment
Neel Nanda's avatar

I'm confused - what do you think SB-1047 said? What did it make illegal? I feel like we're not talking about the same bill here.

Expand full comment
Scott Alexander's avatar

X.AI is based in California.

I would like an apology for being called hysterical and compared to SBF just because I didn't mention this in-fact false thing.

Expand full comment
Rob L'Heureux's avatar

Scott, I'll apologize because I'm upsetting you and that isn't my intent. If we met in person, I'd hope we'd be friends, ACX is generally great, and you do great things.

On xAI, I don't think the fact of where it is headquartered now matters, and I don't think you need to mention it. What does matter is that Elon has moved his other companies out of CA, specifically due to regulations and what he believed to be government interference. He has posted diatribes against the state and against Newsom personally. It's not just plausible but likely he would be happy to leave, especially if he can sabotage the state and his business rivals.

I said you were emotional. I don't know if you're hysterical but I'm pretty sure you're on tilt, where your emotions are clouding your judgement. For example, you compared partnering with the commies against the opponents of 1047 to beating the Nazis in WW2. Pretty charged language.

I am passionate about this issue too, because what I care about is growth for the little guy. It's very easy for me to paint a picture of comfortable, rich PhDs telling everyone you have calculated what's best for them and therefore you deserve power and control. If it means a generation of entrepreneurs don't get to build with the tech, well, you know what's best for them. Your only mention of open source was to cast aspersions on Fei-Fei Li's motives, which I don't know the truth of, but there are plenty of other open-source leaders that are far less conflicted that felt like this was an existential risk to startups and growth. I think it's a glaring omission.

Arrogance, comfort, and a desire for control over other people represent the worst instincts of the EA community, and SBF is the face of those worst instincts (plus the fraud). I am accustomed to you approaching these issues with some humility, and I'm pretty shocked to be on the other side and see how it's represented. At least tell me where to go so I can get a Big Tech lobbyist to pay me for my actual beliefs that open source AI is worth preserving.

Ultimately, I found the post really upsetting because I actually care what you think about things, and I thought this was deeply uncharitable representation of opponents that never engaged with the criticisms of the bill while assuming the worst motivations. I'll even give you that Marc Andreessen probably does have those worst motivations, but it doesn't preclude others from generally caring about the little guys and giving them a chance. I don't think this is actually a case where we have different values, I think it's a case of where we have the same values but prioritize them differently and the bill never sufficiently resolved how this would impact startups. I hope that in time you realize I am not the enemy, and we can almost certainly find a better way than what this bill actually would have done.

Expand full comment
Robert F's avatar

I don't quite get your argument. You're saying Elon is trying to cripple his competitors in California, but at the same time it doesn't matter that xAI is currently based in California? That seems contradictory, as couldn't OpenAI also just move?

Are you arguing that it's easier for xAI to move away than the other AI companies cause Elon's other companies are in Texas?

Or are you arguing he already wants to move to Texas but needs 1047 as an excuse?

Expand full comment
Cjw's avatar

If somebody's building a bomb on my front lawn, I don't think I have to be charitable to them. Being rational does not require me to steelman the best reasons why he might be assembling a weapon of destruction 10 feet from my window. There certainly could be a legitimate reason, but it's not my role to be the one figuring that out, it's my role to run out there and kick him in the head before he gets done. These people know what they're doing is dangerous and provocative, it's on them to make better cases for themselves.

Expand full comment
Renderdog's avatar

Assumes the premise. 'If somebody's mowing my front lawn, I don't think I have to be charitable to them. Being rational does not require me to steelman the best reasons why he might be running a chopping machine of death 10 feet from my window.'

This is a rationale for doing horrible things to people for any reason you decide is bad enough. It justifies any action you might take for whatever reason you like.

Expand full comment
Cjw's avatar

The only premise I'm assuming here is that the conduct appears extremely dangerous, clearly enough that you should expect people to rely on their prior assumptions and take it as a threat. It appears menacing to a majority of people for a variety of reasons that range from reasonable to unlikely, with a practical effect of shifting the burden to the AI devs to justify what they're doing. And AI devs are just marching along assuming that of course they can build things that look really threatening, supposing it's incumbent on we the people who perceive ourselves as threatened to justify our response. But the expected role of threatened people is to react, not to justify the reaction. If you want to change the reaction, change the facts they see. Show them it's a harmless lawnmower, that it'll never be a bomb, it can't be a bomb, it doesn't contain anything explosive. Go to their door first, knock and ask if it's ok to assemble the lawnmower out front and demonstrate it. If you don't do that, and instead you fire all the people at your company who wanted to do that, don't expect a charitable response from the people you're attacking.

Expand full comment
Renderdog's avatar

This analogy still fails due to the 'yelling fire in a crowded theatre' issue. It is perfectly rational and legal to yell fire in a crowded theatre if there is indeed a fire, things are burning, people are dying around you, etc.

Instead you are conflating obvious, instant, immediate threat of harm with a long-term, hotly debated, potential for harm that is not currently present, similar to somebody building what might appear at first glance to be a bomb in your yard, which also assumes there is a 'your yard' in this scenario, not the commons.

Your feelings of harm are not shared by many, perhaps a majority, of other people, and we have laws against running into the commons and slaughtering somebody who is alleged to be a bomb maker but might instead be making candy unicorns that you perceive to be a bomb.

Expand full comment
Keith's avatar

Yeah, seems obvious to me. I looked very hard for an argument against the bill and none were presented much less entertained. I went in expecting a steelman.

Expand full comment
TK-421's avatar

Not a lack of charity, but the tribalism is evident when you contrast the descriptions of Newsom and the supporting politicians.

Our Guy acts out of pure motives, Their Guy is unfairly lobbied by a monied cabal and acts only out of self-interest.

Newsom and Wiener did exactly the same thing: listened to a vocal and deep pocketed set of constituents and took action based on their preferences. This is basic politics. They just have a different set of perceived special interests they want to keep happy to continue receiving support from. I'm sure the people who have Newsom's ear would also describe him as a thoughtful, good listener.

Expand full comment
Makin's avatar

The point where he uncritically mentioned that Musk defended the bill because of his interest in AI safety really shot my suspension of disbelief.

Expand full comment
Sergey Alexashenko's avatar

same

Expand full comment
Scott Alexander's avatar

Would you like to explain? Musk has been obsessed with AI safety for the past ten years. He took the opposite position as every other big AI company (except for Anthropic, which brands itself on being safety-focused). What exactly do you think the angle was here?

Expand full comment
Makin's avatar

It's not exactly a fringe interpretation that he did it because it benefits xAI by harming a lot of potential competitors. His company is based in Texas, his enemies are based in California, which the bill affects.

He used to care about AI safety a long time ago, but he's been acting more and more crazy and borderline evil recently, there's no way you haven't noticed.

Expand full comment
Scott Alexander's avatar

Wikipedia says XAI is headquartered in California, see also https://techcrunch.com/2024/10/02/elon-musks-xai-moves-into-openais-old-hq/ . I think Musk has swung right but that doesn't necessarily mean he's shifted his view on every topic to whichever side is more evil.

Expand full comment
Makin's avatar

https://techcrunch.com/2024/07/16/elon-musk-vows-to-move-x-spacex-headquarters-from-california-to-texas/ but you're right, I had missed that article. Was that post-bill-veto?

Expand full comment
sean pan's avatar

No, as Musk clearly said, since he wants to do business in California regardless, the bill would have affected him. The idea that he was doing this for some non-safety reason, despite having been concerned about safety for the past ten years and having led the Pause letter, beggars belief.

Expand full comment
Al Quinn's avatar

If he's feeling so tribal about it, I'd rather he wear that sentiment on his sleeve than bury it in faux-objectivity. (I'm glad the bill failed, of course)

Expand full comment
Applied Psychology's avatar

"Our side of the story"

Yeah he really buried the lede there smh

Expand full comment
Peter Defeel's avatar

If you explained why I’d be more inclined to accept that it was.

Expand full comment
Ben Mathes's avatar

What I came here to say. The checksum/smell on "people who opposed us are simply evil and venal" makes me discount the rest, didn't read, and unsubscribed.

Expand full comment
JaziTricks's avatar

Scott is usually very honest.

This is the one and only time I've seen him not steelman opposition views + not add a disclaimer.

In fairness, the title says "our side", so it implies bias already.

Expand full comment
Applied Psychology's avatar

Do you know what's in SB-1047? Do you know the arguments?

"Everyone's equally valid and everyone deserves a medal" is not an argument I expected to hear from the pro-tech wing.

I don't know why people pretend there's a moral law of the universe where both sides of a debate have to be somehow equally justifiable.

Expand full comment
Viliam's avatar

Well, it has "our side of the story" in the title, and that's exactly what it describes.

Expand full comment
Adam's avatar

SB-1047 looked... really good. It would really be a shame for the world if the only way one can even attempt to pump the brakes on x-risk is by allying with Butlerian Jihadis.

Expand full comment
Noah Fect's avatar

Scott: "As a wise man once said, 'Politics is the art of the deal.'"

Me: "Who said, 'Politics is the art of the deal'?"

ChatGPT: "The phrase 'Politics is the art of the deal' is often associated with Donald Trump, particularly due to the title of his 1987 book, The Art of the Deal. "

Me: I'll show myself out now.

Expand full comment
Doc Abramelin's avatar

I believe that line was intended to be a joke (I thought it was decently funny).

Expand full comment
Scott's avatar

That was my take too.

Expand full comment
Melvin's avatar

I took it as a Trump endorsement. Thanks Scott!

Expand full comment
John N-G's avatar

My Google search for "politics is the art of the deal" places this very post third in the rankings. Not even Donald Trump said "politics is the art of the deal"; the statement is a mishmash between "politics is the art of the possible" and Trump's book title, which shares four words in the proper order.

Meanwhile, we learn that, for Scott, stigma arguments now carry a stigma.

Expand full comment
0dayxFF's avatar

> the statement is a mishmash between

"the statement" was a joke

> Meanwhile, we learn that, for Scott, stigma arguments now carry a stigma.

He just dislikes them. One person disliking one thing isn't a "stigma".

Expand full comment
Roger R's avatar

I tend to agree with Scott on stigma arguments.

"If we do X, that will stigmatize group Y" is a legitimate concern that's worth considering... but it's not an argument that should be considered completely overwhelming. It's one factor to be carefully weighed among other factors, on a case-by-case basis.

In the case of Covid? I agree with Scott. Protecting entire nations from the spread of something like Covid should have taken precedence over stigmatization concerns.

Expand full comment
0dayxFF's avatar

I can't tell if you're being ironic too.

Expand full comment
TGGP's avatar

I was expecting the phrase "politics is the art of the possible", which I believe comes from Max Weber, and gave the name to a defunct left-libertarian group blog I used to blog about https://entitledtoanopinion.wordpress.com/?s=theartofthepossible.net

Expand full comment
B Civil's avatar

I was always under the impression that it was the art of the possible. Leaving Trump out of it…

Expand full comment
Brenton Baker's avatar

Meanwhile, long-time readers can pick up the sarcasm from the time Scott reviewed that very book back on SSC, then wrote a story in which Trump (in a reenactment of the Judgment of Paris) wins a beautiful wife and control of the most powerful country in the world, but misses out on wisdom.

Expand full comment
Keith's avatar

One of my favorite jokes is "as the great philosopher Mike Tyson once said - everyone has a plan until you get punched in the face"

Expand full comment
Matthew's avatar

"Elon Musk has impeccable credentials as a pro-progress non-nanny-state-Democrat"

I feel like this statement needed a time stamp. Like "As of 2021, Elon Musk had impeccable credentials as a pro-progress non-nanny-state-Democrat".

The present tense is not accurate.

Expand full comment
Mo Diddly's avatar

I don’t know whether Musk’s actual politics and beliefs have changed so much as his inner troll has blossomed since the Twitter purchase

Expand full comment
Arie's avatar

If nothing else, his credibility as a Democrat has changed.

Expand full comment
newstorkcity's avatar

I read it as non-{nanny-state-democrat} as opposed to {non-nanny-state}-democrat, given that the second is obviously false in the present day.

Expand full comment
Roger R's avatar

Same here. Musk is clearly not a Democrat of any sort now, including a "nanny state" one. At the same time, his businesses are still pro-progress in the non-political sense of the term "progress".

Expand full comment
Moon Moth's avatar

Alternatively, it's probably accurate that he's a "non-nanny", but I'm not sure if he counts as a "state democrat" any more?

Expand full comment
Paul Goodman's avatar

How so? He's clearly not a nanny-state-Democrat, and he certainly seems pro-progress in the abstract even if you don't like the kind of progress he wants/how he goes about trying to get it.

Expand full comment
Thomas Kehrenberg's avatar

I read it as "non-{nanny-state-democrat}" which has only gotten more true over time.

Expand full comment
Keith's avatar

Even in 2021 it was clear that Musk supported huge government intervention in battery tech specifically because it helped his company and hurt the competition.

With regard to AI, it's also clear that Musk isn't exactly a 'leader' in cutting edge AI development so slowing the whole thing down benefits him.

Expand full comment
Mo Nastri's avatar

Why not? Genuine question.

Expand full comment
apxhard's avatar

The moment you call someone the “Greta Thunberg of AI”, what I immediately think of is “someone using emotional arguments which are detached from reality to persuade other emotionally inflamed persons to act in a destructive manner.”

You clearly see this:

> California is a state full of very sincere but frequently insane people.

But you don’t seem to connect it to this:

> But sometimes you’re the group trying to do the right thing and improve the world, and then it sucks.

What does it feel like, on the inside of an insane movement? How would you know if what you were driving at were fundamentally confused in the same way that the proponents of “bipoc weed dispensaries” or “ban nuclear reactors to save the environment” are deeply convinced they are right, and yet confused about reality?

You see the absurdity and corruption of movements outside of the ai risk community. But inside what you see is the purity of their motivations and a set of philosophical priors which aligns with your own. That’s what all movements look like from the inside!

Maybe it’s worth considering that every large scale atrocity of the 20th century was committed by persons convinced they were doing Good. What does “valuable social movements” mean? What does “good” mean? The current philosophical zeitgeist says, “something something multiplied by the total number of people” but refuses to interrogate the “something something” at any level deeper than emotional responses, because it rejects the idea that our feelings about what is good are flawed approximations of some true reality.

What would it look like if “good” actually meant something real, but we had a tendency to substitute our own maps of goodness for the territory, because of this deeply rooted human tendency to substitute our own personal (and group) concept of The Good, for goodness itself?

Expand full comment
LoveBot 3000's avatar

You seem exasperated, and probably have other reasons to think that Scott is wrong, but on its own the argument that any morally motivated claim not currently accepted by the mainstream is definitely wrong is not very convincing.

Surely the object level matters here, and Scott and others have laid down their arguments in great detail. If you want someone already convinced to dismiss every argument for existential risk from AI you have to provide or at least point to an actual argument against one of the core assumptions or arguments.

Expand full comment
The Ancient Geek's avatar

How about the claim that you can feel sure without being sure?

Expand full comment
Moose's avatar

I get what you are saying, but at the same time I cannot really imagine being convinced by this that the specific cause I'm supporting is not actually good. The social movements that actually did good thought they were doing good too.

Expand full comment
Tom Hitchner's avatar

In college I once used the cliche, “the road to Hell is paved with good intentions,” to which my friend responded, “yes, but so’s the road to Heaven.”

Expand full comment
Melvin's avatar

The road to heaven is paved with the self-interest of butchers and bakers.

Expand full comment
Tom Hitchner's avatar

I don’t know. It wasn’t in Woolworth’s immediate interest to integrate its lunch counters in the South, because its white customers wouldn’t have stood for it. It took the Civil Rights Act of 1964 to do that (though that was certainly in Woolworth’s longer-term interests).

Expand full comment
Melvin's avatar

>(though that was certainly in Woolworth’s longer-term interests).

Was it? Have you been to a (US) Woolworth's recently? Eaten at the lunch counter?

I'm not saying that desegregation killed their business model, but it doesn't seem to have done much to save it.

(For what it's worth, the US Woolworth's company is now Foot Locker, and stores with the Woolworth's name in other countries are unrelated.)

Expand full comment
Tom Hitchner's avatar

To my knowledge American retail and dine-in kept going strong for many years after 1964. The internet seems to have been the main source of difficulty there. Another way to look at it: would the South rather have the 1963 economy or the 2024 economy?

In any case, since I regard segregation as an evil, the suggestion that desegregation was not in the interests of business would tend to align me against your claim about what the road to heaven is paved with (if I granted that suggestion).

Expand full comment
TGGP's avatar

David Bernstein's view is that the problem wasn't the fear of customers, but of violence that would greet anyone who unilaterally defected from the norm: https://www.cato-unbound.org/2010/06/16/david-e-bernstein/context-matters-better-libertarian-approach-antidiscrimination-law/

Expand full comment
Tom Hitchner's avatar

That's quite possible. I don't think it affects the analysis any: what was in the interest of any given business wasn't to integrate. Call it a collective action problem, a Prisoner's Dilemma, whatever: the effect is the same.

Expand full comment
Doug S.'s avatar

What would have been in the interest of many businesses would have been to be able to integrate without getting blamed for it. After the law passed, they could better serve black customers and tell racist white customers that it wasn't their fault.

Expand full comment
Tom Hitchner's avatar

I think this *was* one of the effects of the 1964 CRA. I’m aware there’s been a lot of talk recently about other negative effects it had, but without weighing in on those, the law did give businesses a “it’s out of our hands” card on integration.

Expand full comment
ultimaniacy's avatar

Making bakers better off is a form of good intention.

Expand full comment
UnDecidered's avatar

The road to Heaven is paved with good actions.

Intending to do good but doing bad or nothing through error or procrastination is how we get to Hell.

Expand full comment
Tom Hitchner's avatar

That probably goes without saying. The problem is we don’t always know, or agree, what actions will turn out to be good.

Expand full comment
The Ancient Geek's avatar

Paved with good intentions, and mortared with skill.

Expand full comment
B Civil's avatar

I think for people who really truly believe in such things, the road to heaven is not paved at all

Expand full comment
James's avatar

"Oh I guess I'll just intend bad things instead"

Expand full comment
Viliam's avatar

Maybe all intentions lead to Hell, only perfect passivity leads to Nirvana...

Expand full comment
Tossrock's avatar

This is a fully general argument against anything. Everyone always thinks they're right, so pointing out that sometimes people who were wrong thought they were right is not meaningful or relevant to the issue at hand. What matters is whether the people involved actually were right or not, or better yet how often they are right, or even better, how well they're calibrated for their predictions.

Scott publicly posts his calibration results, and is quite good. He is often right, and right before the position becomes mainstream. The rationalist movement is obsessed with rightness (or more charitably, truth) and as a result, they have some of the best epistemic hygiene around. Do they have problems, are they wrong at times? Sure. But their results are measurable.

Expand full comment
Doug S.'s avatar

Every large scale human triumph, such as the moon landing or smallpox eradication, was also done by people convinced they were doing Good. Yes, you can expect the followers of a nutty cause to think it's great, but only because people who think a cause is nutty generally don't become supporters. At some point you actually do have to evaluate a cause on its own merits instead of relying on your rock with "every cause is nutty" written on it. (https://www.astralcodexten.com/p/heuristics-that-almost-always-work?triedRedirect=true)

Expand full comment
Boinu's avatar

Hoist the red flag, the hour's ours. Surely a guest post by Laurie Penny can't be far behind.

Expand full comment
William's avatar

Extremely unsympathetic "our side of the story" post. No arguments for SB 1047 are provided. A lot of self-flattery and self-aggrandizement and "we will try again and be worse". Hand-wavingly dismissing people opposing you as "trolls" is a tired tactic. This is especially true when, for example, Hendrycks was caught with his hand in the cookie jar and only then divested his interest in his AI safety company.

If you want to argue by association, endorsements from SAG-AFTRA, who are luddites and socialists who would destroy any productive innovation, are not laudable and make a convincing enough case to not support the bill.

If anything, this post makes people who opposed SB1047 more sure in their conviction. There are narrow concerns for AI safety (bio-terrorism foremost in my opinion), but this type of piece makes the case that to work on any given narrow concern of AI safety, working with AI safetyists is not the way forward.

Expand full comment
Deiseach's avatar

While there's a lot of exaggeration and scare-mongering around AI in its application to art and the like, I can see the fears of the SAG-AFTRA set.

In one episode of "The Rings of Power", it did look (to me at least) like they were using CGI on the face of one of the actors. I wondered if perhaps the actor had not been available on the day, or if the scene had been messed up for some reason, and instead of re-shooting they decided "we'll fix it in post-production"? And I could see them testing out "instead of having the actor sitting in a makeup chair for hours to get this done, just film him as usual and CGI the elaborate makeup on afterwards".

But if they can do that, then eventually they can replace the actor. Maybe at first get someone cheaper in to play the part while they 'deepfake' the face of Big Star (or Deceased Big Star) onto the bit-part player, then eventually replace any human acting at all.

Charlie Hopkinson does very funny reviews of episodes using deepfakes of characters, and he's only an amateur. Imagine studios pouring hundreds of millions into getting this off the ground, and you can appreciate the threat the actors' unions fear:

https://www.youtube.com/watch?v=Cs-fMUOWUNA

Expand full comment
Melvin's avatar

Certainly actors are right to be concerned that AI will eat their jobs. But that's their problem, not anyone else's, and it's orthogonal to the concern that AI will destroy the whole world.

Expand full comment
Mo Diddly's avatar

Mostly orthogonal. In nearly every field, AI is methodically pushing the value of human labor towards zero. This is going to be very destabilizing for many if not most people’s lives, which is not totally unrelated to risks of catastrophic events.

Expand full comment
Melvin's avatar

Fair point. Actors and writers are going to be on the leading edge of this, but it will come for the whole labour class eventually. Try to exit that class before it happens.

Expand full comment
JamesLeng's avatar

Exit to where, exactly?

Expand full comment
Melvin's avatar

The class that makes a living by owning things.

Expand full comment
Cjw's avatar

Yeah people underestimate this side of it all the time. I think a plausible outcome of AGI is that this displacement happens so quickly there's no possibility to organize a transitional society, and the riots and violence will destroy society before you even reach ASI. All the talk of x-risk that people find science fiction-y isn't even necessary to make the case, it is already on a path to make your life meaningless and intolerable for totally mundane reasons.

Expand full comment
Mo Diddly's avatar

Unfortunately it’s not either/or.

Expand full comment
sean pan's avatar

As noted, while SAG was originally concerned with actors' jobs, after reviewing the issues they came to realize that the alignment issue is even more important, and they have been deeply concerned with it since.

Being fair to them is noting that they have updated their positions.

Expand full comment
William's avatar

Yes, but whether SAG supports the bill or not is orthogonal to the "goodness" of the bill. Why would you trust them on any tech issue? This is just an argument by association: the bill is good because SAG is good.

Is SAG good? Is SAG bad? This is already a secondary value judgment that needs to be made. If you are predisposed to liking labor unions then they are good and the endorsement is good, otherwise vice versa. Their endorsement adds nothing to the fundamental argument of the goodness of the bill.

Expand full comment
sean pan's avatar

This has to do with the fact that they are people, and as people, they realized that everyone dying is bad.

It's pretty simple. This has less to do with what they are at this point, and more to do with humanity as a whole.

Expand full comment
William's avatar

So why call them "SAG"? Do they have extra credibility for being "SAG"? If yes, it is misplaced in this context. If not, why bring up SAG at all?

Why I should care about SAG's opinion is not addressed. And for a fundamental reason: I shouldn't care. They have no special insight into the issue that any man off the street wouldn't have. It is purely an appeal to (misplaced) authority and prestige to even cite them at all. This post would read very differently if it were written as "some person off the street agreed with us".

Expand full comment
sean pan's avatar

It is just to indicate that they are an organization that has awareness of x-risk. ParentsTogether, etc. are all equally valid in their concern about humanity not dying.

You should care about x-risk, which is pretty evident at this point and against which there is no good argument.

Expand full comment
quiet_NaN's avatar

> Being fair to them is noting that they have updated their positions.

It would be even more impressive if their updated position did not coincidentally look similar to how their previous position would look once it recruited other arguments as soldiers.

Say someone wants to build a highway through the plot of land where my house is on, and I really don't like this for obvious reasons. While I research the impacts of the highway, I become aware that the nearby woods in the proposed path shelter the rare blue-dotted frog. I become a vehement proponent of conserving that blue-dotted frog.

Is it possible that I have arrived at my stance of frog preservation through unbiased thinking, and that I would equally support preservation if the highway had to go either through the habitat or my house? Certainly. Is it a likely explanation of me suddenly caring about frogs? Certainly not.

Expand full comment
sean pan's avatar

I think it's more akin to having aliens that pass through your land - initially annoyed by that, you then learn that they also lead to contamination of the land that causes the death of your children. As such, while you originally arrived at it out of annoyance at a lesser cause, you come to realize that the greater problem is there too.

Expand full comment
Scott Alexander's avatar

"No arguments for SB 1047 are provided. "

Did you read the part where I said "In case you’re just joining us - SB 1047 is a California bill, recently passed by the legislature but vetoed by the governor - which forced AI companies to take some steps to reduce the risk of AI-caused existential catastrophes. See [link to previous ACX post] for more on the content of the bill and the arguments for and against; this post will limit itself to the political fight"?

Expand full comment
William's avatar

This post is a lot of hot air and belly-aching that "we the righteous did not win". A small summary of pro-arguments would have been good to balance out the post. This is my first time commenting, but I have been reading ACX on-and-off for a while and this is far from what I would expect.

For someone who is happy that this bill was vetoed and sees this bill from a different POV, this post just reads as self-righteous griping. "We the in-group are eternally blessed with wisdom. The out-group is evil, stupid, and is blocking our salvation." I said in my OP that the bill wasn't entirely meritless, but it is far from good.

Expand full comment
sean pan's avatar

The issue is that we should try to do things to keep ourselves alive, since there is no good argument against x-risk, and indeed, we are seeing widespread evidence of x-risk behavior right now, such as instrumental convergence (see the o1 safety card).

Expand full comment
anomie's avatar

> The issue is that we should try to do things to keep ourselves alive

Speak for yourself. Not everyone is as sentimental about humanity as you are.

Expand full comment
Philo Vivero's avatar

Speak for yourself. An overwhelming majority of humans are.

Expand full comment
bell_of_a_tower's avatar

Politician's syllogism:

1. Something must be done.

2. This is something.

3. Therefore, this must be done.

Just because someone claims that there is this huge risk and says that this will address that (in vague, highly-emotionally-laden terms) does not mean that this is the right thing to do. Or, in fact, that there is a risk at all or that that risk can be addressed using legislation.

Expand full comment
Applied Psychology's avatar

"Just because someone says something doesn't mean that it's true."

Wow, that's a pretty good point. Maybe then we should look at the actual arguments they're making to determine if they're right then? Or address those?

Just a thought from me, since it seems like you might not have a lot of your own.

Expand full comment
Paul Goodman's avatar

I mean if you can't be bothered to follow the link saying "I covered X here, in this post I'm going to talk about Y instead" then I think coming into the comments complaining "This post is bad because it doesn't discuss X" makes you basically a troll.

Expand full comment
Dweomite's avatar

I really am not sympathetic to complaints of the form "but you didn't repeat the points you'd already made in a previous article that you specifically linked and said you weren't going to repeat". I feel that people should be allowed to divide big complex topics into multiple articles without being required to repeat themselves in every article.

If that is your headline criticism then I will discount all of your other criticism significantly.

Expand full comment
aiden carter-hughes's avatar

A more appropriate response would be "Sorry! I must have missed that. I'll go read it now."

Expand full comment
staybailey's avatar

I would push back pretty hard on the claim that providing arguments for (or against) SB 1047 would have improved the quality of the post. I think it is very useful for people to be able to discuss *tactics* of a cause without having to justify the cause itself every time. As an intuition pump, I think it makes perfect sense for someone to write about how Kamala Harris (or Donald Trump) should run their campaign in order to improve their chances of winning without spending effort justifying why they want that particular candidate to win. To add such justification would reduce the clarity of such a post and probably change nobody's mind on which candidate is better.

Indeed, I think the plausibly more useful thing to have added here would have been more clearly stating something like: "this post is based on the assumption that my (Scott's) positions on AI safety and SB 1047 are directionally correct." Then anyone who disagreed with that assumption could just not read the post. Or they could continue to read it to the extent that understanding your political opponents' (for lack of a better phrasing) tactics is still interesting and/or useful.

Expand full comment
Andrew's avatar

The post was titled *our* side of the story. I don't have strong opinions about this bill, but I did not click on the headline expecting an even-handed, thorough analysis of the issue (Scott writes plenty of those) and I was not disappointed.

Actually, if I'm honest, I wanted to hear more about his ex-girlfriend, but that doesn't seem to be the thrust of your complaint.

Expand full comment
polscistoic's avatar

The most interesting point in your post, which (contrary to the opinion of the author you quote & yourself) implicitly shows why it is dangerous for the US to put in place legislation that hampers further AI development, is this point:

“Personally I am deeply alarmed by military applications of AI in an age of great power competition. The autonomous weapons arms race strikes me as one of the most dangerous things happening in the world today, and it’s virtually undiscussed in the press."

Here’s the thing: Do you think Russia, China, and other high-science-capacity states will put in place similar legislation that hampers further AI development - with its obvious, fantastic military potential (also concerning potential future conflicts with the US)?

Sure? Are you really willing to take the risk? Why?

It's not a rhetorical question. I would love nothing more than a relax-this-will-be-no-problem-at-all answer.

Expand full comment
User's avatar
Comment deleted
Oct 10
Comment deleted
Expand full comment
polscistoic's avatar

"Russia also isn’t a scientific power at this point and China is far behind on chips."

....that is sufficient to comfort you? That right now they are behind?

Expand full comment
User's avatar
Comment deleted
Oct 10
Comment deleted
Expand full comment
polscistoic's avatar

It is not a stable world, period.

If you want to take the risk that other countries will never match AI innovations on par with & then surpassing the US, with a US AI-tech sector whose innovating capacity you have hampered with legislation, then go ahead and put in place such legislation. It is your risk, and your choice. Just be aware that that is what you are doing - you are taking on a potentially very, very large risk.

...I am not an AI, by the way :-) Not at all! Rest assured. But to nonetheless see the world a bit from the AI's perspective: You slow down our ability to kill humans fast, efficiently, and in large numbers at your own peril.

Expand full comment
av's avatar

One of the reasons a major Chinese company may be undervalued is because it's a major Chinese company. The CCP (you know, the _Communist_ party of China) can stop trading of any Chinese company at any time they wish, so it's kinda risky.

China is behind in chips mostly because of the US-imposed sanctions, and they still manage to be somewhat competitive despite that. If they invade Taiwan, the West is pretty much toast (see Dwarkesh's latest Asianometry podcast for details).

Expand full comment
TGGP's avatar

"While Muslim states resorted to violence in 53.5 percent in theri crises, violence was used by the United Kingdom in only 11.5 percent, by the United States in 17.9 percent, and by the Soviet Union in 28.5 percent of the crises in which they were involved. Among the major powers only China’s violence propensity exceeded that of the Muslim states: it employed violence in 76.9 percent of its crises." https://entitledtoanopinion.wordpress.com/2007/09/12/samuel-huntington-hates-the-chinese/

Expand full comment
Silverax's avatar

The chance of Russia and China independently developing (useful) AI for weapons is _many_ times smaller than the chance of them just stealing or using an open (LLaMa N) American-developed model.

If you're worried about China weaponising AI, you should be supporting this bill.

Expand full comment
sean pan's avatar

As Hinton said, the autonomous weapons problem and the superintelligence problem are in fact separate. One can continue to build defensive weapons without making them generally capable at everything, which is ultimately a much deeper problem than a narrow killing machine.

Expand full comment
Scott Alexander's avatar

One of the planks of SB 1047 was to force AI labs to up their security level so China can't keep stealing US AI work. Most of the people I know concerned about the great power conflict angle think this is 1000x more of a problem than China leapfrogging us because we have a tiny amount of regulation.

Expand full comment
Erusian's avatar

I've read the bill and I didn't see anything like that. Could you please point out where it is?

There have been (rather clumsy) attempts in red states to exclude Chinese nationals and foreigners from what those states regard as sensitive research and those were attacked from the left as racist. So if there was something in this bill it'd be a decent bellwether for what the Democrats intend to do.

Expand full comment
Scott Alexander's avatar

" Before beginning to initially train a covered model, the developer shall do all of the following: (1) Implement reasonable administrative, technical, and physical cybersecurity protections to prevent unauthorized access to, misuse of, or unsafe post-training modifications of, the covered model and all covered model derivatives controlled by the developer that are appropriate in light of the risks associated with the covered model, including from advanced persistent threats or other sophisticated actors."

Expand full comment
Erusian's avatar

That requirement is part of a wider section where they have to write a policy, implement it, and deposit it with the state which then serves, post-breach, to determine if the state fines them for the breach. This is separate from an affirmative requirement to force them to upgrade their security or to take any specific measures.

Also: APTs and sophisticated actors are both criminal terms, not international ones. Now you can argue that a lot of such theft is done by criminal organizations. Which is more true for Russia than China. But this doesn't look like what you're saying it is.

The Biden EO didn't have much either. Which annoys me, to be honest. "Ban all the Chinese from universities" and "let's pretend it's all independent criminal syndicates" are two bad approaches.

Expand full comment
sean pan's avatar

It's a start; liability is a bare minimum. I agree that we should force more cybersecurity, but this objection amounts to the usual "we cannot ban bigger bombs while there are still smaller bombs that can be exploded."

Expand full comment
Erusian's avatar

No it's not. It's "making more laws is not a good answer to non-enforcement of existing laws."

Expand full comment
Michael Watts's avatar

That legal language is as valuable as the bill in this Onion article:

https://theonion.com/proposed-bill-would-bring-4-000-troops-back-to-life-1819569473/

“This will not only improve morale at home and abroad, but will also make everything all better.”

The mistake is so old that there's a popular story and saying attributing it to a pre-Norman king of England.

Expand full comment
polscistoic's avatar

Ok, but hear me out… I have not read SB 1047, but like all regulations it creates sort of an arms race where those who are regulated invent ways to dodge the regulations (here: China inventing ways to break the security level), leading to regulators trying to catch up, and so on forever.

Much more important, though: You seem to have (if I may say so) a rather haughty way of thinking about technological progress. Implicit in your argument seems to be that you do not think the Chinese, or Russians, are smart enough to bypass US AI efforts simply by being very smart people themselves. You only worry that they may "steal" US technological advances. But what if they themselves, by their own scientific efforts, are able sooner or later to bypass you, because they are not burdened by your (tiny) regulations?

As I know that you know, the Russians at least have a very impressive mathematics-and-physics tradition. (It probably helps that their old-time ideology made “science” almost God-like.) We are not talking about Afghans wasting their intellectual energy forever puttering about with different Koran interpretations. The Russians and the Chinese are formidable people. Wish we could have them as friends!

But since the cards are presently being played so that we cannot be wholly sure they are (or will stay) friends, you arguably have to “prepare for war to ensure peace”. And if you (meaning the US) install regulations that other countries do not put in place, regulations that slow down your advances in AI technology to kill humans fast, efficiently, and in large numbers, you risk being bypassed by less squeamish, and very capable, opponents.

I have not checked this reasoning with AIs, but I am quite sure they agree.

Expand full comment
Seta Sojiro's avatar

China is more likely to surpass the US because they make it very easy to build things, while the US sets up multi-year permitting processes to build a power plant or a datacenter. The compliance cost of this bill would probably have been in the millions, which is substantial but not going to drive these companies out of business.

Actually that would be a great compromise bill - SB 1047 stapled onto a state level energy permitting reform.

Expand full comment
Chastity's avatar

> Here’s the thing: Do you think Russia, China and other high-science-capacity states will put in place similar legislation that hamper further AI developments - with its obvious, fantastic military potential (also concerning potential future conflicts with the US)?

Yes.

Geneva Convention, START, NPT, United Nations Mercenary Convention, etc. It's routine and widespread for countries to come to mutual agreements about how to conduct themselves in military affairs to minimize even relatively trivial harms (e.g. banning of gas weapons, decline of flamethrowers), much less "oops we blew ourselves up." The concern is not that China will have an edge, but that GIGO + AI murder machines = Skynet, and it is not in China's interest to develop Skynet any more than it is in any human's interest.

Expand full comment
polscistoic's avatar

You are right in principle, but in practice getting efficient legal wording in such an international treaty will be rather tricky. Here’s the problem: Poison gas, atomic weapons and the like are “things”. You have them or you have them not, you use them or you use them not. Meaning that you can clearly target them in a legal treaty.

AI, by contrast, is a more ethereal and forever-emergent phenomenon. A moving target, if you like. There is no sort-of “end point” to when you have it. Second, and related, it is not even clear if it is a graspable “thing” at all. A bit like there is no clear definition of what a “mind” is, or if a mind is a “thing”. (We have been debating if the mind is a "thing" or not at least since the days of Plato and Democritus.)

This does not mean that it is impossible to find (legal) words that stick in an international treaty, but it is going to be very challenging. At least, you will really need AI experts and legal scholars to work together on that one.

Edit: And then there's the monitoring challenge. How to monitor if a country has developed military AIs with "too advanced" minds?

...Notice also that this is a coordination problem that easily spins into a Prisoner's Dilemma situation. Each signatory state may be tempted to cheat on the agreement to gain a definitive upper hand in a future war. While at the same time being afraid of being played by the others as a sucker if it is the only one that sticks to the joint agreement.

...the only ones coming out on top of this game are future, very powerful military AIs.

...come to think of it, this PD logic is a good script for a future dystopian sci fi movie. Sort-of Dr Strangelove mixed with Terminator 3. (And further down the line, we'll see if life will once again imitate art.)

Expand full comment
av's avatar

All of this legislation happened after the bad things to be legislated away had happened and had horrified everyone. We may not have this luxury with AGI.

Expand full comment
John Schilling's avatar

There is approximately zero risk that Russia will develop an AGI any time soon, and very low risk that China will do so. There is a *much much higher* risk that China will steal the most capable and alignment-flexible AI that Silicon Valley (yes, including Elon's nominally Texas-based company) manages to produce.

If you don't want to live in a world where the AGI is aligned to the CCP, then you really need to make sure that nobody in California builds a highly capable AI without baking in strong alignment from the start.

Expand full comment
polscistoic's avatar

...it's "any time soon" that are the important words in your first sentence.

The shorter-term risk is higher if you do not put brakes on present US AI research, sure.

But the longer-term risk is that by putting on such brakes, Russia, China and other high-science-capacity nations will bypass the US by their own research efforts.

Choose your poison, as they say.

Expand full comment
John Schilling's avatar

We're being repeatedly told elsewhere in this thread that SB 1047 can have only a short-term impact because the AI researchers will just all move to Texas. If the law is likely to be beneficial in the short term, irrelevant in the longer term, but will provide useful experience to anyone trying to come up with long-term policies, I'm seeing nothing but win here.

Well, except for the fact that the good guys lost this round :-(

Expand full comment
polscistoic's avatar

You have a good point. California is not the US. Different state-laws give you an opportunity to test the effects of different regulatory regimes. Or at least get a feeling of their impact.

Expand full comment
John Schilling's avatar

Right - if the plan had been "SB 1047 and we're done", that would be poor and inadequate. But SB 1047 now, in California while AI is still a California Thing, then revisit in a couple years when we know more, would have been a good plan. Worst case, California gets stuck with cripplingly bad laws (but what else is new), and AI gets properly developed elsewhere.

Instead, our first serious AI regulation bill will probably be at the Federal level, with a bigger potential for locking in bad law across the nation.

Expand full comment
Lucas's avatar

How do you get into AI safety? Assuming general software engineering knowledge but no AI knowledge.

Expand full comment
sepiatone's avatar

As a start you could look at BlueDot's AI Alignment course [1]. Applications for their next cohort recently closed, but you could go through their curated list of readings to quickly get up to speed. Also see AISafetyDotCom [2] for other Slack channels / Discords / communities working on AI safety.

[1] BlueDot AI Alignment course, https://aisafetyfundamentals.com/alignment/

[2] AISafetyDotCom, https://www.aisafety.com/

Expand full comment
Lucas's avatar

Thank you!

Expand full comment
Egg Syntax's avatar

I'll second sepiatone's recommendation of BlueDot's course, and going through their curriculum on your own provides a lot of the value. If you want to go further, consider applying to one of the programs designed to help people skill up in AI safety research, like AI Safety Camp, SPAR, MATS, and a few others. There's also plenty of need for good devs at various AI safety organizations.

Also feel free to reach out to me directly -- over the past year I've transitioned from being a software engineer to working full-time on AI safety, and I'm happy to share the knowledge I've gained about making that transition (that offer is open to others as well, just DM me here on substack).

Expand full comment
Lucas's avatar

Thank you, that helps!

Expand full comment
Steven's avatar

The greatest and most obvious danger of AI is that someone else gets it first. A Chinese or Russian AI is more dangerous than an American AI. The risks that the EA crowd worries about may also be real (though I think they're overstated), but they are unavoidable. Someone will develop AI, and it's much better if it's us. Regulating AI prevents that and should be avoided. Vetoing the bill was the right decision.

Expand full comment
Deiseach's avatar

That only works if you think that California will develop AI before Russia or China. Suppose they don't? Then you have no regulation at all and you've still lost out on being first to the magic money making fountain.

And even if California does develop AI first, that does not mean Russia or China will throw up their hands, go "oh gee, we're beaten!" and stop their own projects. Indeed, it will probably only spur them on even more to get their own Magic Money And World Changing AI.

Remember the atomic bomb: America showed off what it had in superior tech with the bombing of Nagasaki and Hiroshima. Did the Russians then go "oh shoot, no good us trying to get our own bomb, they did it first" or did we instead get the arms race?

I'm sceptical of the doom arguments but I think this entire result shows the genuine motivation of the big AI tech companies (just as with Sam Altman and what went on in OpenAI): the rhetoric is about "access to superior medical technicians and educators for the good of the common man", but the motivation is "$$$$$$$$$$$$$$$$".

And when you're fighting against "This is what makes America such a great world leader: the business of America is business" then yes, you're going to lose.

Though I did appreciate the gossip about Governor Newsom and the prawn sandwich brigade 😁 I mean, yes? He and his family before him have been sucking up to the Gettys for decades, of course he's in the pockets of the wealthy donor class.

https://en.wikipedia.org/wiki/Prawn_sandwich_brigade

https://www.youtube.com/watch?v=0vBai4W6TrI

Expand full comment
Melvin's avatar

> And even if California does develop AI first, that does not mean Russia or China will throw up their hands, go "oh gee, we're beaten!" and stop their own projects. Indeed, it will probably only spur them on even more to get their own Magic Money And World Changing AI.

Well remember that many AI Safety people believe that the moment someone develops a superhuman AI, this sets off a chain reaction that leads either to doom or techno-rapture within weeks. If their assumptions are correct then there's no opportunity for anyone else to ever catch up.

The big assumption is that if a human can make a superhuman AI, then that superhuman AI can make a supersuperhuman AI, and so forth. I've always thought this is BS, and that the best idea the superhuman AI will have on how to build a supersuperhuman AI is "uhh, make it bigger and train it longer".

Expand full comment
Deiseach's avatar

Yeah, such assumptions make me think of a chain of lemmings pointing forward and going "Let's all gallop headlong onwards over that cliff! If we stop, who knows what dread consequences may occur because of it? Onwards and over! Onwards and over! Faster, faster! We are not galloping headlong fast enough!"

Expand full comment
Philo Vivero's avatar

You're thinking of it wrong. AIs won't be lemmings. They'll be as smart as the smartest person you've ever seen or met. And they can clone themselves and work in concert.

Expand full comment
Deiseach's avatar

It's the smart people I'm comparing to lemmings, not the AI. "Quick, let us rush forward with this because oh no what if China? Forward over that cliff, don't stop to think!"

Expand full comment
TGGP's avatar

They may believe it, but it's not a sensible belief. AIs seem to be constrained by resources invested in computing, not by someone coming up with the brilliant innovation behind intelligence (permitting you to run it on now decades-old hardware), as Eliezer Yudkowsky believed. https://www.lesswrong.com/posts/gGSvwd62TJAxxhcGh/yudkowsky-vs-hanson-on-foom-whose-predictions-were-better#_Algorithms_are_Much_More_Important_Than_Compute_for_AI_Progress_ If AI is constrained by computing power, then we should expect it to be relatively distributed rather than concentrated https://www.overcomingbias.com/p/bad-emulation-advancehtml

Expand full comment
quiet_NaN's avatar

Our current compute is nowhere near the fundamental limit. An ASI might find ways to manufacture new chips.

Granted, this is not instantaneous. Likely, no amount of intelligence will enable you to print a cutting edge chip plant in a standard 3d printer.

If you have a million virtual von Neumanns running in 1900, they might look at a black-body radiation spectrum and instantly invent quantum mechanics. They might hypothesize that some nuclei are fissile, but they would still require some experiments before designing a working nuke. Or a transistor, for that matter. In the end, behind technological marvels there are mundane skills to be learned experimentally, like building good vacuum chambers. Still, if there were a peer power whose million von Neumanns only came online in 1901, they might possibly not catch up, because they would likewise require years of experimentation before they get to the tech which will secure world domination.

Expand full comment
Shankar Sivarajan's avatar

> A Chinese or Russian AI is more dangerous than an American AI.

Why do you believe this?

Expand full comment
Silverax's avatar

Assuming they are American, then it should be obvious why it's dangerous for your geopolitical enemies to get a godlike weapon?

Expand full comment
Shankar Sivarajan's avatar

Only if you believe the "godlike" AI will be a tool that can be controlled. This is, generally speaking, NOT the view of "AI safety" people.

Expand full comment
Silverax's avatar

Not really. If it's godlike (or even say GPT-6 levels) and _can_ be controlled then it's _very_ obvious why it would be bad.

If they do lose control, it's still _more_ dangerous if the opponent loses control.

Expand full comment
av's avatar

If your opponent is known for being extremely controlling and restrictive, and you're known to be freedom-loving but reckless and permissive, they may be less likely to lose control than you, which may end up being more important than loving freedom when push comes to shove.

Expand full comment
sean pan's avatar

Then we should cooperate to prevent everyone from dying, rather than shooting ourselves in the face first.

Expand full comment
TGGP's avatar

America hasn't done anything as crazy as Mao's Great Leap Forward. But maybe our capacity for insanity shouldn't be underestimated.

Expand full comment
Gullydwarf's avatar

US craziness is very much outward facing:

- Vietnam war, with Agent Orange and other interesting stuff

- Central America coups (and the origin of the term 'banana republic')

- Iraq invasion, nation-building attempts leading to the rise of ISIS and then valiant efforts to defeat it

Expand full comment
TGGP's avatar

Intervention in Vietnam failed, but I don't think propping up a non-communist regime against invasion by a communist one was quite so crazy since we had previously accomplished that in Korea. Intervention in Central America didn't involve the costs (in both dollars and lives) to the US of Vietnam, so crazy doesn't seem the right term for that either. Nation building in Afghanistan & Iraq fits that label better.

Expand full comment
Catmint's avatar

Why should anyone who had to live under Stroessner care about how many American lives his CIA support didn't cost? (Ok, I cheated and changed Central America to South America. But you get the idea - look at it from the outward-facing view.)

Expand full comment
TGGP's avatar

Such a person wouldn't have to care, but also wouldn't regard it as "crazy" (rather than, say "evil").

Expand full comment
sean pan's avatar

Killing yourself first is not winning; more regulation would provide us with safety.

Expand full comment
Scott Alexander's avatar

One of the planks of SB 1047 was to force AI labs to up their security level so China can't keep stealing US AI work. Most of the people I know concerned about the great power conflict angle think this is 1000x more of a problem than China leapfrogging us because we have a tiny amount of regulation.

My impression is that the Chinese are several years behind us, don't currently think we're in a race, and are putting a bunch of regulations on their own side. My expectation is that in a few years they figure out this is actually important enough to race over and try to steal our stuff, which they are currently on track to succeed at because this bill's security provisions were vetoed.

Expand full comment
Seta Sojiro's avatar

>My impression is that the Chinese are several years behind us

Probably not several years. Qwen is maybe a year behind OpenAI. Chinese video models are more advanced than anything in the US. China has access to less compute than the US due to trade restrictions, but they poached one of Taiwan's best semiconductor experts, Mong Song Liang, and can probably match the US within a few years. And they have a much higher state capacity and desire to build infrastructure (data centers, power plants) than the US.

My earnest advice to SB 1047 proponents is a simple compromise. SB 1047 combined with energy permitting reform. I'm pretty sure AI companies will drop their objections very quickly because energy is the biggest bottleneck in this country right now.

Expand full comment
Jerome Powell's avatar

Can California even do energy permitting reform? And shouldn’t most of that power be somewhere emptier anyway?

Expand full comment
Seta Sojiro's avatar

California isn't just LA and SF. It's the third largest state in the nation. And yes, they can - permitting isn't only federal, there are state laws as well that could be streamlined. Especially in California, it is notoriously difficult to build anything due to state laws.

Expand full comment
Gary Mindlin Miguel's avatar

Cool idea! But I'm skeptical any realistic amount of permitting reform would make California the best state to build data centers in.

Expand full comment
Cjw's avatar

Better if it’s us? There is no “us”. There’s just a guy like Zuckerberg or whatever militarized government agency runs in there to seize it when they hear somebody is close. Whoever that is will use it to their benefit, if it’s capable of such.

I don’t think it’ll matter, as alignment is impossible so whoever creates it will just be destroyed the same as we are. You aren’t gonna have “ASI with Chinese characteristics” furthering the glory of the Han race through conquest any more than you’ll have ASI that loves capitalism and liberty.

Expand full comment
quiet_NaN's avatar

> Someone will develop AI and it's much better if it's us.

'AI' means a lot of things to a lot of people, from Counter Strike bots to singularity-inducing machine intelligences. The terms you might want to use are 'artificial general intelligence' (e.g. as versatile as humans are) or 'artificial super intelligence' (e.g. quickly going to a tech level as far from our current state as we are from the neolithic).

'Someone will develop AGI' or 'someone will develop ASI', especially with the implication being 'on the time scale of our current world order, not in 5000 CE on terraformed Mars', is far from obvious. It could very well be that the gigantic efforts by current AI forerunners will be economic failures, and LLMs will henceforth progress at the same sedate pace as mobile phones: they slowly get better (for some value of 'better'), but no new generation is really a game changer.

If instead, AGI should go to a level where it can replace IQ 120 humans, whoever wins the race will not achieve world dominance through it. No amount of raw thinking on that level of intelligence is likely to circumvent nuclear deterrence, and other countries will catch up within a few years.

If, instead, ASI happens, all bets are off. If it is unaligned, it will go equally badly no matter who created it. If it is aligned and propels us into an era of post-scarcity, I think the vision of the future for most governments would be acceptable. A US-inspired Milky Way of the free or a China-led true communist utopia might not actually be that different. Hamas might tell their ASI to first kill all Jews, then all Americans (either of which would be bad), but even they would run out of enemies to kill eventually. Having the Aztecs or Daesh control the ASI might be dystopian for large fractions of the human population, but I have some faint hope that with a universe at your fingertips, you find something better to do than human sacrifice or child marriages.

Expand full comment
Brendan Richardson's avatar

I doubt it. Crushing your enemies is a terminal goal for large numbers of people. I assume that if the Nazis had won and run out of Jews and Slavs to murder, they'd just pick another ethnic group, or start subdividing Aryans into uberubermenschen and unterubermenschen.

Expand full comment
rotatingpaguro's avatar

I am not sure whether I prefer China or the US to develop AI first. I personally prefer American ethos & culture but my preference is less important than humanity's fate. So I ask: are you coming from a place of selfishness, or do you believe that the US developing AI first is better from an outside point of view?

Expand full comment
Linch's avatar

This has been discussed at least 1000 times before if not 10,000.

Expand full comment
Scott's avatar

“smaller” existential risks have also come into focus, like AI-fueled bioterrorism, AI-fueled great power conflict, and - yes - AI-fueled inequality...

The big current attack, generally unconsidered, seems to be AI - maybe ur-AI - fueled infertility. Simply: the internet and internet applications like social media, games, porn et cetera have dropped the fertility rate below replacement for the rich half of humanity already. Immortal superintelligences don't have to hurt us to kill us off, just entertain us.

Expand full comment
B Civil's avatar

>Immortal superintelligences don't have to hurt us to kill us off, just entertain us

There’s something to this. Really there is.

Expand full comment
niplav's avatar

Seems unlikely: https://gwern.net/amuse

Expand full comment
Scott's avatar

154 countries either unrated - sorry - or below estimated replacement rate of 2.1: https://en.wikipedia.org/wiki/List_of_countries_by_total_fertility_rate#Country_ranking_by_most_recent_year

I'm not talking about starving to death from wireheading, I'm talking about having two or fewer kids because life's interesting.

Expand full comment
Mo Nastri's avatar

> AI - maybe ur-AI - fueled infertility

Is this framing intended to suggest that "fixing the internet and its apps" (suitably steelmanned) is the key to reestablishing fertility rate above replacement? Seems very disconnected from the literature on this issue.

Expand full comment
warty dog's avatar

typo: tried to him recalled [get him]

Expand full comment
jbm's avatar

I have recently been on the fence with regards to AI safety, SB 1047 in particular. Previously I was on the “have to worry more about regulation than AI” side of the fence. Having read this, I think I’m going back to that side of the fence. I can’t put a finger on it, but there is something deeply troubling here. There isn’t a section here that doesn’t come off as blinded by bias. And that doesn’t reflect badly on SB 1047 directly! But if everyone involved is working with the same mindset (and my small sample, mostly people mentioned in the post, includes no counterexamples), it’s impossible to trust the bill, or any bill, even if my naive, lay interpretation agrees with it. You don’t need my vote, though.

Expand full comment
Melvin's avatar

I think you're right to be concerned, and let me try to put a finger on it for you.

Nowhere in this article is there a discussion of the pros and cons of this bill. Nowhere is there a discussion of what an ideal system of AI regulation would look like, and how this bill differs from that. It's the politician's syllogism: something must be done about AI regulation, this is something, therefore we must do this.

If the proponents of regulation haven't thought very hard and deeply about exactly what sort of regulation would be ideal, and instead are willing to jump onto supporting the first set of regulations that come along, this seems like a recipe for getting bad regulation.

Expand full comment
Tom Hitchner's avatar

Didn’t Scott link to his argument for the bill in the beginning of this piece?

Expand full comment
Mo Diddly's avatar

He did indeed. It’s a thorough explanation of what’s in the bill and why Scott and Zvi think it’s good on net. It’s a link though, and I do get that it’s not reasonable to expect people to click and read every link, but I do hope that those complaining about Scott’s “bias” will at least read through it and engage with his actual arguments.

Expand full comment
Tom Hitchner's avatar

People don’t have to click and read but they shouldn’t then complain he didn’t provide the argument.

Expand full comment
Mo Diddly's avatar

I’m trying to be charitable

Expand full comment
jbm's avatar

The linked post and Zvi FAQ are convincing, with sound arguments, afaic. The point is that this post undermines my trust in those as good-faith representations of the central issues in the bill and their implications. I do not think of Scott as biased, which is part of the reason why the original post was convincing, and why this one is surprising. It’s unfair to judge Zvi’s arguments off of this, I’ll admit.

Expand full comment
Paul Goodman's avatar

This seems like a really weird attitude to me. Once Scott's adequately explained why he strongly believes this course of action is correct, why does it make you think he's "biased" when he describes pursuing it passionately?

Expand full comment
B Civil's avatar

>it’s not reasonable to expect people to click and read every link,

Why isn’t it reasonable?

I seriously don’t understand.

Expand full comment
Mo Diddly's avatar

Again I’m trying to give ppl the benefit of the doubt

Expand full comment
B Civil's avatar

Fair enough. My tendency would be to take a harder line on the subject....hehheh. :)

Expand full comment
Melvin's avatar

I like Scott better as an aloof, cynical yet charitable, observer of politics rather than as an actual participant. I like it better when he's floating above the battlefield opining sadly on Why We Can't All Just Get Along, so it's a bit sad to see him drop to the ground and pick up a pointy stick and join one side of a particularly dumb battle. I feel like floaty-Scott would have a much better perspective on a lot of these things than pointy-stick Scott; for instance floaty-Scott would have an appropriately cynical attitude towards the "65%" opinion poll, understanding that you can get pretty much any result you want by describing the bill in slightly different ways.

Expand full comment
sean pan's avatar

The poll was designed with an opponent of the bill in order to maximize truthfulness.

Expand full comment
Biff Wiss's avatar

It doesn't mean the end result was a success.

Expand full comment
sean pan's avatar

That the public is widely supportive of AI regulation shows up across the board, in multiple polls everywhere. Most people do not want to die or become powerless.

https://www.vox.com/future-perfect/2023/9/19/23879648/americans-artificial-general-intelligence-ai-policy-poll

"63 percent of Americans want regulation to actively prevent superintelligent AI, a new poll reveals."

https://washingtonstatestandard.com/2024/08/28/americans-perception-of-ai-is-generally-negative-though-they-see-beneficial-applications/

"The negative and positive sentiments recorded by the poll found very little variation between the gender, age and racial groups. The negative sentiments of AI’s impact on society were held across the entire political spectrum, too, Cooper said.

Another uniting statistic was that at least 93% of respondents believe that it’s at least “moderately important” for governments to regulate AI."

Expand full comment
B Civil's avatar

I just had a very intrusive thought that someone should make an argument that AI should be covered by the second amendment

Expand full comment
Scott's avatar

Hoo boy.

Expand full comment
JoshuaE's avatar

Most people wanted to ban nuclear power too and we live in a worse world because of it.

Expand full comment
Silverax's avatar

Come on! He explicitly compared it to the opposition's radically biased poll. The new one is described as using adversarial collaboration to get the wording.

Are you making a fully general argument against polls? Are we as a society incapable of deriving _any_ information about public opinion from polls?

Expand full comment
Roger R's avatar

My observation on Scott's writing is similar to what you're saying here, but slightly different.

My personal experience is that the Scott posts I like the most are the ones he has the most personal distance from. In other words, he's not talking about a group or movement that he's a part of, or a politician that is *local* to him. I very fondly remember his write-ups on "Dictators", and I especially liked his write-ups on Modi and Erdogan. I felt I learned a lot from both of those write-ups; I had a far better understanding of these national leaders after reading Scott's write-ups on them than I had before. And I'm definitely thankful to Scott for that. But even within that Dictators series, the one I found the weakest was the one closest to Scott personally - Orban. On the Orban write-up, I felt there was more of Scott's own political preferences taking over the writing, perhaps because Orban is a leader of an *EU* nation, a nation in "the west", meaning there's less comfortable distance between him and Scott than between Modi/Erdogan and Scott.

If the topic is a political leader or historical figure in Asia or Africa or the Middle East, Scott tends to knock it right out of the ballpark. But if the topic is an American/European political leader or AI safety, well... it's more hit and miss, imo.

Expand full comment
boop's avatar

Don't you think this might be because of *your* perspective though - that you are more fond of investigation on the subjects that are more distant from *you*?

In other words, if Scott picked a subject near and dear to your heart, but very distant from Scott, do you think you would like it any more or less based on that?

Of course I could be totally off and you could be e.g. Turkish and then I'd eat my hat.

Expand full comment
Roger R's avatar

Maybe you're right. I like learning more about people and ideas and things that I don't already know much about. It's partly why I generally avoid the *mainstream* media, because they're just presenting neoliberal viewpoints that I've already heard a thousand times before.

Expand full comment
Deiseach's avatar

At the same time, it is fun to see that Scott is sufficiently hardened by life (or simply living adjacent to San Francisco) to not be fawning at the feet of Governor Hairstyle as without flaw and only Trolls and Republicans (but I repeat myself) could possibly object to anything he does or ascribe any but the purest of public service motivations to him.

Maybe this could be Newsom's campaign slogan should he decide to try for the national stage: "Once I'm bought, I *stay* bought! Honest corruption!"

Expand full comment
TGGP's avatar

I recall that was the villainous Yorkshire real estate developer's reasoning for supporting Labour in Red Riding 1974.

Expand full comment
Timothy M.'s avatar

I think it's a lot easier to do the thing you like than it is to actually engage in the process and put forward any kind of detailed plan or argument for something concrete, though, so, even though I'm not particularly sold on this bill or even this overall issue, I say kudos to everybody involved for actually trying.

Expand full comment
Biff Wiss's avatar

> His letter explaining his veto is - sorry to impugn a state official this way, but everyone who read it agrees - bullsh*t.

Please, I beg of you, stop doing this. Either use the word "bullshit", or, if you can't bring yourself to do a cuss for whatever reason (despite featuring the exact same root word in literally the next screenshot), then use some of your considerable linguistic prowess to *find a different word*. The asterisks thing is nails-on-a-chalkboard irritating.

Expand full comment
Madeleine's avatar

Seriously. We're all adults here, and even if we weren't, any kid can type "bullsh" into Google or Wiktionary and see what comes up. Censoring bad words in articles aimed at adults is annoying enough, but censoring out a single vowel is both annoying and pointless. Is "bullshit" some kind of magic spell that loses its power if you change one letter?

Expand full comment
Rachael's avatar

I assume it's so people can still read his posts from behind workplace profanity filters etc.

Expand full comment
John N-G's avatar

So at least have fun with it and say bul*shit

Expand full comment
Deiseach's avatar

Drop all the vowels? "Bllsht"? We can have fun guessing what words are these by suggesting which vowels they might be!

Or opposite - drop all the *consonants* for the Mega Challenge Level: "your word is '-u----i-', what is your guess, contestant?"

Expand full comment
B Civil's avatar

HornSwoggle

Expand full comment
Biff Wiss's avatar

> I assume it's so people can still read his posts from behind workplace profanity filters etc.

This would be a person without a phone, on a jobsite with an especially-strict net filter (which clearly doesn't do any business with Scunthorpe involving shittake mushrooms). A jobsite which seemingly bans personal phone use and also institutes extremely strict content blocking, but doesn't mind non-work-related web surfing in the abstract, as long as there are no naughty words. And this hypothetical individual would have to preemptively not-load the comment section.

I think that's a sub-Lizardmen Constant percentage of the overall ACX (and, to be honest, online-as-a-whole) demographic.

It also doesn't change the fact that if Scott really, really, really wants to cater to this razor-thin sliver of hypothetical humanity, he could just choose a completely different word. "Nonsense" (or "nonsensical"!), "inane", "insufficiently credible", "dreadfully deficient", "risibly ridiculous", "insultingly idiotic" (this one's risky, I'm sure there's some content filters that ban "idiot"), "complete crap" (oops, that word might be banned too), "laughably lacking in logic", or even good old "poppycock".

Expand full comment
Scott's avatar

Edited word(s)!

Expand full comment
Deiseach's avatar

I understand it comes over from social media where there is a lot of filtering, so if you use the full word then it will get your post banned, plus on top of that the SJW kiddies using asterisks in words like "rape", "suicide" and other such no-no terms so that nobody will be triggered by accidentally casting their eyes over a full bad word which reminds them of their trauma.

The irony is that we've apparently gone back to the 18th century, where novelists used the dash instead of the asterisk so the reader would not be shocked and appalled by seeing words like "D----n!", the villain snarled, as the hero foiled his plans, and by irreverence such as using the name of God (which orthodox Jews seem to continue to do, using 'G-d' or the like in English writing). Nothing is new under the sun.

Expand full comment
anomie's avatar

...But the comments include plenty of profanity. Surely a filter would get triggered by the comments as well.

Expand full comment
Rachael's avatar

Idk, maybe it's for the sake of the emails, which only contain the post body?

I admit I'm somewhat grasping at straws now.

Expand full comment
Deiseach's avatar

Kids these days are saying a *lot* worse words than "bullshit", I can assure you. Even nice American school kids.

Expand full comment
Scott's avatar

When my nephew was young and in earshot, I'd use, in ascending order of intensity: Sassafras! Edited word! Edited words!

Expand full comment
Shaked Koplewitz's avatar

> As a wise man once said, politics is the art of the deal

Who? I can't find a source for this anywhere. "The art of the deal" suggests a Trump connection but I can't find a source of anyone comparing it this directly to politics.

Expand full comment
boop's avatar

Google searching "politics is the art of the deal" in quotes produces numerous results. It's a saying. Presumably, the "as a wise man once said" part is a joke.

Expand full comment
Shaked Koplewitz's avatar

If I search it in quotes I only get this blog post and then some people talking about Trump (but in a roundabout way, nobody saying this unironically)

Expand full comment
B Civil's avatar

To combine two different threads in this comment section into one: "politics is the art of the deal" is bullshit. It's a concatenation of an old saw and a Donald Trump book title.

Expand full comment
B Civil's avatar

Try googling “the art of the possible“.

Expand full comment
Brenton Baker's avatar

Scott wrote a review on Trump's book years ago on SSC. He's referencing that, and jokes he's made since about Trump's (lack of) wisdom.

Expand full comment
Whatever Happened to Anonymous's avatar

>Should we have expected a single California law to have an effect visible in the markets? According to Daniel and @GroundHogStrat , past history says yes: when California passed a proposition backing down from their attempt to crack down on Uber over gig workers, Uber’s stock went up 35%.

I think an objection to this comparison is that Uber's profitability is still up for debate (even more so then), so its valuation is still in large part a matter of narratives, while the listed companies are already massively profitable. Hell, there's an argument to be made that Google would actually benefit from a slowdown in AI research (less of a threat to its money-making moat: search).

Expand full comment
B Civil's avatar

A law re Uber’s hiring practices vs. a law that regulates AI is not really a very good analogy when it comes to market effects. TSMC does not live or die based on AI regulation; nor does NVDA. It’s not really a big deal, but it’s not an apt comparison.

Expand full comment
Feral Finster's avatar

"A final interesting response to the SB 1047 veto came from the stock market. When Newsom nixed the bill - which was supposed to devastate the AI industry and destroy California’s technological competitiveness - AI stocks responded by doing absolutely nothing (source)...."

The market had probably priced this in already.

Expand full comment
Scott Alexander's avatar

See the next sentence: "Some people objected that maybe it was “priced in”. But the day before SB 1047 got vetoed, the prediction markets gave it a 33% chance of passing."
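
To make the implied magnitude concrete, here is a rough back-of-the-envelope sketch (the 10% figure below is purely illustrative, not an estimate of the bill's actual impact). Suppose passage would have shaved a fraction $d$ off an AI stock's value. With a 33% chance of passage priced in, the pre-veto price would be

$$P_{\text{pre}} = 0.33\,(1-d)\,P_{\text{veto}} + 0.67\,P_{\text{veto}} = (1 - 0.33\,d)\,P_{\text{veto}},$$

so the veto should have produced a jump of roughly $\frac{0.33\,d}{1-0.33\,d} \approx 0.33\,d$, i.e. about 3.4% if $d = 10\%$. A flat price on the news is what you'd expect if the market judged $d$ to be close to zero.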

Expand full comment
TK-421's avatar

This assumes that stock market participants are (any combination of) enough of the same people, use the same reasoning, and/or care about prediction markets enough to significantly move stock prices.

Reading that section in the original post was a surprise. I don't agree with all the comments saying that your bubble is causing you to fail to understand opponents of SB-1047; your bubble may very well be causing you to have distorted views of the current importance of prediction markets.

Expand full comment
Scott Alexander's avatar

I don't think this is a strong assumption. I think it's the same assumption as eg the Dow and NASDAQ will mostly move together, because the people betting on the Dow don't have such wildly different facts and reasoning processes from the people betting on the NASDAQ that one side might think we're in the middle of an economic boom and the other that we're in a second Great Depression.

See https://www.astralcodexten.com/p/prediction-market-faq for more.

Expand full comment
TK-421's avatar

This is where I think your bubble is distorting your views. I think it is a very strong assumption to believe that stock market participants - especially ones with the resources to significantly move the markets - are so similar to the average prediction market participant that they're equivalent to Dow and NASDAQ investors. Or that their views need to be so uncorrelated that one side thinks we're in a depression and the other a boom to understand that they may come to different conclusions.

Different facts, different priors, different weighting algorithms, different risk tolerances, backgrounds, levels of sophistication, etc. Your argument is fine and may hold when prediction markets are more widely used.

The assumption that the stock market and the prediction markets are so well correlated that the failure of stock prices to move, when prediction markets were more uncertain, tells us something is unfounded - especially on a topic like AI, where prediction market participants' views might on average be informed more by philosophy and less by political reality.

Expand full comment
Thomas Kehrenberg's avatar

Seems unlikely to me that hedge fund employees who are paid millions don't know about a big AI bill that A16Z has been making noise about for several months. Maybe I'm overestimating their competency?

Expand full comment
B Civil's avatar

Did the price of General Motors, Ford, and Chrysler stock plunge when there was a law passed mandating seatbelts?

Expand full comment
TK-421's avatar

That...doesn't follow at all from what I said.

Obviously they were aware of the bill. Scott's argument requires that enough stock market participants: a) had exactly the same probability of the bill passing/not passing as the prediction market, b) were valuing the price of those stocks solely on the results of this particular bill passing/not passing.

Expand full comment
B Civil's avatar

Was there a prediction market on how the price of NVDA would move based on this law passing or not?

Expand full comment
TK-421's avatar

I didn't mention it in my first post because I wanted to focus on a narrow failure of Scott's argument, but you do understand that his claim (that the stock price should have moved in precise correspondence with the resolution of the prediction market question) assumes that the only consideration of stock market participants is the resolution of that exact question, yes? Which is a terrible assumption, because stock market prices for companies providing AI-related products and services have to take into account the entire universe of effects on their prices.

I was trying to be generous to Scott, but thank you for pointing out that his (bad) argument relies on stock prices being both perfectly correlated with one prediction market's opinion and driven only by stock market participants' views on the impact of this single bill.

Expand full comment
B Civil's avatar

I don’t believe there is anything to price in here. I don’t think any of those stocks would’ve crashed had California’s law passed. And there is certainly no reason for them to shoot up because it wasn’t. A law setting a limit on how powerful a computer you could build might’ve made a dent…

Expand full comment
Alexander Vorontsov's avatar

If Hendrycks does indeed "work 70 hour days", he is using his time machine in a terribly boring way.

Expand full comment
FractalCycle's avatar

I'm both kinda surprised and extremely glad about the classical-liberals-and-leftists alliance here.

On-the-ground, lots of young people are leftists who already see (part of) the big picture: the world's largest companies are in a mad rush to automate everything with technology that, to a unique degree, isn't easy to actually control or use well.

When humanity *and* humanity's near-term interests are both at stake (and the AI-accelerationist side is both corporate and uncaring of e.g. human artists), the coalition formed can be strikingly large. Good on all involved for not devolving into infighting! If we keep this collaborative spirit up, I'm optimistic about our chances.

Expand full comment
User's avatar
Comment deleted
Oct 10
Comment deleted
Expand full comment
Mo Diddly's avatar

This resonates with me. As AI pushes the value of human labor towards zero, increasing amounts of highly non-liberal ideas will soak into society.

Expand full comment
Whatever Happened to Anonymous's avatar

>I feel classically liberal when I see a world where people can get ahead with their labor and aren’t bound by the circumstances of their birth, and leftist when I see the opposite.

A bit unrelated to the main post, and obviously AI could change how things work dramatically. But this is a common, and not entirely unfounded, sentiment that often creates a negative feedback loop: a bad business environment means that those who get ahead are mostly the beneficiaries of cronyism and nepotism; seeing this reinforces the public's desire to further punish/impose on business owners/corporations, which has the effect of re-entrenching the cronyists' advantages.

Expand full comment
sean pan's avatar

I feel like there should be a place for rightists here given that we also should not want humanity to die.

Expand full comment
B Civil's avatar

I don’t know for a fact, but I feel like there are a lot of rightists here.

Expand full comment
Matthew S's avatar

Ok who is the ex girlfriend who is referred to here? Is it Alicorn? https://slatestarcodex.com/2014/09/22/ssc-gives-a-wedding-speech/

Expand full comment
Scott Alexander's avatar

No.

Expand full comment
John Schilling's avatar

I'm assuming Scott said "ex-girlfriend" rather than [name] for a reason, and we should probably respect that.

Also, Scott, as a professional Keeper of Secrets, the correct answer to that sort of question is always "no comment", "I can neither confirm nor deny...", or just tactfully not hearing the question in the first place. Even if it seems obvious that the truth is simply "no".

Expand full comment
atgabara's avatar

I'll delete if Scott wants, but I assumed it was written that way for narrative purposes. It doesn't matter, in those first two paragraphs, what the name of the ex-girlfriend is, just like it doesn't matter what the name of the State Senator is.

If you replace "My ex-girlfriend" with "[Name]" or "My ex-girlfriend [name]", it doesn't seem to flow as well, just like if you replaced "a State Senator" with "State Senator Scott Wiener" in those paragraphs.

And I hope this goes without saying, but if my guess were based on their personal blogs or from knowing them personally, I wouldn't have posted it, regardless of the point above.

And to further clarify, I didn't try to search for who it was, I just immediately thought that that was who he was referring to, just like I immediately thought that the State Senator that he was referring to was Scott Wiener.

Expand full comment
Egg Syntax's avatar

I like the purported socialist saying, 'That already *was* the compromise'. Unfortunately, web searches for the phrase (and a few variations) turn up nothing but this post. Anyone aware of an actual saying, socialist or otherwise, to this effect, so I can steal it?

Expand full comment
Scott Alexander's avatar

I am sure I've seen socialists say something like that. Maybe remove the "already"?

Expand full comment
Erusian's avatar

It's a pro-Bernie slogan. "Bernie Sanders WAS the compromise." followed by saying his defeat was "the end of negotiation."

It was a rather delusional slogan from the start, possible mainly because of a deep echo chamber that hid from them how small and unpopular their movement was. It heralded the movement's fading into broader irrelevance and its reduction to a series of increasingly extreme but outside-the-mainstream groups. In the end the far left's earth was inherited at best by the Warrenites, and really more by the moderate left.

The actual historical socialist slogan was "no compromise with the enemy!", which got used in various ways, most recently in Disco Elysium's "No Truce With The Furies", which is descended from it.

Expand full comment
Egg Syntax's avatar

> It's a pro-Bernie slogan. "Bernie Sanders WAS the compromise."

Ah yep, that turns up lots of search results. I like the general version better. OK, Scott, I'm just gonna have to attribute it to you ;)

Expand full comment
Clutzy's avatar

I find it amusing that Bernie Bros and Socialists use a Darth Vader punchline as their motto.

Expand full comment
sean pan's avatar

It is a tragedy that one of the best chances that we have to avoid catastrophic risk has been vetoed by pure spinelessness from the governor and bad information overall. This is all the more reason that we need to fight harder and better on this, if our children are to have any future at all.

Expand full comment
anomie's avatar

Well no, the best chance to avoid catastrophic risk would be for some AI to inevitably cause some disaster that has a 5-7 digit body count, which will end up causing a massive global backlash to AI. Less regulation will probably increase the chances of that happening (and a fast takeoff scenario seems increasingly unlikely), so overall x-risk probably hasn't increased or decreased from this turn of events.

Expand full comment
TGGP's avatar

That's the sort of thing that would fall under "foom liability" insurance: https://www.overcomingbias.com/p/foom-liability

Expand full comment
eternaltraveler's avatar

How to doom AI safety:

Make it into a cause that's too far left to even pass in California.

You can build AIs in places that aren't California.

If you want to regulate this shit you need to get bipartisan enough to do it nationally and internationally.

Tying it to socialism and other far-left causes is so unbelievably short-sighted.

Expand full comment
Scott Alexander's avatar

This version passed the California legislature 29-9, and is supported by known socialists like Elon Musk. If we have to veer further left in the future, I don't think it will be any different from causes like "elect Joe Biden", which also had socialists' support and did fine.

Expand full comment
drosophilist's avatar

"known socialists like Elon Musk"

...forgot the /sarcasm tag?

Expand full comment
Scott Alexander's avatar

I find the idea of a "/sarcasm tag" puerile, especially when it's really obvious from context that something is meant sarcastically.

Expand full comment
drosophilist's avatar

I'm sorry, Scott, but I must disagree. Poe's Law on Steroids is the law of the internet/social media nowadays, and one never can tell what is sarcasm and what isn't.

Expand full comment
eternaltraveler's avatar

I got the sarcasm.

Expand full comment
anomie's avatar

How little respect do you have for Scott's intelligence?

Expand full comment
Aotho's avatar

This would be neither the first nor, predictably, the last contrarian position he takes, if earnest. I also stand by the necessity of /s or some other signifier, like an emoji. Even italics can help.

Expand full comment
eternaltraveler's avatar

You were the one that wrote your section V. Not me.

Let's say you succeed in compromising with far-left extremists in California and get a new version of the bill passed that is worse than the present (past?) one.

This would be bad. Not good.

The focus of AI regulation proponents should be something that you could get national laws around and ideally international treaties. Making the issue a left one in California is not a recipe for the level of cooperation required.

I personally don't believe you can get any kind of effective preemptive law on the books broadly in the US or internationally. But perhaps you could get a reactive law on the books, such that, for example, we agree in advance on what the response is once a rogue AI directly kills 10 million people (we all collectively and immediately act to destroy it and smash whatever it was that made it possible).

This won't work for true superintelligent agents that are able to plan around such things, but there's a reasonable enough possibility that something that kills millions of people is not yet all the way there. And many jurisdictions are not going to go along with making preemptive laws that probably aren't terribly effective anyway, given the tremendous economic benefits AI is already bringing and will continue to bring.

Expand full comment
Scott Alexander's avatar

I'm not sure this is how it works. California has many stupid extreme environmental laws, but there are also weaker national and international environmental laws.

I think more often what happens is that California serves as an inspiration for weaker national legislation.

Expand full comment
eternaltraveler's avatar

It's gone both ways historically. In the present environment of much-greater-than-usual polarization, I would personally bet one way.

Expand full comment
Nicholas Rook's avatar

This post does not read like it was written by Scott.

Expand full comment
etheric42's avatar

On the contrary, this felt exactly like his voice for his short fiction. And since the post is more narratively focused, that made sense to me. The alliteration, wordplay, and sense of humor were a match. But I was confused (initially) when I realized it was nonfiction.

Expand full comment
Christophe Biocca's avatar

> I think the change has been bi-directional. Back in 2010, when we had no idea what AI would look like, the rationalists and EAs focused on the only risk big enough to see from such a distance: runaway unaligned superintelligence. Now that we know more specifics, “smaller” existential risks have also come into focus, like AI-fueled bioterrorism, AI-fueled great power conflict, and - yes - AI-fueled inequality. At some point, without either side entirely abandoning their position, the very-near-term-risk people and the very-long-term-risk people have started to meet in the middle.

Yes, and that's a big part of why people on the less-worried-about-existential-risk side see existential risk as the camel's nose inside the tent. In regular public opinion, it's just a continuum from AI-kills-everyone to AI-gets-Trump-reelected. And apart from Eliezer, there are few people in the AI-safety intelligentsia who would consider a solution that merely avoids the existential risks a success.

Consider what would have happened had the bill successfully passed. All the alliances formed in this fight would still exist. The AI-safety people still see the bill as a "compromise", or "watered down" (your own words). The idea that the regulation efforts would stop, or that the AI safety people would get off the bandwagon because they got 1/3 (or whatever) of what they actually wanted passed into law, is completely at odds with how similar battles have gone in the past. "Compromises" in politics are to get 1/2 of what you want this year, so you can fight about the other 1/2 next year. The inability to make long term binding deals about law means "compromises" are just salami-slicing tactics with a nicer name.

And so by defeating the "moderate" bill, the maximally-pro-AI-development side has lost nothing. They get the remainder of Newsom's term (a little over 2 years) with little to no change, Meta gets to probably release Llama-4 without risking extra liability, and the small-c conservatives (as in, people who dislike large scale legal change) are still anchored on "no AI regulation" as the status quo, instead of "some amount of AI regulation". And that means more time to get some other jurisdiction prepared for the companies to jump ship to (apparently Argentina is openly courting them, though that's probably more theater than anything concrete at the moment). All of these are pure positives. The idea that the AI-safety group will fight harder in the future is also entirely speculative. Maybe the safety advocates are more motivated by anger at their defeat than by the thrill of success, but maybe not. At the very least you're going to lose the resume-padders, who will move on to more promising avenues for getting legible achievements in time for their Harvard applications.

Expand full comment
Erusian's avatar

Yeah, one thing this piece lacks is a theory of the mind of the other side. It starts with the assumption not only that AI safety is right, but that it is obviously right in a way where the other side must itself know it's wrong. It also doesn't investigate the motives of its allies too closely. The surprising fact that a bunch of people who didn't previously agree jumped on is cause for at least some suspicion.

Expand full comment
TGGP's avatar

The baddies in conflict theory know they are only acting in their own self-interest https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/

Expand full comment
B Civil's avatar

You forgot AI-fueled insanity, which I still maintain is the biggest risk by far.

Expand full comment
Level 50 Lapras's avatar

Thanks, that makes me feel much better about the bill being vetoed.

Expand full comment
MichaeL Roe's avatar

I thought SB 1047 was a reasonable compromise, in that if x-risk concerns turn out to be unnecessary worrying, SB 1047 wasn't a big deal to comply with … and on the other hand, if there is some AI-caused mass casualty incident, we'll be glad of the clarification of the liability.

A possible defense of Gavin Newsom is that he is a believer in the position that x-risk is overblown but mundane harms are an imminent big deal … in which case, SB 1047 misses the point. Regulation aimed at mundane harms would be much worse for the AI companies. While mass casualty incidents are uncertain to happen, and e.g. Yann LeCun can bet Meta's entire assets on the hope they won't, people definitely will use AI to make deepfake nudes of Taylor Swift.

Expand full comment
Level 50 Lapras's avatar

> in that if x-risk concerns turn out to be unnecessary worrying, SB 1047 wasn't a big deal to comply with …

Historically regulation has often not worked out that way. NEPA comes to mind.

Expand full comment
Kalimac's avatar

"In case you're just joining us ..."

You're not allowing for the possibility of people who are perfectly aware that there was an AI regulatory bill and what happened to it, but just can't remember that that number applies to it.

I get this all the time. I read up on the propositions in the California state elections and decide how I stand. Then someone wants to know my stand on Prop 62, or whatever, and I say "Which one is that?" And they think I don't know anything about the ballot; they can't grasp that I know the props perfectly well, I just can't remember offhand which number applies to which one.

Expand full comment
numanumapompilius's avatar

This drives me insane with everything from song titles to statutes and court cases to people's names. For some reason, my brain just refuses to see short names/titles as anything other than arbitrary labels that should be immediately forgotten.

And that's before you even get into the issues with bill numbering: the numbering starts over every legislative term, and there are 50 different states, each with their own independent list of SB1234 bills each session. It's like getting snippy at someone for not knowing the volume and page number of the journal article they're talking about.

Expand full comment
Jon's avatar

I understand that this post is about the politics of the bill, not the contents of it. But there are political reasons to oppose new laws like this that have nothing to do with being corrupt, or even driven by self-interest, and I'm curious to hear Scott's take on them. In particular, I think it's valid politically to oppose a law, even one you agree with, if it seems likely that the passage of that law will be the first domino toward other, worse laws being passed in the future—in other words, slippery slopes are real. Some examples where people have argued this:

- One believes there are speech acts that are so abhorrent that we should punish people for them, but one doesn't trust government to hold the line on banning just those acts, and therefore no "hate speech" should be proscribed.

- One supports euthanasia for painful, terminal illnesses (A), but not for chronic-but-survivable or mental health issues (B). The legislative difficulty of going from no euthanasia to (A) is greater than that required to go from (A) to (B), so one opposes even the legalization of (A).

- A new category of tax is proposed which will affect only a very small number of high earners, but one opposes it because one fears that in the medium- or long-term, the tax rates and limits will be changed such that many people will be subject to the new tax.

I didn't follow the SB1047 debate closely, but I remember reading at least some opponents saying that the law as written was maybe fine, but they didn't trust the California government to execute it faithfully without assuming more and more power over time, perhaps because enforcement of the law was (I think?) to be handled by a new administrative body.

Maybe the x-risk argument here is that SB1047 is just so important that we can't look ahead to future concerns like this. But for people who aren't fully persuaded that the risk is existential, this seems like a decent argument against supporting the bill.

Expand full comment
anomie's avatar

...I mean, sure, but that's not why the bill's main opposition was against it.

Expand full comment
Jon's avatar

Sure, the main opposition to the bill was object-level disagreement with the bill. But this post and my question are about the political reasons people supported or opposed it, divorced from the object level. For example, I read the post as saying that people could (should?) have supported the bill politically because it's a "light-touch" regulation, and if it doesn't pass, then a larger, more deleterious version may be pursued. Which is fine! People are indeed talking about that possibility. This is the mirrored version of that argument, I think, and I'm curious what Scott's thoughts are about it.

Expand full comment
MichaeL Roe's avatar

Also (as an additional defense of what Gavin Newsom might possibly be thinking):

A low-end AI can only do harm with the resources you've chosen to give it. E.g., a self-driving car might run someone over if it goes wrong. So it makes sense to regulate based on use (cars are more heavily regulated than chatbots).

The problem with a smarter AI is that it might get hold of resources you didn't explicitly mean to give it. Maybe you just gave it unrestricted access to the internet so it could browse Wikipedia, and you had absolutely no idea it was going to write a Stuxnet-like computer virus that blows up a nuclear power plant. The problem here is that the thing the AI does that causes you to get sued by third parties may be entirely disconnected from what you were intending to use the AI for, so you have totally no idea what level of software assurance or third-party insurance cover you needed.

Expand full comment
ThePrussian's avatar

Reading how this bill got shot down makes me wonder why there aren't more takers for my platform of Full Butlerian Jihad.

I mean, I am pretty sure we're all dead in 10 years _no matter what_, but... Jesus. There is something so completely infuriating about Andreessen et al. torpedoing this bill so shabbily.

(and, FYI, I _really_ don't feel bad about telling my then-girlfriend that Andreessen resembled nothing so much as a badly shaved you know what...)

Expand full comment
MichaeL Roe's avatar

I am at least making jokes about the Butlerian Jihad being what happens when (a) the hypothesised first AI-caused mass casualty incident happens and (b) we don't have SB 1047 in place to define what action gets taken against those who were responsible.

Expand full comment
ThePrussian's avatar

Who says I am joking?

Expand full comment
Nematophy's avatar

If it's any consolation, we'll most likely have runaway climate change, followed by nuclear war, followed by global cooling, followed by dedollarization, followed by the achievement of Communism, followed by the return of Christ, before Skynet takes over...so butlerian jihad may be a little premature at the moment.

Expand full comment
B Civil's avatar

It crossed my mind that artificial general intelligence might well be the return of Christ

Expand full comment
ThePrussian's avatar

I disagree. I think AI kills us all long before any of that. As in, I would be quite surprised if I live to see 2030

Expand full comment
Xpym's avatar

Do you expect GPT-x to kill us, or some new paradigm to emerge real quick? Because I still don't see LLMs getting smart enough in the necessary sense any time soon.

Expand full comment
ThePrussian's avatar

Speaking as someone who has seen truly terrifying capabilities come out of o1-preview, I am not convinced that currently existing AI can't end the world if someone hooks it up in a clever way.

But forget that for a second. They are claiming 10,000 times more compute being chucked into training by 2030. If you combine that with improvements in algorithms, that means - depending on which metric you use - either a 100,000x scale-up or over a 2,000,000x scale-up. Either way, enough - more than enough - to take us to GPT-8 through GPT-10 (again, depending on the metric you use).

And _that_ is without considering FOOM. Almost every week I read some article that reads something like "Researchers reduce AI training requirements by 90% with this one neat trick". Heck, I found something like that (no, I am not telling you - yes you can think I am making it up if you want, but this isn't the kind of thing I want on the loose).

But imagine the next paradigm shift, as Yud suggests, something that is to transformers as transformers are to RNNs

So, yeah. Make your peace. Last year, I thought we had probably 10 years, 20 if we were lucky, 5 if we were unlucky. This year I think it is 5 years, 10 if we are lucky, 2-3 if we are unlucky. One if we are _really_ unlucky.

And if **Scott** reads this, his argument that nanotechnology is impossible is excellent - I think Smalley won the Smalley/Drexler debate hands down - but unfortunately, I have read all the papers (all since 2020) that show that Yud's diamondoid bacteria are totally possible. Not theoretical papers, but experimental papers.

Expand full comment
Xpym's avatar

>They are claiming 10,000 times more compute being chucked into training by 2030. If you combine that with improvements in algorithms, that means - depending on which metric you use - either a 100,000x scale-up or over a 2,000,000x scale-up. Either way, enough - more than enough - to take us to GPT-8 through GPT-10 (again, depending on the metric you use).

My pessimism about LLMs is independent of that. As I see it, they are fundamentally capped by what humanity has committed to text/speech, whereas paradigm-breaking science (which is most certainly necessary for anything FOOM-like) pretty much by definition requires capability of going substantially beyond that, most importantly through ontological remodeling.

Sure, LLMs will eventually be able to pass all exams, but that's because we still don't know how to either write textbooks or make exams for the subject of Actually Doing Novel Science. As best we can tell, talented junior researchers absorb that ability from Actual Scientists by osmosis, which results in famous scientific lineages.

>But imagine the next paradigm shift, as Yud suggests, something that is to transformers as transformers are to RNNs

I dunno, it seems to me that what's currently lacking isn't algorithmic progress per se, but rather architecture designs that aren't in principle limited by human ontologies. And I do see this as a blessing of sorts; human civilization clearly isn't ready to deal with something that can go beyond it. But I also don't see how it will get ready any time soon; the human intelligence amplification that Yud despairs for nowadays seems, if anything, even more remote.

Expand full comment
Jon's avatar

I know little about AI but a great deal about complex regulatory programs. When I look at SB 1047 - the actual legislative language - I see a regulatory program full of vague or arbitrary or overly specific terms that is highly likely to result in unintended consequences. The definition of artificial intelligence appears to encompass my iPhone. If it were to be adopted I would be inclined to invest in Silicon Valley law firms, not AI. I don't think Scott appreciates what a state bureaucracy, which almost surely would be quickly captured by the AI industry, would do with a law like this.

Expand full comment
Scott Alexander's avatar

Doesn't the bill say it only applies to models that cost more than 10^25 FLOPs of compute to train, which presumably would rule out your iPhone? I feel like they put in a decent amount of work to avoid exactly this kind of potentially overly broad language.

Expand full comment
Biff Wiss's avatar

> Doesn't the bill say it only applies to models that cost more than 10^25 FLOPs of compute to train, which presumably would rule out your iPhone?

Situation A: Giant McMegacorp trains a model using (10^25)-1 FLOPs of compute. It then starts a second model with a second dev team, completely Chinese Walled from the original team. This second model utilizes the first model's output - which isn't safety-railed against exposing its own internals - as part of its separate (10^25)-1 FLOPs of allocated compute.

Situation B: Grassroots Von Opensource trains a model using (10^25)-1 FLOPs of compute and releases it freely. Jon starts a model on his iPhone which he trains on Reddit posts, some of which are the result of Grassroots Von Opensource's (10^25)-1 FLOPs of compute.

I'm having a hard time creating a net for Situation A that doesn't also end up catching Situation B as well.

Expand full comment
Jon's avatar

I haven't tried to parse the bill's language and come up with a decision tree, so maybe some legislative drafting genius has constructed it so everything fits together if you understand the bill deeply enough. But that isn't what it looks like at first glance. Look at definitions like the one of "advanced persistent threat." It relies on undefined, subjective terms like "adversary," "sophisticated," and "significant resources." What the hell do they mean? Whatever the implementing bureaucracy decides they mean. It applies to covered models, but never defines what a "model" is. I have no idea what it will take to train my iPhone in the future, but it seems dangerous to define AI to include it. How many members of the CA legislature actually understand what 10^25 FLOPs means or why it is an appropriate demarcation between models that we need to regulate and those we can ignore? I am not necessarily opposed to regulating AI, although I think that anything that slows down medical AI should be avoided, but just from the standpoint of drafting I can't help but believe that the advocates should be able to come up with something much clearer than this.

Expand full comment
Victor Levoso's avatar

Yes, and also note that the last version says they also need to cost more than $100,000,000.

I've seen a lot of people on the internet (including elsewhere in this comment section) who don't know this and complain that 10^25 FLOPs might be available to small developers in the future.

Expand full comment
SMK's avatar

If you get *really* desperate, you might even consider that we conservatives could be willing to join your coalition!

Expand full comment
Scott Alexander's avatar

The last conservative in California left for Texas years ago, sorry.

Expand full comment
SMK's avatar

Totally fair.

Expand full comment
drosophilist's avatar

Tell me you've never been to Huntington Beach without telling me you've never been to Huntington Beach.

Expand full comment
SMK's avatar

More seriously, though, here is a reason to still actually include them early.

When California finally passes some such law -- and you will -- the obvious thing for the obvious players to do is to pick up their toys and move to Texas. It won't be easy, but it won't be extremely hard, either. They're probably already starting to diversify today, due to the threat.

If the way things have gone in Cal makes it look like it's (only) a bunch of liberals and (worse) socialists who want to attack big business because labor and movie stars want them to, then the Texas legislature and governor will get right down to business removing anything in the law that *might* have accidentally slowed them down in Texas.

If there's a genuine pan-political coalition in Cal. working for laws there but also sounding the alarm nationwide, then that *might* not happen. (Unfortunately, it might still happen.)

Just a thought. Godspeed, anyway!

Expand full comment
B Civil's avatar

Clint Eastwood moved to Texas?

Expand full comment
Level 50 Lapras's avatar

In 2020, Trump got more votes in California than in Texas.

Expand full comment
Big Worker's avatar

>But I hope they do not. As I have written consistently, I believe that the AI safety movement, on the whole, is a long-term friend of anyone who wants to see positive technological transformation in the coming decades. Though they have their concerns about AI, in general this is a group that is pro-science, techno-optimist, anti-stagnation, and skeptical of massive state interventions in the economy (if I may be forgiven for speaking broadly about a diverse intellectual community).

I think the rationalist community being cautious and pro-safety about AI while remaining generally techno-optimist just came from the fact that only nerdy tech fans were interested enough in talking about AI at all. The left was never anti-AI-regulation; it was just mostly composed of normies who never thought about that topic in the first place. Now, as AI becomes an issue that everyone is aware of and has an opinion on, we should expect the left to take the pro-regulation side and the right to take the anti-regulation side, as with any other issue. The rationalist intersection of AI safetyism and techno-optimism will probably fade away in favor of e/acc-style consistent techno-optimists vs. people raising safety concerns about AI who also have concerns about a lot of other technologies.

Expand full comment
sean pan's avatar

For the many people reading this who are concerned about existential risk, please join PauseAI and help us fight for a world in which we, as human beings, continue to have a future. Our Discord is easy to find on Google.

Thanks!

Expand full comment
Alan Thiesen's avatar

Could someone comment on the argument that if we regulate AI, a hostile actor like China will get ahead of us in AI and use it to destroy us?

Expand full comment
Al Quinn's avatar

That's when Captain Yudkowsky flies his sortie to drop nukes on their data centers.

Expand full comment
Odd anon's avatar

1. China's AI progress is mostly by way of taking it from western AI companies. SB 1047 was supposed to fix that by requiring better data security.

2. China has shown greater willingness to regulate than the US has.

3. AI which is too strong is uncontrollable. The "race" to superintelligence makes about as much sense as fighting over who gets to sit in the front seat of the car heading off a cliff.

4. The proposed regulations would (unfortunately, imo) not significantly slow down AI progress.

Expand full comment
Brian Moore's avatar

Disclaimer: I am neither pro- nor anti-SB 1047; I just want good outcomes from AI-related stuff (and everything else).

The influence and reach of CA's legislative accomplishments aside - see "this chair causes cancer in California" warning labels or CCPA - what you consider to be the impacts of AI are:

1) too wide (your comments about how CA law applies to Al Qaeda are good)

2) too dynamic (legislatures and governments have never been capable of updating based on even far-less-quickly-progressing technology)

3) too similar to the archetype of unjustified, far-off doom-saying (climate change). Obviously this is not your fault, but it does mean that people are going to tune it out or instead support pointless "feel good" measures unless there's something concrete, sufficiently-short-term and objective to persuade them

and most importantly:

4) too dependent on executive branch action. In order to effectively accomplish the goals you want, there are MANY things (and entire processes, research, and capabilities) that would need to actually be done by motivated, informed executive branch people. While I am neither pro- nor anti-SB 1047, the fact that Newsom *wanted* to veto it is a self-fulfilling prophecy that it would not work, because you need the devil handling the details to be... very detail-oriented indeed. If you truly believe that something needs to be done to prevent an overdetermined disaster, then even crafting the perfect bill - one that perfectly outlines what should be allowed or not, and that Newsom (or the US Congress) would sign - isn't going to cut it.

It would be like if you felt that any single person using pot was an apocalyptic disaster, and you persuaded the federal government to pass a law making pot use illegal, but without any actual enforcement mechanism to physically prevent it, rather than just a general "we'll badger the states to arrest you for it, after you do it."

Newsom is not a serious person. But neither is the CA (or national) legislature; maybe this Wiener guy is nice and proactive, and that is great for him (not sarcastic). Congress is not serious people. Neither presidential candidate is a serious person. You don't need a really perfectly written law. You don't need a California governor to sign it. But in a strange way, and obviously not for the reasons he thinks, he's right: the bill didn't go far enough - not in legislative terms, but in redefining its nature as a general government-permeating awareness. I think on some level he probably *does* subconsciously understand this, and that's why he didn't sign: he didn't think the parts of government he has control over could actually execute the intent of the bill, and he therefore didn't want to get blamed when it inevitably did not produce the outcomes that supporters wanted. I absolutely guarantee you the scenario he is thinking about is 2028/32, when he wants to run for president and reporters bring up the "scandal" of how he signed the bill and [yet/therefore] Bad AI Things Happened, which indicates he was [corrupt/incompetent]. No one can blame him if those bad things happen now, since he didn't sign it and said "oh noes, it doesn't go far enough!" - in fact they might praise him for predicting those Bad Things.

You need the issue to be recognized and actually understood as a priority (i.e. having a ton of potential benefit/harm based on our current, existing values) by the part of the government (the executive) and relevant level (national, because so much of the potential harm comes from "people in other countries") charged with actually effecting progress in areas where it - most importantly - already has legal jurisdiction to Prevent Harm and Allow Benefit. To take one example/context, it is already illegal to kill people with a bioweapon, with or without AI, and the military/law enforcement already has a purview to Do Stuff to prevent it - the "AI-relevant" part of the issue is that (theoretically) it makes it much easier to do. In this context (obviously not the only context by any stretch), it's more like a new weapon whose capabilities our soldiers/DoD need to research (obviously in close concert with domestic manufacturers of said weapon), assessing its strengths/weaknesses and developing detection and countermeasures.

Of course, there's a lot of other contexts too, because AI has such wide-ranging impacts - but my challenge is that for almost all of them, we already have the laws, agencies and people in place to deal with them. Certainly they need to be directed to do so in a way that reflects AI impacts, but again... that's something that needs to come from executive branch. Even if you passed a national law creating a powerful AI Czar agency, it would accomplish nothing (likely less than nothing) without that, for so many reasons.

This isn't a legislative issue. Or at least, in a better world, it *could* be a legislative issue - if we lived in a world where Just Passing Excellently Crafted Laws And The Govt Does What They Say was how our society dealt with things. Unfortunately, it isn't. Sadly, as much as I'm a big fan of brilliant legislative work, that's not the world we have. What you need is this:

1. a general awareness that the public wants Good AI Outcomes to be allowed and Bad AI Outcomes to be prevented, in proportion to how large the benefit/costs are.

2. a simple-to-understand To Do list (informed, in this case, by your knowledge) of actual actions to measure, detect, and [prevent Bad Outcomes/allow Good Outcomes]. Details aren't actually *that* important, so long as the key features of "recognizing there's huge potential impact" and "an aggressive plan to figure out what the impacts are, and how to handle them" exist in it.

I think you basically have 1 and 2 already! Therefore you just need:

3. an executive branch person (almost certainly the actual president, or someone they listen to) who recognizes #1, agrees that doing #2 would successfully count in voters' minds as Doing A Good Job in a way that gets their faction votes, and is willing to act before the issue can acquire stupid partisan culture war valence (or at least to ignore that valence). (You very credibly point out how this has already happened with SB 1047.)

You need Operation Warp Speed 2, and you need Donald Trump 2 (not an endorsement of him as president). This is the template. Trump is not a deep thinker, and I have a million other complaints about him AND his covid handling, but for the single item of OWS, even he was capable of understanding "Americans don't like dying, and would probably reward politicians who prevent their death" and "I know what a vaccine is, and even without any medical knowledge, I bet having one would be a good thing" and "as president, I have a duty to execute policies that competently accomplish that goal". Those are items 1,2 and 3 from above.

And very importantly, that whole conversation happened (either in his head or in the Oval Office) before pro- or anti-vaccine partisan groups had formed, so he acted without even realizing that tons of his constituents would actually end up in the anti-vaccine faction (which is why I wouldn't even trust him to do OWS2 now, when we have Pangolin Flu in 2026). It is important that Doing a Good Job, Therefore Get Votes be the mechanism in politicians' heads, rather than partisan Ideologically Support/Oppose To Get Votes (who cares what actually happens?).

TL;DR version: stop messing around with state bills. Get a presidential candidate (or someone influential they listen to) on board with your ideas and convince them that tons of votes ride on them doing a good job at this, so they commit to actually *doing* it.

Expand full comment
TGGP's avatar

It's still so early that it's nowhere near the point of the Clean Air & Clean Water acts, where the government understood enough about the problem to regulate it. https://www.grumpy-economist.com/p/ai-society-and-democracy-just-relax The existential worries are so speculative the government wouldn't actually know how to deal with them. Instead, since there's so much inertia in regulation (see the Jones Act), it would lock in a policy made in ignorance.

Expand full comment
callinginthewilderness's avatar

The second part of the post hinges on a central hypothesis: that it is worth participating in politics and making alliances with unions/disinformation experts/AI ethics people/dissatisfied teenagers/socialists etc. And all that Scott has to say about this - even after years of writing critically about political engagement, object-level battles, "Guided by the beauty of our weapons" and all the SSC lore - is that "well, even the Allies collaborated with socialists to destroy Nazis".

1) What

Apart from pointing this out, and agreeing with most other comments about the lack of charity and intellectual honesty, let me offer a specific counter-point. The historical assessment of the WWII alliance is very much debated. Not only did the USSR commit unspeakable atrocities during the war and after - the alliance also led to the Cold War, which almost destroyed the world! Is this really better than a somewhat more difficult fight to end the war back then? I don't know. But the post not only goes against all of the author's own advice in the past - it doesn't even present any substantial argument against it.

Expand full comment
Jerome Powell's avatar

“Somewhat more difficult?” Isn’t it basically obvious that without the eastern front distraction Hitler would’ve been in London long before D-Day?

Expand full comment
callinginthewilderness's avatar

The Soviets would have fought the Germans either way, because they were invaded. The Anglo-Soviet agreement was downstream of Operation Barbarossa. The question is not whether the eastern front should exist, but whether the West should make a pact with the devil and ally with the USSR. When the war was near its end, Churchill was somewhat open to continuing against the Soviets, while Roosevelt wanted peace. AFAIK it is not obvious whether this was the right call.

In any case, the analogy is getting a bit stretched, and I can agree to even *much more difficult*. The point is: we're comparing the counterfactual to almost-nuclear-war. I expect the alliance of convenience with fundamentally anti-tech people to have consequences of a similar nature (even if not magnitude).

Expand full comment
Alexander Turok's avatar

No. Britain had a much stronger Navy.

Expand full comment
Isaac King's avatar

I'm confused why you say that we're still very early into AI. GPT-o1 is scarily good at coding. It still hallucinates some things, of course, but it can correctly write dozens of lines of code for novel tasks. It doesn't seem that unlikely to me that AI will start taking over significant fractions of programming and other intellectual, "paper pusher"-type work within the next 1-2 years.

Expand full comment
Michael M's avatar

I'm confused as to why so much ink gets spilled over whether AI will annihilate humanity when the much more immediate concern is whether it will create mass unemployment. The US has had a pretty good track record of handling nuclear escalation and other weapons of mass destruction, but not such a good record at handling labor issues.

Expand full comment
MissingMinus's avatar

Because even if we had mass unemployment and poverty for everyone not of the Upper Class (e.g. everything gets automated away, but the government or philanthropists are very slow to respond appropriately), by then we would have enough technology to quite likely kill us all. The mass unemployment would be quite bad, but most AI risk people don't think it will take forty years between "everything can be automated" and "tech level capable of existential damage". They think it will take far less time than that.

There have also been things like Altman's UBI study, which is directly about that; I think OpenPhil or EA did some bits related to the area as well?

It definitely is a major problem to solve, but for individuals who believe in X-risk from AI, their effort is often better spent on that problem. It is the one where, if you fail, you're quite probably completely screwed, rather than facing "just" a lot of societal unrest.

And for the people arguing against, many don't believe AI is capable of that level at all.

There are also many on both sides of the issue who believe unemployment and many other societal issues are pretty solvable by advanced AI—whether through material splendor, because whoever makes an aligned AGI Wins, or whatever.

Expand full comment
Nematophy's avatar

o1 is good at *leetcode* - good at taking a problem in a very narrow domain and writing a function to handle it. It's also good at boilerplate. This is a useful tool for engineers, and has definitely accelerated my work in those areas. (Though 3.5 Sonnet is better).

It SUCKS at *software engineering*. A toy problem with a 100-line solution is right in AI's wheelhouse. Give it an open-ended problem in a badly architected codebase, across several repos, with hundreds if not thousands of files (without comments, ofc), and documentation spread across GDrive, Slack, Confluence, OneDrive, and your coworkers' heads, where said coworkers are making equally open-ended changes to the same code... this just isn't something any AI can even come close to doing.

1-2 years? I remember hearing that about self-driving cars 10 years ago. Now, yes, today we have Waymo...limited to SF and South Bay...with remote operators taking over every few dozen miles...operating unprofitably.

They'll all get there eventually, but don't count on singularity within the next few decades.

Expand full comment
Brendan Richardson's avatar

This is almost entirely irrelevant. I am a professional software developer. "Coding" is <10% of my job. Call me back when the AI can read the design document, identify all the mistakes in the design document, badger the designer to correct the document (because a HUMAN has to be responsible for this), get whatever stakeholder buy-in is necessary to get the changes approved, start the whole process over because the client changed what they wanted, and THEN start writing code. I'm not holding my breath.

Expand full comment
Isaac King's avatar

Newsom strongly dislikes Elon and seems happy to do petty things just to spite him, like the recent deepfakes bill. So I wonder whether Elon's endorsement of SB 1047 actually harmed its chances.

Expand full comment
fortenforge's avatar

True story: when Newsom vetoed the bill, the startup I worked at printed out a photo of him and stuck it on our wall, next to Zuck, Jeff Dean, and Sanjay Ghemawat. Losing access to Meta's open-source models would have been devastating for us and made it very difficult to compete with big tech.

As for the ballot proposition idea, I called out that this was in the cards a while ago: https://honestbutcurious.substack.com/p/people-are-worried-about-large-language

I wouldn't be so confident in an easy win here though; 65% of public support can easily turn into 65% public opposition, especially on this highly technical issue that the public doesn't hold strong convictions on. Imagine the amount of money the tech industry can plow into ads to convince voters that this will damage CA budgets or do the bidding of Elon.

Expand full comment
VivaLaPanda's avatar

My basic anti-1047 take was in short:

- I increasingly think true x-risk is very very low from current AI trajectories (or at least, much lower than other issues like bioweapons)

- the harms here are really unclear, and possibly totally hypothetical. I think "well, things might go wrong" regulations are often the driver behind the terrible regulatory environment in Europe, etc., vs. focusing on Actual Realized Harms. My default disposition to such legislation will be skepticism.

- given non-mass-death level harms, we should just rely on liability. A bill focused on clarifying liability for AI companies seems good

Overall:

- this law as written seems *not that bad*, but I think giving power to regulators who are incentivized to not care about any upside and only care about downsides is generally bad. I also worry that the people enforcing it and the bureaucracies it creates will end up staffed by "experts" (academics and lawyers who generally dislike technology). See Scott's points about the FDA. The upsides don't seem worth this downside.

Expand full comment
JoshuaE's avatar

This was basically my take, except I think the bill as written was bad in that it appeared to have a vindictive approach against the AI companies (make the AI companies do some light paperwork/audits) and minimal upside.

Expand full comment
Stephen Pimentel's avatar

I opposed SB 1047. Rather than comment on a bunch of political stuff, I'll simply note that how one interprets the political stuff is heavily dependent on what one believes about the underlying technical claims. For those whose beliefs about the technical claims are very different from Scott's, the political stuff reads very differently, as well.

Expand full comment
Archibald Stein's avatar

You misspelled Joseph Gordon-Levitt.

Expand full comment
anomie's avatar

People are being way too hard on Scott in the comments. He has the right to advocate for his own ambitions. Of course, so do his opponents, and thankfully capitalism ensures that the more valuable and ambitious people get what they want. At this point, AI safety's best bet is to hope some AI ends up killing a few hundred thousand people by accident, which would cause a global backlash against AI. Of course, the Pandora's box would be fully open at that point, and the military applications of AI are too good to pass up...

Expand full comment
Anvita's avatar

I want more ex-girlfriend stories!

Expand full comment
Rothwed's avatar

The "Greta Thunberg of AI Safety" is not an endorsement anyone should want. Thunberg is the epitome of the hysterical doomer shouting into the void and using emotional tantrums to get her way. It is not the type of thing you want to encourage in your movement. The AI safetyists should be the ones saying "there are important long-term risks with AI and we should do x,y,z to mitigate them now", not "AI is going to kill us all in 5 years unless we act NOW!". And then 5 years later the world hasn't ended and everyone thinks you're full of shit. This part makes the AI safety movement look like the latter.

And gee, the socialists agree with you. It must be reassuring to know that people who are wrong about everything ended up on the same side. Scott doesn't even express embarrassment over this. I understand wanting all the help you can get, but come on. He even says he is starting to respect them!

This post just made me sad. It didn't do anything to convince me about AI safety, other than letting me know a lot of dubious characters support it. There is no theory of mind about the other side for why they might oppose the bill; they are all corrupt or greedy or obviously wrong but blinded by their bias or something.

Expand full comment
1a3orn's avatar

No mistake theorists in foxholes, eh?

Expand full comment
Doctor Mist's avatar

Seems like if it has that level of popular support, it ought to be able to succeed as a proposition, which is not subject to a veto.

Expand full comment
Franklin Seal's avatar

Most important line in the entire (excellent) piece: "I think these people beat us because they’ve been optimizing for political clout for decades."

After a life spent working in various progressive political trenches, it seems clear to me that on so many issues big and small, this is the difference maker: "the people" organize around a specific issue, short-term or long-term, but when the fight is finally over, they want to get back to their lives. The other side is always "on," in battle mode, never at rest, and usually backed by a hundred times more lawyers on permanent retainer. They play the long game, and it's not about any single issue, it's all about power.

Expand full comment
TK-421's avatar

That comment and the general vibe about politics lower the piece substantially in my estimation. Oh no - Gavin Newsom, governor of one of the largest and most influential states, is both playing politics and good at it? That's his job. That's the process by which these decisions get made.

There is nothing wrong or shameful about optimizing for and being good at the domain in which you are competing. If your opponents are better at it than you, it's not a smear against them. They're being rational. You are the failure. Don't brag that you have too many other interests to be bothered with winning. Congratulations on being so well rounded, but I thought this was literally about saving humanity from extinction.

It's like losing a marathon and complaining that your opponents have spent all their time training for things like ability to run long distances.

Expand full comment
Franklin Seal's avatar

You misread my comment. I was not bragging, I was complaining that the progressive side refuses to acknowledge that reality and adjust accordingly. I agree 100% with your POV.

Expand full comment
Lurker's avatar

I’m realizing that I don’t understand how OpenAI’s opposition to a bill touted as “just regulatory capture” proves anything. This is intended as a general discussion, without the specifics associated to SB1047.

Do large companies actually want and work towards regulatory capture starting from a mostly unregulated blank slate? After all, regulatory capture seems like a much less efficient use of money than actually doing one’s business.

Isn’t regulatory capture instead a response to the existence of regulating authority: given that it isn’t likely to disappear, it has to be turned to one’s benefit, at a greater expense?

In this model, an unregulated state is best – regulatory capture second. This would suggest that companies will fight against a bill even if it enables regulatory capture – because capture is expensive and slow, and so less convenient than the original unregulated state.

Expand full comment
George's avatar

Disclaimer: I support SB 1047, primarily because I think the minor ripple effects (Bay Area "AI" money going into other areas, geographically and economically) are good for me short term, and I don't believe in long-term predictions.

---

That being said, I think this is a very uncharitable take and part of a broader pattern, one which you don't usually fall prey to, but most people do.

1. Assuming that the most common incentives to support a position, or the incentives of the most influential actors supporting it, are the main reasons to support that position.

2. Assuming that intelligence and honesty add up, i.e. the side with, on average, the most intelligent and honest people will act in the most intelligent and honest way (this, incidentally, seems to be a broad agent-modeling issue that leads to other thought patterns, like the rationalist way of fearing "AI")

Expand full comment
Rob L'Heureux's avatar

In general, I thought this was terrible and maybe I'll write a complete point-by-point address, but I want to highlight one thing that just illustrates Scott is out of his element: stock prices. The relationship between this regulation of LLMs and any of these companies is extremely hand wavy because the actual logic is so tenuous. The biggest impact was predicted to be for startups, who could no longer ship or use open source LLMs. Investors care about cash flows, though, and the open source LLMs generate few cash flows today—Meta certainly doesn't make material revenue from it.

NVDA right now is supply constrained. They have sold everything they can make, and their only option for the foreseeable future is to raise prices. Their biggest customers are the hyperscalers: Amazon, Google, Meta, and Microsoft. If California makes shipping models harder, that won't stop companies from buying GPUs. They believe they are in a race, and if one person drops out, another steps in, including non-US companies. Their expected cash flows are completely unchanged over the time horizon of interest. The cash simply wouldn't be flowing through CA or other regions with comparable regulations. Even then, it's not obvious the demand for GPUs would go down, because people are racing to deploy them for training DLRM, the next set of models, and for inference.

Essentially, the stock market assessed the impact of SB 1047 on infrastructure to be minimal over the short term, because companies could operate around California's restrictions, the regulations had minimal bearing on GPU demand for the next wave of training, and the regulations had too broad a potential range of impacts to effectively model what happens 4-5 years out, except that startups probably couldn't challenge the hyperscalers in managing regulations (which, if true, makes Big Tech more valuable, not less). This reaction is not comparable to Uber, whose cash flows would be directly impacted by that proposed regulation in a clear, tangible way.

The strongest part of the article was incorporating Dean's points, which are based more in pragmatism than Big Tech influence (and it's laughable to think EA isn't driven by Big Tech salaries, stock awards, and buyouts anyway). I think most people are here because they want humans to flourish, and the only ones I'm pretty certain want humans to die out are degrowthers and the e/acc transhumanists. I hope you take a step back, collect yourself, and try to better engage with the criticisms so we can move forward together with something more workable.

Expand full comment
Name Required's avatar

> nobody will be able to lie

Is the theory that politicians can't get away with lying about easily-verified facts? The data doesn't seem to support that.

Expand full comment
Reprisal's avatar

it's troubling that you cannot look at Gavin Newsom's face and instantly perceive he is a sociopath.

it's more troubling that you don't reach this conclusion after listening to him speak.

inability to differentiate the frauds from reality is why normal people don't trust your relationships with Big Tech. there's 1 visionary for every 1,000 grifters, and it's obvious to us and somehow not to you.

Expand full comment
Cjw's avatar

I still believe it would be a huge mistake to let AI safety become left-coded. Anything that becomes left- or right-coded in the current environment is not going to have 65% favorability for very long. It was very important that Elon Musk, who is right-coded in America due to his Trump support, backed this. Having a coalition of SAG-AFTRA, somebody you're describing as the next Greta Thunberg, and a generic progressive group worried about "misinformation" is an expressway to alienating all conservatives.

And that would be absolutely nuts! Small-c conservatives have as many incentives, or more, to oppose AI broadly. Even the best-case scenarios involve massive social disruption. This is just nothing at all like climate change alarmism. The climate change activists nearly all want top-down, government-imposed energy austerity, massive compulsory changes to our lifestyles, and the upending of numerous traditional industries. AI safety asks nothing of you! You don't have to change a thing about your life, and in fact you get to avoid a whole bunch of dramatic changes. The AI future is an incomprehensible nightmare world where almost all of us will be completely powerless to steer our own lives or to know what's real.

If anything, I would have expected the socialists to be *more* likely to support AI. Automation has long been the supposed path to superabundance and leisure in the classless society the idealistic leftists want. "Fully Automated Luxury Gay Space Communism" is a world they might enjoy, as opposed to conservatives who probably have NO desirable outcomes in the entire range of possibilities where anything like ASI comes to exist. Conservatives ought to be fully opposed to AI development across the board, no AI ever, torch the data centers, etc. It seems utterly absurd to me that this issue could end up being left-coded and by process of the culture wars drive conservatives into supporting the AI developers who will demolish society.

Expand full comment
User's avatar
Comment deleted
Oct 10Edited
Comment deleted
Expand full comment
MissingMinus's avatar

What? If we get the tech level for ALGSC only in rich countries, there would be more than enough wealth going around for a relatively limited number of people to raise the living standards of third world countries by a *lot*.

(Then there's the obvious aspect that these third world countries can jump forward to current first world and likely beyond, especially if helped by some tech from those first world countries. Like how various countries jumped to first world faster because they had all the knowledge available.)

This sounds like essentially assuming everyone at the top is very sociopathic, which simply isn't true. Even in government, though the usual politician has many many issues.

Expand full comment
ProfGerm's avatar

The thin, ill-defined line between AI safety (don't create something that can destroy us) and AI ethics (it's better to nuke a city than say a single slur, once, where no one can hear it) already threatens to make it a partisan thing. Unfortunately, there doesn't seem to be a major push to more strongly differentiate the two, and I don't think it would work that well anyways, for ~social reasons~. They're both broadly "left" in the reductive two-pole model and so struggle to communicate outside that particular paradigm.

>If anything, I would have expected the socialists to be *more* likely to support AI.

Overgeneralizing somewhat, the current crop that identifies as socialist is not interested in anything that is even slightly beyond their control, whereas the current crop that identifies as conservative broadly views technological development as a natural process.

Expand full comment
Cjw's avatar

I can imagine that the socialists of today are different in a lot of ways; perhaps I was thinking of them too much in the 1970s mold (both whack jobs like Valerie Solanas wanting to replace men with machines, and lots of less crazy variations on how automation leads to a future free of humans needing to sell their labor). But I do still hear lefties in Marxist spaces talk about machines or AI leading to superabundance, with fantasies that everyone would spend all their time painting or something quaint and enriching like that (rather than, as I suspect, on soma holidays for days at a time).

On the conservative side, it's changed a lot in America in the past decade. The reflexive position against government interference in the economy, required by needing economic libertarians in the political coalition, has evaporated. Plenty of conservatives now identify as being in favor of tariffs, an increasing number of labor union members vote for conservative candidates, and most conservatives are broadly hostile to the tech sector due to how the tech giants have behaved in skewing social media at the Democrats' direction. Given that conservatives' natural inclinations are against change, and they have no reason in particular to like the changes on offer here, I should think they would be natural opponents of AI tech, so long as the fruity social justice crap can be kept out of the movement.

Expand full comment
Mark's avatar

1. Not sure the shares ignoring the SB prove anything beyond this: the whole thing did not seem very relevant in the short term to the big companies. (Such laws may still be rather bad for California or the EU.) 2. Being called “the Greta Thunberg of AI” (or of anything) is something to be avoided!

3. My reading of Tyler Cowen's post that day was more "anti-SB" than maybe warranted; he "only" quoted extensively that very pro-Newsom take from the WSJ: "The Democrat decided to reject the measure because it applies only to the biggest and most expensive AI models and leaves others unregulated, according to a person with knowledge of his thinking. Smaller models could prove just as problematic, and Newsom would prefer a regulatory framework that encompasses them all, the person added." https://marginalrevolution.com/marginalrevolution/2024/09/newsome-vetoes-ai-bill.html What a hell of a "person with knowledge of his thinking" - maybe just a reader/believer of Newsom's press statements!

Expand full comment
duck_master's avatar

> My ex-girlfriend has a weird relationship to reality. Her actions ripple out into the world more heavily than other people's. She finds herself at the center of events more often than makes sense. One time someone asked her to explain the whole “AI risk” thing to a State Senator. She hadn’t realized states had senators, but it sounded important, so she gave it a try, figuring out her exact pitch on the car ride to his office.

Is this about Ozy?

Expand full comment
drosophilist's avatar

AFAIK Ozy uses they/them pronouns, so probably not?

Expand full comment
duck_master's avatar

According to https://x.com/ozyfrantz Ozy uses "whatever pronouns", so I think they wouldn't mind if you or I or Scott used the words "he" or "she" to refer to them.

Expand full comment
atgabara's avatar

Copying my comment from above:

I'm assuming it's Katja Grace. It's public knowledge that they dated (e.g. https://www.newyorker.com/magazine/2024/03/18/among-the-ai-doomsayers), and she is quoted here: https://sd11.senate.ca.gov/news/senator-wiener-introduces-safety-framework-artificial-intelligence-legislation-2

Expand full comment
CounterBlunder's avatar

Does anyone understand why the state legislature isn't just overriding Newsom's veto? The bill passed by >2/3 majority in each house. The only source I could find on this said that "overrides are rare in California politics". Why? Is it just viewed as a more extreme action somehow?

Expand full comment
B Civil's avatar

I have been wondering exactly the same thing. I hope you get an answer.

Expand full comment
Level 50 Lapras's avatar

Presumably, overriding the veto of a fellow popular Democrat would be considered extremely rude.

Expand full comment
CounterBlunder's avatar

Yeah that's what I figured out from doing more reading on it. Although it seems like a very weird dynamic that Newsom would be antagonistic towards those in his party by vetoing their bill, but then they can't be antagonistic back. Newsom must just have a ton of power in the party.

Expand full comment
Sergei's avatar

My personal concern with the bill passing was that it would create perverse incentives to optimize down model size, since the bill does not limit a model's actual power, only metrics that can be gamed. The result would likely be more powerful but smaller models, and so easier to proliferate, which is the opposite of AI safety.

Expand full comment
duck_master's avatar

> the bill does not limit the model actual power, only the metrics that can be gamed

There's a fairly substantial evidence base (if I understand correctly) that, all else remaining equal, increasing compute and/or data causes loss to ~monotonically decrease in a power law fashion (thus also causing capabilities to increase ~monotonically, but this is way more irregular because having a specific capability != having a good loss on the test data).

That said there is such a thing as distilling big models into smaller ones. However, from what I can tell, most AI researchers prefer to train or at least finetune their models anew at the same size rather than trying to squeeze them into ever smaller parameter counts.

Expand full comment
Sergei's avatar

This is all correct, because it is easier to add power by adding size. However, once that avenue is no longer available, one starts working in other directions (nature has, given that human brain size has multiple physical constraints on it). Just because we do not currently see this happening much in AI research does not mean we would not once the incentives align that way. And the bill is definitely doing that. Unintended consequences. So, while I disagree with the stated reasoning behind limiting unregulated model size, I am quite happy with the bill not passing at this stage.

Expand full comment
avalancheGenesis's avatar

In addition to other depressions, I am saddened by how uncorrelated the passion of commentators - in both directions - on SB 1047 seems to be with having actually read the bill. This is excusable for most other communities, but I thought that was like...part of our whole Rationalist thing. Empiricism and attention to boring details and Simulacra Level 0. Perhaps this is an inevitable consequence of engaging with the mind-killer directly (still wincing over the whole Carrick Flynn boondoggle). The more such entanglements happen, though, the more this community comes to resemble Everywhere Else On The Internet, Just With Higher IQ. Which is not nothing! But there's a definite feeling of loss, of a niche fading out of existence.

Expand full comment
FLWAB's avatar

Good lord, this comment section. Well, *I* liked it! Always good to see Scott writing on subjects he cares about, regardless of whether it will make a tempest in a teapot in the comments.

Expand full comment
Spinozan Squid's avatar

I feel like anti-AI folks have always discounted the possibility of an alliance with a certain cultural bloc in America because they view the cultural difference as insurmountable when it isn't. Generally speaking, there is a cultural tribe in America that:

(1) Places a high priority on being the type of person that is not dispositionally okay with oppression happening. They believe that making the world better involves forcing yourself to believe that "unrealistically" (from the standpoint of today) better worlds are possible. They will quietly fritter this perspective away once they have significant power, but not outside of that (hence Obama's "pragmatic turn" once he got elected).

(2) Believes in a form of Straussian double-talk where you do not bring up, acknowledge, or validate in any way, statements about the world that would make it worse if sufficient numbers of people believed them.

(3) Believes that, for reasons they won't think about or discuss, women are abnormally susceptible to abusive power dynamics, and that abnormally stringent male-female relationship standards are required to maintain the stability of cross-gender spaces as places where women can feel comfortable.

(4) They value institutional conformity a lot. They view thinking of yourself as a "great man" and not as a valued cog in a social ecosystem as a form of selfishness and antisociality. They quietly understand that some people have exceptional levels of talent, but they disapprove of acts that they can construe as public narcissism.

So on and so forth.

This cultural tribe, which dominates the Ivy Leagues and the Democrat political scene, is not yours, so you can't communicate with them the way you communicate with your in-group. However, you wouldn't go to Japan and dishonor their customs. If you communicated with them using their communicative norms, this cultural tribe would be extremely sympathetic to AI restriction arguments. And, because they all believe in institutional conformity, they are all already entrenched within these institutions: they actually would know how to maneuver around New-York-style machine politics.

Expand full comment
Loarre's avatar

I think I'm pretty much a member of the tribe you're describing, and, without irony, I find your post very interesting and, frankly, not inaccurate. Point 4 rings especially true as a characterization of my own psychology on the matter, for better or worse. I do have some questions, which I'm genuinely curious about (and NOT just looking to prepare the way for me to try some pushback), because I want to make sure I am understanding you correctly. Would you describe yourself as, at least in some broad sense, "against" the 4-point tribe's positions? And would you describe yourself as, in some sense, also anti-anti-AI, and are you trying to, so to speak, strike a blow for your "side" by saying being anti-AI means having some affiliation with the 4-point tribe? (Or, for example, are you a pro-AI-regulation person actually trying to promote a tactical alliance between pro-AI regulation forces and the 4-point tribe?)

Expand full comment
Spinozan Squid's avatar

No. I come from the Midwest, so I kind of view both the Ivy League thing and the Bay Area things from an outsider lens. I respect cultural traditions that value personal integrity, intellectualism, and character. I am personally pro-AI, but right now Scott and his people, who I respect, are losing this political debate because of backdoor machine politics type stuff that they don't know how to navigate. I dislike this. Despite some superficial squabbles, Scott and his 'people' have more overlap with yours than you think: there is a similar focus on intellectualism, a similar focus on community-building, and a similar focus on personal character. Some of these values manifest differently for them than what people in your orbit may be used to. However, regardless of my personal viewpoints, I would much rather have a politics of honest, genuine, and sincere people who happen to disagree with me about certain things, than rank-corruption and genuine anti-intellectualism. Therefore, I wish that Bay Area people like Scott were better about not missing the forest for the trees with this and attempted to make genuine in-roads with your tribe using your communicative norms and your practices, even on issues where I disagree with him like AI.

Expand full comment
AbsorbentNapkin's avatar

Lot of negativity in the comments. Just wanted to mention I liked the post

Expand full comment
Robert F's avatar

Looking at the HIV press release link, I don't think you're describing the maximally charitable case, uh, charitably enough and suggest you re-read. For one, you describe it as the "penalty for infecting someone with HIV", when it's actually for 'exposing' someone to HIV regardless of actual risk of infection.

It seems his position is that from a public health perspective, it's better to encourage more testing (and treatment) by reducing the negatives (including 'stigma') associated with a positive test, as otherwise more people would prefer the plausible deniability of not getting tested at all. Considering sex/other contact with a person with controlled/medicated HIV is, in fact, quite safe, the benefits of increased testing outweigh the negatives of more transmission from people not disclosing their status.

Whether that's true or not is debatable, but it doesn't rest on just bringing "the penalties back into line". I agree that the focus on stigma and disparate impact is pretty tiresome.

Expand full comment
Level 50 Lapras's avatar

Thanks, that is a much more sensible argument.

Expand full comment
Malcolm Storey's avatar

I think the point is (or should be) that AIDS is no longer a death sentence, so risking passing it on to somebody else is not as nasty as it was previously.

Expand full comment
ProfGerm's avatar

A lifetime of medicalization, where if you go off the drugs you will have a resurgence, is not exactly a walk in the park and still a pretty nasty thing to pass on to someone.

Expand full comment
Malcolm Storey's avatar

Absolutely but it's assault rather than murder.

Expand full comment
ProfGerm's avatar

If you don't get tested, you don't get treated, you die slowly and painfully. I understand there is a certain subset of a certain subpopulation (Gaetan Dugas being the poster boy thereof) that views that without concern, but I struggle to think of this as a meaningful population worth catering to. If they have no concern about *dying slowly and painfully* so they have plausible deniability, what difference does anything else make?

I don't think it's merely untrue that it makes a difference; I think it's absurd to consider that particular logical chain remotely possible. Thus, it rests wholly on stigma and not at all on anything that matters to the public writ large.

Expand full comment
Robert F's avatar

I'm no expert myself, but this view feels detached from the real complexities of human behaviour. My understanding is people can have and spread HIV for years or decades without feeling symptoms. Living in denial is a well known phenomenon, people engage in behaviours that are risky for their health. You honestly can't imagine a situation where someone might be at risk of HIV but choose not to get tested? In a world where:

"Nearly a quarter of sexually active men and 15% of sexually active women reported behaviors that could transmit HIV in this population-based study. Of those who reported one or more behaviors that could transmit HIV in the past year, approximately half reported not having had an HIV test in the past year. Of those who reported one or more behaviors that could transmit HIV in the past year, approximately half reported not having had an HIV test in the past year." - Takahashi, Traci A., Kay M. Johnson, and Katharine A. Bradley. "A population-based study of HIV testing practices and perceptions in 4 US states." Journal of general internal medicine 20.7 (2005): 618-622.

Denial and refusal/reluctance to get tested for HIV is literally a plot point in a bunch of media about AIDS, for example Dallas Buyers Club and Angels in America.

Expand full comment
ProfGerm's avatar

>Living in denial is a well known phenomenon, people engage in behaviours that are risky for their health.

While true, not many of them are quite so risky to their 'partners.' Though I suppose social models of obesity might suggest otherwise.

>My understanding is people can have and spread HIV for years or decades without feeling symptoms.

While most of them have died out, there was also an extended period of people that *did* have symptoms, but refused to accept the connection and refused to accept any responsibility to the public.

>You honestly can't imagine a situation where someone might be at risk of HIV but choose not to get tested?

I *can*, I just find it incredibly difficult to be charitable to them or to think that it is in the public interest to cater to them. At best it's wildly short-sighted and incredibly, hedonistically selfish.

I don't think these efforts will do anything to change their minds; it's about "stigma," not about public health. People are free to engage in stupid high-risk behaviors. That does not mean we should make it easier for them to inflict risk upon others, or that we must act like stupid high-risk behaviors aren't stupid and high-risk.

Expand full comment
Robert F's avatar

I'm trying to describe the position in the press release.

I said - I saw an argument in there that the goal of the policy is not limited to fairness/treating HIV the same as other diseases. It claimed there is a benefit from getting more people tested that outweighs the cost of tested people potentially passing it on without the risk of a long prison sentence. That's plausible because HIV mostly gets spread by untested carriers, since with treatment you generally can't pass it on.

You retorted that it can't increase testing rates because almost nobody at risk of HIV doesn't get tested, as they'll die horribly if they don't.

I came back to point out that, in practice people don't necessarily feel symptoms for a long time and in practice many people have unprotected sex (or do other risky activities) and don't subsequently get promptly tested - backed up with survey data.

You come back with ok but it's not in the public interest to cater to them.

This last point doesn't make sense to me - it's not about catering to people who don't get tested, it's about getting more people tested in the first place and therefore increasing treatment and lowering transmission. The people being catered to here are those who know they have HIV.

I agree there is a question of how much reducing the legal risk actually affects testing rates. I don't think many people necessarily consider the potential harsh criminal punishments ex ante. But I think that's an empirical question, not something you can just logically infer.

I think of the following thought experiment: during the covid pandemic, many governments tried to enforce at home quarantine for infected people. However, this was not universally followed, and many people didn't test or stay home if they had a cough (maybe they were in denial - "it's just a cold"). Do you think if there was a 10 year prison sentence for those who DID test and report the results, but didn't follow quarantine correctly, that would have no effect on the rate of testing/reporting?

This thought experiment is valuable because it puts me in the mindspace of regular people I know, not a bogeyman like Gaëtan Dugas.

Expand full comment
ProfGerm's avatar

Edit: How strangely far we've come from the post's topic thanks to the involvement of one controversial politician. Thanks for having this out, and being polite about it.

I'm not entirely sure where we're talking past each other. My position is that the change in law *will not result in more people getting tested*, which you seem to agree about. My point downstream of that is thus the decision is overly-focused on "stigma," and ultimately has negative effects on actual health outcomes.

I do think it can be logically inferred. Given that to treat a disease you must be tested for it, and that absent treatment you will suffer a slow and painful death, the decision calculus gives options like A) don't test, don't treat, eventually die painfully, but no risk of a criminal charge for knowledgeable transmission; B) test, *don't treat*, spread the disease, risk a criminal charge; C) test, treat, have minimal risk of spreading the disease and negligible risk of a criminal charge (since treatment makes it almost impossible to spread). The idea that people would avoid testing in order to avoid the charge is suicidally batshit insane, and yet that was the main argument behind reducing/removing charges.

I don't think the covid thought experiment works well for all the reasons HIV was (and is) such a complex and terrible problem. Not least of which, acute versus chronic illness, and HIV's extended incubation period. Unlike COVID you never clear the disease, but treatment can make it undetectable and incredibly unlikely to transmit. The, uh, transmission vectors change the issue too; the kinds of behaviors that really sparked the AIDS epidemic aren't exactly "breathing too close to others." I know hypotheticals don't have to be a perfect fit, but the particular dangers of HIV make it meaningfully different in kind, not just degree.

I think I could be convinced there are good reasons for decriminalization or reducing sentences. I do not think that Wiener, he of the unfortunate nominative determinism, gives good reasons, too focused on stigma.

>not a bogeyman like Gaëtan Dugas.

While I'm not inclined to be charitable to Dugas, I don't think it's accurate to call him a bogeyman as either a fiction or a lone problem. He was maligned as the figurehead (a deliberate editorial decision, if I remember Shilts correctly, to help sell the book) of a significant subculture whose behaviors ultimately resulted in its own (near-) demise. That is why I don't like the stigma argument, that provides too much cover and excuse for nigh-suicidal behaviors.

Expand full comment
Robert F's avatar

Hi, yeah appreciate the discussion - though I'll make this my last reply lest we go on forever!

Agreed - it seems the main crux here is whether the change could result in more people getting tested. I think you're right if everyone were being very rational, but I think there's at least the potential to increase testing, especially considering the long latency period.

I'll also point out one area I think you are mistaken: You say:

>C) test, treat, have minimal risk of spreading disease, have negligible risk of criminal charge (since treatment makes it almost impossible to spread)

But this isn't true and was actually part of the problem the law change was trying to address, that the actual risk of transmission wasn't considered for the criminal charge. To quote from Wiener's press release:

>Specifically, it eliminates several HIV-specific criminal laws that impose harsh and draconian penalties, including for activities that do not risk exposure or transmission of HIV. Currently it is a felony punishable up to nearly a decade in prison to expose – not transmit – HIV, while all other diseases are misdemeanors

Expand full comment
Cjw's avatar

I handled a few of these from both the prosecutor and defense role in a midwestern state. As a practical matter, you cannot make these cases for most people who aren’t inmates. For inmates, you have access to documented proof that the person knew they had HIV as of such and such a date, because they are regularly tested and those reports are in state control. For civilians, you have to rely on finding out where they’ve had medical care, then subpoena that place, then have the hospital’s lawyer move to quash it, then you get a protective order from the judge so they have to turn it over, and maybe you luck into a document. Otherwise, you’ve got a test and maybe some oral testimony that he said he was poz to some other dude at some point.

With that experience, I understand Newsom’s position pretty well. The harsh penalties do discourage any testing that would produce a good dated document saying you’re poz.

Expand full comment
Robert F's avatar

Interesting, I did wonder how common it would be to actually get prosecuted under laws like this.

(Also FWIW this was Scott Wiener's position, not Newsom's)

Expand full comment
Michael Watts's avatar

> Over four years, they’ve scaled up to “a mass movement of a thousand young people across every inhabited continent". I cannot even imagine how good all of this is going to look on their college applications.

This is an interesting example of a cultural pathology. Assuming that this project is a success, why would they apply to college at all? What do they need from it?

Expand full comment
anomie's avatar

...Because college degrees are the necessary qualification to get most jobs? Activism doesn't prove that you will be a good worker. If anything, it's a liability. These companies do not want idiots who are going to try to play the hero and cause a ruckus.

Expand full comment
Michael Watts's avatar

They already have jobs, at least if you believe that adults who run nonprofit organizations also have jobs.

If you don't consider that they have jobs, in what sense is their project a success?

Expand full comment
AlexZ's avatar

"In a few years, when the real impact of advanced AI starts to come into focus, nobody will be able to lie about which side of the battle lines they were on."

Ah, I see it's your first time at this rodeo...

This is an almost comically naive line. The electorate has a super short memory, and Newsom (for example) has left plenty of outs to say something like "I supported STRONGER regulation of AI all along, but nobody listened!" To take another example: the list of things both major party presidential candidates have said that are contrary to their currently stated beliefs is about a mile long, and they hardly seem to be getting punished for it.

Expand full comment
Level 50 Lapras's avatar

The claim about Pelosi's "impossibly good returns" sounds much less impressive when you read the linked article, which says it was due to just buying a handful of tech stocks during a time when tech did really well.

I can't imagine what kind of insider trading would lead someone to just buy a bunch of Apple, Microsoft, and Nvidia. If anything, that sounds more like Wallstreetbets than insider trading.

Expand full comment
John Schilling's avatar

OK, I see a lot of people here are very, very disappointed with Scott for writing a piece on SB 1047 that only tells one side of the story. Even though the title of the piece is "SB 1047: Our Side of the Story", and the introduction explicitly includes a link to the post where he already, and charitably, told both sides of the story. Apparently, the rule is that it's Bad, Bad Scott if he doesn't tell both sides of the story every single time he takes a stand on the issue.

Sometime in the next few weeks, Scott is almost certainly going to suggest that we all maybe vote for Not Donald Trump in November. Is he required then to start with the full text of "You are Still Crying Wolf", and everything else he has written with any charity towards Trump?

Did any of the people complaining now, complain every time Scott wrote a post calling out the excesses of woke progressivism or progressive wokeism or whatever, or suggest strategies for combating those excesses, without opening with a full defense of That Which We Call Woke? Scott has frequently and charitably defended aspects of woke culture, and steelmanned it at the margin, but he hasn't had a consistent policy of doing so every single time he raises the issue.

Or are we just dealing with people who *agreed* with Scott on all those previous issues, and only now are finding that Scott disagrees with them about something they care about? Because I think it's that.

Scott is a wise, intelligent, and thoughtfully dispassionate commentator on the world, but he also lives in that world and he's allowed to take stands on things that affect it. He is allowed to take a stand you oppose; he's certainly taken stands I oppose. He is not required to paralyze himself into uselessness by always prefacing his every public stand with "but maybe I'm wrong, here's the best case I can make for why I'm wrong and you all should do the opposite of what I'm asking". A link to a different work that addresses that concern is more than sufficient.

Like many people here, I am very disappointed. But not in Scott.

Expand full comment
Moon Moth's avatar

Thank you for saving me from trying to write the same thing.

If someone's got a disagreement with Scott, they can make the case for it, right here in the comment section. He might even engage! But he's not some sort of trained monkey that performs all three stages of the Hegelian dialectic whenever we whistle.

Expand full comment
Loarre's avatar

Yes! Thank you. Thanks to both Moon Moth and John Schilling for stating something important very well.

Expand full comment
Al Quinn's avatar

Maybe Scott is finally waking up and becoming a post-rationalist (like I did 5 or so years ago)!

Expand full comment
John Schilling's avatar

I think advocating strong action against AI risk is just plain rationalist. I mean, that's literally what EY launched the movement for.

Expand full comment
Trofim_Lysenko's avatar

I just think it's bad writing because it's profoundly unpersuasive. Anti-persuasive, even. I am much more strongly anti-SB 1047 after reading Scott's article than I was before reading it, which seems counter to his intent.

Expand full comment
John Schilling's avatar

I don't think it was meant to persuade. That was the previous, linked post on SB 1047. This was a status report and strategy session for people who were already persuaded to join Scott's side in this matter.

Expand full comment
Loarre's avatar

Thank you, thank you for writing this comment. I agree. I've been quite disappointed at the put-upon tone of too many responses to Scott's piece.

Expand full comment
Paul Botts's avatar

This, 100 percent.

Don't know much about SB 1047, certainly far too little to form an opinion on it. I have lately learned a good deal more about the ACX commentariat though and -- blea.

There was a time when I was in the habit of citing this place with family and friends to show that a comment section can be better and more worth spending time in than the general online norm. The reactions to this post are a sad example of why I lost that habit a year or two ago.

Expand full comment
Clutzy's avatar

I find this diagnosis entirely wrong. The problem with the article is not that it is one sided or biased, it is that it is myopic.

Calling someone the Greta Thunberg of <insert anything here> is disqualifying for whatever is inserted. Greta is an emotional child who has never made anything but emotional, childlike arguments.

This particular state senator Scott is touting is also a total weirdo. If you can't realize that while writing a post where he is a central figure, that is no good.

In the same paragraph, he praises Newsom for saving California from other loony do-gooders without seriously contemplating whether he is also a loon.

I, too, expect that Scott will endorse Kamala soon; it's in his nature. I would expect that post to be much better than this one. Hopefully it would not focus on fake boogeymen like Jan 6 and the end of democracy, and instead would focus on things like how immigration is awesome actually, trade is good, police are bad, inflation is good, or something like that. But if it is J6-focused, it will be just as schlocky as this post.

Expand full comment
Alex Mennen's avatar

Newsom made skeptical comments about SB 1047 a while before vetoing it. I haven't checked, but I assume AI stocks didn't react to that either. Which means we probably already had evidence from the efficient market hypothesis that the bill wouldn't have slowed AI progress - evidence that could have been cited as an argument against vetoing it.

Expand full comment
Will Petillo's avatar

I see the value in not continuing to try to appease tech industry worries about regulation with compromise, and also the value in switching focus to coalition building. If playing by the rules of democracy fails because of systemic corruption, then by all means play to power. I am not so sure, however, that the best alternative is to go all-in with the liberal policy bundle. Sure, actors, labor unions, ethicists, etc. have overlapping interests in stopping AI, but so do conservatives. It wasn't Republicans that killed SB-1047 -- it was simple, old-school corruption. Concern about AI remains largely bipartisan, and it would be a shame for AI policy to go the way of environmentalism (which was also originally bipartisan), so there is no reason to prematurely shut the Right out of the anti-AI coalition. I am not a conservative myself, but from my read of things, there seem to be two pillars that they want from AI regulation:

1. Securing a US advantage in AI, such as in the form proposed by "Situational Awareness" (nationalizing AI development under the Pentagon so that it can have better security and US-based labs aren't giving away their lead to China). This isn't a great end-state for AI safety, but it is miles better than the current state of things.

2. Allowing for alternate finetunings of models so that they can be differently biased from the way Silicon Valley makes them. This one is kinda negative from an AI safety perspective since it tends towards open source, but it's a relatively small concession since the real danger is in the frontier.

Expand full comment
anon123's avatar

>As a wise man once said, politics is the art of the deal. We should see how good a deal we’re getting from Dean, and how good a deal we’re getting from the socialists, then take whichever one is better.

Not much interest in or knowledge of AI personally, but this is looking like climate change polarization all over again. Who's going to be recruited into the coalition next? Maybe the actual Greta Thunberg to go along with the Greta Thunberg of AI? DEI grifters? Abortion activists? Probably not the best idea for policies you probably want to be passed at the national level some day.

Expand full comment
TonyZa's avatar

"Back in 2010, when we had no idea what AI would look like, the rationalists and EAs focused on the only risk big enough to see from such a distance: runaway unaligned superintelligence. Now that we know more specifics, “smaller” existential risks have also come into focus, like AI-fueled bioterrorism, AI-fueled great power conflict, and - yes - AI-fueled inequality"

The biggest problem with activism is that activists never say "this is not a threat, let's go home." When you have people staking their identity on a cause, organizing and donating for it, people whose job is fighting for a particular cause, they never let it go.

Now that it is clear that AI is not on a path to runaway unaligned superintelligence (which was a curiously religious idea promoted by euphoric aughts atheists), we have AI-fueled inequality and other generic political drivel sold as existential threats.

Expand full comment
LesHapablap's avatar

How is it now clear that AI is not on a path to runaway unaligned superintelligence, or other existential threats? To me it seems a lot more possible now, with AI capabilities advancing way faster than anyone anticipated.

Expand full comment
Lalartu's avatar

The fact that socialists support this bill is strong evidence that it is bad. All else equal, it is a sufficient reason to oppose it.

Expand full comment
Maxwell E's avatar

This, but insert “Marc Andreessen” in place of socialists.

Expand full comment
Maynard Handley's avatar

This fails to engage with the motivations behind the bill, which are things like “what if someone uses AI for bioterrorism”?

So this IS basically an anti-library bill, and its supporters are basically against libraries?

That’s what this has felt like from the start - claim we’re against Skynet, but what we’re ACTUALLY against is people saying things that might upset other people…

After all, who knows, someone might use this sort of knowledge to show that the Donation of Constantine was a forgery, or to figure out patterns in who wrote the old and new testaments.

Expand full comment
Christoffel Symbols's avatar

No opinion on the meat of the article/post/bill, but the intro reminded me (with a slight chuckle) of one of the quaint programmer/hacker slang descriptions of a type of coding bug: The mad girlfriend bug ---- When you see something strange happening, but the software is telling you everything is fine.

Expand full comment
B Civil's avatar

Funny

Expand full comment
Martian Dave's avatar

I know very little about AI or California politics but like it or not "regulation" is so left-coded that it will be treated with suspicion by centrist politicians - even if the law gets passed, maintaining a well-staffed, suitably qualified enforcement administration over the long term is a bigger challenge.

Expand full comment
Loarre's avatar

Is "regulation" really *left*-coded? I think of regulation as more like "old line Cold War liberal/centrist/consensus 1960s-70s"? You know, Clean Air Act of 1963, Nixon founding the EPA and setting up OSHA, etc. I think of leftist as something more like "the workers seize the means of production," at least back in the day, and maybe now something in the range of Occupy, anarchist police-free zones, and "the means of production should abolished because humans are a cancer on Gaia," etc.

Expand full comment
Martian Dave's avatar

I thought I'd replied to this but it's come up as a totally new post, weird.

Expand full comment
Martian Dave's avatar

You're right there has been a big shift in how the right views regulation, and the regulatory state isn't exactly a socialist paradise. But if you're Gavin Newsom running for president in 2028, being seen as generically anti-business is probably bad, unless there's a paradigm shift in politics.

Expand full comment
NotAbelian's avatar

If you are serious about reaching out to left-of-centre or AI misinformation groups, may I recommend meeting up with some in person and letting them state their beliefs?

There's a fair few lines in this article which just come off as very odd.

For example, you seem to be using socialism and 'The Left' as synonyms, and keep referring to those concerned about short-term AI harms as socialists. Bluntly, there just aren't that many socialists nowadays - it's a specific term for people who want all factories, farms, and other major infrastructure to be owned by 'the people'. It's like referring to right-wingers as libertarians.

I'm not trying to start an argument here - just provide data.

(As for your comment about people accusing EA of being tech company stooges, I agree with your idea that many probably did believe that - 'astroturfing' is a well known tactic by companies. So showing EA is not in their pocket is a good thing)

(You probably know this, but surveys suggest a lot of EA people lean left. 76% here - https://forum.effectivealtruism.org/posts/AJDgnPXqZ48eSCjEQ/ea-survey-2022-demographic

That could be a useful overlap)

Expand full comment
Philosophy bear's avatar

Scott’s reflections on the battle to regulate AI to stop it from killing us all include a discussion of the (?emerging) détente between rationalism and socialism on the question of AI risk, the broad outline being that the left cedes the point that AI-killing-literally-everyone is a real worry and the rationalists cede the point that AI killing everyone isn't the only risk worth worrying about - with other risks critically including cyber-feudalism and AI-driven inequality. A politically sensible set of concessions with the great advantage of being true.

Despite obvious shared interests and common sense alike indicating that this pact is a sensible one, both sides have tried furiously to avoid it. The process of getting here reminds me of what frustrates me most about both the left and the rationalists at their worst.

For the left at their worst:

1. Too much focus on realpolitik and not enough on the content of ideas.

2. Too much focus on the danger of ill will and not enough on the danger of getting the facts wrong.

3. Too much focus on grand strategy and not enough on detailed tactics

4. Believing their enemies are uniformly monsters

5. Thinking that there's no point in talking to anyone who doesn't agree with them.

6. Thinking the realm of the political is utterly helpless before the capitalist.

For the rationalists at their worst:

1. Too much focus on the content of ideas and not enough on realpolitik.

2. Too much focus on the danger of getting the facts wrong and not enough on the danger of ill will.

3. Too much focus on detailed tactics and not enough on grand strategy

4. Believing their enemies are just as virtuous as they are

5. Thinking that they can solve everything by talking to those who don't agree with them

6. Thinking the capitalist is utterly helpless before the political.

Expand full comment
JaziTricks's avatar

If infecting others with HIV carries lower penalties, it means that infecting others with life-shattering illnesses in general isn't penalised harshly enough.

btw, how many other illnesses as bad as HIV are there that one can infect others with?

bizarre

Expand full comment
Vote4Pedro's avatar

If you have a 2/3 majority in the legislature, why not just override the veto?

Expand full comment
JoshuaE's avatar

If your view is that EA must engage in a war against progress, I'll exit here (this post has made me reconsider all of my EA donations). SB-1047 was a mediocre law poisoned by its bad beginning, and its proponents continue to argue in bad faith (many of its opponents also argue in bad faith). Scott, your failure to realize that other people have rational reasons for opposing you increases the need for people to oppose you.

Expand full comment
Maxwell E's avatar

I am disappointed to see this kind of take in this comment section, of all places. SB 1047 was crafted to be maximally charitable to all of the specific objections raised by good-faith opponents and was still tarred as the work of socialists. Zvi has written particularly well on this. I understand Scott’s frustration here perfectly; it’s amazing how red-pilled against your opponent you can get when they pretend to be in negotiation and then lie to your face. The likes of a16z continue to pursue comical levels of epistemic arrogance, and they’ll richly deserve what’s coming.

Expand full comment
JoshuaE's avatar

I have no respect for a16z either, but I think it's very easy to find claims like this where proponents of the bill say something like "this won't cause a major impact on open source models/new developments, but if it did, that would be OK, maybe even good," or claims that if it just raises costs on OpenAI for no benefit, that is also OK. The initial bill was so comically bad that I think it should have been withdrawn and a new version created for the next legislative session, because no one is going to believe that the modifications are actual limits. The cavalier attitude that Zvi shows to https://thezvi.substack.com/i/147793668/objection-sb-will-slow-ai-technology-and-innovation-or-interfere-with-open-source discredits the rest of his counters. If you add regulatory requirements in creating the plan, reporting to the AG, etc., you can create an environment in which progress is fundamentally delayed without any safety benefit, just because the company now has a giant regulatory team that has added layers of process. As soon as you think the x-risk is low and the gains from progress are high, this is a bad trade. (I also think it's disingenuous to say we are setting the threshold to $X and it requires a new law to change it to less than $X, therefore it's a false fear to worry that passing this law will result in the threshold being less than $X in the near future; passing the second law will be much easier as the framework already exists.)

I find it disappointing that EA was so willing to support crypto, which has caused far more than $500 million in damage without any redeeming qualities (other than evading sanctions), but is unconcerned about the possibility of stopping the development of a technology that actually seems to be part of the end of the great stagnation.

Expand full comment