270 Comments

This is a good summary, and Zvi's was a good run-through also. However, I question a few things:

- Beyond "it's not that onerous", why should we have it is unclear, since we have no examples of what harms (do worms created by a coding platform count against them?).

- Who will police this, beyond the platonic ideal of a good bureaucrat? This is not a trivial problem: even in an extremely well-defined area like medicine, with high oversight, things go wrong, as you have written about the FDA. Here we have given away those guardrails.

- The assumption that the rules will be continually updated as we get better at training seems at odds with what you've written at length about various other regulatory regimes.

- If everything goes right, sure it might be a nothing-burger. But what is the case where it goes wrong and strangles the industry? Surely that has to be taken into account in the analysis, considering the base rates.

I'm sympathetic to your theory that this is an existential risk with some probability, which is perhaps why the last argument is weighted towards one side for you, but it is worth wondering whether a $500 million cyberattack is really that high a bar. NotPetya, Epsilon, the Veterans Administration hack: they all exceeded that threshold.

Expand full comment

The first objection sounds like "no nuclear plant has ever melted down before, so why have nuclear safety"? I think it's useful to try to prevent harms before the harms arise, especially if the harms are "AI might create a nuclear weapon". But probably this is a disagreement too deep to resolve in one comment.

Who polices any law besides the government? I agree it would be great if we could get God to directly implement laws, but this seems pretty typical in that the government has to implement it through courts and stuff. This is kind of what I'm talking about when I say it's typical of regulation in any industry except better.

I think if you're precautionary principling, you should do it on both sides.

I agree that a $500 million cyberattack against critical infrastructure isn't that bad in the grand scheme of things. But an AI that can do a $500 million cyberattack probably *is* that bad - for example, you could ask it to do a second one.

Expand full comment

I don't think the first comparison is fair, but yes, it is not something that can be easily resolved here. I also think you're arguing against a strawman; my point is only that failing to take into account how this can go wrong, or simply assuming the capabilities that would let us protect against it, makes it a highly one-sided argument.

Effectively you are saying "this can cause problems, therefore let us police it," as opposed to doing a cost-benefit analysis of what is actually likely to happen. I note again that this is the exact kind of argument you have criticized in the FDA's case. We would not have wanted medical innovation to stop, we would not have wanted materials innovation to stop, and we did not even want nuclear innovation to stop. This is just one more God that we ought to reject.

Expand full comment

How much work are you expecting the compliance to be? It seems to me that this is one or two extra headcount at OpenAI (or maybe zero, since they already do most of this). It's the equivalent of Phase 1 trials for a drug, not Phase 3. I don't think a law mandating Phase 1 trials for drugs would be unreasonable, especially if drugs harmed people other than the user such that we couldn't trust users/companies to handle the externalities.

Expand full comment

You might be right, but I also think that if this law had been implemented a year ago we'd have an invisible graveyard of GPT-4-ish level foundation models competing with OpenAI. If you took off the FLOPs requirement and instead put the obligations on a few top labs (you wouldn't need to make it a general requirement; as you say, training these things is expensive, so there will always be only a few companies), and created somewhat more legible standards with sunset provisions for when they go out of date, I would have far fewer objections.

As it stands what I see is that the industry is doing a bunch of tests, these tests are evolving almost faster than the models themselves, nobody quite understands how to test them in the first place, and we are trying to enshrine the necessity of doing those tests in law.

Expand full comment

I love this comment.

Expand full comment

Yeah, exactly.

If you had passed this law 2 or 3 years ago, with a limit at the 10^23 level rather than 10^26, you'd likely have killed Llama 2 and just handed free money and a lack of competition to Anthropic / OpenAI. If that had happened, you'd also have stopped or limited the several dozen papers on interpretability / alignment that have used things like Llama 2. Thus, in the counterfactual world where this bill passed two years ago, I think it would be negative EV and negative safety EV, in that it would be (1) shoveling cash into OpenAI + Anthropic and (2) limiting our knowledge of the world.

Note also that many AI safety orgs have explicitly tried to criminalize AI at this level (https://1a3orn.com/sub/machine-learning-bans.html), and they just didn't have enough power to do so. Several of the organizations which have tried to ban open source in the past (CAIS, The Future Society) are in favor of this bill.

(I think the multiple ambiguous provisions in this bill are pretty clearly going to be enough for legal departments to kill open source releases, especially given the uncertainty of how interpretation could change in the future. They'll look at it and say, "Huh, we're opening ourselves to an unclear liability dependent on future judges' interpretation of what 'reasonable' is, and to potentially disastrous litigation *even if we end up winning*. Hell no.")

Expand full comment

This is exactly what everyone says about every regulation: that it won't be that much work, and surely it's worth it given how serious the issue is.

I'm not saying all regulation is bad, but I think you can see how there is a slippery slope problem here. We should at least be able to find a better solution than "this doesn't seem that bad right now," "we'll change it if it gets bad," and "this is probably important enough that even if we hamstring ourselves in hard-to-fix ways, it's worth it." These arguments have gone poorly most of the time in the past; surely we can advocate for some better mechanism for addressing them.

Expand full comment

>But an AI that can do a $500 million cyberattack probably *is* that bad - for example, you could ask it to do a second one.

Copilot or similar could probably write a botnet *right now* that could do damage in that ballpark -- this doesn't seem like the sort of thing the law is *intended* to cover, but that... is not super-reassuring to me given the current tendency towards 'creative' interpretations of enabling legislation within the regulatory state.

Expand full comment

Sorry, but no it couldn't.

It's only just past the threshold where it's useful as a coding aid. It is in no way capable of creating a botnet or exploiting vulnerabilities in the wild (better than already-existing non-AI tools).

Expand full comment

“The first objection sounds like "no nuclear plant has ever melted down before, so why have nuclear safety"?” may not be as much of a reductio ad absurdum as you intend. People did spend a lot of time thinking about nuclear plant safety in advance, and frequently overlooked the risks that shaped how actual incidents developed. And if you’re trying to be reassuring about regulations not smothering an important research field, nuclear power is probably not where you want to go. You might ask, “Well, would you have preferred they not spend all that time pre-empting nuclear accidents?” Quite possibly. The cost of our not having more and better nuclear is staggering, while the value of a lot of the regulation written “in advance” is debatable.

(I accept your arguments elsewhere that this is actually as good a bill as we’re likely to get, and do not argue against it in general, just taking issue with this specific point.)

Expand full comment

> People did spend a lot of time thinking about nuclear plant safety in advance

Look up "Cockcroft's follies" (and then curl up and cry for a bit). Basically one engineer insisted on putting fiters on the chimneys of the Windscale nuclear weapons facility in the UK, which everyone said was a waste of time and money. Then the reactor caught fire, and but for the filters, we'd have had another Chernobyl-scale disaster.

Expand full comment

Thank you for the reference! I was vaguely familiar with the event, but reading up on it now was interesting.

This was in 1957, after multiple rounds of nuclear regulation. The claim is not that regulating nuclear safety was effective; on the contrary, I was saying that it happened but did not help.

Incidentally, in the context of the broader discussion: it seems the military nature of the project (it was entirely for military purposes) obstructed information flow and contributed to the accident. I’m not sure the lesson is to target the one big tech company that open-sources its AI.

Expand full comment

My objection is more like: we're saying "nuclear plants may be dangerous, so let's work on uranium safety" and missing some critical water-valve work that turns out to be important but that we didn't predict. I agree that the existing companies recognize the risks and are already doing what they can about them. Making things like this a law seems more like a security blanket so people feel secure, without providing any actual additional security.

Expand full comment

There are at least three ACX regulars who think that AI boxing and control of off switches will be done by default, and that enforcing both would drive ~all of the risk to zero. I don't know how reassured they would be by this.

Expand full comment

If this is true, then why do we need a law? Whether it works or not, the law would add nothing to it.

I agree with Scott's comment: "If the California state senate passed a bill saying that the sky was blue, I would start considering whether it might be green, or colorless, or maybe not exist at all." I suspect the law is to enforce something or support someone not readily apparent.

Expand full comment

1. I view those commentators pretty dimly.

2. I don't know of any evidence to show they are boxed by default, rather than immediately hooked up to an EC2 box and prompted to do things as an agent. If you have evidence otherwise I'd like to know. I was mostly saying that it may be convincing to those three people, who apparently believe that those two provisions are important but don't know whether they are enforced.

I'm pretty sure those people didn't think too deeply about this, and I would bet they would not show up to defend their views, because it has always been an excuse.

Expand full comment

> The first objection sounds like "no nuclear plant has ever melted down before, so why have nuclear safety"?

It's funny you should mention that, since the Nuclear Regulatory Commission was established 14 years *after* the SL-1 plant exploded and killed three people.

Nitpick aside, the juxtaposition of this with The Origins of Woke post is darkly ironic. Yes I know you acknowledged it yourself, but I hope that that will at least help you understand why other people might be worried about government regulations having unexpected severe negative impacts.

Expand full comment

The Atomic Energy Commission was created before SL-1 and was responsible for regulating reactor safety starting in 1954. Not that it did a particularly good job, but it did exist and tried to prevent accidents before they happened.

Expand full comment

Well obviously there was *some* regulation, but it was evidently bad enough that everyone decided to abolish it and create the NRC instead. And it's the NRC that is today's bugbear.

Expand full comment

The biggest problem is with the "any model equivalent to these" part. Realistically, AI's "killer apps" (at least in the mid-future) will be in writing reports, summarizing research papers, predicting stocks, and the like, and I have a hunch that more deterministic and classical algorithms will slowly follow where AI goes. Then these classical algorithms (which will likely use a fraction of the compute) will fall under the "equivalent" rule. Of course, you can argue that they are not actually as smart as the AIs, just very good at the specific problems they were written for, but good luck convincing judges, who will mostly be laymen and know AI through its best-advertised applications.

EDIT: The relevant wording in the bill seems to be: "(2) The artificial intelligence model was trained using a quantity of computing power sufficiently large that it could reasonably be expected to have similar or greater performance as an artificial intelligence model trained using a quantity of computing power greater than 10^26 integer or floating-point operations in 2024 as assessed using benchmarks commonly used to quantify the general performance of state-of-the-art foundation models."

This is somewhat reassuring, since it implicitly defines the equivalence in terms of the compute used to train the model. That said, I am less than reassured, as the definition of an "artificial intelligence model" in the bill is extremely vague and covers all software and hardware. Has the Java Runtime Environment been trained using more than 10^26 integer or floating-point operations? One can easily answer "yes", since any user filing a bug is "training" it in a way.
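For a rough sense of what the 10^26 figure itself means, here is a back-of-the-envelope sketch using the common "training FLOPs ≈ 6 × parameters × tokens" approximation for dense transformers. The model sizes and token counts below are hypothetical, purely to show the scale involved:

THRESHOLD = 1e26  # SB 1047's covered-model compute threshold

def training_flops(params, tokens):
    # Standard rough estimate for dense transformer training compute.
    return 6 * params * tokens

hypothetical_models = {
    "70B params on 15T tokens": training_flops(70e9, 15e12),    # ~6.3e24
    "500B params on 40T tokens": training_flops(500e9, 40e12),  # ~1.2e26
}

for name, flops in hypothetical_models.items():
    status = "covered" if flops >= THRESHOLD else "not covered"
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")

The point of the sketch is just that only the very largest training runs plausibly approach the threshold under this approximation; the question in the comment above is whether the "equivalent performance" language lets far smaller systems get dragged in anyway.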

Expand full comment

A small counterpoint, with caveats. There are certainly judges who sit for a case that they should recuse themselves from for lack of understanding of the contested matter. But the vast majority of judges educate themselves on the matter at the heart of any suit that they sit for. They get more diligent, generally, the higher up in the court system one goes. This goes all the way to the SCOTUS, which recently spent the better part of its oral argument time during the Trump immunity case having what amounts to a skull session.

A judge's job is to educate themselves on the topic before them, partly by using the counsels' briefs and filings and partly through their own research. Anywhere that you find humans, you can point to the possibility of error. That can certainly happen in law. Thankfully, there are higher courts that one can take a case to if a judge fails (with one final court at the top, of course).

Expand full comment

Um, it's enough to have a couple of bad-faith or incompetent judges and attorneys to terrorize a company. Do you trust Californian judges to wield the law wisely where Elon Musk is concerned?

Expand full comment

That's partly my point, I do trust them to adjudicate wisely. But, I acknowledge that humans are at the helm and we will never be perfect. I can trust a system and/or its elements enough with some appreciation for the probability of its foibles coming to the fore.

Expand full comment

The SCO-Linux case took 18 years, during which the use of Linux in the industry remained under a cloud of uncertainty and worry. Why do you trust things to get better now?

Expand full comment

I don't.

Expand full comment

They already don't in Delaware, as we saw with that Tesla shareholder lawsuit.

Expand full comment

If that happened, in the worst case where everyone involved was being maximally stupid, you would have a very, very easy and safe limited duty exception on your hands. You'd file a paper.

Expand full comment

Limited duty exceptions need to be claimed ahead of use; they are not a good tool for when one gets hit by a surprise lawsuit on an already published product.

Expand full comment

Only the Attorney General can sue you under this bill; they are highly unlikely to care about your harmless model that technically needed to file for the exception; filing the form is trivial if your model is clearly below SoTA; and there is a grace period until 2026. It is really hard to generate an actual problem.

Expand full comment

>they are highly unlikely to care about your harmless model that technically needed to file for the exception

Unless, of course, you hold or have ever held uncalifornian opinions and are sufficiently prominent.

Expand full comment

I’m curious about the assumption that classical methods will slowly follow AI (interpreting you as comparing neural networks to pre-NN models). Can you give one example where this happened?

The “ImageNet moment” was 12 years ago. Is there any classical method that consistently matches even the 2012 deep models (not to speak of anything later), or even just improves substantially on its 2012 state? If anything, there’s *less* attention and resources allocated to such research. The need for lighter models is satisfied by smaller and smarter deep models (ResNet-18, quantization, pruning, distillation, etc.). Yes, XGBoost may be an exception, but even that is almost always (to the best of my knowledge) applied to features extracted from a neural net, at least in the context of vision (tabular data is a different story).

Since RNNs and later transformers took over NLP, can you give an example of classical methods “fighting back”? I’m less versed in NLP than in vision, but my firm impression is that most friends I have working in “classical NLP” have abandoned it in favor of transformers.

Recommender systems (particularly sequential ones) - any evidence of classical alternatives getting anywhere near SASRec, BERT4Rec, etc.? My impression is, once more, that the opposite is true.

Expand full comment

That's why it's called the bitter lesson. It's really fucking hard to swallow.

Expand full comment

I have no examples from ML so far, but the whole field is young and people still have more exciting stuff to do than saving cycles. Even so, the recent "Go experts learn new tricks from AlphaGo" news seems like a first hint that things are moving. A similar thing has been happening in theoretical CS for ages, where a randomized algorithm is discovered first and then derandomized.

Expand full comment

W/r/t the testing needing to provide "reasonable assurance", if that's like "reasonableness" in negligence law then I think it's impossible to develop AI in a way that isn't negligent.

Under the traditional Learned Hand formula, you are obligated to take a precaution if the burden of the precaution (B) is less than the probability of the accident it would prevent (P) multiplied by the magnitude of the loss resulting from the accident (L): B < P*L. Given that the "loss" threatened by advanced AI is the complete and total destruction of all value on earth, the right side of the equation is effectively infinite, and reasonableness requires spending an unlimited amount on precautions and taking every single one within your power. Even if we just cap L at $100T, the estimated value of the global economy, a p(doom) of even 10% would mean that any precaution up to $10T was justified. Now, presumably smaller-scale cyberattacks and things of that nature are what would actually happen and prompt a negligence suit (if DOOM happens, nobody's around to sue), so this isn't going to come up in court this way, but as a way to think about legal "reasonableness," that's what seems to be required.
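A toy restatement of that arithmetic in code (the probability and loss figures are the hypothetical ones from the paragraph above, not estimates of anything):

def precaution_required(burden, p_accident, loss):
    # Learned Hand formula: the precaution is "reasonable" to demand if B < P * L.
    return burden < p_accident * loss

global_economy = 100e12  # the $100T cap on L used above
p_doom = 0.10            # the assumed 10% probability of the catastrophic outcome

max_justified_burden = p_doom * global_economy
print(f"Any precaution cheaper than ${max_justified_burden:.0e} is 'reasonable'")  # $1e+13, i.e. $10T

print(precaution_required(burden=1e9, p_accident=p_doom, loss=global_economy))  # True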

Expand full comment

This gets worse once you take into account the probability of AI _saving_ us from destruction (see Meditations on Moloch for a general argument for this happening). In other words, Pascal’s mugging keeps being wrong.

Expand full comment

And people keep pretending the AI situation is a Pascal's mugging by asserting that if L is large, then P must be small.

No evidence that P is small is presented.

Expand full comment

Well you see, if your term of derision is used as a witty ender by a bad faith actor it's automatically correct.

I call this rule Rascal's Smugging and it has never served me wrong.

Expand full comment

[re-written to lower snark]

My point did not at all depend on p being small. At all. Donald’s response was both factually wrong (in claiming no evidence for small p is ever presented) and irrelevant.

Are you sure sarcasm and derision are how you want to make unsubstantiated points about snark and derision? Who’s being bad faith here?

Expand full comment

To reply seriously: "bad faith" is used mostly to foreshadow the "rascal" part of the awful rhyme; I don't think you're arguing in bad faith, just that you're ignorant. And yes, that is exactly how I want to make my point. If you don't think this was nice and correct, maybe cut out the "attribute a position to someone else" part and put more object-level points in your comment.

If you had laid out why you believe others believe in Pascal's-mugging-esque arguments, if you had spelled out explicitly what you think refutes the mugging rather than invoking "Meditations on Moloch" as a slogan, or if your first sentence weren't literally the first thing basically every AI-risk person starts out believing, my comment would look increasingly foolish. You had many outs to make a substantial and good comment and to address the arguments presented by the other side, and you instead chose to make a short and pithy one. Pointing out that the endpoint of this is rude and unpleasant is correct.

> My point did not at all depend on p being small.

It doesn't look like it generalizes to a p(doom) of 90%, not without significantly more argument. And it seems to rely on a symmetry born more of ignorance of the doom position than of knowing about orthogonality, instrumental convergence, or capability gains generalizing far better than alignment, all of which directly answer "why is doom being considered over other possible world states?" So far I still stand by this belief, even given your better comments elsewhere in this thread.

Edit: I read the original version of your comment, and it seems to independently verify that you don't know what the other side's position is, and that you have a corpus of pre-existing work that I expect to be similarly low quality. I'd address your original points if they were linked to and not merely hinted at.

Expand full comment

You said

> In other words, Pascal’s mugging keeps being wrong.

Pascal's mugging is a small-p phenomenon. If p were large, it wouldn't be a Pascal's mugging.

> (in claiming no evidence for small p is ever presented)

Ok. At least occasionally people present something they claim as evidence for p being small. Usually they seem to just assert it. I haven't seen good evidence yet.

Expand full comment

Did you genuinely not see that my argument does not require p to be small? Or do you truly not know this counterargument to Pascal’s wager, that it doesn’t take into account alternatives (other gods, in the original form)? Or are you simply not engaging with the post, treating it as a prompt to talk about AI risk regardless of whatever was actually said?

Relatedly, are you going to insist that I never presented you with any evidence for my views on p? (Not that it necessarily convinced you, but you claim such evidence was never presented. That's false.)

Expand full comment

" Why are you playing Russian roulette. Isn't that stupid? What if the bullet kills you? "

"Pascal's mugging. There's also a chance the bullet chops out a lethal brain tumour. "

You can't take arbitrary risky actions and use "pascal's mugging" as a magic wand.

I agree that there are also some scenarios in which AI saves us. We need to weigh these scenarios up and decide on an AI policy based on both of them.

Yes you need to take into account alternatives. But if those alternatives are significantly less likely, it doesn't change the picture much.

Expand full comment

In the context of negligence law, the potential benefits of the activity are not an offset to the risk of loss when determining the reasonableness of a precaution. Every service or product is designed to have a positive value. A high speed train is of more utility than a rickshaw if you need to get from Moline to Chicago, but you still evaluate the reasonableness of a precaution based on B<P*L.

What you're suggesting is that the equation ought to be B < (P1*L) - (P2*G), where G is the potential GAIN from not taking the precaution. So if there were a 10% chance of a $10M loss from not taking precaution B, under the traditional calculus it's negligent/unreasonable not to do B if B costs below $1M. But in your conception, if there were ALSO a 5% chance that failing to put precaution B into place resulted in a $10M windfall, now you'd only be reasonably required to spend up to $500,000 on such a precaution. In practice, some companies do make decisions this way, because this is secretly the real calculation -- that's why imposing strict liability for products doesn't actually change behavior, because if you're risk-neutral then it is rational not to spend any more on precautions than the math tells you, so you just suck it up and pay out a few claims. But this framework completely falls apart as the potential windfall increases in probability and value, and would eventually zero out the right side entirely and make the "reasonable" choice not to take ANY precautions, which simply can't possibly be the case when the L is as extreme as "all value on earth is obliterated".
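Putting those two decision rules side by side with the hypothetical $10M figures from the paragraph above:

# Hypothetical figures from the example above.
p_loss, loss = 0.10, 10e6  # 10% chance of a $10M loss without precaution B
p_gain, gain = 0.05, 10e6  # 5% chance of a $10M windfall if B is skipped

hand_threshold = p_loss * loss                     # traditional rule: $1,000,000
offset_threshold = p_loss * loss - p_gain * gain   # gain-offset rule: $500,000

print(f"Traditional rule: B is required whenever it costs under ${hand_threshold:,.0f}")
print(f"Gain-offset rule: B is required whenever it costs under ${offset_threshold:,.0f}")

# As p_gain * gain grows, the offset threshold shrinks toward zero,
# at which point no precaution at all counts as "reasonable."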

I'm not enough of an AI expert to assign my own p(doom), but from observing the e/acc crowd and other accelerationists, I think where they're mistaken is that they are overvaluing the "good" outcome. They could be right about p(doom) being much smaller than many AI safetyists argue, but still be wrong about the range of outcomes of AGI, because they think the world it creates in the "best" scenarios is awesome. I think "fully automated luxury space communism" is horrible and dehumanizing and not something we should be striving for, so when they toss around the possibility of post-scarcity worlds run by AI as a positive offsetting the risk of doom, they're seeing something as a G that I see as an L. The only AI outcomes with positive value to me are those which maintain human supremacy over the earth and a society that you and I would still recognize as human, which pretty much only happens if it turns out there's a ceiling to AI capabilities and no AGI takeoff, either fast or slow, is possible at all.

Expand full comment

This certainly gets at my point better than other comments in this thread… still, a few responses:

(1) “What I’m suggesting” is plain old expected-value-based cost-benefit analysis. Not just something a few companies are doing in secret but, like, as ubiquitous as it gets.

(2) Attempting to negate such considerations by gesturing at enormous outcomes _is_ Pascalian reasoning and has the usual Pascalian weaknesses (regardless of the value of p, as long as it’s not beyond 0.5). Choosing to believe in some god in the original Pascal’s wager has a weakness in that it ignores the possibility of other gods, every bit as vengeful. Similarly, worrying about AI destroying all value on earth ignores the possibility of AI _saving_ all value on earth. For an argument for why you might consider that possibility, see none other than Scott in Meditations on Moloch. Now your p has to compete with the p2 of the latter scenario. Since for me p2 is not at all obviously smaller than p, you have to pay non-zero attention to what’s happening outside the tails. And what’s happening in the non-extreme scenarios is to me overwhelmingly pro-AI. To give one example, we haven’t found any new classes of antibiotics or good enough replacements since 1987. I don’t want my kids to grow up in a post-antibiotics world. I hope AI will be one path to a solution.

(3) I haven’t made this argument above, but in the context of this bill and discussion, we need to consider the probability of each given measure actually helping. Not at all clear which measures are helpful for AI. So your p is now a smaller p’, taking into account the probability of a given measure harming you. OpenAI comes to mind.

Expand full comment

(1) presumes a lot of things. Are you risk-neutral, able to insure against the loss, morally OK with responsibility if the loss happens? If the loss happens, do society’s existing legal mechanisms provide a way to make people whole? As I pointed out, while legal liability doesn’t actually create any additional mathematical incentive for precaution in strict-liability over simple-negligence regimes, there still exist political, social, and reputational incentives to take precautions that a simple cost/benefit wouldn’t advise. But all of these controls are weakened for AI, because in a large number of potential outcomes the legal, political, and social structure is either destroyed or has no practical response. Even many non-doom outcomes still leave society in total upheaval with no available remedies for those who lose out.

My problem with (2) is that I don’t consider your p2 to be a good outcome. It’s just another type of bad outcome. Suppose I make a medicine that, by removing some mediating process or substance, becomes generally more effective but also p1) has an extreme tail chance of killing children and p2) an extreme tail chance of instantly curing leukemia, do I go for it? With AI, the p2 is more like “it cures cancer by putting kids into permanent comas”, the beneficial fringe outcomes of AI all accomplish marvelous things by destroying human society. The only good outcome is one where society remains largely the same but with somewhat healthier and more productive people, and I don’t consider that anywhere close to the median outcome. Getting new antibiotics at the cost of having tech that requires oppressive AI-enabled surveillance states to prevent it from creating endless plagues? Count me out.

As far as the specific bill goes, I will take anything that is a step towards preventing AI development. I want a full stop, permanent ban, burn anyone at the stake who ever suggests building this. So any roadblock is a good roadblock to me, so long as it doesn’t lull people into thinking they’re safe when they aren’t.

Expand full comment

Those all are good points. But I fear you still don’t fully engage with the downsides of the lack of AI. They may not at all look like anything you would accept. Again, Meditations on Moloch.

So, in short, for your specific values, it may be that there is no good outcome. As for me, having a potential relief for suffering and dissuading ourselves from using it for borderline-metaphysical reasons is the thing that’s unacceptable. Suffering sucks a great deal.

Expand full comment

The "ASI saving us from moloch" scenario is hopefully real and significant.

But it's not urgent. And it's not the only thing that can save us.

I think most good long term futures involve ASI eventually. And that an ASI created with current levels of skill would be very bad.

Thus we should delay AI. Put it off until we figure out alignment. And then use that ASI to kill Moloch. Although we can probably do quite a lot to kill Moloch as just humans.

Near-future AI is going to end badly. So let's ban it until either we figure out alignment or we have a better plan.

Expand full comment

If this is the case, isn't this the law functioning as intended?

Expand full comment

You claim “this wouldn’t ban open source” and then reference a footnote which essentially says “we talked privately to the senator’s office and they said they didn’t intend to ban open source”. Just make it clear in the bill that it does not regulate open source software. None of this unclear wording that will be interpreted however some future regulator wants.

Expand full comment

I think the language in the bill suggests that. I agree it should be made clearer.

Expand full comment

Note that you suggest two different things there. The bill intends to regulate AI software, both open and closed. Although I think it is already clear, I agree that a line to further clarify that this is not a ban on covered open models would be useful, if only to calm everyone down.

Expand full comment

I'm trying to talk about just one thing: a ban on open source software. A regulation of open source software is a ban on the open source software that doesn't obey the regulation, which will surely exist because this is just a California regulation.

See, it's still confusing after all this discussion. I feel like Scott responded saying "this isn't a ban of open source" and you responded saying "this IS a ban of open source".

Does this bill intend to regulate open source software in California, or not? How would that work - there's a tier of open source models which are legal outside of California, but not legal inside of California? Who is punished - users of the banned open source software, distributors, developers?

To me that seems like a bad idea. California should not be regulating open source software, but if there has to be regulation on it, it should not be thrown in as a vaguely worded point 30 as an afterthought in a bill that was mostly designed for large tech companies.

Expand full comment

"You claim “this wouldn’t ban open source” and then reference a footnote which essentially says “we talked privately to the senator’s office and they said they didn’t intend to ban open source”. Just make it clear in the bill that it does not regulate open source software. None of this unclear wording that will be interpreted however some future regulator wants."

As in: "does not regulate."

You are equating any regulation of open models to a ban.

Expand full comment

What happens to open models that don't comply with the regulation? Are they not banned?

I also wouldn't be mollified by earnest assurances from senators that they totally don't mean to do X if it's not clear in the bill. Even if they're sincere, that doesn't mean future bad actors will care about these assurances.

Expand full comment

Under my reading of the bill, if you train a *covered* *non-derivative* model *in the state of California* without meeting the bill's requirements, then you have broken the law and you are subject to punishment.

There is nothing in this bill making it illegal to *use* an open model under any circumstances, unless I am very much missing something.

The idea that 'regulate X in any way' -> 'ban X' because they might not comply is stating that you should be exempt from all rules whatsoever. I mean, ok, that's a position, but please speak directly into this microphone.

Expand full comment

The bit about "this has to be a single event" doesn't seem right; the bill specifies "a single incident or multiple related incidents".

Expand full comment

I suspect this is down to something like: hacking into and shutting off safety controls for a bunch of different power plants is a single *event*, consisting of *multiple related incidents*. Could still be worth teasing out exactly what the language of the bill would cover.

Expand full comment

I'm actually really curious what the State Compute Cluster is supposed to be. Is it just grift? Like, the state has to spend a bunch of money on compute even if it's not going to do anything useful with it? Or is it a precursor to requiring that advanced AI be trained on the government's compute, or something like that?

Expand full comment

My guess is the intended beneficiaries are academics in the UC system, but someone said (I haven't confirmed) that it isn't funded in the bill and so will probably never happen.

Expand full comment

Initial thoughts:

Why not make a bill that is more bottom-up, such as creating extra whistleblower protections, especially for people who can point out provable danger at the damage levels suggested by this bill? Such a bill would have broader implications than this AI bill. Isn't the problem with Scott's AI predictions that if there's no change in behavior, then this bill is useless?

A bigger worry, and probably an Eliezer-type worry, is that a bill like this brings us closer to the "teaching to the test" path of self-annihilation, where we create metrics of false closure. The goal becomes making an AI smart enough to fly under the radar. Maybe we're safer without radar and should keep ourselves vulnerable to an actual damage gradient priced by the free market.

Expand full comment

There actually is whistleblower protection, section 22607! I didn't mention it because it didn't seem relevant to me, but I don't know much about law so maybe it matters more than I thought.

This might not be convincing to anyone else, but I trust Paul Christiano (currently head of AI safety at NIST's AI Safety Institute, and likely to be the person designing the tests) and I think he's very aware of this concern and has actually good security mindset. He is one of the only people in the world who Eliezer thinks is just mostly stupid and doomed, as opposed to completely hopelessly stupid and doomed.

Expand full comment

At the risk of sounding like a warmonger, one objection I haven't seen in these critiques is that our (U.S.) global adversaries are most likely not going to go along with any restrictions that we happen to put on ourselves, which might come back and bite us in the future in ways that could also be existential. Risks all around.

Expand full comment

We have no serious AI (or military) competitors except China, and China puts way more restrictions on their AIs than this.

I also don't think it will seriously slow down US AI progress.

Also, I suppose if the military wants AIs, they're already ordering the special version that *does* design nukes and commit $500 million cyber-attacks on critical infrastructure.

Expand full comment

Doesn't that sort of presuppose that serious competitors would be open about their efforts? I'd be shocked if Russia and North Korea didn't have some pretty serious under-the-radar efforts going on, and I'd be even more shocked if the restrictions that China has are actually imposed on their own military efforts.

I'm not nearly as concerned with nukes and cyberattacks as I am with raising the capabilities of seemingly conventional forms of warfare. Consider fighter jets. At present, they are more limited by the need to keep the human pilot alive than by any technical limitations. Take the human out of the equation and you've got a much more capable machine. Add to that swarms of miniature autonomous drones. The next war is going to be an interesting affair that I hope I don't have to witness.

Expand full comment

I don't see how this law is relevant to US military capabilities. I don't think any California law will stop the US military from doing what it wants. While tech companies like to position themselves as the only people that can make progress on AI, the truth is that the military can hire smart people and throw money at computers just as well as Google can.

Expand full comment

It seems pretty obvious that having Google, OpenAI, Meta, and the US military researching AI is going to lead to much more rapid progress than just the US military doing it.

Expand full comment

North Korea is not any kind of threat in AI development; not only do they lack computational capacity, their national energy production is just not enough to train a frontier model.

(Of course, if Meta et al. keep releasing the weights of their models, NK doesn't need to do any training of their own beyond some fine tuning.)

Edit: To clarify, I mean the ability to train a model at a level that would meet SB 1047's threshold. I believe North Korea might be able to train a medium-tier LLM chatbot or machine vision system if they really wanted to.

Expand full comment

It seems that the bottleneck for frontier AI development (today) is still the people.

There are like 100 people in the world capable of actually making progress on AI. They are all either working at one of the major US labs or breaking away to create their own companies, with basically a queue of investors begging them to take billions in funding.

China/Russia/Iran creating dangerous AI is basically a non-concern.

Expand full comment

> that serious competitors would be open about their efforts?

Anyone competent enough to do this would likely be competent in general.

Russia is really short of Western chips for its missiles. Anyone with two brain cells to rub together has fled to avoid being cannon fodder in Ukraine.

Putin isn't some 4D chess master.

Russia isn't secretly hyper-competent (probably).

The "what if fetal alcohol syndrome was a country?" Russia isn't making anything high tech.

Expand full comment

Russia and North Korea cannot afford to train GPT-6 in secret. Russia *might* be able to swing GPT-5, and I hope they try because GPT-5 is not going to figure out how to conquer the world and the effort would sap the resources Russia needs to rebuild their maybe-we-can-at-least-conquer-our-neighbors conventional army.

China could probably afford to train GPT-6, but it would be a stretch, and they really don't seem to be stretching to an alarming degree. We'll keep watching, of course.

China, Russia, or North Korea absolutely and easily can deploy the kind of spies and hackers who can snag any sort of GPT out from under any company running under traditional Silicon Valley "security" rules. And brick the whole system on the way out so we can't catch up. So a law that says yes, you can train GPT-5 or even -6, but you have to do it in a SCIF, is a net win against the threat of some enemy nation showing up with an advanced AI in the foreseeable future.

Also, since you managed to nerd-snipe me on this, the bit about fighter jets is just plain wrong. The few things a fighter jet could do if not for the squishy human bits are of very little tactical value in modern air combat. Dogfighting went out of style among sensible fighter pilots in 1940. The things human judgement and situational awareness can do that no present AI can do are of great tactical value. And no, you can't count on doing that remotely if you're fighting Russia or China.

Expand full comment

Second-hand nerd-sniped: doesn’t a lot of the cost of jets have to do with the squishy human inside, as well as the elevated desire to avoid risk? That was certainly my impression when I was actually personally involved in a tangentially related project.

If we compare a single human-flown F35 to a cost-equivalent army of drones (that we can afford to lose some of), does the human element win out?

Expand full comment

The weight of the crew-accommodation system is typically much less than the weight of all the combat systems, and the expense of those combat systems means you're going to put almost as much effort into getting them back in one piece as you would the pilots. So it's not as much of a difference as you might think on the aircraft side.

Mostly, when people propose drones to complement (but not replace) next-generation fighter aircraft, they're proposing drones with substantially reduced combat systems and thinking they can still be a useful part of the team. Which is legitimate, but you need the team. Among other things, the radar range equation gives the advantage (all else being equal) to the larger and more expensive vehicle. An F-22 sized aircraft can detect a comparable-technology F-5E sized aircraft at ~35% greater range than the Tiger could detect the Raptor.

So, given a manned F-22, a couple of relatively cheap drones would add a bit of capability that can be guided by the radar and the skilled judgement of the F-22, but if all you have are the drones then they'll probably just get shot out of the sky without ever knowing what hit them. And an F-22 sized drone is just as expensive as an F-22, but trades the skill and judgement of the human pilot for maybe two extra AMRAAMs.
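If anyone is curious where a figure like "~35% greater range" could come from, here is a toy sketch of the radar range equation's fourth-root scaling. The power-aperture and radar-cross-section ratios below are made-up illustrative assumptions, not data for any real aircraft:

def detection_range_ratio(power_aperture_ratio, rcs_ratio):
    # Radar range equation: detection range scales as the fourth root of
    # (transmit power * antenna gain * aperture * target RCS).
    # power_aperture_ratio: big jet's radar power-aperture product / small jet's
    # rcs_ratio: small jet's radar cross-section / big jet's radar cross-section
    return (power_aperture_ratio * rcs_ratio) ** 0.25

# Assume the bigger jet carries ~10x the radar power-aperture product
# while the smaller jet presents ~1/3 the radar cross-section.
ratio = detection_range_ratio(power_aperture_ratio=10, rcs_ratio=1/3)
print(f"Bigger jet's detection-range advantage: ~{(ratio - 1) * 100:.0f}%")  # ~35% with these assumptions

The fourth root is the key design point: even a 10x advantage in radar power and aperture only buys a modest range edge, but it is an edge that consistently favors the larger, more expensive aircraft.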

Expand full comment

Interesting.

In Israel, pilots are under strict instructions to prioritize saving themselves over attempting a dangerous rescue for the jet. This suggests that getting the person back _is_ more valuable than the vehicle. But I don’t know how “dangerous” and hopeless the state of the plane would have to be. Also, this may be an artifact of Israel’s limited manpower.

I’m still a bit confused - all the systems for making sure information flows correctly to the pilot (e.g. the complicated calibrations for matching displays to the pilot’s turning head), the support for high g, the extra space and weight imbalance due to the cockpit, the HMD… all that is negligible?

What are the main advantages a human’s judgement confers? Let’s assume we’re talking about a battle over Taiwan, with humane considerations mostly irrelevant.

What of the cost of human error, particularly in complex combat situations? Pilots are awesome but not infinitely awesome. They have finite attention spans that are affected by overload, fatigue, g, hypoxia and emotions. Wouldn’t AI systems (perhaps in a few years) compensate for their inferior judgement by being free of all those human weaknesses?

Expand full comment

Gwern has been asking believers in Chinese AI supremacy for any paper that isn't trivial or a reimplementation, and apparently hasn't gotten any. This isn't just an open call; he has been replying to believers with this challenge for a while now. So far, nothing.

Expand full comment

Is your opinion that e.g. the Chinese military would be unable to make serious AI advances in private?

Expand full comment

Not OP, but yes definitely.

I can imagine the West open-sourcing research and advances, and China spending the money to train a model based on that.

But I doubt they'd come up with the advances themselves.

Expand full comment

Yes, +2 to this and endorsed.

I'll add that I think it's lazy to have an opinion on this without knowing some predictors, like whether important Chinese AI scientists have stopped or slowed publication, what volume of chips, if any, has been irregularly purchased, or whether there are improvements in censorship or other capabilities downstream of LLMs.

These are ML analogues of figuring out nuclear weapon research progress, and I don't think they are unreasonable things to know offhand if you are interested in capabilities instead of "Boo China".

Expand full comment

Interesting -- do you have a link to these conversations by any chance?

Expand full comment

To be clear, I am taking gwern at his word that he hasn't gotten a satisfactory paper; this is not a first-hand account from obsessively following him.

And I do not have access at this moment, but you won't get to see the threads if you weren't already following gwern, since he has made his Twitter private. I'll make sure to follow up when I do.

Expand full comment

No worries, I was just curious. I have complicated feelings about how much I trust that guy anyway.

Expand full comment

What reason do you have to distrust him?

Expand full comment

Sorry for the delay.

https://twitter.com/gwern/status/1385987818418212864

To quote several tweets summarizing his position:

(Incidentally, not a single person even tried to rise to my challenge to name 3 Chinese DL research results from the past few years on par with just OpenAI's output over the past few months like CLIP/DALL-E.

No, sorry, 'rotary embedding' doesn't count.)

Jan 2023: in the past year we've seen in the West Chinchilla, Dramatron, Gato, DALL-E 2, Flan/U-PaLM, Stable Diffusion, Whisper, CICERO/DeepNash, Imagen Video/Phenaki, ChatGPT etc etc.

Can you name even 3 Chinese AI results as important?

(Besides GLM, which everyone says sucks.)

Jan 2024: in the past year in the West, we've seen GPT-4/-V/Gemini, MJv6/DALL-E-3, GAN revival...

I continue to hear Chinese DL will overtake any day now.

What 3 Chinese AI results are as important (and aren't just, like Bytedance, based on massive copying of OA API outputs)?

One consequence of pretending these falsified predictions about Chinese DL never happened is undertheorizing of *why*.

eg. *every* China bull case claimed that China bigtech's DL would be supercharged by privacy-invasion/huge private datasets. But.. that didn't happen? at all?

Expand full comment

Looks like he deleted it (I now have some vague recollection that he regularly scrubs his tweets?) but thanks for posting this!

Expand full comment

His account is private, so every tweet link visited by a non-follower looks deleted.

Expand full comment

They won't comply with OUR restrictions, but surely you don't imagine that China -- a corporatist state with an authoritarian central government -- is going to let their tech companies do whatever they want developing AI? I find this whole line of reasoning bizarre: we can't hamstring ourselves because our enemies may rush ahead, they say, but our actual enemy has a tightly-controlled tech sector that reports to a dictator who puts down all threats to his power. Any sane government that has tech companies capable of this, and the power and labor infrastructure to allow such research, will be restricting and monitoring those companies. Not to do so risks waking up one day and finding out that for all intents and purposes some tech bro is now the unassailable ruler of your country. As this stuff advances, every government will have plans to monitor AI research and to quickly send in the stormtroopers to take over the facility and seize the controls before some nerd becomes King of California.

Expand full comment

A common opinion is that the endpoint of AI development is a singleton AI that can and probably will prevent competing AIs from arising. Some even describe this as the only hope to prevent AI apocalypse—develop an aligned singleton AI that can protect you from the unaligned ones.

If the Chinese government places any credence in this scenario, they would certainly see that an American-aligned singleton would ensure that China stays second-tier forever.

Perhaps they don’t, but it doesn’t strike me as impossible.

Expand full comment

I'm not sure why an AI would be aligned to national interests, and nation-states as we think of them may not even survive the transition. But more importantly, it threatens many individuals' hold on power. Is Chairman Xi gonna allow some company there to race to AI just to make sure it reflects Chinese values, knowing that the creation of the AI will nevertheless threaten his dominance? I don't think so. Perhaps the previous regime, with its views of Han supremacy, might have found this attractive. But Xi, having broken the tradition and remained in power beyond 10 years, doesn't seem likely to put such things ahead of his own control, at least on a surface-level analysis. Maybe a China expert could persuade me otherwise.

Now, you're right they might see it that way if we were rushing headlong into our own, trying to cement American hegemony, but if we don't, then why would they? Unless your enemy is actually threatening this, better to play it safe than to unleash a powerful force you may not be able to control and that you can't be sure would advance your objectives anyhow.

Expand full comment

I agree. But does Xi? He might imagine he would have *some* degree of control over the values of a Chinese-originated singleton, but knows for sure he’ll have no say in the values of one that originates in America.

Expand full comment

The canonical example, for me, of a reasonable law that was stretched into insanity is NEPA, and by extension, CEQA and related environmental review laws.

Originally it required the government to do a short report for major projects, but lawsuit after lawsuit, and sympathetic judges, have turned it into the foundation of our environmental law system, where the focus is on making sure that you filled out your paperwork correctly, not that the project is good for the environment, and the main technique is endless delay.

https://www.nytimes.com/2024/04/16/opinion/ezra-klein-podcast-jerusalem-demsas.html

Expand full comment

Good summary, but it feels like you haven't dealt with the strongest objections to the bill.

1) The derivative model clause is way too vague and overreaches. (I think Zvi made this point too, but is this actually getting corrected?) It isn't enough for the developer to determine their model isn't hazardous; they are also liable for any post-training modifications. This is a weird clause and isn't how open source software works, and it could criminalize the development of even innocuous open source models that are later modified for malicious uses. E.g., see this example from Quintin Pope about an email-writing open source model that ends up getting used to write phishing emails: https://x.com/QuintinPope5/status/1784999965514981810

2) Your piece on IRBs (https://www.astralcodexten.com/p/book-review-from-oversight-to-overkill) talks about how regulations often start out well-intentioned and reasonable, and end up mutating until they're strangling entire sections of the economy. This pattern is so common it's practically the default, e.g. NEPA/environmental regulations have made it so it's hard to build anything large-scale, IRBs/HIPAA end up restricting biomedical research costing lives, the FDA ends up being too risk-averse, etc. The worry is that this legislation sets the ground for a gradual ratcheting up of control and regulation over the one section of our economy that's truly dynamic and functioning well, i.e. tech and AI, and this is how it starts.

You could reply that this is a Fully General Objection to any legislation at all. The crux of that is...

3) It's premature. We haven't seen harms from LLMs. Yes there should be some monitoring on models above a compute limit and coordination among entities building large models; but do we really need this bill to do that?

(I think this particular point goes deep into other issues (timelines to superintelligence, capabilities of superintelligence, etc.) that are going to be too involved to resolve here.)

Expand full comment

1. Yeah, somewhat agreed. My impression is that the law is trying to say that if your AI is too stupid to cause harm, it's fine to release, and if it's smart enough to cause harm and not well-trained enough to be safe, then it's (correctly) hard to release. There's a weird degenerate case where you release a small dumb foundation model, and instead of training their own foundation model someone else massively expands it. I think this is a weird edge case, but I agree it might be worth adding something in about it (and have left a comment on the bill saying so).

2. I accept this as the biggest risk. Against this, I think it's unlikely we get powerful AI with no regulation at all. If not this bill, it will be one that's much less careful to specify that it only means really big harms.

3. I want to avoid the kind of situation we ended up in where people were saying "Why prepare for COVID? It's not even in America yet" and then "Why prepare for COVID? It's only a handful of cases in America so far" and so on until it was a pandemic and we were totally unprepared. At some point AI will be smart enough to design super-Ebola. If we wait until someone does this before responding, then someone will have designed super-Ebola. I don't really know what this gets us. If everyone could agree on some intermediate case ("once it designs a moderately interesting bacterial genome") then I'd grumblingly be willing to wait until we did that. But realistically at that point people will just say "Why act now? All it's done is create a moderately interesting bacterial genome, there's no real harm!" If I trusted the world to go slowly, with five years in between AI generations, maybe things would still go okay. But I don't trust AI progress to move slower than political response ability.

Expand full comment

Re 3: The difference is that there are concrete, targeted things that the government can do about pandemics that are not infringements on traditional liberties. Would your objection be the same if the applicable anti-COVID policy was "let's prophylactically require everyone to wear masks"? If that were so, all of a sudden you would have people talking about cost/benefit, the uncertainty of the future harm versus the concrete current harms, the likelihood that masking would prevent a pandemic, etc.

Laws like this one have current costs in return for an unquantifiable and questionable future benefit against a strong AI. Further, you can't just handwave and assert the costs are "small," because the *whole point is to restrict the growth of AI to something that is "safe."* The future restriction of an entire area of technology is not a small cost. So either the costs are not small, or the legislation is not effective.

Expand full comment

You've defined the intended benefits of the bill as "costs" and then complained that the "costs" are high. That's just linguistic sleight-of-hand.

That's not like complaining that masking up has costs; it's like complaining that preventing the proliferation of a whole class of new species is not a small cost, and that therefore ANY measure that successfully prevents pandemics is "high-cost".

Expand full comment

>Would your objection be the same if the applicable anti-COVID policy was "let's prophylatically require everyone to wear masks"?

My preferred anti-COVID policy would start with "let's not have anyone doing gain-of-function research outside of a true BSL-4 facility", and maybe something about cleaning up the wet markets. That doesn't seem too intrusive, particularly for the general public, and it's much more in line with what this bill is proposing than the universal-masking strawman.

Expand full comment

I don't really understand your "growth of AI" objection. If AI can't make super-Ebola, it won't restrict the growth. If it can make super-Ebola, do you really want it available unrestricted to everyone without the company addressing that capability in any way?

Expand full comment

I'm not sure I agree with this framing of dumb vs. smart foundation models. First, there are and will be plenty of very powerful models that aren't foundation models but can do some narrow tasks extremely well, better than any human; they can potentially do really dangerous things without general skills or knowledge. Second, it's not hard to imagine an LLM to which we carefully avoid feeding any info on nuclear technology or weapons, for example. Is it going to be worse at language generally for having missed out on this training data? No, it can still be extremely powerful and "smart" per your terminology. An LLM like that can be safe as-is, but become dangerous if a downstream user then feeds it this nuclear info. So I don't think the issue of derivatives is nearly as much of an "edge case" as you make it sound.
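To make the "downstream user then feeds it this info" step concrete: continuing to train a released open-weights model on new text is a few lines of routine code. A minimal sketch, assuming the open-source transformers library, with a placeholder model name and placeholder data:

```python
# Minimal sketch: continued fine-tuning of a released open-weights model on
# new text the original developer never trained on. The model name and data
# here are placeholders; any causal LM checkpoint works the same way.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any released foundation model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

new_texts = ["domain-specific text the base model never saw ..."]
optimizer = AdamW(model.parameters(), lr=5e-5)

model.train()
for text in new_texts:
    batch = tokenizer(text, return_tensors="pt")
    # For causal-LM fine-tuning, the labels are just the input tokens.
    outputs = model(**batch, labels=batch["input_ids"])
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

None of this needs anything like the original training run's compute, which is why derivative models are hard to wave away.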

Expand full comment

On 1) Not only did I make this point, but I have confirmation from someone involved in the process that they are working on a fix for this issue.

(I don't have anything new to add on the others, I've discussed them previously.)

Expand full comment

I think that one problem here is that a lot of technologists are used to complying with something like PCI DSS (security rules for using credit cards on the internet), which lays out specific, testable, requirements in tech-language.

And what they're seeing is that the document says "prove that the AI won't be able to make nukes" and they're worried that this is a vague requirement that there's no clear way to test.

What they're missing is that even where PCI DSS compliance is a legal requirement, the actual law will say "you must comply with industry standards in the security of credit card transactions" and the bureaucracy and courts will interpret that as meaning "have a current PCI DSS certification and don't cheat to obtain it".

That is, there are a lot of techies looking at this who aren't used to looking at laws; they're expecting an implementable specification, they're not seeing one, and they're treating it like the sort of crap non-specification you get from non-technical clients who ask for a miracle, yesterday, for free. What they're missing is that there's going to be some sort of standards body drawing up an implementable, testable set of standards, and all the law will mean in practice is "abide by the standards".
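A toy illustration of that gap, with entirely hypothetical item names: the statute states a duty in plain language, and a standards body turns it into discrete pass/fail items an auditor can check.

```python
# Toy contrast between a statute's plain-language duty and the kind of
# discrete, auditable checklist a standards body would publish.
# Every item name below is hypothetical, not taken from any real standard.

LEGAL_TEXT = "Take reasonable care to prevent critical harms."

hypothetical_standard = {
    "red_team_report_filed": True,      # e.g. pre-deployment dangerous-capability evals
    "weights_access_controlled": True,  # e.g. documented access controls on model weights
    "shutdown_procedure_tested": False, # e.g. full-shutdown drill within the last year
}

def certified(checklist: dict[str, bool]) -> bool:
    """An auditor checks each concrete item; the law mostly asks whether you hold the certification."""
    return all(checklist.values())

print(certified(hypothetical_standard))  # False: one auditable item is unmet
```

The law's vagueness and the standard's testability live at different layers, which is roughly the PCI DSS situation described above.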

The alternative approach in drawing up this law would be to explicitly create a bureaucratic agency that draws up the standards, and that would be much more sclerotic than an industry body drawing up standards.

Compare this to the FDA: this is like a law that, instead of creating the FDA, said that you could sell any drug provided a body that was generally acceptable to doctors certified that the drug was safe and effective. There might end up being several such bodies, or just one, depending on whether it makes sense to compete. Courts would decide what constituted "generally acceptable to doctors", ie how widespread dissent with a certifying body would need to be before its certifications were invalidated.

Expand full comment

"Future AIs stronger than GPT-4 seem like the sorts of things which - like bad medicines or defective airplanes - could potentially cause damage."

This is an interesting comparison to make. We don't generally think it's bad for an airplane to fly too fast or a medicine to be too powerful, but you can imagine extremes of speed or power that would make an aircraft or drug unsafe. Is this a productive comparison?

Expand full comment

The FAA currently regulates all airplanes beyond a certain size (and probably there are other criteria too - the point is that one guy on a glider or ultralight is only lightly regulated, but a jumbo jet is heavily regulated). I think that's a good analogy for the current law.

Expand full comment

I think this analogy breaks down because large aircraft carrying many people have known, costly failure modes. There are no known costly failure modes for large AI models, only speculation that they might have such failure modes.

Expand full comment

It depends on where you draw the boundaries.

There has never been a plane flying that weighs 10000 tons. Is it speculation that if someone is on the cusp of flying one, we can predict it will cause tons of damage if it crashes in a populated area?
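The plane half of that really is just arithmetic. A rough sketch with illustrative numbers (nothing here is a measured figure):

```python
# Back-of-envelope: kinetic energy of a hypothetical 10,000-ton aircraft at a
# typical airliner cruise speed, expressed in tons of TNT. Illustrative only.
mass_kg = 10_000 * 1_000               # 10,000 metric tons
speed_m_s = 250                        # roughly airliner cruise speed
kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2
tnt_tons = kinetic_energy_j / 4.184e9  # ~4.184 GJ per ton of TNT
print(f"{kinetic_energy_j:.2e} J ~ {tnt_tons:.0f} tons of TNT")
# ~3.1e11 J, roughly 75 tons of TNT, before counting any fuel on board
```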

AI can write code. It's a bit shit now compared to a human, but it's there. If it gets significantly better it will be able to write exploits.

Security is basically a shit show in the wild. The second an AI can write code well enough to exploit known vulnerabilities, it will be like fire catching on a petrol barrel.

This is not speculation, it's just using basic inference.

Expand full comment

It is not comparable. Regulation of airliners is about avoiding engineering failure that causes harm. The plane was supposed to fly but the wings fell off. That doesn't mean you can't cause harm with a plane - see 9/11.

Regulation of AI, such as this, is not about trying to avoid engineering failure. It is about trying to prevent humans from using a general purpose tool in unapproved ways. If the tool writes capable code, that is a good thing. It is the human that directs it to create exploits (and uses them).

I'd also call attention to the language "it will be able to write exploits." That phrasing assigns agency to the AI. The AI can be prompted to generate code... but by whom? The AI does not generate code autonomously for its own purposes. It has no intent.

If a person wants code to exploit vulnerabilities, then that person can learn how to code, hire a person, purchase exploits... or eventually, use an AI. But it is the person that is culpable.

Expand full comment

> Regulation of airliners is about avoiding engineering failure that causes harm. That doesn't mean you can't cause harm with a plane - see 9/11.

After which they immediately went WOOPS, and added a bunch more regulation to stop it happening again.

> If a person wants code to exploit vulnerabilities, then that person can learn how to code, hire a person, purchase exploits... or eventually, use an AI. But it is the person that is culpable.

Why ban nukes? If a person wants to kill people, they could use a stick to club people to death, or use a nuke. But it's the person that's culpable.

Past a certain scale of how easy it is to cause massive damage, culpability (i.e. punishment) isn't enough. You need to stop random people from having anything TOO destructive. Nukes are far deadlier than sticks. We can live with one person in a million going mad and attacking someone with a stick. We can't live with one person in a million nuking a city.

Expand full comment

> After which they immediately went WOOPS, and added a bunch more regulation to stop it happening again.

If you want to regulate AI, pointing to the result of airline security theatre is maybe not your best play.

Expand full comment

Regulations are not there only for engineering failure?

You can't fly a plane without a license. You can't drive a car without a license.

Those are tools that can cause harm if misused, and even though the culpability of misuse is on the user it's still regulated.

You might disagree with regulation of tools in general. But I don't see how AI is in _any_ way special. Every society heavily regulates tools which cause harm!

Guns can kill people. This is not assigning agency to guns. It's just normal language. Yeah of course the shooter is culpable for killing someone. But basically all societies either ban or heavily regulate guns.

Expand full comment

Not especially relevant, but I believe there actually are rules against airplanes flying too fast, partly because sonic booms can cause damage and partly for military reasons (so there's time to confirm that they're not hostile before they've approached so close that you'd be in trouble if they were).

Expand full comment