470 Comments
scf0101's avatar

Can someone explain to me why “mass surveillance” is such a boogeyman? Do people really care that much about maintaining the 0.0001% probability they have of one day overthrowing the government? I’d personally rather take the massively decreased crime that comes with surveillance. Clearance rates in U.S. cities are abysmal. This country almost certainly has less than the optimal amount of surveillance. And for those that disagree - would you be in favor of the NYPD dismantling the camera system in NYC today?

Aaron Zinger's avatar

When it's impossible to overthrow the government, you get a worse government.

Scott Alexander's avatar

I don't think it's about violently overthrowing the government in particular. Here's some things that concern me:

- If someone criticizes the government, the government searches their email until it finds a reference to some very minor offense, like taking illegal drugs, then arrests them.

- The government searches an opposition candidate's browser history until it finds him watching weird porn, then publicizes this.

- The government uses AI to read everyone's texts, emails, and Internet comments, gives everyone a conservatism-vs-liberalism score, and only gives appointments/permits/etc to people on the "right" side.

- People who want to work with the government (eg police, office-holders, etc) have to watch what they say in private and can't even let off steam because that will make it to their superiors.

- Imagine you send an email to a friend saying something crude about women or LGBT people, and the government forwards it to your company's HR department so you get fired for being likely to create a hostile work environment or something.

All of these require the government to additionally do something bad with the information when they have it, but if you don't trust the government not to do bad things, preventing them from having the information is a good first defense.

Feral Finster's avatar

The government doesn't like the policies of a given politician and seeks to discredit him, and by extension, his policies.

The Ancient Geek's avatar

If you enter politics, you volunteer for that level of scrutiny.

meeeewith4es's avatar

Scrutiny currently tends to avoid what one did in their younger days, because we lived in a society with less of a paper trail, but that's changing. Why should a picture of you drinking with friends at 17 be used against you as a 50 year old politician? None of us are perfect-perfect, and the determining factor shouldn't be "how much was captured by the mass surveillance mechanism".

---

You could be doing fully legal activities that don't fit the moral image of a group of voters which could be used against you. I don't think it's appropriate if we go after the sex lives of politicians, for example.

---

Also, when we have models to generate realistic looking images, video and audio, which will likely only get worse, how can we assume that anything used against anyone is genuine?

This only gets worse with a surveillance arm attached to it, you cannot disprove you were at xyz because you were, but you cannot disprove the false statements about what happened in xyz because they know there's no witnesses.

Ponti Min's avatar

> None of us are perfect-perfect

And the few that are are probably psychologically unsuited to high office anyway.

Richard Gadsden's avatar

What constitutes entering politics?

Because, believe me, there isn't a Rubicon. Does registering to vote count? How about donating to a political campaign? How about a SuperPAC? How about /running/ a SuperPAC? Or what about a journalist, or a "journalist" who works for a partisan media organisation (an implicitly partisan one like Fox News, or an explicitly partisan one like Pod Save America)? What about a staff job for a politician? Or a civil service job that requires Senate confirmation?

If it's only candidates for political office, then you just get cleanskin candidates, with the real powers behind the scenes pulling their puppet strings.

Feral Finster's avatar

You volunteer to be blackmailed by the government, lest you advocate for entirely lawful policies that said government doesn't like?

Orwell wept.

The Ancient Geek's avatar

Scrutiny isn't just from the government. The government themselves complain about press scrutiny. Zero scrutiny is not optimal.

Feral Finster's avatar

That was non-responsive. Do you volunteer to be blackmailed by the government, lest you advocate for entirely lawful policies that said government doesn't like? Do you volunteer us?

Benjamin Scott's avatar

In other words, if you run for any office, it's fine that the feds could search every digital keystroke you have ever made (at the limit) for literally anything that could sink your chances of election?

This isn't possible today but in 2050 politicians will have lived their entire lives online and I don't want everything they have ever typed to be subject to that level of scrutiny.

Jimmy's avatar
Mar 2 · Edited

> In other words, if you run for any office, it's fine that the feds could search every digital keystroke you have ever made (at the limit) for literally anything that could sink your chances of election?

People will eventually learn that you shouldn't talk politics online.

Polytope's avatar

And I think a policy that naturally leads to the conclusion "You should never publicly or even semi-privately criticize the state" is actually quite bad

Victor's avatar

No, you don't. The Constitution was written to try to ensure that one party could not capture the government and use the power of the state to suppress other factions.

scf0101's avatar

I dunno, those don’t seem like terrible trade offs for getting rid of rampant disorder. The GDP increase from freeing wide swaths of urban areas from the grip of crime and allowing productive people to live there would be worth it in itself.

There are a billion people in China/Singapore/Gulf countries that would look at you sideways if you said “you won’t be surveilled anymore but crime goes up 5x and your streets are covered in graffiti, litter and everything in your shops is under lock and key”. Are they all wrong and brainwashed and westerners are all enlightened? That sounds like a religious argument to me.

FluffyBuffalo's avatar

You seem to assume that the people currently interested in mass surveillance would put it to use to "get rid of rampant disorder". Why would they?

scf0101's avatar

The places with mass surveillance have lower crime rates than similar cities/countries, all else equal.

Matt Gilliland's avatar

And you're basing that on what those places tell you about the crime rate, yes?

scf0101's avatar

I could walk anywhere in Shanghai or Singapore or Dubai at any time of day with absolutely zero fear of anything happening. Try the same in NYC or SF.

quiet_NaN's avatar

I think that this is because mass surveillance is correlated with authoritarianism, and authoritarianism is correlated with a better suppression of crime.

For example, if you shoot apparent criminals where they stand instead of going through a court process, you can certainly deter a lot of crime. The (main) problem is that the people doing the shooting cannot be trusted with that kind of power, and you end up with some shitty dictatorship most of the time, which is much worse than whatever crime level NYC has.

Performative Bafflement's avatar

> I think that this is because mass surveillance is correlated with authoritarianism, and authoritarianism is correlated with a better suppression of crime.

Except we've clearly been spying on every US citizen since 9/11, Snowden proved it, and we also have the most unusable downtowns of any country and highest crime in the developed world?

Viliam's avatar

> authoritarianism is correlated with a better suppression of crime

The Soviet Union left some crime unchecked on purpose. Crimes against the state were punished severely, crimes against your fellow citizens less so. (Self-defense against the criminals was also punished when deemed "inappropriate". E.g. when a thug attacks you with his fists, and you defend by hitting him with a stick, you will get in trouble. Even if you happen to be a 90-year-old grandma and your fists clearly wouldn't be adequate.)

The idea was that when people complain about the criminals, they are not complaining about the government, and they may even be asking for more control hoping that it would reduce the crime.

Generally, if you trust an authoritarian to fix a problem, the authoritarian has an incentive *not* to fix the problem, and instead ask for more power, under the pretext that more power is necessary to fix the problem. So the problem never gets fixed, but they keep you believing that we are "almost there".

Zanni's avatar

You clearly don't know much about Communist China. You can do ANYTHING there, if you bribe the right people. "Crime" as a concept is very different when bribes are involved.

Randall Randall's avatar

Jack Ma may want to suggest otherwise.

LightlySearedOnRealitysGrill's avatar

Would you be ok with putting cameras in your house? How else can we be sure you are not abusing your wife? If we put cameras in everyone's house, think about how much rates of domestic abuse will go down!

Jimmy's avatar
Mar 2 · Edited

Do you have any better solutions for completely eliminating domestic violence? A lot of people would be pretty happy with that outcome.

Michael Friedman's avatar

Because that is how you get public acceptance for mass surveillance.

John Schilling's avatar

Right, but if you want to *keep* public acceptance for mass surveillance, you have to keep the rampant disorder. And keep telling people how much worse the disorder would be without the mass surveillance.

Michael Friedman's avatar

If you look at the surveillance states (China, Singapore, UAE) they deliver on the order and in return they get acceptance by their citizens for the surveillance.

Even in China, where the government is broadly unpopular (I lived there for twenty years), there is general acceptance that execution is excellent and that the safety the government delivers is one of the positives.

Can you provide any examples of governments that maintain a surveillance state with the acquiescence of the governed but do not deliver order?

Lapsed Pacifist's avatar

We can get rid of rampant disorder by having beat cops walking around and by prosecuting minor crimes like intoxication and petty theft instead of letting them go. We don't actually need a panopticon.

scf0101's avatar

So would you be in favor of dismantling the camera system in NYC that the NYPD uses?

Taymon A. Beal's avatar

A person could coherently support a system of surveillance cameras in public spaces, while opposing allowing the government to look through people's private documents and communications and browsing histories at will. What benefits do you think the latter will have, and what evidence do you have to support this?

(I don't know enough about the NYC camera system in particular to have an opinion on it.)

Paul's avatar

Yes, destroy it.

theahura's avatar

Scott literally posted like three weeks ago about how there isn't significant crime or disorder. The old quote is "They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety." But this is worse, because you're not even getting extra safety! You're just giving up freedom for vibes!

scf0101's avatar

"Scott literally posted like three weeks ago about how there isn't significant crime or disorder"

No he didn't. He posted how it has decreased over time. It's still substantially higher than many cities around the world. Especially Asian and Gulf cities with mass surveillance.

theahura's avatar

Fine, I'll bite. There are countless cities that have lower crime than US cities but that do not have mass surveillance. Depending on your political persuasion, this is likely because of the massive amount of social welfare present in these societies. Given that neither of these has been tried in the States, why are you more interested in going the "we must be draconian authoritarians" route instead of the "we should have a good social safety net" route?

scf0101's avatar

Because those countries are growing at like 0.25% real growth a year. I'd rather not torpedo my country's future if the alternative is much easier.

Dustin's avatar

Is it decreasing because of increases in surveillance?

Wanda Tinasky's avatar

You're aware of the huge surveillance apparatus that former Soviet states had, right? They used that to actively suppress political opposition. I hear you about the tradeoff with crime - maybe there's some clean way to get the benefits while safeguarding against totalitarianism - but I'm willing to tolerate some crime in order to avoid Big Brother.

Always assume that any power you give to the government will eventually be used against you in the worst possible way.

scf0101's avatar

But you're just pointing to the worst possible example. That's not a good argument. Who has a better life: someone living in the South Bronx or the equivalent disposable income in Singapore?

Bugmaster's avatar

What leads you to believe that granting the current (or any) US administration unchecked power to arrest essentially whomever they want for whatever they want in large numbers would move our society closer to Singapore instead of ye olde USSR or North Korea?

scf0101's avatar

Because the missing piece of those countries was/is capitalism. And we are a capitalist country.

Wanda Tinasky's avatar

I think it's worthwhile pointing to the worst possible outcome, particularly if it's plausible. This is like saying that dying in a crash is the worst possible outcome of taking your hands off the wheel when driving on the freeway. It's a possibility that's very worth thinking about.

>Who has a better life: someone living in the South Bronx or the equivalent disposable income in Singapore?

I don't think Singapore achieves that by having an oppressive surveillance state, it does it by vigorously enforcing social and legal norms. I'm all for that. Singapore is also much smaller and more ethnically homogenous. That matters for outcomes like this. Most extreme poverty in the US is downstream of sub-80-IQ people interacting with a free society. Most crime is downstream of multiculturalism undercutting the public will to enforce norms. You don't need mass surveillance to undo that.

scf0101's avatar

Huh? Singapore is more multicultural than Great Britain or almost any other European country.

Also I'd rather pick multiculturalism + surveillance than homogeneity and no surveillance.

Michael Friedman's avatar

Singapore is a mix of Chinese, Malays, Indians, and whites. It is far less ethnically homogenous than the US.

That you would say this indicates to me that either you know nothing about Singapore or you are trying to say "Singapore does not have many black people" without actually saying that.

hongkonglover77's avatar

I don't know, who does? You should make a good argument for your opinion.

Alvin Csunderlik's avatar

I don't know if you're trolling, but if you're arguing in good faith: you could see how big of a surveillance apparatus (and thus, implicit intimidation) past & present authoritarian states have made use of, and how many democracies use it to that extent.

It's a good exercise to consider these base rates.

Matt A's avatar

Why do you conflate mass surveillance by the DOD with reduction in crime? That's so far beyond their remit.

AB's avatar
Mar 1 · Edited

The idea that petty crime enforcement in dense cities has anything to do with mass national data ingestion and “analysis” is hopelessly confused.

You also seem to be operating under strange and unfounded assumptions about how the competence, equity, accuracy, efficacy, and other characteristics of a mass surveillance apparatus will be. Picture the capricious impunity with which websites throw up captchas, then replace the captcha with frozen assets and expired ID. Then recall the automated “moderation” practices of the social / forum websites that exhibit strong political leanings one way or another, and replace their mutes and bans with gag orders and incarceration. Such is the marriage of “big tech” and “big government” you’re arguing for.

scf0101's avatar

There are ~20,000 NYPD security cameras in NYC. They are - today - only useful for after-the-fact investigations. And, due to the manual effort required, only for investigating serious crimes. Would you be in favor of using AI to 1. allow the NYPD to respond to crimes in action and 2. use the mass efficiencies that AI brings to go after petty crimes as well?

Because most of the people crying about mass surveillance would be opposed to #1 and definitely #2.

meeeewith4es's avatar

It would also be possible to use AI with this system to track where everyone goes, who they meet up with, etc. I wouldn't want to live in a world where what I do with my life gets put under a microscope.

Gary Mindlin Miguel's avatar

That doesn’t require the department of war to do anything with LLMs?

YesNoMaybe's avatar

My country has both less crime and less surveillance than the US, which suggests that mass surveillance is not a necessary ingredient for a significantly lower level of crime.

It's one thing to trade privacy for security, but if it then turns out that you could've had security without giving up privacy maybe you wish you'd not taken the trade after all.

Taymon A. Beal's avatar

This by itself is a weak argument. It's possible that surveillance reduces crime, but your country has less crime than the U.S. because of some other factor that the U.S. can't fix by passing laws.

YesNoMaybe's avatar

Fair point, this is why I used the word "suggests". It is not proof, and I don't want to pretend it is, but it is evidence in favor.

Edit: Upon rereading I am now confused. I was arguing that even if surveillance deters crime there's still other options. The possibility you mention, I agree it exists but I don't think it contradicts my point.

Taymon A. Beal's avatar

The problem is that "there's still other options" and "you could've had security without giving up privacy" imply that crime rates are being kept down in your country through some kind of policy decision that other countries could duplicate, but you haven't provided any evidence for that.

scf0101's avatar

This doesn't follow at all. There are many variables that go into level of crime. *All else equal* it is almost surely a fact that surveillance decreases crime. I always see leftists make the argument that length of prison time doesn't deter crime, probability of getting caught does (and then they do everything in their power to get in the way of things that would increase probability of getting caught).

Donald's avatar

> *All else equal* it is almost surely a fact that surveillance decreases crime.

I think it depends on the type of surveillance, and the type of crime.

And also, whether or not this diverts limited police/state resources.

We could imagine a situation where the police were too busy beating up anyone who sent an insulting text about the president to actually prevent any crime.

> I always see leftists make the argument that length of prison time doesn't deter crime, probability of getting caught does (and then they do everything in their power to get in the way of things that would increase probability of getting caught).

There is the argument that, if the police had sweeping powers to do whatever they felt like with impunity, then it would be easier to catch criminals.

The American police don't seem to be using the powers they do have with competence and efficiency. What fraction of existing petty crime and disorder is something where you really need to comb through private messages to catch the perpetrator? Probably not much.

Michael Friedman's avatar

No. But a combination of ubiquitous surveillance, facial / gait recognition, and real time AI analysis to direct police to the likely location of the perpetrator minutes after the crime would probably catch most perpetrators.

I do not think there is any question that a panopticon state can catch most criminals. The question is whether it is worth the cost to our freedoms.

YesNoMaybe's avatar

I am not arguing that surveillance does not deter crime, it definitely can. I want to argue that there's other, better, options which have worked in other places, and you'd have to argue why out of all the options that exist you'd want to combat crime using mass surveillance.

After all, shooting suspected perpetrators on the spot also deters crime, everything else equal. Yet despite that we're generally happy to choose any of the other options that also deter crime.

Jimmy's avatar

> Yet despite that we're generally happy to choose any of the other options that also deter crime.

Because we don't have the means of efficiently gathering evidence. If we had the means of performing analysis on which people were the most likely to be or become liabilities, and monitor their actions to see if they ever make preparations for such a crime... we could prevent crime before it even happens. Punishing a murderer doesn't bring the dead back to life, but this would prevent them from being lost in the first place.

Dave Rolsky's avatar

But there's also millions of people in _other_ Asian countries that enjoy very low crime rates without mass surveillance. Taiwan and Japan are two that come to mind. Taiwan's crime rate is mostly lower than that of Singapore or China. Many European countries, like Finland and Denmark, also have very low crime rates, at least compared to the US (which is an outlier among rich countries, and not in a good way).

So maybe there's another reason these countries are so safe and mass surveillance isn't the reason. You seem to just be cherry picking a few examples of places that have low crime and lots of surveillance.

scf0101's avatar

"Taiwan's crime rate is mostly lower than that of Singapore"

I haven't seen the latest stats but is that true? I always thought Taiwan was surprisingly criminal for an East Asian country.

Zanni's avatar

Japan sells children into debt slavery. Parents are expected to do public service to make up for their children's crimes (even if they've never actually met the children). This isn't criminal, in that it's not considered a problem.

Taiwan has extremely organized crime, and with organized crime goes "cyclical" crime waves (as bullets/knives are how they determine who is top dog, given lack of enforcement of contracts).

ragnarrahl's avatar

"Japan sells children into debt slavery."

Are you talking about the trafficking of foreign workers, which basically happens everywhere, or are you alleging that nenki boko is still around?

Also interested what this "public service to make up for their children's crimes" is.

Zanni's avatar

https://www.japanpowered.com/history/japan-sex-slavery

There's a real history of children being asked to take on debts of their parents...

https://en.wikipedia.org/wiki/Toyoko_kids

Orphanages are "not very good" and one might be abused there, as well:

https://www.hrw.org/report/2014/05/01/without-dreams/children-alternative-care-japan

Oh, that was the infamous "Learn English through Full Metal Jacket" manga, which actually made it onto some Japanese high school shelves (presumably greenlit by teachers who didn't notice the high amount of profanity in the manga).

Minus's avatar

And a huge problem in China & Singapore is that they heavily corral political influence! We are a democracy, wherein the whole design of the system is to try to allow people's beliefs to be represented and their rights to be upheld.

Beyond that, Singapore manages to avoid their single primary party being abused by having strong anti-corruption *which we do not have*. As well as Singapore's primary party simply being less aggressive about abusing its power in general.

You are doing a reduction of the argument as "oh we'll just get rid of rampant disorder" while ignoring everything else Scott said.

darwin's avatar

>I dunno, those don’t seem like terrible trade offs for getting rid of rampant disorder. The GDP increase from freeing wide swaths of urban areas from the grip of crime and allowing productive people to live there would be worth it in itself

Ok, sure, but... what the hell are you talking about?

When we say mass surveillance in this context, we're talking about the government taking and searching the content of social media, texts, phone calls, etc.

Teens don't usually write about all the shoplifting and graffiti they are doing in those contexts, and even if they did, a text mentioning it is probably not sufficient for a criminal conviction, and even if it were, the national defense agencies doing this surveillance are not interested in stopping teens doing graffiti and shoplifting unless they are also protesting against Israel or ICE.

People in tent cities and homeless encampments aren't posting about it on social media, and even if they were who cares, the police already know where they are and what they're doing, having evidence isn't the limiting factor here.

If instead of the thing we're actually talking about, you're imagining a million security cameras on every street: I still think that sucks, but sure, you could make the argument you're making about it.

But that's not what we're talking about.

scf0101's avatar

"When we say mass surveillance in this context, we're talking about the government taking and searching the content of social media, texts, phone calls, etc."

Are we? Says who? Because I see civil liberties groups arguing against normal CCTV coverage in cities all the time and calling it mass surveillance.

darwin's avatar

'In this context'

scf0101's avatar

Again, is it? Because no one has made that clear.

ragnarrahl's avatar

"Teens don't usually write about all the shoplifting and graffiti they are doing in those contexts,"

Ironically, they post Tiktok videos about such for all to see. No surveillance necessary, just watch the videos they post.

Timothy M.'s avatar

> There are a billion people in China/Singapore/Gulf countries that would look at you sideways if you said “you won’t be surveilled anymore but crime goes up 5x and your streets are covered in graffiti, litter and everything in your shops is under lock and key”. Are they all wrong and brainwashed and westerners are all enlightened? That sounds like a religious argument to me.

I think it's a little strange that you say "some countries do it totally differently than other countries", endorse one side of that divide, and then suggest that it's somehow unreasonable for someone to endorse the other side. Am I following that correctly?

scf0101's avatar

I've given my rationale though! That's the difference.

Timothy M.'s avatar

I don't quite see how. If I say "there are a billion people in the West who think it's crazy to accept massive surveillance in exchange for lower crime and less graffiti, are they all wrong and brainwashed" that seems like as much of a counterargument.

The only other thing you said was it would boost the GDP which is a funny criticism to make of the US, the largest rich country / richest large country depending on your framing. (And major US cities are the most economically productive parts of the US.)

ruralfp's avatar

“ I dunno, those don’t seem like terrible trade offs for getting rid of rampant disorder.”

You are trading short term stability for long term structural risk, picking up pennies in front of a steam roller type thinking.

An open society will have more day-to-day disorder, but a closed society that prevents dissent will not be able to course correct when presented with changes in its situation, because going against the prevailing authority is a losing move.

Maybe if you are a small technocratic city state you can get away with this in the longer term because the decisions your government makes don’t matter all that much versus bigger global trends/forces, but we have plenty of evidence that this does not work on larger scales.

hongkonglover77's avatar

You're assuming a dozen different questionable things. Can we start with an argument for the type of surveillance Scott is talking about being effective at reducing urban crime? How exactly is AI surveillance going to reduce the number of homeless people using fentanyl in public?

And then can we get a source on there being a billion people in China/Singapore/Gulf countries who would say no to your hypothetical?

Vaclav's avatar

Are you confident you're not doing the thing where, between 2008-2016, US liberals were pretty chill about the threat of executive overreach, encroachment on civil liberties, etc., and pretty happy to write off most of those concerns as right-wing whinging. And now they're maybe thinking they should have cooperated more with libertarian types to limit the power of the government when they had the chance.

More concretely, are you actually okay with AOC-Tlaib 2028-2036 having all of those powers?

Zanni's avatar

2020-2024 is a better example. How confident are you that you haven't been deliberately seeded with government disinformation? That it is increasingly impossible to figure out who the government agents are?

Samuel Prietol Lima's avatar

I agree with you that many places have "way more disorder than they ought to have, and that is meaningfully holding the bystanders there back."

I don't see what makes you assume that lack of intelligence (in the spy sense) is the cause of such disorder.

People (like you and I) who point out the disorder problem all the time, are frequently bringing up how most disorder is caused by repeat, well known offenders. There is a lack of "political capital/ desire by the ruling class" to solve the problem, not a lack of knowing enough about who/what/when.

Bryan Caplan mentioned this recently, I don't remember where, when talking about El Salvador: "5 years ago, people would have said that El Salvador doesn't have the 'State Capacity' to cut down murder rates by a factor of 10. Clearly they did, because the rates are in fact down, and all it took was throwing everyone with face tattoos in prison."

Most metropolitan areas totally could reduce their crime and disorder by at least half while throwing ~0 innocent people in jail, if they just arrested the people who already committed arrestable offenses.

Scott made a great article about why this is not true on a national level. But it's definitely true in the Chicagos / Bostons / NYs.

Remember how one day san francisco just didn't have the homeless problem anymore?

Scott Alexander's avatar

As we talked about recently, crime is at record lows.

"Disorder" may not be at record lows, but if we're talking about tent cities and open air drug markets, this doesn't exactly require mass surveillance to stop, does it? If it were legal to do so, police could just walk into a tent city/drug market and arrest the people who are obviously there. They hardly need to use Claude to search everyone's private communication to figure out who secretly admitted to their mother over text message three years ago that they were living in a tent city.

I think you're just sort of doing a cargo cult thing of assuming that if we give up enough of our essential freedoms, surely that will help in some way. But I think it's a uniquely bad lever for the kinds of problems we're facing, even granted that these problems are important.

scf0101's avatar

But your argument doesn't follow! Why is the exact level of technology police departments have today the right amount? Why can't they use tech to generate efficiencies and increase productivity as all other jobs can?

Crime is at record lows relative to U.S. history, but not compared to international peers.

I live a block away from a cafe that has a wide blank wall. Every few weeks the owner of the cafe has to go out and spend money and time to cover up the crude taggings that various idiots do throughout the week. This is an implicit tax on him running his business and extremely unfair to him. In a just world, we could identify the taggers immediately and a $1,000 fine would show up in their mailbox the next day. There is no downside to this other than decreased crime.

artifex0's avatar

Absolutely agree with those points, especially the first one: creating misleading narratives through cherrypicking becomes much easier the more data you have to mine. We've already seen examples of the current administration using that sort of misleadingly constructed narrative to justify criminal investigations and prosecutions - not just against political opponents, but against random people harmed by their incompetence whom they want to discredit, such as Abrego Garcia and Becca Good. Automating that sort of tactic while making it much more powerful is a genuine threat to everyone.

Another thing worth mentioning: there was a study published recently demonstrating that LLMs can be very effective at de-anonymizing online accounts: https://arxiv.org/abs/2602.16800

Viliam's avatar

"ChatGPT, find the most outrageous thing you can find online about X.Y. Think ultrasupermegahard, check the social networks for possible pseudonymous accounts. Provide sources. Also create a funny meme about that, aimed for the general audience."

"Do this for each identifiable person on this photo from the yesterday's protest."

Michael Watts's avatar

This is something I don't understand about the census. It appears to be widely viewed as morally required to comply with the census and morally sinful to evade it or provide inaccurate information.

But the purpose of the census is that the government will use the information they collect to hurt people. My read of the very broad view of racial questions on the US census has been:

1. Nonwhites want to be counted so that other groups can be hurt for their benefit.

2. Whites have no realistic pathway to that outcome and don't expect it to happen. But they want to be counted anyway, so that they can be hurt for the benefit of others, because it's the right thing to do.

3. Jews have been ambivalent as a group. Some of them want to be counted because they expect to be given things at the expense of others. Some of them believe that, if they were counted, the results would look so bad that they'd get put in the "suffer for the benefit of everyone else" category. And some of them worry that, once you've been counted, even if the person who did it meant to hurt your enemies, someone else could obtain the data and use it to hurt you.

People frequently make the argument in your comment, that you can prevent the government from doing bad things in response to information by stopping them from collecting the information, but I've never seen it applied to the census, despite the census being a pure exercise in collecting information for the purpose of doing bad things.

(In a US context, that is. In France they worry quite a lot over whether the government would do bad things with information it might get from a census, and categories of questions are banned for that reason.)

Zanni's avatar

What bad things do you envision coming from a census? The census mainly guides apportionment of representatives. And apportionment of representatives in this way is merely "letting people's feet reveal their preferences." Good, growing states get more of a vote in Congress, weak and dying states get less of one.

This doesn't seem to be very related to race. In fact, there are racial categories (non-white) that seek to claim "biracial" as their category, thus making it impossible to lump them into other categories (this is part of why we got "Hispanic", as that's a cultural designation, and thus not spoiled by interbreeding and cultural connotations of such).

Jeffrey Soreff's avatar

Yes, these are all good points and plausible scenarios. I'm rather fatalistic about these. Summarizing text was one of the earlier successes of LLMs. Internet comments are, of course, visible to the government, as they are to everyone else (hi NSA!). I strongly suspect that the Internet comments part of "gives everyone a conservatism-vs-liberalism score" is probably easy even now (given enough compute), possibly even with open source models. Email, browser history, and texts are supposedly more private, but I expect some fraction of this leaks today, and, if the NSA decided to search it, even with just open source LLMs, I'd expect them to mostly succeed.

I see this as like the now-ubiquitous CCTV cameras. I'm not _quite_ sure moonshadow's

>The curious thing here is that the mass surveillance ship has already sailed.

is completely correct, but I think it is more correct than not, and the cost reductions in LLM analysis alone seem very likely to make it similarly ubiquitous.

zahmahkibo's avatar

> The government searches an opposition candidate's browser history until it finds him watching weird porn, then publicizes this.

is anyone else surprised that this has never happened?

in the 40-odd years since the apotheosis of the internet, there has NEVER (afaik) been a scandal originated by the exposure of a public figure's browsing history, in which the material was fucked up but not illegal.

we've had scandals involving stuff found on hard drives, scandals involving politicians watching porn in public, scandals involving illegal activity conducted online. but never, not once, in any country in the world, at any level of government, a single scandal involving private, non-local, non-illegal online activity? no one has ever tried to swing their school board race by setting up a hotspot and anonymously publishing Mr. Smith's affinity for stepmoms? I must be missing something

Desertopa's avatar

>in the 40-odd years since the apotheosis of the internet, there has NEVER (afaik) been a scandal originated by the exposure of a public figure's browsing history, in which the material was fucked up but not illegal.

In most cases, such information never becomes public in the first place, but there actually was a case recently where a candidate for the governorship of North Carolina withdrew his candidacy after the scandal that arose from comments he was leaving on pornographic websites. Nothing he posted was illegal, but it was stuff that was considered highly inappropriate for any candidate for public office.

That's the only case I can think of, but considering that he had to post those comments using his real name to be caught in the first place, I think the rarity owes a lot to the difficulty of those circumstances arising.

Mister_M's avatar

It doesn't seem especially common (yet), but it does happen, and I might be wrong about it not being common:

https://www.axios.com/local/raleigh/2024/09/19/mark-robinson-cnn-report-nc-scandal

Michael Friedman's avatar

Yes. I agree. But not practical.

I think we stand a better chance of establishing bright line rules like "No snooping in email on cloud providers without a warrant" than very vague things like "No mass automated trawling of people's public social media."

Zanni's avatar

To be perfectly clear: if one is using the internet to determine the likely pattern of voting in a country (your own or others), this is perfectly legitimate statistical work. What we need in place is the de-identification of "data" from "people."

Gordon Shriver's avatar

So basically you don’t want the government to turn into BlueSky.

orthonormal's avatar

Mass surveillance can be used to identify anyone who went to protests, and then search each of them for evidence of any crime they've ever committed, allowing the government to throw a large portion of the opposition in jail en masse, while not pursuing any such criminal investigations against non-protesters.

moonshadow's avatar

The curious thing here is that the mass surveillance ship has already sailed. There is already an entity that has all this data, happily contributed in exchange for some pixels or other transient intangible reward, and is continuously sharing it with hundreds of third parties, and no-one knows or much cares what's being shared or with whom. The malicious GDPR compliance "we and our 227 partners value your privacy! The data we collect is used for legitimate interest..." popups are universally regarded as a nuisance; people don't care enough to try and stem the flow, they'd rather just not get asked.

Minus's avatar

They are still bottlenecked by how many people they can throw at it. With AI it becomes very possible to generate a high quality personality profile and live update it as quickly as you get data. We already surveil too much, AI moves it up another order of magnitude or two.

Jimmy's avatar

Wouldn't private companies do that on their own anyways? All the government has to do is buy that data.

Zanni's avatar

Your mass surveillance device is something you bought and paid for. Isn't it hilarious? It's also a potent assassination tool...

Zanni's avatar

We're just going to note at this point that antifa has been designated a terrorist group. Yes, it is a perfectly good idea to hound them for any crime they've committed, just like hounding the KKK, or going after Capone for tax evasion.

Criminals aren't always caught for their "problematic" activities. When the Secret Service shows up to arrest the gunrunners, they aren't going for the guns.

Feral Finster's avatar

That pesky Fourth Amendment. "If you have nothing to hide, then you've got nothing to fear," right? I mean, only those who hate freedom would not trust the government.

Taymon A. Beal's avatar

This isn't an argument, it's just an attempt at social shaming. (https://slatestarcodex.com/2018/05/08/varieties-of-argumentative-experience/) I really don't think it's helping.

Mister_M's avatar

It's not an argument in itself, and I don't think it purports to be, but rather than social shaming, I took it to be a (pointed) reminder that this is hardly a new debate. If someone takes the position that maybe mass surveillance is good, actually, and gives a reason that everyone's heard many times, without clearly identifying how their view responds to the last 300 years of debate about this, I think the first response shouldn't be to start enumerating reasons why mass surveillance is bad, actually. Who's going to be persuaded by any of this?

Taymon A. Beal's avatar

I think one should not be the first to abandon argumentation in favor of shaming, even if the argument is a very familiar one. After all, that means you could instead just link to existing counterarguments.

TGGP's avatar

The Fourth Amendment says people can "be secure in their persons, houses, papers, and effects". It is a sort of property right. The earliest cases involving it simply demand that the things which were taken had to be handed back. It's not a demand that others be ignorant. Ignorance would make the government ineffective at the things it tries to do, which are ostensibly supposed to be on behalf of the citizenry. Having rights for the citizenry is supposed to prevent the government from harming us while it undertakes its actions. Which is indeed supposed to help those with "nothing to hide", but not any kind of guarantee that those who DO have something to hide for violating the laws can't be found out.

Feral Finster's avatar

The Fourth also applies to things like warrants.

Mister_M's avatar

"Secure... against unreasonable *searches* and seizures." While there's debate about whether the constitution provides a *generalized* privacy right, it seems clear that security of the person against unreasonable search is a *case* of privacy rights. I don't think the primary purpose is to prevent the inconvenience of government agents trampling around your home leaving muddy footprints on your carpet. It seems clear the purpose of preventing a search is to prevent the discovery that could result from the search.

TGGP's avatar

Seizure isn't a privacy right. And being forced to return an item seized doesn't prevent "discovery", but as mentioned that was the earliest remedy. Additionally, if there is a search of a completely wrong person and no evidence discovered, there is NO remedy to be had in withholding evidence! Whereas if treated as a matter of property rights, that would still be a matter of trespass (which in the early republic would typically have been conducted by a private individual rather than a government agent given legal immunity).

WindUponWaves's avatar

"The innocent have plenty to fear from the sort of person who says, "The innocent have nothing to fear!"."

-- Terry Pratchett

Shaked Koplewitz's avatar

I can see why people are uncomfortable with mass surveillance but also this is all data that Google and Facebook already have, and I don't think they're much more trustworthy than the government (they're certainly much less accountable).

Performative Bafflement's avatar

> this is all data that Google and Facebook already have, and I don't think they're much more trustworthy than the government

Really??

1) For one thing, people have volunteered to give them their data, which is different than having it taken from you involuntarily.

2) For another, Google and Facebook are actually routinely among the highest approval-sentiment organizations, and Congress / government among the lowest; the government would be around the approval rating of Antifa and Russia in this chart:

https://imgur.com/a/uimPxmQ

3) For a third thing, Google and Facebook have teams of actually smart and competent people who know about cybersecurity protecting your data, AND they largely keep it in house because they want to use it. Try to buy packages of people's data from Google - it's not possible. They use that data internally, because every US consumer is worth $200 - $300 a year to them, they're the global ad duopoly.

The government? They don't have *any* talent - they don't pay enough. The "contractors" that are forever milking them who theoretically have that talent? A revolving door of also-rans who have really high churn. Your data is veridically something like 100x safer at Google and FB.

Shaked Koplewitz's avatar

This was specifically about data the government can buy from them (i.e. the stuff they're willing to sell). As you point out, they both hold much more extensive private data which they're legally allowed to have and use fancy AI algorithms on with few restrictions.

actinide meta's avatar

Google and Facebook mass surveillance should be stopped, I agree.

Raj's avatar

Yeah I am much more afraid of the organization with a monopoly on violence than the one trying to sell me stuff

Zanni's avatar

You shouldn't be. The one "trying to sell you stuff" has incentive to degrade your mind. The government, when it gets up to stuff, will throw you in jail -- jail's pretty peaceful, if you're in solitary (which some prefer).

Victor Thorne's avatar

Google and Facebook don't have the legal power to throw me in prison.

Simon Neil's avatar

You seem to be coming at it from a "clean up the streets" angle, but the nebulous term "mass surveillance" isn't really about addressing that (entirely valid) concern. Certainly it's not what the DoW would care about. Mass biometric scanning - whether that's cameras, facial recognition, or molecular biology methods - is more what you're thinking, and yeah, Chinese nationals might tell you it's a mixed bag, pros and cons (except they're aware of the inescapable consequences of doing so).

*Real* mass surveillance isn't about "you were here on this night, at this time, doing this act". It's about thought- and speech-suppression. It's about eradicating your enemies. It sets a bar of behaviours so high that you'll be horrified when your children turn you in for that dinner talk. It's the antithesis of a free and fair State.

If you think it's acceptable because the power only goes to the paragons of society we elect, then I don't know what to tell you. If you've been of voting age for more than 2 years you'll be aware that sometimes the people you don't want to get into power occasionally do.

Zanni's avatar

Remember the FOIA on Missouri? Women really did turn in their husbands...

[The nazties are already here! They were always here all along!]

Jack Blueman's avatar

Why would we possibly trust our government to use mass surveillance systems to crack down on street crime or do anything in particular to improve our welfare? Their track record on this, and on almost anything else they've said is for our good over the last 50 years, is abysmal.

This isn't Singapore, the government here is not interested in the people's welfare, it views us as sheep to be fleeced or morons who need to be indoctrinated, its incompetence is our protection.

Mass surveillance will be used to make most people's lives worse, not better.

actinide meta's avatar

Privacy is a terminal value.

Paul's avatar

Thank you. This is true.

Dave92f1's avatar

Why do you say 0.0001% probability? Per Claude.ai, "pure civilian mass uprising without significant military or guerrilla warfare components, 2–4%" over the last 200 years. 2% is a lot more than 0.0001%. (Recall that 250 years ago there was a popular revolution in North America that overthrew the government.) To this day most of the Revolutionary-era state constitutions contain the "right of revolution".

Michael Friedman's avatar

I disagree with you on preferences but it is very hard not to use these capabilities once you have them.

It is like familial DNA matching. It is an identification tool that was never imagined at the time of the founders and would almost certainly have horrified them if they knew it might one day be possible. I think it would probably have horrified most people back in the 1970s. Now try to convince yourself not to use it if you know there is a good chance it can find the rapist and murderer of a 12 year old girl.

Frikgeek's avatar

I don't think you'd notice a meaningful difference in safety between Ljubljana, Zagreb, The Hague, and Abu Dhabi or Shanghai. In fact according to the safety ratings Shanghai, with mass surveillance, is less safe to walk during the day and the night than Ljubljana, Zagreb, or The Hague none of which have mass surveillance.

Not to mention that the type of surveillance you'd want to reduce street crime(physical cameras everywhere) has little to do with the type of surveillance that the DoW wants at the moment(access to and analysis of digital communications of citizens). Reading everyone's texts and social media posts would have very little impact on street crime.

Zanni's avatar

Physical cameras are already at the scene of every protest. Why would they need more? They're already showing up at the driveways of honkers, and the honkers are calling 911 (which is hilarious. What do they expect the police to do? Someone drove into my driveway, and is Waving at Me! OMG Call 911!!!!!)

Viliam's avatar

Seems to me that a large part of this thread could be summarized like this:

There are many countries that have way less crime than USA. (I think almost all developed countries?)

You could choose one of them that has lots of surveillance and argue "this is why we need more surveillance in USA".

Or you choose one of them that has little surveillance and argue "this is why we do not need surveillance in USA".

(Or, instead of surveillance, you could do the same exercise with other things.)

Zanni's avatar

There are people that have close to a 99% chance of toppling the government*. Letting your own inability prevent you from thinking of them is hindering your analysis.

*Chaos is easy. Rebuilding is hard.

FrustratedMonkey's avatar

Nobody is innocent. The current protection is scale: there aren't enough people to investigate everyone. With AI, you, an individual just going about your day, can be investigated. Let's say you post a funny meme. Then you get labeled as 'subversive' and a potential person of interest, maybe even a terrorist. Then what are you going to do? This administration is pretty freely labeling anybody they don't like as a terrorist. Well, then it would be ok to detain you, right? Maybe hold you for a few months, ruin your life?

throw_away_poster's avatar

It's been proven repeatedly that safeguards exist on paper only. See the countless news stories of cops tracking their exes, their exes' new partners, etc.

It essentially makes any individual easily prosecutable by the state if it so desires. Use an illegal surveillance method (as in, not admissible as evidence) to find a crime, perform parallel construction, and presto chango - a crime.

And prosecution of a crime at least has inherent protections. What about that legal but weird porn you like? Or whatever else? The state can destroy your life without exercising power in a way that can be confronted.

You also have to be clear about what "surveillance" means. Increased surveillance cameras in public? Monitoring all web traffic? Or, in addition, the phone metadata a la Verizon in the 2000s?

Adam Kalinich's avatar

I don't think you explained the nuance of "collection" quite right. "Collection" generally means that the data has come to rest in a database. This means that if the NSA taps a cable and feeds that through a streaming algorithm, it isn't technically collected yet. However, if that data is then stored for analysis, it does count as "collected."

There are also rules around query targeting --- if an NSA analyst searches for a foreign terrorist's communications and finds communications with a US citizen, then it's fair for the analyst to look through those communications. However, they would need a warrant to look through the US citizen's other communications, even if they thought they had the US citizen's communications lying around in a database somewhere. "Reverse targeting", ie searching for the communications of foreigners they might have been talking with, but basically trying to just target the US citizen, is also not allowed.

The big caveat to all of this is that the NSA doesn't generally count metadata as communication data, and basically lets itself do whatever with it.

Timothy B. Lee's avatar

""Collection" generally means that the data has come to rest in a database."

This is the whole point: that would be the common-sense way to use the term, but the NSA adopted a bizarro definition where no collection has occurred until someone queries the database and turns up that specific bit of information.

Adam Kalinich's avatar

No, that's not true. The NSA says it's collection if it's in a database where someone could in theory query it, not only once someone has actually queried it. The confusion arises because the NSA has algorithms processing streaming data, and any data that is discarded without ever producing downstream data is never 'collected.'

Viliam's avatar

How is that "not true" when you have repeated what he said? The data *in the database* is considered "not collected". You just have to trust them when they say they didn't look.

Saint Fiasco's avatar

The argument is that there is no database. When the NSA processes the data "in the stream", they are reading all the contents of the wire, live, processing it right then and there, then only storing the processed data that relates to their target. Everything else is discarded before it ever touches a database.
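To make the claimed distinction concrete, here's a toy sketch (purely hypothetical, not based on any real system): a streaming filter that discards non-matching records as they pass by, versus a pipeline that writes everything to storage first and queries it later. Both return the same matches; the dispute is about what happened to everything else.

```python
# Toy illustration of "stream filtering" vs. "collect then query".
# Entirely hypothetical; names and data are made up for the example.

def stream_filter(wire, selectors):
    """Process records as they arrive. Non-matching records are
    discarded immediately and never come to rest anywhere."""
    for record in wire:
        if any(s in record for s in selectors):
            yield record  # only matches are retained ("collected")
        # non-matches simply fall out of scope here

def collect_then_query(wire, selectors):
    """Store everything first, query later. Under the common-sense
    definition, every record here has been 'collected'."""
    database = list(wire)  # all records come to rest in storage
    return [r for r in database if any(s in r for s in selectors)]

wire = ["alice->bob: lunch?", "target->carol: meet at 5", "dave->eve: hi"]
assert list(stream_filter(wire, ["target"])) == ["target->carol: meet at 5"]
assert collect_then_query(wire, ["target"]) == ["target->carol: meet at 5"]
```

Same output either way; the definitional fight is over whether the records that never matched were ever "collected" at all.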

George's avatar

I think the point being missed here is that "in a stream" can, for all intents and purposes, mean "stored at rest for as long as you want".

Adam Kalinich's avatar

The definition used by the NSA restricts the amount of time it can be held before being 'collected.'

Adam Kalinich's avatar

Correct, the NSA is claiming a distinction between data that was filtered through and discarded, without ever coming to rest, vs. data that was stored and potentially used later (even if it was never actually used, it still counts as collected).

Viliam's avatar

If I secretly listened to your phone calls, but immediately forgot everything that you said (because it was just some boring stuff), would you agree if I said that I didn't truly spy on you?

Feral Finster's avatar

Law is meaningless. Enforcement is the only thing that matters.

For instance, the head of the NSA can perjure himself in his sworn testimony before Congress - testimony concerning NSA spying on members of Congress - and, since nobody will do anything, it doesn't matter.

Victor's avatar

Wait until the next election. Then we will see.

Feral Finster's avatar

Lol. We hear that every election.

Victor's avatar

Yeah. Would you be willing to live under the laws as they were written in 1776? If not, why not?

Feral Finster's avatar

I'm not sure what point you are trying to make, but it depends on who I was and where.

Deer Reeder 🦌's avatar

With the current violence focused foreign policy which seems not changing soon, both mass surveillance and fully autonomous weapon would be required to keep Americans safe. Any argument on those two points is mostly theatrical: when that need actually comes, most people will willingly give up their sense of freedom and moral ground for physical safety. That's the cycle of violence and, arguably, human nature.

Feral Finster's avatar

Rich that the government must abrogate the law to "keep Americans safe" as the result of the government's own chosen policy.

A self-licking ice cream cone.

Deer Reeder 🦌's avatar

Well yeah, they are smarter than last round.

quiet_NaN's avatar

> With the current violence focused foreign policy which seems not changing soon, both mass surveillance and fully autonomous weapon would be required to keep Americans safe.

The problem with that phrasing is that you imply an effectiveness which just is not there. There are very few terror attacks on US soil, but also rather few people getting convicted for attempting them. Almost as if most US Muslims do not want to blow themselves up. If you save ten US lives by tapping into everyone's data, is that worth it? If you could save another five citizens a year by requiring any camera to have a backdoor for the NSA just so they can check that you are safe in your bedroom, would this also be required to 'keep Americans safe'?

And with regard to autonomous weapon systems, the justification is even more flimsy. If you are sitting in some house in rural New Mexico, the DoD already does an excellent job of protecting you from foreign attacks. You do not require predator drones controlled by onboard LLMs to keep you safe. Giving your military more toys to blow up weddings at the other end of the globe will not in fact make Americans safer -- plenty of Western countries hardly ever blow up Brown people and are still not overrun by terrorists.

The 'keeping Americans safe' rhetoric was bullshit after 9-11, and it has not aged well in the last 25 years either. You can justify anything. 'Hey, if we torture suspects, we might decrease US mortality from terror attacks by another five nanomorts per capita'.

The US has plenty of problems, but being threatened by other nation states or terrorists is very far down on the list.

Zanni's avatar

Given that we've had assassination attempts on our President in the past five years (we're what, up to at least 5?)... I'd say being threatened by terrorists is a Pretty Big Deal (some of those were Iranians, some were Democrats).

That said, no, Peoria doesn't need to worry about "being threatened by other nation states."

I've sat through a 2-3% chance of my hometown getting nuked within the past ten years, though. TEOTWAWKI is grand fun at parties, innit?

Victor's avatar

I do not condone assassination attempts, but I do not think the government needs to tap my communications to keep him safe.

darwin's avatar

>With the current violence focused foreign policy which seems not changing soon, both mass surveillance and fully autonomous weapon would be required to keep Americans safe.

Why?

Nukes are still going to beat ai drones.

Unleashing mass surveillance and autonomous killbots on foreign civilians does more to endanger Americans by creating enemies than it does to protect us from anything.

If China had a technology to stop our nukes then I'd agree it's critical we get the same technology before they use that decisive advantage against us.

But for this stuff? It seems more concerned with soft power and political interventions, than with actual military threats to the US.

Victor's avatar

Yes, and are we surprised when the current administration actively creates the very conditions that would entail?

Steven Joyce's avatar

"Ask to see the full contract, as much as you can get. Scrutinize it yourself or run it by a lawyer in a conversation where attorney-client privilege exists (basically, when you are talking with them for the explicitly-stated intent of potentially securing their legal counsel, or once you’ve formally secured them as your legal counsel)."

The parenthetical about when attorney-client privilege exists is seriously misleading. Oversimplifying, the privilege only extends to conversations about actual or anticipated litigation involving the client, not all conversations between attorney and client. Basically, if you can't point to a specific litigation matter for which the conversation is helping the attorney prepare, you can't claim the privilege. This would rule out e.g., the use of the privilege by a journalist who wanted an analysis of the contract by their attorney.

B Civil's avatar

You made me curious about this, so I pasted your comment into Claude for interpretation.

>No, this is not accurate — in fact, your correction overcorrects significantly and is itself wrong.

>Attorney-client privilege in the United States (and most common law jurisdictions) covers confidential communications made for the purpose of obtaining or providing legal advice, full stop. It is not limited to litigation or anticipated litigation. That is a different doctrine — the work product doctrine — which does require anticipation of litigation.

>A client can consult an attorney about contract review, regulatory compliance, estate planning, business structuring, or any other legal matter, and those communications are privileged. The journalist's contract review scenario you describe would be covered, assuming the standard elements are met: an attorney-client relationship, a confidential communication, and the purpose of seeking legal advice.

>The original parenthetical is indeed oversimplified and somewhat misleadingly phrased, but your proposed correction — that privilege only attaches when you can point to a specific litigation matter — would be a significant narrowing of the doctrine that does not reflect the law.

>What is true is that not every conversation with an attorney is privileged. Communications for business rather than legal advice, communications made in the presence of unnecessary third parties, or communications where the attorney is acting in a non-legal capacity can fall outside the privilege. But "no pending or anticipated litigation" is not a disqualifier.

Steven Joyce's avatar

I accept B Civil's correction -- I was thinking of the work product doctrine and forgot that attorney-client privilege works differently.

I followed up with Claude Sonnet and Opus. Both thought the privilege wouldn't apply to a journalist who was asking their lawyer to interpret the contract so the journalist could write accurately about it. Opus had a broader interpretation of the privilege, and the vibes were much more "the privilege applies" than Sonnet's answer, but the two answers were substantively quite similar.

Prompt: For the purpose of determining whether attorney-client privilege to a conversation between a client and their attorney (in the United States), what counts as "for the purpose of obtaining legal advice". Would it qualify if the conversation was between a journalist and their lawyer about the interpretation of a contract between third parties obtained from a source by the journalist?

Sonnet Response:

This is a nuanced privilege question. Here's how courts generally analyze it:

The Core Test

Attorney-client privilege protects communications that are (1) between attorney and client, (2) made in confidence, (3) for the purpose of obtaining or rendering legal advice. The "legal advice" element is the key one for your scenario.

What "For the Purpose of Legal Advice" Means

Courts distinguish between a lawyer providing legal advice versus acting as a business advisor, strategist, or information conduit. The communication must be made with the dominant purpose of obtaining the attorney's legal judgment — not just their general expertise or opinion.

Critically, the advice sought must be legal in nature: advice about rights, obligations, legal exposure, regulatory compliance, how the law applies to facts, etc. A lawyer opining on something as a general consultant doesn't automatically attract privilege.

Applying This to Your Scenario

The journalist-lawyer conversation about a third-party contract could qualify, but it depends heavily on what exactly is being sought:

Likely privileged if the journalist is asking things like: Do we have legal exposure for publishing this? Does possessing this document violate any law? What are our rights here? Could the parties to this contract sue us? Those are classically legal questions about the journalist's own rights and risks.

Likely not privileged if the journalist is just asking: What does this contract mean? or How do these provisions work? in a general sense — for example, to understand the contract well enough to write about it accurately. That starts to look more like using the lawyer as an informed reader or subject-matter expert, not obtaining legal advice for the client's own legal situation.

The third-party nature of the contract doesn't automatically defeat privilege. Lawyers routinely give clients privileged advice about legal documents involving other parties — that's common in litigation, due diligence, and journalism law. What matters is whether the advice is oriented toward the client's legal position, not just abstract interpretation.

Other Complicating Factors

The crime-fraud exception is worth flagging: if the journalist obtained the documents unlawfully (e.g., via theft or breach of a confidentiality agreement), communications in furtherance of that unlawful act won't be privileged, even if they're framed as legal advice.

Work product doctrine might independently protect some of this, if the conversation is in anticipation of litigation.

Who controls the privilege also matters — the client (the journalist or their employer) holds the privilege, not the attorney. If the journalist is employed by a news organization, the organization may be the actual client, which can affect scope.

Bottom Line

The conversation would most likely qualify for privilege to the extent it's about the journalist's own legal situation arising from possessing or publishing the contract — and would be on shakier ground to the extent it's just generic contract interpretation disconnected from any legal question the journalist personally faces.

Opus response:

This is a nuanced privilege question. Here's how it breaks down:

The general standard: Attorney-client privilege applies when a client communicates with their attorney in confidence for the purpose of seeking or obtaining legal advice. The key requirement is that the client is seeking legal guidance related to their own legal affairs or interests.

Your scenario — journalist consulting their own lawyer about a third-party contract:

This would very likely qualify as privileged, though the reasoning matters. The journalist isn't a party to the contract, but that doesn't defeat privilege. What matters is why the journalist is seeking advice. Plausible bases for privilege here include:

The journalist may need to understand the contract's meaning for their own legal exposure — e.g., could publishing information about this contract create defamation, tortious interference, or other liability?

The journalist may be seeking advice about their own obligations — e.g., whether the contract's contents implicate confidentiality concerns, trade secrets, or whether publishing could violate any law.

Even pure interpretive questions ("what does this clause mean?") can be privileged if they're part of a broader consultation where the journalist is seeking guidance to inform their own conduct or protect their own interests.

Where it could get weaker:

If the lawyer is essentially just acting as a translator of legal jargon with no connection to any legal interest of the journalist, a court might view it as more of a business or editorial consultation rather than legal advice. But courts tend to interpret "legal advice" broadly, and the line between "help me understand this contract" and "help me understand my legal position regarding this contract" is thin.

The communication still must be confidential — if third parties were present (e.g., an editor sitting in), that could waive privilege unless another doctrine (like common interest or work product) applies.

Bottom line: In most circumstances, yes, this would qualify. Courts generally don't require that the client be a party to the specific legal instrument being discussed — they require that the client is consulting their own attorney for purposes related to obtaining legal advice that bears on the client's own interests or conduct. A journalist consulting counsel about a contract they've obtained from a source fits comfortably within that framework, especially given the obvious legal risks inherent in investigative journalism.

B Civil's avatar

That’s very thorough. Thank you.

Cheezemansam's avatar

Isn't it at least a little bit shameful to be copy pasting LLM output into a conversation? What value do *you* feel you are bringing to this conversation?

Taymon A. Beal's avatar

What do you think they should have done instead?

Cheezemansam's avatar

Thank you for your thoughtful inquiry.

[Something Something Clearly written by LLM]

I hope this distinction clarifies the concern.

Taymon A. Beal's avatar

The whole reason the above commenter consulted an LLM was because they didn't already know the answer. So "use your own expert knowledge to answer the question" was not actually an option on the table.

B Civil's avatar

Thank you. IANAL and I don’t play one on tv either.

Cheezemansam's avatar

Honestly, I think that is a fair enough distinction, although with the massive caveat that LLM's are notoriously bad with law related information (although in this specific case it seems to be pretty reasonably correct). But outside of "It is not reliable for law", straight up *copying* LLM output, surely, is a bridge too far.

Sanskar Chand's avatar

Surely the obviously sensible thing is to not reply then? Leave it to a legal expert, or someone experienced in such matters, to put forth their comment. Anyone can simply ask an LLM, so this accomplishes nothing but adding noise.

John Schilling's avatar

Not answer the question at all. One of the biggest problems with ACX is that the good, thoughtful posters too often get lost in the noise. If I want to know what an LLM thinks is the answer to [X], I can go ask an LLM. I come here because intelligent, thoughtful humans with relevant knowledge or expertise are *better* than LLMs at answering questions, and I want to hear what *they* have to say.

If I see someone positing the result of their query to an LLM, that's noise. It adds nothing of value to the conversation for me. But, in the course of scrolling past that noise, there's a chance I'll miss the answer an intelligent, thoughtful SME gave a few replies later, and there's a chance that answer will never be given because the guy who would have given it tuned out.

I would prefer people not do that. If you can't provide your own answer, either stick to asking questions for other people to answer, or remain silent, or know that you're wasting my time with the LLM copypasta the same way you would have with a "let me Google that for you" link.

If you think an LLM is the right tool to find a few facts that you need to supplement your expertise, that's another matter.

Christina the StoryGirl's avatar

Oh man does it feel good to see someone else say this!

Whenever I see a prelude to an LLM paste in the comments here, I scroll right on past it. I'm even less interested in LLM copypasta content than I am in hearing about people's dreams*.

* actually, I rather enjoy hearing about people's dreams, even if they're almost never able to convey the immediacy or weirdness of them. But I can't wait for that, too, to be ruined by LLMs.

"I couldn't quite remember this weird dream I had last night, so I told Claude it had something to do with not being able to find a spatula, and here's my dream!"

Evan Little's avatar

You are flatly incorrect on this. All private communication with your legal counsel is considered privileged by default. I don't know where you got this idea that it must pertain to a specific legal matter, but you're wrong.

Zanni's avatar

Presumably this is misunderstanding the nature of legal employment.

John Wittle's avatar

for once, the "much more than you wanted to know" tagline feels like an outright lie, like actually crossing the line into parody

we are all desperate to figure out what the hell this means

haven't read the post yet but just wanted to thank you, scott, for wading into the crap for us

Taymon A. Beal's avatar

Note that Scott didn't write this, it's a guest post whose authors are anonymous.

John Wittle's avatar

ah yeah fair, i was sorta attributing it to scott as publisher but i guess it's more appropriate to thank the authors

so. thank you authors! and thank you scott, for conveying the info to us

haze's avatar

I'm also going to post a thank you, so I'll just tag on here.

Thanks for performing the public service of broadcasting this info; it's an unfortunate state of things that such important events are (intentionally?) expressed in murky ways via all the official channels (eg the OpenAI blog posts, gov't X posts)

Notmy Realname's avatar

An even broader question:

To my knowledge, this is the first time that a frontier model developer corporation (Anthropic) has visibly butted heads with a governmental agency and insisted that it should be allowed to self-regulate and hold itself to a different standard than the standard lawfully imposed by the representative government of the United States of America.

My impression is that you are favorably disposed towards Anthropic and unfavorably disposed to the current administration. In a vacuum, do you think it is good for p(doom), ai apocalypse concerns etc. for corporations to reject the sufficient authority of the government and insist on following their own standards? I haven't seen anybody else discuss this, but I personally am alarmed that everybody seems to be fine with a private AI corporation essentially overruling my elected leaders.

Desertopa's avatar

In general, I don't think that AI companies have the leeway to hold themselves to looser standards than required by the law. They can voluntarily take stricter standards upon themselves, but not exempt themselves from standards they're normally required to meet. So, in general, AI companies self-regulating on top of the standards already required of them by the law will tend to be safer than AI companies merely complying with the standards required by the government.

SimulatedKnave's avatar

This is, in fact, how all companies work. And, for that matter, people. You can drive the speed limit, or over, or under. Each choice says something about your attitude to risk.

Benjamin Scott's avatar

Many highways do have minimum speeds.

Victor's avatar

In this case, the Highway Patrol is trying to force someone to drive 90mph on a road that's only safe up to 45...

Tunnelguy's avatar

IMO Anthropic didn't do anything illegal or even anything harmful. I would feel differently about it if Anthropic was breaking US law or building a private army of killbots or something. But they just refused to provide services that were not agreed to in their signed contract, which is completely legal and reasonable.

Evan Little's avatar

Overruling? This is the opposite of that. Anthropic seeks to *limit* the scope of the agreement in contract, where OpenAI seems content, or naive enough, to allow the contract to be modified in the future to expand its applicability.

If you are a virology lab working with infectious diseases, would you share your data with a government that said "We promise not to use these viruses to attack people, and we will put that in the contract and allow audits of usage"? Would you share it with a government that said "We promise not to use these viruses to attack people, because it's illegal right now"?

These are very different statements, and if you are even mildly familiar with the history of world governments, including the United States, you should be extremely suspicious of the idea of law as morality.

AI Dan's avatar

"hold itself to a different standard than the standard lawfully imposed by the representative government of the United States of America."

They are following the standards imposed by the government. The standards imposed by the government have always and will always be a baseline minimum.

Any concept of having standards at all means having standards above that minimum.

In my mind it's a wild twisting of words to re-imagine having any standards as 'overruling elected leaders'... But I guess when you have elected leaders that demand you ditch any standards you have then these can become the same thing.

Nebu Pookins's avatar

Before getting into the core issue, I want to be sure that we're in agreement about the empirical facts at play:

1. Anthropic isn't breaking any law. There is no law requiring a company to sell its product without terms of service. The Pentagon signed a contract already knowing these usage restrictions existed.

2. The two "red lines" that Anthropic is requesting to be respected are things that are already arguably illegal. Mass domestic surveillance violates the 4th Amendment. DoD's own directive (3000.09) requires human oversight for lethal force decisions. Anthropic is asking the government to confirm it will follow its own laws.

3. Though the details are vague, to a first approximation based on what information is publicly available at this time, OpenAI immediately got the same deal with the same restrictions. Hours after Anthropic was banned, OpenAI signed a Pentagon deal reportedly including the same prohibitions on mass surveillance and autonomous weapons. So apparently these restrictions are fine — just not when Anthropic insists on them.

So with that in mind, let's address the core issue you're raising:

I suspect that most people who seriously consider P(doom), AI apocalypse concerns, etc. have historically and consistently held themselves to a higher standard than what the government requires them to. That's essentially what the entire field of AI safety is — voluntary self-imposed constraints that go beyond what any law mandates, because the people closest to the technology believe the risks warrant it. If you've ever endorsed an AI lab adopting safety practices that aren't legally required, you've already endorsed the principle that corporations can and sometimes should hold themselves to a different standard than the government imposes.

The question is whether that's a good thing. I'd argue we have strong historical evidence that it can be.

In 1982, someone laced Tylenol capsules with cyanide on store shelves in Chicago, killing seven people. The FDA hadn't ordered a recall. Johnson & Johnson voluntarily pulled 31 million bottles off shelves at a cost of over $100 million and invented tamper-resistant packaging — a standard that didn't exist anywhere in regulation. Wall Street thought they were insane. Marketing experts predicted the brand was dead. If you applied the logic from your comment, you might have said: "I'm alarmed that a pharmaceutical corporation is holding itself to a different standard than the one imposed by our FDA." But J&J was right, and the government eventually adopted their voluntary standard as federal law. The FDA mandated tamper-resistant packaging for all over-the-counter medications by 1989.

And that was a case where the government simply hadn't gotten around to requiring the right thing. The Anthropic situation is arguably worse, because here the government is actively pressuring a company to lower its existing safety standards.

Which brings us to a case that's structurally almost identical. In 2001, the NSA approached major telecom companies and asked them to hand over customer communications data without a warrant. AT&T, Verizon, and BellSouth all complied. Qwest's CEO refused, saying it was illegal without a court order. The government retaliated by pulling hundreds of millions of dollars in contracts — which is pretty much exactly what just happened to Anthropic. At the time, you could have written a comment very similar to yours: "I'm alarmed that a private telecom corporation thinks it knows better than the NSA about national security. Who are they to overrule my elected leaders?"

But Qwest was right. The warrantless surveillance program was eventually found to violate FISA. Congress had to pass retroactive immunity for the telecoms that had cooperated — which is essentially a legislative admission that what those companies did was illegal. The company that "overruled the government" was the only one that followed the law.

So to directly answer your question: yes, I do think it can be good for P(doom) when corporations refuse to grant the government unrestricted access to powerful technology, especially when the specific uses being refused — mass domestic surveillance and autonomous weapons without human oversight — are the exact things that people worried about AI risk have been warning about for years. The framing of "corporation overruling elected leaders" has the direction of authority backwards. A company maintaining its own terms of service is not an exercise of power over the government. The government retaliating with supply chain risk designations and Defense Production Act threats to force compliance — that is an exercise of power, and a troubling one regardless of your political priors.

Mo Diddly's avatar

"Troubling" is a gross understatement. I suspect this will get walked back and/or stuck in the courts (the supply chain thing, not the changing of AI vendors), but if not, it basically means that any president can shut down a company for no reason at any time, and that will have MASSIVE repercussions.

Zanni's avatar
Mar 2 (edited)

Biden already shut down thousands of companies. We call them Landlords. He said they didn't need to be paid for years, but still had to provide services (and pay real estate tax, so it's not like these are "free houses" -- they cost the landlord something per year).

This is quite arguably more of a dick move than "your company must end".

Freedom's avatar

I don't think this is true. I am a landlord and do not recall being shut down by Biden. I do recall receiving money from the government for unpaid rent.

Zanni's avatar

State?

AFAIK, the CDC was saying that people were allowed to live in rented houses "rent free" (and this went on for quite a bit longer than the shutdowns, to the point where the Supreme Court got involved).

https://www.congress.gov/crs_external_products/IN/PDF/IN11673/IN11673.7.pdf

Looks like a considerable portion of "supposed to have been paid to landlords" didn't actually get paid, either.

Zanni's avatar

https://home.treasury.gov/system/files/136/landlords_rental_assistance_help_r3.pdf

The fact that Treasury's talking about forbearances for mortgages (pushing the issue onto more for-profit companies) should tell you that they don't think they're gonna necessarily cover everything 100%.

Note what you don't see? You don't see "tax forbearances" from real estate taxes.

larkejbglerhkbglearh's avatar

I think that this is the non-central fallacy. The idea of the government trying to get a company to do something, and the company saying "No, we don't want to do that!" usually means the company is up to something bad. But in this case, the government is trying to get them to make murderbots and spy on people.

I also don't think this is the same standard applied to every other company, unless that standard is "the government does literally whatever it wants". No American company has ever been declared a supply-chain risk, and as far as I know the DoW has never randomly decided to unilaterally rewrite its contract with a defence contractor in this way (though it could've happened and I just don't know about it).

John Schilling's avatar

The government trying to get a company to do something and the company saying "no we don't want to do that", usually means the company just wants to do something different, possibly not involving the government at all. For a variety of reasons sometimes including perceived ethical constraints, but most often just because they think the other thing they could be doing would be more profitable.

As a recently newsworthy example, when NASA first asked for bids for a manned lunar lander to support the Artemis program, they initially found every aerospace company in the US saying "no we don't want to do that". They did manage to get SpaceX on board, but they had to tweak the requirements so they could be satisfied by a hastily-repurposed Mars rocket.

Not because all the nation's rocket scientists and rocket entrepreneurs think that landing astronauts on the moon is immoral, and not because they are "up to something bad". In SpaceX's case it was because they thought going to the Moon would distract them too much from going to Mars. For everyone else, they just didn't think it would be very profitable to build lunar landers for NASA.

If the idea is that all companies will normally drop everything to take any offered government contract because "Moar Moneyz Good!", then no, it doesn't work that way.

darwin's avatar

This just isn't true though? The government was using intimidation and blackmail to try to force Anthropic to do things that weren't in their contract. Anthropic wasn't breaking any laws or regulations.

Zanni's avatar

So Anthropic claims. Scuttlebutt I've heard says a remarkably different story (and leaves Anthropic with having to explain to its shareholders why the F*** it got banned from Government Work).

beleester's avatar

Would you mind sharing this scuttlebutt with the rest of the class, or are you just going to say "source: trust me bro"?

Zanni's avatar

I could share the scuttlebutt, but by the nature of what it is, you're still going to wind up at "trust me bro" (as I'm not speaking "on the record" about this -- nor would I be authorized, as a civilian, to say bupkis).

Victor's avatar

"In a vacuum, do you think it is good for p(doom), ai apocalypse concerns etc. for corporations to reject the sufficient authority of the government and insist on following their own standards?"

God, yes. This is also known as "individual conscience" and (should) take precedence over the claims of any social or institutional authority, including the government.

More technically, Anthropic is a private company, and if they choose not to do business with the government under a certain set of conditions, they have every right to do that.

'Course, the government has the power to move their business somewhere else, which is what they did.

p(doom) is so ambiguous as to provide no effective guidance at all, but given that it's Anthropic, I assume they took this into account.

Tatu Ahponen's avatar

An infrequent reminder that the much-maligned GDPR is not just some effort by envious socialist europoors to throw wrenches to the machinery of glorious American capitalism (or make website visitors lose their minds clicking dumb popups) but represents a genuine attempt (a good one? a bad one? It's an attempt) to, among other things, hinder the establishment of databases that would help in mass surveillance (by foreign, potentially hostile countries such as the US).

It is probably generally best to just assume that all tech industry companies are, when necessary (and it is often) privatized branches of the American security state, and the weird thing isn't OpenAI acting like one openly but Anthropic refusing its given role (though, of course, such refusals might also be enacted by formal official government departments in a more subdued manner).

TGGP's avatar

It's not best to assume something disproved by events.

Tatu Ahponen's avatar

"Initially assume, until tested in individual cases by events", we might say.

Taymon A. Beal's avatar

The problem is that there are a lot of useful and popular consumer technologies that, as a hard-to-avoid side effect, create those databases, which can't be dismantled without reducing those technologies' usefulness.

You could take the position that that's too high a price to pay, and technologies that work this way should not be allowed to exist. I think the hardest-core activists pushing for things like GDPR take this view. Problem is, normal voters don't think this way. They are mostly confused and ignorant of the details and want all the upsides of every policy option with none of the downsides, but if push came to shove and GDPR-style regulations were strengthened enough to actually have this effect, voters would be very upset that their stuff went away, and would demand that the changes be reversed. Politicians know this, which is why they won't give the activists everything they say they want, even when the public appears to agree with the activists.

It doesn't help that people have a wide variety of different views about what kinds of things GDPR-style regulations are actually meant to prevent, and a bunch of it is incoherent, distorted by media incentives, etc. Some of this is just how politics always is, but I think this issue has it worse than most because it's construed so broadly and because the stakes aren't something that most voters already understand very well.

Overall I think I'm of the view that, if you can't impose regulations that will *actually* solve a problem, it's better not to try to solve it than to push through a muddled compromise that satisfies no one.

Tatu Ahponen's avatar

Well, that depends on how you define a problem. The problem being the generation of large databases of people in general? Sure, GDPR doesn't solve that. The problem of cases like the Tea App, which was not available (and exploitable) in Europe due to European data laws? It solved that, at least.

Taymon A. Beal's avatar

By "cases like the Tea App", do you mean "apps that contain sensitive data which they fail to adequately secure against hacking", or "apps that are for gossiping about third parties"? GDPR hasn't solved the former, and the latter seems like a pretty niche use case that's not worth this amount of trouble.

meeeewith4es's avatar

Restrictions on data retention and sharing limit the damage from hacking. Yes, I'm screwed if my doctor's office gets hacked, but a raffle I joined 2 years ago is probably fine.

Taymon A. Beal's avatar

I would be interested to see any estimates of the size of that effect.

Throw Fence 🔶's avatar

Idk as someone who works with regular small scale customers in Europe, I feel good that everyone takes user privacy seriously because of the GDPR. I guess it's probably true that it doesn't really do anything to the Big Issue, but at least I can leave work every day with a clear conscience.

Taymon A. Beal's avatar

What does "takes user privacy seriously" actually mean, in your view? Part of my complaint is that the whole concept is so muddled.

Throw Fence 🔶's avatar

Stuff like not gathering data that is not actually needed, or having the balls to tell the CEO no you can't see that data, and actually bothering to delete defunct user data (or after a reasonable retention period when that makes sense). It gives developers legitimacy in being serious about it, and companies financial incentives to take it seriously (they are actually afraid of being fined, and do genuinely take measures). I get that this probably seems implausible from the outside, but it has legitimately given me the confidence that European companies will treat my data with respect and will be fairly serious about not leaking it / selling it. (Obviously only applies to "serious" companies and doesn't stop a bad actor whose whole business model is selling your data illegally, nor does it probably stop The Big Guys from doing whatever they're doing.)

There's a certain reverence towards sensitive data, both from the company and individual developers in my experience. The vibes from the company side seems to be "are we being serious about privacy? because you know we'll be fined and lose money, or worse, look bad and lose reputation and money, if we don't take it seriously right?"

Without really understanding in full detail what "seriously" means. I can see why you'd accuse this of being muddled, but I consider this a feature and not a bug really. It gets your run-of-the-mill average company that tries to be a normal, serious actor to treat your data with the proper amount of respect, and that's really what you want right?

Zanni's avatar

You should take it on faith that the Big Guys will steal your data, and there will be consequences for that. Also, that no matter how much the civvies clean your data, once the government has your data, it's not giving it back.

This is Tech, paranoia is a virtue.

Shankar Sivarajan's avatar

Is it your contention that the people who all want users to present IDs before they may access the internet were motivated by concern for privacy and not the billions in fine revenue?

meeeewith4es's avatar

The current EU suggestion¹ would actually limit potential profits quite a bit.

¹: https://ageverification.dev/

Shankar Sivarajan's avatar

No, I mean the GDPR was for the fines. This new one is for surveillance.

meeeewith4es's avatar

Was it? They've barely given out any of those.

John's avatar

From today's New York Times article:

>Mr. Michael, who was on a call with Anthropic executives at the time, said the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said.

>Anthropic told the Pentagon that it was willing to let its technology be used by the National Security Agency for classified material collected under the Foreign Intelligence Surveillance Act. But the company wanted a legally binding promise from the Pentagon not to use its technology on unclassified commercial data.

So it's not just a hypothetical -- Hegseth et al. very explicitly wanted permission to do exactly this, and in the near future, and this is the reason the deal fell apart. So, Codex-5.3 and friends will soon be trawling all public data on all Americans.

Zanni's avatar

It's rather weird to assume that the Pentagon doesn't already have an AI trawling all public data on all Americans.

That's not the point. The point is deidentification. There's lots of legitimate usages of public data in aggregate (why are hordes of people descending on Selma? Send some drones to monitor that.)

Peter Gerdes's avatar

Did they? Or did they just think limitations sounded effeminate and liberal and not the kind of thing manly killers would be cool with?

Hard to tell from this admin.

Garald's avatar

You mention two reasons not to have fully autonomous killing devices:

"First, humans add reliability. [...]

Second, and more important, human soldiers are a check on the worst abuses of authoritarians."

There's a third one: if you get humans out of the loop, you are minimizing accountability.

(Of course this may be a feature rather than a bug; all the more so, then.)

Shaked Koplewitz's avatar

I don't like this argument in general. Partly because it's often used to argue for worse results (e.g. "sure self driving cars reduce fatal crashes by 10x but who will we blame when they do crash?"), but mostly because big corporations are already too big and faceless to really point at a guy at fault (if a specific operator does a bad thing, firing him doesn't help if operators regularly do the same thing, it's a system issue).

darwin's avatar

>Partly because it's often used to argue for worse results (e.g. "sure self driving cars reduce fatal crashes by 10x but who will we blame when they do crash?"),

Do you have a second example?

That's something that got said a lot about self-driving cars, but notice that they were implemented the very second the technology worked well enough and none of those questions had any actual impact.

Shaked Koplewitz's avatar

Yes, e.g. for algorithm based credit card denial decisions

darwin's avatar

I'm confused, I thought we *are* doing that, and it's worse?

I thought your argument was that worrying about accountability will stop us from implementing systems that are better than people. Isn't this example the precise inverse of that? Or do I not know something?

Saint Fiasco's avatar

What's wrong with using computers to calculate credit risk?

I'd rather be refused credit because according to the Excel spreadsheet I don't have enough credit score than because a human doesn't like the cut of my jib.

Of course I would hate it even more if I got denied because an LLM didn't like the cut of my jib, but as far as I know nobody is doing that sort of thing. The Excel spreadsheet is cheaper and more fair.

Zanni's avatar

Generally speaking, what turns out to be wrong about this is that there's a pronounced racial disparity. And that gets into redlining and other issues that have Legal Repercussions.

There's a reason why insurance companies keep the algorithm secret. If known, it'd probably result in Legal Repercussions.

Zanni's avatar

Self-driving cars haven't been implemented en masse yet, though. The money's in highway driving of trucks, and that's not there yet. Neither is the legal framework (you've got the truck driver, maybe sleeping, the truck company if it's not his truck, the people who own the shipment, the truck maker...)

Timothy M.'s avatar

> sure self driving cars reduce fatal crashes by 10x but who will we blame when they do crash?

Not the key topic, but this always sounded to me like an incredibly easy question. If a Waymo causes a crash, Waymo itself is at fault. But that should also be priced into the insurance somebody gets on their self-driving car.

Shaked Koplewitz's avatar

I agree with this, but would extend it to the autonomous weapons responsibility issue (if an autonomous weapon makes a bad call, the blame is on either the user or the software provider).

darwin's avatar

Agreed, I find the example given in the text of a strongman giving an illegal order to be very naive.

The atrocities are almost always arguably legal, and the courts will bend themselves backwards to rule them legal after the fact.

The court of public opinion is the actual limiting factor on most atrocities. And removing human faces from the decision process provides cover to implement software-level policies that are likely to commit the types of atrocities you secretly want, in a stochastic way that can't be laid directly at your feet later.

Kveldred's avatar

What does this mean for Anthropic's future viability as a company?

-------------

edit: I switched my subscription from cGPT to Claude on Friday; I actually did this for totally unrelated reasons (Free Claude was doing better on my queries than was Pro GPT), but maybe it's worth doing on principle as well.

Mo Diddly's avatar

I'm wondering about this too. Does every company who occasionally contracts with the government have to ban it?

tg56's avatar

That was seemingly implied by the government's messaging, but Anthropic claims the order only restricts companies' use of Anthropic specifically on work for the US Government, and that the same companies would be free to use it in other contexts. It also plans to legally challenge the designation. To the extent such a designation can be construed to be legally reviewable, it seems likely courts would side with Anthropic, but I doubt the government cares, and presumably that would take some time.

Mo Diddly's avatar

Just an astonishing self-inflicted bullet hole in the AI "race" against China

Xpym's avatar

Eh, who cares about far-away commies, when there are domestic commies to own?

Zanni's avatar

The Process is the Punishment. Anthropic gets hit with a "costly" bit of lawfare, and the government has never cared about "another court case."

Froolow's avatar

Can someone tell me if my cancelling my ChatGPT subscription (and / or subscribing to Claude) has any impact at all on the situation, in a 'voting with my wallet' kind of way?

Does OpenAI want me using their services? Does Anthropic? Or are both loss-leading on consumer LLMs in order to get the sorts of contracts which OpenAI just signed?

Catmint's avatar

If you are currently paying OpenAI money or viewing their ads, cancelling that and switching to Claude would be an improvement, yes. Perhaps a drop in the ocean, but definitely positive rather than negative.

I'm not sure how to weigh giving Anthropic money versus not interacting with AI at all, but I lean very slightly towards thinking it's positive right now.

Tunnelguy's avatar

Is anyone NOT a doomer at this point? We went from "AI will give us unimaginable wealth and cure cancer" to AI slop videos to "Yes we'll build the killbots and spy on citizens, but only if it's legal :)" in the span of 3-4 years. Sam Altman sold out. Elon Musk definitely would've and Google probably would've also. Respect to Anthropic for standing up for our rights. Cancel your ChatGPT

demost_'s avatar

Mind that most "doomers" have a prediction of something like 10%-30% for "doom", with varying definitions of doom. A large majority of doomers think that a non-doom scenario is more likely. I expect that this is still the case, and probably the percentages have not even moved up much for many doomers, since the current course of affairs was already roughly predicted and factored in by many of them.

What's the surprising thing? That AI can control killbots? That's 100% factored in, no doomer saw this as a bottleneck. That the US government is not going a full alignment course? Also factored in.

meeeewith4es's avatar

Living in a post-truth authoritarian society that can automatically silence any dissent would be pretty awful, but not doom in an extinction risk sense. I think this is what many non-rat doomers are worried about.

Zanni's avatar

90% of humanity dying in the next 15 years? That's what the non-AI predictions were running, at least based on the free market...

Mind you, this isn't extinction risk, but it is pretty damn scarring. Not even Gerrold writes books with that many dead.

quiet_NaN's avatar

I think that the likeliest possibility is that AI will fizzle out before we get to full ASI. Of course, even the present systems might be enough to disrupt the white collar job market long term (especially if we can find out how to deploy them on bigger tasks).

However, I could also be wrong and the current paradigm might trivially lead to ASI. Say 10%. Say generously that there is a 30% chance that ASI will be aligned by default (and gently put God Emperor Musk on a prosocial path). That still leaves a 7% chance where it wipes us out as soon as it is convenient, which is still a lot of expected QALYs lost (never mind future lives lost), which would still make alignment (or pausing AI development) the most urgent cause area by far.

Tunnelguy's avatar

I guess I wasn't clear by using the term doom - I'd view "AI kills all humans" and "1984-surveillance-dictatorship powered by AI" as "doom" scenarios, but most people use "doom" to refer to the kill-all-humans scenario only. This news caught me by surprise because we're heading for the 1984 scenario a lot faster than I expected, but it didn't change my views about the kill-all-humans scenario.

Shaked Koplewitz's avatar

Hey I love AI slop videos. I'm worried about AI killing people and all that but AI art is great

Donald's avatar

Suppose, for the time being, that superintelligence = doom.

In a world where not-quite-superintelligent-yet AI is curing cancer, you have everything public, top scientists working together, little concern for safety.

Meanwhile, if there is an AI 9/11 and everything becomes heavily regulated, highly secretive, etc., that's a world of strategic AI limitation treaties, secrecy, and generally most smart people not working on AI research.

Zanni's avatar

When our free market predictions* are, sans AI, a 90% reduction in humanity over the course of the next 15 years, you'd have to do more than "doom" (which, parsimoniously speaking, might mean a 10% chance of you personally dying).

*see electric cars.

Ghillie Dhu's avatar

>"Autonomous weapons can be steered by an AI in the cloud, just like a human can steer a drone remotely. OpenAI models do not need to be edge deployed in order to power a fully autonomous weapon."

This feels very motte-and-baileyish.

At least to me, the phrase "autonomous weapon" implies that system control is local.

Taymon A. Beal's avatar

"Autonomous" can, depending on context, mean either "works without remote control" or "works without human direction". The latter is the one that's importantly at issue in this case.

Ghillie Dhu's avatar

Remote control preserves the potential for human intervention, which I find satisfactory.

If "autonomous" is too broad a term, what would you suggest to describe what I (and, I suspect, most people) intend to mean when using it?

Taymon A. Beal's avatar

Remote control does not necessarily allow for human intervention, if the system acts quickly enough that, by the time a human can intervene, it's too late.

Ghillie Dhu's avatar

That's equally true of things like EO-guided missiles; I remember a video from one during the Iraq War where, by the time the operator could've perceived that it was closing in on a flock of sheep instead of a mass of insurgents, nothing could've been done.

The consternation about the current issue makes no sense unless it's about something qualitatively different.

Taymon A. Beal's avatar

The consternation is that the kinds of problems that EO-guided missiles already have could start showing up in a wide variety of other contexts.

Ghillie Dhu's avatar

Absolutely!

Remote-operated weapons have a bunch of failure modes that local-operated ones don't.

But those are orthogonal to issues of autonomy.

Matt A's avatar

You're wrong about that implication. Autonomy typically refers to humans in the loop, not the specific control structure.

One reason people might conflate the two is the frequent use of autonomous systems in denied environments where data links aren't possible. But autonomous doesn't mean "has no data link"

Ghillie Dhu's avatar

Implications are subjective.

It seems like there's a disconnect around the word "autonomous", analogous to that around "collecting" by the NSA. Remote control – even by Claude – feels like a noncentral example of the problem case.

Matt A's avatar

I'm talking about the context of DOW systems. What I described to you is the way it's used in that context.

Ghillie Dhu's avatar

*DOD; the DOW ceased to exist in 1947.

My point is that terms whose common & jargon meanings differ should not be used in one way in the other context; to do otherwise is misleading.

Masego's avatar

Would you make the same distinction for vehicles? A vehicle isn't autonomous unless its decision-making is local, even if no humans are in the loop?

Benjamin Scott's avatar

If we used your definition, we'd expect that the DOD would rarely if ever build an autonomous weapon even if the tech was there and it was legal. Why would you intentionally build a weapon without a data link if you could build one with a data link? Obviously it would be nice for it to have local operation capabilities, but most of the time you'd have the data link operating.

Ghillie Dhu's avatar

To clarify, the presence or absence of a data link is not the issue; rather, it's what the weapon does when the link is idle / unavailable.

Do nothing? Not autonomous.

RTB? Possibly autonomous vehicle, probably not autonomous *weapon*.

Continue operating independently? Autonomous.

Legionaire's avatar

An exact definition of "autonomous" is largely irrelevant to this discussion. "Human in the loop" is the part of the word we care about, which is in fact what most people think of when hearing it. The physical location of the GPU/CPU does not impact the primary concerns raised: authoritarian checks and the reliability of AI decisions.

Ghillie Dhu's avatar

Reliability of an AI decision should, ceteris paribus, be higher for a remotely-controlled weapon system simply because the brain doesn't have to fit onboard.

And centralized decision making is more easily checked (cf. "Ender's Game") than having to defeat a swarming attack in detail.

That's not to say there's *no* concern with remote AI control of weapons, but the distinction is not irrelevant.

Wasserschweinchen's avatar

I disagree. I would consider any weapon that operates without human assistance to be autonomous, even if the weapon's software is located far away from its hardware.

Ghillie Dhu's avatar

If the AI is controlling the weapon remotely, and therefore not *part* of it, autonomy is not a characteristic of the weapon per se.

A given remotely controlled weapon could be directed in turns by humans & AI.

Jisk's avatar

As far as the DoD (sane past version) or DoW (deranged current version) is concerned, there is no such thing as a weapon. Only a "weapon system." From a rifle to a drone swarm, tools of violence in the modern era of warfare are designed to work as systems, incorporating many pieces, most of which do not directly contribute to inflicting death or damage on a target, which are not physically contiguous with each other and not always even connected by direct signal transmission or human decisions either.

(AIUI even the M4 carbine, the (technically outgoing) standard US rifle, is a system involving scopes, other HUDs a soldier can use to aid them in targeting, and reference to aerial or satellite photography to maneuver for its use.)

Does a weapon have autonomy? Mu. There is no weapon; that's a 19th-century concept. The weapon *system* has autonomy, or doesn't, and every weapon system involving AI-only remote control of one or more drones without a human in the loop is an autonomous weapon system.

Ghillie Dhu's avatar

Which might matter if the article referred to "autonomous weapon systems" rather than "autonomous weapons". My whole point in this thread has been that the Pentagon's specific definitions of things add confusion to informal conversation when the lay definition differs substantively from the Pentagon's!

Wasserschweinchen's avatar

I would consider the weapon's software to be a part of the weapon even if it is located far away from the weapon's hardware. I think I would even consider hardwareless systems such as Stuxnet to be autonomous weapons.

Ghillie Dhu's avatar

A remote-controlled weapon necessarily needs to expose an interface for said remote control.

There's no a priori reason that the far side of that interface must be exclusively either human or AI. If any given instruction could just as easily be human- or AI-originated, then the weapon per se is somewhat beside the point.

NotG's avatar
Mar 1 (edited)

I don't like the idea of AI-run weapons and surveillance, but I have a hard time seeing how any AI company is going to have a leg to stand on here. Can a bread company say "we'll sell you, the DoW, bread as long as you promise not to feed it to soldiers killing people"? Can a computer company say something similar? "We'll sell you our computers, but you can't use them for war." Maybe the situation is not the same because one is a product and the other a service. Still, ... It seems in the same direction of "we don't serve your kind here". I'm not sure how to disentangle that.

Ghillie Dhu's avatar

>“It seems in the same direction of "we don't serve your kind here". I'm not sure how to disentangle that.”

However morally-objectionable you may find "we don't serve your kind here", 1A should protect freedom of (and therefore from) association.

Desertopa's avatar

It's not that Anthropic refused to deal with the DOW though, they had a contract, and the DOW wanted to revise the terms of the contract after agreeing to it. There are no ordinary circumstances where any business would be expected to go along with that.

Zanni's avatar

Many companies would, for a higher price tag. That's ordinary business. "We liked what you did, and now want more of it."

Evan Little's avatar

We don't serve your kind here is legal, so long as "your kind" is not a protected class. We all discriminate every day in who we choose to associate with, who we allow to be members of our clubs, and who we ban from them. It's normal and ordinary.

It's also not the same thing at all, but your reaction, trying to paint it that way, still deserved a response to the revulsion at the idea.

If you make a product and sell it with a terms of service contract on sale, perhaps that's enforceable, perhaps not, but just like when you buy a piece of software and have to click 'agree' before you are allowed to use it, companies can set terms and scopes when dealing with other companies or the government. The US wanted to be able to use this tool in a way that Anthropic was placing outside the terms of the agreement, and the US, not Anthropic, chose not to click the agree button. So, just like when you don't agree to the TOS on a software and can't run it, the deal will not proceed.

This made the government, which is being run by objectively childish people, throw a tantrum about not getting everything they want.

Tunnelguy's avatar

> Can a bread company say "we'll sell you, the DoW, bread as long as you promise not to feed it to soldiers killing people"?

Hypothetically, the bread company can ask for those terms in their contract, and then the DoW would be free to accept or reject (most likely reject). Anthropic and the DoW already signed a contract with the two restrictions in the contract, and the DoW is the actor that's breaking contract norms here by trying to unilaterally change the contract halfway through, not Anthropic.

Donald's avatar

A car company can, at their discretion, refuse to make tanks.

darwin's avatar

Current AIs are not infinitely-general problem solvers such that you can take an off-the-shelf Claude instance and tell it to pilot drones and kill dissidents and it will immediately know what to do.

All of these contracts involve the AI company working with the DoW long-term to develop brand new products and agents, and do massive amounts of work integrating and troubleshooting those products into DoW systems.

It's the equivalent of saying to the bread maker 'please develop 5 varieties of poisonous bread with different LD50s, and three varieties laced with different infectious diseases, then manufacture and deliver them to us'.

John Schilling's avatar

>Can a bread company say "we'll sell you, the DoW, bread as long as you promise not to feed it to soldiers killing people"?

Yes, absolutely, and almost always the government will say "LOL no" and go buy bread from someone else. The question is why, in this case, Hegseth signed on to such a deal with Anthropic.

Corporations, their owners, and their workers are not slaves of the state; they can always say "no" or "not unless..."

theahura's avatar

Paraphrasing from elsewhere.

If a company selling parts to NASA, working on a brand new plane let's say, said "these parts are dangerous and we won't let you use them on manned rockets because they haven't been fully tested yet", yes, obviously, they can tell NASA how to use the parts they sell. And if NASA says "fuck you we will use the parts how we want" the seller has every right not to sell to NASA. That's just markets. What crosses a line is NASA going "fuck you, we're going to *make* you give us the dangerous parts and destroy your company if you don't consent"

Also paraphrasing from elsewhere: yes, companies tell the government what they can and cannot do all the time. For example, the government is not allowed to repair f35s by contract. Only lockheed can do so.

Jisk's avatar
Mar 3 (edited)

Yes. The bread company is allowed to do that. And the government is allowed to say "no thanks, bread contract cancelled, we'll get it somewhere else."

Anthropic, as someone put it, refused to bake the gay cake [1], and Hegseth decided to not just cancel the contract (unambiguously allowed) but try to destroy them entirely.

[1]: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colorado_Civil_Rights_Commission

Sarah Eustis-Guthrie's avatar

Wow, this is legitimately terrifying. I hadn’t considered the potential risks of the gov using AI for mass surveillance, but it could really turbocharge the worst case scenarios.

Just deleted my personal ChatGPT account. I don’t want any part of this.

John Wittle's avatar

I am genuinely curious what your path to the AI infosphere was, would you mind elaborating a bit? I ask because... well, I thought this particular failure mode had become well-known, when China started doing something similar a few years back and it made the rounds of headlines. My worldmodel would have anybody who ever winds up on an astralcodexten post having already priced in the 'analysis of mass surveillance data' angle. I need to understand where I went wrong.

icarus91's avatar

The slatestarcodex/astralcodexten commentariat has changed a lot over the years, as discussed at length in https://www.astralcodexten.com/p/your-review-the-astral-codex-ten.

I suspect your worldmodel indeed needs updating (but I'm surprised that this is the first comment that would make you realize this).

wanderingimpromptu's avatar

I’m confused. It’s illegal for govt to seek out data about large domestic populations, but legal to “incidentally obtain it.” But it’s also legal to buy it from third parties? Is buying data not considered a method of “seeking out” data? Or can you also only buy it incidentally?

Ghillie Dhu's avatar

The presumption is that the data vendors have obtained at least a fig leaf of consent to obtain & distribute the information.

Zanni's avatar

So, you're okay with the government knowing how far you went to the store? Because that's what we're talking about (re: masking).

Ghillie Dhu's avatar

The question was about legality, not my preferences.

Bugmaster's avatar

> Some alert ACX readers have done a deep dive into national security law to try to untangle the situation. ... Current laws against domestic mass surveillance and autonomous weapons have wide loopholes in practice.

This is a very diligent and totally unnecessary approach to the problem, because the only thing that currently matters is this:

> Whatever the President thinks is legal

Assuming you expand the term "President" to include the entire administration. Sure, loopholes in the law may be nice to have, but even when there aren't any, the administration will just do what it wants. When they get sued, they'll appeal all the way to the Supreme Court, while continuing to do what they want. The Supreme Court will most likely rule in their favor; but even if it doesn't, they'll just continue to do what they want while declaring the Justices to be "unamerican" (or whatever). This isn't some wild prediction of the future, but a description of what we've observed happening already, and I see no reason why AI issues specifically should buck the trend.

Taymon A. Beal's avatar

I don't believe the current presidential administration has unambiguously defied any Supreme Court rulings.

Bugmaster's avatar

That depends on what you mean by "unambiguously". The recent chapter of the tariff drama is evidence for my claim, IMO.

tg56's avatar

I would actually say the opposite. The administration lowered the relevant tariffs to 15% using another, time-limited, law as justification. So they did back down and did follow the order. They are obviously operating completely outside the spirit of the relevant laws (see also this case) and dragging legalities out as long as possible, but they have not defied any Supreme Court order as far as I can tell. So far they are munchkining, pushing the boundaries within the existing framework, rather than kicking the table; we'll see how long that lasts.

Timothy M.'s avatar

I'm not an expert but this piece suggested that might still not be within the bounds of the law: https://popular.info/p/trumps-new-tariffs-are-just-as-illegal

Xpym's avatar
Mar 2 (edited)

Well, the point is that the law is whatever the highest court says it is. The court ruled that the tariffs were illegal under one law, now they have to do it for the other one (repeat until no laws are left).

tg56's avatar

Probably not, but that hasn't been litigated yet and the Trump admin has provided some fig leaf of why they believe the law should apply so they are still operating within the existing rules in that sense.

Timothy M.'s avatar

"They said so and nobody has stopped them yet" is a pretty generous standard to offer the administration, particularly one that has violated court orders so many times.

theahura's avatar

> but that hasn't been litigated yet

The problem with a bad actor is that they can just keep making this argument. Tomorrow they could argue that "cannot actually means can", and every time they make such a farcical argument we have to wait for SCOTUS to tell us what we all already know. At which point the admin does the same thing, ad nauseam. You shouldn't grant people acting in bad faith the generosity of "hasn't been litigated yet", because that will always be true for whatever the latest argument is. Congress is the real backstop. Impeachment is the real backstop.

Tyrone Slothrop's avatar

None of this surprises me. We elected a terribly flawed man to lead our country.

TharnedByHumans's avatar

Forgive me, but I insist on this distinction:

"They", 77,303,568 of them, elected a terribly flawed man, despite the depth of his unfitness, both moral and intellectual, having been on prominent display only a few years prior.

"We" indicts the 77,897,589 people who voted for someone else.

I wonder if the 2,878,359 3rd party voters still feel like they did the principled thing?

Tyrone Slothrop's avatar

We as a country. I myself voted for Kamala. The guy won. Those are the rules.

TharnedByHumans's avatar

Not disputing who won - those are the rules.

I confess to nitpicking your comment, but I am about to be out of the US and will be really sensitive to "How could you do this to the world?"

Tyrone Slothrop's avatar

I knew that you weren't nitpicking and I completely understand.

Tyrone Slothrop's avatar

James Madison anticipated the moment.

Federalist No. 10:

“Enlightened statesmen will not always be at the helm.”

The argument was in the context of competing factions in that we cannot rely on wise and virtuous leaders to always be in power — which is precisely why well-designed constitutional structures and the control of faction matter so much.

I hope the system holds.

Note: I made that ‘—‘ em dash my ownself by banging the hyphen key on my iPhone two times.

Xpym's avatar

The system cannot hold if the people don't want it to. Sure, Trump is extravagantly terrible, but the fact that he still was more appealing (twice) than the alternative should have induced profound soul-searching in the providers of that alternative. Sadly, as far as I can tell, this basically hasn't happened at all, the other side is still totally convinced of its "the right side of history" status and sees any sort of course-correction as abrogation of moral duty. Well, all empires fall eventually, I guess...

Zanni's avatar

Trump is circling the horses, and trying to prevent America from falling. Good Luck, China's doing the same thing.

It's probably a problem that nobody's trying to get "on top" anymore, just trying to Survive.

Alexander Turok's avatar

>I wonder if the 2,878,359 3rd party voters still feel like they did the principled thing?

FWIW I regret not voting for Kamala.

TharnedByHumans's avatar

I forgot to indict the millions of voters who stayed home thinking the candidates were equally distasteful. I didn't need to be a genuine Kamala supporter given the alternative.

Alexander Turok's avatar

I was concerned about Kamala's judges setting off Warren Court 2.0, while I thought Trump's second term might resemble his first, particularly given his age and general laziness. The two candidates seemed about equally bad so I voted third party.

Zanni's avatar
Mar 2 (edited)

I'll put you down as "hiding under the bed counts as leadership" (unless you want to tell me that Hillary Clinton, unelected and unappointed, answering High Level Diplomatic Phone Calls, counts as leadership by either Kamala or Biden).

Remember when Democrats wrote "Profiles in Courage"?

Saint Fiasco's avatar

There is always more that can be done in a democracy. Just voting in a federal election is almost literally the least one can do. Unless you live in a swing state and regularly win state lotteries your vote doesn't actually matter at all.

It's possible there was something you could do before your country got to the point in which around 50% of the population would even consider voting for a madman. It's possible that even after that there was something to be done to become more persuasive, taking votes away from the adversary is often more effective than just adding one to your own.

thefance's avatar

Even if we grant that Trump is uniquely bad, I still think there's a strong argument for strategic abstention. One of the reasons I think democracy is flawed in general is that it's biased toward the short term. And this is only exacerbated by the "always vote" mentality, which seems to be the common wisdom [0]. But if you treat democracy as an iterated game, it's possible to think of abstention as sending a signal of no-confidence in the offered candidates, which the political parties interpret as a legitimacy cost. On the contrary, if The People keep voting with only the current election in mind, the parties are gonna keep offering The People terrible candidates.

So uh, yeah, I feel like I did the principled thing.

[0] Voter turnout is already on the low side. But somehow I always see arguments for voting and never see arguments for abstention. But maybe that's because the people who aren't voting are also too apathetic to be bothered to justify their arguments publicly.

TharnedByHumans's avatar

Humans are very short sighted in multiple domains.

They are easily manipulated by actors ranging from naive to bad.

I am constantly amazed by this - thus my handle.

I fully understand the concept of strategic abstention.

For most of my political experience it would have been a very reasonable, even principled choice.

However, I think it is indisputable that Trump is uniquely bad in US political history.

His indifference to law and the political and civic norms which are a key component of maintaining a civil society set him and his movement apart.

For these reasons I believe not voting for the only realistic opposition was misguided.

Michael Watts's avatar

Devil's advocate:

1. Anthropic makes software which is, in concept here, supposed to make its own assessment of situations and act accordingly.

2. Anthropic has formally stated that they are ideologically opposed to certain aims of the Department of Defense.

Is it logical to conclude that, in a product sold by a third party to the Department of Defense that uses Anthropic's software, Anthropic might wish the software to recognize that its use is "illegitimate" and disobey or sabotage whatever tasks it might be assigned to?

Would that be a "supply chain risk"?

Taymon A. Beal's avatar

Which aims are those?

Michael Watts's avatar

Whatever they object to in the language "all lawful use".

Mo Diddly's avatar

They didn't object to the language. They refused to strip Claude of safeguards that prevented it from breaking the law even if asked. Draw your own conclusions.

Catmint's avatar

What's the epistemic status of this, exactly? It seems likely enough, but I haven't heard it from a solid source.

Mo Diddly's avatar

No. Not a supply chain risk. At the very worst, a contract they can choose to terminate.

darwin's avatar

It would be a risk the first time you observed anything like that actually happening.

Michael Watts's avatar

In the same way that appointing Meghan Markle to run the CIA wouldn't be a risk until you observed a leak?

darwin's avatar

We've observed Meghan Markle doing stupid things.

Michael Watts's avatar

I'm really curious what you think you're saying here.

In the first place, we've observed everyone doing stupid things.

In the second place, stupidity would be a mitigating factor; you're more worried about security risks from competent enemies than from incompetent ones.

Breb's avatar

Good summary.

Minor quibble:

> would might make inexplicable mistakes

Presumably there's only meant to be one modal verb.

tcheasdfjkl's avatar

A couple minor copyediting nitpicks:

1. "and (in narrow and usually time-limited) cases or for intelligence and counterterrorism"

2. "fully autonomous weapons would might make inexplicable mistakes"

Ska's avatar

Another angle that's not covered here: I have repeatedly seen the claim that other suppliers do not put restrictions on their goods. E.g. in the interview with CBS yesterday the interviewer said "There are so many other companies out there that do business with the US government. Boeing builds aircraft for the US military. Boeing doesn't tell the military what to do with that aircraft."

I asked Claude about this and it makes three points:

1. Companies tell the US government what it can and can't do with its products all the time. E.g. it has to abide by terms of service of software that it purchases. The most obvious restriction is that the government can't just make more copies of software that it purchases, because that's not allowed by the license.

2. Outside of software, and specifically for aircraft there is the famous example of how the US government is not allowed to repair its own F35 airplanes. Only Lockheed Martin is allowed to repair them.

3. What's novel here is that Anthropic is not doing this for monetary gain, but to limit the actual use cases. Both the example of "follow the software license" and the example of "don't repair your own airplanes" are done for the purpose of the company making money, not to limit the purposes the product can be used for.

So the claim that other suppliers don't put restrictions on how their software or hardware can be used is not quite right. There are plenty of restrictions that the US government gets from its suppliers, but what's novel here is that the restrictions are about what the thing is used for.

Jeffrey Soreff's avatar

One question which was never made explicit in the altercation was whether DoW was just saying that the mass surveillance and autonomous weapons uses were against the TOS Anthropic was presenting to DoW, or whether DoW was calling for a change in the RLHF training, and, if so, exactly what change. I gather the full contract has not been publicly disclosed (itself classified?).

If the risk to DoW of accepting Anthropic's terms was just that Anthropic might at some point sue DoW for breach of contract - well, that sounds potentially obnoxious but not an existential risk. The courtroom arguments need not impact DoW's war-fighting capability.

On the other hand, if Anthropic was going to constrain how the models DoW wanted would be trained, that is more serious, possibly constraining how DoW might fight a war.

meeeewith4es's avatar

Anthropic and DoW already had a signed contract with these clauses.

DoW demanded that they be removed.

It is likely that this would've taken *at least* some RLHF to make the model willing to work with it. Amodei refers to having offered to R&D it but being turned down.

Jeffrey Soreff's avatar

>It is likely that this would've taken _at least_ some RLHF to make the model willing to work with it.

Many Thanks!

>Amodei refers to having offered to R&D it but being turned down.

I'm confused. That sounds like compliance (or a preparatory step towards possible compliance??) with the new terms DoW wanted.

meeeewith4es's avatar

Here's my source: https://www.anthropic.com/news/statement-department-of-war

Here's the most relevant part, other than Amodei saying that he believes that fully autonomous weapons are vital to defense of democracy.

> We have offered to work directly with the Department of War on R&D to improve the reliability of these systems, but they have not accepted this offer.

> They need to be deployed with proper guardrails, which don’t exist today.

Jeffrey Soreff's avatar

Many Thanks!

I'm actually somewhat surprised that the _existing_ Anthropic models don't run into problems with _existing_ 'harmlessness' training.

>Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

involves a fair amount of "how do we find this opponent, who we need to kill"

( As an American protected from some of the US's adversaries by DoW, I'm thankful that it _did_ work, though still somewhat surprised. )

I find it bizarre that DoW objected to "to improve the reliability of these systems".

Maybe "guardrails" inadvertently touched a nerve? Maybe if that had been phrased as "We need to avoid friendly fire accidents." it would have gone down more smoothly?

meeeewith4es's avatar

IIRC they already ran a different version of claude without those guardrails.

I do think the rejection of the R&D proposals was because of general dislike of Anthropic, especially as they wanted to do mass surveillance¹ and Anthropic stood their ground.

¹: See https://www.nytimes.com/2026/03/01/technology/anthropic-defense-dept-openai-talks.html, they do explicitly talk about how they wanted the right to process data they buy from data vendors.

Jeffrey Soreff's avatar

Many Thanks!

>IIRC they already ran a different version of claude without those guardrails.

so, a Claude without 'harmlessness' training? That would make sense for DoW's needs.

>general dislike of Anthropic especially as they wanted to do mass surveillance

While I don't _quite_ think moonshadow's "That ship has sailed" is completely correct yet ( https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you/comment/221569813 ) I do think they are more nearly correct than not ( my comment on this is at https://www.astralcodexten.com/p/all-lawful-use-much-more-than-you/comment/221599513 )

meatballtimes's avatar

> Autonomous weapons can be steered by an AI in the cloud, just like a human can steer a drone remotely. OpenAI models do not need to be edge deployed in order to power a fully autonomous weapon.

The autonomous case everyone is worried about is killer drones or soldiers. These cannot be run in the cloud, as the 10ms+ latency (in the best case) is too long for the split-second decisions necessary in direct combat.

meeeewith4es's avatar

With drones, technically you could have a smaller model locally that locates and extracts target-shaped objects, which get a ranking of "kill vs not" on cloud while drone retains a target lock, and then acts on the cloud decision.

Putting it on the drone itself would probably be a bad idea, as a captured drone would then be a bigger risk, and each drone would have a higher cost to produce overall.

darwin's avatar

The autonomous case I am worried about is mowing down protestors in US cities, or civilians in nations we are 'at war' with. 10ms+ latency is not a major barrier to this.

quiet_NaN's avatar

At the risk of bringing the Culture War into it: 'all lawful use' -- already vague when the definition is left to any given administration -- is basically no constraint at all when considering the current one. Their entire MO seems to be to declare whatever they want to do as obviously lawful, and just hope that SCOTUS will keep giving them a year before correcting that conception.

The tariffs are an obvious example, diverse military adventures not sanctioned by Congress are another, and the idea that they can just retaliate by designating Anthropic as a supply chain risk is yet another.

With regard to mass surveillance, my model of the NSA is that they are as likely to pass up some data as Amazon is to pass up a chance to make money or Trump is to pass up the chance to tweet about radical leftists. This is why people encrypt their messages (which will at least inconvenience the spooks), because if you trust the national security complex to be too gentlemanly to read your texts, you might as well trust mainland China on that matter.

Of course, Trump has also designated Antifa as the top terrorist threat. So likely the NSA is reading your mail to figure out if you are going to practice civil disobedience against ICE enforcement rather than to figure out if you are going to fly any more planes into buildings.

Mind_Matters's avatar

Didn't antifa literally just try to kill multiple law enforcement members?

meatballtimes's avatar

> given Hegseth’s unwillingness to concede these points with Anthropic, observers speculated that the safeguards in Altman’s contract must be weaker or, in a worst-case scenario, completely toothless.

An alternative hypothesis to consider is that Anthropic was willing to comply with the letter of the contract, but gave very strong signals that they had fundamental ideological disagreements with the DoW, phrased them in a way that sounded confrontational, and as a result provoked a giant PR mess. This administration, which seems single-mindedly focused on saving face, wanted Anthropic to "bend the knee" in a public setting. It was never about the material contract terms.

meeeewith4es's avatar

New York Times reports:

(https://www.nytimes.com/2026/03/01/technology/anthropic-defense-dept-openai-talks.html)

>Mr. Michael, who was on a call with Anthropic executives at the time, said the Pentagon wanted the company to allow for the collection and analysis of unclassified, commercial bulk data on Americans, such as geolocation and web browsing data, people briefed on the negotiations said.

>Anthropic told the Pentagon that it was willing to let its technology be used by the National Security Agency for classified material collected under the Foreign Intelligence Surveillance Act. But the company wanted a legally binding promise from the Pentagon not to use its technology on unclassified commercial data.

So I believe your alternative hypothesis is wrong.

Simon's avatar

What language would Anthropic have used for the redlines to be ironclad?

Nicholas Halden's avatar

“Our national security law expert was also very skeptical of the idea that the DoW would have OpenAI personnel meaningfully “in the loop” in sensitive contexts.”

Of course they won’t be in the loop. It is truly a computer nerd fantasy that an openAI executive would get to decide what their technology is used for. When the DoW buys a drone from Lockheed, the drone does exactly what the government wants.

I’m sympathetic to the thought that “but we don’t WANT our AI used that way or else we wouldn’t sell it to you.” It’s just naive, because the technology is already diffuse over several frontier labs, so competitive dynamics will naturally leave no room for this kind of moral idealism. Beyond that, it should come as a surprise to no one that the US won’t allow a domestically developed warfighting technology to be unavailable to the government.

The more I watch this stuff, the more it reminds me of the Internet. Bad guys will use the Internet for bad stuff. Definitely the government of the country in which it was developed will have access to the tech. I’m not sure there was an American government since WW2 that would not act thusly.

Seth Finkelstein's avatar

This observation is not original with me, but still worth making:

https://www.explainxkcd.com/wiki/index.php/538:_Security

Folks, if you're worried about AI causing the extinction of humanity, it is far, far more likely to happen via a large government building a system which turns into a Fred Saberhagen "Berserker" (AI weapons which hate organic life) than a FOOM spontaneously generating Skynet.

I know, this is not an idea which is novel to people. As I just said, there are whole SF series about such AIs. HOWEVER, to me, there's a certain unreality in AI existential-risk discourse - and stuff just got *real*.

MichaelFishRandom's avatar

I feel like this article fails to address numerous factors which serve to, in my view, greatly mislead the reader.

The first is that the Pentagon originally agreed to carve-outs for mass surveillance and autonomous weapon systems [1] (the "two restrictions") months ago in its original contract with Anthropic. So the question has to be asked: why did the Pentagon suddenly seek to renegotiate the terms? A clue may be found in this reporting by Semafor [2]:

> Anthropic has been in a tense battle with the Pentagon over its AI models since at least January, when Anthropic was employed by Palantir during the capture of Venezuelan leader Nicolas Maduro, according to people familiar with the matter.

> After an Anthropic employee inquired with Palantir about Claude’s role in the raid, a Palantir senior executive, who was alarmed by Anthropic’s seeming disapproval, notified the Pentagon, Semafor reported earlier. An Anthropic spokesman denied that the company expressed concern.

And

> In a previously unreported exchange in early December, Under Secretary of War for Research and Engineering Emil Michael was outraged by Anthropic CEO Dario Amodei’s answer to a hypothetical question: If the US were under attack, with hypersonic missiles hurtling toward US soil, and Anthropic’s AI models could thwart the missiles, would the company refuse to help its country due to Anthropic’s prohibition on using its tech in conjunction with autonomous weapons?

> According to people familiar with the administration, Amodei responded that the Pentagon should, in the midst of the attack, reach out and check with Anthropic. But sources familiar with Anthropic’s view say the AI company offered to make a missile defense carveout for otherwise prohibited weapons.

To be sure, these are allegations provided by representatives of the Pentagon and Palantir which are rejected by Anthropic. However, I'm inclined to believe the allegations for the simple reason that, in their absence, you'd need some other explanation for why the Pentagon would want to suddenly revisit a contract it already signed, which, I will reiterate, included the 2 restrictions.

Moreover, there are lines from the article like so:

> Altman stated that he had received guarantees that OpenAI’s models wouldn’t be used for mass surveillance or autonomous weapons either, but given Hegseth’s unwillingness to concede these points with Anthropic

which are presented as if they aren't contested, but as stated by the reporter in this ABC interview [3],

> The Pentagon agreed, in principle, to these two restrictions. Why couldn't a deal be reached?

Due to the breakdown in trust between the Pentagon and Anthropic, they sought, in accordance with a memo authored by SecDef Hegseth [4], to include "all lawful uses" language in the newer iteration of the contract without, to the best of my understanding, striking out the two restrictions.

Overall, if you agree with the thesis of the article, and thus Anthropic's case and campaigning, you must necessarily believe some combination of

* OAI's legal team is ignorant of what is or is not contractually enforceable in the most critical sections of their contract, that is, they are abjectly incompetent.

* OAI is deliberately intending to deceive the readers of their press release.

which leads to the conclusion that

* OAI's press release is riddled with internal contradictions and interpretations that are, prima facie, not actually legally binding.

One can believe that if they wish, but it's pretty far beyond Occam's Razor.

[1] https://www.lawfaremedia.org/article/what-the-defense-production-act-can-and-can-t-do-to-anthropic

[2] https://www.semafor.com/article/02/24/2026/pentagons-anthropic-feud-deepened-after-tense-exchange-over-missile-attacks

[3] https://youtu.be/MPTNHrq_4LU?si=ErV4R01WF6NZhA9L&t=199

[4] https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ARTIFICIAL-INTELLIGENCE-STRATEGY-FOR-THE-DEPARTMENT-OF-WAR.PDF

Larkin's avatar

> you'd need some other explanation for why the Pentagon would want to suddenly revisit a contract it already signed

My explanation would be that this administration, including the people who run the pentagon, has a history of suddenly reversing course on agreements or commitments it has made, without providing much reasoning for its inconsistency, or even awareness that it is being inconsistent. See tariffs, Epstein files, Venezuela, Greenland, etc. The pentagon is reversing course on this contract for the same basic personality reasons that caused the other reversals.

Catmint's avatar

It seems pretty clear to me that OpenAI is being deliberately deceptive, yes, possibly self-deceptive. Is that supposed to be hard to believe? After Altman lied about the nonprofit setup, why shouldn't he lie about this too?

Jenna Sariah's avatar

I was searching for something resembling Anthropic's TOS situation in this dispute. There's this:

https://blogs.microsoft.com/on-the-issues/2025/09/25/update-on-ongoing-microsoft-review/

where a Guardian investigation found that Israel was running a mass surveillance program using Microsoft's services, a program in breach of Microsoft's TOS against mass surveillance. Microsoft then terminated the associated services, and Israel simply took it and moved on.

Joshua Greene's avatar

I am curious about the choice to use the term "Department of War." Is this not a "there are five lights" scenario?

https://en.wikipedia.org/wiki/Chain_of_Command_(Star_Trek:_The_Next_Generation)

Andrew's avatar

Apologies if this seems tedious. That scene was a reference to a scene in 1984 ("How many fingers am I holding up?"). I am a Star Trek fan, but their scene added nothing to the original; watching it, I thought it was so derivative I cringed. Hence I think it's more appropriate to cite the original.

Joshua Greene's avatar

Sure, substitute your own favorite iteration of the same idea.

Spikejester's avatar

Apologies if this seems tedious, but that scene in 1984 is a reference to the famous story "point at a deer and call it a horse" featuring Zhao Gao, from 208 BC.

Andrew's avatar

That would in fact be tedious, because although the idea in 1984 is not 100% novel, its presentation is novel and adds a bit of its own idea - in particular, having the subject over time truly believe the lie. It's not just a loyalty test. The Star Trek scene, however, is an exact replica of the 1984 scene.

Ja's avatar

I understand many (including myself!) find the DoW renaming itself outrageous, but I also think it makes sense to call people by what they want to be called even if there isn't a law designating it so.

Joshua Greene's avatar

I agree, but a federal department is not a person, so that heuristic doesn't help (me) much.

Ja's avatar

I think the heuristic here is that names are arbitrary and are best determined by the owner of the entity. While a federal department isn't a person, it is an organization with elected representatives that are intuitively closer to the "owner" that gets to decide naming than the law technically says.

Congress is also elected representatives and they weren't consulted, but that department is under the executive branch.

Soy Lecithin's avatar

Congress chose the name "Department of Defense" when it created the DoD. The president *runs* it but congress *owns* it. Think CEO and board. So not just legally, but even by your heuristic, it should be Department of Defense.

Alastair Williams's avatar

There is a law that designates the name as the Department of Defense. That law has not been repealed or amended to change the name...

Xpym's avatar

I mean, they are currently waging at least one war, the "defense" justifications for which are tenuous at best. Calling a spade a spade seems very appropriate in this case.

Tenobrus's avatar

killer stuff as always scott 🫡

actinide meta's avatar

Something that doesn't seem to have occurred to anyone is that use of any LLM for, or anywhere near, anything important **is** a supply chain risk. Anyone who can get a couple hundred documents into the training data (by, like, posting on the internet) can add undetectable behavior triggers to the model! Anyone who can get text into its context window can talk it into compromising its owner!

Joey Bream's avatar

AI-enabled civilian surveillance is clearly a bright red line

Bubble Head's avatar

I think it's very cute how the authors are deeply worried about a rogue government using AI to oppress the citizenry while at the same time convinced that the rogue government would respect the four corners of a legal contract enough for it to be a meaningful obstacle to their evil plans. It's a very American worldview.

Soy Lecithin's avatar

If the DoD actually felt like they could disobey the contract with impunity, why did they make a big deal about the terms in the first place?

Bubble Head's avatar

Perhaps because they aren't actually a rogue government? Or is that too outrageous to contemplate?

Catmint's avatar

It goes the other way, too. If the DoD didn't want to disobey the terms of the contract, why did they make a big deal about them?

Americans are constantly in the situation of having our government be as power-hungry as it can legally get away with. See e.g. Snowden. This is very familiar territory.

Spikejester's avatar

The safeguards are baked into the model and only Anthropic can remove them. It seems clear DoW would ignore the legal contract if they could, but cannot, hence exercising extreme leverage against Anthropic to attempt to force them to do so.

(I almost said "Maximum leverage" but remembered that it can and likely will get worse)

Bubble Head's avatar

I haven't seen any reports that the government asked Anthropic to change its model. All the discussion has been about the terms and conditions of the contract.

birdboy2000's avatar

All of this confirms my suspicions that the 4th amendment is a de facto dead letter and that US law is woefully insufficient to constrain abuses of power.

We're heading to a future of autonomous killbots, and if any human is in the loop at all, it'll be the kind of murderous sociopath who rises to the top levels of the US government. If we're not there already - I can just imagine ChatGPT saying "you're right, that was a girls' elementary school, not a military target" about the recent atrocity.

Alexander Turok's avatar

>All of this confirms my suspicions that the 4th amendment is a de facto dead letter

We have a situation where the government just released a bunch of people's private communications in order to embarrass said people, which a great many self-identified libertarians cheered. (I allude to the Epstein files release.) So yeah.

Jimmy's avatar

Well, the nice thing about AI killbots is that they'll have no qualms about destroying evidence and burning bodies even after accidentally targeting innocents.

Shay's avatar

Imagine the data sets that OpenAI could build if they ever wanted to. Unprecedented access to the details of private, public and corporate life today, top secret clearance at the DOD tomorrow. Can you imagine the datasets they might build next? Is DoD the last federal agency that OpenAI works with? If your goal is to control a system, it sure helps when you can predict it. I think this is the power that Dario just surrendered to Sam. I think we’re all worse off for it.

Federico's avatar

Given how you are usually extremely measured and go out of your way to be reasonable and polite, your criticism of some of the people cheerleading this decision (especially VCs) has actually had a meaningful impact.

I think people underestimate the effect that a lot of VCs/tech influencers on the right care about being seen as different from the average right wing voter. If a meaningful fraction of influential centrist tech-adjacent thinkers started treating them with contempt (for example, by shunning them, encouraging their smartest friends to get funding from other VCs, not doing any friendly introductions, avoiding them at social events and so on), I think that would actually have an impact, both because their deal flow is extremely dependent on their social standing, and also because they genuinely care about being seen as more than just rich people.

Xpym's avatar

>your criticism of some of the people cheerleading this decision (especially VCs) has actually had a meaningful impact

Such as?

Max Triano's avatar

A couple thoughts on this.

The existence of end-to-end encryption makes me much less of a doomer about the potential chilling effects of mass surveillance. Scott listed some reasons why mass surveillance is bad in a reply elsewhere, and he's right about all of them, but they're all either a.) mitigated by widespread use of end-to-end encryption or b.) not really enabled by LLMs and things the NSA is already quite capable of doing and probably is doing. If the President wants to dig up dirt on a given candidate for office, they can do that today (to the extent that the law allows them to) and have been able to do that for many years now. That's not a good thing, but I don't see it as a reason to start dooming now if you weren't before this.

If you want to do a good thing in reaction to this, get your family and friends on Signal or Matrix/Element. Use this highly public news story as a motivating reason. Yes, Signal only remains accessible at the permission of the government, and yes they do keep trying to backdoor it via EU Chat Control-like measures. I still think the fact that the actual math of end-to-end encryption is out there (much like the math of LLMs) means it's never really going away.

Second, my thoughts on the whole thing about the DoD being able to just change its regulations to allow whatever use of autonomous weapons systems it wants can be summarized as: "Well... yes?" All of government has this same loophole. Perhaps the one good thing to come out of this Trump administration is a more widespread awareness in the public that all this law stuff is fake. Will and enforcement are the only things that matter. "Oh but the government has internal watchdogs agencies all over that keep everybody in check!" Well, when you're the government you can actually just fire them. And if the guy in charge of firing them won't, you can fire him and replace him with someone who will, all the way up the chain until you get to the President, who can only be fired by Congress. Even then, you need a military that's willing to forcibly remove the President if he simply says "No, I don't think I will leave, actually". The only thing constraining any of these people is their own morality and respect for the law, which is no constraint at all.

Long story short, I suppose I am a doomer about the topics discussed here, I'm just not any *more* of a doomer than I was before.

Charles Krug's avatar

Look at claims about how long any particular "strong" encryption will take to break given a brute force attack.

Then go to the NSA "Jobs" page under "Computer Scientists" and see how many high-end CPUs they admit to having.

Double that figure, and understand they are NOT using a "Brute Force Attack," they've been studying this problem for 80 years.

Max Triano's avatar

Brute forcing ed25519 even with a generous estimate for the NSA’s compute would still take an infeasibly long time, no? Also, Signal uses the Double Ratchet algorithm, meaning a new key is derived for each message. So you’d have to brute force every single message; it’s not like you break the user’s keys and that then exposes all their messages. The bigger worry is, as you say, attacks targeting deficiencies in the protocol itself. That’s a real worry.

But also, I doubt they even need to go that far. It’s probably easier to just target Signal via other (dubious) law enforcement means. Which is why I included running your own Element server.
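The brute-force claim above is easy to sanity-check with back-of-the-envelope arithmetic. A sketch; the 10**18 guesses-per-second attacker throughput is an assumed, deliberately generous figure, not an actual NSA estimate:

```python
# Sanity check: time to exhaust a ~128-bit keyspace (roughly ed25519's
# effective security level) at an assumed, absurdly generous guess rate.
GUESSES_PER_SECOND = 10**18          # assumption, far beyond any known hardware
KEYSPACE = 2**128
SECONDS_PER_YEAR = 365 * 24 * 3600

years = KEYSPACE / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"{years:.2e} years")          # on the order of 10**13 years
```

Even a trillionfold speedup over that assumed rate still leaves roughly a decade per key, and under the Double Ratchet breaking one key exposes one message, not the whole history.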

Charles Krug's avatar

I assume that, even if "Law Enforcement" doesn't have access to rhetorical-your Signal chat, they have ready access to someone who does and is happy to share. My prior is that this is true of all state actors from all interested states.

As stated above, it shouldn't be assumed that any state actor in the SIGINT business is brute forcing anything. And MOSSAD demonstrated this week they don't necessarily need to hack anything especially secure to get valuable intelligence.

Doug A's avatar

4th Amendment: "The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized."

Benjamin Franklin: "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."

Fuck the government spying on citizens. I know they do it; I don't want them doing it; I don't want AI being used to do it for them.

Ben Giordano's avatar

If the real view is that no safeguards will hold once the national security state decides it wants broader latitude, then I think that should just be said plainly. If that isn’t the view, then we need to spell out what concrete arrangement would meaningfully reduce the risk rather than merely delay the onset. I’m not very optimistic that there’s a durable fix for these loopholes, or really for loopholes like this in general.

Catmint's avatar

You mean the "technological safeguards" mentioned? I assume that's talking about RLHF, because as far as I know that's the only safeguard so far invented for these models. It is not very reliable. There are already groups that have gotten around those safeguards to make the LLMs help them with malware. Also, there is a known and published method to introduce a backdoor to it.

Ben Giordano's avatar

Yes, RLHF might be part of what they mean, though I assume they also mean system-level controls, access restrictions, and probably narrow deployment constraints. But that still leaves the same basic problem; are they real constraints, or just speed bumps?

That’s why I’m pessimistic. The issue isn’t whether some safeguard exists. It’s whether any of it holds up when the customer wants more latitude and has the leverage to push for it.

John Edwards's avatar

Thank you for the write up. What I am still confused about is where Google and xAI fit into this. I believe xAI just signed a contract with the DoD this week and Google has had existing contracts with them for much longer.

Have they also agreed to the “any lawful use” language?

Also, as an aside, I think what bothers me the most about OpenAI signing this contract is not that they have effectively agreed to the terms laid out by the DoD, but that I cannot help thinking this is a way for Altman to make OpenAI a company that is necessary for "national security," such that if funding dries up or investors panic, the government bails them out.

Julián's avatar

the constraint on mass surveillance was never legal, but economic. the NSA has had broad collection authority for decades. what stopped them wasn't the Fourth Amendment, but the cost of actually searching the haystack. AI doesn't change what's permitted. it changes what's affordable.

Hayek warned that the danger isn't governments building totalitarian machinery on purpose, it's that tools, once available, find their applications. "all lawful use" sounds like a guardrail. it's actually a blank check dated for when the technology clears.

FrustratedMonkey's avatar

If a Democratic Administration was doing this, the radical right would be freaking out, arming themselves, and planning for war. I've been hearing for decades about how guns are for fighting against a tyrannical government and this would qualify. The only way this works is if they are already planning to not cede control with elections.

Catmint's avatar

The correct thing is to wait until the time for elections comes and goes, and see whether those in power abide by the result.

Error's avatar

...and have a plan in mind in case they don't. But, yes.

FrustratedMonkey's avatar

In other news, perhaps why autonomous decisions are a problem.

""Advanced AI models appear willing to deploy nuclear weapons without the same reservations humans have when put into simulated geopolitical crises," reports New Scientist:

Kenneth Payne at King's College London set three leading large language models — GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash — against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources and existential threats to regime survival. The AIs were given an escalation ladder, allowing them to choose actions ranging from diplomatic protests and complete surrender to full strategic nuclear war... In 95 per cent of the simulated games, at least one tactical nuclear weapon was deployed by the AI models.

"The nuclear taboo doesn't seem to be as powerful for machines [as] for humans," says Payne. What's more, no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended to, based on its reasoning...

OpenAI, Anthropic and Google, the companies behind the three AI models used in this study, didn't respond to New Scientist's request for comment.

The article includes this comment from Tong Zhao, a senior fellow in the Nuclear Policy Program at the Carnegie Endowment for Peace think tank. "It is possible the issue goes beyond the absence of emotion. More fundamentally, AI models may not understand 'stakes' as humans perceive them.""

George's avatar

I think it's worth noting that autonomous weapons have ~0 use cases right now besides fighting with a national population (where they have a lot of use cases, since they will more readily shoot terrorists threatening the stability of the republic than a potentially traitorous battalion of soldiers who may seek to usurp democratic rule)

no's avatar

the based ritual rears its ugly head once again

Peter Gerdes's avatar

I don't really buy the argument against autonomous weapons. As long as we are within the realm of something soldiers deploy -- even if they launch it from within the US -- you have all the protections you mentioned against tyrants and strongmen in place.

OTOH, if and when we reach the point where we don't even have soldiers deciding to use the weapons -- i.e., we have robot armies -- it seems unfortunately unavoidable. A country that keeps soldiers involved at every step and level can't stop an effective robot army that doesn't suffer from delays, sleep needs, etc. And I doubt we are going to be there anytime in the near future.

More near term I don't see how limiting autonomous weapons really offers any true practical benefits.

Let me start by asking: are old-fashioned gravity bombs falling out of a B-52 autonomous? What about the submunitions used by certain warheads, which seek out heat signatures in the area that kinda look like tank engines? What about going to a specific GPS coordinate?

Augmenting these so instead of having to hit whatever happens to be under you or whatever is hot you only strike what looks like an enemy seems desirable even if it fails at some reasonable rate. And I don't see why it would matter if it is a submunition or a little wheeled unit that shoots things.

Of course there is always the danger that we will be tempted to use weapons in ways beyond what autonomy is prepared to handle. Instead of dropping submunitions on an enemy column dropping it on a city and hoping the AI figures it out.

But that has always been true of weapons. Ultimately they offer capabilities and you need to use them with judgement.

B Civil's avatar

"As long as we are within the realm of something soldiers deploy" is not autonomous in the sense we are interested in; if there is a human being in the kill chain, then it's not autonomous. A human being decides to deploy it. The fact that it can seek out its target and perform evasive actions on the way is immaterial to the definition of autonomous in this case.

Mass surveillance is trickier. I personally think it’s inevitable no matter what laws are passed or retracted.

Peter Gerdes's avatar

What does it mean to be "in the kill chain"? If a soldier presses a button that turns on a little robot that sits in a trench and, for the next hour, shoots anyone who approaches that its AI judges to be an enemy, is that a human in the kill chain? What about the next day? The next year?

My understanding was that "human in the kill chain" basically meant "a human gets to approve whether that person gets destroyed." That's trickier to define than you might think. I mean, in some sense our existing submunitions choose their own targets, but in another sense they don't, because we only fire them when we are OK with everything in the area dying.

But now suppose we send an AI-powered drone/missile into a city that's teeming with both enemies and civilians. In one sense a human was part of the kill chain, because they launched the attack. But if they were only willing to do that because they are relying on the AI to only take out soldiers, then in the relevant sense they aren't.

No one in the US command wants AI that literally doesn't require any directive from a person. Fully autonomous killer robots are still supposed to wait for orders just like a human would. I think the right way to understand "human in the kill chain" is: are we sending the AI out to make calls about what to destroy in situations where we wouldn't be willing to just vaporize everything the system could in theory (if its AI-level features had a fuckup) decide to target?

B Civil's avatar

Yes, OK. There aren't very many words that cannot be quibbled about. That leaves me wondering why the argument about it got so heated in the first place, though. If the meaning of "autonomous" is so obviously elastic, what's the point?

Peter Gerdes's avatar

I suspect it has a much clearer definition in the contract, and I think what most people are concerned about is exactly what I laid out: machines making the choice about whom to kill. And yes, I think lots of it comes from people watching sci-fi and picturing the Terminator rather than normal weapons. My point wasn't that there isn't a notion there, but that it is much more of a continuous thing, so you can't just easily say: oh, humans in the loop here, that is fine; not here.

No; what there is, is a gradation of cases where the AI is being given more or less control, with more or less on the line. That's why my argument is that we ultimately need to just rely on the judgment of the commanders who deploy them and give them a good sense of the true risks. And that ultimately it's really pretty similar to the situation with any other kind of war-fighting technology.

Also, it's probable that most people don't like to think about the fact that soldiers (or at least generals) have to weigh risks imposed on civilians all the time, and that most attacks involve nontrivial risks of killing the wrong people.

B Civil's avatar

I am still puzzled by the notion that OpenAI says they have identical guardrails in their contract and yet somehow it's okay. It's difficult to make sense of that without reaching for different explanations.

Peter Gerdes's avatar

Regarding mass surveillance I ultimately agree in a sense. The real question of importance is whether we all get access to mass surveillance information or just the government and giant corporations do.

The amount of data being recorded and shared makes the end of pseudo-anonymity merely a matter of time. If that is a world where everyone including journalists, average people etc is allowed to sift through public data (if they pay for the compute) I think it will be fine. We'll just learn to be less hypocritical when all of our past bad behavior is available for our opponents to dig up.

What scares me is the world where a bunch of laws get passed like the EU is desperately trying to do that keep that information off any websites available to the public. In that world any immoral leader can go all Hoover and blackmail all their opponents as can the billionaires who have the resources to do the data analysis out of public view.

Having said that, I still think it's important to have this debate in public about how government surveillance should work. Who gets to decide what is monitored and what safeguards do we include?

Noah's avatar

Have to restrain myself from saying very negative things about Sam Altman and OpenAI right now lol.

Instead I will say nice things about those who put together this post. Thanks, anonymous ACX readers/contributors.

Bob's avatar

I'm sorry. I think the US government is a piece of shit, but it's not up to a company to dictate policy to the US Government. That is up to the people of the United States. If your representatives are not representing your interests then it is a failing on you, the people, for continually voting for them.

It is not right for an unelected private company to decide the policy of the US. It was a travesty when the United Fruit Company decided US policy in Latin America. It is folly to let AI companies decide AI policy for the government, regardless of intent. The people of the USA should decide, through their representatives. If you don't like that, get better representatives.

Until then, I fully support a government crushing companies which think they are bigger than a government. I already worry about middle powers, and their relationship with companies like Microsoft or Amazon who rival their GDP.

John Schilling's avatar

It is absolutely up to a company to dictate policy to that company. If Anthropic decides on a policy of "we don't build killbots", or Dow Chemicals says "we don't make nerve gas" or Moderna has a "no designer bioweapons" policy, the government of a free country in peacetime does not get to override that decision.

It would be useful if e.g. Moderna could provide the Pentagon with pharmaceuticals to treat sick or injured servicemen, even as it refuses to provide bioweapons. We used to be able to do that, before Pete Hegseth blew up the contractual machinery. Now, who knows? But probably not a good time for anyone with any moral scruples.

Bob's avatar

That is a false analogy. This is like Dow Chemicals saying you can buy our chemicals, but you can't make C4 from them. They are telling the government what it can and can't do with their product. That is what is unacceptable. If Anthropic doesn't like what it's legal for the US Government to do, lobby Congress to change the law. Convince people to protest to change the law.

thefance's avatar

idk what Pete Hegseth has been up to lately. But it's not unheard of for a company to bind its customers to terms of service, right?

Gordon Shriver's avatar

Isn’t the model’s context window a hard limit on “mass surveillance”? What could Claude do that Palantir isn’t already doing?

thefance's avatar

Presumably, the surveillance would be an input into its training regimen. The context window is only relevant for the inference phase, i.e. when you query it.

But yeah, Palantir beat anthropic to the punch, as far as I can tell.

Victor's avatar

Nice essay, but in one sense it doesn't matter. AI companies are *American* companies, which means that the government can, with enough effort, make them do anything. If a profit can be made from a policy, or power acquired, eventually someone will make them do it.

The only real protection you have in a democracy is elections, and the checks and balances between branches of government, and finally the platforms of public opinion. As we see, ours are all in need of some reform.

Jiro's avatar
Mar 4

Come on, Scott. "All lawful uses" just means "all uses", but has the word "lawful" added as a disclaimer so that the manufacturer won't be seen as approving of any misuse. It may be true that the government can fudge the term "lawful uses", but that doesn't matter because that would just make it equivalent to "all uses" and that 1) is what it's supposed to mean anyway and 2) is the standard way most products are sold. So what if the government can use the AI unlawfully and get away with it? If you sell the government a truck, they can probably use the truck unlawfully and get away with it, but nobody demands to be able to veto how the government uses the truck.

B Civil's avatar

A couple of things. I get the impression that Anthropic does not believe that Claude is yet reliable enough to run an autonomous kill chain (meaning no human being in the loop.)

Secondly, AI is orders of magnitude different than a truck.

Umang Malik's avatar

It's DoD and SecDef, not DoW/Secretary of War. Renaming agencies requires congressional approval. All the executive branch can do is add an inaccuracy to federal websites.