In response to yesterday’s post, some people tried to steelman Acemoglu’s argument into something like this:
There’s a limited amount of public interest in AI. The more of it that gets used up on the long-term risk of superintelligent AI, the less is left for near-term AI risks like unemployment or autonomous weapons. Sure, maybe Acemoglu didn’t explain his dismissal of long-term risks very well. But given that he thinks near-term risks are bigger than long-term ones, it’s fair to argue that we should shift our limited budget of risk awareness more towards the former at the expense of the latter.
I agree this potentially makes sense. But how would you treat each of the following arguments?
(1): Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people.
(2): Instead of worrying about Republican obstructionism in Congress, we should worry about the potential for novel variants of COVID to wreak devastation in the Third World.
(3): Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia.
(4): Instead of worrying about “pandemic preparedness”, we should worry about all the people dying of normal diseases like pneumonia right now.
(5): Instead of worrying about pharmaceutical companies getting rich by lobbying the government to ignore their abuses, we should worry about fossil fuel companies getting rich by lobbying the government to ignore their abuses.
(6): Instead of worrying about racism against blacks in the US, we should worry about racism against Uighurs in China.
I don’t think I’ve heard anything like any of these arguments, and they all sound weird enough that I’d be taken aback if I saw them in a major news source. But why? If (intuitively) every cause area draws from a limited pool of resources and so trades off against every other cause area, why shouldn’t you say things like this?
Argument (1) must ring false because worrying about police brutality and worrying about police evidence fabrication are so similar that they complement each other. If your goal is to fight a certain type of police abuse, then discussing other types of police abuse at least raises awareness that police abuse exists, or helps convince people that the police are bad, or something like that. Even if for some reason you only care about police brutality and not at all about evidence fabrication, having people sometimes talk about evidence fabrication seems like a clear win for you. Maybe we would summarize this as “because both of these topics are about the police, they are natural allies and neither trades off against the other”.
But then what about argument (2)? This also sounds like something I would never hear real people say - the two problems just seem too different. A healthy worry-economy can support both worrying about political obstructionism and worrying about global health. This is true even though in theory, there’s a limited number of newspaper columns available and each extra column addressing obstructionism is one more column not devoted to health issues.
But if (1) fails because the two topics are related, and (2) fails because the two topics are unrelated, that doesn’t leave a lot of room for an argument that doesn’t fail, does it?
Maybe there’s some narrow band where topics are related enough to draw from a common pool of resources, but not related enough to be natural allies, and Acemoglu thinks AI falls in that band? But I tried to capture that kind of situation with arguments (3) and (4), which are very close matches to the AI situation (long-term major speculative risk vs. near-term smaller obvious risk). And (3) and (4) still ring false - they’re theoretically plausible arguments, but nobody would ever make them.
(5) and (6) are kind of in between. They’re a little bit like the police example in (1), but whereas it feels obvious that concern about police brutality and concern about police evidence fabrication are natural allies in decreasing the power/status of the police, the cause pairs in (5) and (6) feel like much less natural allies. I actually don’t think that every unit of effort spent fighting anti-black racism in the US contributes very much to fighting anti-Uighur racism in China, or vice versa. But these still ring false to me. Partly it’s that if someone were to seriously assert them, they would justly be accused of “whataboutism”. But partly it’s that I can’t imagine even a whataboutist being quite so crude about it.
What about a seventh argument - “Instead of worrying about sports and fashion, we should worry about global climate change”? This doesn’t ring false as loudly as the other six. But I still hear it pretty rarely. Everyone seems to agree that, however important global climate change is, attacking people’s other interests, however trivial, isn’t a great way to raise awareness.
Here’s a toy model that could potentially rescue Acemoglu’s argument: two topics can be complements to each other, substitutes for each other, or neither. The more closely related the topics, the more likely they’re one of the first two rather than neutral, but it’s hard to ever tell exactly which, let alone what the exact effect size is. So concern about police brutality and concern about police evidence fabrication are probably complements - the more you produce of one, the more people want of the other. Concern about Republican obstructionism and concern about COVID are neutral; changes in one don’t affect the price of the other (I mean, at the ground level they’re competing for a limited amount of newspaper column space, but this is no more interesting than the fact that grapes and rockets are competing for a limited amount of labor/land/capital - in the real world, vineyards and space programs aren’t competitors in any meaningful sense). Concern about some other topics could be substitutes, where the more of one you produce, the less people want of the other. And it just so happens that the only two substitute goods in the entire intellectual universe are concern about near-term AI risk and concern about long-term AI risk.
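(For the econ-minded, a minimal gloss, borrowing the standard textbook definition rather than anything Acemoglu actually spells out: whether two goods are complements or substitutes comes down to the sign of the cross-price effect,

$$\frac{\partial q_i}{\partial p_j} < 0 \;\;(\text{complements}), \qquad \frac{\partial q_i}{\partial p_j} > 0 \;\;(\text{substitutes}), \qquad \frac{\partial q_i}{\partial p_j} \approx 0 \;\;(\text{independent}),$$

where $q_i$ is how much of topic $i$ people “consume” - attention paid, columns written - and $p_j$ is the cost of engaging with topic $j$. The toy model is just a claim about which sign each pair of topics happens to have.)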
Maybe I’m being mean/hyperbolic/sarcastic here, but can you think of another example? Even in cases where this ought to be true, like (3) or (4), nobody ever thinks or talks this way. I think the only real-world cases I hear of this are obvious derailing attempts by one political party against the other, eg “if you really cared about fetuses, then instead of worrying about abortion you would worry about neonatal health programs”. I guess this could be interpreted as a claim that interest in abortion substitutes for interest in neonatal health, but obviously these kinds of things are bad-faith attempts to score points on political enemies rather than a real theory of substitute attentional goods.
Maybe this doesn’t work for attention, but it does work for money? Now that I think of it, the other time I’ve seen arguments like these in the wild was an article (can’t find it now) arguing that instead of researching potential cures for various disabling conditions, we should spend the money providing social support to existing people with disabilities. Here the writer’s world-model isn’t mysterious: there’s some group (government agency, charity, etc) which is focused on disability and has money earmarked for that purpose, and it should spend the money on one subtopic within disability instead of another. It’s less useful to say “Instead of worrying about political obstructionism, we should worry about providing social support to existing people with disabilities”, because most funders don’t directly trade off one of those causes vs. another.
I still wouldn’t expect to hear “instead of funding the fight against police evidence fabrication, civil liberties organizations should fund the fight against police brutality”. But maybe there’s a gentleman’s agreement among causes to avoid directly attacking each other in favor of just promoting your own cause and letting the funders figure it out. And maybe that gentleman’s agreement only fails if you’re willing to condemn the other cause as a completely useless scam. People in disability rights activism have already decided that “curing disabilities” is offensive, so it’s okay for them to advocate defunding it as a literally costless action (not a tradeoff). Since long-term AI risk sounds weird and sci-fi-ish, you can treat it as a joke and argue that defunding that is costless too.
This is the most sense I can make out of what Acemoglu is trying to do - but it’s still wrong, just for different reasons. Long-term-AI funding doesn’t come from some kind of generic Center For AI that’s one thinkpiece away from cancelling all those programs and redirecting the money to algorithmic bias. It comes from organizations like the Long-Term Future Fund and people like Elon Musk. And if LTF stopped donating to long-term AI research, they would probably spend the money on preventing nuclear war, pandemics, or other existential risks. If Elon Musk stopped donating to long-term AI research, he would probably spend the money on building a giant tunnel through the moon, or breeding half-Shiba-Inu mutant cybernetic warriors, or whatever else Elon Musk does. Neither one is going to give the money to near-term AI projects. Why would they?
But take this seriously, and you end up with the same kind of questions as in awareness-raising. Acemoglu isn’t going to be able to divert long-term AI funding into fighting current-day unemployment, because the long-term AI funding comes from people who are really motivated by fighting existential risk. On the other hand, he might be able to divert funding from other anti-unemployment causes, since that comes from people who are already focused on his preferred topic. Would it be more strategically useful for him to write “Instead of worrying about people losing jobs because of disabilities, we should worry about people losing jobs because of AI”? Probably - but then we get into the “it has to be literally costless” problem again.
So maybe the only reason we hear this so often about long-term AI concerns, but not about anything else, is that it’s something the thinkpiece-writing class views as silly and costless to defund - but where there’s also a big inferential gap between them and the funders. They think this is a hundred-dollar bill left on the ground: funders are spending money on something they could easily be convinced to pivot away from. Then they keep getting surprised and angry when this doesn’t work.
Maybe the best way to stop this stupid counterproductive fight from repeating itself every few months in every major newspaper is to make the funding situation around AI better known. If so, consider this my contribution.