
I feel like a good amount of complaint about the TSA or FDA or FRA or other official bodies supposedly aiming at safety is of this form. Instead of worrying about whether someone's shoes are explosives, they should more accurately screen passenger lists; instead of worrying whether volunteers are hurt in vaccine experiments, they should worry about getting data about vaccines fast enough to help; instead of worrying whether trains get crushed in a crash with a freight train, they should just make sure passenger trains aren't on freight tracks.


It seems to me that his argument is more like "Superintelligence risk sounds silly, and makes people think that AI risk worriers are scifi nerds with too much time on their hands, thereby giving the entire field of AI risk a bad name". I think this is a pretty strong argument actually, regardless of what you personally think about superintelligence risk.


>And (3) and (4) still ring false - they’re theoretically plausible arguments, but nobody would ever make them.

I'd say that is not true. An obsessive focus on Covid (UK here) has meant that large numbers of people have died, or will die, because, for instance, they didn't get that 'lump' checked out and now the cancer has gone too far. As a population, we do not make decent granular calculations of risk; we tend to push estimates towards 0% or 100%, and it reaches a point where the 'wrong' risk is the one being displaced.

Further Covid example. At the beginning, we thought that Covid had fomite transmission, so hand sanitising stations went up outside all the shops. Now we know that this is a tiny risk and that mask wearing is much more the point, yet the stations are still there, and the mantra is "Hands, face, space", with hands first on the list. Loads of people can only deal with one of those, so they sanitise their hands and then do dick-nose with their mask.

Most people are crap at processing information, and either/or choices absolutely come into play.


The closest analogue to his argument would be "instead of worrying about the long term risks of climate change we should be worried about the impacts of climate change that are already affecting us".


You shouldn't worry about anything. If something bad might happen, do an estimate of how likely it is, and how much it would cost to ameliorate it. If it's too expensive, then bother your mind with other thoughts.

Worrying is a waste of neurons. Just Don't Do It (tm)
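That decision rule can be written down as a toy expected-value check. A minimal sketch, with invented numbers and a hypothetical `worth_worrying` helper, purely for illustration:

```python
# Toy version of the commenter's rule: estimate how likely the bad
# thing is and what amelioration costs, then only "worry" (i.e. pay)
# when the expected harm avoided exceeds the cost. Figures invented.

def worth_worrying(prob: float, harm: float, mitigation_cost: float) -> bool:
    return prob * harm > mitigation_cost

# A 1% chance of a $100,000 loss justifies a $500 fix...
print(worth_worrying(0.01, 100_000, 500))    # True
# ...but not a $5,000 one.
print(worth_worrying(0.01, 100_000, 5_000))  # False
```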


In other words: How should we budget our Worry Quota.


Those all seem like "trade off worry about this current, realized risk against this other current, realized risk". That's not a very reasonable comparison to super-AI risks versus current-AI risks: While we are familiar with the mechanisms of nuclear war, to pick one example, we have no idea what technological changes will be necessary to create a super-AI. We do not know how to build a super-AI, so analysis of the risks is premised on shaky assumptions about the thing itself. Thinking about how to effectively manage those risks is even more speculative.


"Instead of donating money to help sick people in America, you should donate money to help sick people in sub-Saharan Africa, because money buys more and people are less able to afford necessary care so you can save lives at much lower cost."

"Instead of donating to help this Democrat win against McConnell, which won't happen, you should donate to these actually competitive races."

"Instead of spending money on highways, we should spend money on public transit."

I've seen all of these arguments in the wild, and I think they're intuitively compelling. Arguments to redirect funding from A to B, where A and B have a similar goal/purpose/topic but B is argued to be more effective, are actually reasonably common.


How about this, which I've absolutely heard some version of before:

8. Instead of worrying about preserving endangered species and recycling household goods, we should be worrying about global carbon emissions.


I recall reading some articles back in January/February 2020 arguing (4) - "Instead of panicking about a new disease that's only infected a few thousand people, you should worry about the flu".

I've also seen numerous internet commenters argue similar things to (6) and at least one CCP propaganda account arguing the opposite, though in response to the CCP attacking the US for racism against blacks (Americans attacking the Chinese in the propagandist case) and not as part of an effort to shift people away from BLM.


For me the issue isn't so much police brutality and evidence fabrication. We would expect some of that to occur under any system. The bigger issue is the lack of accountability.

In terms of AI should that be tackled independently of overall technological change, automation, etc.?

Personally it doesn't seem to do much good to break these issues down into fragments. But I'm open to being wrong about that.


Continuing to steelman his argument (which I also find a bit... nibble-around-edgy?) I think a better comparison might be "we should worry about the shorter-term climate change effects of refusing to invest in futuristic energy tech, instead of worrying about the risk that fusion research will accidentally turn the planet into a fireball and kill us all."

Planetary-scale AI risk is a well-founded supposition, but it *is* a supposition whose likelihood is undetermined, so having a lot of smart/wealthy people nibbling at this (distant/unlikely/maybe impossible) scenario seems like rather a waste when there are very large right-now problems with AI they could be actively doing something about.

Whenever we have these "walk and chew gum at the same time" conversations, which since I spend a lot of time reading/talking about politics is A LOT, we definitely run the risk of doing what Acemoglu has done here and assuming that just because someone is talking about X, they aren't thinking/talking/caring about Y. This assumption is *usually* false for any topic of contention.


Maybe a closer steelman argument is "instead of researching how to treat PTSD in people awakening from cryogenic sleep, we should worry about treating PTSD in soldiers". I think part of the argument against worrying about AI risk is that it's a fictional problem, or applies to a world very different from ours, while there are similar problems occurring right now which need more attention and are not getting it.


I almost feel like I'm missing something here because Scott has made arguments about the danger of hot button issues overwhelming critical ones, like his "don't protest police shootings, focus on malaria eradication" take.


There really is a finite public attention for things, and when everyone is fixated on exciting culture war stuff it can detract from far more consequential stuff. Likewise, Yglesias had a take recently to the effect of "instead of fighting about critical race theory, worry about phonics vs. whole language which is way more consequential".

Maybe the difference here is that Finite Worry Economy makes sense when the thing you are calling for less attention toward is absorbing a high absolute share of attention, such that you have to pay less attention to it to give other issues space. Police shootings and CRT may fit this category; long-term AI risk does not.


The pedantic nature of these two posts reinforces his point. You haven't said a single thing about the near-term risks. Instead you've picked an argument about abstract meaning.


I think the key difference here is that the two phenomena are not just related, but sequential, and arguably causal. I don't know much about AI development, but I would assume that the risk of superintelligent AI would only arise after widespread, regular use of simpler, non-superintelligent AI becomes common. If that is true, then it makes sense to say that by fighting the implementation of AI in the present, you not only address an immediate danger that is harming people, you also stave off all the more dangerous future risks that such use would enable. And if you are worried that the language around the risk of superintelligent AI fails to pass the laugh test for a large audience, then it is to everyone's advantage to focus on addressing the immediate situation, where you can mobilize much more support.

There are a ton of assumptions built into that argument, but I think it makes sense: if superintelligent AI (a big risk, but one lots of people don't take seriously, AND one with a reasonable amount of crazy-sounding discussion around it) is bad, but will only happen if we accept the everyday use of current AI technology for nefarious purposes (which I think many people remain unaware of, AND which they could be more readily mobilized against), then you have a stronger position.


Scott: make a budget, then cast the budget items in these same terms, and it will become more clear.

The answer to all six of these apparently contradictory restrictions is "this restriction is part of the toolbox I use to make a good strategy for using limited resources. it's not inherently a right or wrong restriction to make in general. it's a tool, to wield according to my preferences and according to how it helps me achieve my goal".

Also, I wouldn't be bothered by 3, 4 or 6 at all, 1 would strike me as a narrow issue of tactics between people who agree, and if I came across 2 or 5, I would just assume someone was expressing a preference or making an appeal, rather than arguing that such a thing was rational across the board.

Acemoglu was performing a 3/4/6, claiming that people who care about this topic should focus their energy on a different major sub-branch of it. He didn't attempt or even (my reading) pretend to attempt to do that by making a convincing argument against GAI x-risk. Instead, he attempted to do it by selling readers on the imminent danger of present day machine learning (albeit by using the stupid words "AI" to talk about it).

I also have to lightly protest you saying "yet another 'luminary in unrelated field discovers AI risk, pronounces it stupid'". I am a professional in the machine learning space, mostly doing search ranking for the bigs. I broadly agree with Acemoglu. This is not an appeal to authority. I am not an authority. It is a caution that it looks silly when one non-AI/ML writer dismisses another non-AI/ML writer for an opinion that is shared by a decent portion of professionals in the field.


One thing I've heard that might put this in context is in *which* worry economy you're talking about, because some worry economies seem like zero-sum games, and some don't.

I think the most extreme and salient example of a zero-sum worry game is congressional floor time, which I've often heard described as the "most precious resource in government" because of how little of it there is and how hard it is to allocate to your cause. A minute spent debating your bill on the senate floor is a minute spent not debating some other issue.

On the other extreme you have a stereotypical group of stoned philosophy majors hanging out in the dormitory stairwell at 3:00 AM. That group has effectively infinite worry resources to entertain all sorts of things.

The middle is where it gets tricky. I think some activists perceive that the average persuadable normie has a limited budget for caring/worrying about things, so any non-central cause taking up too much time is wasting a precious resource.

This is the model I would start investigating from, personally.


Really? 3, 4, and 6 all seem natural to me. Pretty sure I've heard variants of 4 and 6 for sure.

Typically, I end up thinking the person making them isn't being especially intellectually thorough or honest, but I would strongly disagree that "nobody ever thinks or talks this way." Ever heard the word "whataboutism", and the interminable debates about what constitutes it?

A couple of examples you might be more familiar with:

1. "Instead of worrying about U.S. poverty, we should worry about (or, it is more effective to worry about) poverty in foreign countries where many people make only a dollar a day or less."

2. "Instead of worrying about harm caused by the sexism of online feminists, we should worry about the harm caused by the sexism of governments, large companies, society, and other elements of the patriarchy."


The author doth protest too much… while AI isn’t my area and I’ve only read a few books and blogs about it, I totally get why the advent of super intelligent general AI could be a massive existential risk. Also, I’m pretty sanguine about short term employment risk which has been happening since before the industrial revolution.

BUT Scott, among your strawman arguments is something steelier, viz. your scenario (3): Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia.

Not only is this the argument most closely analogous to Acemoglu's on AI risk, it is also completely plausible, at least to me, that real people hold this view.

As a young teen in the 70s/80s I was kind of obsessed with nuclear weapons (along with D&D, heavy metal and black holes - I’m over it now thanks 😊) went on CND marches and genuinely believed there was at least a 50% chance of a tactical nuclear war in Europe.

The fact of having survived so far has made me discount that risk to almost zero. The same 50 years have seen many bloody and potentially avoidable small-scale conflicts around the world. The big powers have intervened or interfered in some of these (Iraq, Libya), started some (Afghanistan) and backed off others (Balkans, Congo, Yemen), but there is always the sense that the USA, UK, EU and UN could and should be a force for good. And although the road to hell is well paved, there is an argument that not all interventions have been wrong. So in that sense perhaps we do worry more about these conflicts, as there have been (infinitely) many more of them than there have been nuclear wars, even though the latter is by far the bigger existential threat.


I think some people would have made (4) pre-pandemic, as pandemic preparedness might have struck some people as a silly niche concern (like long term AI risk is perceived).

I can think of two scenarios that make people suggest we should trade off worry: when they think something is intuitively silly or implausible, and when they see something as far more important in that same area. Motivated reasoning could also encourage this.

Examples I have seen of this in the wild:

"Worrying about long term risk is silly when people are suffering now" usually alongside not taking long term risk seriously.

"Why care about animal suffering when humans are suffering?"

"Why worry about plastic straws painfully killing fish when humans painfully kill way more fish directly?" - The only argument of the three that I agree with.


Admittedly, I didn't read Acemoglu's original argument. However;

"(6): Instead of worrying about racism against blacks in the US, we should worry about racism against Uighurs in China."

I could certainly imagine someone making the *reverse* argument. If you're American, racism against Black Americans is the more proximal issue that you have more control over so it deserves priority.

Racism against Uighurs in China is the more distal issue that you have less control over. Both play to a similar value system. The notion that you should "mind your own back yard" before you start worrying about other people's has solid roots in both Confucianism and Christianity. It's a pretty established Axial-age axiom. Deal with the stuff within your locus of control before you start trying to gain brownie points by wringing your hands over things that are safely outside of your control.

Also, in economics, people regularly discount future costs in favor of present ones. I guess the counter-argument to discounting future costs might be that we're justified worrying about a distant harm if doing so gives us some certain and huge advantage in addressing it. Tip the asteroid a few meters now, or destroy it with a nuclear arsenal later. But without a big "Sale! Future problems solved! 95% off! Buy now!" sign it's a common and reasonable heuristic to focus on more present and controllable issues.
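The discounting point above can be made concrete with the standard present-value formula. A sketch with arbitrary numbers (the 5% rate and 50-year horizon are assumptions purely for illustration):

```python
# Exponential discounting: PV = C / (1 + r)^t. Rate and horizon are
# arbitrary; the point is only that distant harms shrink fast.

def present_value(future_cost: float, rate: float, years: float) -> float:
    return future_cost / (1 + rate) ** years

# A $1,000,000 harm 50 years out, discounted at 5%/year, is "worth"
# well under a tenth of that today.
print(round(present_value(1_000_000, 0.05, 50)))
```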

And also, if we're being told that autonomous AI are safely under human control while there are still tons of land mines scattered overseas we might justifiably question the sincerity and reliability of those we're talking to.



I think you must have a blind spot here. The rhetorical point Acemoglu was making was "AI isn't a future problem, it's a 'now' problem". You're reading this too literally as "AI won't be a problem in the future", but it's just a rhetorical device.

It's like me saying: "You shouldn't worry about your casserole being overcooked, because you need to call the fire department right now (because you started a fire cooking it)!" Your casserole is probably still going to be overcooked, I'm just claiming that the burning house takes priority.

Acemoglu is claiming (without a lot of reasoning, but this is an article in the Washington Post, not a paper) that the first step in figuring out AI problems is solving the current smaller ones, not worrying about hypothetical future ones. No comment on whether that's true.


But how would you treat each of the following arguments?

(1): Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people: Yes.

(2): Instead of worrying about Republican obstructionism in Congress, we should worry about the potential for novel variants of COVID to wreak devastation in the Third World: Yes.

(3): Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia: Probably not.

(4): Instead of worrying about “pandemic preparedness”, we should worry about all the people dying of normal diseases like pneumonia right now: Yes.

(5): Instead of worrying about pharmaceutical companies getting rich by lobbying the government to ignore their abuses, we should worry about fossil fuel companies getting rich by lobbying the government to ignore their abuses: Yes.

(6): Instead of worrying about racism against blacks in the US, we should worry about racism against Uighurs in China: Yes.


I think the strongest version of this argument is that the precious resource being spent isn't column inches or money but weirdness points. You get two, maybe three chances a generation to convince the public that some weird sci-fi thing is in fact a clear and present danger that we need to make non-Pareto-efficient trades to avoid. And right now is not the time to make that case if you want the maximum chance of it being convincing. I could absolutely see someone making the argument: instead of worrying about humans in the distant future emulating our minds badly, we should worry about our radio signals escaping into space and allowing aliens to discover us. Or vice versa. There really isn't enough oxygen in any discussion, no matter how niche, for both these causes to thrive.


Since everybody is going with their example, I will try mine. I think this one captures the same structure of the original argument: “Instead of worrying about the proliferation of digital walled-gardens, we should worry about the disinformation spreading on social media.”

Like the AI argument, they are both instances of the same issue (chaotic AI development / unregulated platforms), but one is very current and easy to grasp for the layman, while the other requires long explanations to convince people they might be an issue.


No one actually says these things because it would be considered rude to diminish a cause by setting up a direct comparison. But in practice, I think people think this stuff all the time. Plenty of people worry about close to home or vivid problems, and they neglect the ones that are far from home or too complex or whatever. The Uighurs, Covid in the rest of the world, and so on are vastly underfunded in the worry economy! So I don't consider these tradeoffs odd at all because we make them all the time in the real world, although we shy away from thinking about it explicitly. Whether they're right or wrong, that's harder to say.


7. Instead of worrying about the risk of hackers taking control of the U.S. nuclear arsenal and starting World War III, we should worry about the common computer crime that does occur all the time.

8. Instead of worrying about a future extinction event and colonizing and terraforming Mars in preparedness, we should worry about global warming.

That is, rather than focusing on some hypothetical and highly unlikely future event (that would admittedly be very impactful), we should put our effort into the problems that actually *exist*. Not just because it makes sense, but because it's also possible to get anything *done* that way, something that is exceptionally unlikely for the remote, unlikely scenarios.


Does really no one make arguments like (3) and (4)?

This is exactly the kind of thing I often think, especially (3). I think far too much ink is indeed spilled on worrying about nuclear war when real ongoing wars often reach levels that have some overlap with nuclear war (though not total nuclear apocalypse). 

There's something similar on the positive side. Many justifications for space exploration, NASA's budget, etc. are along the lines of "We just don't know what the advantages could be, so they basically could be practically infinite, so even if each given expenditure has a small chance of accomplishing this, we should spare no cost." And I think a similar issue exists on the negative side with superintelligent AI: the moment you're dealing with a hypothetical near-infinite downside, the exact probabilities matter less. And to me that's a problem, because when you're dealing with such huge confidence intervals it wreaks havoc with expected utility, and you can end up making an argument that we should pour every last resource into AI research, and end up with tens of millions of unnecessary deaths from malaria while the AI research ends up having had no real salutary effect. Just to underline the problem, we *had* pandemic preparedness plans and it's not entirely clear how much good they did. I want to call a focus on the more immediate problem "risk aversion", because you're avoiding the risk of spending lots of money and getting nothing, and instead focusing on interventions with more known success rates, but it's a little odd to be saying that the approach that ignores the apocalyptic scenario is "risk-averse".
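The "near-infinite downside wreaks havoc with expected utility" point is easy to see numerically. A sketch with invented figures; none of these probabilities are claims about the real world:

```python
# With a huge downside, the expected-value term is hypersensitive to
# the (unknowable) probability: vary p by a few orders of magnitude
# and the comparison flips. All figures are made up.

malaria_lives_saved = 1_000_000        # a well-evidenced intervention
ai_catastrophe_deaths = 8_000_000_000  # the hypothetical downside

for p in (1e-3, 1e-6, 1e-9):
    ev = p * ai_catastrophe_deaths
    print(f"p={p:g}: expected lives at stake from AI work = {ev:g}")
# At p=1e-3 the AI term dwarfs the malaria figure; at p=1e-9 it is
# negligible. The huge confidence interval over p decides everything.
```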

Naturally there's a lot to be said in favor and against this kind of argument, but it's strange to me that it sounds so foreign. 


I don't have any opinion on Acemoglu's opinions on AI risks, but I balk at this statement:

"People in disability rights activism have already decided that “curing disabilities” is offensive, so it’s okay for them to advocate defunding it as a literally costless action (not a tradeoff)."


Yes, I know there are some extremists with this view around, and there are stories of deaf parents (side note: I know that only deaf people are allowed to use the concept “Deaf”; the rest should say hearing impaired, but bear with me) being against giving their (eventually) deaf children hearing implants, since that would remove them from Deaf Culture. But all real-life people with disabilities I am aware of, including their organizations, are able to keep two thoughts in their heads at the same time: respect people with disabilities and do mainstreaming to limit the social importance of disabilities, but also treat disabilities whenever possible.

Because if you don’t, you condemn many people with disabilities to a life in poverty, or on being forever dependent on welfare subsidies (or charity); since many disabilities limit productive capacity, making it impossible to earn a decent-enough-for-a-good-life income on one’s own.

Disability activists, in particular Berkeleyan disability activists, may be a breed apart, but still…


I wonder if there's an element here of "The wrong people are worrying about this and that's why it's right and good to dismiss it. "

Acemoglu gives this list:

> Such warnings have been sounded by tech entrepreneurs Bill Gates and Elon Musk, physicist Stephen Hawking and leading AI researcher Stuart Russell.

Bill Gates and especially Elon Musk are completely the WRONG people to sound the alarm on something, if right-thinking people are ever to take it seriously.

When you hypothesize:

> I think people treat jokes at the expense of the AI risk community as harmless because they seem like fringe weirdos

I think this is right but maybe doesn't go far enough. In a world where lots of New York Times articles have been written about AI risk and NPR had done some stupid podcast episode on it where they play quirky sound effects as a perky 20-year-old explains Nick Bostrom's book, I don't think this article gets written.

You're worrying about something that is not on "The Official List Of Things That Good People Worry About," and _that_ is why I think someone writes an article saying this.

For the record - if it's worth anything - I'm not worried about AI risk, because I think the orthogonality thesis is backwards from the truth, but that's neither here nor there :)


I think there are a couple of things going on here.

1 and 5 are similar, in that you wouldn't say them because people understand both worries in each case to be part of the same political project. In our politics, the battle lines for police are "more deference to police" vs "less deference to police", both worries are on the same side for number 1. Similar with 5, it's "crack down on big business" or "give businesses more breathing room to operate". People do make those sorts of tradeoffs, but it's more of a tactical question - an example would be "pour resources into the Senate race in Ohio" vs "pour resources into the Senate race in Maine".

I think 2 sounds odd because Republican obstructionism, or opposition thereto, is not a good/evil thing in itself, it's a means to an end.

3 and 4 strike me as the sorts of things people do say, indeed in the early days of the pandemic you heard a lot of takes along the lines of 4.

I think 6 is made implicitly when people say things like "all these woke people who are so worried about racism don't care when it's China."

Overall, I think that it's not "related" vs "unrelated", more "same side of an ideological view" vs "not" - for AI, people either think it's a big issue or it's not, and if they do they probably will focus on all AI-related issues, and if not, they won't. And then within "same side" there's a question of tactics, but the outcome of the tactical argument doesn't really affect interest in people not on that "side" so much.


If Acemoglu were asked if there is a potential long term risk of a malicious AGI being developed, I’d be surprised if he said anything but yes of course it’s possible, but probably further down the road.

My take was that he was cutting corners in an 800 word opinion piece and wanted to highlight what he saw as existing problems that could be immediately addressed.

Additionally, it may have been edited to a fare-thee-well to get it to fit in so few words.

He never advocated removing existing resources dedicated to evaluating long term risk in my reading.


> Maybe I’m being mean/hyperbolic/sarcastic here, but can you think of another example?

"Instead of going to Mars, we should be solving problems here on Earth." Or, for those who like space, but still think Mars is a bad goal right now "we should be establishing a moon base/space elevator/something." I'm pretty sure I've heard these before. Often when talking about how billionaires should spend their money.

My model on stuff like this is basically something about diminishing returns. There's not a problem on Earth that is useful for literally everyone to focus on. So when someone says "we should be focusing on X instead of Y" they're trying to argue something to the effect that the marginal return from an extra person focusing on X is more than the marginal return from an extra person focusing on Y (and probably that the marginal loss of a person stopping focus on Y is less than the marginal gain from one more person focusing on X, so we should be transferring people to X-focus). And if they care enough to voice this argument, they probably think that the difference in marginal benefit is so great that we should shift a lot of people from Y to X.
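The marginal-returns comparison in that paragraph can be sketched with any concave value function; the square root here is an arbitrary stand-in, and the worker counts are invented:

```python
import math

# Diminishing returns: the value of a cause grows sublinearly in the
# number of people working on it (sqrt is an arbitrary choice).
def value(people: int) -> float:
    return math.sqrt(people)

def marginal(people: int) -> float:
    return value(people + 1) - value(people)

# One more person helps the under-staffed cause (100 workers) far
# more than the crowded one (10,000 workers).
print(marginal(100) > marginal(10_000))  # True
```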

Some things are disparate enough in type (like Republican obstructionism vs. COVID variants) that the utility of a given person focusing on them is likely more affected by that person's interests/aptitudes than by whether the marginal benefit is greater in one than the other. This also explains the American racism vs. Chinese racism problem: the amount of impact you can have on this is probably based more on circumstance (like where you live, or maybe how closely you can affect US foreign policy) than on the actual relative merit of each cause. And that's what makes it a non sequitur or whataboutist.

I also wouldn't expect the argument to be said about things where people think relative spending is adequate, or can't determine well enough if it is to feel strongly about.

People drop these arguments a lot on billionaire spending, because there is a huge difference in the marginal utility of an extra billionaire focusing on one topic vs. another. And I'd expect you hear it a lot in AI spaces because of the massive variance in expected utility people attach to long-term AI. Yudkowsky, for example, thinks that the project will make or break humanity permanently, with some huge number of utilons at stake. So he'd weigh the marginal utility difference heavily in favor of long-term AI concerns. Meanwhile laypeople, or skeptics of Superintelligent or Unfriendly AI, don't tend to think the problem is so large, and so see the same massive misplacement of resources as when a billionaire focuses on the "wrong" topic.


I feel like half of Matt Yglesias's articles are 'why spend political capital on {structural racism/critical race theory} when it would be better to focus on {immediate race-blind policies that would improve the lives of people of color}'. Which is at least similar insomuch as it weighs a bigger but more nebulous risk of racism against shorter-term, more specific effects of racism, and Matt thinks the political will spent towards the former is a rival/substitute for the latter.


I do see this genre of argument in other contexts, like criticism of security theater. e.g., Instead of sanitizing subway trains, we should be improving their ventilation.

I see 4 claims embedded in this style of argument:

1) the thing I'm advocating (e.g. ventilation) is valuable

2) the thing the other folks are advocating/doing (e.g. sanitizing surfaces) is not valuable

3) there's some sort of tradeoff between the two (e.g. limited space/appetite for anti-covid measures)

4) there's something misguided about the other folks' approach which leads to them doing the not-valuable thing rather than the valuable thing (e.g. it's theater aimed at making people feel safe)

Where maybe (4) is the most important part of this.

Another example. Here's a quote from John Kerry during his presidential run: "George Bush opens fire stations in Iraq and forces fire stations to shut in America." You could dismiss this as a political pot-shot, but interpreting it charitably within this framework (with fire stations as an example & metonymy for a broader point about spending):

1) more domestic spending on things like fire stations is valuable

2) the Iraq war was a bad idea / is not valuable

3) there's a tradeoff between domestic spending & military/war spending

4) Bush has been misguided in choosing to prioritize the Iraq war


A better framing would be "certain short term" vs "uncertain long term", where resolving #1 potentially has a significant impact on our understanding and/or the impact of #2.


* "Sentient AI rights" vs "Animal rights"

* "AI overlord" vs "limiting the concentration of powers and information for governments, corporations ... and systems"

* "communicating with Aliens" vs "communication with Animals"

Testing this subjectively on the examples given in the post:

(1) police brutality solutions could change if evidence was better gathered and protected. PASS

(2) solutions for Covid and Congress are probably not too related FAIL

(3) Again, nuclear war and civil wars are not in the same problem space FAIL

(4) pandemics and recurring health events could also be seen as orthogonal FAIL

(5) limiting lobbying wealth gains to one industry impacts future similar efforts. PASS

(6) Racism in the US and in China are likely relatively independent. FAIL

I love this post and associated comments. Made me realize that I often use this line of argument simply out of boredom and exasperation to escape from the outrage trend of the month.


Is anyone really that motivated against fighting AI risk? As far as I can tell, they're all adopting an everything-or-nothing strategy: it's solve AI alignment (and fast, given some of the predictions thrown about) or bust. The stakes are so high they render all other ethical considerations irrelevant; surely they should be willing to use some lateral thinking to either buy more time for solving the alignment problem, or mitigate failure. Musk is trying the latter, it seems, with his Mars colony vision, though it seems unlikely a rogue AGI would fail to destroy Mars. As for buying more time, collapsing industrial civilization would do the trick, as it ends the economy that makes AGI possible to begin with, as would Bostrom's proposal to turn the planet into a panopticon. I prefer the collapse-civilization strategy, partially because there are many points against our civilization and I don't think it would be justified in pulling out the measures Bostrom wants in order to preserve itself. Also, destroying industrial civilization seems much easier than setting up an infallible panopticon.

Who knows, maybe there are already people working on implementing these strategies. Am I to believe AI safety researchers are more paranoid than the Pentagon? If they are, they should be working to enmesh themselves into the military, just to ensure they don't have an AGI Skunkworks. Though that still doesn't eliminate risk from Chinese military research.

While working to fulfill the Unabomber's vision, we should also try to suppress development of AI. There have been some half-assed attempts to cast AI as racist, but we could go much further in making a case for AI as white supremacy. Could trigger an immune response from the dominant culture.

We really should be pulling out all the stops against this, instead of putting all our eggs in one basket by betting it all on the success of AI safety research. A rogue superintelligence would mean the end of all life on this star system, at minimum.


Presumably we're only supposed to be taken aback by these as "good arguments". I heard (4) many times last year at the onset of the pandemic by smart people I respected as late as March 10. "We shouldn't worry about pandemic preparedness because Covid has killed far fewer people than the flu". It was a terrible argument, but I heard it a lot for a few weeks.


I suspect the reason why AI risk is often seen as a waste of funding is the timing: is it necessary to fund it now rather than later? If AGI is 20 years in the future, do we really need to spend all 20 years making mitigation strategies? If not, then we have more urgent things we could be investing in. I suspect this thought process is what led to Acemoglu’s position.

The counterargument would be that we can't predict when AGI will come, and when it does, bad things could happen so fast we won't have time to react. However, I think it's hard to make arguments for these that convince most people (indeed I'm not convinced).

In contrast to pandemic preparedness (#4), pandemics do happen without notice and rapidly cause problems, and that's why it's worth worrying about now.

Personally, I would say that instead of worrying about AGI risk, we should be investing in cybersecurity. The latter has immediate benefits, and a lot of apocalypse scenarios for AGI don't work if stuff can't be hacked.


The difference is that all the issues you listed for comparison are real and occurring right now. Megalomaniacal AI is not.

It's not happening in the immediate future, either. No damage is being done right now that cannot be undone. It's not some creeping problem that slowly progresses until it's "too late." There is no risk at all until someone is actually trying to develop a general AI (as opposed to training chatbots on Reddit arguments and fantasizing that there's conscious thought behind it). Once it's a plausible thing, people can assess the risks. And if you can't trust the people of the future to assess the risks when the risks dawn on the horizon, then you definitely can't expect the people of today to assess the risks from our current position.

This topic is, frankly, something that smart people (more specifically, "sci-fi nerds," as one other commentator put it) seem to bring up in order to signal that they're smart. If you just wait until it becomes relevant to talk about it, you won't get to show that you were ahead of the game. So start talking about it now! If it ever comes up in the future, you were in on the ground floor.


"Maybe I’m being mean/hyperbolic/sarcastic here, but can you think of another example? Even in case where this ought to be true, like (3) or (4), nobody ever thinks or talks this way. I think the only real-world cases I hear of this are obvious derailing attempts by one political party against the other, eg “if you really cared about fetuses, then instead of worrying about abortion you would worry about neonatal health programs”. I guess this could be interpreted as a claim that interest in abortion substitutes for interest in neonatal health, but obviously these kinds of things are bad-faith attempts to score points on political enemies rather than a real theory of substitute attentional goods."

I mean, I don't think it's derailing if someone makes the argument that pro-life folks should support robust sex ed programs and access to contraception, which would lead to far fewer abortions. That, to me, is a much more analogous argument to the narrow AI over general super-intelligent AI one than the other examples pointed out here, but maybe that's just me. I don't think it's nonsensical to suggest that we should address the building blocks/foundations of an issue more than worrying about known unknowns of the future. Granted, with abortion that's already happening in the present day, so that part isn't entirely analogous, but in both cases it seems to me that if you address the root causes, you greatly mitigate the other areas of concern. That's why the police brutality vs. false evidence example doesn't work—instead, Acemoglu's argument, to me, would be that we should address police corruption, which would probably mitigate the things that stem from that corruption.


> (4): Instead of worrying about “pandemic preparedness”, we should worry about all the people dying of normal diseases like pneumonia right now.


> I don’t think I’ve heard anything like any of these arguments, and they all sound weird enough that I’d be taken aback if I saw them in a major news source

What? #4 was a super common argument in Feb 2020, see all the "why are you worried about coronavirus when you should be worried about the flu?" takes.


- 3, 4, 5, 6, and the abortion vs neonatal health example all seem like reasonable statements that well intentioned people might make... even the police brutality one.

- isn't the premise of Effective Altruism (and Rationalism in general) that you can make distinctions about where attention is most valuably dedicated?

- sometimes people phrase these as tradeoff arguments (colloquially) when what they actually mean is a sequencing argument "I think X needs to come before Y"


My problem with the argument "we should worry about present problem x instead of future problem y" is that x is often undefined and will never be solved.

For example, Acemoglu wants to ignore AGI risks in favor of current problems like AI displacing workers...but how do you solve that problem? How long will it take? What's the point where we can declare victory on this issue, and start thinking about AGI?

Probably never. Labor pool displacement is a battle we'll be fighting for as long as AI exists. So basically this cashes out to "we should never think about AGI at all."

I feel the same way about "people shouldn't go to space when there are people starving on earth!" type arguments. Yes, it might be better to direct money away from certain things and toward certain things. But since it seems likely that there will always be at least a couple people in poverty...when do we go to space?


I have been a fan of you rationalists for a long time, but this 'dispute' has, at long last, given me a glimpse of why people I admire - say, Tyler Cowen - have mixed feelings about you. WTF?


(1) seems obviously true (in the sense that it's a legitimate argument) to me. My model is that attention to police abuses is abundant, while political capital to do anything about it is scarce. Assuming that the police unions will fight to the death to prevent any reform from passing, we need to all line up behind whichever one is most important, or we'll end up with nothing.


I've heard arguments like (6) fairly regularly -- and sometimes made them, but since I am American, the argument is "Instead of worrying about racism against Uyghurs in China, we should be worrying about racism against African Americans and Native Americans in the United States." (But I would consider the opposite argument to be valid if anyone was making it in China.) While I don't particularly agree with Acemoglu's argument (based on your retelling), I do generally agree with arguments of the form, "Instead of worrying about potential future problems which I have negligible ability to impact, I should worry about known current problems that I have some ability to impact," and I suspect Acemoglu believed he was making an argument of that form.

As a general rule, I don't think most economists or pundits have any understanding of tech and therefore don't address the actual circumstances under consideration. For example, on information security from what I've seen, I think programmers are much more inclined to make the argument, "Game theory says we should make paying ransoms to hackers sufficiently illegal that no reasonable person in charge of making such a decision would ever pay the ransom." Whereas, economists are much more likely to say "Game theory says companies need to spend more on information security to prevent hackers from damaging their systems" (I read three such articles in Bloomberg Opinion in the couple of weeks after the Colonial Pipeline hack) which would be true if hackers primarily were attacking vulnerabilities that existed due to oversights (i.e. errors of omission), but is false because they are primarily attacking vulnerabilities that exist due to bugs (i.e. errors of commission which nobody responsible for realizes are errors).

Similarly, there are many incorrect models of how ML works and might progress under which Acemoglu's argument is more or less correct. (e.g. If you assume future A.I.s will be built up by iterating on current A.I.s, instead of realizing that most major advances in technology are going to come from largely greenfield projects, it makes sense to say: before we worry about hypothetical bugs that we might introduce in the future, let's worry about all the extant bugs that are already there. Alternatively, if you believe advancing AI is sufficiently categorically different from computer programming that we could feasibly forbid AI research and roll back technology by five years without rolling back technology by 80 years, his argument makes sense.)

(The correctness of the argument in incorrect paradigms doesn't prove its incorrectness in the correct paradigm. I don't know how ML will progress, and it's conceivable his argument is also more or less correct for how ML will actually progress, but I doubt it.)

To anyone familiar with MIRI, the rebuttal, "The problems Acemoglu is discussing are just instances of the ad hoc approach to AI research/implementation that are currently common, and the best way to address them is to research how to make an A.I. which provably does what it's supposed to do which is precisely what institutions like MIRI are trying to do," is so obvious it doesn't need to be said. (I'm not confident that statement is true either, but I think for anyone who has had significant exposure to Eliezer's thoughts on the matter, it's such an obvious response that Acemoglu isn't participating in the conversation unless he addresses it, which he doesn't.)


I think it is time to explain why risks of AGI (artificial general intelligence) are often ignored or downplayed by researchers in the field.

Among people closely related to AI research it is widely recognized that AGI has been created at least twice at Facebook, at least three times at Google and at least once at OpenAI (the latter coincided in time with OpenAI becoming much less open about its research). I say "widely recognized" rather than "known", because in all cases we only have circumstantial evidence. Shortly after AGI was created the plot unfolded similarly -- in spirit -- to the plot of "Definitely Maybe", a book by Russian sci-fi authors Arkady and Boris Strugatsky (known in Russian as "За миллиард лет до конца света"). A series of strange coincidences led to the projects being abandoned, with very little evidence left.

This effect was dubbed the DM-effect (for "Definitely Maybe") and is responsible for multiple recent breakthroughs in AI. The name of the famous AI company DeepMind is actually a backronym for DM-effect. After one of the projects that triggered the DM-effect was identified, said project was planned conditionally on the failure of less ambitious projects, which led to the stunning successes of AlphaGo, AlphaZero and AlphaFold.

It is not easy to harness the power of DM-effect, and failures to do so are difficult to investigate because of its nature. It is suspected that at least one known DM-triggering project was irrevocably lost while trying to use it to boost other projects. Many believe that the recent firing of Timnit Gebru (former AI-ethicist at Google) is related to the DM-effect that enabled the BERT project.

The DM-effect is very badly understood and its unchecked use by big corporations is the real threat of AGI, as the threat of AGI itself seems to be mitigated by universal laws yet to be discovered.


The problem is that these arguments sound strange, when they shouldn't.

They are all about prioritization, something severely lacking. And the problem is that we feel like we shouldn't have any limits, like we could and should worry about everything, and that we can, and will, fix everything. Lest anybody constrain our "freedom" to do everything at the same time.

So we're used to worrying about everything all the time, getting enraged at this, and then at that which is 100 times less important, in equal measure or even more (just as with celebrity gossip, some less important issues gather many times the outrage, comments, funding, etc. of other far more important ones).

In the end, we're never putting things in perspective, and doing almost nothing about any of those things we "worry" about (or are told to worry about every week from the media and social media), and we do even less about the important things we should be worried about most.


What about looking at the other project Musk is famous for?

"Instead of worrying about how to settle Mars in the future (which is, by the way, a foolish and outlandish science fiction idea), we should focus on the problems here on Earth we have right now."

Maybe it is just people not liking Musk! But I think it is a bit more universal. (3) seems borderline comparable to the AGI risk "substitute goods" put-down argument. Prior to COVID, worrying about pandemic risk could not be dismissed totally in an off-handed way, because there is evidence of past pandemics (like the Black Death) having devastating effects. However, there is a weaker put-down argument which has some similarity:

"You know those fringe preppers who want to build self-sufficient farms in the woods and buy lots of guns and are worried about pandemics? That is silly, because we have modern healthcare, government and civilization, and obviously a future pandemic will not be a zombie apocalypse. Instead of worrying about civilizational collapse from a killer pandemic, what you should do instead is look at these not-insane disaster precaution checklists: have enough water and food to survive a disturbance for a few days, follow the instructions from authorities, etc."

Sounds familiar? It is not as juicy a substitution: there is some silly pop culture catastrophe scenario to put down (zombies), but there is a precedent for pandemics, so some form of the serious risk is granted to exist. Consequently, everyone who wishes to talk about "serious" pandemic preparedness doesn't feel they always need to attach precautionary notices before talking about it ("we are not like those fringe outgroup people preparing for zombie apocalypse, but here are some important things about pandemic preparedness ..."). Yet some people will sometimes write that preppers have seen too many zombie movies and are excessively worried about implausible scenarios.

Now, back to AGI risk. Long-term vs short-term AGI risk as a notably juicy "substitute goods" pairing sounds like a good framing, but I want to argue it is not surprising at all:

"And it just so happens that the only two substitute goods in the entire intellectual universe are concern about near-term AI risk, and concern about long-term AI risk."

Imagine we had never witnessed a serious deadly epidemic (a strange stable state where human diseases were either super contagious but not serious, or deadly but not very contagious, maybe because we never domesticated cows). Then we would learn about germs, and some science fiction authors would speculate about killer diseases. Then a generation or two later, some slightly more respectable people would realize that actually, nothing in evolutionary theory precludes the possibility of contagious deadly germs, and there are killer diseases that kill ants, so maybe we should worry about it. (If you want to paint them as serious: in the alternate universe, people had recently started experimenting with this new and exciting technology called "animal husbandry", inspired by the advances in evolutionary theory.)

There is a reason why AI risk sounds weird and sci-fi-ish: "technology misusing itself" is a weird and sci-fi-ish idea. It has a unique problem in that it was first conceived in its recognizable form and became popular in science fiction decades before the technology got to the point where any kind of AI risk started looking plausible. (Or plausible to the sort of people who want to remain grounded in their predictions, instead of writing or reading SF.) Asimov was writing stories about the three laws of robotics before anyone came up with the term "artificial intelligence" and before we had anything resembling anything that could one day have any of the capabilities of R. Daneel Olivaw.

Also note, Asimov published in pulps, not the most prestigious medium around at the time. At best, you would have some researchers who read Astounding Science Fiction and were confident they were heading in the right direction towards robotics with their research. The unprestigiousness of the idea has stuck a little.

And, methinks it is not a totally unreasonable argument that some aspects of AGI risk talk could be misguided, because key ideas may carry unnecessary anthropomorphic baggage that originates from old SF. After all, it is a bit speculative. Theoretical results about rational agents can be proven correct, but it is a non-trivial task to show that agents are the correct way to approach the question.

Thus, it is not surprising at all that many people find it easier to talk and think about AI risk as narrow-AI risk, in terms of automation displacing workers and totalitarian surveillance, which are old worries from the 19th and 20th centuries, and to contrast it with the fanciful science fiction AI.


Much of the time, when I've encountered "instead of devoting resources to A, we should devote them to B" in response to someone advocating for A, it's been a not very thin disguise for either "I don't want anything going to A, ever, but saying that will get me pushback, so I'll cite B instead, which is a sacred cow" or "the people who benefit from B are more deserving (i.e. higher status) than lowlifes like you and the other people who care about A".

Net result - arguments of that form, in that context, tend to result in me thinking about what the speaker really meant, rather than what they said.

I haven't read Acemoglu and don't have much interest in the whole AI risk thing. But it would be amusing if their actual problem boiled down to "only nerds care about AI; we should focus on something real people care about" or similar ;-(


Here is Acemoglu's article in the Boston Review from two months ago which reads much more like something he intended to write: http://bostonreview.net/forum/science-nature/daron-acemoglu-redesigning-ai

It is about what he sees as the problems and solutions in AI and related policy. To summarize, in spite of the good things AI and automation might do, surveillance and human job loss are big negatives. This requires policy solutions:

"The same three-pronged approach can work in AI: government involvement, norms shifting, and democratic oversight."

Worry now versus worry later is interesting rhetorically. But I'm putting this here for those who are curious about what Acemoglu may have been trying to say in the Post.

In terms of the arguments above, (3) was the basis for the Cold War (albeit with other countries, not Ethiopia), so there's that.


"“Instead of worrying about sports and fashion, we should worry about global climate change”? This doesn’t ring false as loudly as the other six. But I still hear it pretty rarely. Everyone seems to agree that, however important global climate change is, attacking people’s other interests, however trivial, isn’t a great way to raise awareness."

I agree it's a bad strategy, but thoughts along these lines occur to me daily—about myself above all, but about everyone else, too. Working on any other social/political/anything issue just seems bizarre. It's like there's a Borg cube orbiting the planet, and if we *all* focus on doing *everything* we can maybe we'll beat it, and if we don't we all get assimilated, and.... everyone is still worried about sports and fashion and the middle east and whatever. (Again, very much including me!) I think this is related to why it's a wicked problem: the scale, scope and threat of the problem interfaces really badly with our danger-judging heuristics, to the point where we manage to think about other things. If we were in equivalent danger in another category... except I don't think we *could* be in equivalent danger in another category, because long before we got this close we would have gotten our acts together to solve it. In no other area would we let it get this close.

I guess the plus side is this: whatever else you're worried about—dangerous super-AI, racism, wokeness, the middle east, democracy, whatever—it most likely won't be a problem in 50-100 years because civilization will have destabilized to the point where it can't be a problem any more. Yay, I guess?


Jesus Christ was also fond of "Don't worry about X, worry about Y" arguments:

"And why beholdest thou the mote that is in thy brother's eye, but considerest not the beam that is in thine own eye?"


I think Acemoglu is doing something like this:

1. You may think that worrying about AI is nerdy, but let me signal to you that I'm with you in spirit and definitely not a weird AI nerd by making fun of a weird nerd obsession in a way that makes it clear they're an outgroup.

2. Now that I've established I'm not a weird nerd, let me tell you why this somewhat less weird nerdy concern is actually a real concern.

3. AI is bad.

4. But remember that I'm not a weird nerd like those other ones who worry about something super sci-fi and fantastic, and this is a grounded legit concern.


>>(1): Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people.

You're on to something here.


Indeed, this kind of bad argument actually reduces the space for serious and important discussions about AI risks. In particular, the arguments for AI x-risk aren't very well developed IMO and rest on some pretty shaky foundations. I don't know how it will turn out, but we absolutely need people taking criticisms of these arguments very seriously and responding by trying hard to improve them, and I think we need to get those arguments addressed more in traditional philosophical, CS and biology journals to really evaluate them. Otherwise, it's just too hard to attract people who find the x-risk arguments unconvincing to spend time raising good challenges, and the AI x-risk proponents won't see it as rational to respond unless doing so could convince people who aren't already on board. This kind of bad criticism just creates mutually suspicious camps (even though many people are in both).


One thing which really upsets me is that this kind of bad criticism also obstructs real, necessary criticism of existential AI risk arguments. I'm still not sure whether AIs pose serious x-risk. Specifically, I think the foundations of AI x-risk aren't well developed on the following two issues:

1) The assumption that we can model AIs as having things like goals/beliefs. I mean, those are only so-so approximations for people (we behave as if our beliefs/goals change in different contexts), and Bostrom offers no real reason to think that AGI will have simple, coherent beliefs/goals (obviously any behavior optimizes some reward function, but not in a useful sense) rather than, say, just treating eliminating disease via recommending new drug targets totally differently than doing so via manipulating humans to get into a war.

Bostrom's argument here seems to be nothing but: it seems like more capable/smarter creatures have more coherent/global goals, but that's exactly what the critic would predict here too. They would just say that's a consequence of evolution favoring global optimization for reproductive success/not dying. As humans, realizing that we act as if we value different things in different contexts creates pressure to act in a more uniform manner, but it's not at all clear this will tend to be true for AI, or if it's just an artifact of evolutionary pressure.

2) The fact that most arguments that misaligned AI represent a hard to contain threat (not merely something we might want to not rely on to control all systems) seem to be largely narrative in nature and those narratives often seem to suppose sci-fi style superpowers from extreme intelligence. Given that we tend to be smart and love that property it should naturally raise some red flags.

In particular, I think the phenomenon in complexity theory (indeed in computation more generally) that natural problems tend to be either relatively computationally easy or quite computationally hard suggests that superintelligence just isn't that powerful on its own. Sure, it's certainly an advantage, but I'd suggest that it's generally just really, really computationally hard to figure out how any given intervention will affect the world (i.e. as you look at more indirect effects, the resources needed grow very quickly… I'd say big-O, but technically this isn't in terms of input size).

In other words, what if it's just not computationally tractable to figure out how your words/suggestions will affect people several links removed, much less affect elections and wars, except in the ways that are also obvious to us? In that case AIs can just be kept off the direct internet. Sure, maybe they trick their handlers into putting them on the internet, but if intelligence doesn't give them superpowers to convince people, that just means everyone goes 'gasp… oh no, the AI got loose' and shuts it down, and it just doesn't have enough predictive power to scheme out any high-probability counterplay.

On this take there is still real AI risk but we need to ditch the evil genie style narrative and focus more on the usual kinds of human frailties and stupidities of giving too much authority to machines/systems etc…


"Instead of worrying about sexism against men in the US, we should be worrying about sexism against women."

The dynamic isn't particularly rare and unusual; instead, it's so common and prevalent that it's invisible.


I do think that there is a certain pot of money that only goes to academically respectable concerns and, while there are many academics who do take AI x-risk seriously, it still isn't seen that way generally. I mean, sure, you can get some papers accepted about it, but you won't get a grant to study it, while you will get grants to study AI bias, security, etc.

I suspect that a big part of what's going on here is basically a move to defend one's chunk of academic influence/funding/etc… by keeping novel concerns out of the academic overton window of serious problems.

And yes, there really is a kind of gentleman's agreement here that once something gets established as a serious academic issue with its own specialists and students and so on, then you can't get rid of it. I mean, that's why we are burdened with some kinds of methodologies in some areas of philosophy (e.g. continental-style stuff) and other humanities that we don't have the slightest reason to believe are truth-conducive. Once they have the status of serious academics, you can't kick them out, because so many academics outside of STEM would fear that once that Pandora's box is opened, maybe their field would be next.


What are the most serious weak spots for Eco-terrorism?


Scott, I've followed you for years, but this pair of posts is the first time I've felt like you're outright lying about your motivations and beliefs. I'd just apologize, dude.

This logic seems to work when we are talking about some centralized committee figuring out how to spend money. In that case the trade-offs don't even have to be related.

"Instead of waging wars we should invest in the economy."

"Instead of buying luxury goods we should donate money to charity"

But if the trade-offs are related, I guess the argument is easier to understand and earns more points in political conversation, because it doesn't require people to hold multiple things in mind:

"Instead of funding the police to fight crime we should fund more social guarantees to deinsentivise crime from happening" - maybe the best course of action is to have both and get money from some other place, but it will require to talk about this other entity, making the idea more complicated.

But as was mentioned in the post, this logic makes little sense when we are talking about worrying. Which is interesting, because there is a way our worries can be converted into financial incentives for the government, or into our own donations to causes we consider important. Hmm.

Maybe it's a bug in our psyche? Maybe it is reasonable to adjust our worries about somewhat unrelated causes x1,...,xn by p1,...,pn percent in order to better correspond to reality? But it's a complex idea that we can't actually compute, so we end up with a vague feeling that some person worries about xi more than us while worrying about a related xj less than us. And it just feels wrong, and we end up writing articles similar to Acemoglu's.

"If Elon Musk stopped donating to long-term AI research, he would probably spend the money on building a giant tunnel through the moon, or breeding half-Shiba-Inu mutant cybernetic warriors, or whatever else Elon Musk does. Neither one is going to give the money to near-term AI projects. Why would they?"

AND THAT IS EXACTLY THE PROBLEM. "Great oaks from little acorns grow", it's TODAY'S small problems that will develop into big ones if we don't fix them.

Musk and others *like* the Long-Term AI Research because it's the sexy, flying-cars, SF version of AI Risk - the rogue super-computer that has achieved human levels of sentience, god-levels of intelligence, and super-villain levels of motivation, and is now in a position to take over the world, bwahahahaha!!!!

That's cool, that's fun, and that's so far off (if indeed it's even plausible) that it's not an actual threat. The small-scale stuff that is a problem, like "the bank messed up my account due to their new IT system and now I can't get at my money or pay my bills or have my wages paid in and nobody can fix it because I get put on an automated phone menu or on hold for two hours and can't reach a human being, and if I do they put me straight back in the phone queue" - that's not cool or sexy or fun, so they ignore it.

But *that's* the kind of AI tech interfacing with the human world that is affecting a lot of people and causing a lot of misery.

The problem is not the tech. The problem is people. And yes, I'd tie all the long-term alarmists together and dump them on a desert island, so maybe we could have a sensible discussion of "hang on, why are you rushing to automate everything? shall we maybe have a look at the profit motive and how putting that first is going to make things worse for everyone, as you rush to create AI that will fatten the bank balances?"

is this post directly about Acemoglu and AI, or is it more like one of those overcomingbias/lesswrong posts where we must sharpen up our mistakes?

Isn't 2-6 just effective altruism?

The obvious difference between Scott's examples and the long- vs. short-term AI comparison is that all of the harms in his examples are things that are either happening right now or are plausible near-term extrapolations based on ample exemplars in the recent past. They are amenable to analysis using hard, real-world data.

The topic of long-term AI risk is just not like this. We routinely dunk on businesspeople and politicians who naively project _linear_ relationships out to infinity, while the AI risk folks do the same thing with _exponential_ curves!

Does it make sense to think about long term AI risk? Sure! Even spending some time coming up with a philosophical framework for AI alignment seems reasonable, at least for folks for whom that bee buzzes particularly loudly in their bonnets. But spending millions of dollars on long term AI risk projects seems utterly futile, given that we have no idea what a generally intelligent computer program might look like or what its inherent limitations or constraints might be. We're extrapolating from our extremely incomplete knowledge of the one generally intelligent agent we know of -- ourselves.

Long-term AI risk scenarios to me are like cavemen worried about "giant rock-throwing risk". Cavemen who've seen nothing but other cavemen throwing rocks might imagine giants who could throw really big, house-crushing rocks, and decide that the best way to prevent that would be to stop tall people from breeding. Over time, with more knowledge of mechanics, they could extrapolate to the idea of catapults, which look and are constructed nothing like a giant person, but can throw giant rocks anyway. They might sagely decide to limit construction of catapults beyond a certain Brobdingnagian size to avoid building one that could destroy the whole world. But no amount of effort they put in at the caveman technical level would accurately predict a nuclear missile or a gravity-sling, nuclear-rocket powered interplanetary kinetic kill weapon, let alone come up with some sort of specific technical or social protocols to prevent it from being a risk to them.

Honestly, none of your examples are of an outgroup arguing that we need to avoid something that the mainstream media culture considers a low-probability contingency.

You absolutely do see this in other areas. I've even been guilty of it:

"Republicans endorse originalism out of worry about some future tyrannical government, but don't worry about Donald Trump's authoritarian leanings."

"People care deeply about gun rights because they don't want to be stripped of the ability to protect themselves from a hypothetical authority figure, but they don't care about police brutality."

"We fear a Chinese takeover of Taiwan, but not the rollback of voting rights in our own nation."

The requirements for this argument to be made are:

1) A statement in the form "We worry about A but not B" where "A" and "B" are in the same domain (and you can get very creative with conflating domains).

2) "A" must be a hypothetical that takes place in the relatively distant future and is not a certainty.

3) "A" must be a concern that is endorsed by a sizeable minority culture but not the metropolitan elite journalism culture.

4) "B" must be a concern that is endorsed by the metropolitan elite journalism culture.

I actually think you're being too kind to the article. It's possible that the individual who wrote it (who seems to be brilliant within his field, though I admit I'm not familiar with his work), was just borrowing the template.

But the template itself is: "This other culture is infringing upon our monopoly on truth and must be destroyed."

"Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia."

I realize you mean this as a straw-man argument, but I honestly think it makes sense. Ever since nuclear weapons were invented, we've obsessed about nuclear war, but they haven't been used except in the initial case in Japan, and they are unlikely to be used in the future because any country using them knows it will be retaliated against in kind. Even the clichéd "insane" governments like that of North Korea aren't that insane. But we kind of accept conventional warfare as just a normal thing despite millions and millions killed.

*Fair warning: I did not read this whole post yet (though I did read the previous Acemoglu one), and probably won't unless I'm convinced it's worth my time.*

I can't be the only one here who's become uninterested in discussing long-term AI concerns at this point. Which isn't to say I don't acknowledge the potential problems we're facing - it's just that I don't know what I'm supposed to do about it, and pretty much nothing I've read in the past few years since first being made aware of the issue has changed that picture or made me feel like all this discussion is worth anything. Just the same arguments being rehashed over and over again. Leave it to actual experts in the field, who do generally seem to understand the concern. At least on other topics I can form an opinion and advocate for certain political or behavioral changes. Even on topics that seem nearly impossible to move the needle on, you can get an idea of what should be done. I'm not even sure which way to try to move the needle with the long-term strong AI concerns. And the conversation doesn't seem to be progressing in any way I can tell. At one point I probably believed that just informing more people of the issue was marginally productive, but now it just feels like it's crowding out other conversations in communities like this, conversations which do evolve over time and could actually lead to political or behavioral changes.

Every single issue referenced in the 6 above examples feels more addressable by me (and I'm guessing most other readers here) than the long-term strong AI issue, which makes Acemoglu's steelmanned argument unique among them. I wonder if Acemoglu feels similarly and is just trying to steer the general public into more worthwhile areas of discussion. I can respect that, even if I find his rhetorical strategy unappealing. I hope we can lay off this topic a bit until there's actually new stuff to discuss. But who knows, maybe I am the only one.

The problem here is that the short-term risks from current AI (unfairness) are nothing whatsoever like the long-term risks from a superintelligent maximiser (the destruction of everything). The former isn't even a lesser version of the latter.

I don't think this deals with the issue fairly.

The contrasting examples were not "close, fixable, and relatively small problems" vs. "long-term, potentially-never-happening, and horrendously complex ones."

That's a relatively usual prioritisation that people face all the time: setting aside enough for three months' mortgage payments vs. paying into a pension; a vocational qualification vs. a master's degree.

Using economic growth to make warming manageable vs. eliminating fossil fuels.

And on the (fairly wide) margins there are trade offs of attention and government research money.

I'll tell you what I *would* like a solution to; I recently got an email from The Metropolitan Museum of Art (you buy *one* catalogue raisonné and you're on a mailing list forever) which was trying to entice me to buy some of their tat (as an aside, the stuff they have for sale isn't that great in my estimation in quality I'd expect, but that's not the problem).

It was for "holiday decorations". That's right - Christmas in July.

This is *bonkers*. We're not even into autumn yet! And I know all the shops do it, and I know about lead-in times and ramping up production to get the products out on shelves, but it's *nuts*.

You have the Hallowe'en and Christmas goodies jostling for space on the shelves, then the 'January' sales happen on St. Stephen's Day, and the Easter chocolate is now budging up against the unsold Christmas goodies.

By the time the real day - be it Hallowe'en or Christmas or Easter - comes along, you're sick to death of it and it's not special because you've been seeing it for three months already.

If God-Emperor AI makes it so that Christmas only starts in December and Hallowe'en in the third week of October, I will gladly place my neck beneath its silicon jackboot.

The problem here is that you believe that the AI Singularity risk is very real, and in fact imminent. People like Acemoglu -- and, obviously, myself -- believe that it is science fiction on par with alien invasions. I can't speak for Acemoglu, but personally, when I hear impassioned pleas to invest more money/effort/publicity into preventing the Singularity, I'm not just worried about the waste of resources (though that is a concern). Rather, I worry about the public perception of the entire field shifting towards "I'm not saying it's aliens... but it's aliens (*)".

Normally I wouldn't care, but the truth is that AI presents some very real dangers, today, right now. If half the people working on it walk off into contemplative Singularity think-tanks, and the other half shrug and ignore the whole field because they're not interested in discussing far-out kooky ideas, then the very real dangers of AI will never get addressed. This scenario would be... suboptimal.

(*) https://i.pinimg.com/originals/f3/cf/65/f3cf652b459d4e68d722526138955856.jpg

Another type of weird concern pair are those which go in opposite directions:

"Instead of worrying about police brutality, we should be worrying about the underfunding of police pensions."

>> Argument (1) must ring false because worrying about police brutality and police evidence fabrication are so similar that they complement each other. If your goal is to fight a certain type of police abuse, then discussing unrelated types of police abuse at least raises awareness that police abuse exists, or helps convince people that the police are bad, or something like that. Even if for some reason you only care about police brutality and not at all about evidence fabrication, having people talk about evidence fabrication sometimes seems like a clear win for you. Maybe we would summarize this as “because both of these topics are about the police, they are natural allies and neither trades off against the other”.

Much of this is only true outside the police station.

Inside the police station those topics would have tradeoffs, and it might even be well-understood what the costs and benefits are of ways to prioritize them.

In communication between the police station and the outside communities, the outside communities will advocate for the issues most salient to their own needs, regardless of what might be in the police station's best interest or what the police station judges is in the communities' best interest, and one cannot generally be offered as a substitute for the other.

>> Does Acemoglu argue this way when he writes about economics? “Some people warn of a coming economic collapse. But the Dow dropped 0.5% yesterday, which means that bad financial things are happening now. Therefore we should stop worrying about a future collapse.” This is not how I remember Acemoglu’s papers at all! I remember them being very careful and full of sober statistical analysis. But somehow when people wade into AI, this kind of reasoning becomes absolutely state of the art.

Acemoglu writes about economics from "inside the police station". His lack of interest in a 0.5% drop stems from the standpoint of theory that puts priorities elsewhere (even if in real terms that 0.5% may represent substantial damage to a significant number of people).

He writes about AI from "outside the police station". His arguments stem from an awareness of his community being particularly affected in certain ways, regardless of theory that would put priorities elsewhere.

IMO Scott nailed this in the first post; Acemoglu used AI x-risk as clickbait to then pivot to complaining about narrow AI and this follow-up steel manning post makes me think of the "stop stop he's already dead" meme.

>“Instead of worrying about sports and fashion, we should worry about global climate change”? This doesn’t ring false as loudly as the other six. But I still hear it pretty rarely.

I hear versions of that one quite often. But, sports and fashion aren't things people *worry* about, so much as they *enjoy*. I mean, you can use the word "worry" to describe the state of mind of someone who observes that the odds of their preferred sportsball team winning the championship have decreased, but it's a very different sort of "worry" than the one with global climate change and really it's part of a generally enjoyable process or sportsball (and fashion and whatnot) wouldn't be nearly as popular.

And to take another example several people have raised already, "Instead of sending rockets full of money into space, we should worry about problems here on Earth". But, space exploration isn't so much something people worry about(*), it's something people *hope* for.

So I think there's a hierarchy at work here.

"Instead of wasting time enjoying [frivolity], we should worry about [tragedy]", is seen as the mark of a Serious Thoughtful Person putting a Silly Selfish Person in their place.

"Instead of hoping for [fantasy], we should worry about [tragedy]", is seen as the mark of a Serious Realist trying to bring a Hopeless Polyanna down to Earth.

"Instead of worrying about [tragedy], we should worry about [different tragedy]", doesn't intrinsically privilege the speaker's position and looks a lot like Concern Trolling.

So, Worry trumps Hope trumps Enjoyment. Which is kind of backwards, and now I'm worried about how common that is. Thanks a lot, Scott :-)

* Yes, there's "I'm worried we'll all be killed by a giant meteor if we don't colonize space soon enough", but that usually is and is almost always perceived by outsiders as "I want to explore space because it's cool and I think the big-meteor argument will persuade skeptics".

0. Thank <deity> for the division of labor. I don't have to waste time worrying about things that are beyond my competence or control. I don't have to know all the details of my microchip's design, or waste time picketing a hospital with signs that say "IT'S LUPUS". I just choose the products/services/micronations that I like, based on summary statistics, and let the implementation details be somebody else's problem. But the genre of news largely consists of getting people to pointlessly worry about things that are beyond their competence and beyond their control. In a democracy, zillions of man hours could be saved by randomly sampling 1% of the population at age 18 to have the franchise for life. They'd put in the effort to become super-informed, and the rest would save a lot of time and stress.

1. One man's whataboutism is another man's demand for consistency. Sometimes "whataboutism" is used as a counterspell against any accusations of double-standards.

2. In a world where all resources are finite and all values are relative, almost anything can trade off against almost anything. But, the correct form is:

"Instead of <thing X I care less about>, we should worry more about <thing Y I care more about>" and it only works if your interlocutor already agrees with you about the relative valuations (in which case it would be unnecessary to say the above phrase). All the real work is in convincing your interlocutor why Y is more valuable than he thinks and X is less valuable than he thinks. Acemoglu didn't do any of the work towards convincing anyone why X is less valuable.

3. Straw examples just for fun:

Instead of worrying about awkward boys asking girls out imperfectly, we should worry about Yog-Sothoth devouring the galaxy.

Instead of worrying about whether Abe Lincoln's new beard looks weird, we should worry about how to avoid escalating the secession crisis into a scorched-earth civil war.

Baseless conjecture:

Human minds reduce every reported threat to stories and imagery, and unlike the possibility-space pointed at by "the police does stuff", AI hypotheticals don't lend themselves well to this reduction. Like broken hash functions, minds will map AI threat news to the same emotional [[AI risk]] symbol.

Hence one cannibalizes the other; in public discourse as well as in individual minds. "No this is not what [[AI risk]] is about, it's this *other* thing!"

The town just ain't big enough for two AI risks.

This is literally how the world works though; there is a limited amount of attention that can be divided up among all the issues, at least in the public discourse. Here is how I would rank the issues in terms of which should get the most attention:

1. "the deadly civil war in Ethiopia"

There are literally people just walking around (parts of) Ethiopia killing, raping, etc. others *right now*, and on top of it there is a media blackout, so things are even worse than they appear. This has to be the top priority; we need to figure out why this is happening and how to stop it (in either order). People are deliberately killing each other at a mass scale; this is an incredibly important issue that we should be putting our top minds on. It's only because we are used to such things that we dismiss them.

2. "we should worry about racism against Uighurs in China."

I would characterise it more as ethnocide than racism to be honest. It's not even in the same ballpark as racism in the US.

3. "we should worry about all the people dying of normal diseases like pneumonia right now"

I would rate this higher than pandemic preparedness right now, the whole previous decade of worrying about pandemics didn't help out with this one at all (outside a few East Asian countries) but how many know about the resurgences of diseases like RSV that are sweeping countries due to lack of contact from lockdowns last year?

4. "worrying about Republican obstructionism in Congress"

I would more broadly classify this as "how are Republicans still getting elected despite being obstructionist?" And that pretty quickly leads you to "why is US democracy so dysfunctional compared to other countries?" Arguably the #1 issue since if the US was optimised it could solve all the other problems easily.

5. "worrying about “pandemic preparedness” "

Yeah so the previous worrying didn't help much and almost everyone understands the basics of what needs to be done by now, but still, probably worth worrying about this one some more at the moment.

6 & 7. "Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people."

I find these pretty equivalent really, probably worry about brutality more though.

8. "worrying about racism against blacks in the US"

Arguably could be higher since similar issues drive this as are driving the Tigray War.

9. "worrying about nuclear war"

This is always overstated. Any time there has been trouble, the highest-ranked person in the room closest to the action has always prevented it from happening. And we are in a much more benign nuclear environment than during the Cold War. Still worth some worry, though.

10. "worrying about pharmaceutical companies getting rich by lobbying the government to ignore their abuses, we should worry about fossil fuel companies getting rich by lobbying the government to ignore their abuses."

This is just the same issue, so I'll treat it as one. Arguably it could be higher due to its being related to the dysfunction of government, which is a critical issue; as an effect, though (companies getting unfairly richer), it is only minor compared to these other issues.

This ranking is, admittedly, fairly debatable, but the point is that there really is limited attention and issues really *should be* prioritised.

Didn't you write a whole article about how people's internet focus issues are in competition? Like concerns over feminism being replaced by concerns over transphobia and racism? Isn't the whole 'stop taking up space, white woman' thing this phenomenon?

Another way to look at it is how much people would natively care about issues in that field. For example, if there is some problem with law enforcement whether it be corruption or wanton use of deadly force, most people would be pretty worried about it. Similarly in (5), many people would be pretty angry about lobbying (even if it's a relatively small issue) because it ties into an "evil other human" narrative that our brains easily go toward.

On the other hand, very few people think about AI, so it's plausible that worrying about one AI risk may take the onus off another AI risk. I'm not saying that that's the case - certainly we'd need a lot more evidence before we can even plausibly claim this as a hypothesis worth considering - but it's one way of looking at the situation.

Am I the only one who read the argument as equivalent to the "abortion vs. neonatal" one, in the sense of trying to cast one thing as "obviously" more concerning than the other - to then imply "therefore, if you truly, honestly care about this, then this other thing should concern you far more, and you're a hypocrite for spending such resources on battling a minor issue in the face of it"? At least it felt to me like the whole "AI risks that definitely matter right now" vs. "AI risks that might matter in the future" framing was there to establish the former as "obviously" more concerning by virtue of its immediacy and certainty.

"breeding half-Shiba-Inu mutant cybernetic warriors"

The notion that Elon Musk's new line of genetically engineered Shiba-girls will have kinetic military applications is absurd. And putting cybernetics in them would be downright counterproductive.

The Future of Humanity Institute does the same thing in the opposite direction, claiming that what they perceive as existential risks are orders of magnitude more important than anything else.

I think a reasonable, if maybe unproductively cynical, model is that the resource conflict actually taking place here has very little to do with money, and a lot to do with "attention paid to my pet issue in a given social media community."

Even though they may not share funding, the sort of people who talk about catastrophic AI risk and the sort of people who talk about AI unemployment have a lot of overlap. When people complain about AI risk being a distraction, I expect the *actual* frustration is about people who normally talk about their pet issue sometimes talking about a different-but-related thing instead, which is viscerally frustrating.

Everything else is just an attempt to wrap some mildly tortured logic around a desire to get back to talking about the original pet topic.

I feel like the obvious steelman of Acemoglu's position is that "AI Threat" in the mind of the average dumbass means a) killer robots and b) something that won't happen. Thus making it clear that there are ACTUAL EXISTING AIs that are threats RIGHT NOW is a useful and important political project, even if you are more concerned about MIRI-type threats than Facebook algorithms. Most people literally can't comprehend the former, but educating them on the latter might help against *both* issues by providing an on-ramp.

Even if one doesn't agree, it seems to at least make more sense than other interpretations.

> in the real world, vineyards and space programs aren’t competitors in any meaningful sense

I think your point about them competing for land goes against your conclusion here.

Let's say there are two types of land, Green and Brown, such that you can use green land for vineyards and brown land for space rocket stuff. They obviously don't compete for land.

Except if you can put tennis courts on both green and brown land: then using more land for vineyards means more tennis courts will have to go on brown land, leaving less land for space rocket stuff.

This generalizes to n ways of using n-1 (or fewer) types of land, provided there are enough overlaps. Usage A and B compete directly if there's at least one type of land suitable for both; and A and C compete indirectly if they both compete with B, either directly or indirectly. Then everything competes with everything else, either directly or indirectly, provided there's a chain of direct competition from each thing to each other thing, i.e. if the graph of direct competition is connected.

As a consequence of indirect competition, one expects that using more green land for vineyards will drive up the price for both tennis courts and space rocket stuff (and drive down the amount of land used for both).

[Concrete examples are not actually of concrete, and are thus suspicious.]
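The connectivity argument above can be sketched in a few lines of code. This is just an illustration of the comment's hypothetical (the land types and uses are the ones it names, not real data): uses compete directly if some land type suits both, and indirectly if a chain of direct competition connects them, which is a connected-components question.

```python
# Sketch of the indirect-competition argument from the comment above.
# Land types and uses are the comment's hypothetical, not real data.
from itertools import combinations

# Which land types each use can occupy
suitable = {
    "vineyard": {"green"},
    "rocketry": {"brown"},
    "tennis": {"green", "brown"},
}

def competes_directly(a, b):
    """Direct competition: at least one land type suits both uses."""
    return bool(suitable[a] & suitable[b])

def competition_components(uses):
    """Group uses into connected components of (possibly indirect)
    competition, using a simple union-find."""
    parent = {u: u for u in uses}

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u

    for a, b in combinations(uses, 2):
        if competes_directly(a, b):
            parent[find(a)] = find(b)

    groups = {}
    for u in uses:
        groups.setdefault(find(u), set()).add(u)
    return list(groups.values())

print(competes_directly("vineyard", "rocketry"))  # False: no shared land type
print(competition_components(list(suitable)))     # one component: all three compete via tennis
```

Vineyards and rocketry share no land type, so they never compete directly; but because tennis courts overlap with both, the graph is connected and everything competes with everything else indirectly, just as the comment argues.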
