267 Comments
Comment deleted

Excellent comment. Spot on. Understanding not computational.

I feel like a good amount of complaint about the TSA or FDA or FRA or other official bodies supposedly aiming at safety is of this form. Instead of worrying about whether someone's shoes are explosives, they should more accurately screen passenger lists; instead of worrying whether volunteers are hurt in vaccine experiments, they should worry about getting data about vaccines fast enough to help; instead of worrying whether trains get crushed in a crash with a freight train, they should just make sure passenger trains aren't on freight tracks.

It's a way to avoid the incommensurability of lives and money. It's really hard to argue that "$X per passenger is too much money for airport security that misses 70% of firearms" directly, because life/money tradeoffs are taboo. It's much easier to make the funding-redirection argument when the assumption that money will be spent is baked in, and the only question is whether you can save more lives by spending it on program 1 or program 2, since it operates in terms of a more socially acceptable life/life tradeoff.

I don't buy that. First, the government compares lives and money all over the place (e.g. what Medicaid/Medicare will cover). Second, it's relatively easy to formulate the biggest problems here in a purely lives-vs.-lives fashion and pretend (as we do now) that the money is a purely external constraint (e.g. we have $X; how can we minimize the expected number of deaths?).

I think it's because half the issue is that these policies exist more to signal moral values and essentially announce "our side won on this values issue." And when it is about being actually effective, any attempt to do what Kenny suggests would give huge discretion to those agencies, and in a society without much cultural uniformity that discretion would end up being exercised in ways that major parts of the society saw as violating sacred values (see above).

There's a different issue. TSA was clearly theater, but it had the benefit of being such a public annoyance that it hopefully forestalled the risk of individuals deciding the government wasn't doing anything and conducting retaliatory attacks (which would have caused even more problems, since allies able to stop Islamic terrorist organizing would be hesitant to cooperate if they were being attacked).

I take it you mean that they should focus more on the overall outcome rather than breaking it into pieces (whether this particular volunteer is hurt matters less than whether the net harm to volunteers is smaller than the benefit)? Is something like that correct?

I think this is kinda an inescapable effect of the political process (and maybe not all bad). If you want an agency to be able to focus on these more abstract/general/ultimate goals then you are necessarily giving a huge amount of discretion to that agency to decide what that requires and make a bunch of value judgements along the way.

In a more homogeneous society you could trust that the FDA won't do anything that would strike you as horrific if you leave them discretion to just safeguard public health. But I don't think you could do that in our society. Some Americans see it as deeply unfair, and tantamount to letting Americans die, not to use information about rates of terrorism by race/religion/national origin, while another group sees even mathematically justified use of that data as violating sacred values.

Given our constitutional system and our inability to really check whether the executive is doing something pretextually (as long as it's plausible and they aren't idiots), I think this means that compromise often has to form around procedures, not values.

Well, that and the whole "it's just signalling" issue.

Re trains: There are only freight tracks between many places. (One reason passenger trains stink in the US is that the freight trains have the right of way.)

It seems to me that his argument is more like "Superintelligence risk sounds silly, and makes people think that AI risk worriers are scifi nerds with too much time on their hands, thereby giving the entire field of AI risk a bad name". I think this is a pretty strong argument actually, regardless of what you personally think about superintelligence risk.

Comment deleted

Blame the cults. There's both nonzero cost and nonzero benefit to dismissing stuff that looks silly from the outside.

See https://idlewords.com/talks/superintelligence.htm for a nice discussion of the outside view of AI risk.

""""

When you're evaluating persuasive arguments about something strange, there are two perspectives you can choose, the inside one or the outside one.

Say that some people show up at your front door one day wearing funny robes, asking you if you will join their movement. They believe that a UFO is going to visit Earth two years from now, and it is our task to prepare humanity for the Great Upbeaming.

The inside view requires you to engage with these arguments on their merits. You ask your visitors how they learned about the UFO, why they think it's coming to get us—all the normal questions a skeptic would ask in this situation.

Imagine you talk to them for an hour, and come away utterly persuaded. They make an ironclad case that the UFO is coming, that humanity needs to be prepared, and you have never believed something as hard in your life as you now believe in the importance of preparing humanity for this great event.

But the outside view tells you something different. These people are wearing funny robes and beads, they live in a remote compound, and they speak in unison in a really creepy way. Even though their arguments are irrefutable, everything in your experience tells you you're dealing with a cult.

Of course, they have a brilliant argument for why you should ignore those instincts, but that's the inside view talking.

The outside view doesn't care about content, it sees the form and the context, and it doesn't look good.

So I'd like to engage AI risk from both these perspectives. I think the arguments for superintelligence are somewhat silly, and full of unwarranted assumptions.

But even if you find them persuasive, there is something unpleasant about AI alarmism as a cultural phenomenon that should make us hesitate to take it seriously.

""""

What are the equivalents of the funny robes and beads that you're seeing in the AI risk community?

1) Heavy intersection with beliefs about death, of which Aubrey de Grey is the official mascot. And Aubrey de Grey very much looks like someone who would be in a cult.

2) Heavy intersection with non-standard living arrangements/relationships, i.e. polyamory.

3) Elon Musk is associated with it, and everyone knows Elon Musk is a meme-lord. (I think Elon Musk has also demonstrated a genius for technology, but that is much more heavily debated than his demonstrated influence over meme-driven events e.g. the price of cryptocurrencies.)

4) Prominent figures within the community writing articles normalizing cultishness, desensitizing their audiences to standard cult triggers, arguing that those shouldn't really be cult triggers: https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult , https://www.lesswrong.com/posts/cyzXoCv7nagDWCMNS/you-re-calling-who-a-cult-leader

5) Related communities are generally willing to reject norms: https://thingofthings.wordpress.com/2015/10/30/the-world-is-mad/

For what it's worth I mostly agree with all of the things I'm mentioning, but it's still pretty obvious to me why taking the outside view makes some of these things look cultish.

Re 3, while he's far short of a perfect record, I definitely associate Elon Musk with outlandish visions that are widely dismissed as implausible and then happen anyway.

Elon Musk DOES look like a repressed, pocket-protector-in-the-white-polyester-shirt 1979 nerd's idea of a cool take-charge executive, entrepreneur, and man of the world. And today's Elon Musk is adolescent Elon Musk's gun.

Funnily enough I think a good counterargument for this comes from a comment left on the second link in 4) by none other than Scott himself:

I read recently an article on charitable giving which mentioned how people split up their money among many different charities to, as they put it, "maximize the effect", even though someone with this goal should donate everything to the single highest-utility charity. And this seems a bit like the example you cited where, if blue cards came up randomly 75% of the time and red cards came up 25% of the time, people would bet on blue 75% of the time even though the optimal strategy is blue 100%. All this seems to come from concepts like "Don't put all your eggs in one basket", which is a good general rule for things like investing but can easily break down.

I find myself having to fight this rule for a lot of things, and one of them is beliefs. If all of my opinions are Eliezer-ish, I feel like I'm "putting all my eggs in one basket", and I need to "diversify". You use book recommendations as a reductio, but I remember reading about half the books on your recommended reading list, thinking "Does reading everything off of one guy's reading list make me a follower?" and then thinking "Eh, as soon as he stops recommending such good books, I'll stop reading them."

The other thing is the Outside View summed up by the proverb "If two people think alike, one of them isn't thinking." In the majority of cases I observe where a person conforms to all of the beliefs held by a charismatic leader of a cohesive in-group, and keeps praising that leader's incredible insight, that person is a sheeple and that leader has a cult (see: religion, Objectivism, various political movements). I respect the Outside View enough that I have trouble replacing it with the Inside View that although I agree with Eliezer about nearly everything and am willing to say arbitrarily good things about him, I'm certainly not a cultist because I'm coming to my opinions based on Independent Logic and Reason. I don't know any way of solving this problem except the hard way.

"note: Hofstadter does not have a cult"

I tried to start a Hofstadter cult once. The first commandment was "Thou shalt follow the first commandment." The second commandment was "Thou shalt follow only those even-numbered commandments that do not exhort thee to follow themselves." I forget the other eight. Needless to say it didn't catch on.

This is a difference between individual and collective rationality and even just scaled individual rationality. If you truly believe the AMF is the most effective charity on the planet, you should give all of your excess money to it. Arguably, you should just give all of your money to it, even if it would kill you, because saving many lives is worth sacrificing your own.

On the other hand, that can't be a universalized giving strategy, because we'd all be dead. Even if just all charitable giving was diverted to the AMF, it would be so drowned in cash influx that it wouldn't be able to use it and most of the money would go to waste. The effectiveness of your individual giving strategy depends upon the limited scope in which it is applied.

This was the point of the Curtis Yarvin post linked in the last discussion about diminishing marginal returns. It's obvious with investing that you can't just find the next Amazon doing a $25 million series A and try to give them $250 billion instead, expecting to earn exactly the same rate of return but scaled up. There are no 20 quadrillion dollar market cap companies and you're not going to create one by just giving a unicorn more money than it can productively use.

The same principle applies to charitable giving and any form of resource allocation. Whatever the single best use of your resources is, how much you should actually allocate is limited by:

1) The maximum amount it can actually use

2) The minimum amount you need to allocate to everything else to sustain yourself

3) The comparative utilitarian leverage it can exert with an additional unit of resource

Both 1 and 3 are dynamic values that depend on each other and also depend on how everyone else is allocating resources. So you have to make some tradeoff here between what you really calculate is best and how you believe that might impact the overall system such as to change the calculation. For a sufficiently small giver like most of us, that might simplify to just give everything to one cause. For someone with a lot to give, it's much harder. But even for the small givers, it's hard to break the intuition that not literally every resource in the world should ever be directed to a single cause, even if you're not personally the entire world.
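
A toy sketch of the allocation logic above, in Python (the causes, utility curves, and budget are all invented for illustration): give each marginal dollar to whichever cause currently offers the highest marginal utility, with every cause's marginal utility falling as it absorbs more money.

    # Toy model: greedy allocation under diminishing marginal returns.
    def marginal_utility(scale):
        """Value of the next dollar falls as total funding x grows past `scale`."""
        return lambda x: 1.0 / (1.0 + x / scale)

    causes = {
        "bednets": marginal_utility(scale=50_000_000),        # huge room for more funding
        "local_food_bank": marginal_utility(scale=200_000),   # saturates quickly
    }

    budget, step = 1_000_000, 10_000
    allocated = {name: 0.0 for name in causes}

    for _ in range(budget // step):
        # The next chunk goes to whichever cause has the highest marginal utility right now.
        best = max(causes, key=lambda name: causes[name](allocated[name]))
        allocated[best] += step

    print(allocated)  # nearly everything lands on the cause with the most room

A small giver barely moves either curve, so the greedy answer comes out close to "give it all to one cause"; a giver the size of all charitable giving combined bends the curves and gets a genuinely mixed allocation, which is the point being made here.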

Isn't a lot of the purpose of diversification just hedging against significant losses from misjudging reality? There's no reason to assume that blue cards will always come up 75% of the time, based on the information presented. You seem to agree with this idea, based on your comment regarding investment strategies.

If you put all of your intellectual growth into the single basket of somebody's book recommendations, you will miss out on the same things they are missing. Maybe that's substantial, maybe it isn't, but hedging helps you against worst case scenarios. Putting 75% of your money on blue cards still makes more sense than 50/50, given current knowledge of blue coming up more often, but not necessarily 100%. If blue cards run out (or any number of potential reasons they drop off), then you're getting a 0% return and taking a massive loss.
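
For what it's worth, if the 75% rate really were fixed and known, the arithmetic comes out clearly against matching (a quick sanity check in Python, assuming independent draws):

    # Expected accuracy when blue truly appears 75% of the time.
    p_blue = 0.75
    always_blue = p_blue                                  # guess blue every time: 0.75
    probability_matching = p_blue**2 + (1 - p_blue)**2    # guess blue 75% of the time: 0.625
    print(always_blue, probability_matching)

The open question your comment raises is whether the rate itself is trustworthy; hedging is insurance against being wrong about the 75%, not a better strategy when the 75% is certain.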

In regards to charities, there's another game theory reason to mix up your giving. Signaling to operators of charities that they still have to perform in order to get your donations. If other charities are getting funded, there is competition for dollars. If one charity gets all of it, they lose a lot of incentive to improve and maintain quality.

The first paragraph of Scott's comment is wrong. The minimum amount of diversification someone should use when dividing up investments, including charitable donations -- and including most real-world situations that aren't necessarily about money -- given a certain level of confidence is calculable from the Kelly Criterion (https://en.wikipedia.org/wiki/Kelly_criterion), because the effects of contributions compound over time. The only situation in which allocating 100% to something that is 75% likely is optimal is a contrived laboratory game/experiment that breaks the effects of iteration/compounding. People's natural diversification instincts are pretty close to what the Kelly Criterion says they should be.
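
For a concrete sense of what the Kelly Criterion recommends in the 75% case, here is a minimal sketch, assuming repeated even-money bets (not exactly the laboratory setup, but it shows the scale of diversification involved):

    def kelly_fraction(p, b):
        """Kelly fraction of bankroll to stake on a repeated bet that wins
        with probability p and pays b-to-1 on a win (the stake is lost otherwise)."""
        return p - (1 - p) / b

    # A 75%-likely even-money bet: Kelly says stake half the bankroll, not all of it.
    print(kelly_fraction(p=0.75, b=1.0))   # 0.5

Staking 100% only becomes optimal when losses don't compound, which is exactly the "contrived laboratory game" condition described above.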

Incidentally, the Kelly Criterion is sort of just an application of Bayes' theorem adjusted for effect size, and the best treatment I've seen of how its implications should be applied to beliefs comes from the sequences on Less Wrong. I forget which posts, and the key words I remember are so frequently used on that site that I can't effectively search for them, but the general theme is that you should always hold multiple different contradictory beliefs and that the probability you assign to each of them should always be non-zero (and, equivalently, that none of them should be assigned a probability of one), because as soon as you become completely confident of any of your beliefs, you are liable to stay wrong about it forever.

If you apply these concepts to things like book recommendations and authors you trust and do a little math, the ultimate conclusion should almost always be that you end up assigning a higher probability to weird beliefs held by people you know are really smart than you would assign to such beliefs if you only took the Inside View of things. (The "do a little math" part is just that encountering contradictory evidence of equal strength pushes a strong prior towards 50% rather than pushing it towards 1.) So for example, I think the QM explanation of consciousness is clearly correct and the algorithmic explanation of consciousness is obviously absurd. Once I learn that John Archibald Wheeler and Roger Penrose are really smart people who more or less agree with me (and at least one of them is quite familiar with the algorithmic explanation), and that Douglas Hofstadter and Eliezer Yudkowsky are really smart people who believe in an algorithmic explanation of consciousness (and at least one of them is quite familiar with the QM explanation), I should be less confident of the QM explanation of consciousness than I was before learning that any prominent thinker held either belief.
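
The "do a little math" step can be made concrete with Bayes' theorem in odds form (a toy example with made-up numbers):

    # Start 90% confident in belief A, then learn that an expert you trust
    # (and who knows the counterarguments) rejects A. Treating that as evidence
    # exactly as strong as your original reasons (a 9:1 likelihood ratio against)
    # lands you at 50%, not at certainty in either direction.
    prior = 0.90
    prior_odds = prior / (1 - prior)      # 9.0
    likelihood_ratio = 1 / 9              # equally strong contrary evidence
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds / (1 + posterior_odds))   # 0.5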

(Irrelevant aside: I'm borrowing proper nouns from context. The only reason I believe that Hofstadter is smart at all is because a lot of other people I respect and think are really smart think so. My Inside View of Hofstadter is that he is an extremely articulate person who thinks in metaphors, and to someone who thinks in metaphors, a homomorphism is like a metaphor and a bijection is like a projection, and his misuse of math-related terminology is so cringeworthy to me that I have a hard time processing the content of what he is trying to say over the obnoxious voice in my head screaming, "THAT'S NOT WHAT THAT WORD MEANS!!!!" But the argument for assigning that theory increased probability stands after replacing Hofstadter's name with Daniel Dennett's.)

What's cultish about de Grey, exactly? He reads way more mad scientist than guru to me.

The Alcor people on the other hand...

I think Austin pretty much nailed it. In particular, (1) sometimes feels like the secular rebirth of Christian eschatology.

Personally, I have very different experiences discussing the subject with people far in[1] and far out[2] of the Bay Area culture.

When I've tried to share rationalist work with people far out of the Bay culture, some x-risk, some not, I've been met with... incomprehension bordering on a concerned "are you okay?". Actually, I've literally been asked that question. It's not that the work is technical or specialized, or that they're not smart people. It's that the work is always six or more concepts up its own ass, and these people have never even *heard* of these concepts.

Here's a set of things you might see discussed in a rationalist x-risk comment thread: double cruxing. who should update. Roko's basilisk. resonance checks. steelmanning. simulacrum level 3. These look like some funny robes, at best. At worst, they look like some properly scary kool-aid.

[1] In: transgender, rationalist, ML programming, singularitarian, etc.

[2] Out, some examples: old fashioned bookish New England academics, college football boosting Texans (we sometimes have parties whose theme is "Guns And Meat"), US military (doctors and other), call center working scifi convention types, LA surfers.

This gives me the impression the ACX comment section is much further removed from the average rationalist x-risk discussion than I thought.

This is why I remain an ACX (comment) reader and tend to avoid lesswrong, as well as avoiding live discussions in Eliezer, CFAR or MIRI contexts. And to the extent that I do consider AI risk, I hold my nose about Musk and give my attention (so far no dollars) to OpenAI, *not* the MIRI/Bostrom crowd.

Thanks for the link! I did not expect it to be nearly as convincing as it turned out to be.

isn't this a fully-general argument to never believe any weird-looking idea? like sure the author says "hesitate to take it seriously" but come on, they're giving this argument (presumably) with the intention of convincing people who are sort of new to the whole ai-risk idea but think the arguments sound plausible. and clearly the author doesn't take it seriously (for terrible object-level reasons, if there's no other reason than what they described in the presentation; and come to think of it, bad outside-view reasons too---they even bring out the oldie-but-goodie "guys it's just like religion!"). but you could say this stuff about any weird idea. and then (if the ai-risk people are right) you end up dead ten years later because you were too busy laughing at how stupid all those nerds are.

i think a more productive way of looking at this is something like "okay, suppose we were in a universe in which superintelligent AI is possible, but not obviously so. what strategy should i use such that, in that universe, we don't all die?" i don't want to a priori rule out entire possible worlds just because the people who talk about them use some weird vocabulary or have other surface-level similarities to weird ufo cults.

but obviously, whatever strategy i'm using *does* have to rule out worrying about weird ufo cults, because you're right that we can't spend 100% of our time worrying about strange existential risks. but i think most of these things can be safely reduced to negligible (say, ~< 5%) probabilities on object-level arguments and evidence alone. for ufos, for example, the cultists always have extremely detail-laden mythologies for which there is only weak evidence. every extra detail weakens the prior probability of their argument substantially. perhaps even more convincingly, now that everyone carries around HD cameras all the time, reports of ufo sightings seem to have mysteriously disappeared. i can contrast this with the argument for ai-risk: even the simplified version recited in the presentation only has six premises, all of which are plausible (i agree with them more strongly than that, but i am only arguing that ai risk is worth taking seriously, not that anyone should be as worried about it as i am; not without reading more about it, anyway).

i felt a strong need to reply to this comment because i feel like, even among really smart people like basically everyone in the ACX comment section, most of the arguments i see for not even pausing to consider whether ai risk might be worth taking seriously so often boil down to either 1. acemoglu's argument: trading off concerns with narrow-ai (which I basically consider to be whataboutism and mostly unrelated to ai risk, though yes also worrying) or 2. the people who talk about ai risk are weird. and it just bugs me a lot to see this (what i interpret as) a huge blind spot that so many people have for some reason. maybe i'm missing something, idk.

I'm replying because you felt the strong need to reply, and that's worth respecting.

Perhaps you only see objections 1 and 2 in the comments because you aren't taking the object-level objections in the presentation seriously, and don't really notice people making them. Or, we don't make them here because it's not a sympathetic space and nobody really wants to get in a big object level argument about AI risk.

But I can assure you that the average ACX commenter who doesn't agree with AI risk feels strongly about at least some of those object-level objections, often from personal experience in the space.

I myself have a child and a great deal of experience watching shaky ML work succeed, so I'm particularly sympathetic to the argument from childhood and the argument from actual AI. And like probably all ACX commenters, I've watched a lot of research fail through bad math, and I've watched civilization experience some considerable hiccups in recent times, so I'm sympathetic to the arguments from wooly definitions and Slavic pessimism.

I feel you and I both noticed the same thing in the first half of your reply: that it's just as much a fully general counter argument in favor of cults. Correct me if I'm wrong, but it felt like the second half was adding the necessary filter to change that. So I'm going to try to address the second half to see where I agree or don't agree.

First of all, I don't think 5% is negligible. There are all kinds of things I treat with deadly seriousness where the probability of the downside falls well, well under even 1%, such as seatbelt safety and gun safety. I don't like guns, but I have fired them, and when I fire them, I am absurdly careful, even though only a very small fraction of incautious people are hurt or killed by the gun they're firing.

I don't think this is a tangent, either. It's more to say that there are tons and tons of things where the expected value tradeoff involves *very* small chances and *very* major consequences, and the math on each of these gets necessarily fuzzy through small sample size and lack of sufficient experience to define the situation (again, wooly definitions). So we're not actually picking the things we take seriously or the things we dismiss based on dispassionate calculation of probabilities. We don't have that information.

We're picking it based on emotional reactions. I'll fully grant you that stereotypical cults actually make their story more implausible with almost every piece of information, but I contend that (1) you're not actually rationally judging probabilities, you're having an emotional "pfft" response that tends toward the same answers, and (2) not every bad idea is a stereotypical cult.

It's harder to make the same rational argument about Scientology. Sure, they believe in Xenu and space airplanes, but maybe they're on to something with the body having weird leftover emotions, maybe chemical/pheromonal instead of thetans, and weren't they basically the first people to doubt various now-doubtful psychiatric and drug recovery treatments? No, of course not. But I'm not saying that from a fully rational place where I've given them some degree of investigation. I'm saying it from a partly emotional place where they *still* look like pointy-hat wearers.

If you were in the context of a different belief system, say, Christianity in the 1700s, it would also be very hard to make the same rational argument about different but frankly dangerous offshoots of your belief system. Some aggressively self flagellating puritan offshoot would have looked like "my thing, plus this one weird thing that I can't easily dismiss" to a regular puritan, and many a regular puritan would and did step off a safer path to do such a weird thing.

Maybe OpenAI is a perfectly reasonable amount of money and attention thrown at AI risk, but if you're too emotionally receptive, you can't rationally dismiss a lot of other possibilities and you find yourself giving 35% of your yearly income to MIRI and attending conferences where you have long arguments about Roko's basilisk, and that's what looks like self flagellation to people 200 years from now.

It's not hyperbole, either. Scott once wrote that the Effective Altruism people at 80,000 Hours made him feel guilty enough about being a physician that he had to at least consider quitting to work for 80,000 Hours, and I don't know about you, but I'm among the people who felt that would have been insane in almost any world; physician is a hard won job pumping out guaranteed value. Doubly so for psychiatrist in particular. And around the same time, EA was having conference sessions along the lines of "should the molecules in the universe be given moral weight, and if so, should we change everything to favor them?"

It's because we have been burned on the SF-sounding claims of the past, be that "by the year 1980 the oil will have run out and we will be reduced to a dystopian wasteland" or "by the far-flung year 2000 we will be living and working in space colonies on the Moon and Mars".

"No, honest, killer computers are coming!" sounds like yet another one of those claims. "But it's true THIS time!" Yeah, line up there behind Paul Ehrlich, guy.

I know I'm banging on about this, but the problem is not the tech or the machines, the problem lies within the human heart and what we find desirable to pursue.

AI risk and AI evangelists both seem to agree that eventually we will get super-duper human-smart, human-aware computer intelligence, because we're working towards it and it is inevitable, given Progress and such like.

Why are we working towards it? Why can't we call 'halt' right now?

Well, because - ignoring all the talk about efficiency and better service and human flourishing, which I am not going to say is all lies and hot air, because some people do mean it - right now, people in big organisations are working on it because "the first person to crack this will become very, very, rich".

But your company is already very, very, rich. Why do you need to become even more rich? And that's the fatal flaw within us and the society we have built: if your quarterly returns aren't climbing year-on-year, then you're prey for the sharks and vultures.

Don't worry about paperclip maximising AI until you've first sorted out the real way that AI is being used by large companies today. As in this excerpt from the current issue of "Private Eye":

"Whether or not they've read Nineteen Eighty-Four, my fellow Amazon delivery associates know just what ‘Orwellian' means: Big Brother is our daily companion, in the form of software that we have to install on our mobile phones as a condition of our alleged self-employment. Mentor® DSP by eDriving works by monitoring acceleration, braking, cornering, speeding and ‘distraction events'. It is also one of the worst-reviewed apps on the Apple store…"

To sum up: Amazon gets around employment law by having its drivers be "self-employed contractors" selling their services to Amazon. This is in the name of cutting costs and bumping up profitability.

They also do things like the above, installing software to make sure the "self-employed contractor" is hitting targets. Which may work on USA roads but not on UK ones, and which encourage reckless driving and corner-cutting because drivers who don't hit the targets are financially penalised.

Instead of real-world experience tailoring both what is expected in performance, and the software being created, the top-down 'solution' is to impose software made for the USA on the UK drivers and set them targets that come out of calculations based on more computer projections.

It's not machines or tech doing this, it's human beings making decisions based on money. There is nothing wrong with wanting to run a profitable business or wanting to make money! But AI research is not about "human flourishing, fully-automated luxury gay space communism for all", it's about "this will put another Z cents on the share price, and once we work out self-driving vans, we can dump the humans altogether and put another fraction of a cent on the share price!"

And that is the attitude that will lead to the hoped-for/dreaded AI in the future. We shouldn't be worrying about machine ethics, we should be worrying about human ethics.

There isn't a like button strong enough for how much I like this comment. Thank you.

I'm old and cranky and I believe in Original Sin. Right now, we are richer and more advanced than at any period in human history, and yet we're scratching each other's eyes out over racism/[insert pet cause here].

That's not a problem that will be solved by technology, and *some* (not all, but some) of the AI proponents sound like they are desperately hoping for the super-intelligent god-tier AI, because we humans haven't sorted ourselves out but if we just build the *right* machine and give it the *right* programming and install the *right* ethical system in it, it will be *so* smart and *so* powerful that it will rule us all (as benevolent dictator) and solve all our problems for us.

Huh, what was that bit about "eat the apple of knowledge and you will be as gods" again?

Personally, I've always found the story of the tower of Babel to be very inspirational. It shows that if we humans put our minds and our hands together -- in work, not in prayer -- then we can make even a jealous, petty cosmic tyrant *fear* us. That's good. He *should* be afraid. If we could only stop fighting each other for five minutes, there's virtually no limit to what we could accomplish. We are together, but he is forever alone.

"Right now, we are richer and more advanced than at any period in human history, and yet we're scratching each other's eyes out over racism/[insert pet cause here]." Which is a drastic improvement over enslaving each other and putting each other in concentration camps over the same things, especially after adjusting for the fact that "scratching each other's eyes out" is an exaggeration of the current state of things; whereas, the slavery and concentrations camps were real. (And I'm not just talking about modern history. The Romans and Assyrians were persecuting and enslaving people of in-group out-group distinctions every bit as severely as the worst modern societies were.)

Amazon is Moloch, before it was Walmart. We both hate and support the evil we allow. (I'd be fine with cancelling our amazon prime membership, but I'm guessing the family will revolt at the idea.)

The pursuit of AI is somewhat the same, in that to not do it is to leave the field empty for the competition. When the technology is ripe for "building railroads", railroads will be built by someone.

Capitalism is Moloch, and stock markets are smarter-than-human paper-clip maximizers. AI has been around and powerful for a very long time.

Are you calling stock markets AI? I think markets (and Scott's fav. prediction markets) are examples of collective intelligence.

I disagree with you here, in the long term. Yes, in the short term, it's all about putting another Z cents on the share price. But in the long term, we now live in a world (our Western world, at least) where most people can afford communication devices that can instantly connect them to the global network of human thought. Sure, such devices have obvious pitfalls, but overall, I'd rather live in a world with Internet-enabled smartphones than ye olde world of 1980s.

Smartphones were developed by people who wanted nothing more than to get rich. They didn't care about "human flourishing", they cared about money... and yet... human flourishing happened anyway -- even though a bunch of switchboard operators lost their jobs.

Similarly, a world where delivery jobs are fully automated; language barriers don't exist; and yeah, maybe I can take a vacation in space... such a world would be better than our current world, even if a few unsavoury characters get rich in the process.

Adam Smith's invisible hand!

Agreed, this is how I read the essay. (I said the same thing yesterday)

Look forward to seeing you say it again tomorrow 😉

Yeah, it's kind of weird to get a follow up email full of really bad analogies to explain what I think was a pretty obscure reading of a very short and not-very-nuanced essay to begin with.

I agree that this could be a decent argument, but I don't think that's really the argument he was making. Or, if so, it was only referred to in a roundabout way and wasn't the only criticism of general AI risk.

But "giving the field a bad name" is something of a self-fulfilling prophecy, here. I don't think that if Acemoglu wrote an oped that instead said "we should care about short-term AI concerns, because they're pressing now and also working on them will help with potential long-term future concerns" people would say "what a nutjob! Short-term AI concerns are no longer important."

I think this effect has happened to me personally. When someone tells me they're working on AI alignment/safety I immediately downweight the work's relevance in my head. But then it turns out they're working on getting language models to make stuff the user wants or something, in which case I should have been more interested.

Of course that’s what acemoglu wants people to think because he is fronting for the network that’s already developing the superintelligent general AI that’s going to turn us all into paperclips! Actually is acemoglu even human? How do we know? Trust the sci-fi nerds 😉🤓🤩🤖👾

I will worry about superintelligence risk when we've solved the "passwords are case sensitive" issue. That is, a human can work out that "Mr. Alexander" and "Mr. ALexander" are the same person, and the second was just a data entry error, lemme fix that for you, now you can access your bank account, sorry about the inconvenience.

The 'smart' computer will insist "I'm sorry, Mr. Alexander, you are not the same person as 'Mr. ALexander' who owns this bank account, so I am not letting you access it" until the human IT operative comes along and says "dumb machine, it's the same guy!"

You know, this isn't hard to do. Presumably the bank uses a hacked-together old mainframe, or just thinks that it's a good idea not to do any "clever" mistake fixing. (Every time you add an automatic fix for problems like that, you introduce extra complexity and extra ways things can go wrong.) These aren't hard AI problems. These are the problems you get when the programmer is overworked and doesn't think fixing this is a priority.

Generally anything that isn't a legacy system does this: often it's hardcoded in, but it doesn't have to be. In the last post I made a kind of spurious claim that neural networks are similar to animal cognition - that may or may not be true, and I regret saying it.

But it's absolutely true that we also undertake a (very rapid) analysis when we see "MR. AlExander" that converts it to "Mr. Alexander" in brain-memory similar to the computer methods that do this.

If you type a non-case-matching username in any big company's website and get an error, I'd be shocked.

The reason passwords are case sensitive is just that it (technically) increases the search space for adversaries trying to decipher the password. There are better ways to do passwords, but this requires adoption from people - it's a human issue. We've adopted a faulty mental framework for thinking about passwords and security, and we're resistant to change.
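
The search-space point is easy to quantify (assuming, say, 10-character passwords drawn from letters and digits):

    import math

    length = 10
    case_sensitive = (26 + 26 + 10) ** length    # upper + lower + digits: ~8.4e17
    case_insensitive = (26 + 10) ** length       # letters collapsed to one case: ~3.7e15

    print(case_sensitive // case_insensitive)              # ~229x more guesses needed
    print(math.log2(case_sensitive / case_insensitive))    # ~7.8 extra bits of entropy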

These arguments also throw me off because they seem like the equivalent of saying "people think machines can do calculus, but my microwave is a machine and it can't."

My *bank* tries to keep usernames as secret as passwords and treats them as case sensitive.

There are good reasons to do this but they're also all human-error based. If you get a list of usernames and there's no two-factor authentication, you can run through each of them automatically and make a list of anyone who has made their password "password".

This is by design, and it would be trivially easy to "solve" this problem if people weren't intentionally not doing it. When you type your password into a computer, it's supposed to hash the password in a way that is maximally scrambled. (So that any bit difference in the password you type is equally likely to flip each bit of the hashed password BEFORE it sends the password to anything that is validating it, and the thing that validates it is then supposed to hash it again the same way before comparing it to the minimal possible number of entries in the database, ideally with database access that only lets it check whether there is an exact match, without any ability to check whether similar usernames or passwords even exist.) Anything less than this is a security vulnerability that makes the computer easier to hack, at which point we are no longer talking about what is feasible for technology to do but instead talking about an arms race between attacking and defending technologies. The reason we can't have more user-friendly logins and passwords is not that it's too hard to write that code; it's that it's too easy to write code that hacks into a more human-friendly login system.

i'm sorry, i hope this is just being snarky rather than your actual reason to ignore superintelligence risk. "there exist commercial, client-focused computer interfaces that aren't maximally convenient" and "humans have a non-negligible chance of creating dangerously smart AGI from some kind of research program or otherwise, in the near to medium future" can certainly both be true simultaneously.

Agreed, that was an incredibly stupid argument

Computers can already solve this problem, but you don't want them to, as this would require them to store actual passwords rather than just hashes, which is a huge security liability.
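
A two-line illustration of why the stored hashes can't help with near misses (SHA-256 is used here for brevity; a real login system would use a dedicated password hash such as bcrypt or argon2):

    import hashlib

    print(hashlib.sha256(b"Mr. Alexander's password").hexdigest())
    print(hashlib.sha256(b"MR. Alexander's password").hexdigest())
    # The two digests share no visible structure, so a server holding only the
    # first one cannot tell that the second attempt was "almost right".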

I'd think you could also solve it by having the computer, the first time a username/password is rejected, try hashing the hundred or so most obvious typos and letting the user in if one of those matches. That would be a much smaller security risk. But it would require a substantial contribution of programmer and security analyst time and inconvenience, for a very diffuse benefit in user convenience, so I'm not at all surprised that it hasn't been done.
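
A minimal sketch of that idea in Python (the variant generator is hypothetical and deliberately tiny; a real deployment would need a properly tuned password KDF, rate limiting, and a security review):

    import hashlib, os

    def hash_password(password: str, salt: bytes) -> bytes:
        # PBKDF2 is in the standard library; bcrypt/argon2 would be better choices.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

    def obvious_variants(password: str):
        """A stand-in for 'the hundred or so most obvious typos': as typed,
        caps-lock inverted, and first letter's case flipped."""
        yield password
        yield password.swapcase()
        if password:
            yield password[0].swapcase() + password[1:]

    def check_with_typo_tolerance(attempt: str, salt: bytes, stored_hash: bytes) -> bool:
        # Only hashes are compared; the server still never stores the password itself.
        return any(hash_password(v, salt) == stored_hash for v in obvious_variants(attempt))

    salt = os.urandom(16)
    stored = hash_password("CorrectHorse", salt)
    print(check_with_typo_tolerance("cORRECThORSE", salt, stored))   # True: caps-lock slip
    print(check_with_typo_tolerance("WrongHorse", salt, stored))     # False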

Agreed, and I think this is a problem of inferential distance. Scott believes that the Singularity is very real, and in fact imminent. Thus, he believes that the world needs to act, *now*, before it's too late -- and nothing else matters.

By analogy, if you truly believed that Jesus would return and start judging sinners in the next 5 years or so, I bet you'd work as hard as you could on spreading Christianity, avoiding mixed fabrics, and whatever other measures are advised for avoiding Jesus's wrath. You would be *reasonably justified* in doing so, and it is unlikely that anyone could convince you otherwise. The gulf between you and atheists who worry about little things like global warming would simply be too large.

And that is why singularitarians and AI-risk people look like cultists from the outside. "Damn what others think, damn how far I am from the norm, I believe <antecedent thing> that makes it all totally necessary". If you believe <antecedent thing>, of course, do that stuff. If not, it looks truly weird.

As a side note, this goes a long way towards explaining why anti-abortion Republicans behave the way they do. If you truly believe a society is killing a surplus million people a year, by all means, you are within your moral rights to do whatever it takes to stop it, including tearing down the society.

Good analogy, actually.

Yeah I agree with this interpretation. Similar arguments of this form are "advocating for banning industrial farming sounds silly, and makes people think that all people who want to reduce meat consumption are totalitarians who want to take away your hamburger" or "advocating for defunding/abolish the police sounds silly, and makes people think that all people who want police reform are crazy anarchists" or "climate change alarmism and worrying about the end of civilization from CO2 sounds silly, and will make people who want to prevent the Atlantic from swallowing Miami look silly". Again I am saying nothing here about my object-level positions on any of these issues, just the structure of the argument.

This is really a political claim and not an intellectual one. You will lose political capital by advocating AI risk which you should be spending on other stuff. Another one of this form is "arguing for open discussion on race and IQ is going to turn people off free speech". In fact, Scott made exactly this point at https://slatestarcodex.com/2017/04/11/sacred-principles-as-exhaustible-resources/ . AI concern is an exhaustible resource.

>And (3) and (4) still ring false - they’re theoretically plausible arguments, but nobody would ever make them.

I'd say that is not true. An obsessive focus on Covid (UK here) has meant that large numbers of people have died or will die because, for instance, they didn't get that 'lump' checked out and now the cancer has gone too far. As a population, we do not have decent granular calculations of risk and tend to push them to 0% or 100%; it reaches a point where the 'wrong' risk is the one being displaced.

Further Covid example. At the beginning, we thought that Covid had fomite transmission, so hand-sanitising stations went up outside all the shops. Now that we know this is a tiny risk and mask wearing is much more the point, the stations are still there, and the mantra is "Hands, face, space" - hands first on the list. Loads of people can only deal with one of those, so they sanitise their hands and then do dick-nose with their mask.

Most people are crap at processing information, and either/or choices absolutely come into play.

The closest analogue to his argument would be "instead of worrying about the long term risks of climate change we should be worried about the impacts of climate change that are already affecting us".

(Disclaimer: Did not read the original Acemoglu article, opinions are based entirely on second-hand information by Scott and commenters.)

This seemed like the obvious take for me as well - that "this is something that will be relevant in the future, in the same topic space" is an integral part. #3 and #4 of Scott's examples come closest to this and I consider them approximately the same degree of plausible.

(That said, personally I'm a staunch believer that resources for important topics aren't zero-sum, so "why not both? I continue to work on X, you work on Y that you think is more important" is my standard answer whenever someone accuses me of not doing enough Y or wasting my time with X - although it certainly doesn't *always* work; some things require so many resources that zero-sum is the right way of thinking about them, but that's been rare in my practical experience.)

You shouldn't worry about anything. If something bad might happen, do an estimate of how likely it is, and how much it would cost to ameliorate it. If it's too expensive, then bother your mind with other thoughts.

Worrying is a waste of neurons. Just Don't Do It (tm)

I think for many people, "worrying" maps to "estimating risks and coming up with potential amelioration plans" instead of "rumination" or "excessive anxiety" or so on, and so this advice will land incorrectly for them.

I know people who suffer from anxiety, and worrying is immobilizing for them.

Anxiety-sufferer here. I think this is a simple case of language being frustratingly imprecise--I commonly use the word "worry" to mean either "do a sober cost-benefit analysis of possible solutions" or "get stuck in an unproductive anxiety loop" depending on context, as many people do. Obviously the former is usually a beneficial course of action and the latter counterproductive.

But I think that often people critical of taking action against [GAI risk / Climate change / Election fraud / Voter suppression / etc] will abuse that ambiguity by accusing people of "pointlessly worrying" about whatever. It can be hard to counter such an allegation; "Yes I'm worried about it, here's wh–" "Ha, you admit you're just anxious and panicking!" Alternatively: "No I'm not *worried* about it, I just–" "So now you say it's *not* worth worrying about? Make up your mind!" Essentially, the word "worry" makes it easy to accuse careful planners of being anxious nuts.

Another thing I want to touch on real quick is that even the productive kind of worrying you describe still takes mental and emotional energy and time. There are enough issues in the world that you *can't* do a thorough analysis of possible solutions for all of them.

In other words: How should we budget our Worry Quota.

I choose to follow the Bobby McFerrin protocol.

I prefer the Alfred E Neuman protocol.

Those all seem like "trade off worry about this current, realized risk against this other current, realized risk". That's not a very reasonable comparison to super-AI risks versus current-AI risks: While we are familiar with the mechanisms of nuclear war, to pick one example, we have no idea what technological changes will be necessary to create a super-AI. We do not know how to build a super-AI, so analysis of the risks is premised on shaky assumptions about the thing itself. Thinking about how to effectively manage those risks is even more speculative.

Yes, I think this is basically what's behind people's intuitions of "why not worry about [this current thing] instead", even if they don't phrase it this clearly.

"Instead of donating money to help sick people in America, you should donate money to help sick people in sub-Saharan Africa, because money buys more and people are less able to afford necessary care so you can save lives at much lower cost."

"Instead of donating to help this Democrat win against McConnell, which won't happen, you should donate to these actually competitive races."

"Instead of spending money on highways, we should spend money on public transit."

I've seen all of these arguments in the wild, and I think they're intuitively compelling. I think arguments to redirect funding from A to B, where A and B have a similar goal/purpose/topic, but B is argued to be more effective, are actually reasonably common.

I'm not sure if I can rigorize this, but I have an intuition that it's significantly more acceptable to make transfer arguments for "tactics" than for "goals".

That is, you can say "we all want X, and we are more likely to achieve X by doing A than by doing B, so we should do A instead of B"

But it's much less acceptable to say "you currently want X, but you should sacrifice X to get Y, because Y is more important". To say this, I think you basically either need to be willing to say "sacrificing X is costless" or "it is impossible to get both X and Y".

(Although maybe the argument about tactics works because there is an implicit "sacrificing B is costless" argument, because doing A will accomplish B's true goal at least as well?)

Formally, I'm not sure these are really different, because there usually exists some lens where X and Y are both strategies for getting some more-abstract thing, like happiness or prosperity or long life. But they feel different to me.

Possibly that indicates that my brain has actually encoded X and Y as terminal values, even if I will intellectually claim that I desire them as instrumental towards achieving some higher-level abstraction.

Yes exactly this. The argument isn't that we move the intellectual and financial resources from long-term AI to short-term AI. The argument is that we move the political capital in this direction. You can call this goals vs tactics if you will.

How about this, which I've absolutely heard some version of before:

8. Instead of worrying about preserving endangered species and recycling household goods, we should be worrying about global carbon emissions.

That sounds like a simple optimization claim: Global carbon emissions are a bigger threat to endangered species than poachers-or-whatever, so if you value endangered species you should focus on carbon emissions rather than endangered species charity.

If so it’s not really a substitute in the way Scott is saying.

I recall reading some articles back in January/February 2020 arguing (4) - "Instead of panicking about a new disease that's only infected a few thousand people, you should worry about the flu".

I've also seen numerous internet commenters argue similar things to (6) and at least one CCP propaganda account arguing the opposite, though in response to the CCP attacking the US for racism against blacks (Americans attacking the Chinese in the propagandist case) and not as part of an effort to shift people away from BLM.

Comment deleted

The (6) that I have seen have been more along the lines of "why are you, a pharmaceutical lobbyist, saying that we should be very worried about fossil fuel lobbying?"

Which... seems reasonable enough, as an accusation of dishonesty rather than an argument about how we should prioritize. There's no tradeoff here, except in terms of attention - the CCP would rather we focus on black Americans, red tribe would rather we focus on Uighurs.

But that's exactly the "whataboutism" Scott warns this tactic amounts to - "yeah, this is bad, but what about that other thing (that someone else/you are doing)?"

So they're not arguments, just Dark Arts

For me the issue isn't so much police brutality and evidence fabrication. We would expect some of that to occur under any system. The bigger issue is the lack of accountability.

In terms of AI should that be tackled independently of overall technological change, automation, etc.?

Personally it doesn't seem to do much good to break these issues down into fragments. But I'm open to being wrong about that.

Continuing to steelman his argument (which I also find a bit... nibble-around-edgy?) I think a better comparison might be "we should worry about the shorter-term climate change effects of refusing to invest in futuristic energy tech, instead of worrying about the risk that fusion research will accidentally turn the planet into a fireball and kill us all."

Planetary-scale AI risk is a well-founded supposition, but it *is* a supposition whose likelihood is undetermined, so having a lot of smart/wealthy people nibbling at this (distant/unlikely/maybe impossible) scenario seems like rather a waste when there are very large right-now problems with AI they could be actively doing something about.

Whenever we have these "walk and chew gum at the same time" conversations, which since I spend a lot of time reading/talking about politics is A LOT, we definitely run the risk of doing what Acemoglu has done here and assuming that just because someone is talking about X, they aren't thinking/talking/caring about Y. This assumption is *usually* false for any topic of contention.

Maybe a closer steelman argument is "instead of researching how to treat PTSD on people awakening from cryogenic sleep we should worry about treating PTSD on soldiers". I think part of the argument against worry about AI risk is that it's a fictional problem or applies to a world very different from ours while there are similar problems occurring right now which need more attention and are not getting it.

I almost feel like I'm missing something here because Scott has made arguments about the danger of hot button issues overwhelming critical ones, like his "don't protest police shootings, focus on malaria eradication" take:

https://slatestarcodex.com/2014/12/19/nobody-is-perfect-everything-is-commensurable/?fbclid=IwAR2Sf9dlbfss9H5BVocMsXHgWdjNjtsbVdv_rMa7W6MTf0hCKfWr6ignF18

There really is a finite public attention for things, and when everyone is fixated on exciting culture war stuff it can detract from far more consequential stuff. Likewise, Yglesias had a take recently to the effect of "instead of fighting about critical race theory, worry about phonics vs. whole language which is way more consequential".

Maybe the difference here is that the Finite Worry Economy makes sense when the thing you are calling for less attention toward is absorbing a high absolute share of attention, such that you have to pay less attention to it to give other issues space. Police shootings and CRT may fit this category; long-term AI risk does not.

Right, came here to say this. Isn't this a fully general argument against telling people they should worry about something more than something else?

The scarce resource is people's attention, not newspaper columns. Not only funders, but all people whose views change something. That is why, when people say "instead of worrying about A, you should worry about B", it is often the case that B could appeal to the people to whom A appeals - that's the attention you're fighting for.

Expand full comment

The pedantic nature of these two posts reinforces his point. You haven't said a single thing about the near-term risks. Instead you've picked an argument about abstract meaning.

Expand full comment

I don't think this is true. Take especially section 1.1 of Scott's previous post where he quoted Acemoglu's paper on AI's present-day impact on the economy, specifically the part saying that negative results on employment couldn't yet be observed in the data.

Expand full comment

Yes, thank you. This whole long-term strong AI conversation feels even more pointless now than it did 5 years ago. Let's move on from it until somebody actually has something new to say about it.

Expand full comment

I think the key difference here is that the two phenomena are not just related, but sequential, and arguably causal. I don't know much about AI development, but I would assume that the risk of superintelligent AI would only arise after the use of simpler, non-superintelligent AI becomes widespread and routine. If that is true, then it makes sense to say that by fighting the implementation of AI in the present, you not only address an immediate danger that is harming people, you also stave off all the more dangerous future risks that such use would enable. And if you are worried that the language around the risk of superintelligent AI fails to pass the laugh test for a large audience, then it is to everyone's advantage to focus on addressing the immediate situation, where you can mobilize much more support.

There are a ton of assumptions built into that argument, but I think it makes sense that if superintelligent AI (a big risk that lots of people don't take seriously AND where there is a reasonable amount of crazy-sounding discussion around it) is bad, but will only happen if we accept the everyday use of current AI technology for nefarious purposes (which I think many people remain unaware of AND could be more readily mobilized against), then you have a stronger position.

Expand full comment

Exactly. Why make a special case of AI? Any large organization is a form of superintelligent power with disproportionate access to and use of information, power and influence. Addressing current immediate concerns (e.g. government-backed hacking and ransoms) does change the risk profile for the unknown future.

Expand full comment

Scott: make a budget, then cast the budget items in these same terms, and it will become more clear.

The answer to all six of these apparently contradictory restrictions is "this restriction is part of the toolbox I use to make a good strategy for using limited resources. It's not inherently a right or wrong restriction to make in general. It's a tool, to wield according to my preferences and according to how it helps me achieve my goal".

Also, I wouldn't be bothered by 3, 4 or 6 at all, 1 would strike me as a narrow issue of tactics between people who agree, and if I came across 2 or 5, I would just assume someone was expressing a preference or making an appeal, rather than arguing that such a thing was rational across the board.

Acemoglu was performing a 3/4/6, claiming that people who care about this topic should focus their energy on a different major sub-branch of it. He didn't attempt or even (my reading) pretend to attempt to do that by making a convincing argument against GAI x-risk. Instead, he attempted to do it by selling readers on the imminent danger of present day machine learning (albeit by using the stupid words "AI" to talk about it).

I also have to lightly protest you saying "yet another 'luminary in unrelated field discovers AI risk, pronounces it stupid'". I am a professional in the machine learning space, mostly doing search ranking for the bigs. I broadly agree with Acemoglu. This is not an appeal to authority. I am not an authority. It is a caution that it looks silly when one non-AI/ML writer dismisses another non-AI/ML writer for an opinion that is shared by a decent portion of professionals in the field.

Expand full comment

Which, by the way, was something that *wouldn't* be silly for you to say: it also looks silly for Acemoglu to dismiss popular GAI x-risk writing when plenty of professionals in the space take it seriously.

Expand full comment

One thing I've heard that might put this in context is in *which* worry economy you're talking about, because some worry economies seem like zero-sum games, and some don't.

I think the most extreme and salient example of a zero-sum worry game is congressional floor time, which I've often heard described as the "most precious resource in government" because of how little of it there is and how hard it is to allocate to your cause. A minute spent debating your bill on the senate floor is a minute spent not debating some other issue.

On the other extreme you have a stereotypical group of stoned philosophy majors hanging out in the dormitory stairwell at 3:00 AM. That group has effectively infinite worry resources to entertain all sorts of things.

The middle is where it gets tricky. I think some activists perceive that the average persuadable normie has a limited budget for caring/worrying about things, so any non-central cause taking up too much time is wasting a precious resource.

This is the model I would start investigating from, personally.

Expand full comment

Good point - from now on I suggest we delegate all worrying to stoned philosophy majors, at least until we can replace them with worrying AI systems that can worry with worrying efficiency 😉

Expand full comment

Douglas Adams' Electric Monk, rewired for worrying instead of believing.

Expand full comment

Really? 3, 4, and 6 all seem natural to me. Pretty sure I've heard variants of 4 and 6 for sure.

Typically, I end up thinking the person making them isn't being especially intellectually thorough or honest, but I would strongly disagree that "nobody ever thinks or talks this way." Ever heard the word "whataboutism", and the interminable debates about what constitutes it?

A couple of examples you might be more familiar with:

1. "Instead of worrying about U.S. poverty, we should worry about (or, it is more effective to worry about) poverty in foreign countries where many people make only a dollar a day or less."

2. "Instead of worrying about harm caused by the sexism of online feminists, we should worry about the harm caused by the sexism of governments, large companies, society, and other elements of the patriarchy."

Expand full comment

Agreed - comments crossed 😉

Expand full comment

The author doth protest too much… while AI isn’t my area and I’ve only read a few books and blogs about it, I totally get why the advent of super intelligent general AI could be a massive existential risk. Also, I’m pretty sanguine about short term employment risk which has been happening since before the industrial revolution.

BUT Scott, among your strawman arguments is something steelier - viz. your scenario (3): Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia.

Not only is this the argument most closely analogous to Acemoglu's on AI risk, it is also completely plausible, at least to me, that real people can hold this view.

As a young teen in the 70s/80s I was kind of obsessed with nuclear weapons (along with D&D, heavy metal and black holes - I’m over it now thanks 😊) went on CND marches and genuinely believed there was at least a 50% chance of a tactical nuclear war in Europe.

The fact of having survived so far has made me discount that risk to almost zero. The same 50 years have seen many bloody and potentially avoidable small-scale conflicts around the world. The big powers have intervened or interfered in some of these (Iraq, Libya), started some (Afghanistan) and backed off others (Balkans, Congo, Yemen), but there is always the sense that the USA, UK, EU and UN could and should be a force for good. And although the road to hell is well paved, there is an argument that not all interventions have been wrong, so in that sense perhaps we do worry more about these conflicts, as there have been (infinitely) many more of them than there have been nuclear wars, even though the latter is by far the bigger existential threat.

Expand full comment

"The fact of having survived so far has made me discount that risk to almost zero." I'm not sure that's how probability works - how would you observe evidence of yourself dying in a nuclear fireball?

Expand full comment

Fortunately not a paradox I’ve ever had to address- yet 😉

Expand full comment

Although I agree this is a plausible line of reasoning in the abstract and makes perfect sense as an argument form, it also seems like the exact wrong takeaway in this specific case. We focused international defense treaty organization resources very tightly on avoiding nuclear war, possibly at the expense of allowing many smaller, regional conflicts that might have been stopped, but we succeeded in not having a global holocaust. You have to argue we would have avoided it no matter what - that the possibility of nuclear war was never a problem, or was a problem that would have been solved even if we hadn't directed so much effort at solving it.

Expand full comment

Thanks - I'm not arguing that case, only that it's a reasonable argument. I'm not sure it's true that "we focused international defense treaty organization resources very tightly on avoiding nuclear war" - who is the "we" in this case, and are you sure that's why there hasn't been a nuclear war yet? I'm inclined to think that despite a couple of near misses in the 60s - the Cuban missile crisis etc. - the prospect of MAD has restrained the superpowers even as the USSR imploded. Even India and Pakistan have held off - doesn't mean it'll never happen of course.

Expand full comment

I mean the US DoD more than anything (NATO, but realistically it was mostly the US), and the efforts being focused on such a massive arsenal buildup as to implement MAD. Up until 2001, all military training and doctrine was hyper-focused on being extremely ready for WWIII, and by virtue of doing that, probably contributed to preventing WWIII.

But obviously, we can never know.

Expand full comment

I think some people would have made (4) pre-pandemic, as pandemic preparedness might have struck some people as a silly niche concern (like long term AI risk is perceived).

I can think of two scenarios that make people suggest we should trade off worry: when they think something is intuitively silly or implausible, and when they see something as far more important in that same area. Motivated reasoning could also encourage this.

Examples I have seen of this in the wild:

"Worrying about long term risk is silly when people are suffering now" usually alongside not taking long term risk seriously.

"Why care about animal suffering when humans are suffering?"

"Why worry about plastic straws painfully killing fish when humans painfully kill way more fish directly?" - The only argument of the three that I agree with.

Expand full comment

Admittedly, I didn't read Acemoglu's original argument. However:

"(6): Instead of worrying about racism against blacks in the US, we should worry about racism against Uighurs in China."

I could certainly imagine someone making the *reverse* argument. If you're American, racism against Black Americans is the more proximal issue that you have more control over so it deserves priority.

Racism against Uighurs in China is the more distal issue that you have less control over. Both play to a similar value system. The notion that you should "mind your own back yard" before you start worrying about other people's has solid roots in both Confucianism and Christianity. It's a pretty established Axial-age axiom. Deal with the stuff within your locus of control before you start trying to gain brownie points by wringing your hands over things that are safely outside of your control.

Also, in economics, people regularly discount future costs in favor of present ones. I guess the counter-argument to discounting future costs might be that we're justified worrying about a distant harm if doing so gives us some certain and huge advantage in addressing it. Tip the asteroid a few meters now, or destroy it with a nuclear arsenal later. But without a big "Sale! Future problems solved! 95% off! Buy now!" sign it's a common and reasonable heuristic to focus on more present and controllable issues.

And also, if we're being told that autonomous AI are safely under human control while there are still tons of land mines scattered overseas we might justifiably question the sincerity and reliability of those we're talking to.

https://www.washingtonpost.com/technology/2021/07/07/ai-weapons-us-military/

Expand full comment

I think you must have a blind spot here. The rhetorical point Acemoglu was making was "AI isn't a future problem, it's a 'now' problem". You're reading this too literally as "AI won't be a problem in the future", but it's just a rhetorical device.

It's like me saying: "You shouldn't worry about your casserole being overcooked, because you need to call the fire department right now (because you started a fire cooking it)!" Your casserole is probably still going to be overcooked, I'm just claiming that the burning house takes priority.

Acemoglu is claiming (without a lot of reasoning, but this is an article in the Washington Post, not a paper) that the first step in figuring out AI problems is solving the current smaller ones, not worrying about hypothetical future ones. No comment on whether that's true.

Expand full comment

But how would you treat each of the following arguments?

(1): Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people: Yes.

(2): Instead of worrying about Republican obstructionism in Congress, we should worry about the potential for novel variants of COVID to wreak devastation in the Third World: Yes.

(3): Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia: Probably not.

(4): Instead of worrying about “pandemic preparedness”, we should worry about all the people dying of normal diseases like pneumonia right now: Yes.

(5): Instead of worrying about pharmaceutical companies getting rich by lobbying the government to ignore their abuses, we should worry about fossil fuel companies getting rich by lobbying the government to ignore their abuses: Yes.

(6): Instead of worrying about racism against blacks in the US, we should worry about racism against Uighurs in China: Yes.

Expand full comment

I think that the strongest version of this argument is that the precious resource being spent isn't column inches or money but weirdness points. You get two, maybe three chances a generation to convince the public that some weird sci-fi thing is in fact a clear and present danger that we need to make non-Pareto-efficient trades to avoid. And right now is not the time to make that case if you want the maximum chance of it being convincing. I could absolutely see someone making the argument: Instead of worrying about humans in the distant future emulating our minds badly, we should worry about our radio signals escaping into space and allowing aliens to discover us. Or vice versa. There really isn't enough oxygen in any discussion, no matter how niche, for both these causes to thrive.

Expand full comment

Ah, makes sense. Weirdness points a la https://www.lesswrong.com/posts/wkuDgmpxwbu2M2k3w/you-have-a-set-amount-of-weirdness-points-spend-them-wisely, or Idiosyncrasy credits: https://en.wikipedia.org/wiki/Idiosyncrasy_credit. Though I think Acemoglu's argument can be steelmanned as being less about "spend your weirdness points wisely" and more "if you do this right you won't have to spend your weirdness points at all."

As in, there are two ways to get people to start thinking about the risks of AI: talking about things they're already worrying about (being fired and replaced by an algorithm, social media algorithms tearing society apart like a Paperclip Maximizer set to 'Maximize Engagement', Panopticon finally being feasible since you can have an unlimited number of camera-watching algorithms for an unlimited number of cameras)... or talk about things they're not worrying about (one day in the future everything will be horrible, whether it be long-term Global Warming, the stars going out one by one in the heat death of the Universe, or AI replacing us as top dog and doing to us what we did to all species weaker than us).

The latter approach costs weirdness points; the first one doesn't. And once you've gotten people thinking about the AI risks they can see, it doesn't cost any weirdness points to talk about the AI risks they can't see but which now feel like logical extensions of what's already on their mind. That's basically the approach Human Compatible takes: https://slatestarcodex.com/2020/01/30/book-review-human-compatible/

(More specifically, "How does it manage this? Although it mentions the weird scenarios, it doesn’t dwell on them. Instead, it focuses on the present and the plausible near-future, uses those to build up concepts like “AI is important” and “poorly aligned AI could be dangerous”. Then it addresses those abstractly, sallying into the far future only when absolutely necessary. Russell goes over all the recent debates in AI – Facebook, algorithmic bias, self-driving cars. Then he shows how these are caused by systems doing what we tell them to do (ie optimizing for one easily-described quantity) rather than what we really want them to do (capture the full range of human values). Then he talks about how future superintelligent systems will have the same problem.")

Thus, a steelmanned version of Acemoglu's argument could read "Talk about things people care about, things they can already see, before you talk to them about the weird stuff. That stuff, after all, is the logical extension of the visibly scary into the nightmarishly terrifying; you need to start with the visibly scary before it makes any sense. But once you do, you don't need to spend any weirdness points at all." As I understand it, that's not the argument Acemoglu is making, but it's the argument that the existential risk community could be taking away from the discussion he's kicked off.

Expand full comment

Yes, absolutely. A better steelman of Acemoglu could be "don't contaminate my easy to understand cause with your weirdness points." The implicit background thought is that yes, the two causes are obvious complements, but this is only evident to the weird people who really looked into it.

Expand full comment

"Weirdness points" seems a useful concept; I like it, too. "You get two, maybe three chances a generation to convince the public that some weird sci-fi thing is in fact a clear and present danger that we need to make non-Pareto-efficient trades to avoid" - but how many generations back does that claim plausibly hold? It might go farther back than you'd think. For most of human history, anyone who claimed some unheard-of (not so much sci-fi) danger must have been laughed at or silenced. Now and then one of them may have been right, and enough listeners survived to tell some tale like Noah's. Scepticism toward weirdness is deeply ingrained in human minds, but still, at least since axial times there have been strong memes for prophets - the rickety base for weirdness points.

Expand full comment

Since everybody is going with their example, I will try mine. I think this one captures the same structure of the original argument: “Instead of worrying about the proliferation of digital walled-gardens, we should worry about the disinformation spreading on social media.”

Like the AI argument, they are both instances of the same issue (chaotic AI development / unregulated platforms), but one is very current and easy to grasp for the layman, while the other requires long explanations to convince people it might be an issue.

Expand full comment

Do you really believe "X is hard to understand, and Y is easy to understand" is a good reason to divert resources from X to Y, in general?

Expand full comment

If I gave the impression I approved of the argument, I did not express myself correctly: I do not. Both arguments are invalid — my point is that they are similar, and equally likely to be used. I personally worry a lot about digital walled-gardens, and write about it when the occasion arises.

This is exactly how I tried to find the example: I asked myself “what is something I worry about that people often disregard?”

Expand full comment

Thank you for clarifying.

Expand full comment

I think it's rather the balance between something that's already going on, and something that not only hasn't happened, but where we have absolutely no ideas about the likelihood of it happening, the consequences if it does, or even if it's possible in the first place.

Expand full comment

No one actually says these things because it would be considered rude to diminish a cause by setting up a direct comparison. But in practice, I think people think this stuff all the time. Plenty of people worry about close to home or vivid problems, and they neglect the ones that are far from home or too complex or whatever. The Uighurs, Covid in the rest of the world, and so on are vastly underfunded in the worry economy! So I don't consider these tradeoffs odd at all because we make them all the time in the real world, although we shy away from thinking about it explicitly. Whether they're right or wrong, that's harder to say.

Expand full comment

7. Instead of worrying about the risk of hackers taking control of the U.S. nuclear arsenal and starting World War III, we should worry about the common computer crime that does occur all the time.

8. Instead of worrying about a future extinction event and colonizing and terraforming Mars in preparation, we should worry about global warming.

That is, rather than focusing on some hypothetical and highly unlikely future event (that would admittedly be very impactful), we should put our effort into the problems that actually *exist*. Not just because it makes sense, but because it's also possible to get anything *done* that way, something that is exceptionally unlikely for the remote, unlikely scenarios.

Expand full comment

9. Instead of worrying about biological warfare, we should worry about accidental lab leaks.

10. Instead of worrying about civilization-ending meteor impacts, we should worry about regular floods and wildfires.

Expand full comment

This sounds like Scott's "literally costless" category, where you're saying "worry less about X and more about Y" but what you really mean is "don't worry about X at all, because X is stupid."

Obviously this is antagonistic towards anyone who thinks X is actually important.

Expand full comment

I think a small amount of theoretical worry might be called for in all of these cases, and then the real effort should go into the problems that actually exist, for any number of reasons.

Also, unlike Hell in Pascal's wager, the costs here aren't infinite - it can't get any worse than total extinction, something that is still commensurable with other costs.

Expand full comment

But if a hypothetical person believes that AGI is pretty likely to happen pretty soon, and is nearly-certain to be disastrous if done wrong, then your primary disagreement with that person would be about the probabilities of those events, not about when it is appropriate to transfer worry from one risk to another.

Expand full comment

I'm confident that if Acemoglu thought the AI apocalypse was near at hand and very likely, he would make different prioritizations, sure.

Expand full comment

> it can't get any worse than total extinction

Not true, or at least very values dependent. Just as worse-than-death outcomes are possible at the individual level, worse-than-extinction outcomes are possible at the collective level. (Hopefully their possible badness is ultimately bounded by the laws of physics, but that could be an incomprehensibly high bound.) And I think they make up a non-trivial portion of what AI-risk people worry about.

Expand full comment

I can agree with theoretically possible, like if Roko's Basilisk locks us all into a simulation hell forever, but I don't believe we have to give that serious credence.

Expand full comment

Does really no one make arguments like (3) and (4)?

This is exactly the kind of thing I often think, especially (3). I think far too much ink is indeed spilled on worrying about nuclear war when real ongoing wars often reach levels that have some overlap with nuclear war (though not total nuclear apocalypse). 

There's something similar on the positive side. Many justifications for space exploration, NASA's budget, etc. are along the lines of "We just don't know what the advantages could be, so they basically could be practically infinite, so even if each given expenditure has a small chance of accomplishing this, we should spare no cost."

And I think a similar issue exists on the negative side with superintelligent AI: the moment you're dealing with a hypothetical near-infinite downside, the exact probabilities matter less. And to me that's a problem, because when you're dealing with such huge confidence intervals it wreaks havoc with expected utility, and you can end up making an argument that we should pour every last resource into AI research, and end up with tens of millions of unnecessary deaths from malaria while the AI research ends up having had no real salutary effect.

Just to underline the problem, we *had* pandemic preparedness plans and it's not entirely clear how much good they did. I want to call a focus on the more immediate problem "risk aversion", because you're avoiding the risk of spending lots of money and getting nothing, and instead focusing on interventions with more known success rates, but it's a little odd to be saying that the approach that ignores the apocalyptic scenario is "risk-averse".

Naturally there's a lot to be said in favor and against this kind of argument, but it's strange to me that it sounds so foreign. 

Expand full comment

I think 3) makes a certain amount of sense - the nuclear balance is fairly sorted out, and if we didn't get a nuclear war during the worst days of the Cold War, we're probably looking pretty good now.

Meanwhile, millions die in the kinds of wars waged with AK-47s, machetes, and child soldiers.

Expand full comment

I don't have any opinion on Acemoglu's opinions on AI risks, but I balk at this statement:

"People in disability rights activism have already decided that “curing disabilities” is offensive, so it’s okay for them to advocate defunding it as a literally costless action (not a tradeoff)."

Really?

Yes, I know there are some extremists with this view around, and there are stories of deaf parents (side note: I know that only deaf people are allowed to use the concept “Deaf”, the rest should say hearing impaired, but bear with me) being against giving their (eventually) deaf children hearing implants, since that would remove them from Deaf Culture. But all real-life people with disabilities I am aware of, including their organizations, are able to keep two thoughts in their heads at the same time: respect people with disabilities & do mainstreaming to limit the social importance of disabilities, but (also) treat disabilities whenever possible.

Because if you don’t, you condemn many people with disabilities to a life in poverty, or on being forever dependent on welfare subsidies (or charity); since many disabilities limit productive capacity, making it impossible to earn a decent-enough-for-a-good-life income on one’s own.

Disability activists, in particular Berkeleyan disability activists, may be a breed apart, but still…

Expand full comment

There's treating disabilities and there's treating disabilities. An actual cure, as opposed to things like wheelchairs, is often controversial at the very least, especially when you move from physical disabilities to mental ones.

But also bear in mind that the standard disability rights narrative contrasts the "good" social model, which says that society must make accommodations for the disabled, with the "bad" medical model, which says that the disabled must make accommodations for society, with the latter explicitly conceptualised as meaning prosthetics etc.

Expand full comment

True true eldomtom2…

Although the upgraded medical model is in reality very close to the social model.

Saad Nagi's medical definition of disability is a case in point. He defines disability as an “expression of a physical or mental limitation in a social context, a gap between the individual's capabilities and the demands created by the physical and the social environment”.

Nagi's definition illustrates that the assumed reductionist-individualistic bias of the medical approach is less pronounced than argued by activists pushing the social approach to disability (who emphasise external barriers to participation). Nagi's upgraded medical model also brings the social context of the individual into the definition of disability, including the physical and social environment.

That said, dichotomies are more fun, and disability activists need an enemy to attack.

While we're at it: The medical model is necessary if one wants to maintain special benefits and services targeted at people with disabilities, such as US Supplemental Security Income, or the right to a subsidised personal assistant. Targeted benefits are more difficult to justify within the social model, where targeting as such can be argued to solidify the image of people with disabilities as "the others", even if – or perhaps particularly if – benefits are generous.

I sometimes worry that gung-ho activists of the social model, and nowhere are they more gung-ho than in Berkeley in my experience, are undermining the justification for such targeted benefits, by their insistence that disability is a “social construct”. Sawing off the (welfare) branch the work-impaired among them are sitting on, so to speak.

But the Departments of Finance around the world are likely to applaud.

Oh well, all of this is a digression regarding Acemoglu & AI, Scott’s post just pushed one of my buttons😊

Expand full comment

"(side note: I know that only deaf people are allowed to use the concept “Deaf”, the rest should say hearing impaired, but bear with me)" --- You understand that you are under no obligation, moral, social or otherwise, to accede to the linguistic demands of political activists, don't you?

Expand full comment

Well, using the D-word was a bit tongue in cheek/attempt at staying on the polite side of the fence while hoping to come across as slightly humorous…but choice of words to describe people can be difficult, or what.

The disability movement has modelled itself on the Afro-American movement, portraying themselves as a suppressed minority. As a rule, I think we should avoid using concepts that can be seen as derogatory by a group, in particular a minority group, even though the group themselves may use the concept as self-irony, or as a mark of pride, or (sometimes) just for fun.

I would not go so far as to make it a moral issue, but it’s a question of being on the right side of polite.

That said, “Deaf” has emerged as sort-of a proud word also outside the community (as in “Deaf culture”), so I took the liberty.

But I understand why Afro-Americans get rather crabby if someone outside the community should use the N-word, to use a related example.

I am not a native English speaker, but I hope I am not tone-deaf to the nuances of when you are on the right side of polite, and when you are not.

Expand full comment

>As a rule, I think we should avoid using concepts that can be seen as derogatory by a group, in particular a minority group,

I feel that this is a poor rule. It amounts to feeding utility monsters.

In order to handle that kind of thing sensibly, you *have* to figure out how legitimate the offense is. Objecting to something that's been used in a derogatory manner 95% of the time over the past century just isn't the same as objecting to something that wasn't even claimed to be derogatory until recently and is mostly used in a non-derogatory way.

Expand full comment

I wonder if there's an element here of "The wrong people are worrying about this and that's why it's right and good to dismiss it."

Acemoglu gives this list:

> Such warnings have been sounded by tech entrepreneurs Bill Gates and Elon Musk, physicist Stephen Hawking and leading AI researcher Stuart Russell.

Bill Gates and especially Elon Musk are completely the WRONG people to sound the alarm on something, if right-thinking people ever are to take it seriously.

When you hypothesize:

> I think people treat jokes at the expense of the AI risk community as harmless because they seem like fringe weirdos

I think this is right but maybe doesn't go far enough. In a world where lots of New York Times articles have been written about AI risk and NPR has done some stupid podcast episode on it where they play quirky sound effects as a perky 20-year-old explains Nick Bostrom's book, I don't think this article gets written.

You're worrying about something that is not on "The Official List Of Things That Good People Worry About," and _that_ is why i think someone writes an article saying this.

For the record - if it's worth anything - i'm not worried about AI risk because i think the orthogonality thesis is backwards from the truth, but that's neither here nor there :)

Expand full comment

> the orthogonality thesis is backwards from the truth

You think intelligence leads to the discovery of objective good? Do go on.

Expand full comment

I think intelligence, over long enough time ranges, is indistinguishable from the ability to compute objective good.

Blog post is here:

https://apxhard.com/2020/11/27/a-moral-system-from-scientific-rationality/

Expand full comment

Elon Musk is human clickbait, so if you mention him in your article you automatically get 75% more views. Bill Gates and Stephen Hawking give you 10% each, and Stuart Russell gives you -5%.

Expand full comment

I absolutely agree - I don't think the author of the piece intended to write an article basically called "Remember, metropolitan elite journalists are the only authority on truth." I think he intended to write an article called "Hey, pop-culture loves to equate AI and Skynet but here are some AI issues you may not have considered."

But the template of "Don't care about <thing elite outgroup cares about>, when you should care about <something metropolitan elite journalists care about>" is a common one and its purpose isn't to explore issues, it's to fight a culture war.

Expand full comment

I think there are a couple of things going on here.

1 and 5 are similar, in that you wouldn't say them because people understand both worries in each case to be part of the same political project. In our politics, the battle lines for police are "more deference to police" vs "less deference to police", both worries are on the same side for number 1. Similar with 5, it's "crack down on big business" or "give businesses more breathing room to operate". People do make those sorts of tradeoffs, but it's more of a tactical question - an example would be "pour resources into the Senate race in Ohio" vs "pour resources into the Senate race in Maine".

I think 2 sounds odd because Republican obstructionism, or opposition thereto, is not a good/evil thing in itself, it's a means to an end.

3 and 4 strike me as the sorts of things people do say, indeed in the early days of the pandemic you heard a lot of takes along the lines of 4.

I think 6 is made implicitly when people say things like "all these woke people who are so worried about racism don't care when it's China."

Overall, I think that it's not "related" vs "unrelated", more "same side of an ideological view" vs "not" - for AI, people either think it's a big issue or it's not, and if they do they probably will focus on all AI-related issues, and if not, they won't. And then within "same side" there's a question of tactics, but the outcome of the tactical argument doesn't really affect interest in people not on that "side" so much.

Expand full comment

If Acemoglu were asked if there is a potential long term risk of a malicious AGI being developed, I’d be surprised if he said anything but yes of course it’s possible, but probably further down the road.

My take was that he was cutting corners in an 800 word opinion piece and wanted to highlight what he saw as existing problems that could be immediately addressed.

Additionally, it may have been edited to a fare thee well to get it to fit in so few words.

He never advocated removing existing resources dedicated to evaluating long term risk in my reading.

Expand full comment

I thought of the "don't worry about AI taking over the world" part as kind of a hook to draw clicks. Cheesy op-ed attention-getting tactic, maybe. But as you said, he's a smart guy. Smart guys know that we will have to keep an eye on AGI.

Expand full comment

Probably a lot of typos above. Written on my way to a neighborhood block party.

Expand full comment

> Maybe I’m being mean/hyperbolic/sarcastic here, but can you think of another example?

"Instead of going to Mars, we should be solving problems here on Earth." Or, for those who like space, but still think Mars is a bad goal right now "we should be establishing a moon base/space elevator/something." I'm pretty sure I've heard these before. Often when talking about how billionaires should spend their money.

My model on stuff like this is basically something about diminishing returns. There's not a problem on Earth that is useful for literally everyone to focus on. So when someone says "we should be focusing on X instead of Y" they're trying to argue something to the effect that the marginal return from an extra person focusing on X is more than the marginal return from an extra person focusing on Y (and probably that the marginal loss of a person stopping focus on Y is less than the marginal gain from one more person focusing on X, so we should be transferring people to X-focus). And if they care enough to voice this argument, they probably think that the difference in marginal benefit is so great that we should shift a lot of people from Y to X.

Some things are disparate enough in type (like Republican obstructionism vs. COVID variants) that the utility of a given person focusing on them is likely more affected by that person's interests/aptitudes than it is by whether the marginal benefit is greater in one than the other. This also explains the American racism vs. Chinese racism problem: The amount of impact you can have on this is probably based more on circumstance (like where you live, or maybe how closely you can affect US foreign policy) than on the actual relative merit of each cause. And that's what makes it a non sequitur or whataboutist.

I also wouldn't expect the argument to be said about things where people think relative spending is adequate, or can't determine well enough if it is to feel strongly about.

People drop these arguments a lot on billionaire spending, because there is a huge difference in the marginal utility of an extra billionaire focusing on one topic vs. another. And I'd expect you hear it a lot in AI spaces because of the massive variance in expected utility people attach to long-term AI. Yudkowsky, for example, thinks that the project will make or break humanity permanently, with some huge number of utilons at stake. So he'd weigh the marginal utility difference very in favor of long-term AI concerns. Meanwhile lay-people, or skeptics of Superintelligent or Unfriendly AI don't tend to think the problem is so large, and so see the same massive misplacement of resources as when a billionaire focuses on the "wrong" topic.

Expand full comment

I feel like half of Matt Yglesias's articles are 'why spend political capital on {structural racism/critical race theory} when it would be better to focus on {immediate race-blind policies that would improve the lives of people of color}'. Which is at least similar insomuch as it weighs a bigger but more nebulous risk of racism against shorter-term, more specific effects of racism, and Matt thinks the political will spent toward the former is a rival/substitute for the latter.

Expand full comment

Matthew Yglesias is also reliant on the political research done by folks like David Shor on what the actual tradeoffs are in terms of getting elected.

Expand full comment

I do see this genre of argument in other contexts, like criticism of security theater. e.g., Instead of sanitizing subway trains, we should be improving their ventilation.

I see 4 claims embedded in this style of argument:

1) the thing I'm advocating (e.g. ventilation) is valuable

2) the thing the other folks are advocating/doing (e.g. sanitizing surfaces) is not valuable

3) there's some sort of tradeoff between the two (e.g. limited space/appetite for anti-covid measures)

4) there's something misguided about the other folks' approach which leads to them doing the not-valuable thing rather than the valuable thing (e.g. it's theater aimed at making people feel safe)

Where maybe (4) is the most important part of this.

Another example. Here's a quote from John Kerry during his presidential run: "George Bush opens fire stations in Iraq and forces fire stations to shut in America." You could dismiss this as a political pot-shot, but interpreting it charitably within this framework (with fire stations as an example & metonymy for a broader point about spending):

1) more domestic spending on things like fire stations is valuable

2) the Iraq war was a bad idea / is not valuable

3) there's a tradeoff between domestic spending & military/war spending

4) Bush has been misguided in choosing to prioritize the Iraq war

Expand full comment

A better framing would be "certain short term" vs "uncertain long term", where resolution of #1 potentially significantly impacts our understanding and/or the impacts of #2.

Examples:

* "Sentient AI rights" vs "Animal rights"

* "AI overlord" vs "limiting the concentration of powers and information for governments, corporations ... and systems"

* "communicating with Aliens" vs "communication with Animals"

Testing this subjectively on the examples given in the post

(1) police brutality solutions could change if evidence was better gathered and protected. PASS

(2) solutions for Covid and Congress are probably not too related FAIL

(3) Again, nuclear war and civil wars are not in the same problem space FAIL

(4) pandemics and recurring health events could also be seen as orthogonal FAIL

(5) limiting lobbying wealth gains to one industry impacts future similar efforts. PASS

(6) Racism in the US and in China are likely relatively independent. FAIL

I love this post and the associated comments. Made me realize that I often use this line of argument simply out of boredom and exasperation, to escape from the outrage trend of the month.

Expand full comment

Is anyone really that motivated against fighting AI risk? As far as I can tell, they're all adopting an everything-or-nothing strategy: it's solve AI alignment (and fast, given some of the predictions thrown about) or bust. The stakes are so high they render all other ethical considerations irrelevant; surely they should be willing to use some lateral thinking to either buy more time for solving the alignment problem, or mitigate failure. Musk is trying the latter, it seems, with his Mars colony vision, though it seems unlikely a rogue AGI would fail to destroy Mars. As for buying more time, collapsing industrial civilization would do the trick, as it ends the economy that makes AGI possible to begin with, as would Bostrom's proposal to turn the planet into a panopticon. I prefer the collapse-civilization strategy, partially because there are many points against our civilization and I don't think it would be justified in pulling out the measures Bostrom wants for it to preserve itself. Also, destroying industrial civilization seems much easier than setting up an infallible panopticon.

Who knows, maybe there are already people working on implementing these strategies. Am I to believe AI safety researchers are more paranoid than the Pentagon? If they are, they should be working to enmesh themselves into the military, just to ensure they don't have an AGI Skunkworks. Though that still doesn't eliminate risk from Chinese military research.

While working to fulfill the Unabomber's vision, we should also try to suppress development of AI. There have been some half-assed attempts to cast AI as racist, but we could go much further in making a case for AI as white supremacy. Could trigger an immune response from the dominant culture.

We really should be pulling out all the stops against this, instead of putting all our eggs in one basket by betting it all on the success of AI safety research. A rogue superintelligence would mean the end of all life on this star system, at minimum.

Expand full comment

I don't want to get into the object level too much here, but I will say that people in the AI safety community have considered, seriously and at length, all of the issues you mention here and are nonetheless focusing on their current portfolio of research, policy interventions, and "awareness"/outreach-ish things. Some of it hinges on beliefs about how technology will progress that are probably different from yours. And I don't doubt that mistakes, coordination failures, path dependence, and bias play a role as well. But it's not because nobody is "really that motivated against fighting AI risk"

But it is interesting to me to hear that as a hypothesis for what's going on and I think it lends further credence to Scott's model that arguments like Acemoglu's are because people think AI risk efforts are pointless and can be safely thrown under the bus.

Expand full comment

Why does this bolster Scott's argument? The people who think AI safety research is pointless don't believe in the threat of rogue superintelligence, unlike what I'm laying out here.

> policy interventions

There are policy interventions regarding AI safety? I've heard some noise about algorithmic bias, and algorithmic indoctrination (as in The Social Dilemma), but that's it.

> Some of it hinges on beliefs about how technology will progress that are probably different from yours

They think they have a lot of time? I'm hearing things like AGI by 2030 is not too unlikely.

Expand full comment

> Why does this bolster Scott's argument?

Because you're demonstrating that from the outside the movement looks ineffective and misguided in a way that I wasn't fully aware of. To be clear, I'm only saying that it increases *my* credence in his model, not that I think your comment should change anyone else's credence.

> There are policy interventions regarding AI safety?

There is, at least, a serious effort to identify policy interventions: https://www.fhi.ox.ac.uk/ai-governance

Expand full comment

Here is an article by Acemoglu from the Boston Review, dated May 20, 2021; I think the Washington Post tried to signal-boost it and it became weird, but this version from two months ago makes a little more sense. http://bostonreview.net/forum/science-nature/daron-acemoglu-redesigning-ai

"Yet society’s march toward joblessness and surveillance is not inevitable. The future of AI is still open and can take us in many different directions. If we end up with powerful tools of surveillance and ubiquitous automation (with not enough tasks left for humans to perform), it will be because we chose that path."

Acemoglu is most concerned about AI/automation impact on labor market and government surveillance in this article. Here is more:

"Plenty of academic research shows how emerging technologies— differential privacy, adversarial neural cryptography, secure multi-party computation, and homomorphic encryption, to name a few—can protect privacy and detect security threats and snooping, but this research is still marginal to commercial products and services. There is also growing awareness among both the public and the AI community that new technologies can harm public discourse, freedom, and democracy. In this climate many are demanding a concerted effort to use AI for good. Nevertheless, it is remarkable how much of AI research still focuses on applications that automate jobs and increase the ability of governments and companies to monitor and manipulate individuals. This can and needs to change." That paragraph is full of links. To be fair, it is 2/3 of the way into the article.

I think the Post changed that into "don't worry about the distant future, worry about the present." I imagine Acemoglu talking to the reporter, saying something like, many people are working to prevent the sci-fi horror AI future, we don't need to worry about that, we should focus on the jobs and surveillance situation [because those other people will worry about the future].

That is not what came across, though.

This is the second time I've noticed recently the Post is taking something published somewhere else and giving extra audience to it while making it weirder.

Expand full comment

I was wrong about Acemoglu's argument in BR: "The same three-pronged approach can work in AI: government involvement, norms shifting, and democratic oversight."

He was not arguing at all for lack of worry about the future. He was arguing for those 3 things to be developed now and employed soon in order to change the future.

The Post function takes an argument and makes it unrecognizable. Post(Acemoglu) resembles BR(Acemoglu) only in that they both mention jobs and surveillance.

Expand full comment

Actual policies have already begun, though they are not exclusive to AI safety. Here are two summaries of some recent stuff at the federal level:

- https://www.lesswrong.com/posts/6Nuw7mLc6DjRY4mwa/the-national-defense-authorization-act-contains-ai

- https://www.lesswrong.com/posts/RpCkSY8rvfskBhkrb/outline-of-nist-draft-plan-for-ai-standards

Expand full comment

Presumably we're only supposed to be taken aback by these as "good arguments". I heard (4) many times last year at the onset of the pandemic by smart people I respected as late as March 10. "We shouldn't worry about pandemic preparedness because Covid has killed far fewer people than the flu". It was a terrible argument, but I heard it a lot for a few weeks.

Expand full comment

Did you change your assessment of their smartness/respectability? I saw that argument quite a bit, too, but it was hard to imagine that a smart, honest person could use a public platform to make it without also possessing an absurd combination of laziness and arrogance.

Expand full comment

In many cases, yes. The instance I remember most strongly led to a pretty big update on how much I should respect this man's judgement, especially on Covid-19.

Expand full comment

Makes sense, and sorry about the tone of my first reply -- I don't think I meant it that way, but reading back it comes across kind of aggressive.

Expand full comment

I suspect the reason why AI risk is often seen as a waste of funding is the timing: is it necessary to fund it now rather than later? If AGI is 20 years in the future, do we really need to spend all 20 years making mitigation strategies? If not, then we have more urgent things we could be investing in. I suspect this thought process is what led to Acemoglu’s position.

The counterargument would be that we can't predict when AGI will come, and when it does, bad things could happen so fast we won't have time to react. However, I think it's hard to make arguments for these that convince most people (indeed I'm not convinced).

In contrast to pandemic preparedness (#4), pandemics do happen without notice and rapidly cause problems, and that's why it's worth worrying about now.

Personally, I would say that instead of worrying about AGI risk, we should be investing in cybersecurity. The latter has immediate benefits, and a lot of apocalypse scenarios for AGI don't work if stuff can't be hacked.

Expand full comment

Another case for doing AI safety work now is that it may take a lot of time. Even if we were confident AGI was 50 years away, alignment looks like a tricky research problem, and it might well turn out to be 50-years-of-research tricky.

Actually I think that even if everyone was Bruce Schneier, AGI would still be dangerous. The level of security needed to be safe against an AI undergoing an intelligence explosion is pretty ridiculous. And it would need to be applied to the whole world. Even if computers can't be hacked, humans can be tricked.

I mean serious security might make it a bit harder for an AI in a box to get out. But anything boxed enough to be somewhat safer is still useless.

Expand full comment

The difference is that all the issues you listed for comparison are real and occurring right now. Megalomaniacal AI is not.

It's not happening in the immediate future, either. No damage is being done right now that cannot be undone. It's not some creeping problem that slowly progresses until it's "too late." There is no risk at all until someone is actually trying to develop a general AI (as opposed to training chatbots on Reddit arguments and fantasizing that there's conscious thought behind it). Once it's a plausible thing, people can assess the risks. And if you can't trust the people of the future to assess the risks when the risks dawn on the horizon, then you definitely can't expect the people of today to assess the risks from our current position.

This topic is, frankly, something that smart people (more specifically, "sci-fi nerds," as one other commentator put it) seem to bring up in order to signal that they're smart. If you just wait until it becomes relevant to talk about it, you won't get to show that you were ahead of the game. So start talking about it now! If it ever comes up in the future, you were in on the ground floor.

Expand full comment

"Maybe I’m being mean/hyperbolic/sarcastic here, but can you think of another example? Even in case where this ought to be true, like (3) or (4), nobody ever thinks or talks this way. I think the only real-world cases I hear of this are obvious derailing attempts by one political party against the other, eg “if you really cared about fetuses, then instead of worrying about abortion you would worry about neonatal health programs”. I guess this could be interpreted as a claim that interest in abortion substitutes for interest in neonatal health, but obviously these kinds of things are bad-faith attempts to score points on political enemies rather than a real theory of substitute attentional goods."

I mean, I don't think it's derailing if someone makes the argument that pro-life folks should support robust sex ed programs and access to contraception, which would lead to far fewer abortions. That, to me, is a much closer analogue to the "narrow AI over general super-intelligent AI" argument than the other examples pointed out here, but maybe that's just me. I don't think it's nonsensical to suggest that we should address the building blocks/foundations of an issue rather than worrying about the known unknowns of the future. Granted, with abortion that's already happening in the present day, so that part isn't entirely analogous, but in both cases it seems to me that if you address the root causes, you greatly mitigate the other areas of concern. That's why the police brutality vs. false evidence comparison doesn't work—instead, Acemoglu's argument, to me, would be that we should address police corruption, which would probably mitigate the things that stem from that corruption.

Expand full comment

> I mean, I don't think it's derailing if someone makes the argument that pro-life folks should support robust sex ed programs and access to contraception, which would lead to much fewer abortions.

The issue being that pro-life folks are also likely to find "robust sex ed programs" and "access to contraception" to be immoral, so you're telling them that if they're really concerned about immoral thing A, then they should support immoral thing B. A bit like saying "If BLM is so concerned about police racism against black people, they should be supporting shipping all the black people back to West Africa so that they won't have to worry about US policemen any more".

Abortion opponents will tell you that the correct way to decrease abortions is to stop our culture from normalising pre-marital sex, and that "robust sex ed programs" and "access to contraception", at least of the form that abortion proponents are usually in favour of, tend to have the opposite effect. Catholics believe that contraception is morally wrong in itself, Protestants generally don't, but are typically of the view that if you're mature enough to have sex _properly_ (ie within marriage) then you should be mature enough to figure out how to buy your own darn contraception.

Expand full comment

It seems perfectly normal to do one immoral thing to stop something even more immoral. That's the entire point of prison and war.

Expand full comment

More generally, you have to remember that religious folks are deontologists as opposed to utilitarians. They believe you don't do wrong things, even to prevent a greater wrong. This is the source of a lot of confusion with religious values.

Expand full comment

"I mean, I don't think it's derailing if someone makes the argument that pro-life folks should support robust sex ed programs and access to contraception, which would lead to much fewer abortions."

To which, as a pro-life person, I reply that I've danced this dance before. "Okay, we'll give in on contraception, because as you say this will prevent unwanted babies being conceived, and hence reduce abortions".

"Great! Oh, but here's Sally. Sally *was* using contraception, but it failed, and now she's pregnant with an unplanned baby. Plainly she never wanted or intended to become pregnant because she was using contraception, so equally clearly, she should have access to abortion as the back-up!"

"Hang on, you never mentioned anything about 'failed contraception' when we made our agreement and clearly *we* feel strongly that once conceived, a child should be born. So no abortion for Sally, sorry!"

"You mean you would force this poor woman to go through nine months of agony with an unplanned pregnancy, then torture her by making her give up her very own baby for adoption into the arms of total strangers, or compel her to keep this unwanted child, which will then be abused - look, we have studies!"

"Wait, what? No we don't want to torture anyone! We believe abortion is murder and we can't agree to murder, that's why we agreed to the compromise on contraception!"

"MONSTERS! SEXIST, MISOGYNIST MONSTERS WHO HATE WOMEN, WANT TO PUNISH THEM FOR THEIR SEXUALITY, AND WANT TO FORCE EVERY WOMAN TO BE CONSTANTLY PREGNANT! REPRODUCTIVE JUSTICE RIGHTS NOW! KEEP YOUR ROSARIES OFF MY OVARIES!"

Yeah - been there, done that, got the T-shirt, as they say.

Expand full comment

And yes, I *have* seen pro-abortion arguments about how, once a woman has gone through pregnancy and delivered the child, she has bonded with it so giving it up for adoption will cause her pain and grief, therefore it is much kinder and better to let her kill the baby at an earlier stage of pregnancy*.

*You don't think abortion is killing a baby. I do. I'm going to use my own terms and beliefs here. Feel free to call me harsh names and throw rotten vegetables and try to persuade me with hair-splitting definitions of terms ("now do you really consider a zygote to be the same thing as a baby, hmmm?") but I'm not shifting on this.

Expand full comment

Hope it's a comfy shirt at least.

Expand full comment

I wear it with pride, emblazoned as it is with details of all the "isms" and "phobes" accusations that I am collecting. Fortunately, I am large-framed so there is plenty of room for the new epithets that the PC/progressive/woke/however they're being called this year want to add when they find yet another reason to be agitated 😁

Expand full comment

I may not agree with you, but fair enough.

Expand full comment

Oh, I'm hardened to it now after reading that "holding your opinions which were merely centrist or accepted common sense X years ago now means you are a fascist, you alt-right Nazi!" too many times.

Well okay, if by your definition of -ism, I'm an -ist, then I'm an -ist!

Expand full comment

I can totally empathize with the emotional argument, believe me. However, speaking purely numerically, one way to reduce the number of abortions (reduce, admittedly not eliminate) would be to make contraception free and widely available. Put a condom machine on every corner, fully subsidize hormonal birth control, etc. And yet, those who hold the pro-life view always seem to treat contraceptives as some sort of a psychic Ebola that should be fought at every turn. Why is that ?

Expand full comment

Mostly because, as near as I can tell, the root problem is the religion. I'm pro choice now, but as a brainwashed/indoctrinated Christian teenager, you're against the abortion because it's murder, and you're against the contraception/sex ed because of the premarital sex (like someone else pointed out above, maybe Deiseach).

The real problem is that a viewpoint like that doesn't leave much room for compromise, because the actions we could take to stop the issue from being as big an issue are also, themselves, issues that they don't want to compromise on. The part that personally bugs me is the magical thinking this all has to involve in thinking that somehow fully outlawing all of the things will stop them. They won't. So, really, I have to believe that it comes down to some deep need to be punitive to people who don't believe as you do.

Expand full comment

Eh, it's not clear that I was the teenager in my sentence above, so I'm adding that here for clarity. I'm in my 30s now and I'd like to think I've grown, learned, and evolved so much since then. But I wasn't implying anyone in this thread was/is a brainwashed teenager other than myself (though if you watch "Jesus Camp" it's hard not to see the apparatus for what it is).

Expand full comment

Could the root problem also have one tiny shoot as well, that "everyone is and should be having sex all the time!" and "pregnancy will wreck your life completely and destroy any chance you had at anything" is a bit of a problem because sex does make babies, and even good contraception can fail?

So you know - "do the thing that makes babies, but don't make babies!" is also a tough row to hoe.

Expand full comment

I think the number of abortions resulting from failed contraception (especially in cases where both parties knew how it worked and applied the methods/means appropriately) are infinitely lower than the cases where folks just assumed they wouldn't get pregnant for some ill-informed reason, cases of rape, etc. So, to call that a "root" cause seems to be a bit misleading.

I mean, I suppose it's not really worth asking if you can see how Puritanical it is to suggest consenting people not engage in acts that they're driven to by forces both natural and social simply because your personal belief is that they shouldn't.

Expand full comment

It all depends on what your goal is. If your goal is to make sure no abortions happen anywhere ever, then yes, you should campaign against all kinds of sex, indiscriminately (except the archbishop-approved marriage sex, presumably).

However, if you want to significantly reduce the number of abortions, then supplying free contraception to every man, woman, and nonbinary critter is definitely the way to go. Yes, you will somewhat increase the total amount of sex that is happening, but the massive decrease in abortions overall will more than offset this problem -- despite the occasional extra abortion due to failed contraception.

Of course, if your religion prohibits you from taking any actions that could increase the amount of sex in the world, no matter what, then you're in a bit of a bind. I don't know what a good solution would be in this case -- forced sterilization, maybe ? -- but banning abortion does not seem to work too well. Historically, people tend to simply resort to home-grown methods in this case, often with disastrous results (even the USSR was forced to tacitly admit this, after a while, but boy did they try to hold on as long as they could).

Expand full comment

Because I've seen that argument running in my country over the past thirty years. "Permit contraception!" they said. "If Irish women have access to contraceptives, then unwanted births will go down!"

Now you can buy condoms in the supermarket, let alone go to your GP on the medical card and get contraceptive services. Meanwhile, in the school where I worked and in the social housing department where I worked, it was routine for kids of the same mother to have different surnames because they had different fathers, none of whom were ever married to each other. "Access to contraception" was not the problem there.

And now we've finally got the liberalisation of abortion laws, which activists declared on the very day they won the referendum weren't liberal enough and they would be working to get even more liberal laws. And I have no reason to believe that this will stop at "fatal foetal abnormalities, rape, incest, threat to the life and health of the mother". We need abortion, you see, because somehow despite having contraception, people are still getting pregnant when they don't want to be.

So no, I don't believe the arguments advanced. "Just this *one* exception" turns out to be "and now you gave in on that, you have to give in on this". Cthulhu always swims left.

Ironically, in view of the fact that under the cruel, inhumane laws of yore forbidding abortion this case wouldn't have happened:

https://www.irishexaminer.com/news/courtandcrime/arid-40308965.html

"In her action, the woman claims she had a scan at 12 weeks of her pregnancy on February 21, 2019, which showed everything was normal.

A week later, on February 28, she got a phone call from one of the defendant consultants and was told that a DNA test, called the Harmony Test, showed positive for a syndrome known as Trisomy 18 which results in abnormalities in babies.

A further scan was carried out and again it was normal but the woman was then advised to undergo a test on the placenta known as Chorionic villus sampling (CVS) which was sent to the Glasgow laboratory. The woman was advised on March 8, 2019, this test revealed Trisomy 18.

She claims her consultant advised the pregnancy was non-viable and that there was no point in waiting for a full analysis of the results. This is known as the Karyotype test which examines all the chromosomes inside cells for anything unusual.

She says she relied totally on this advice and the termination was carried out on March 14, 2019, in the clinic.

Subsequently, she claims, the full Karyotype analysis was provided to her which she says showed her unborn child did not have Trisomy 18 and she had a normal healthy baby boy."

If I'm being cruel, I'd say: but they got what they wanted - a dead baby. They just changed their minds when they found out that this baby didn't have the condition they were told it had, and the decision to abort was made by them. The medical services just provided the medical service that activists had demanded be made legal and available.

I know that's cruel, I know it's a hard decision, I know the couple are suffering - but one of the planks of the demand for abortion was "fatal foetal abnormality" and any hospital or doctor who refused an abortion on such grounds now it's legal would equally have been in trouble.

Expand full comment

Re-reading the above, I suppose I should clarify that the mothers were never married to any of the fathers, not that the fathers were never married to each other.

Though we do have gay marriage now, so if the fathers *did* want to marry one another, that's plain sailing! 😀

Expand full comment

> "Permit contraception!" they said. "If Irish women have access to contraceptives, then unwanted births will go down!"

Well... did they ? You pointed out that unwanted births were not *eliminated*, but that's not the same thing. Short of divine intervention, you will never be able to eliminate all unwanted births.

Which is actually another aspect of religious pro-life organizations that strikes me as hypocritical. One way to eliminate unwanted births is to reduce the total number of births, and thus conceptions. But another way is to increase the proportion of births that are *wanted*. And yet, I rarely see any kind of a push from Christian churches to e.g. provide financial assistance to all child-bearing couples; free pre-natal and post-natal medical care; free education; etc. Yes, there are charities that try to handle all that stuff, but the main effort overall appears to be aimed at preventing teenagers from having sex; reducing the prevalence of contraception and sex-ed; and then shaming said teenagers extra hard when they inevitably do get pregnant.

To be fair, I'm not attributing any nefarious motives to you personally, but it's hard for me to see major Christian organizations (at least here in the US) as anything other than evil or stupid, as far as abortion is concerned.

Expand full comment

I was under the impression that a lot of crisis pregnancy centers - among other charities - do, in fact, provide assistance to new mothers.

It could definitely be more systematized, but it is being done, and supported by Christians.

Expand full comment

> (4): Instead of worrying about “pandemic preparedness”, we should worry about all the people dying of normal diseases like pneumonia right now.

[...]

> I don’t think I’ve heard anything like any of these arguments, and they all sound weird enough that I’d be taken aback if I saw them in a major news source

What? #4 was a super common argument in Feb 2020, see all the "why are you worried about coronavirus when you should be worried about the flu?" takes.

Expand full comment

- 3, 4, 5, 6, and the abortion vs neonatal health example all seem like reasonable statements that well intentioned people might make... even the police brutality one.

- isn't the premise of Effective Altruism (and Rationalism in general) that you can make distinctions about where attention is most valuably dedicated?

- sometimes people phrase these as tradeoff arguments (colloquially) when what they actually mean is a sequencing argument "I think X needs to come before Y"

Expand full comment

YES, someone framed it the way I've been struggling to. Trade-off vs. sequencing/hierarchy.

Expand full comment

My problem with the argument "we should worry about present problem x instead of future problem y" is that x is often undefined and will never be solved.

For example, Acemoglu wants to ignore AGI risks in favor of current problems like AI displacing workers...but how do you solve that problem? How long will it take? What's the point where we can declare victory on this issue, and start thinking about AGI?

Probably never. Labor pool displacement is a battle we'll be fighting for as long as AI exists. So basically this cashes out to "we should never think about AGI at all."

I feel the same way about "people shouldn't go to space when there are people starving on earth!" type arguments. Yes, it might be better to direct money away from some things and toward others. But since it seems likely that there will always be at least a couple of people in poverty... when do we go to space?

Expand full comment

It seems to me that it's easier to diagnose and solve for a current issue than it is some hypothetical one. We know facial recognition AI is abysmal and oppressive, especially for people of color, so that seems like it's easier to solve for than "What form will our machine overlords take?"

Expand full comment

I have been a fan of you rationalists for a long time, but this 'dispute' has, at long last, given me a glimpse of why people I admire - say, Tyler Cowen - have mixed feelings about you. WTF?

Expand full comment
founding

Please see the comment policy (I can't find ACX's version but here's SSC's: https://slatestarcodex.com/2014/03/02/the-comment-policy-is-victorian-sufi-buddha-lite/). Your comment certainly isn't kind, and it's hard to call it necessary or true either if you don't provide any reason *why* this dispute gives you negative feelings about all rationalists - if you think that Scott (or other disputants) have gone astray, tell us where!

Expand full comment

Gosh. I just meant it seemed a strange sort of dispute. No ill will. I liked the 'unquantifiable common sense' description of Argentus. I enjoy SSC. How does 'kindness' come into it? I almost always find Scott's takes very interesting and to the point.

Expand full comment
founding

I interpreted your comment as just putting down rationalists (and not giving any reason). The strict denotation has at least some negativity: saying this post makes you understand why people don't like rationalists implies you think there's something negative here, and then addressing "you rationalists" and ending on "WTF" made me take the whole thing as much more negative than you'd intended - in my experience, addressing "you Xs" usually carries a negative connotation towards group X while distancing yourself from them. I'm glad to see that my reading was not the intended one, and it's also nice to see more of what you meant (i.e. that the dispute just seems strange to you).

Expand full comment

Yeah. I've found the discussions on SSC and LessWrong much more insightful than those on places like Marginal Revolution. What I meant to express was that, in this instance, I would understand why someone like Tyler might think this was 'over intellectualizing,' something I never had before.

Expand full comment

(1) seems obviously true (in the sense that it's a legitimate argument) to me. My model is that attention to police abuses is abundant, while political capital to do anything about it is scarce. Assuming that the police unions will fight to the death to prevent any reform from passing, we need to all line up behind whichever one is most important, or we'll end up with nothing.

Expand full comment

I've heard arguments like (6) fairly regularly -- and sometimes made them, but since I am American, the argument is "Instead of worrying about racism against Uighurs in China, we should be worrying about racism against African Americans and Native Americans in the United States." (But I would consider the opposite argument to be valid if anyone were making it in China.) While I don't particularly agree with Acemoglu's argument (based on your retelling), I do generally agree with arguments of the form, "Instead of worrying about potential future problems which I have negligible ability to impact, I should worry about known current problems that I have some ability to impact," and I suspect Acemoglu believed he was making an argument of that form.

As a general rule, I don't think most economists or pundits have any understanding of tech, and therefore they don't address the actual circumstances under consideration. For example, on information security, from what I've seen programmers are much more inclined to make the argument, "Game theory says we should make paying ransoms to hackers sufficiently illegal that no reasonable person in charge of such a decision would ever pay the ransom." Economists, by contrast, are much more likely to say, "Game theory says companies need to spend more on information security to prevent hackers from damaging their systems" (I read three such articles in Bloomberg Opinion in the couple of weeks after the Colonial Pipeline hack). That claim would be true if hackers were primarily attacking vulnerabilities that exist due to oversights (i.e. errors of omission), but it is false because they are primarily attacking vulnerabilities that exist due to bugs (i.e. errors of commission which nobody responsible realizes are errors).

Similarly, there are many incorrect models of how ML works and might progress under which Acemoglu's argument is more or less correct. (E.g., if you assume future A.I.s will be built up by iterating on current A.I.s, instead of realizing that most major advances in technology come from largely greenfield projects, it makes sense to say: before we worry about hypothetical bugs that we might introduce in the future, let's worry about all the extant bugs that are already there. Alternatively, if you believe advancing AI is so categorically different from computer programming that we could feasibly forbid AI research and roll back technology by five years without rolling it back by 80 years, his argument makes sense.)

(The correctness of the arguments in incorrect paradigms doesn't prove their incorrectness in the correct paradigm. I don't know how ML will progress, and it's conceivable his arguments are also more or less correct for how ML will actually progress, but I doubt it.)

To anyone familiar with MIRI, the rebuttal, "The problems Acemoglu is discussing are just instances of the ad hoc approach to AI research/implementation that is currently common, and the best way to address them is to research how to make an A.I. which provably does what it's supposed to do, which is precisely what institutions like MIRI are trying to do," is so obvious it doesn't need to be said. (I'm not confident that statement is true either, but I think for anyone who has had significant exposure to Eliezer's thoughts on the matter, it's such an obvious response that Acemoglu isn't participating in the conversation unless he addresses it, which he doesn't.)

Expand full comment

Just adding a couple things that I thought were so obvious in my context that I didn't bother saying them, but probably aren't obvious at all in most contexts.

I somewhat regularly make arguments of this form: "Since future X will be built up by iterating on current X, let's worry about all the extant bugs that are already there before we worry about hypothetical bugs that we might introduce in the future." But I am working on specific X's that are iteratively improved, so sometimes I have to say, "At this time no feature requests should be prioritized for this product, because the product already has too many bugs which need to be addressed before new features add even more." (But this is not a valid counterargument to "this product needs to be re-architected because it has too many bugs," and actual AI researchers are advocating something much more like re-architecting the solution than adding more features to a buggy product.)

As to the fungible resource that makes the argument relevant, the two most obvious candidates are: 1) the mind-share of developers working on a particular problem; and 2) the fact that the number of bugs scales with the size of the solution. "1)" is less true for the case of AI-xrisk-theorists vs. current-AI-developers than it is for a group of people developing a particular product, but it is still somewhat true for AI/ML research. For instance, I've read arguments about AGI and AI friendliness with time I intended to spend learning about ML or AI, even though my original and primary interest in the subject remains much more applied. It seems highly plausible to me that I would be more aware of people's concerns about AI/ML bias if nobody talked about AI x-risk, since both are well within the scope of light reading related to AI/ML that someone can guiltlessly and possibly-relevantly distract themselves with while pretending to be learning/working/productively using their time. And both are also well within the scope of water-cooler conversations for people working in these sorts of subject areas.

(Context: I'm mostly a regular old-fashioned software developer, but for significant parts of my career I've primarily written framework code for teams that are mostly working on ML problems, without directly doing much ML work myself.)

Expand full comment

I think it is time to explain why risks of AGI (artificial general intelligence) are often ignored or downplayed by researchers in the field.

Among people closely involved in AI research it is widely recognized that AGI has been created at least twice at Facebook, at least three times at Google, and at least once at OpenAI (the latter coincided in time with OpenAI becoming much less open about its research). I say "widely recognized" rather than "known" because in all cases we only have circumstantial evidence. Shortly after each AGI was created, the plot unfolded similarly -- in spirit -- to the plot of "Definitely Maybe", a book by the Russian sci-fi authors Arkady and Boris Strugatsky (known in Russian as "За миллиард лет до конца света"). A series of strange coincidences led to the projects being abandoned, with very little evidence left.

This effect was dubbed the DM-effect (for "Definitely Maybe") and is responsible for multiple recent breakthroughs in AI. The name of the famous AI company DeepMind is actually a backronym for the DM-effect. After one of the projects that triggered the DM-effect was identified, said project was planned conditionally on the failure of less ambitious projects, which led to the stunning successes of Alpha-Go, Alpha-Zero and Alpha-Fold.

It is not easy to harness the power of the DM-effect, and failures to do so are difficult to investigate because of its nature. It is suspected that at least one known DM-triggering project was irrevocably lost while trying to use it to boost other projects. Many believe that the recent firing of Timnit Gebru (former AI ethicist at Google) is related to the DM-effect that enabled the BERT project.

The DM-effect is very badly understood and its unchecked use by big corporations is the real threat of AGI, as the threat of AGI itself seems to be mitigated by universal laws yet to be discovered.

Expand full comment

I don't think I get it. If I remember correctly, in the Strugatskys' story there is a law of nature that prevents the destruction of the universe by sabotaging the progress of civilizations. But how can this be harnessed to create Alpha-Fold?

Expand full comment

The law seems to prevent some particular things from being done. If we commit strongly enough to work on those things conditionally on something else (e.g. the failure of Alpha-Fold), we can trick it into trying to avoid the failure of Alpha-Fold. This is actually quite flaky, and nobody understands how it really works. It's probably not quite as the Strugatskys described it.

Expand full comment

The problem is that these arguments sound strange when they shouldn't.

They are all about prioritization, something severely lacking. The problem is that we feel like we should not have any limits, like we could and should worry about everything, and can and will fix everything -- lest anybody constrain our "freedom" to do everything at the same time.

So we're used to worrying about everything all the time, getting equally enraged (or even more so) about this thing and then that thing which is 100 times less important. Just as with celebrity gossip, some less important issues gather many times the outrage, comments, funding, etc., of other far more important ones.

In the end, we never put things in perspective, do almost nothing about any of the things we "worry" about (or are told to worry about every week by the media and social media), and do even less about the important things we should be most worried about.

Expand full comment

What about looking at the other project Musk is famous for?

"Instead of worrying about how to settle Mars in the future (which is, by the way, a foolish and outlandish science fiction idea), we should focus on the problems here on Earth we have right now."

Maybe it is just people not liking Musk! But I think it is a bit more universal. (3) seems borderline comparable to the AGI risk "substitute goods" put-down argument. Prior to COVID, worrying about pandemic risk could not be dismissed totally in an off-handed way, because there is evidence of past pandemics (like the Black Death) having devastating effects. However, there is a weaker put-down argument which has some similarity:

"You know those fringe preppers who want to build self-sufficient farms in the woods and buy lots of guns and are worried about pandemics? That is silly because we have modern healthcare, government and civilization, and obviously a future pandemic will not be a zombie apocalypse. Instead of worrying about civilizational collapse from killer pandemic, what you should instead is look at these not-insane disaster precaution checklists: Have enough water and food to survive a disturbance for few days, follow the instructions by authorities, etc."

Sounds familiar? It is not as juicy a substitution: there is a silly pop culture catastrophe scenario to put down (zombies), but there is a precedent for pandemics, so some form of the serious risk is granted to exist. Consequently, people who wish to talk about "serious" pandemic preparedness don't feel they always need to attach precautionary notices before talking about it ("we are not like those fringe outgroup people preparing for a zombie apocalypse, but here are some important things about pandemic preparedness ..."). Yet some people will still sometimes write about how preppers have seen too many zombie movies and are excessively worried about implausible scenarios.

Now, back to AGI risk. Long-term vs. short-term AGI risk as a notably juicy "substitute goods" pairing sounds like a good framing, but I want to argue it is not surprising at all:

"And it just so happens that the only two substitute goods in the entire intellectual universe are concern about near-term AI risk, and concern about long-term AI risk."

Imagine we had never witnessed a serious deadly epidemic (a strange stable state where human diseases were either super contagious but not serious, or deadly but not very contagious, maybe because we never domesticated cows). Then we would learn about germs, and some science fiction authors would speculate about killer diseases. Then, a generation or two later, some somewhat more respectable people would realize that actually, nothing in evolutionary theory precludes the possibility of contagious deadly germs, and there are diseases that kill ants, so maybe we should worry about it. (If you want to paint them as serious: in the alternate universe, people had recently started experimenting with this new and exciting technology called "animal husbandry", inspired by advances in evolutionary theory.)

There is a reason why AI risk sounds weird and sci-fi-ish: "technology misusing itself" is a weird and sci-fi-ish idea. It has the unique problem that it was first conceived in its recognizable form, and became popular, in science fiction, decades before the technology got to the point where any kind of AI risk started looking plausible. (Or plausible to the sort of people who want to remain grounded in their predictions, instead of writing or reading SF.) Asimov was writing stories about the Three Laws of Robotics before anyone came up with the term "artificial intelligence" and before we had anything resembling something that could one day have any of the capabilities of R. Daneel Olivaw.

Also note that Asimov published in the pulps, not the most prestigious medium around at the time. At best, you would have some researchers who read Astounding Science Fiction and were confident their research was heading in the right direction, towards robotics. The unprestigiousness of the idea has stuck a little.

And, methinks, it is not a totally unreasonable argument that some aspects of AGI risk talk could be misguided because key ideas may carry unnecessary anthropomorphic baggage that originates from old SF. After all, it is a bit speculative. Theoretical results about rational agents can be proven correct, but it is a non-trivial task to show that agents are the correct way to approach the question.

Thus, it is not surprising at all that many people find it easier to talk and think about AI risk as narrow-AI risk, in terms of automation displacing workers and totalitarian surveillance -- old worries from the 19th and 20th centuries -- and contrast it with fanciful science fiction AI.

Expand full comment

Update: I think I have an example of how it might be surprisingly common for ideas first popular in SF to have difficulty getting traction as serious scientific ideas without very strong evidence for them:

Civilization-level or extinction-level risk from meteor impacts.

I don't know much about the topic beyond what I found by googling, but the similarities are notable! It wouldn't hurt if someone more knowledgeable about the history of the field would weigh in to confirm.

So, the idea of large space rocks or other objects colliding with Earth and causing large-scale devastation has a long history in science fiction (H. G. Wells, "The Star", 1897). Today it does not seem like a fanciful SF idea, because most people know about the dinosaurs. However, you might be surprised to learn that the science took a long time to get here.

While the craters on the Moon are there for everyone to see, apparently it took quite a long time before people started to view lunar craters as impact craters instead of volcanic craters, even longer until it was no longer a contentious issue in astronomy, and then some decades more before geology agreed [1].

And that was just the craters. Brief googling suggests it took until the 1980s and some very persuasive arguments by Alvarez and others (about finding iridium in certain places) before there was a consensus that the dinosaur extinction was caused by an asteroid impact [2] (and consequently, that maybe we should worry about hypothetical future asteroids too).

I don't know if anyone ever said, "Instead of worrying about Mr Wells' fantastical science fiction killer collisions with objects from SPACE!, we should worry about other more realistic things like volcanic eruptions causing crop failures", but it does not sound implausible.

Maybe the long-term AI risk research program is doing comparatively well, considering you cannot find irrefutable traces of AGI causing a catastrophe by digging into the ground and doing a chemical analysis of samples.

[1] According to this source: https://www.univie.ac.at/geochemistry/koeberl/publikation_list/189-lunar-craters-history-EMP2001.pdf

Direct quote:

"...geologists found it largely inconceivable (maybe even offensive) that an extraterrestrial object would have influenced the geological and biological evolution on the Earth. This might explain the mixture of disbelief, rejection, and ridicule with which the suggestion was greeted that an asteroid or comet impact wiped out the dinosaurs and other species at the end of the Cretaceous (Alvarez et al., 1980)"

[2] Alvarez, 1983: https://www.pnas.org/content/pnas/80/2/627.full.pdf "I simply do not understand why some paleontologists-who are really the people that told us all about the extinctions and without whose efforts we would never have seen any dinosaurs in museums-now seem to deny that there ever was a catastrophic extinction. When we come along and say,"Here is how we think the extinction took place," some of them say, "What extinction? We don't think there was any sudden extinction at all. The dinosaurs just died away for reasons unconnected with your asteroid." So my biggest surprise was that many paleontologists (including some very good friends) did not accept our ideas."

Expand full comment

Much of the time, when I've encountered "instead of devoting resources to A, we should devote them to B" in response to someone advocating for A, it's been a not-very-thin disguise for either "I don't want anything going to A, ever, but saying that will get me pushback, so I'll cite B instead, which is a sacred cow" or "the people who benefit from B are more deserving (i.e. higher status) than lowlifes like you and the other people who care about A".

Net result - arguments of that form, in that context, tend to result in me thinking about what the speaker really meant, rather than what they said.

I haven't read Acemoglu and don't have much interest in the whole AI risk thing. But it would be amusing if their actual problem boiled down to "only nerds care about AI; we should focus on something real people care about" or similar ;-(

Expand full comment

err - obvious typo above, and no edit function. Thank you substack, for this and the tiny window in which I have to type my comments, making it next to impossible to proofread them.

Expand full comment

Here is Acemoglu's article in the Boston Review from two months ago which reads much more like something he intended to write: http://bostonreview.net/forum/science-nature/daron-acemoglu-redesigning-ai

It is about what he sees as the problems and solutions in AI and related policy. To summarize, in spite of the good things AI and automation might do, surveillance and human job loss are big negatives. This requires policy solutions:

"The same three-pronged approach can work in AI: government involvement, norms shifting, and democratic oversight."

Worry now versus worry later is interesting rhetorically. But I'm putting this here for those who are curious about what Acemoglu may have been trying to say in the Post.

In terms of the arguments above, (3) was the basis for the Cold War (albeit with other countries, not Ethiopia), so there's that.

Expand full comment

"“Instead of worrying about sports and fashion, we should worry about global climate change”? This doesn’t ring false as loudly as the other six. But I still hear it pretty rarely. Everyone seems to agree that, however important global climate change is, attacking people’s other interests, however trivial, isn’t a great way to raise awareness."

I agree it's a bad strategy, but thoughts along these lines occur to me daily—about myself above all, but about everyone else, too. Working on any other social/political/anything issue just seems bizarre. It's like there's a Borg cube orbiting the planet, and if we *all* focus on doing *everything* we can maybe we'll beat it, and if we don't we all get assimilated, and.... everyone is still worried about sports and fashion and the middle east and whatever. (Again, very much including me!) I think this is related to why it's a wicked problem: the scale, scope and threat of the problem interfaces really badly with our danger-judging heuristics, to the point where we manage to think about other things. If we were in equivalent danger in another category... except I don't think we *could* be in equivalent danger in another category, because long before we got this close we would have gotten our acts together to solve it. In no other area would we let it get this close.

I guess the plus side is this: whatever else you're worried about—dangerous super-AI, racism, wokeness, the middle east, democracy, whatever—it most likely won't be a problem in 50-100 years because civilization will have destabilized to the point where it can't be a problem any more. Yay, I guess?

Expand full comment

Jesus Christ was also fond of "Don't worry about X, worry about Y" arguments:

"And why beholdest thou the mote that is in thy brother's eye, but considerest not the beam that is in thine own eye?"

Expand full comment

I think Acemoglu is doing something like this:

1. You may think that worrying about AI is nerdy, but let me signal to you that I'm with you in spirit and definitely not a weird AI nerd by making fun of a weird nerd obsession in a way that makes it clear they're an outgroup.

2. Now that I've established I'm not a weird nerd, let me tell you why this somewhat less weird nerdy concern is actually a real concern.

3. AI is bad.

4. But remember that I'm not a weird nerd like those other ones who worry about something super sci-fi and fantastic, and this is a grounded legit concern.

Expand full comment

So you're postulating that Acemoglu is doing the equivalent of "Of course Orange Man Bad, Republicans all Snidely Whiplash types, now we've got that out of the way, can I talk about how we're screwing this thing up, my fellow liberals/lefties", that I see quite often in online articles that want to critique something but also want to avoid a howling Twitter mob calling for their head on a pike as a crypto-rightwinger?

Expand full comment

>>(1): Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people.

You're on to something here.

Expand full comment

Indeed, this kind of bad argument actually reduces the space for serious and important discussion of AI risks. In particular, the arguments for AI x-risk aren't very well developed IMO and rest on some pretty shaky foundations. I don't know how it will turn out, but we absolutely need people taking criticisms of these arguments very seriously and responding by trying hard to improve them, and I think we need those arguments addressed more in traditional philosophy, CS and biology journals to really evaluate them. Otherwise it's just too hard to attract people who find the x-risk arguments unconvincing to spend time raising good challenges, and the AI x-risk proponents won't see it as rational to respond unless responding well could convince people who aren't already on board. This kind of bad criticism just creates mutually suspicious camps (even though many people are in both).

---

One thing which really upsets me about this is that it also obstructs real, necessary criticism of existential AI risk arguments; I'm still not sure whether AIs pose serious x-risk. Specifically, I think the foundations of AI x-risk aren't well developed on the following two issues:

1) The assumption that we can model AIs as having things like goals/beliefs. I mean, those are only so-so approximations for people (we behave as if our beliefs/goals change in different contexts), and Bostrom offers no real reason to think that AGI will have simple, coherent beliefs/goals (obviously any behavior optimizes some reward function, but not in a useful sense) rather than, say, just treating eliminating disease via recommending new drug targets totally differently than eliminating it via manipulating humans into starting a war.

Bostrom's argument here seems to be nothing but: it seems like more capable/smarter creatures have more coherent/global goals. But that's exactly what the critic would predict here too; they would just say that's a consequence of evolution favoring global optimization for reproductive success/not dying. For humans, realizing that we act as if we value different things in different contexts creates pressure to act in a more uniform manner, but it's not at all clear this will tend to be true for AI, or whether it's just an artifact of evolutionary pressure.

2) The fact that most arguments that misaligned AI represent a hard to contain threat (not merely something we might want to not rely on to control all systems) seem to be largely narrative in nature and those narratives often seem to suppose sci-fi style superpowers from extreme intelligence. Given that we tend to be smart and love that property it should naturally raise some red flags.

In particular, I think the phenomenon in complexity theory (indeed in computation more generally) that natural problems tend to be either relatively computationally easy or quite computationally hard suggests that superintelligence just isn't that powerful on its own. Sure, it's certainly an advantage, but I'd suggest that it's generally just really, really computationally hard to figure out how any given intervention will affect the world (i.e. as you look at more indirect effects, the resources needed grow very quickly... I'd say big O, but technically this isn't in terms of input size).

In other words, what if it's just not computationally tractable to figure out how your words/suggestions will affect people several links removed, much less affect elections and wars, except in the ways that are also obvious to us? In that case AIs can just be kept off the internet. Sure, maybe one tricks its handlers into putting it on the internet, but if intelligence doesn't give it superpowers to convince people, that just means everyone goes 'gasp... oh no, the AI got loose' and shuts it down, and it just doesn't have enough predictive power to scheme out any high-probability counterplay.

On this take there is still real AI risk but we need to ditch the evil genie style narrative and focus more on the usual kinds of human frailties and stupidities of giving too much authority to machines/systems etc…

Expand full comment

On #1, I believe right now the only way we know to train AI is to pick a particular (possibly messy and convoluted) function and ask it to maximize it. My understanding was that a fair chunk of AI alignment research is on "how do we not do that".
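If it helps, here is a minimal toy sketch of that framing (my own illustration, not a description of any real training pipeline; the objective, step size, and loop count are all assumptions for the example): the designer hand-picks a function, and the learner's only job is to push that number up by whatever changes happen to work.

```python
# Toy sketch of "pick a function and maximize it" (illustrative only).
import random

def reward(x: float) -> float:
    # Stand-in for whatever messy, convoluted objective the designer happened to pick.
    return -(x - 3.0) ** 2

x = 0.0
for _ in range(10_000):
    candidate = x + random.gauss(0, 0.1)   # propose a small random change
    if reward(candidate) > reward(x):      # keep it only if the chosen number goes up
        x = candidate

print(round(x, 2))  # ends up near 3.0, the maximizer of the hand-picked function
```

Nothing in that loop cares what the function "means"; alignment work, as I understand it, is largely about not being stuck with this setup.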

Expand full comment

Re #1, I think it's obvious that a computer program could behave in ways that look nothing like having goals or having beliefs. But that's kind of like saying "not every computer program is a superintelligent AI".

I think "superintelligent" is a shorthand for something like "very good at epistemic rationality (forming accurate beliefs about the world) and instrumental rationality (devising plans that will successfully achieve a given goal)". So "beliefs" and "goals" are sort of baked into the definition. We can definitely create computer programs that don't have anything like beliefs or goals--in fact, most existing computer programs fall into this category--we just don't describe them as "superintelligent".

The question of "what would a superintelligent AI do?" can be rephrased as "what would a computer program do if it were really good at forming beliefs and achieving goals?"

.

Now, there are a few hairs you could split here. You could have an AI that formulates plans, but then merely describes the plan rather than enacting it (see: tool vs agent AI). You could perhaps imagine a program that is good at epistemic rationality OR instrumental rationality, but not both (though I think there's a lot of overlap). You could try to propose a vision of some computer program that we would want to describe as "superintelligent" even if it doesn't meet the above definition (though this doesn't seem particularly relevant unless you can ALSO convince AI researchers to actually build your vision INSTEAD of this one).

But basically, this is the thing that AI researchers are currently trying to build.

Expand full comment

I agree that this isn't an argument that there is no risk. But the question is whether the ability to, say, appear superintelligent in conversation, in suggesting research topics, or in whatever else you want the AI to do makes it very likely that the AI will have things that operate globally like beliefs... and it's not clear it would have things that operate like goals globally (i.e. across multiple contexts).

Or, to put the point differently: if a person is trying to reduce the global disease burden by coming up with new ideas for medications, and they become aware that convincing people to give them a bigger lab would let them come up with medications even more effectively, we assume they will take that action. We presume that people have some ultimate simple goal, and that if they become aware of some other means of achieving that goal they will act on it. I see no reason to assume that for an AI.

For instance, the fact that an AI is programmed to try to produce theorems that people will find interesting, and might be able to recognize that there is some pattern of output it could produce which would cause people (say, by causing them to suffer some kind of weird brain damage) to find all theorems super interesting, doesn't obviously entail that the AI will therefore have some impulse to take that action.

But, to be clear, I'm merely arguing that the argument *for* AI x-risk is kinda weak here (which is why I give it lower probability at the moment) and that we should do more research to either show that this is somehow a necessary property of any system we are going to regard as superintelligent, or determine whether ensuring that a program has this kind of global 'consistency' requires effort. I lean towards the latter, but I'm not ruling out the former; I just think it can't be assumed, and that it should be a focus of research going forward.

Expand full comment

You are writing as if "applying your goal across multiple contexts" is some sort of advanced technique requiring a special sauce to achieve.

I tend to think the opposite. Applying a goal to only one context implies that you know what a "context" is, and that you have some way of telling which one you're in. I think humans use "context" as an optimization trick to speed up our search by limiting the search space. This is a complex and sophisticated hack.

Applying a goal "in general" seems like the obvious default that wouldn't require any special tricks. It will happen by default, unless we take pains to avoid it.

The reason existing AIs only optimize in a single domain is because they only UNDERSTAND a single domain, not because the GOAL is somehow restricted.

Expand full comment

Not quite. I'm writing as if it's not *clear* if that will happen naturally or will require extra effort to achieve. I don't think that can be assumed one way or the other.

Your argument about context is interesting, as it's exactly the opposite of the argument Bostrom uses to argue the main point. He argues that if we look at actors generally (I believe he means to include animals in this), they behave more consistently the more intelligent they are, and thus he thinks that any super-intelligence would tend to be even more coherent than us humans. I'm inclined to accept that his observation is true as far as animals go, but I suspect it's not a natural consequence of more intelligent behavior... only of the kind of intelligence that gets selected for in nature.

But that's kinda the opposite of context as a specific hack. And if you look to the animal kingdom, it seems much easier to find cases where behavior differs across contexts. When my dog sees another dog, in that context she acts as if her goal is to ensure that other dog stays off our fucking lawn; yet at the same time she is in some sense aware that the more she looks outside, the more likely she is to catch another dog on the lawn, and yet (as she gets old and lazy) she often seems actively uninterested in looking outside. You can attempt to reframe her goals in other ways, but I think the very fact that it's not even clear what the right way to frame that is helps make the point.

Far from context being a sophisticated technique humans have layered on top, it seems more like our limited degree of coherence is kinda kludged atop a system of heuristics/modes (emotions, etc.) which were selected to give the generally right sort of answer in certain situations/contexts. It seems like, in the model where context is just a sophisticated hack, being horny, hungry, tired, sober, etc. wouldn't create pressure to be akratic and wouldn't feel like it shifts what we want to do (it takes effort to keep in mind that later we will feel differently).

But I agree the matter isn't totally clear. It could be that as you make an agent more intelligent you and Bostrom are correct and it naturally adopts a set of behaviors that are globally coherent.

OTOH, I'm tempted by the model on which intelligence is, essentially, just a series of more and more sophisticated/general heuristics/hacks like our emotions, and it takes work to extend the range over which those heuristics produce actions that all appear to align towards some single desired end. It's easy to describe my dog's actions when she's barking at another dog, or when she's sitting inside gnawing on a bone, as if she has a single goal/desire, but when we take a wider POV they don't seem to be simply describable in terms of a single goal.

Expand full comment

I think a lot of behaviors (especially in animals) are not goal-oriented at all, and are more stimulus-response.

You can cobble together a bunch of stimulus-responses that add up to a pretty good overall strategy (given a relatively stable environment). But I think of "intelligence" (or optimization ability) as residing in the creation of the strategy, not in its execution. I don't think of a look-up table as being intelligent.

Even in stimulus-response, I think context is probably more complex than no-context (e.g. barking at other dogs only if they're in your territory, and not if you meet them outside your territory). But context-sensitive stimulus-response is probably still simpler than generalized planning ability.

Perhaps the tendency for more complex/intelligent animals to be more coherent in their goals is due to relying increasingly on planning, and less on stimulus-response? Though it could also just be that they have a bigger library of responses that add up to a better strategy.

This is also possibly related to Kahneman's description of system 1 & 2 thinking, where system 1 is fast, effortless, and slapdash, while system 2 is slow, deliberate, and capable of abstract logic.

I suppose I have been imagining a superintelligent AGI would involve a deliberately-engineered reasoning-engine, rather than, say, a gigantic neural net that is simply exposed to the world and left to sort itself out.

Expand full comment

But, as I said, I think what you say is a perfectly plausible epistemic possibility. However, I think it's also possible that thinking in terms of goals at all reflects a kind of evolved-organism bias, and that it takes effort to produce behavior that is directed at a single goal across a broad range of inputs.

Of course, even if I'm correct, we still need to worry about someone putting in that effort.

Expand full comment

Can you tell me more about this phenomenon "that natural problems tend to either be relatively computationally easy or quite computationally hard"?

Expand full comment

Sorry for the long delay but here's what I mean. Note the first example is to give an idea of the type of phenomenon I mean and isn't specifically about running time.

So if you look at computability, there are r.e. sets that are neither computable nor complete (i.e. of the same degree as the halting problem); in fact there is a countable lattice of such degrees. And yet any example which occurs naturally (i.e. isn't specifically constructed to produce an intermediate degree, so everything that comes up just from considering other mathematical or scientific problems) ends up being either computable or complete. Similarly, when we consider running time, we find that natural problems are either in the complexity class P or are NP-complete, even though, assuming P != NP, there are infinitely many problems of intermediate hardness.

More generally, this suggests that the kinds of problems we encounter in the world tend to have either relatively scalable solutions or very unscalable ones (yes, equating P with feasibly computable is iffy, but the phenomenon plausibly extends even to lower levels of complexity).

For instance, if you look at the computational complexity of optimal play in many games you find that it grows horrifically quickly with game size ( https://en.wikipedia.org/wiki/Game_complexity ) OTOH there are a bunch of games (not listed there because they aren't as interesting) which have super simple optimal strategies which can be computed super easily. And the same thing plausibly holds with respect to heuristic strategies. And trying to predict things about the real world or affect it can be thought of the same way.

So what would this mean re: AI? An AI that's smarter than you would still have a huge advantage in any kind of fair contest. But maybe I can predict how my statements will affect those who immediately hear them ("oh, if I tell her X she'll probably do Y"), while if I try to predict out an extra level, to guess that this will indirectly cause her boyfriend to take some action, that's just too complex for me. Maybe the super-intelligent AI can make useful predictions at one more level of indirection, but if the computational complexity of doing this falls on the hard side, it just gets too computationally intensive as you go further. So rather than the AI behaving like a magic wizard who can manipulate these huge, complex systems, the marginal benefit of extra intelligence would be quickly decreasing.
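As a toy illustration of that scaling intuition (my own assumed numbers, not from the argument above): if each person you influence has some branching factor of plausible reactions, the number of interaction chains you must reason about grows exponentially with prediction depth, so a constant-factor advantage in raw intelligence only buys a little extra depth.

```python
# Toy illustration with assumed numbers: exponential growth of influence
# chains with prediction depth. A branching factor of 20 plausible reactions
# per person makes even a few levels of indirection astronomically expensive.

def chains_to_consider(branching: int, depth: int) -> int:
    """Number of length-`depth` influence chains with `branching` options per step."""
    return branching ** depth

for depth in range(1, 7):
    print(f"depth {depth}: {chains_to_consider(branching=20, depth=depth):,} chains")
```

The absolute numbers are made up; the point is only the shape of the growth curve.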

Expand full comment

Even assuming P != NP, is there a meaningful difference in "scalability" between NPC and NPI? This argument seems not THAT far removed from saying "every running-time is either bounded by a polynomial or NOT bounded by a polynomial." If you draw a line somewhere between "easy" and "hard" problems, you are guaranteed to discover that EVERY problem is either "easy" or "hard", regardless of where you chose to draw that line.

And you're doing this in a field we don't understand very well to begin with. The standard complexity classes like P, NP, PSPACE, etc. are very broad, yet we STILL can't even prove that most of them are actually different! P=NP? NP=PSPACE? PSPACE=EXPTIME? EXPTIME=EXPSPACE? "We can't find very many problems in class X" isn't hugely persuasive when we can't prove that class X is even real.

Regardless, I don't think "scalability" (change in difficulty as the size of the problem changes) tells you very much about the practical returns from superintelligence.

For one thing, humans already do lots of stuff that is probably not in P. Many real-world problems come in a relatively small range of sizes (or you only need to solve the small size to be frighteningly powerful)--e.g. predicting "if I tell her X she'll probably do Y" is probably not in P to begin with, and being noticeably better at it would likely be a huge practical advantage even if you can't predict "further" than that.

For another thing, I don't think computational power would be a hypothetical AGI's only advantage. Humans have a lot of systematic biases and predictable failure modes that you might be able to get around with a better architecture, even without bringing extra computational power to bear.

Expand full comment

"Instead of worrying about sexism against men in the US, we should be worrying about sexism against women."

The dynamic isn't particularly rare and unusual; instead, it's so common and prevalent that it's invisible.

Expand full comment

I do think that there is a certain pot of money that only goes to academically respectable concerns and, while there are many academics who do take AI x-risk seriously, it still isn't seen that way generally. I mean, sure, you can get some papers accepted about it, but you won't get a grant to study it, while you will get grants to study AI bias, security, etc.

I suspect that a big part of what's going on here is basically a move to defend one's chunk of academic influence/funding/etc. by keeping novel concerns out of the academic Overton window of serious problems.

And yes, there really is a kind of gentleman's agreement here that once something gets established as a serious academic issue, with its own specialists and students and so on, you can't get rid of it. I mean, that's why we are burdened with some kinds of methodologies in some areas of philosophy (e.g. continental-style stuff) and other humanities that we don't have the slightest reason to believe are truth-conducive. Once they have the status of serious academics you can't kick them out, because so many academics outside of STEM would fear that once that Pandora's box is opened, maybe their field would be next.

Expand full comment

What are the most serious weak spots for Eco-terrorism?

Expand full comment

Do you mean terrorism committed for the sake of the ecosystem, or terrorism committed by threatening the ecosystem?

The former is a very broad category, and general resources on weak spots to terrorism apply. For the latter, I'd say artificial microbes (not GMO - fully designed); the other methods are deliberate greenhouse-gas production and radionuclide dispersal, but both would require titanic amounts of resources.

Expand full comment

Scott, I've followed you for years, but this pair of posts is the first time I've felt like you're outright lying about your motivations and beliefs. I'd just apologize, dude.

Expand full comment

Could you please explain what you think Scott's true motivations & beliefs are and why?

Expand full comment

I don't think he's lying about AI, to be clear. I just don't believe he thinks the article is as bad as he's saying it is; he just wants the news articles to focus on the parts he cares about the most, and is misrepresenting this poor journalist in the process. It's okay to be frustrated about how 'normies' don't care about the 'nerdy' world-ending potential of artificial intelligence. It's not okay to attack every article that disagrees with you under false pretenses so you can try to sell your own take.

Expand full comment

Well, I guess newspaper columns are a finite resource and he wants them to trade off column space on algorithmic bias vs. column space on superintelligence?

But what would be the reasonable way to object to the original article just dismissing superintelligence concerns in one line?

Expand full comment

Yes, it's not outrageously bad, just average bad, and Scott focuses on it because it's about an issue he cares about, obviously. Everybody else does this kind of thing though, so you're pretty much doing the same kind of thing to Scott that you're accusing him of, amusingly.

Expand full comment

Meh, I'd give him the benefit of the doubt. I've met devoted Christians who talk about the Rapture the same way: "Who cares about global warming; Jesus is coming any day now to scourge the entire world anyway ! Repent ! Before it's too late !"

Or, to put it another way, if I truly, honestly believed that demons from Phobos were going to invade the Earth any day now, I'd probably post a lot of articles to the effect of "can we please stop talking about AI and start stockpiling rocket launcher ammo ?". And I'd likewise feel completely exasperated that no one seems to be taking the demons as seriously as I do.

Expand full comment

This logic seems to work when we are talking about some centralized committee figuring out how to spend money. In that case the trade-offs don't even have to be related.

"Instead of waging wars we should invest in the economy."

"Instead of buying luxury goods we should donate money to charity"

But if the trade-offs are related, I guess the argument is easier to understand and earns more points in the political conversation, because it doesn't require people to hold multiple things in mind.

"Instead of funding the police to fight crime we should fund more social guarantees to deinsentivise crime from happening" - maybe the best course of action is to have both and get money from some other place, but it will require to talk about this other entity, making the idea more complicated.

But as was mentioned in the post, this logic makes little sense when we are talking about worrying. Which is interesting, because it seems that there is a way our worries can be converted into financial incentives for the government or into our own donations to causes we consider important. Hmm.

Maybe it's a bug in our psyche? Maybe it is reasonable to adjust our worries about somewhat unrelated causes x1,...,xn by p1,...,pn percent in order to better correspond to reality? But that's a complex idea we can't actually compute, so we end up with a vague feeling that some person worries about xi more than us while worrying about a related xj less than us. And it just feels wrong, and we end up writing articles similar to Acemoglu's.

Expand full comment

"If Elon Musk stopped donating to long-term AI research, he would probably spend the money on building a giant tunnel through the moon, or breeding half-Shiba-Inu mutant cybernetic warriors, or whatever else Elon Musk does. Neither one is going to give the money to near-term AI projects. Why would they?"

AND THAT IS EXACTLY THE PROBLEM. "Great oaks from little acorns grow", it's TODAY'S small problems that will develop into big ones if we don't fix them.

Musk and others *like* the Long-Term AI Research because it's the sexy, flying-cars, SF version of AI Risk - the rogue super-computer that has achieved human levels of sentience, god-levels of intelligence, and super-villain levels of motivation, and is now in a position to take over the world, bwahahahaha!!!!

That's cool, that's fun, and that's so far off (if indeed it's even plausible) that it's not an actual threat. The small-scale stuff that is a problem, like "the bank messed up my account due to their new IT system and now I can't get at my money or pay my bills or have my wages paid in and nobody can fix it because I get put on an automated phone menu or on hold for two hours and can't reach a human being, and if I do they put me straight back in the phone queue" - that's not cool or sexy or fun, so they ignore it.

But *that's* the kind of AI tech interfacing with the human world that is affecting a lot of people and causing a lot of misery.

The problem is not the tech. The problem is people. And yes, I'd tie all the long-term alarmists together and dump them on a desert island, so maybe we could have a sensible discussion of "hang on, why are you rushing to automate everything? shall we maybe have a look at the profit motive and how putting that first is going to make things worse for everyone, as you rush to create AI that will fatten the bank balances?"

Expand full comment

Inadequacy of AI alignment theory is exactly that - today's small problem which can become a HUGE issue if we don't deal with it now. You just do not interact with it often, so it doesn't bother you in particular. And that's a meta problem. We tend to ignore the problems that we do not notice.

One way to fix a meta problem is to try to spread awareness of the huge consequences of ignoring the small problem. This allows reframing the problem as sexy, so that people like Musk become interested in it. It's a good thing.

Obviously it would be much better if all of us were just much more rational, so that we were motivated to deal with all the problems of humankind without such hacks. Too bad that's not the case.

Expand full comment

Yeah, but what I see - and it may just be that I'm reading what gets into the popular media, not deep-dive technical stuff - is that the interest is all about "Okay, so by the time we have Colossus wanting to rule the world, what can we do?" and not "Okay, so the algorithm the supermarket delivery system is using is wonky and sending seventy vans to Mrs. Murphy's address" because the former is sexy - Elon in his superhero suit fighting the rogue AI! - and the latter isn't, but we need to fix the latter before we even touch the former.

Expand full comment

Technically, it's Colossus/Guardian. Not just Colossus. Get it right, Deiseach ! :-)

Expand full comment

Obviously you hear more about sexy and awesome problems in the media, as they generate more clicks. No surprise here. But do you really think that more resources are poured into AI alignment than into generic ML problems? Elon Musk isn't interested in raising awareness of bad delivery and customer support services, but neither does he have to be, because everyone already knows that it's a problem! And we have lots of economic incentive to fix it.

Also, I think you are mistaken to think that we need to fix the latter before we even touch the former. There is no reason why we can't work simultaneously on both issues. Actually, there is a very good reason why we should do AI alignment right now: all the mundane AI problems are fixable by even smarter AI, unlike AI alignment, which will be too late to figure out once the AIs are smart enough.

Expand full comment

Is this post directly about Acemoglu and AI, or is it more like one of those overcomingbias/lesswrong-crowd exercises where we must sharpen up our mistakes?

Expand full comment

Isn't 2-6 just effective altruism?

Expand full comment

To some extent! For instance, I worry a tiny bit about nuclear war, and not at all about random dust-ups between child soldiers with machetes in the third world, because a nuclear war would affect me personally while third world massacres don't affect my life, or the life of anyone I know, in the slightest. I feel like (3) is just trying to persuade me that I should take other faraway people's interests as seriously as my own.

Expand full comment

The obvious difference between Scott's examples and the long vs. short AI comparison is that all of the harms in his examples are things that are either happening right now, or are plausible near-term extrapolations based on ample exemplars in the recent past. They are amenable to analysis using hard, real-world data.

The topic of long-term AI risk is just not like this. We routinely dunk on businesspeople and politicians who naively project _linear_ relationships out to infinity, while the AI risk folks do the same thing with _exponential_ curves!

Does it make sense to think about long term AI risk? Sure! Even spending some time coming up with a philosophical framework for AI alignment seems reasonable, at least for folks for whom that bee buzzes particularly loudly in their bonnets. But spending millions of dollars on long term AI risk projects seems utterly futile, given that we have no idea what a generally intelligent computer program might look like or what its inherent limitations or constraints might be. We're extrapolating from our extremely incomplete knowledge of the one generally intelligent agent we know of -- ourselves.

Long-term AI risk scenarios to me are like cavemen worried about "giant rock-throwing risk". Cavemen who've seen nothing but other cavemen throwing rocks might imagine giants who could throw really big, house-crushing rocks, and decide that the best way to prevent that would be to stop tall people from breeding. Over time, with more knowledge of mechanics, they could extrapolate to the idea of catapults, which look and are constructed nothing like a giant person, but can throw giant rocks anyway. They might sagely decide to limit construction of catapults beyond a certain brobdingnagian size to avoid building one that could destroy the whole world. But no amount of effort they put in at the caveman technical level would accurately predict a nuclear missile or a gravity-sling, nuclear-rocket powered interplanetary kinetic kill weapon, let alone come up with some sort of specific technical or social protocols to prevent it from being a risk to them.

Expand full comment

"We're extrapolating from our extremely incomplete knowledge of the one generally intelligent agent we know of -- ourselves."

I don't think that is true. We are really extrapolating from the current best AI systems, which are reinforcement learning agents, and it is reasonably easy to analyse all the problems that sufficiently effective reinforcement learning agents can cause.

I recommend these videos for detailed discussion of the above: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg

Expand full comment

Honestly, none of your examples are of an outgroup arguing that we need to avoid something that the mainstream media culture considers a low-probability contingency.

You absolutely do see this in other areas. I've even been guilty of it:

"Republicans endorse originalism out of worry about some future tyrannical government, but don't worry about Donald Trump's authoritarian leanings."

"People care deeply about gun rights because they don't want to be stripped of the ability to protect themselves from a hypothetical authority figure, but they don't care about police brutality."

"We fear a Chinese takeover of Taiwan, but not the rollback of voting rights in our own nation."

The requirements for this argument to be made are:

1) A statement in the form "We worry about A but not B" where "A" and "B" are in the same domain (and you can get very creative with conflating domains).

2) "A" must be a hypothetical that takes place in the relatively distant future and is not a certainty.

3) "A" must be a concern that is endorsed by a sizeable minority culture but not the metropolitan elite journalism culture.

4) "B" must be a concern that is endorsed by the metropolitan elite journalism culture.

I actually think you're being too kind to the article. It's possible that the individual who wrote it (who seems to be brilliant within his field, though I admit I'm not familiar with his work) was just borrowing the template.

But the template itself is: "This other culture is infringing upon our monopoly on truth and must be destroyed."

Expand full comment

"Instead of worrying about nuclear war, we should worry about the smaller conflicts going on today, like the deadly civil war in Ethiopia."

I realize you mean this as a straw man argument, but I honestly think it makes sense. Ever since nuclear weapons were invented, we've obsessed about nuclear war, but they haven't been used except in the initial case in Japan, and they are unlikely to be used in the future because any country using them knows it will be retaliated against in kind. Even the cliched "insane" governments like that of North Korea aren't that insane. But we kind of accept conventional warfare as just a normal thing, despite millions and millions killed.

Expand full comment

*Fair warning: I did not read this whole post yet (though I did read the previous Acemoglu one), and probably won't unless I'm convinced it's worth my time.*

I can't be the only one here who's become uninterested in discussing long-term AI concerns at this point. Which isn't to say I don't acknowledge the potential problems we're facing - it's just that I don't know what I'm supposed to do about it, and pretty much nothing I've read in the past few years since first being made aware of the issue has changed that picture or made me feel like all this discussion is worth anything. Just the same arguments being rehashed over and over again. Leave it to actual experts in the field, who do generally seem to understand the concern. At least on other topics I can form an opinion and advocate for certain political or behavioral changes. Even on topics that seem nearly impossible to move the needle on, you can get an idea of what should be done. I'm not even sure which way to try to move the needle with the long-term strong AI concerns. And the conversation doesn't seem to be progressing in any way I can tell. At one point I probably believed that just informing more people of the issue was marginally productive, but now it just feels like it's crowding out other conversations in communities like this, conversations which do evolve over time and could actually lead to political or behavioral changes.

Every single issue referenced in the 6 above examples feels more addressable by me (and I'm guessing most other readers here) than the long-term strong AI issue, which makes Acemoglu's steelmanned argument unique among them. I wonder if Acemoglu feels similarly and is just trying to steer the general public into more worthwhile areas of discussion. I can respect that, even if I find his rhetorical strategy unappealing. I hope we can lay off this topic a bit until there's actually new stuff to discuss. But who knows, maybe I am the only one.

Expand full comment

The problem here is that the short term risks from current AI (unfairness) is nothing whatsoever like the long term risks from a super-intelligent maximiser (the destruction of everything). The former isn't even a lesser version of the latter.

Expand full comment

I don't think this deals with the issue fairly.

The contrasting examples were not close, fixable, and relatively small problems vs. long-term, possibly never happening, and horrendously complex ones.

That's a relatively usual prioritisation that people face all the time: setting aside enough for three months' mortgage payments vs. paying into a pension; a vocational qualification vs. a Master's degree.

Using economic growth to make warming manageable vs. eliminating fossil fuels.

And on the (fairly wide) margins there are trade offs of attention and government research money.

Expand full comment

I'll tell you what I *would* like a solution to; I recently got an email from The Metropolitan Museum of Art (you buy *one* catalogue raisonné and you're on a mailing list forever) which was trying to entice me to buy some of their tat (as an aside, the stuff they have for sale isn't that great in my estimation in quality I'd expect, but that's not the problem).

It was for "holiday decorations". That's right - Christmas in July.

This is *bonkers*. We're not even into autumn yet! And I know all the shops do it, and I know about lead-in times and ramping up production to get the products out on shelves, but it's *nuts*.

You have the Hallowe'en and Christmas goodies jostling for space on the shelves, then the 'January' sales happen on St. Stephen's Day, and the Easter chocolate is now budging up against the unsold Christmas goodies.

By the time the real day - be it Hallowe'en or Christmas or Easter - comes along, you're sick to death of it and it's not special because you've been seeing it for three months already.

If God-Emperor AI makes it so that Christmas only starts in December and Hallowe'en in the third week of October, I will gladly place my neck beneath its silicon jackboot.

Expand full comment

The problem here is that you believe that the AI Singularity risk is very real, and in fact imminent. People like Acemoglu -- and, obviously, myself -- believe that it is science fiction on par with alien invasions. I can't speak for Acemoglu, but personally, when I hear impassioned pleas to invest more money/effort/publicity into preventing the Singularity, I'm not just worried about the waste of resources (though that is a concern). Rather, I worry about the public perception of the entire field shifting towards "I'm not saying it's aliens... but it's aliens (*)".

Normally I wouldn't care, but the truth is that AI presents some very real dangers, today, right now. If half the people working on it walk off into contemplative Singularity think-tanks, and the other half shrug and ignore the whole field because they're not interested in discussing far-out kooky ideas, then the very real dangers of AI will never get addressed. This scenario would be... suboptimal.

(*) https://i.pinimg.com/originals/f3/cf/65/f3cf652b459d4e68d722526138955856.jpg

Expand full comment

Another type of weird concern pair are those which go in opposite directions:

"Instead of worrying about police brutality, we should be worrying about the underfunding of police pensions."

Expand full comment

Not necessarily opposed. Police officers might be a bit more chill if they were confident their pensions were properly funded.

Expand full comment

>> Argument (1) must ring false because worrying about police brutality and police evidence fabrication are so similar that they complement each other. If your goal is to fight a certain type of police abuse, then discussing unrelated types of police abuse at least raises awareness that police abuse exists, or helps convince people that the police are bad, or something like that. Even if for some reason you only care about police brutality and not at all about evidence fabrication, having people talk about evidence fabrication sometimes seems like a clear win for you. Maybe we would summarize this as “because both of these topics are about the police, they are natural allies and neither trades off against the other”.

Much of this is only true outside the police station.

Inside the police station those topics would have tradeoffs, and it might even be well-understood what the costs and benefits are of ways to prioritize them.

In communication between the police station and the outside communities, the outside communities will advocate for the issues most salient to their own needs, regardless of what might be in the police station's best interest or what the police station judges is in the communities' best interest, and one cannot generally be offered as a substitute for the other.

>> Does Acemoglu argue this way when he writes about economics? “Some people warn of a coming economic collapse. But the Dow dropped 0.5% yesterday, which means that bad financial things are happening now. Therefore we should stop worrying about a future collapse.” This is not how I remember Acemoglu’s papers at all! I remember them being very careful and full of sober statistical analysis. But somehow when people wade into AI, this kind of reasoning becomes absolutely state of the art.

Acemoglu writes about economics from "inside the police station". His lack of interest in a 0.5% drop stems from the standpoint of theory that puts priorities elsewhere (even if in real terms that 0.5% may represent substantial damage to a significant number of people).

He writes about AI from "outside the police station". His arguments stem from an awareness of his community being particularly affected in certain ways, regardless of theory that would put priorities elsewhere.

Expand full comment

IMO Scott nailed this in the first post; Acemoglu used AI x-risk as clickbait to then pivot to complaining about narrow AI, and this follow-up steelmanning post makes me think of the "stop stop he's already dead" meme.

Expand full comment

>“Instead of worrying about sports and fashion, we should worry about global climate change”? This doesn’t ring false as loudly as the other six. But I still hear it pretty rarely.

I hear versions of that one quite often. But, sports and fashion aren't things people *worry* about, so much as they *enjoy*. I mean, you can use the word "worry" to describe the state of mind of someone who observes that the odds of their preferred sportsball team winning the championship have decreased, but it's a very different sort of "worry" than the one with global climate change and really it's part of a generally enjoyable process or sportsball (and fashion and whatnot) wouldn't be nearly as popular.

And to take another example several people have raised already, "Instead of sending rockets full of money into space, we should worry about problems here on Earth". But, space exploration isn't so much something people worry about(*), it's something people *hope* for.

So I think there's a hierarchy at work here.

"Instead of wasting time enjoying [frivolity], we should worry about [tragedy]", is seen as the mark of a Serious Thoughtful Person putting a Silly Selfish Person in their place.

"Instead of hoping for [fantasy], we should worry about [tragedy]", is seen as the mark of a Serious Realist trying to bring a Hopeless Polyanna down to Earth.

"Instead of worrying about [tragedy], we should worry about [different tragedy]", doesn't intrinsically privilege the speaker's position and looks a lot like Concern Trolling.

So, Worry trumps Hope trumps Enjoyment. Which is kind of backwards, and now I'm worried about how common that is. Thanks a lot, Scott :-)

* Yes, there's "I'm worried we'll all be killed by a giant meteor if we don't colonize space soon enough", but that usually is, and is almost always perceived by outsiders as, "I want to explore space because it's cool and I think the big-meteor argument will persuade skeptics".

Expand full comment

0. Thank <deity> for the division of labor. I don't have to waste time worrying about things that are beyond my competence or control. I don't have to know all the details of my microchip's design, or waste time picketing a hospital with signs that say "IT'S LUPUS". I just choose the products/services/micronations that I like, based on summary statistics, and let the implementation details be somebody else's problem. But the genre of news largely consists of getting people to pointlessly worry about things that are beyond their competence and beyond their control. In a democracy, zillions of man hours could be saved by randomly sampling 1% of the population at age 18 to have the franchise for life. They'd put in the effort to become super-informed, and the rest would save a lot of time and stress.

1. One man's whataboutism is another man's demand for consistency. Sometimes "whataboutism" is used as a counterspell against any accusations of double-standards.

2. In a world where all resources are finite and all values are relative, almost anything can trade off against almost anything. But, the correct form is:

"Instead of <thing X I care less about>, we should worry more about <thing Y I care more about>" and it only works if your interlocutor already agrees with you about the relative valuations (in which case it would be unnecessary to say the above phrase). All the real work is in convincing your interlocutor why Y is more valuable than he thinks and X is less valuable than he thinks. Acemoglu didn't do any of the work towards convincing anyone why X is less valuable.

3. Straw examples just for fun:

Instead of worrying about awkward boys asking girls out imperfectly, we should worry about Yog-Sothoth devouring the galaxy.

Instead of worrying about whether Abe Lincoln's new beard looks weird, we should worry about how to avoid escalating the secession crisis in to a scorched-earth civil war.

Expand full comment

Baseless conjecture:

Human minds reduce every reported threat to stories and imagery, and unlike the possibility space pointed at by "the police does stuff", AI hypotheticals don't lend themselves well to this reduction. Like broken hash functions, minds will map AI threat news to the same emotional [[AI risk]] symbol.

Hence one cannibalizes the other; in public discourse as well as in individual minds. "No this is not what [[AI risk]] is about, it's this *other* thing!"

The town just ain't big enough for two AI risks.

Expand full comment

((...yet. Good construction work in this post))

Expand full comment

This is literally how the world works though; there is a limited amount of attention that can be divided up among all the issues, at least in public discourse. Here is how I would rank the issues in terms of which should get the most attention:

1. "the deadly civil war in Ethiopia"

There are literally people just walking around (parts of) Ethiopia killing, raping, etc. others *right now*, and on top of it there is a media blackout, so things are even worse than they appear. This has to be the top priority; we need to figure out why this is happening and how to stop it (in either order). People are deliberately killing each other at a mass scale; this is an incredibly important issue that we should be putting our top minds on. It's only because we are used to such things that we dismiss it.

2. "we should worry about racism against Uighurs in China."

I would characterise it more as ethnocide than racism to be honest. It's not even in the same ballpark as racism in the US.

3. "we should worry about all the people dying of normal diseases like pneumonia right now"

I would rate this higher than pandemic preparedness right now. The whole previous decade of worrying about pandemics didn't help with this one at all (outside a few East Asian countries), but how many know about the resurgences of diseases like RSV that are sweeping countries due to the lack of contact from lockdowns last year?

4. "worrying about Republican obstructionism in Congress"

I would more broadly classify this as "how are Republicans still getting elected despite being obstructionist?" And that pretty quickly leads you to "why is US democracy so dysfunctional compared to other countries?" Arguably the #1 issue since if the US was optimised it could solve all the other problems easily.

5. "worrying about “pandemic preparedness” "

Yeah so the previous worrying didn't help much and almost everyone understands the basics of what needs to be done by now, but still, probably worth worrying about this one some more at the moment.

6 & 7. "Instead of worrying about police brutality, we should worry about the police faking evidence to convict innocent people."

I find these pretty equivalent really, probably worry about brutality more though.

8. "worrying about racism against blacks in the US"

Arguably could be higher since similar issues drive this as are driving the Tigray War.

9. "worrying about nuclear war"

This is always overstated. Any time there has been trouble, the highest-ranked person in the room closest to the action has always prevented it from happening. And we are in a much more benign nuclear environment than during the Cold War. Still worth some worry though.

10. "worrying about pharmaceutical companies getting rich by lobbying the government to ignore their abuses, we should worry about fossil fuel companies getting rich by lobbying the government to ignore their abuses."

This is just the same issue so I'll treat it as one. Arguably it could be higher because it relates to the dysfunction of government, which is a critical issue; as an effect, though (companies getting unfairly richer), it is only minor compared to these other issues.

This ranking is, admittedly, fairly debatable, but the point is that there really is limited attention and issues really *should be* prioritised.

Expand full comment

Didn't you write a whole article about how people's internet focus issues are in competition? Like concerns over feminism being replaced by concerns over transphobia and racism? Isn't the whole 'stop taking up space, white woman' thing this phenomenon?

Expand full comment

Another way to look at it is how much people would natively care about issues in that field. For example, if there is some problem with law enforcement whether it be corruption or wanton use of deadly force, most people would be pretty worried about it. Similarly in (5), many people would be pretty angry about lobbying (even if it's a relatively small issue) because it ties into an "evil other human" narrative that our brains easily go toward.

On the other hand, very few people think about AI, so it's plausible that worrying about one AI risk may take the onus off another AI risk. I'm not saying that that's the case - certainly we'd need a lot more evidence before we can even plausibly claim this as a hypothesis worth considering - but it's one way of looking at the situation.

Expand full comment

Am I the only one who read the argument as equivalent to the "abortion vs. neonatal" one, in the sense of trying to cast one thing as "obviously" more concerning than the other, and to then imply "therefore, if you truly, honestly care about this, then this other thing should concern you far more, and you're a hypocrite for spending such resources on battling a minor issue in the face of it"? At least it felt to me like the whole thing about "AI risks that definitely matter right now" vs. "AI risks that might matter in the future" was all about establishing the former as "obviously" more concerning by virtue of its immediacy and certainty.

Expand full comment

"breeding half-Shiba-Inu mutant cybernetic warriors"

The notion that Elon Musk's new line of genetically engineered Shiba-girls will have kinetic military applications is absurd. And putting cybernetics in them would be downright counterproductive.

Expand full comment

The Future of Humanity Institute does the same thing in the opposite direction, claiming that what they perceive as existential risks are orders of magnitude more important than anything else.

Expand full comment

I think a reasonable, if maybe unproductively cynical, model is that the resource conflict actually taking place here has very little to do with money, and a lot to do with "attention paid to my pet issue in a given social media community."

Even though they may not share funding, the sort of people who talk about catastrophic AI risk and the sort of people who talk about AI unemployment have a lot of overlap. When people complain about AI risk being a distraction, I expect the *actual* frustration is about people who normally talk about their pet issue sometimes talking about a different-but-related thing instead, which is viscerally frustrating.

Everything else is just an attempt to wrap some mildly tortured logic around a desire to get back to talking about the original pet topic.

Expand full comment

I feel like the obvious steelman of Acemoglu's position is that "AI Threat" in the mind of the average dumbass means a) killer robots and b) something that won't happen. Thus making it clear that there are ACTUAL EXISTING AI that are threats RIGHT NOW is a useful and important political project, even if you are more concerned about MIRI type threats than Facebook algorithms. Most people literally can't comprehend the former, but educating them on the latter might help against *both* issues by providing an onroad.

Even if one doesn't agree, it seems to at least make more sense than other interpretations.

Expand full comment

> in the real world, vineyards and space programs aren’t competitors in any meaningful sense

I think your point about them competing for land goes against your conclusion here.

Let's say there are two types of land, Green and Brown, such that you can use green land for vineyards and brown land for space rocket stuff. They obviously don't compete for land.

Except if you can put tennis courts on both green and brown land: then using more land for vineyards means more tennis courts will have to go on brown land, leaving less land for space rocket stuff.

This generalizes to n ways of using n-1 (or fewer) types of land, provided there are enough overlaps. Usage A and B compete directly if there's at least one type of land suitable for both; and A and C compete indirectly if they both compete with B, either directly or indirectly. Then everything competes with everything else, either directly or indirectly, provided there's a chain of direct competition from each thing to each other thing, i.e. if the graph of direct competition is connected.

As a consequence of indirect competition, one expects that using more green land for vineyards will drive up the price for both tennis courts and space rocket stuff (and drive down the amount of land used for both).
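Here's a minimal sketch of that connectivity argument in code (the land types and the use-suitability table are invented purely for illustration):

```python
# Uses compete directly if they share a suitable land type; indirect
# competition is just reachability in the resulting graph.
from itertools import combinations

suitable = {
    "vineyard":     {"green"},
    "tennis_court": {"green", "brown"},
    "launch_pad":   {"brown"},
}

# Direct competition: two uses share at least one suitable land type.
edges = {
    (a, b)
    for a, b in combinations(suitable, 2)
    if suitable[a] & suitable[b]
}

# Indirect competition: breadth-first reachability over direct-competition edges.
def competes(a: str, b: str) -> bool:
    frontier, seen = {a}, {a}
    while frontier:
        nxt = {
            v for u in frontier for v in suitable
            if v not in seen and ((u, v) in edges or (v, u) in edges)
        }
        seen |= nxt
        frontier = nxt
    return b in seen

print(competes("vineyard", "launch_pad"))  # True: linked via tennis courts
```

With only three uses the answer is obvious by inspection; the point is just that the same reachability check covers the n-uses, many-land-types case, where everything competes with everything as long as the direct-competition graph is connected.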

[Concrete examples are not actually of concrete, and are thus suspicious.]

Expand full comment