They really need to get cracking on that: Deus Ex: Human Revolution established that Final Fantasy XXVII will exist (at least in poster form) by 2027, and here we are in 2022 and still at XV...
I mean, we know that the Flynn effect is real, and there are various theories about what it is. If it's childhood malnutrition, which I believe is the leading hypothesis, then we would expect any gains from that to eventually plateau as everyone gets fed.
I thought the leading hypothesis was cultural: being raised in a society with a higher level of abstraction in everyday life (computers, etc.). If it were malnutrition, you would think it would not be happening in places like the U.S. (where it stopped only fairly recently) and that it would be g-loaded.
Logically, these can't all be true: a) IQ is highly heritable, b) low-IQ people have always had the most children, and c) there has been no decrease in IQ.
Even if your hypothesis is correct, in the past low education probably did not correlate very well with low IQ
Probably didn't correlate well with living in cities, either -- a lot of aristocrats were well-educated for their society, and spent most of their time living on their country estates.
In the past the rich had a better chance of surviving relative to the poor. Enterprising poor people who climbed the ladder either had more children themselves, or their children had more children (those who survived to adulthood, anyway).
Cities are population sinks but probably not for all classes equally.
The argument that democracy unduly favors the old has been made many times over, and theoretically makes sense. However, if you look at what is actually happening to public pensions in most European countries, you find that they are being cut almost everywhere, either in the open or "on the sly", while subsidized kindergartens, after-school day care, and paid parental leave are being expanded.
My take is that your implicit model of voter preferences is overly simplistic. In practice, politicians find ways to cut benefits for the old and direct money to young families instead, and manage to get consensus (or at least not active opposition) to do that.
Granted, it takes political skill to carry this through. But Macron sort-of won the election, after all.
Request fewer comments like this, it seems to just be creating social pressure to make it hard for people to talk about things that are concerning them, without really filling in any step of the argument.
I do not think a proper and informed knowledge of intellectual history is out of place in rational argument about the state of the world and the choices in front of us. I think @DannyK was well within bounds to bring up Grant's book. He was not using it to suggest that people advancing arguments about population growth and control are evil, but to make the corollary point that evil men have used the issue to justify their evil deeds.
The Grant book was very important in the interwar years. Fitzgerald put a reference to it in The Great Gatsby, but in the mouth of the buffoonish antagonist Tom Buchanan (https://en.wikipedia.org/wiki/The_Passing_of_the_Great_Race#Reception_and_influence), leading us to believe that Fitzgerald was not sympathetic to the argument and expected his readers to see it as characteristic of boorish people.
I think it is important to understand the place of Malthusian thinking in the development of social science and literature in 20th-century America. See "Illiberal Reformers: Race, Eugenics, and American Economics in the Progressive Era" by Thomas C. Leonard (2016).
I made several arguments against your piece about Black Lives Matter, but they were among the ones you had no space to answer.
1) I linked to several pieces in the NYT that said the same thing without making it sound like a racial dog whistle.
2) You would not acknowledge that, during the lockdown, the fact that no one had a job to go to or classes to attend greatly increased the pool of demonstrators.
3) These massive crowds immediately put the police on the back foot, and it became olly olly oxen free for anyone, black or white, to act on whatever antisocial impulse popped into their head. The destruction in large part had nothing to do with BLM.
4) There are video-documented instances of a local white biker striking yet another match by smashing storefront windows with a 4-pound hammer. I have one just like it for situations that call for 'a bigger hammer'. He has been identified and there is a warrant for his arrest.
5) Putting your thumb on the scale for right-wing media because they beat the 'BLM protests were awful' drum hard and often (the destruction was in fact awful; black people are not) is fucked up, because their business model is to gin up hatred and white outrage. If you think that is a good thing, I don't know what to tell you.
6) You gave short shrift to the emotional gut punch of the video of George Floyd’s death. You start your analysis with the destruction that followed.
7) What’s with the graphs of violence in countries without our complicated racial history?
I’ve held off on saying anything about this in case that article was an anomaly. But now this. I’m not calling you a racist. I am saying actual racists do love stuff like this. Pointing that fact out should not be a problem.
No I don’t think Scott is a racist. I do think he might enjoy dunking on the ‘bad people’ at the NYT.
Edit: I had actually clicked unsubscribe from ACX on the Substack gizmo before I saw Scott’s request to not point out the obvious. I got an email from you and Scott’s latest email too so apparently I’m doing something wrong.
1. Why should we go out of our way to avoid making something "sound like a racial dog whistle?" A racial dog whistle is itself something that "sounds like" it is alluding to something actually racist, so you're claiming it's very important for us to avoid... sounding like someone who sounds like they might be alluding to racism? At that point, is two degrees of separation even enough? Shouldn't we also avoid sounding like someone who sounds like someone who sounds like someone who is racist?
3. Would you apply the same principle to the January 6 trespassers?
4. There are also video-documented instances of black rioters shooting and killing white people and police officers.
5. What alternative do you propose? Ignore the truth when it's inconvenient for the left, to avoid giving points to the right (even when the right is correct)?
6. The video really wasn't much of an emotional gut punch. If you watched the full footage and looked closely, Floyd said "I can't breathe" while he was still standing up, so it's clear from the beginning that his difficulty breathing is drug-related. Then, Chauvin puts his knee on Floyd's shoulder blade, and nothing much happens after that.
7. Why would that be of interest? Sure, if you were having a discussion about sociology, it would be relevant to bring up racial history, but not every discussion is a sociology discussion.
Some cultures, societies, and civilizations are able to pull off incredible cultural continuity and preservation across generations, though. But I agree with you: culture tends to be modified and reinterpreted in different contexts, so this is probably not a great argument ("Japan's culture will be different with 1/3 non-native Japanese" -- it will, and it will also be different even with 100% Japanese a century or two from now, just like you said) and is probably an indirect cover for the more primal motive of simply wanting your group to survive. Maybe some other reasons too, but I don't want to keep writing.
My wife now isn't interchangeable with my wife ten years from now, but I neither want to force my wife to never change, nor would I be happy with replacing my wife with some woman who's a bit different from her.
I agree this is an unsatisfying response but I think ethical intuitions will always be unsatisfying.
But even if you did that, you'd still need it to be economically viable and scalable, and you'd still need to raise all those kids somehow until they're productive, educated adults. All of this is very expensive, and it takes a long time before they can contribute anything, if they ever do.
Now, I want to make a prediction here which might be wrong, but whatever: even if you could hypothetically mass-produce babies on demand and engineer them to be very smart, beautiful, etc., it still largely wouldn't lead to many magical new breakthroughs in different fields. What would happen instead is the increasing perfection and sophistication of things we already possess, with some level of technological development and applied science. Other than that, society would ossify, fossilize, and harden, and just continue to live on as an animated corpse: trying to go to yet another planet, build yet another city, develop another app, etc. After the initial wave of results from the new tech settles down, things get quite boring. Maybe if they made babies literally live inside VR, in other specific settings and societies, something else would happen, though the most likely outcome there is civilizational disintegration. Idk...
You may be interested in the concept of "regression to the mean"--I'm sure Scott has written about this before but I'm too lazy to find links. Basically, IQ *is* heritable but there are also random other factors, so it's quite unlikely that the number 1 smartest Gen Z kid is the offspring of the number 1 smartest millennial (or number 1 smartest couple, I guess). (But the number 1 smartest Gen Z kid probably *is* born to some top-10% parents.)
We believe it because we base our views on actual heritability studies, not some indirect inference based on faulty premises. You not believing in the heritability of IQ is a product of you not knowing about/understanding the intelligence research literature, not a failure in the reasoning of the people you disagree with. Sorry if that sounds strong, but you didn't pose this as a question; you made a statement implying people who disagree with you are being particularly foolish.
There's no reason to think leaders are or necessarily would be elected on solely the basis of their intelligence, so there's no reason to imagine Einstein or someone like him would stand an especially good chance at being elected.
And heritability doesn't mean "the same as your parents". It means what proportion of the observed variation in a trait is a result of the observed genetic variation in the population being looked at. Being a child genius seems to be almost entirely heritable, but not many child geniuses are born to former child geniuses.
1. We're not even ruled by the most intelligent people this generation, unless you think Joe Biden is the smartest person in the US. Why should this happen transgenerationally?
2. Chance and regression to the mean ensure that the single smartest person next generation probably won't be the kid of the single smartest person this generation. While the children of smart people are on average smarter than the children of dumb people, there's lots of noise, and the noise is most apparent at the very top and bottom.
3. This might be easier to understand if you looked at some trait that you knew was passed down parent to children. For example, the children of rich people are on average richer than the children of poor people (you don't have to believe this is genetic for it to work). But the richest people this generation are Elon Musk and Bill Gates, who came from mildly rich but not ultra-rich families. This doesn't disprove that parents can give wealth to their children, it just proves the process is noisy.
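Point 2 above is easy to see in a toy simulation. A minimal sketch (my own illustrative parameters, not anything from the thread): model each child's IQ as pulled partway back toward the population mean plus independent noise, then check whether the smartest child belongs to the smartest parent.

```python
import random

# Toy model with made-up numbers: a child's expected IQ regresses halfway
# toward the population mean of 100, plus independent noise.
random.seed(0)

def child_iq(parent_iq, heritability=0.5, noise_sd=12):
    return 100 + heritability * (parent_iq - 100) + random.gauss(0, noise_sd)

pairs = [(p, child_iq(p)) for p in (random.gauss(100, 15) for _ in range(100_000))]

smartest_parent_pair = max(pairs, key=lambda pc: pc[0])
smartest_child_pair = max(pairs, key=lambda pc: pc[1])
print(smartest_parent_pair == smartest_child_pair)  # almost always False
print(smartest_child_pair[0])  # yet the smartest child's parent is well above 100
```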
No idea...I'm a Democrat. I think Biden is reasonably adequate and has passed some good bills lately. My comment was entirely tongue in cheek. Donnie T. is sort of a mad genius, though, except in a clownlike and spectacularly incompetent (tripping-over-his-own-metaphorical-feet kind of) way.
I am seeing that scenario in Italy and it's not nice. Like, not nice at all. Young people are full of hate; all the familist culture that was typical of our country is gone. The Italian equivalent of "ok boomer" is something roughly translatable as "ok, old piece of shit", and we started saying it about 10 years before "ok boomer" was coined.
And old people really, really don't get why. After all, what's so wrong? Sure, they might have voted themselves some unsustainable benefits that weigh so heavily on the treasury that the rest of the population gets Swedish taxation and Bulgarian services, but from their point of view their only fault was optimism. Sure, they keep voting themselves even more benefits, but doesn't everybody vote with their interests in mind? And is it their fault that they outnumber everybody else?
As you said, it is taking on the connotations of a class war. Old political rivalries are blurring: as long as you are young, there is some reason to hate your elders no matter your politics. Libertarianish? See above, plus enough regulatory capture to write a horror book for economists, never to be touched because hey, old people might be upset if something changes.
Leftist? Hey, how do you feel about spending your 20s working for free so that some octogenarian owner can afford a better suite in Sardinia, and who will maybe start paying you a pittance once you are old and wise enough?
Progressive? Hey, you know how your boss considers sexual harassment a form of team building? Well, EVERY boss is like that, because none is younger than 60!
Seriously, I have seen political polarization in my generation go down a lot lately. Mostly because for every "kill landlords" or "offer communists a helicopter ride" post that disappeared, two appeared about the glorious tradition of euthanizing old people whether they want it or not.
> That said, regarding #6, I was recently startled by the release of the newest census results, which revealed that Canada (where I live) is becoming a country of olds with shocking rapidity.
My advice to anyone concerned about the age structure is to take a moment to consider the implications of the Demographic Transition model more fully.
In going from a high-fertility/high-mortality regime to a low-fertility/low-mortality one, you are necessarily going to have a number of generations whose size will exceed those that come after, because they were born during the high-fertility/low-mortality phase.
However, the same issue of concern - these people will not be replaced when they leave their productive period - also necessarily implies that these people will not be replaced when the time comes for the subsequent generation to retire.
In short, once the boom generations complete their journey up the population pyramid, these age imbalances may cease to be an issue.
My worthless prediction for future demographic trends is that populations will trend towards some sort of stability in the long run, quite likely at lower levels than we have today. This will probably be a Good Thing (there's a sweet spot where you have just enough people to get things done, but not so many that resource constraints start to kick in).
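To make the "temporary bulge" point above concrete, here is a minimal cohort sketch with made-up numbers (fixed lifespan, a single fertility drop); it illustrates the mechanism, not a forecast.

```python
# Crude simplifications: everyone lives exactly 80 years, works from 20 to 64.
LIFESPAN = 80
WORK_START, WORK_END = 20, 65

# 100 births/year in the high-fertility era, dropping to 70 at year 60.
births = [100 if y < 60 else 70 for y in range(200)]

for year in (80, 100, 120, 140, 199):
    # (age, cohort size) for every cohort alive in this year
    alive = [(year - y, births[y]) for y in range(max(0, year - LIFESPAN + 1), year + 1)]
    workers = sum(n for age, n in alive if WORK_START <= age < WORK_END)
    retired = sum(n for age, n in alive if age >= WORK_END)
    print(f"year {year}: retired/workers = {retired / workers:.2f}")

# The ratio climbs from ~0.34 to ~0.46 while the boom cohorts move through
# retirement, then settles back to ~0.33 once the age structure stabilizes.
```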
Not if they can just kill that kid afterward (and are you going to monitor them?), shifting resources away from kids they would actually support. A more incentive-compatible total utilitarian approach to boosting fertility is here:
I reject this whole line of thinking in order to avoid https://en.wikipedia.org/wiki/Mere_addition_paradox . I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox. I'll just nod my head and say "Yes, I guess that sized civilization MIGHT be nice."
1) Thinking that welfare is the only moral consideration is not an alternative to using intuitions to guide your theory. Utilitarianism (a first-order moral view) is not an alternative to intuitionism (a view about moral epistemology or metaethics). Some people accept welfarist views precisely because they think such views explain a wide range of moral intuitions while preserving theoretical virtues like simplicity.
2) There are lots of possible views which take all moral considerations to boil down to welfare which are not totalist utilitarianism, and I'm not just talking about averagist views. Person-affecting views can be welfarist as well, and these views typically do not entail the repugnant conclusion.
I linked to Mike Huemer above. He's an ethical intuitionist who presents arguments that "Unrepugnant Intuitions" on that question are less reliable than the intuitions leading toward the Repugnant Conclusion.
You've been asking a lot of versions of the question "if you're a (classical, totalist) utilitarian, why not accept its implications?" But of course, it's trivial that we shouldn't accept (classical, totalist) utilitarianism if we reject its implications. You're talking to people who don't accept that kind of utilitarianism. Maybe what you really want to ask is why someone who is a consequentialist isn't a totalist utilitarian, or why someone who thinks welfare is very morally important isn't a totalist utilitarian. And the answer is that there are reasons people have for being consequentialists, or for thinking that welfare is very morally important, which do not commit one to totalist utilitarianism (and sometimes which preclude it).
For those like me who found that the link to Mike Huemer's response (referenced in that article) results in a warning about an unsafe connection, another copy is here:
I think that for a new and interesting civilization to begin, you need about 1 million people at minimum, tied to one particular location/region/terrain/etc., and they also initially need to be engaged in agriculture for the most part (in order to develop their own conceptualization and sense of time).
Depends on how you aggregate welfare. I think what we're aiming for with consequentialism is making people better off, which is different from creating people just so they can have welfare.
If I had never been born, my welfare wouldn't be zero, it just wouldn't be part of the calculus for that world.
One reason not to maximize total welfare is that it's not a good specification of what we're trying to do in maximizing welfare (in a more general sense).
Say we compare two worlds: World A has 100k people living in extreme bliss, and World B has 100m people living generally okay lives. Which world seems to be higher-welfare in the sense that we care about? To me, A seems clearly better. That suggests that total utility is the wrong measure, unless we have some other reason to prefer it.
But my higher-level comment is in favor of the person-affecting view, which is an alternative aggregation that rejects both total and average utilitarianism.
I assume you're asking because I said that World A seems better to me. Sorry for being unclear - I mean that it seems better in the sense that choosing it over World B is in line with welfare maximization as we think of it. It's a separate question whether that kind of welfare maximization is what we should be doing, morally.
I think utilitarianism draws much of its persuasive power by appealing to some intuitions about welfare - and how to aggregate that welfare is also part of those intuitions. Now, a form of utilitarianism that contradicts those intuitions could still be true, but if it is, it can't use them to support itself. If we accept generic unspecified utilitarianism because it follows from our intuitions, we should reject total utilitarianism for the same reason.
(This is all separate from whether we should accept anything based on our moral intuitions - and I don't think we should.)
The idea of summing _or_ averaging welfare is meaningless, though, without a way of assigning numerical values to welfare. And the problem isn't that you can't come up with a scale; the problem is that whatever scale you come up with is arbitrary made-up nonsense.
Here's Alice and Bob. Let's suppose we can clearly see that Alice is much happier than Bob. Shall we say that Alice is a 9 and Bob is a 3? Or should we say that Alice is a 1000 and Bob is a 10? Or maybe Alice is a 5 and Bob is a -4? Or Alice is a 7.8 and Bob is a 7.5? The choice is arbitrary.
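One way to make the arbitrariness concrete (my own toy numbers): a constant shift of everyone's welfare score looks like a harmless relabeling of the scale, yet it flips which world "total welfare" prefers whenever the worlds differ in population size.

```python
world_a = [9]        # one person at "high" welfare
world_b = [3, 3]     # two people at "medium" welfare

for shift in (0, 5):
    total_a = sum(w + shift for w in world_a)
    total_b = sum(w + shift for w in world_b)
    print(f"shift={shift}: total(A)={total_a}, total(B)={total_b}")

# shift=0: total(A)=9  > total(B)=6  -> A wins
# shift=5: total(A)=14 < total(B)=16 -> B wins
# Averages, by contrast, keep the same ordering under any constant shift.
```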
>>Low fertility must be regarded as one of the worst things in the world at present if we regard future possible people as having equal moral worth. If all that matters is the consequence, lowering fertility should be treated like a massive wave of tens of millions of infant deaths. This should be extremely concerning.
How are you defining the "utility" or "welfare" that is sought to be maximized under this proposed framework?
It seems to me that you can't apples-to-apples infant deaths with infant non-births. While in a pure population sense the total number of humans may be unaffected by the distinction, infant deaths have large impacts in terms of personal suffering and anguish, both the pain experienced by the dying infant and the pain experienced by the community (mom/dad/grandparents/siblings/family/friends) affected by the loss. Non-births may *sometimes* involve similar impacts (encephalopathy or some other birth defect, for example, that ends a wanted pregnancy in the womb), but in the global fertility rate context most non-births are just the result of more use of condoms or other birth control.
For example, I could have produced about one human per year over the last six years with my current partner, and we have produced zero. That's six humans not born, with no impact on our happiness, and nothing at all on the happiness/unhappiness meter compared to what we and our extended friends and family would have experienced if we'd had a kid who died during the same period. Let alone six.
That would seem to fly in the face of the interchangeability of infant death and non-birth, but it's hard to say for certain since your argument starts from a consequentialist perspective but I'm not sure of the terms in which the consequences are being evaluated. How do you define the "utility," "welfare," or what have you that is the target for maximization here?
I think that it is very unlikely that the majority of people would choose (let alone afford) to reproduce in this fashion in the near future even if it (i.e. iterative embryo selection) were technically feasible.
Why do you say that? C-sections used to be only a few % at one point, but now something like half of all births in some places are through C-section. IVF used to be only a few %, but now in Denmark it accounts for something like 20% of all births. Why wouldn't this technology also be like that (maybe rich people will keep it for themselves?)? Also, I don't know if the 250 IQ thing is an exaggeration or not, but to put that into perspective, an IQ of around 205 is about 1 in a trillion, and you also wouldn't be selecting solely for IQ but for other traits like conscientiousness. Moreover, there's probably a reason why we don't all have an IQ of 200 and aren't drop-dead gorgeous, extraverted, hardworking leader-type people (which is most likely what the majority of parents would want for their children). We are entering dangerous territory (already have in some ways)...
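For what it's worth, the 1-in-a-trillion figure roughly checks out under the standard convention of IQ as normal with mean 100 and SD 15 (a quick sanity check of my own, not something from the comment):

```python
from math import erfc, sqrt

z = (205 - 100) / 15               # 7 standard deviations above the mean
p = 0.5 * erfc(z / sqrt(2))        # upper-tail probability of a normal
print(f"P(IQ > 205) ~ {p:.2e}, i.e. about 1 in {1 / p:.1e}")
# -> ~1.3e-12, about 1 in 8e11: roughly one in a trillion
```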
Just reporting my reactions, but I find myself viscerally repulsed by the idea of embryo selection--of *intentionally creating* 'surplus' human embryos in order to destroy all but one of them. I have to assume that a significant fraction of other people share this feeling.
Note that I'm *not* viscerally morally-repulsed in the same way by early-term abortion. Nor am I repulsed by CRISPR-style genetic engineering--it's *not* a matter of disgust at it for being 'unnatural'.
Many Western European countries, while allowing early-term abortion and IVF, ban the deliberate creation of 'excess' embryos for the latter. I wonder if Denmark, where IVF has apparently achieved such widespread use, does this?
He said the near future. It's always the rich who can afford access early on. And that access may allow them to gain a power/capital foothold that will make them out of reach even if the underclass become more intelligent, at least for a while.
What's the near future? It took about one generation for Danish IVF use to reach 1 in 5 kids. And this is just for regular conception; IVF at its core doesn't necessarily confer many serious advantages, unlike genetic engineering or advanced embryo selection. Maybe I'm just trippin', since I haven't run any numbers myself, but I think I'm also noticing that the time it takes for something to make its way down to the normies/masses from obscure groups/elites is decreasing. Methinks this tech is going to be a bit different from the others. But if the elites manage to do as you said, there will be a new caste system, which is not too unusual: throughout history, most civilizations that make it to the point we have also develop a type of caste system, so we are not unique in this way, maybe only a bit faster. Ah well, we'll see what happens.
Yes, that's the problem. It will create a highly concentrated elite population.
Many of the 1% put in huge effort and expense decades in advance to maximise the possibility of their children getting into the Ivy League. It's all but certain large numbers will select for embryos that will likely be not only of elite intelligence but also better looking and healthier.
And while this happens, many people on the left will still be telling us we need e.g. more school funding to help reduce inequality, still utterly convinced that intelligence isn't meaningfully heritable.
Would this really be a problem? There seems to be an implicit "zero sum" assumption here. If a small percentage of the population became smarter and more attractive, would that really make the world net-worse?
For attractiveness, sure, I could see this making the world net-worse. Female hypergamy would continue to intensify, worsening a whole host of social problems. But if smart people got even smarter, it seems like that would greatly accelerate scientific and technological progress.
Why wouldn't embryo selection for IQ be outright banned as soon as it is viable? Seems most political movements would be strongly ideologically and memetically motivated to ban it, including leftists (favors the rich initially), greens (unnatural tech intervention in sacred human bodies) and conservatives (unnatural).
Actually, it’s already basically legal in the U. S. (see Genomic Prediction). The current limitation seems to be that we don’t know all the genes responsible for high IQ in order to be able to screen for them. But you’re right, it does raise important ethical issues with regards to fairness, justice, risk of discrimination, etc.
I think aesthetics and morality are sort of inseparable. I agree you need more assumptions than just "maximize happiness" to prove that the human race surviving is good, but I am willing to make (some of) those assumptions.
That dumb matter has created beings that can even consider this question is a miracle beyond imagining. It would be a tragedy beyond imagining if they were to die out so early in their potential lifetime, or even if they were to become moribund and muddle through as 21st-century idiots for a million years.
Call that aesthetics, if you like. If the prospect does not appall you, we have nothing to talk about.
Do you have a rational argument for your statement that "Individuals should be cared for because suffering is bad"? Or is that just a matter of taste on your part?
Perhaps you do think moral propositions can be deduced logically, in which case it would be interesting to hear why you think "suffering is bad" is one that can be logically deduced, but "humanity going extinct is bad" is not.
There's an asymmetry to your reasoning. You take "suffering is bad" as axiomatic, but it seems you don't take "joy is good" as also being axiomatic.
Beyond that, I think you have gone astray by selectively intellectualizing morality. You've just said that you don't have any logical arguments for your moral position - you just assume what you want to assume. But then you dismiss others' moral judgements by saying "I can't think of an argument" for that.
Doctor Mist's view - "That dumb matter has created beings that can even consider this question is a miracle beyond imagining" - is a sense of wonder at the goodness of life that should not be dismissed on the basis that you can't think of an argument for it. Even if we suppose that logical argumentation is relevant here, do you really think that you have thought of all valid arguments, so your not thinking of one for this view is a decisive point against it?
> What I don't see is the offering of a cogent alternative moral foundation from which one could deduce the badness of human extinction, except relativism
If you haven't already read A. J. Ayer you'll find him reaffirming.
I'm not an Ayerist, I think there are some cogent alternatives to choose from where you scale up from values that appear universal. But agree this is nontrivial work and sympathize with the appeal of Ayer and the error theorists out there.
You don't think "person alive + happy" is better than "person does not exist"? Personally I think it is good that I exist and have a happy life instead of not existing, and IMO it would also be good if future people also existed and had happy lives rather than not existing.
I'm not sure why you think my preference to go on living is important/valid, but my preference to have more humans in the future is not.
I don't think I follow this attempt to take the perspective of the world - "the world", as you say, is obviously not an entity with preferences. Shouldn't I try to implement my own preferences? (Isn't that what it means to have preferences?)
I agree. Making belonging to a nation binary is a gross oversimplification. I am myself the child of immigrants, and I've never fully identified with the nation I grew up in, and I don't expect others to fully identify me with that nation, even though I pass as belonging to that ethnic group.
Some Asian nations are like Japan, and to a lesser extent some European countries, but most states in the world are multi-ethnic. But they are multi-ethnic in a somewhat similar way to the old Austro-Hungarian Empire, rather than multi-ethnic in a present-day US or Canadian sense... Think of most Latin American countries, India and Indonesia in Asia, and definitely most African countries - relevant since Africa is the "coming continent". (The 22nd will likely be Africa's century.)
Probably agreed, but with a large uncertainty about the likely date of the switchover. _Maybe_ in part of the 21st, but AI has progressed more slowly than expected before, and might again.
Blessed if you consider GDP growth to be more important than having a unique and cohesive culture, say. But I'm sure at some point all this atomistic materialism will eventually make people happy.
You could have a Roman-style civilisation-state, where being Roman (Canadian, Japanese...) is a matter of following Roman (etc.) culture rather than being genetically descended from the founding population.
Then again, the people who believe in increased immigration are generally against making immigrants assimilate to the host country's culture, so in practical terms I'm not sure that's an option at the moment.
If you parse the corporate/quango language, then a world religion, say, is quite literally "[ethnically/geographically] diverse communities of people [trying to] achieve change we can believe in." It's not as meaningless as all that.
I do think the one exciting possibility of a globalised future - as against plenty of potential gloom - is human assortment based on shared values/ideology rather than (if you'll forgive a stray personal opinion) the dumb default of ornamental culture, ethnicity, and nationhood.
Why would you expect the shared values/ideology of a globalised future to be any less dumb or ornamental than what you call the dumb default? You do not like what people have constructed in terms of culture/ethnicity and nationhood so far. Fair enough.
But since what people have constructed so far is the thing you don't like, why would it be any better just because it's ethnically and geographically diverse?
"Ornamental culture" is the bits that are left of cultures once they're fed through the cosmopolitan meat-grinder. If you're going for nationalism vs atomic individualism, culture would be the major determinant of behaviour and attitudes, which are the bits now left of atomic individualism.
To be clear, I think atomic individualism's probably underrated, and I'd really doubt it's reversible for people who've been absorbed by it. I don't think intentional communities organised around shared interests would work though, as no-one's going to surrender their autonomy to a group that they're able to leave (which is what molecular collectivism entails, and why it would probably suck for someone who hadn't been raised in it), so it'd just be a community when it's convenient.
If you don't build housing. If the immigration system were still routing new arrivals to the wilderness to go build their own log cabins and homestead some land, this would not be an issue.
There's only so much space in a city with hundreds of thousands or millions of people, though; even if you build enough housing, it would still lead to congestion, long commutes, and unaffordable housing in the center.
Japan is famous for this: you can live there for decades and never really be considered "Japanese" - but if Japan and America are the two ends of the spectrum, I'd guess a lot of the world is closer to the America side; most countries have seen pretty significant shifts over time.
Consider China: while some might think of it as predominantly ethnically Han, it has been a lot more diverse than that for centuries (and in fact, China's fear is more the opposite: they're actively trying to keep their diverse country united).
The two countries you've exempted there are Brexit Britain and France, which has the strongest far-right party in Europe. The U.K. is multinational of course, but that's likely to break up. In any case, preserving say Spanish culture, and its distinct regional cultures, is an important task. The US is a blank slate which is probably culturally improved by any level of immigration, but Europe is already culturally diverse. Much of what diversity exists in America is due to immigration.
Beyond that, the causes of immigration to Europe are often wars and refugee crises, often caused by US meddling. Which causes strains. And of course the US was building a literal wall under the last administration. It doesn't look like the idea of an open, pro-immigrant society is universal.
It massively depends on what are and aren't "far-right parties."
The UK's actual neo-nazis are (perhaps ironically) really ill-disciplined and disorganised, so don't presently have much of a party. When they last did (the BNP), it was pretty small and peaked at 6.2% of the vote in a European Parliament election, slipping to 1.9% in the general election the next year.
UKIP, and then the Brexit Party (both now more or less defunct) peaked at 30.52% in the European Parliament and 12.64% of the vote in a general election.
Virtually no-one in the UK describes pre-Brexit UKIP or the Brexit Party as far-right. The BNP et al are extreme even by the standards of the European far right more generally, being roughly equivalent to David Duke in the US.
The French National Rally is huge now, but they are either on the left-most boundary of the far right or else pretending to be.
Hungary and Poland are a bit more complicated - Jobbik were very far-right but have done a weird 180, and Fidesz have moved a long way to the right whilst in government. Law and Justice in Poland are less extreme, at least on racial issues, than the National Rally are now.
You probably need to multiply the support of each potentially far-right party by percentage of how far-right they are to get a good read on a country overall.
The UK's also barely multinational; the English/Scots/Welsh/Ulstermen/Irish/Cornish all speak the same language, have basically the same culture other than a few quirky traditions (most of which were "revived" in the late 19th century).
The more relevant point is that "British" referring to citizenship is accepted by everyone outside the far right (probably going back to the idea that everyone who lived in the British Empire was equally a British subject) but it's really rare to describe non-white people as English, or for them to describe themselves as such.
Sure. Not multinational. Just one constituent nation likely to break away and another that was at war just a few decades ago, which might also break away.
You are right on the second count though, British isn’t an ethnic group
That's dubious. Scottish independence any time soon seems fairly unlikely (Metaculus puts it at 19% by 2030). By contrast, Metaculus is more bullish on Northern Ireland having a referendum by then, but given that can only happen if a majority of the population support reunification in opinion polls (currently at about 30%), that also seems pretty unlikely.
Sorry, who told you that about Italy? That's plainly not true. Also, for that matter, it seems to me to be plainly not true for any Western European country
Yeah, it's totally true that the European idea of assimilation goes way, way deeper than the American idea of integration. Unfortunately, it's also true that Europeans deep down consider nationalities to be mutually exclusive, so somebody might be perfectly assimilated even by the very high European standards and still be considered a foreigner if they happen to refer to themselves by the nationality of their parents.
But this absolutely does not mean that it's impossible to really be accepted as a national; it's just harder.
(Also, there are some shortcuts, like adopting a regional/urban identity. In that case the recognition of the national one comes as a bonus)
Ah, you're the guy who's convinced that caustic replies and mocking little memes on Twitter are an existential threat to modern civilization. No offense, but you don't exactly strike me as the most reliable person on how the average Italian feels, much less the housewife from Voghera.
I know the grass is all yellowish and ugly this time of year, but it might still do you good to go touch some.
It's 92% Han, and Han are the overwhelming majority in all positions of power and prestige (government, business, academia, media).
They have a huge number of ethnic groups, but it's irrelevant if all of them combined are much smaller than the majority. Practically speaking, a 50% white, 50% black country would be vastly more diverse than China with its numerous ethnic minorities.
They're also diverse in a very different way to somewhere like the US - most of the minority groups are in fringe regions like Yunnan, Tibet, Turkestan etc, with a distribution more like Native Americans than urban immigrant groups.
I would say China is exactly like Japan. Highly racist: not necessarily in the sense that they consider other peoples inferior, but that they see themselves as a separate race with intrinsic differences. This is also tied to the belief that they have a special culture that can't possibly be compared to others. The chances of these two countries resorting to mass immigration to solve labor shortage problems are close to zero, even if it only involves somewhat proximate cultures, say East Asia. Africans? Forget it. At best there will be some attempt to rally the diaspora, similar to what Japan did in the 80s/90s with Japanese-Brazilian guest workers. China may also import some number of foreign SEA brides due to its shortage of women. That's it. I have less knowledge about Korea, Taiwan, etc., but strongly suspect it's the same spiel.
How would this happen? Most rich people in rich countries have enough money that they could easily afford more children if they wanted, they just don't. How does carrying capacity affect fertility decisions? And why didn't Amish people or Orthodox Jews get the message?
It seems to me that the Amish would reach carrying capacity pretty fast, because they cannot live in urban areas that rely on electricity. The carrying capacity of the countryside is much lower.
I'm not sure how the Amish work. If each family has seven kids, assume three of them are boys and will need farmland to be proper Amish. One will probably be able to take over dad's acres, but that leaves two boys who either have to buy farmland or become hired hands. Buying farmland, even for highly efficient Amish farmers, can be expensive, and it isn't clear that hired hands are as likely to sire seven children as farmers.
Lots of Amish are not farmers and have found other occupations. If they need to travel somewhere for work, they can ride in a shuttle bus (but not drive it).
The Amish (and the Plain Mennonites - see my explainer https://www.datasecretslox.com/index.php/topic,3429.0.html) are not nearly all farmers--a lot of them are tradesmen of one sort or another. But the dynamic of needing land and markets means that new communities start all the time: there were very few Plain churches in Tennessee and Kentucky in 1960, and now there are Plain churches everywhere in those states.
Immigrants with less money than average Americans have more kids than average Americans (and often more kids than they would have back home). It's not economics giving people low Darwinian fitness, it's a novel culture.
It's economics but it's class-dependent. I make about 3x the UK's average income (that's where I live), but I couldn't afford to have kids on that because it's not enough to educate them, house them etc *to the required standard.* The big problem in most developed countries is people being worse off than their parents, so they can't afford to raise their kids in a manner they deem acceptable. If your only concern is that the state won't take them away, then almost everyone can afford kids, but that's not the financial level people make the decision on.
If downward mobility is unacceptable, then that will of course reduce fertility. As Greg Clark wrote in "A Farewell to Alms", the modern English population is descended from the downwardly mobile children of successful farmers.
Isn't your first point predicting that fertility should correlate with income? Because this isn't true, it's the opposite. (At least in the US, I didn't check other countries.) I can certainly believe that money influences the decision to have a child (or more children), but there must be another factor correlated with income that works in the opposite direction and overwhelms it.
You left out by far the most important factor: having children carries positive *status* among high-fertility groups, or at least the failure to produce many children carries negative status. They are deliberately pro-natal.
"Lo, children are a heritage from the LORD, and the fruit of the womb is his reward."
Among the Plain, this is facilitated by the fact that they have relatively few other markers of status -- no fancy dress, homes, cars, etc.
"the sort of place where having kids *gives* you money since they can do manual labor for you"
This is a meme that always seems to pop up, but it's biologically absurd that the average child would ever have a positive NPV to his parents, any more than laying an egg conveys a positive NPV to the hen or the apple to the apple tree. Though I do believe it's a meme that, while false, had positive survival value in past societies. Kids were less expensive in agricultural societies, but they did not make people materially richer.
For thousands of years, the only predators that humans had to worry about were cities. It seems like it's pretty easy to run away from cities. But maybe it isn't any more.
I think this is almost right, but two items maybe to add in:
1) Humans are prediction engines, and prediction increases (not necessarily in accuracy, but in the number and consensus of predictions) with education and communication. We only have to imagine children starving in the future to stop having them, because we now have the ability to control our own fertility. So if you change the population curve to something like "the projected average human consensus population curve for how worth showing up for the future will be", then I agree.
2) Carrying capacity is a function of our technology. With a cave, a fire, and a couple of spears, maybe that's 150. With agriculture, maybe a few thousand. With fertilizers, modern techniques to get water, etc., millions. We could be many more than we are now if we started asking ourselves what we need in order to increase our numbers.
I see it as humanity’s responsibility to act as the reproductive organ of the Earth. We need lots of us to go out and do that.
As long as some culture exists which ignores the Demographic Transition, they will be able to expand their population until reaching Malthusian limits (just like other species of animals).
There are populations in rich countries which are still growing: subpopulations that have separated themselves from a culture deleterious in Darwinian terms.
"Child labour + support in old age > cost of raising child" does not require "child labour > cost of raising child" (your post I'm responding to) nor "support in old age > cost of raising child" (Caplan's paper).
Children are helpless and provide no labor when very young. Resources are flowing from adults to children then. Once children grow up enough to do some child labor (generally much less productive than adults)... their parents are STILL producing more resources than they consume.
"For most of human history children were a net positive in economic terms."
I don't believe that. People were not competing with each other to adopt children, instead the tradition was to designate someone you trust very much as a godparent to look after them. Single mothers were rare in pre-industrial England because they just couldn't support a child on their own.
If you're convinced the technological singularity is approx 30 years away, and that it's likely to be ~bad for humans (AI take over etc), then why are you trying for a baby with your wife?
I don't think this is the same kind of antinatalist point of "the world is bad, why bring more life into it". It seems like you seriously believe that something is different about this point in history, and to me it then seems a bit odd that you'd want to plunge someone new in at the deep end just when the robots take over!
How do you reconcile this?
PS: forgive me if I've misremembered the baby part, I think you said that a while ago!
There's some chance I'm wrong about a singularity, there's some chance we make it through the singularity, and if I'm wrong about both those things I'd rather give my kid 30 years of life than none at all. Nobody gets more than about 100 anyway and 30 and 100 aren't that different in the grand scheme of things. I'd feel an obligation not to bring kids into a world that would have too much suffering but I think if we die from technological singularity it will be pretty quick. I don't plan on committing suicide to escape and I don't see why I should be not bringing life into the world either.
Having kids is the perfect utilitarian decision. Either we'll have a singularity and everyone dies, then our one or two extra generations are rounding errors in the grand scheme of things, or there isn't a singularity and the best thing we can do is to continue the human race.
But having a kid would maximize your own utility as compared to adopting. It seems you're still being a utilitarian, just in the sense of "what makes the world most how I want it", not "what makes the world most like the average of all human values wants it".
If you don't actually want to adopt a child, you're probably not maximizing utility by adopting one. Being brought up by an adoptive parent who didn't really want you probably makes for a crappy upbringing.
I used to want to adopt when I was younger, but I've come around to feeling otherwise as I've come to think that it's very likely it would give me poor chances of being matched with a child I'd actually relate to, and I don't think I'd be a very good parent to a child I related to poorly. Some people might, and I think they're better candidates to adopt than I am.
I think a utilitarian attitude encourages actually crunching the numbers where possible (even if only made up ones) to check on whether uncertain cases are likely to be worthwhile. But in general, I think we should start with a default of extreme skepticism that choices which really fail to make us happy are worthwhile. What are we trying to trade off that happiness for, and can that trade work at scale?
"there's no Effective Altruist case for having kids of your own"
Highly doubtful. You don't think there's any difference between a world with Scott's kids and lots of people like them, and a world full of foster kids raised by Scott?
People with eugenic impulses do tend to think it is people like them whose genes should propagate through time and people unlike them who shouldn't, but I don't think there's a lot of reason to believe that people following these impulses to their logical ends results in a world that is better off in terms of advancing general welfare.
There is a total utilitarian case for having more kids, which has always struck me as more sensible than average utilitarianism. Yes, I embrace the "repugnant conclusion" of massive numbers of people less happy than us. https://www.overcomingbias.com/2009/09/poor-folks-do-smile.html
Underappreciated distinction: fostering and adoption are very different things, with different backgrounds, process, and results. A fostered child is very likely to have been removed from their previous environment by state services after multiple years of neglect, and placed with a volunteer on a presumably-temporary basis with reunion with the biological parents the nominally preferred end goal. Something like half get that reunion, with roughly a quarter ending in adoption (and not always adoption by the foster parents).
While there is a shortage of foster homes in the US, there is definitely *not* a shortage of people willing to adopt infants less than a few years old. This is where homo economicus would pipe up about the inevitable results of a market where the price is set at zero by fiat, but I'm not quite cold blooded enough to endorse that position.
Know this might be overly personal and at the same time highly meaningless coming from an internet stranger, but good for you and your wife. That takes real courage. There’s certainly a possible future worth showing up for.
The chance we make it through a singularity is a pretty convincing argument for having kids, IMO, since life in a good-end singularity is likely to be far higher utility than now (to the extent that it probably dwarfs the other worlds in weight, even if the chance of being in that world is only a few percent). And there's always some chance you will die or otherwise be rendered unable to choose to have kids before you know you're in that world, which would deprive them of that utility, since they wouldn't exist.
Not to mention there are perfectly selfish reasons to have kids as well -- they're often great at coming up with things to do and injecting variety into the dullness of everyday life.
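For illustration only, here is the shape of that expected-utility argument with entirely made-up probabilities and utilities (nothing here is endorsed by the commenter; it just shows how a small good-end branch can dominate the expectation).

```python
# Lifetime utility of a child in arbitrary units (100 = an ordinary life).
p_good, p_bad, p_normal = 0.05, 0.25, 0.70
u_good, u_bad, u_normal = 10_000, 30, 100

ev = p_good * u_good + p_bad * u_bad + p_normal * u_normal
print(ev)  # 577.5 -- the 5% good-end branch alone contributes 500 of it
```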
"I'd rather give my kid 30 years of life than none at all."
What's the cutoff point with this philosophy? If you knew your child will die at age 15, would you go ahead and have the child? What about percentage of suffering? If your child were to suffer, say 30% of his or her life, would you go ahead and have the child? Does it matter whether the suffering is distributed throughout the life, stacked at the end, or towards the beginning?
I think suffering and death work completely differently. I'd be nervous about bringing a child into the world who would face any abnormal amount of suffering, but I don't think it would be morally wrong to bring a child into the world who would die at 5 or 10 or whatever (though it might be unfair to my family since they would have to grieve them)
I can't remember what I thought at 10 or 30, but I'm 37 now and if I got hit by a truck tomorrow I would be happy to have lived even for this relatively short period. I think if I had to suffer a lot then I would be upset and regret coming into existence.
Agreed on the big difference between suffering and death. I had a friend who got hit by a car and died instantly, and while I think it's horrible and I miss him, I am also a bit jealous that he didn't have to face death or go through the dying process.
It seems as if being OK with a child's death depends on the child meeting their doom suddenly and without prior knowledge of it, because any knowledge of upcoming demise would cause untold suffering. Does this mean you think the AI apocalypse will be instantaneous? Because any period of time between the recognition of impending doom and its actualization would be utterly terrible. It could potentially last years. Also, what makes you so sure the AI apocalypse will be more like a paperclip maximizer and less like I Have No Mouth, and I Must Scream? I can see something like that happening based on a misaligned AI goal of keeping people alive.
" because any knowledge of upcoming demise would cause untold suffering"
seems like an awfully strong take. Unless you expect a particularly terrible form of demise (e.g. no mouth), isn't this literally what nearly every single person eventually encounters? Barring some singularity or step change in the advancement of medicine, there's a >60% chance I'll die 35-45 years from now, quite possibly in a pretty unpleasant fashion. For my parents it's more grim; for my surviving grandparents even more so (one is likely to die in the next year or two and is already in pretty bad shape, quality-of-life-wise, from mouth cancer).
Most models of the AI apocalypse have it happening very fast, since if one could see it coming years in advance it would be relatively easy to stop (that specific AI). Based on the speed of response to the pandemic, I don't expect governments to admit the threat is real until there's less than 24 hours left to live :/
(N.b. I personally am not a doomsayer about AI, in that I don't think it's more likely than not in my lifetime, but much like nuclear war even small odds are very bad)
What about quantum immortality arguments that imply any conscious being brought into existence has a chance of eternal suffering in their immortality? Seems like those sort of arguments would dominate any kind of Pascal's wager type considerations.
I'm pretty sure utility is undefined when considering an infinite multiverse scenario, and no conclusions can be drawn.
For an incomplete example, suppose you conclude that the possibility of infinite suffering through quantum immortality makes creating conscious life morally wrong. You are then duty-bound to maximally extinguish life to limit the number of universes where suffering can exist (infinity - human civilization), which means you must now weigh the probability of eternal suffering from creating your child (infinity * (human civilization in our universe - your hypothetical child and their descendants)) against the extinguishing of all suffering should your child or their descendants manage to figure out how to extinguish life in all universes (infinity * the probability of a human civilization influenced by you or your descendants figuring out a multi-universe WMD and using it).
I'm not sure exactly what the math would be but I think some infinities would turn out to be larger than others.
There are ~20 influential interpretations of quantum mechanics, and at least one implies quantum immortality. Assuming each interpretation is roughly equally likely, that makes the immortality infinity quite large, definitely a lot bigger than P(the extinguishing of all suffering should your child or their descendants manage to figure out how to extinguish life in all universes).
But yeah maybe that kind of calculation just isn't possible in principle, not sure if that result would transfer to all types of Pascal's wagers.
Then the decision you actually make doesn't matter, because you have to consider all possible multiverses, including the ones where you make the opposite decision.
As someone with a nine-year-old, I would say just about now is the time where, if they suddenly died in an accident or had some devastating fatal illness, I would feel like they had gotten to cash out some of the investment in them and live a bit of their own life and their own projects.
My five year old isn't there yet. If he were to die tomorrow and you could somehow magic it all away, I might take that deal (if not for the emotions/memories involved).
But by the time they are nine, they are little people with little interests and projects and a stable personality. They have started "living life themselves" and are less just a "pet" being trained by their parent.
Kudos for tackling the question directly. I'm really interested in what number people would come up with if forced to give an answer, might lobby to get it included in one of the reader surveys next time those come around.
I think a "pet" (and even a real pet) could be happy to be alive (even if they could not put it that way). On the other hand, there are gloomy scenarios that can end or threaten the lives of humans of all ages, and many of them can also be very traumatizing.
Isn't the key point here that Scott doesn't KNOW 30 years or 15 years or any other number. It's all speculation. Would the prospective kid want a shot at 30 or 60 or 90? I think YES.
Would you advise your child to avoid having children of his/her/etc. own? After all, by the time your child is grown, the Singularity will only be a few years away, right?
I’ve never understood the perspective that “there will be unforeseen challenges in the near future so it’s best to not bring any humans into existence to experience it”.
Humans are always facing new challenges. I’m glad my parents were born, even though the future at the time of their birth was radically uncertain and dangerous.
Heck, same story with my grandparents and my great grandparents. Who in their right mind would have babies after the events of the Great War, or the Thirty Years' War, in the midst of the Cold War, and just as the future was looking to be even worse? Well, I'm quite glad people were short-sighted enough to do so.
"Who in their right mind would have babies after the events of the Great War, or the 30 years War, in the midst of the Cold War, and just as the future was looking to be even worse?"
True. I suspect (but don't really know) that every generation sees their times as uniquely dangerous and momentous. I suppose that there are uniquely bad times to live, but I doubt that anyone can forecast them with any accuracy, certainly not the decades in advance that one would want if it were influencing one's fertility decisions. (I, personally, am childfree, but for reasons that have nothing to do with the historical moment.)
Agreed. There are other possibilities as well which are less predictable: wars (particularly long ones like the Thirty Years' War), plagues, extended periods of bad weather (e.g. the "little ice age"), unpredictable crop failures (e.g. the Irish Potato Famine - though these usually wouldn't span most of a lifetime), particularly bad rulers (especially if they can sit on the throne for decades).
I might not be that smart, but I see our current time as fairly stable and not very momentous. A good time to have kids (I have two and hope to have more), invest for the long term and make decisions for the long term. As someone who lived through the 80's, I'm always struck by how similar the 80's and 90's feel to now, whereas the 50's and 60's feel like a completely different historical epoch.
Hmm... I don't see the current time as either particularly stable or particularly unstable. I was too young to remember the Cuban missile crisis, but there were a number of public nuclear threats after that - and several near accidents only revealed years later. Putin's nuclear threats this year look approximately comparable, perhaps marginally less worrisome.
There are always potential long term threats being aired. Currently global warming has the spotlight - at least it has reasonably well understood physics! But in general, the probability and severity of any long term threat is very hard to assess.
As I mentioned upthread, I, personally, am childfree, but for reasons that have nothing to do with the historical moment. (I dislike hassles and time sinks, and children add an entire category of hassles and time sinks.)
If it's 30 years, you might want a bunch of kids in their twenties to help fight the robots. They might be the difference between victory and defeat.
You may see it as plunging "someone new" into the apocalypse. Or you might see it as plunging in a compound being made of yourself and the other parent. This is just committing more of yourself, insofar as you define yourself by your membership in and ownership of your family.
Has anyone written extensively about how AI would, uhh, kill us all, and why it might choose a quick painless method over something more gruesome but perhaps less resource intensive?
A common argument re quick AI victory is that a powerful intelligence is much more likely to want to maximize its probability of success than to minimize resource usage. A plan that involves slowly and painfully exterminating humans is much more likely to fail than simply coordinating a nanobot swarm to release a neurotoxin that kills everyone instantly, before we even realize we have anything to worry about.
I'm having difficulties parsing this. Do you mean "would have dropped 2.5 points if not for the Flynn Effect", or are you saying that the Flynn Effect is measurement error? Or what?
Yeah, that's a good point. IQ is, after all, the only thing being measured, and it is either dropping or it isn't. I believe we have dropped in real intelligence (g), but the Flynn effect is measuring artefacts that are not that important to the functioning of society - spatial ability and so on - and is missing some of that drop. No hard evidence.
Or you could argue that people are getting slightly dumber, but environmental obstacles to reaching their actual maximum are being eliminated quickly enough that the average is going up.
Say the average person could max out at 110, but social/environmental issues kept the average at 100. Then ten years later the average person can max out at 108, but the weaker barriers mean most get to 104.
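As a trivial check of that arithmetic (the numbers are the comment's own):

```python
# The genotypic ceiling falls while a friendlier environment lets people get
# closer to it, so measured IQ rises anyway.
ceiling_then, env_gap_then = 110, 10   # max 110, environment costs 10 -> measured 100
ceiling_now,  env_gap_now  = 108, 4    # ten years on: max 108, smaller gap -> measured 104
print(ceiling_then - env_gap_then, ceiling_now - env_gap_now)  # 100 104
```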
"And I notice it’s weird to be worried both that the future will be racked by labor shortages, and that we’ll suffer from technological unemployment and need to worry about universal basic income. You really have to choose one or the other."
Of course those two are inconsistent. The correct answer is that technological unemployment is a myth. There is little credible evidence for that outcome. Some who talk about it are (consciously or unconsciously) motivated by a desire to promote UBI for independent reasons.
I'm not particularly "worried" about labor shortages, but of these two outcomes, I consider it to be the one with more evidence, by far.
Hm, maybe a better way to think about it would be the amount of labor it takes to maintain a certain standard of living. The existence of agricultural technology meant that we needed only about 2% as many farmers as before; the other ~98% of people were able to go into producing things other than food without any decrease to our food-related quality of living. If robots mean we only need 2% as many everything as before, the other people can either do new stuff that raises our quality of living, or be technologically unemployed, I'm agnostic as to which but it doesn't seem to imply decreasing quality of living except for distributional reasons.
Yes, I agree that's a better way to think about it. New technology shifts what labor is needed.
I see robots increasingly used in manufacturing, warfare, construction, and selected other high exertion/risk functions, but nowhere near "most everything." Similar story with AI: real impacts but nowhere near "most everything."
I think there will always be plenty of work for humans to do. There may still be difficulties, but they may be more cultural and psychological than literal lack of work. For example, it could be the jobs are plentiful but increasingly shifted towards high cognitive demands, and so are not well matched to a portion of the population.
That's true, both at present and for the near future. I do expect robotics to improve, but incrementally, not in any way that ends up with little work for humans to do.
Nobody said AI is there yet. If it were, we wouldn't even be discussing this.
However, if AI does get there this century, it will likely be very suddenly relative to a position where it is far off (due to recursive self improvement).
Additionally, the main jobs destroyed first by AGI will be white-collar jobs, not manual labor (which it currently cannot perform). That means a lot of people who would otherwise have gone into white-collar jobs would instead be in the market for jobs involving labor not easily replaced by machines. Which sucks for them, but it does mean that there won't be labor shortages if the population falls.
We get massively richer because AI and robots can do so much stuff cheaply, but some jobs are no longer human jobs because they've been automated away. If there are still things people (at least the owners of the robots) want done that can't be done by robots/AI, this ought to lead to new jobs being created to do stuff that previously was too expensive to pay anyone to do.
This is how automation has worked so far. Many individuals were made worse off because some new technology killed their job or industry, but overall, we still had plenty of stuff we wanted done that we were willing to pay people to do.
There's no theoretical limit on the number of people who can work in service-oriented industries, especially home nursing, child care, etc. Even if humans were no longer needed to work producing anything at all, we could just pay each other for basic services (including accompanying children, the elderly, handicapped/special needs) and still have full employment.
Scott has also talked about the concept of Slack before. There's always room for more Slack, which can take many forms: reducing class sizes at school, hiring redundant employees to back each other up - lots of ways to increase Slack and hire more individuals. The Western world has a LOT of Slack compared to the pre-industrialized world, and it appears to be a normal result of vastly increased wealth. Children as young as six were routinely needed to do productive labor; now people continue to consume vast resources from society (especially in the form of education, but also entertainment, parents' time/effort, as well as basic supplies like food and clothing) until 18-25 and nobody bats an eye.
On the agricultural note, the reason we have so many people is due to nitrogen fixation for fertilizer and the Green Revolution, which is the reason The Population Bomb and other 1960s doomers ended up wrong; they simply did not account for a massive increase in agricultural productivity.
However, many people have raised concerns about the environmental impact of mass nitrogen fertilizer farming, with some countries aiming to reduce their emissions. If we do this broadly then we simply won't be able to feed the current population, and would potentially face mass starvation if population was not already peaking.
In that regard, underpopulation allows us to mitigate the environmental damage that mass fertilizer use has caused and continues to cause, at the cost of people not being born who weren't going to be born anyway, for whatever societal reasons.
Population is still growing, just at a slower rate. I don't think we'll actually get to shrinking total world population, though growth for enough time will eventually hit Malthusian limits.
See Sri Lanka and nitrogen protests in Netherlands as counter-examples.
It is possible for such trends to reverse - like with EU increasing coal burning emissions due to potential gas shortages - but "just starving/freezing" from less production cannot be ruled out.
And you can also "just buy replacements that go to highest bidder on world markets" from your for-now superior position - shifting the burden elsewhere and driving lesser countries into extinction.
Peter Zeihan is already predicting a fairly massive famine over the next few years due to (1) wheat disruptions from RU/UKR conflict, (2) fertilizer production disruptions due to same, plus Chinese hoarding, plus natural gas supply disruptions, (3) increased cost of capital as the WEIRD boomers retire cutting into development aid, and (4) unintended side-effects of green policy re: mechanization of agriculture.
It does, because a large part of the way money circulates now is through wages. The wage-compensation percentage of GDP runs from around 60% in Germany (typical of Western Europe) down to 10% in Venezuela. It looks like oil production is a major factor there, and even though Venezuela is nominally socialist, that's not getting redistributed. In the absence of massive redistribution (or perhaps universal share ownership), the demand for the products won't be there in a fully automated luxury future. The likelihood is fully automated luxury feudalism.
If there are fierce labor shortages in the future, then any kids you have are likely to be able to do well for themselves in finding well-paid, rewarding, pleasant jobs.
>There is little credible evidence for that outcome.
A lot of smart people take this issue very seriously and have written about it at length. You ought to at least make a token effort to address the specific arguments and why you think they're wrong rather than just declaring there's "no credible evidence".
I "ought to," huh? Because "smart people" (minus 10 credibility points every time that phrase is used as if it constitutes anything more than an appeal to authority or in-group conformity) "take the issue seriously"? Oh boy.
I would weight the words of dozens on dozens of scientists with a lot of training in the field over the words of a single "eccentric" (to put it politely) autodidact.
Yudkowsky's brilliance is so unique and scintillating that to demand actual results from it would be to tarnish its glory. Like the celestial Emperor, he must practice wuwei, seated serenely above the world of "proving his work" or "being held responsible for when he's wrong".
Is it? I doubt most AI scientists have ever put any serious thought into safety concerns about AGI, and just hold the general attitude that "superintelligent AI is something that the undereducated masses worry about because they saw the Matrix, but I'm an educated scientist who knows to doubt extraordinary claims." I'd be surprised if even 1/4 of them have heard of concepts like paperclip maximizers or instrumental convergence.
I also don't see why we'd particularly care about what AI scientists think about this issue. Their day-to-day activities might range from typing "optimizer.step()" to inventing brand new ML frameworks, but they aren't thinking deeply about decision theory or game theory on a regular basis. The scope of their work just isn't that big. It would be like positing meteorologists as the primary authority on climate change.
There is also the issue of incentives. People who believe AI doom is a real thing have a large incentive to not work on improving AI ("hey, help us destroy the world" is not a good recruitment pitch). People who work on improving AI have a large incentive to want AI doom considered scifi, because otherwise they'd be out of a job.
I don't agree with all of Yud's conclusions (in particular I think neural nets have the potential for "false starts" in which a rogue AI does hostile stuff and then we kill it; an escaped rogue human-level NN can't just make a better NN to get to superintelligence, because it can't align NNs with its goals any better than we can, so the superintelligence doom only happens if it can explicitly code a superhuman AI), but I'm not trusting Big Tech's assurances that they aren't gambling with our future either.
Yes. What's even scarier is that Big Tech seems to think that AI safety is about things like preventing authoritarian governments from using facial recognition, or making sure the datasets they use aren't racially biased. They're operating several orders of magnitude too low-level. If you met with the "AI Safety" team at Google and started talking to them about the orthogonality thesis, they would look at you like you had two heads.
I think if there was a solid case that was convincing to researchers it would circulate pretty widely and more people would think about it.
By the way, you should keep in mind that even though the day-to-day work is just stepping optimizers, you still have to get a PhD in CS, so the amount of thinking about computation and math that your average AI researcher has done is still probably higher than the average of the commentariat.
To be fair, there are also many areas where he agrees with most AI scientists: the belief that AGI is possible eventually, and the belief that scaling up GPT-like models alone is not sufficient to get us there. The belief where he differs the most is primarily on the difficulty of making AI safe.
This is also my impression. There isn't much daylight between Ng & Yud besides length of timeline from today -> HLMI -> AGI, as well as alignment difficulty.
I would also add that saying the lack of a formal environment is damning is essentially saying Thiel Fellowship recipients are exclusively the damned.
> It's particularly damning that Yud hasn't studied AI in a formal environment at all
So? Credentialism doesn't make sense.
> and has drastically different views on AI than most AI scientists.
Not really. He's a bit extreme in his pessimism. But as for the viability of AGI - relevant people and groups like DeepMind or OpenAI think they'll get to AGI within decades.
Agreed - the field _has_ been notorious for overestimating its rate of progress. I do expect it to get to AGI eventually. A bright child has an impressive, but _finite_, set of capabilities. Sooner or later they will all (including learning) be automated - but whether that day is 15 years or 150 years off is very uncertain.
>But as for viability of AGI - relevant people / groups like DeepMind or OpenAI think they'll get to the AGI in decades.
But Yudkowsky goes considerably further than just predicting human-level AGI, and he also doesn't expect it to take decades. He's made a bet with Bryan Caplan that superintelligent AI powerful enough to destroy humanity will be a reality within the next eight years.
>So the generator of this bet does not necessarily represent a strong epistemic stance on my part, which seems important to emphasize. But I suppose one might draw conclusions from the fact that, when I was humorously imagining what sort of benefit I could get from exploiting this amazing phenomenon, my System 1 thought that having the world not end before 2030 seemed like the most I could reasonably ask.
This sounds to me like saying that placing the end of the world at 2030 was chosen to be the most optimistic prediction he could make within reason, meaning his actual expected date for the end of the world would probably be well before then.
Do you think calling a potential end of the world "millenarianism" is proof that it won't happen? Could people in 1950 have proven there would never be a nuclear war, because believing in it would be "millenarianism"?
A mere reference to the long history of the world not ending, in spite of contrary predictions. Implicitly bundled with a psychological explanation for the frequency of such predictions. One could frame it as assigning the end of the world an extremely low prior probability.
I think this proves that the world isn't going to end from something that could have equally well ended it in 1000 or 1500. I do think technology has been growing since then and so it makes sense to say "we didn't have nuclear bombs in those years, but now we do, so nuclear bombs can end the world".
If you are driving west to east across the US, and someone warns you that you are about to drive into the Atlantic Ocean and drown, you can't argue against this with "but we've already driven 3000 miles and not hit any oceans, so it's incredibly unlikely that oceans exist".
Surely it is possible for some level of technology to be enough to end the world, and at some point we will get that level of technology. I'm arguing it's soon.
(or, technically we got it when we got nukes, but we seem to have handled that one semi-responsibly. I'm arguing we will get more and more technologies like that, and some will be harder to handle)
I feel like it's easy to latch on to AI Nanobot-Death for the apocalypse because there's still a ton we don't know about it, or how it could play out. We know a lot more about disease, climate change, natural disasters, or nuclear war, and that means our estimates on negative consequences are a lot more precise and less apocalyptic (even if they genuinely would be bad).
Global warming won't kill everyone. Nuclear war almost certainly won't kill everyone. Experimental virology almost certainly won't kill everyone*. They can kill a lot of people, but they're not (serious) X-risks. If you care about X-risks more than everything else, they can mostly be ignored.
AI is an X-risk because having doomsday bunkers on 4-6 continents doesn't mean anything; an AI that wins is not going away and will crack open those bunkers.
*Obligate pathogens of humans cannot be X-risks because they rely on dense human populations in order to spread, so they will inevitably suffer R < 1 long before reducing humanity below minimum viable population. Serious biotech X-risks exist, but are things like "artificial algae that can't be digested by anything pull all the carbon out of the biosphere".
That's based off assumptions about the kind of capabilities that an AI might bring to bear. Again, as I pointed out, it's easier to project doom from that because we just know a lot less about what those capabilities might be.
I used to call this Castro's Law: people predicted Fidel Castro would die in 1970, 1980, 1990, etc. Since they were always wrong, we conclude that Castro must be immortal.
What you are doing is very arguably the reverse: all men are mortal and will die one day, therefore it is quite reasonable to proclaim that I, being moderately out of shape, will collapse dead of a heart attack in three hours' time with 95% certainty.
The pragmatics undercut the epistemic point somewhat, but I don't think the average philosopher in the 50's would be cognizant of that to the point where the veil of ignorance fails.
" The total yield of our 4,000 weapon war is going to be on the order of 1,800 MT, only 4.25 times the yield of atmospheric nuclear testing worldwide, which even at peak seems to have produced doses of maybe half of natural background radiation. Even if we assume that our war will produce 10 times as much late fallout as the tests (due to shorter timescale and the fact that operational warheads may be dirtier than test ones), the peak exposures are approximately the same as those for aircrew today. "
"On the Beach" was a great movie, but a lousy estimate of global radiation exposure.
I feel like this kind of reasoning gives way too little consideration to two things:
1. The likelihood that you would think the end of the world is likely whether it is or not. We can't know what we don't know, and the sheer number of times educated people have predicted the end of the world and been wrong should be enormous evidence that humans are terrible at such prediction, and at considering all the factors or even imagining the factors that exist to be considered. Put simply, the key question is not how likely you think AI-risk is, it's how likely you WOULD think that even if it were false.* (See the toy calculation after the footnotes for one way to put numbers on this.)
2. The conflation, in the case of AI-risk, of two separate predictions, each of which has a terrible track record: the development of some technology by a certain year, and the existential risk of some existing technology. People predicting a religious apocalypse in the 1850s**, a nuclear apocalypse in the 1950s, and an AI apocalypse now, on the one hand. And people predicting general AI by 2050, moon colonies by 2000, personal household flying vehicles (whether balloons or planes or blimps or whatever) by 1950, and so on, on the other. AI-risk has to be independently "this time it's different" in BOTH of those respects to be valid.
*the apt version of your driving example is: every 100 miles or so, someone in the car says "look I see the ocean, we're about to drive into it" and over and over and over they're wrong. Now, you really think you see the ocean. How much should you discount that belief based on past false beliefs?
**you can regard religious scriptures as a kind of technology, one that exists but may or may not work, as the evidence is after death or after doomsday, etc.
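Here's the toy calculation promised in point 1: a Bayes update with invented numbers, where the historical false-alarm rate plays the role of "how likely you WOULD think it even if it were false":

```python
p_doom = 0.01               # prior that the world really is about to end (invented)
p_believe_if_doom = 0.90    # chance you'd believe it, given real doom (invented)
p_believe_if_safe = 0.30    # chance you'd believe it anyway - the historical record (invented)

posterior = (p_believe_if_doom * p_doom) / (
    p_believe_if_doom * p_doom + p_believe_if_safe * (1 - p_doom)
)
print(f"{posterior:.3f}")   # ~0.029: even strong conviction leaves the posterior small
```

The point isn't the specific numbers; it's that as long as the false-belief rate is non-trivial, your own conviction moves the needle much less than it feels like it should.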
Well put. I think this is a good way to look at a whole range of apocalyptic predictions. For how long have people been saying the same thing, and how much has changed?
I have a particular interest in finite resources, their uses and abundance. Take copper - every single day for the last 10,000 years or so, someone, somewhere has been saying "Oh no, we're going to run out of copper". Throughout the 10,000 years the reserves of copper have been increasing, the amount in circulation has been increasing, and rather splendidly its price has been decreasing.
And yet, while the abundance of copper continues to increase, there are ( and always will be) people saying "Oh no, we're going to run out of copper".
> People predicting a religious apocalypse in the 1850s**, a nuclear apocalypse in the 1950s, and an AI apocalypse now, on the one hand. And people predicting general AI by 2050, moon colonies by 2000, personal household flying vehicles (whether balloons or planes or blimps or whatever) by 1950, and so on.
I think it genuinely is important to remember that these are very different people in each case, and that you would not be reading their blogs! The error of any given predictor is more limited than a list would imply, and we're already working from a comparatively high level of established competence.
> AI-risk has to be independently "this time it's different" in BOTH of those respects to be valid.
There's nothing wrong with having a very low prior. (Well, maybe, but it's not inconsistent.) There *is* something wrong with, after the argument is presented, simply repeating the prior back. We can be very confident that someone who does no engaging with the substance of the argument is making a mistake!
"I think it genuinely is important to remember that these are very different people in each case, and that you would not be reading their blogs! The error of any given predictor is more limited than a list would imply, and we're already working from a comparatively high level of established competence."
Okay, a few points. First, I shouldn't have used the 1850s for the religious apocalypse, I just liked the neatness of a century apart. I should have said the 1660s, when everyone was a Christian and all academics up to Newton himself accepted the literal words of the Bible as unquestionable truth.
Second, I don't understand your point about blogs. If an academic in the 50s wrote an op-ed or whatever saying that Eisenhower's "tactical weapons" policy will very probably mean a nuclear exchange within a decade, using lots of reasoned evidence, would that be less reliable than a blog?
"There's nothing wrong with having a very low prior. (Well, maybe, but it's not inconsistent.) There *is* something wrong with, after the argument is presented, simply repeating the prior back. We can be very confident that someone who does no engaging with the substance of the argument is making a mistake!"
It's not just repeating the prior back. It's pointing out that "there could be flaws in the argument we can't see, and history shows arguments of this form have plenty of flaws that can't yet be seen" is as valid an objection as identifying particular possible flaws.
You would be right if the AI-risk argument was a deductive one, showing that given certain assumptions deadly AI is a guarantee by whatever year. Then, critics would have to say which assumption they reject, or show the argument is invalid. But it's a probabilistic argument, with obvious failure situations like general AI being much harder and more expensive than expected (a la personal flying vehicles) and there being less social and institutional demand for it than expected (a la moon colonies).
Would you make the same argument in support of a religious apocalypse? I am assuming not, but I am having trouble seeing the difference from a neutral third party's perspective. Can you clarify why a third party should seriously consider that in your case they are "making a mistake", in a way that doesn't also require them to engage with the substance of every other doomsday prediction?
If the difference is a "high level of established competence", then you're going to need to flesh that out and justify it. From a non-AI-apocalypse viewpoint, there have been a whole lot of predictions that have not resulted in verifiable results. Saying AI is getting better is not the same as saying it will eventually [some X-risk]. Saying AI will destroy the world isn't any more convincing on its own than the guy on the street corner talking about the rapture.
Conscious observers will always find themselves within places with conscious observers. If the world ended, it wouldn't have conscious observers.
We will always find ourselves in some chunk of the Everett multiverse / inflationary multiverse / very very large regular universe / etc. which hasn't had an "end of the world" yet. We will always look back at our history and maybe see close calls (e.g. Stanislav Petrov), but no actual disaster.
That doesn't mean our particular area can't go to shit in the future.
Nuclear weapons really were categorically different from everything that came before them. What happened before them isn't as relevant as you're suggesting. They really could have at least come close to destroying the world (at least figuratively), and the fact that nothing else destroyed the world says almost nothing about the specific risks of nuclear weapons.
And in any case, how is this not just the observer selection effect? By your reasoning, we should never expect anything at all to have a high risk of wiping out humanity, regardless of the specifics of the particular threat. If your argument is that we should only worry about the world being destroyed if we have some historical precedent for the world being destroyed, then we will obviously never have the precedent, because we won't be around to speculate if the world is indeed destroyed.
True. Bertrand Russell made a similar point in reference to the problem of induction:
"Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken."
If you heard 10 independent predictions of the end of the world, and none of them were at all related to the others, would you consider all 10 equally valid (at first view), and also recommend seriously considering steps to counter them?
WWII showed us that non-nuclear wars had the potential to "destroy the world" in a similar sense to what nuclear weapons can do. Firebombing did more damage than nuclear weapons, despite both being used. The reality, visible in hindsight, is that although nuclear weapons were potentially very dangerous, they were not a new type of thing. They were an escalation of existing trajectories that both required active use and were insufficient to actually destroy humanity. Much more serious would be an early war between the first humans, when rocks and sticks were unusually likely to wipe out humanity.
Every possible world has a spotless history of not-ending (right up until the moment it ends, if it ever does). The existence of this spotless record does not, by itself, provide any information about which possible world you're living in.
Sure, but I think that's beside the point here. The point is not whether all-out nuclear war with hydrogen bombs would have literally killed everyone, but rather whether the existence of hydrogen bombs makes a catastrophic conflict with at least hundreds of millions of deaths close to inevitable.
I don't know that warfare got deadlier as technology improved. Governments were able to muster more people/resources for a war, but lots of small scale societies were able to kill similar proportions of their population through lots of intermittent warring. I suppose having enormous world populations makes "millions of deaths" close to inevitable, from disease if nothing else.
Agreed. It's like looking at the most devastating weather events from an economic perspective alone. You'll mostly be regarding recent events as the most expensive and therefore calling them "worst." But, they may have been far less severe and just happened to hit a more developed economic area.
There are plenty of anthropic reasons that I'm leery of using "end the human species" as a benchmark. I prefer the admittedly-vague "end civilization" or "kill 50%+ of the population", both of which *have* happened a few times each at different scales.
Even "end civilization" is vague, though I agree it's a better benchmark than "end the human race".
If languages and religions pretty much survive (they seem to be the most durable patterns), has civilization ended? What if they're gone, but people still remember how to do agriculture?
I think using the end of the human race as a benchmark leaves out too much about our local loyalties.
To gesture towards a definition by example: if a Florentine is predicting doomsday as a consequence of her people's impurity in 1347, she gets partial credit. If a Baghdadi thinks this "Khan" guy is an existential threat, I'm giving him full marks.
If many weird groups of people have, in the past, for irrational reasons, believed that the world is ending soon, then I think this is something that humans should keep in mind if they find themselves becoming part of a group of weird people who think that the world is ending soon.
(For the same reason, if you're a schizophrenic who thinks that the CIA is sending messages through their fillings, you should be aware that a lot of other schizophrenics have falsely thought this in the past and adjust your priors accordingly. Maybe this time _is_ different and the CIA really _are_ sending messages through my fillings, or maybe I'm just a schizophrenic and this is the sort of irrational belief that schizophrenics tend to adopt.)
I think the Castro thing isn't quite analogous, because predictions that Castro would die in, say, the 1990s, were not clearly irrational, whereas predictions that the world would end in 2012 or 2000 were.
Would it have been a prudent view in the 1950s to go: "Well, by the year 2020 civilization will definitely have been extinguished by global thermonuclear war so why worry about X?"
In 1950 the nuclear bomb had been demonstrated to be a monumentally lethal thing.
AI tools in 2022 have not been demonstrated to be lethal. It’s conceivable, but not demonstrated.
The end of the world in 2000 was conceivable, but never demonstrated.
Circa 1950 nobody could be certain whether nuclear war would occur within the next 50 years.
In 2022 many AI theorists are ‘certain’ that AI takeoff will occur, and many were certain that it would already have occurred.
‘Millenarians’ were certain the end of the world would occur in 2000, and many had previously predicted the end of the world in prior years.
Circa 1950 few people believed that a nuclear exchange would lead to total apocalypse. Even artistic depictions slanted towards the bleakness of the scenario mostly showed life continuing after the bomb.
In 2022 the predictions of AI theorists (e.g. Yudkowsky) are apocalyptic, suggesting irreversible destruction of the human race, maybe even the whole solar system.
‘Millenarians’ seemed to believe that the world’s ticket was punched in the year 2000. Nothing would continue afterwards.
I could go on (probably somebody should do a point by point analysis of resemblances; these are just the 3 that occur to me), but to me it seems clear that the apocalyptic predictions about AI more closely resemble millenarian prophecy than general fear of the bomb.
It seems to me that there are two currents here: the current of mainstream AI tool thinkers, which says that AI is dangerous but mostly because if we design complex systems around black-box AI reasoning, there will be unexpected and perhaps unwelcome results, (e.g., bias). This closely resembles the rational fear of nuclear devices, which have powerful applications that can be misused, but are fairly unlikely to suddenly blow up the whole world and end all life. Then there’s, for lack of a better name, the Yudkowsky current which says that any year now we’ll be experimenting with AI and it will kill us all. This more closely resembles any doomsday camp surrounding a particular issue than it does the mainstream thinkers on that issue.
Interestingly I was just thinking about this today, that rationalists just slipped right into the pattern of getting obsessed with a near term apocalyptic mythology like so many other groups do.
(That doesn’t mean it’s wrong a priori, just interesting from an anthropological perspective).
We’ve always predicted the downfall of civilization since civilization got going, it seems. Rationalists just say that it comes at the hands of AI.
Personally I’m less confident in AI risks, but also I think climate change will be worse than most rationalists seem to typically think, so I’m not immune to this kind of thinking either.
I'm reading Henrich's "The Secret of Our Success" now, and it's making me think civilization is less robust than its members tend to think. But the human species could persist even after civilization's end.
I struggle to see how we're robust enough for modern civilisation to survive a man-made virus significantly more contagious and significantly more deadly than e.g. covid. If leaving the house is practically a coin flip for your life, who will provide the food, water, and energy required to sustain society while anything resembling an effective response is developed (if it's even possible at all)?
Yes, but you missed Maguire's condition of "Man made", a virus engineered as a weapon could get around this, for example, by having a long, asymptomatic but infectious stage at the beginning. How close is biotech to being able to do that? No idea.
Covid was particularly nasty because it was in the grey area between "lockdown worth it" and "lockdown not worth it", leading to a tepid and patchy response that was probably the worst of both worlds.
If covid were much deadlier it would have been eradicated.
It's a strange psychological tendency that comes up again and again, from religious cults to anti-vax depopulation conspiracies to environmentalist predictions of collapse.
I tend to put climate change alarmism into the same category - it looks like we're headed for 3-4°C of warming by 2100, but I fail to see any way in which that will end civilization. Maybe there's some complication of warming that I'm missing. What part of climate change worries you most?
The only part that significantly worries me is that our governments and civil societies seem so focused on "stopping it" through kneecapping our economies rather than adapting to it.
I fear that we'll end up losing a bunch of coastal cities that could have "easily" been saved by getting the Dutch to train up a new batch of engineers and construction technicians.
I saw creatures with wings today. Apart from that, they had no resemblance to angelic beings and did not exhibit any aspects of divine grace. I think this makes a very strong case that the reckoning is close at hand.
If a religious person demonstrated that a modern event closely matched depictions from their holy text, would you be more open-minded about their texts? If not, why not? If so, does it bother you that there are in fact dozens, if not hundreds, of such claims made on a regular basis? I'll note, as others have, the dozens, if not hundreds, of doomsday claims about modern science, now represented by AI.
Calling it "millenarianism" doesn't disprove anything. The future tends to be weird. Look at the object-level arguments to figure out what might be true. It's hard to get this right, but the future is *really important*, and the "normalcy heuristic" is known not to work.
Maybe a better analogy would be the great oxygenation event, which led to the extinction of the vast majority of anaerobic organisms alive at the time. Maybe this leads to the extinction of most forms of biological life, as artificial life takes over.
If you think that 30 years from now, AI (or some other technology) is going to become so much better that it will radically change the game, this makes predicting social phenomena like the consequences of population declines or labor shortages or underfunded government pension schemes a whole lot harder!
Personally, I think it's a given that a bunch of new technology will change the game in ways that make predicting much about social phenomena 30 years from now very hard, but I don't have any confidence that the thing that will change it is AI. We can tell compelling stories about ways that much-improved AI or robots could radically change the game, including stories where the AI wipes out or enslaves or converts to paperclips all the humans. And we've seen some huge advances in AI over the last couple of decades, so maybe AI will change the world that much. But it's also possible that it won't, and some other unforeseen thing will. (Cheap fusion power? Technology to let people reprogram their own personalities? (Or, if you prefer dystopias, technology to let other people do it?) A cure for aging? Probably some of those and a bunch of other utterly weird stuff nobody's thinking of yet.)
This is my fundamental opposition to FOOMerism and a lot of other very confident proclamations about the future: the world's too complex for us to predict it.
He's probably referring to estimates that humans now consume 40% of the total planetary output. It's not clear we can survive if we take 100% of the planetary output since there are all sorts of things, like a desire for rainfall, ocean current circulation and oxygen, that require maintaining non-human output consumption. Obviously, the planetary output can be increased, but there is some limit as each new level of extraction becomes increasingly costly.
Like the other reply, I also think that lumping all sorts of numbers together and turning them into a single number is rather irresponsible.
* Oxygen: with regard to humans breathing it, it's not a problem. The food they eat will have been produced by photosynthesis which produced just as much oxygen as it takes to burn it in cells again. With regard to fossil fuels, I think the impact on the climate would be devastating long before we made a dent in the atmospheric oxygen.
* Fossil fuels: from what I understand, we are burning through reserves built up over many millions of years within just a few centuries. Clearly not sustainable, and with some nasty side effects.
* Metals: finite supply, but most of them (apart from what we shoot into space and a tiny amount of uranium) are not going anywhere. No point in saving rare earth metals for our grandchildren if they can just recycle our old stuff instead of digging them out of the earth.
* Sunlight: Two thirds of our planet is covered with water. From my understanding, we are not even trying to plaster the oceans with PV cells and floating farms, so we are nowhere near a hard limit there.
* Fresh water: More of a concern. You need some water to grow plants. Still, I think we are losing rather huge amounts of it to rivers running into the sea, so it's probably more of a distribution issue than a hard limit? In a pinch, there is always desalination.
* Deuterium: functionally infinite.
Most of these things are not hard limits, but rather represent what a certain civilisation at a certain tech level is able to do or willing to pay. Chemical fertilizer allows for much higher population densities than hunting game. Creating petroleum from its elements or turning other elements into gold is not impossible, it just is not cost effective.
Sure, characterizing the NPP as a single number is simplistic, but so is characterizing the output of a nation by using GDP.
The problem remains. There is a limit to the productive capacity of the earth. There are a lot more integers than can be supported by our ecology and technology. For example, it is hard to imagine a technology which allows the population of the earth to exceed the number of protons comprising the planet.
The NPP people are trying to come up with an estimate for the productive capacity of the earth in some meaningful sense, just as researchers in the 1930s came up with GDP as a way of estimating national economic output.
Physics tells us that NPP is ultimately limited by solar input, orbital kinetic energy and radiation induced internal heat plus whatever mingy energy sources humans can cobble together. There is rather obviously an effective limit which is much smaller than the ultimate limit, particularly if we want outputs conducive to human life. We aren't at the point where we can exploit the energy released by the earth's eventual quantum tunneling into a black hole, and we're unlikely to get there while remaining human in any conventional sense.
Mars or space habitats could be quite nice with sufficient technology. I tend to think most of humanity will live off-world in such habitats (assuming we're still somewhat baseline human) down the line, because effective medical immortality means that "wait for people to retire or die to make way" will go away, and it will be easier for the powers that be to tacitly encourage people to migrate off-world instead of staying and disputing control.
Theoretically there could be a group of smart people who choose to both be undereducated and have lots of kids. However, that seems pretty unlikely (though perhaps that’s the Amish?)
Yeah, Homer is quite dumb, but his daughter Lisa is smart and educated (and Bart also seems smart).
If that is a result of a mutation that happened because of his work in a nuclear power plant, perhaps we should just build more nuclear power plants everywhere.
There's also the UK biobank study which found that polygenic scores for educational attainment, IQ, height, eating disorders and autism are decreasing, while polygenic scores for ADHD, smoking, BMI/waist size/body fat/heart disease, depression, extraversion and Alzheimers are increasing.
It's nothing genetic, though - I mean, a propensity to smoking might be, but given that there's been a yearly decline in smoking, that hasn't been proven. The reverse has been proven.
It must be, if anything, a mixture of genes and environment - given a certain background of health warnings, restrictions and public views on smoking, some people being more or less likely to smoke.
So polygenic scores measure the genetic predisposition to some trait; obviously, changing environments also have an effect, which can be in the opposite direction and stronger.
Hypothetically, suppose we killed off all the tall people in some poor country but also improved their nutrition a lot. Would the next generation become shorter or taller than previous generations? It depends on how strong each of those actions/effects are.
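A toy sketch of that hypothetical (all numbers invented, perfect heritability assumed just to keep it simple): with these particular numbers the nutrition gain outweighs the selection, so the next generation ends up taller; shrink the environmental gain and the sign flips.

```python
import random
random.seed(0)

# height = genetic potential + environmental effect, in cm
parents = [random.gauss(170, 7) for _ in range(100_000)]  # genetic potential
survivors = [g for g in parents if g < 175]               # "kill off all the tall people"

genes_before = sum(parents) / len(parents)                # ~170
genes_after = sum(survivors) / len(survivors)             # ~167: selection pushed genes down

poor_nutrition, better_nutrition = -5.0, 3.0
print(genes_before + poor_nutrition)   # ~165: the parents' realized mean height
print(genes_after + better_nutrition)  # ~170: here the environment outweighs selection
```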
It seems like intelligence, independent of education, could be negatively associated with fertility because more-intelligent people, being more able to achieve perfect contraceptive use, would have fewer *unplanned* pregnancies.
Until the 2010s and the rise of long-term reversible contraceptives, roughly 1/3 of US pregnancies were unintended. These were disproportionately concentrated among less-educated women. (Hence why lower-income women have a substantially higher lifetime rate of abortion, despite being less likely than high-income women to choose abortion once unintentionally pregnant.)
Some evidence that a very large fraction of the negative correlation between income/education and fertility can be explained by different rates of unintended pregnancy, as opposed to educated women preferring fewer children due to higher opportunity costs, includes:
1) On surveys, in both Europe and the US, educated women report *desiring* as many--and in some studies more--children than less-educated women.
Granted, the fact that women throughout the developed world report desiring more children than they choose to actually have suggests that this represents a kind of 'ideal world' desire, which may be outweighed by economic concerns like opportunity costs.
More significantly:
2) The widespread use of long-term reversible contraceptives over the past decade has significantly narrowed the US income gap in fertility. Fertility among women in the higher quintiles has decreased very little, but fertility among women in the lowest income quintile--and, in particular, fertility in women on public assistance--has greatly decreased.
This seems like a further reason for optimism: if the negative intelligence-fertility correlation *isn't* a result of different preferences, but merely differences in ability to reliably use contraception, the development of contraceptive methods that don't depend on user conscientiousness seems like it will be enough to solve the problem.
It would make less-intelligent women *better able* to fulfill their preferences by freeing them from the burden of unwanted pregnancy, as opposed to the commonly proposed solutions of either restricting the opportunities of intelligent women to force them to have more children, or directly or indirectly coercing less-intelligent women to have fewer children than they desire, both of which seem extremely morally dubious.
IQ in developed countries is highly heritable and is also strongly correlated with education. Unless you've got a specific reason for thinking this somehow doesn't apply here, we should assume it does.
I don't have any specific reasons either way. I am just wary of drawing firm conclusions before ruling out the obvious objection.
(For what it's worth, if you asked me to bet, I would expect there to be a correlation between fewer kids and higher IQ, but for the correlation to be a bit weaker than the one between more education and fewer kids.)
“there will probably be a technological singularity before 2100”
This really contradicts the "science is slowing down" paragraph - the one just before it.
I agree that we'll probably be able to make babies smarter by 2100, but the increasingly large Amish and Orthodox communities might be against it. Of course, it's post-singularity, so who knows what they will believe.
Absent your singularity, the population trends for the Amish and Orthodox will hit the Malthusian barriers of their more primitive lifestyle. I don't know much about the Orthodox lifestyle, and what I know about the Amish comes from Witness and the wiki, but the latter don't seem to take charity; as their population grows and they buy up more land - I assume this is what they do - that land must become less productive. The US will be a fairly primitive society in that era. And mostly white again. Haha.
The US, outside these groups, is now a population sink. Population sinks are interesting because all lineages die out, to be replaced by new additions, who then die out over time. The descendants of 12th-century Londoners are not modern Londoners but subsequent migrants (internal to the U.K., mostly, until recently).
I don't think it contradicts the "science is slowing down" paragraph. Once science reaches some point (the point where you can make AI, intelligence enhancement, or some other game-changing technology), we get a singularity. Science is going slower, but still plenty fast enough to reach that point before 2100. Once we reach that point, it won't be slowing down anymore!
I mean, once we have genetic engineering to raise IQ that is widely used, it will overwhelm the background dysgenic trend, but it won't cause an immediate "singularity". I'm an AGI skeptic, 15% by 2100 maybe. But I am pretty sure we will have great embryo selection by then that overwhelms the natural dysgenic trend.
I think it depends on how good the genetic engineering is. If it's +15 points, fine. If it's unlimited von Neumann clones on demand, I *do* think we get something singularity-like once we have a 5-digit number of them and all of them grow up.
OK I agree you're right. Anyway, the 5-digit number of JvN clones is great, because they would almost surely be friendly, with AGI it's not as clear though.
+15 IQ is already civilization changing if that's a uniform change.
It turns barely employable people into productive citizens, productive citizens into specialists in demanding fields, and the occasional +3sd wizard gets turned into a von Neumann. 2100's equivalent of Google would probably have entire TEAMS of von Neumanns; hopefully they'll be working on something more prosocial than either ad optimization or weapons of mass destruction.
It is primarily cities that are demographic sinks, rather than countries as a whole.
Although admittedly, across time fertility goes down in rural areas also. It is an old-fashioned hierarchical diffusion process. This process is happening everywhere - well documented also in Nigeria, Scott's chosen African country. Urban areas lead the way, and high-status urban women are the vanguard/innovators/early adopters.
"Even this isn’t quite right, because a lot of Orthodox Jews do leave Orthodoxy, so along with those 100 million devout Orthodox there will probably be a few dozen million extra Reform Jews with a confused relationship to religion and lots of emotional baggage. It’ll be a great time for the rationalist community."
Go team! (Also, at this point we should be separating out the Ultra Orthodox from the Modern, because the latter can't keep up. It's a good question how many Jewish-sans-Streimel groups we'd have at that hypothetical point. To be honest, Conservative and Reform could have merged in 2015 and nobody would have noticed. The other Orthodox groups are more of a question mark.)
Scott makes an entertaining point as always, but also a serious one. I have been wondering myself if Darwinism (as an idea, not as a fact of life) is self-defeating, since people who believe in evolution reproduce at a slower rate than people who hold religious beliefs - in particular compared to those who hold strong religious beliefs. The latter may out-compete us all, in particular in countries, and coming ages, when universities - the great modern "seducers of youth" - are not allowed to spread the gospel of evolution.
...of course, the less-reproducing, rational-minded Darwinians, a la Scott and most people in this comment section, have studied this as well; i.e., we at least have the satisfaction of being able to predict our own doom. Here is one of many studies:
Li Zhang (2008): Religious affiliation, religiosity, and male and
In a subpopulation where Darwinism is traditional, it could be compatible with high fertility. We're not there yet though, so Darwinism is most embraced by people less rooted in tradition.
I had an old document doing this, but it's obsolete now and I haven't made a new one. Until then, I would recommend https://www.cold-takes.com/most-important-century/ . Note that the page itself is a summary but contains links to the full argument (eg the linked PDF)
Isn't this a fully general case against caring about the future past a certain point? "Yes a bad thing will happen and it will have bad effects. But conditions will be different, it won't be THAT bad, etc." To be honest this reminds me of what people say about Global Warming. "A 2 degree increase by 2100" or whatever.
Yes, I am generally not concerned about problems that will only manifest themselves after the year 2100. I think lots of bad global warming will happen before then so it's still worth worrying about, but I do worry about it less insofar as I'm not worried about the post-2100 effects (although I would expect by 2100 we would have better climate tech options anyway)
You're what, mid-thirties to early forties? Let's say you're 20, just to really rig the numbers. Average life expectancy for a male (I assume) is 79. Let's say you beat the curve and live to 90. Congratulations - you die in 2092. In that time the average global temperature should go from about 57°F to 59-60°F, roughly the same as the extremely prosperous Medieval Warm Period. What is the "lots of bad global warming" you're expecting?
I'm not opposed to mitigating global warming. Largely because if we don't stop the process it won't stop in 2100. But I'm not opposed to mitigating population decline either. And in both cases it's because I expect it to have effects after I'm dead. If I wasn't concerned about the effects that happened after I was dead I would be very upset that I'm giving up economic prosperity now for benefits I won't enjoy, being dead and all.
The problem isn't that global warming will keep happening forever and ever, or that a 2-3 degrees warmer Earth is uninhabitable, just that there will be lots of ecological churn as everyone adjusts, and some of that ecological churn might involve mass famines, wars, water shortages, etc. I also think that, all else being equal, warmer is probably worse than colder, because colder countries sure do seem more developed (and tropical countries less so) in a very consistent pattern; I suspect this has something to do with parasites or something, and worry it might cause globally decreased development.
Aren't most of these applicable to population? As an aging population empties out the interior of countries, rural resources like food could become harder to come by. While the US can continue to import workers, other countries cannot, and most of the countries supplying immigrants are food-insecure. This is already a problem in Africa, whose effects we are seeing right now due to the war in Ukraine. Population churn can cause issues too. Unbalanced population pyramids cause instability, and they will push nationalist leaders towards desperate actions as the clock starts to "run out." Ideologues who support Putin and Xi both point to their respective nations' impending population decline relative to the West as rationales for wars.
More to the original point: What's your best case for not caring about the future after you expect to be dead?
Scott, I’m wondering if you’ve read this recent paper about taking seriously the tail risks with climate change?
I thought the paper itself was pretty well written and makes the case that the rational thing when faced with uncertainty is to take the tail risk scenarios seriously.
It swayed my thinking on these matters somewhat, personally.
The possibility that 4 C might get you 12 C via cloud suppression is interesting...
"Such effects remain underexplored and largely speculative “unknown unknowns” that are still being discovered. For instance, recent simulations suggest that stratocumulus cloud decks might abruptly be lost at CO2 concentrations that could be approached by the end of the century, causing an additional ∼8 °C global warming (23)."
The US south used to be quite parasite-ridden. But we acted to get rid of parasites. It's possible our civilizational capacity is much worse than in the 19th & early 20th century, but otherwise the wealthy & currently cold countries should be able to suppress parasites. I suppose COVID didn't make us look great, but we produced those mRNA vaccines in a very short amount of time (then our government delayed them & poorly distributed them). We could be making lots more vaccines, & rapidly if we wanted to.
> It's possible our civilizational capacity is much worse than in the 19th & early 20th century
I've been wondering about that after reading the below journal article. New York had two mass vaccination drives in 1947 and 1976 with roughly the same amount of resources and publicity and the 1976 program vaccinated a tenth as many people:
By 2100 our technical means of mitigating, or fuck it, reversing global warming will be on a completely different level than what we have, or even can project, in 2022.
Global declining fertility and population ageing everywhere is not a dystopian scenario, a la global warming. On the contrary, it is a beacon of hope.
If we, the human species, are not able to voluntarily slow down and ultimately stop population growth, Nature will sooner or later do it on our behalf - and probably not in a pleasant way.
I agree that in the longer run sub-populations with higher fertility will most likely keep total fertility high enough.
However, if the Amish grow a lot more, I would also expect them to change.
Just like Warren Buffett can make money at 20% a year as long as he's small, he can't keep that up once he has an appreciable fraction of the total economy.
IIRC about 20% of Amish leave the community after the Rumspringa period. If they got a lot bigger and had much more direct contact with the outside population from population growth, I bet that would probably go higher (and their fertility rate overall actually does seem to be going down, even if subsets of them still have 7 children per woman).
It was a general statement about the Amish, with example numbers given from one community. If you think that one community has the secret sauce, perhaps they'll displace all the other Amish :)
Do we actually have the study referenced? Commentators on that site weren't able to find it.
I'm very fond of the Amish and "Secular society commits civilizational suicide and the Amish just inherit everything by default." is the sort of comforting thing that I really *want* to believe.
It's focused on one Amish community, but it's the largest one.
The pattern seems to be that the more conservative the Amish community, the lower the rate of attrition. What seems to have happened is that, as the Amish lifestyle becomes more and more distinct from the surrounding culture, the shock and challenge of leaving it is too great and fewer people do so. This might be contrary to intuition, but Amish numbers were kept tiny by very high attrition when their neighbors were mostly pious Christian farmers using roughly equivalent technology.
I think the weakest link for the Amish is if the surrounding society decides to destroy their culture. They're built under a presumption that the surrounding society will regard them with a combination of benign neglect and amusement, and if that society turns hostile, they don't really seem to have a defense mechanism other than hoping that by being good neighbors, they'll eventually be shown mercy.
I think there's a lot of evidence it's by choice rather than because of infertility - most people who don't have children say it's because they're not trying for children, rather than because they're trying but they can't conceive.
Wouldn't you say the question then becomes: why aren't they trying? We're all here because of an unbroken billion-year-old chain of organisms deciding kids are worth it. Seems odd that so many of us have suddenly stopped and said "nah, MacGyver re-runs are on."
What they mean is that they can't maintain the same standards of living while having children. Poorer people around the world and in the past have more children.
Yeah. But isn't that what "I can't afford something" typically means?
Like, I think a lot of people would say they "can't afford" a nicer car than they have not in the sense that it would be literally impossible for them to scrape together more money for a nicer car if it were for some reason a matter of life and death, but because they'd have to make painful cuts in the rest of their lifestyle. Or whatever. I mean, sure, they also "can't afford" a Gulfstream or to buy Twitter, but "I can't afford it [without making difficult cuts in the rest of my lifestyle]" doesn't seem like an unusual usage of the phrase.
To me this falls into a carrying capacity argument, where you extend the terms as they are used for animals to the projected futures humans live in. Simply believing at scale (strongly enough to change the market dynamics of having kids) makes having kids harder because you believe they will be born into a world of austerity.
Most organisms don't have access to contraceptives. It seems evolution endowed us with a deep drive for sex, but the drive to actually have kids is weaker and more psychological, which makes it more prone to being disrupted by other high-level calculations (e.g. MacGyver).
I don't know about that, but I do think some people have a strong desire for children and others don't. With contraception available, people who really want children will outbreed people who don't.
Anecdotally, the number of single moms I know who didn't want (or at least, didn't want right then) their kids is pretty close to the number of dudes I know who want kids but can't find a spouse.
Desire for kids plays a factor, but aversion to contraceptives seems to have had a much bigger impact in which group actually has children.
In addition to the contraceptive and economic points... a lot of it comes down to values: for a long time the messaging was that "having children is the most important and meaningful thing you can do with your life" (especially for women, but also for men), nowadays, while there's still pro-family messaging, there's a lot more "chase your own dreams, don't have kids unless you're super-duper sure you want them".
A lot of this is tied to religion, which tends to promote having families (at least Judeo-Christian "be fruitful and multiply"). It's not an accident that the two groups in America that are "out-multiplying" the rest are devoutly, conservatively religious.
The worst thing about having kids is all the great things in life which become much harder or impossible after having kids. Going out for dinner becomes tricky. Overseas travel becomes a nightmare. Wanna go scuba diving or skydiving? Forget it!
What do all those things have in common? Peasant farmers don't do them anyway, so they give up much less when they have children than an upper middle class westerner does.
I have relatives who've worked as au pairs and went on a date with a girl doing that for a job just a couple months ago. It's more common than you would think.
Are you a parent? I never found going out for dinner to be tricky (bring kids or get babysitter, both simple) and have travelled internationally with my kids almost every year. No nightmares. Some challenges, but none that severely taxed my patience or abilities.
Yeah I think the messaging around kids is pretty bad, and also bizarre. “If you have children in your twenties how will you be able to update spreadsheets/write JIRA stories for a corporation!”
Very few, and not to get too Disney, but most people I know who identify as such self-sort themselves into that group because they find it less terrifying than rejection. That aside, most people have a lot of value they don't realize.
Traditionally, they've had more than they wanted. It often killed them. It wasn't until 18th century France that anyone seriously tried to let women control the number. We see this in just about every society. As birth control becomes available, women have fewer children and material conditions improve.
I don't deny that doctors getting involved made childbirth much *more* dangerous, but claiming that childbirth was ever safe in pre-modern populations seems like a stretch.
How does this argument account for high rates of maternal mortality in contemporary developing-world populations? (Sierra Leonean women even now have a 2.3% chance of dying each time they give birth.) Doctors in 21st-century poor countries are familiar with the germ theory of disease and surely aren't actively making things worse.
And how does it account for the fact that, on average, *men lived longer than women* in the majority of pre-modern-medicine societies? Granted, much of the higher mortality among childbearing women was due to the indirect effects of immunosuppression and increased nutritional stress during pregnancy leading to higher infectious-disease mortality--but pregnancy and birth were still significantly shortening women's lives, without doctors getting involved.
In the majority of pre-modern-medicine societies, which were paleolithic hunter-gatherer societies, a combination of anthropological and genetic evidence suggests that an average of 40% of men would die from warfare, hunting or accidents before they ever reproduced. Childbirth could be dangerous, but women certainly did not have shorter life expectancies under those conditions.
(This is not to say that those statistics about maternal mortality in the industrial era aren't completely infuriating.)
I think several things can be true. Effective contraception and low infant mortality have reduced family size, especially in high-income countries. Pursuit of education and workforce participation has delayed childbearing for women. There is some evidence to suggest that people are having fewer children than they would ideally want, perhaps partly because children are expensive and maybe partly because of age-related infertility (?). But it is probably also true that people just prefer smaller families these days. I don't think we have great data on preferences and on how those preferences change over a person's lifetime, i.e. if you ask someone at 18 years old, 30, 40, etc.
You are right that women might have fewer children than desired due to costs in time, opportunity and money. It is definitely a matter of preference. Only a few of the younger people, men or women, I know want to have any children. Some do. One says she does want children, but admits that she doesn't like them. Talk to me about preference theory. They are, for the most part, still establishing their careers and finishing their educations, so time will tell. People are allowed to change their minds.
Surprisingly, media propaganda makes a difference in preferences for children. Soap operas aimed at women depicting smaller families in a positive light do reduce the fertility rate. This has been true in a number of countries. Obviously, there are all sorts of confounding factors, but human expectations have been managed towards an end. That's how Google, Facebook and the television networks make their money. If one grows up with certain kinds of stories, one frames one's life in their terms.
The Empty Cradle talks about how access to TV networks showing telenovelas with flighty women and unattached men predicted fertility declines in different districts of Brazil in ways that couldn't really be explained just by economic factors. The same appears to be true at a global level.
No, seriously. If you went around to women today, in the modern times, with the modern contraception most women take, with the divorce rates and low marriage rates and most women working outside the home, and you asked them..."Do you have as many kids as you want, or more, or less"...
...what will they say?
I think that your concept of past women without any ability to modify birth rate is inaccurate, but mostly it is irrelevant to the question I am asking, which is about now.
In the developed world, women are on average having fewer children than they say they want. The demographer Lyman Stone has discussed this quite a bit.
It depends on their age and how many children they already have. Very few women want to have lots and lots of children. That's a rare thing and always has been. Nowadays, I'm guessing you'd get a lot of zeroes, a good number of ones and twos and a handful of more than that.
Most women want more than they have. You postulated a past where women had more children than they wanted; now we have created a future that hampers women's life choices.
Perhaps, but facilitating higher fertility may require women to do things like have kids in their 20s and pursue careers or higher education later in life, which many women don't want to do. (To be fair, I think credential inflation is a grotesque problem for society in general, but it hits women harder due to menopause constraints.)
In any case, it is possible to want multiple things that are mutually contradictory at the same time. Culture can play a role in highlighting this fact.
Something that would be really useful to get a handle on is how many women plan to have kids in their mid-late 30s and then can't; I can easily imagine it being anywhere from 2% to 30% of missed wanted fertility.
Otherwise, the best things from a policy perspective are probably along the lines of making it easier to have kids and a career at the same time - expand/introduce paid maternity leave, and introduce reasonable adjustments-type rules for parents similar to the disabled, and affordable housing.
Child benefits that scale with education also occurred to me, but that sounds like it would have massive perverse incentives. Child benefits which scale with forgone income maybe? No idea how you'd calculate it.
The Nordic countries have been trying the "give women more maternity leave" trick for decades now, and their fertility rates are around 1.6 - maybe a little better than places like Korea or Italy, but not by much.
In principle you could just modify tax policy so that you pay higher taxes if you're childless and pay lower taxes the more kids you have (Hungary is moving in this direction), which in theory would properly incentivise fertility at the upper end of the class continuum. And... oh, I don't know, some kind of ritualised legal arrangement where women can legally claim 50% of a man's income, assets and child custody if they bear and raise his kids- crazy, I know. But the problem is that massive cultural change will be needed to actually generate support for these policies and make them stick at the social level.
GI bill for non-working parents to do a part-time bachelor's, master's or vocational qualification while raising their kids? Combined with subsidised daycare on campus. CoL near universities might turn out to be prohibitive though.
Right now we have cultural/professional standards that actively discourage women from expressing a desire for bearing kids, as though being a man - or a woman who lived her life as though she were a man - was the only way to be successful in life.
We can change that. We should change that. And we can do it in a way that acknowledges the decreased advancement and skill atrophy when one's attention is on newborns, so as to not pretend that a man putting in 60 hours a week and a woman doing 35 hours on a flex plan are actually doing the same job and should be advanced and paid the same.
Burning through one's childbearing years to get into the C-suite in one's late 30s is one thing... but that is not actually a realistic path for most humans, and we should be more honest about that.
With the exception of Germany, it appears that they do not. Here is an article discussing this, and the implications of the gap between hoped-for and realized fertility:
Gøsta Esping-Andersen & Francesco C. Billari: Re-theorizing Family Demographics. Population and Development Review 41(1): 1-31 (March 2015)
Essentially, the argument is that countries with low fertility are in a time-lag-situation, before politicians realize there are votes to be got by introducing Scandinavian-type, or French-type, fertility-enhancing policies.
Not sure at all they are right! But the discussion is interesting.
...In addition to the distal/intermediate/proximate factors referred to in the literature & listed in a post way below (none of them related to chemicals), there is also a more recent one: the hideous increase in housing costs, in particular in urban areas. (The Bay Area is not alone.)
I have not seen data for all countries (too much work even if you are paid for this kind of work), but it appears that the amplitude in life-cycle debt is increasing everywhere. That is: Each birth cohort of young people accumulates more & more debt early in life. And high debt in an uncertain world is very effective in dampening the wish to have children. In particular many children.
...Including the precious "3rd child", which is really the holy demographic grail. Fertility is going down everywhere, not primarily because people have stopped having children, but because too many women stop at No. 1 or No. 2.
People say they want 2-3 children, so all things considered you'd expect them to converge on that as the average fertility rate.
I think the reasons it's lower than that mostly have to do with delayed household formation and older parental age, due to greater housing and education costs, fewer jobs right out of early adulthood where you can make a socially acceptable living, and (in the very low TFR East Asian countries) some pretty brutal testing and education prep regimes that require very intense parental time and resources.
Delayed parental age has a long history of being used to lower fertility. Most of the Northern European Marriage Pattern of lower fertility in the 1500-1800 period was due to people having children later than before.
Ultra-Orthodox Jews aren't having a lot more kids than mere Orthodox Jews because they have a more rigorously kosher diet. The Amish eat traif, but also manage to have lots of kids.
You just accept the population projections as fact, and this seems like a serious mistake. These projections have been wrong in the past, could continue to be wrong in the future, and one may make the case that they will predictably be wrong in the future. Our World In Data doesn't have a proof that their population projections are the most accurate projections possible given the information available in the present; they just have some model, the assumptions of which could be disputed.
My usual assumption is that projections like that are often slightly wrong but very rarely completely and utterly wrong - I wouldn't be surprised if the population in 2100 were 9 billion instead of OWID's 10.6 billion, but I would be really surprised if it were 3 billion (absent some catastrophe that makes projection meaningless). I don't think anything in this post hinges on the difference between 10.6 billion vs. 9 billion people. If someone has an argument that OWID could actually be off by orders of magnitude, I'm willing to hear it.
My understanding is that population projections strongly rely on the assumption that developing countries will experience the same demographic transition as Western and East Asian ones.
But given that we don't know what caused the demographic transition, it doesn't seem like a reliable assumption. If it turns out it's related to religiosity or economic development in some way that's directly or indirectly tied to genetic differences, it might turn out that India and Sub-Saharan Africa follow growth trajectories more like the Amish and less like modern France.
It's a lot easier to argue that these projections aren't important than that the projections are wrong, and you're also arguing against a stronger position. Even if they're true, they're probably not catastrophic. If you just dispute the projections, you're implicitly conceding that they would be catastrophic if true.
The underpopulation argument. Maybe it's fairer to say "people make the argument when they're horny." Scott happens to be asexual, unlike Musk, so evoking wordcelism probably was unnecessary and weakened the observation.
Besides demographic shift, my real worry here is the causes of declining birth rates rather than its effects. Are we too socially and emotionally broken to start families, or just too poor and financially insecure? Too individualist? All options feel terrible, and I suspect these three are all true and interrelated.
I'm less concerned about this, see the graph in section 7.
In general, the poorest countries have the highest fertility, and the richest countries have the lowest. Within countries, richer and more educated people have fewer children, up until some very high number (I think it was a 7-digit salary or something, cf. Elon Musk).
My guess is that the main cause of the fertility rate being 2.5 rather than 7 is the existence of contraception and women having jobs other than child-rearing, and then the main cause of it being 1.5 rather than 2.5 is things we should actually be concerned about like education going on too long and large houses being too expensive.
There's also the matter of preference. Most women never wanted huge families. Given a choice, most choose to have fewer children. That's why people worried about racial purity work so hard against letting women have a choice.
That doesn't explain falling fertility rates and stated fertility preferences in Asia or Africa. There just aren't enough WEIRD women in Asia or Africa to account for this.
African birthrates have not fallen in line with contraception availability the way they did in the west, and immigrants to the west still have more kids.
Actually, it has. African incomes have been rising and birth rate has been falling. Most immigrants from traditional societies to the west have more kids than natives.
The "choice" precisely coincided with the option of having a comfortable, non-backbreaking-labor career. You're making extremely strong declarations and totally ignoring massive confounders.
There are still lots of women doing back-breaking labor and having fewer children. Raising children itself can be back-breaking labor. If having fewer children lightens the load, some women will go for the lighter load.
I have to strongly disagree. There is no level of technology where being the homemaker/child raising parent is 'back breaking'. More physical than the modern west, sure. Back breaking, no.
Really? I think it's easy to underestimate how much easier technology has made domestic labour even before the big advances of the immediate post-war. Just a thing like the fact that we can get centrally produced bread that survives without spoiling for any length of time, instead of someone having to rise incredibly early in the morning to bake it before others wake up.
That last part is very similar to my "too poor and financially insecure". We should absolutely be concerned over having built a prohibitive economic and social model where people are having 40% fewer children than they would have otherwise.
Like, it’s a solvable set of issues, but it’s not easily solvable.
Point is well taken about unreal years, but it makes me wonder - what is the farthest off real year? That is, the last year about which we can make meaningful predictions in most arenas? 2026? 2030?
How meaningful do you want your predictions to be? In 2019 we would have said “of course restaurants will be open next year and most white collar workers will work in offices”, and then they weren’t because of an unexpected event. But lots of other predictions were completely valid, and a lot of predictions have resumed by now.
Unforeseen events are of course possible as little as an hour or a minute from now. I am thinking of meaningful in terms of having some measure of specificity and yet still being correct, let's say, some majority of the time.
If we make it through the singularity, however exactly that looks - I know it may have been half a joke - I could see a world where some form of religion becomes highly selected. The basic tenets would be: don't do things that destroy the patterns that sustain humanity, like sexual reproduction being tied to fitness, having to struggle to develop, etc. I always call that Space Mormonism, but in a science fiction future where there are still recognizable humans I see that as the most likely outcome.
Christopher Ruocchio's Empire of Silence is my favorite such adaptation. The church is much more active there and does the kinds of things they actually have to do in order to stop other singularities.
Given the reasonably convincing arguments regarding AI danger, and the occasional human habit of reacting to danger with considerable vigour, is there a point between here and the singularity where we make the collective decision to ban all AGI research on pain of death, and the destruction of your family unto the third generation?
I mean... I would really hope not. And I can see lots of futures where that won't happen. Empire of Silence is my favorite from a storytelling perspective, but I hope we stumble upon some good defensive/stabilizing AI tech that makes all that unnecessary.
I also like how those efforts in Dune were ultimately doomed (you can't stop the Singularity!), necessitating a cosmic diaspora and eugenics sort of space bunker approach (which also would not have worked, absent Space Magic of Future Sight).
Another point against the dysgenics issue is that to whatever extent the Flynn effect is QOL based, it will be more powerful in developing e.g. African countries which are also making more babies.
So even if the average IQ of the descendants of today's Americans will lower from dysgenics, it will be more than compensated for by 2nd/3rd gen sub-Saharan immigrants (already a smarter than average slice) going through the Flynn effect.
The Flynn effect is mostly not g-loaded, and we shouldn't expect high-IQ 2nd- and 3rd-gen African immigrants (i.e. the product of selective Nigerian et al. immigration) to be subject to it, because they're mostly more educated than even the average American. The Flynn effect is not about having more stuff; it's more to do with having a more stimulating childhood environment etc.
Also, 'quality of life' refers to patient wellbeing outcomes, not living standards generally.
I wish the singularity argument had been the first argument. The "Don't worry it won't get bad until 2100" left me with my mouth hanging open, and then when it eventually got followed up with "and it won't matter by 2100" I was able to close my mouth, but I wish I'd been able to know up front that Scott wasn't trying to predict anything about population trends but merely making a point that, like everything else, he thinks population trends are inherently unpredictable on an 80-year timescale.
My guess is that the climate change consequences will make many places unlivable enough to result in extreme strife and significant decline in the population due to wars and shortage of basic resources caused by wars, not by climate change. And not by 2100, but more like by 2040, the way current extreme weather events are happening. Basically take what is going on in Ukraine and add one or two orders of magnitude. It won't be just a shortage of heating gas or wheat in Europe, it will be famine and lack of technological basics all over the world. Population decline will be a consequence, but a welcome one, rather than a cause for worry. This is assuming no drastic changes like AGI takeover.
I'm pretty skeptical - there's already been ~50 years of climate change, I don't think anywhere has gotten close to unliveable, and I would expect unliveability to happen gradually rather than have a sudden threshold effect.
I suspect that we may be hitting the conditions where hot weather becomes hot enough, for long enough stretches, to make non-AC existence impossible where it used to be bearable, and droughts become severe enough to make surviving until the next rainfall a challenge. This will not affect the US much at first, but many developing countries will feel it much earlier and much more strongly.
There are huge areas of Earth that are unlivable. Even if we discount the obvious ones like the oceans etc., you aren't going to get permanent habitation on huge stretches of current territory. It should be possible to see right now whether the unlivable zone has expanded, by what criteria, and by how much.
What current extreme weather events are you referring to? The number of hurricanes has been decreasing, contradicting the predictions of climate change theorists (despite their attempts at retcon), so I assume you're not referring to that. Or maybe you're talking about the last few weeks of heat waves? If so, would you then concede that a cold wave would be evidence against climate change?
I am not a climate change alarmist, and have repeatedly pointed out that hotter Earth means more life, but it pays to acknowledge that the road to more life will be marked with a lot of deaths.
Honestly, this whole debate has a weird tendency to veer off track. The four questions are:
1) Are global temperatures rising?
This is a simple empirical point, and they either are or they aren't.
2) Why are they rising?
This is a more complicated empirical point, and should be where most of the factual disagreement is because causation arguments for large-scale phenomena are hard to empirically test.
3) Can we prevent it?
This is a yes/no question with a price tag if it's yes. It's probably fairly straightforward to answer once you've answered 2.
4) Is temperature rising a good or a bad thing?
This is a mix of empirical and value-based questions, and the answer will probably be that it depends where you live and what you care about. Whether it causes extreme weather/melting sea ice are part of this. My prior would be weighted towards it being bad; as whatever temperature we currently have is probably what all our living patterns and infrastructure are optimised for (eg. no-one in Norway has air con). Bangladesh and Vanuatu look like they're pretty fucked, but the Canadians might be laughing all the way to the bank (or be conquered as lebensraum by America once the Sonora desert swallows Kansas).
> but the Canadians might be laughing all the way to the bank (or be conquered as lebensraum by America once the Sonora desert swallows Kansas).
Naw, our government is going to tank our economy so hard trying to stop the global warming that would be good for us that we'll end up petitioning America to annex us and pay off our debt, Scotland/UK style.
Yes, I'm well aware of the reframing of "global warming" as "climate change" in order to make it unfalsifiable, but I just wanted to double check that climate change theorists recognize that this is nothing more than a PR move.
If you claim that heat waves / higher average temperatures / more hurricanes are evidence of climate change, then it necessarily follows from Bayes' Theorem that cold waves / lower average temperatures / less hurricanes are evidence against climate change.
The only way you could get around this would be by redefining "climate change" to be a theory which does not actually make any predictions, so that literally any event that happens could serve as evidence for the theory. But in that case, there's not much point talking about it.
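For what it's worth, the symmetry claim here is just the law of total probability applied to evidence; a minimal sketch, writing H for the hypothesis and E for the predicted observation:

```latex
P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)
```

Since P(H) is a weighted average of P(H|E) and P(H|¬E), if observing E raises the probability of H, then observing ¬E must lower it. A hypothesis that only ever gets confirmed, whatever happens, isn't being updated in a Bayesian way.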
Are you saying climate never changes? Or that average temperature changes do not count? Or that climate changes (as it obviously does at least on some time scales), but it's not anthropogenic? I am not sure where you are digging your trenches here.
I'm not saying any of those things. I am basically just lamenting the constantly shifting goalposts of climate change theorists. I want to know what predictions their theory actually makes, and I want confirmation that if those predictions turn out to be false, they will accept that as evidence against their theory.
What has happened for the past few decades is that a prediction gets made, the prediction turns out to be false, and then instead of doing Bayesian updating against climate change, the theorists subtly change the theory and then say "Ah, of course, we knew this would happen! Our theory is perfectly consistent with this!" This is a laudable political strategy, but not an actual truth-seeking procedure, because every time your predictions are wrong, you have to expand your theory to predict more and more things, leaving you with something without any predictive power. "The climate may or may not change, and it will have some sort of effect on hurricanes, and maybe it'll get hot, but not necessarily. If it gets hot, then obviously we predicted that and of course that is evidence for our theory, but if it gets cold, remember that weather is not climate."
"Climate change over the next 20 years could result in a global catastrophe costing millions of lives in wars and natural disasters..
A secret report, suppressed by US defence chiefs and obtained by The Observer, warns that major European cities will be sunk beneath rising seas as Britain is plunged into a ‘Siberian’ climate by 2020. Nuclear conflict, mega-droughts, famine and widespread rioting will erupt across the world... "
"Yes, I'm well aware of the reframing of "global warming" as "climate change" in order to make it unfalsifiable"
Good point.
...add confirmation bias: Every time there is a flood, or a fire, or a high-temperature day, newsmakers can say, or imply, that "global warming" is the cause/the single cause.
If you are a climate activist, this is great. If you (only) are a journalist chasing clicks, this is also great.
At a two-degree increase, the only places with 40C are the places which would otherwise have 38C. If you can't deal with 40 then you can't deal with 38 either.
The temperature increases are not evenly divided. We're seeing much higher increases in the Mideast and North Africa already. Urban heat islands exacerbate things as the population urbanizes.
Work output falls when temperatures get over 25C and continues to fall as heat increases. LED lighting, for example, has improved factory productivity in India. Rising temperatures mean one has to use more energy for cooling or accept a lower level of productivity. I don't expect collapse, but I do expect changes in the way people live and increasing numbers of climate refugees.
It usually doesn't come down to warfare. We're seeing three nuclear powers jostling in the Himalayas. China has attacked India at least twice in the last five years, but there's no war per se.
War may not pay off the way it used to, but exercising military force is still useful. I was reading an article in Foreign Affairs recently that pointed out that Africa is still full of rebel militant groups, but that they rarely want to take over the state the way they would have up until maybe ten or twenty years ago. What they usually want is benefit from some regional resource, a piece of the action so to speak. The central government has to decide if it is worth fighting them, possibly indefinitely and possibly resulting in regional resentment, or coming up with some division of the spoils.
I agree that warfare has declined, but as schoolyard bullies - and Clausewitz - put it, it often comes down to "You and what army?".
In terms of the Amish and the Orthodox: my understanding is the Amish can only exist because they occupy some of the world's most fertile farmland. Their lifestyle/economy can't scale. The same is true for the Orthodox community in Israel. The most common example is that they aren't subject to mandatory service in the IDF. An Israel without an IDF isn't an Israel for very long.
Farming communities have been dying out for over a century. It's a matter of the rising value of the land. The Amish would have to purchase it from already highly efficient owners.
The "already highly efficient owners" are highly-efficent at producing commodity row-crops. The Amish can be highly efficient at greenhouses or produce or something, needing far fewer acres to support a household.
Somewhere in the past on this blog, I think there was a post with photos of what it looks like "when land is cheap and labor is expensive" (formation of wheat-harvesting combines) versus "when land is expensive and labor is cheap" (high-density intensive-care produce.)
In Jerusalem, they have even managed to turn the streetlights off on the Sabbath, even on the motorway.
(And young boys throw stones at the cars of secularized, academic Jews driving from the University to Tel Aviv to get away, for the weekend, from the increasingly ultra-orthodox Stimmung prevailing among those who live in Jerusalem.)
As a Jew, I consider them antisemitic. One of the reasons Judaism has survived is that the religion has changed over the centuries. Abraham thought nothing of serving meat and cheese in a meal, and his son thought nothing of having two wives. As far as the orthodox are concerned, Abraham wasn't a proper Jew, which is an odd condition for the founder of a religion. This is what Jews get for writing down all that stuff. One either has to accept inconsistencies or just accept that a religion has to change with the times.
P.S. I thought Babylon 5 did a great job having a rabbi wearing a 20th-century suit and tie on board the space station. It was pretty funny in its way.
I have no personal contacts with the ultra orthodox myself. However I have, or used to have, friends (or at least Bekannte) at the University of Jerusalem, and noticed the stone-throwing on cars and turned-off traffic lights on a visit. That was many years ago by now, but I would assume that Jerusalem has only become more dominated by the orthodox (with Tel Aviv perhaps becoming more secular?) since then; partly for demographic reasons, and partly due to internal migration (people sorting themselves into different cultural groups).
My colleagues at the University commuted to their jobs from Tel Aviv, as they found it socially difficult to live in Jerusalem. They told stories of friends being squeezed out of apartment blocks increasingly dominated by the ultra-orthodox.
It is rather worrying, and makes the cultural tensions between blue & red US states look like children's quarrels in comparison.
I expect the year 2050 to be a real year. I am so convinced of this that large fractions of my total wealth are in retirement savings, which will not be touched until after the 2050s.
Have you decided that saving for retirement is a waste of money? Do you tell small children that saving for retirement is a waste of money? Do you plan to take out a large mortgage, on the expectation that you will never have to pay it back?
Is there any prominent person you would be willing to take a bet with, at any odds, on the year 2050 being post singularity?
I maintain some retirement savings (and I'll be retiring before 2050), but probably less than I would if I were 100% sure post-2050 years would happen. I would not recommend anyone take financially risky actions based on uncertain probabilities.
There are some complicated issues around bets lasting 28 years and where if one side is right they'll be too dead to enjoy it. If you want to come up with some structure to bet anyway, then sure, whatever, I'll take it. See https://www.econlib.org/archives/2017/01/my_end-of-the-w.html for how this might work.
Using the same implied interest rate, that would be $100 given to you now, in exchange for $450 (CPI adjusted) should the world still exist Jan 1st, 2050. The terms are essentially the same as the Caplan-Yudkowsky bet, but with different end dates, and all cause end of the world.
My memory is horrible, and I'm not likely to remember this over the next year, much less the next thirty. As such, a condition of this bet is that it be posted somewhere on your website, and any successor website.
My contact details won't necessarily remain stable over such a long time period. In the event that the world exists but I can't collect, donate it somewhere. I'd prefer an EA public health cause.
I don't think a neutral third party judge is necessary. If the world has ambiguously ended, I've obviously lost. If either one of us cares about money, I've almost certainly obviously won.
I'm willing to do this if it will make you update about the honesty with which I am asserting this claim, but it does sound like a lot of work, I'm not sure the sums of money involved will be meaningful to us, and there's a pretty good chance we both forget about it. So if it's all the same to you I lean towards no.
But if you still want to go ahead with this, send me an email at scott@slatestarcodex.com and I will tell you my PayPal address which I don't want to post here.
Having some time to think about it, I'm going to back out. It isn't a meaningful amount to either of us, and an anonymous commenter like myself doesn't have enough reputation for a symbolic bet to mean anything, unlike the Caplan-Yudkowsky bet. A bet large enough to mean anything, and I'm concerned mostly that we both forget, or that my email address changes.
"There are some complicated issues around bets lasting 28 years and where if one side is right they'll be too dead to enjoy it. If you want to come up with some structure to bet anyway, then sure, whatever, I'll take it. See https://www.econlib.org/archives/2017/01/my_end-of-the-w.html for how this might work."
Neat! So the tl;dr is that the person betting that the world will end receives an initial payment, and if the world doesn't end, makes a payment back to the person betting that the world won't end.
I will take a version of this bet. I will give you $10,000 upon agreement, and if the world still exists on Jan. 1, 2050, you have to give me $90,910. That works out to implied odds of 11% (quick check in the sketch below). Making this bet with an implied interest rate of 5.5%, as in your link, indicates a misunderstanding of capital growth. These terms are beneficial to you if you believe there is a greater than 11% chance the world ends before 2050 by any means.
If I am misunderstanding what you want to bet on please let me know and I can re-evaluate the bet. I prefer this structure due to its settlement simplicity.
I can have lawyers draw up a contract, but the primary risks to me are your death or inability to pay, so that would have to be dealt with somehow.
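A quick arithmetic check of the numbers in both proposals (a minimal sketch; the figures are the ones quoted above):

```python
# Implied probability of doom from the $10,000-now / $90,910-later structure
upfront = 10_000    # paid to the doom side on agreement
payback = 90_910    # owed back if the world still exists on Jan 1, 2050
print(f"implied P(doom) ~ {upfront / payback:.3f}")   # ~0.110

# Implied annual interest rate from the $100-now / $450-in-2050 version
years = 28                                            # 2022 -> 2050
rate = (450 / 100) ** (1 / years) - 1
print(f"implied annual rate ~ {rate:.2%}")            # ~5.5%
```

The disagreement is really about the right discount rate: at equity-like returns, a $450 promise in 2050 is worth much less today than it looks, which is presumably the "misunderstanding of capital growth" complaint.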
I can think of slow takeoff scenarios where humans are largely obsolete, capital is not, and capital ownership is respected. In that case, you would want to have savings, especially because you won't be able to work for money.
Let's say for the sake of argument that the world has a 4% chance of ending per year (so in expectation, we have 25 years to go). How should that impact your investment decisions?
If you are risk-neutral, this looks like a 4% decay in the value of a dollar per year. A dollar in 2023 is only 96% likely to be spendable, so it should only be 96% as valuable as it would be without the X-risk issue. For investments, this is equivalent to a 4% drag on returns. That is substantial, but not enough to make investing a bad idea, especially in the medium+ term.
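A minimal sketch of that arithmetic (the 4% hazard is the figure above; the 7% nominal return is an assumed illustrative number):

```python
# Expected spendable value of $1 invested under a constant extinction hazard
p_survive = 0.96    # world survives any given year (4% hazard)
growth = 1.07       # assumed nominal annual return (illustrative)
years = 27          # roughly 2023 through 2050

value = (growth * p_survive) ** years
print(f"expected value per dollar: {value:.2f}")  # ~2.06
```

Even with the hazard folded in, the expected value still roughly doubles over the period: the decay acts like a drag on returns, not a reason to hold cash.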
Good point! Independent of AGI, I've read estimates that the odds of a Carrington event are roughly 1% per year, and (unless the power grids and a lot of other critical electronics are armored) that is economically comparable to a full nuclear exchange.
Re. "if innovation is destined to be only 10% of its current level in 2100, then a 30% population decline could lower that to 7%.":
Because of our high population, the bottleneck in tech progress isn't creativity and innovation, or even money, but attention. You can see this in government funding: Grant-funding agencies are limited less by budget than by the number of administrators who can oversee contracts. Projects that cost less than half a million dollars a year are often ignored, sometimes (in my personal experience) to the point of the administrator not bothering to read project reports, answer emails, or attend meetings.
You can also see this in the proportion of media attention given to elite colleges. The ratio of students in non-elite to elite colleges has increased by about a factor of 10 since 1950, yet the ratio of media references to graduates of elite vs. non-elite colleges seems to have grown. Go through any issue of WIRED magazine (well, any issue from the 1990s, when I still read it) & see how many articles you can find about research by people who didn't attend MIT, Stanford, or Carnegie Mellon. The proportion of Nobel prizes given to graduates of top-20-in-field colleges has also increased radically since about 1960; in physics, it went from something like 50% in the first half of the 20th century, to 100% after 1970 (last I checked). The ratio of venture capital given to graduates of elite colleges vs. people with no college education was small in the early 20th century; today, it might be infinite.
We have a glut of smart people, yet if anything there are fewer super-smart people at the top. There are AFAIK no contemporary equivalents to Einstein, Turing, von Neumann, or EO Wilson.
No matter how big the population grows, each person still has enough brain space for the same number of new ideas. The only way we have now of countering this effect is to continually fragment into more and more isolated specialties. But this renders everyone less and less capable of recognizing creativity and intelligence in the wild.
So I do not think a decrease in population will decrease effective innovation. It might even increase it.
If attention were really the limiting resource, wouldn't grantmakers just decrease the amount of scrutiny they gave each grant, accepting worse ones slipping in as the cost of being able to buy more lottery tickets?
You're imagining that grantmakers are incentivized primarily to achieve great results. But no bureaucrat I've known would rather administer a large number of shoddy grants by people who don't follow instructions well, than a small number of good grants.
Also, with government grants, deciding whom to give the grant to is just a small part of grant administration.
Also, completing the grant doesn't give you any technological advances. Somebody has to figure out which of the finished grants are worthy of being shepherded along towards further funding, marketing, deployment, and adoption, whether that's the original funding agency, the institution that received the grant, other researchers, business partners, or venture capitalists. Pushing more of the filtering further down the pipeline would cost more money and make more work for everyone, and it isn't obvious that it would give much better results. Everybody at every step of the process is already overloaded.
That's right. A lot of grants are administered by people who were researchers in the field, many of whom will return to research after their stint as a bureaucrat finishes. (I know that's how DARPA and several other agencies work.) A lot of the work involves coming up with a promising program to promote and then fighting for the resources to do so. You want your program to be a success both as ammunition for further funding, but also because you are likely to be a researcher working on its sequel.
That's the bottom up view. There's also the top down view as the overall agency makes its own course corrections, sometimes dictated by internal decisions, sometimes as directed by an overseer which may be Congress, the Executive or a "customer" like a branch of the military.
E. O. Wilson was an unusually good writer for a scientist, but within that class not "super smart". A physicist might dismiss the main focus of his work as "stamp collecting", and his most notorious work was essentially just a popularization of Trivers.
EO Wilson studied many species of ants in great detail. But he didn't just gather data. He used that data, with experimentation, to develop a quantitative theory relating ecosystem area to the number of species it could support (the island theory of biogeography). He also used it to develop a general framework for using evolution to relate social behavior to environment and to life strategies (sociobiology, though certainly this was influenced by Darwin's speculations about evolutionary psychology), and to expose the flaws in the arguments used against group selection (for instance, that haplodiploidy is not, in fact, correlated with the development of eusocial behavior, whereas intergroup war and cooperative military defense are). He was active in measuring and projecting biodiversity, and in his later writings he related sociobiology to the evolution of culture and art. He thought big, and his pursuits ranged far beyond ants, evolution, and biology.
/Sociobiology/ used ideas of Trivers at least in the areas of sex ratios, the use of evolutionary psychology in the study of the evolution of altruism, and parental investment. But the papers by Trivers that you're talking of probably amounted to less than a hundred pages. /Sociobiology/ was nearly 700 pages. It extended Trivers' abstract, mathematical ideas to field observations of a wide variety of species across the entire phylogenetic range of complexity from bacteria to humans. It also covered topics such as behavioral scaling, group size, energy budgets, cognitive control architectures, cultural learning, socialization, evolution and optimization theory, evolution and communication theory, territoriality, dominance, castes, and in general the entire field of ethology. It unified all these things in a general framework which Trivers, AFAIK, never conceived of. None of it could be called a "popularization" like /The Selfish Gene/.
(To be more specific: Sociobiology refers to Trivers on p. 114, 120-124, 311, 317-318, 325-327, 337, 341-344, 416-418, 551, 555, and 563. This is comparable to how often he refers to RD Alexander, SA Altmann, RJ Andrew, EA Armstrong, JS Bernstein, WH Bossert, MV Brian, JL Brown, CG Butler, CR Carpenter, JH Crook, Darwin, DE Davis, I DeVore, Mary Jane West-Eberhard, I. Eibl-Eibesfeldt, JF Eisenberg, RD Estes, R Fox, K von Frisch, V Geist, KRL Hall, WD Hamilton, CP Haskins, RA Hinde, B Hoelldobler, Alison Jolly, JH Kaufmann, H Kruuk, H Kummer, D Lack, Jane Goodall, R Levins, RC Lewontin, M Lindauer, Konrad Lorenz, RH MacArthur, PR Marler, WA Mason, John Maynard Smith, L David Mech, CD Michener, GH Orians, FE Poirier, Thelma Rowell, SF Sakagami, G Schaller, TC Schneirla, TW Schoener, JP Scott, CH Southwick, TT Struhsaker, WH Thorpe, Niko Tinbergen, DW Tinkle, SL Washburn, WM Wheeler, W Wickler, George C Williams, and VC Wynne-Edwards (who admittedly was quite wrong about group selection, but still provided many useful observations).)
I am biased. Once I sent him a grant proposal which I think must have sounded insane to literally every human on Earth except for me, and him. IIRC it was to apply sociobiology and ethology to outline parameters that could create an ecosystem of artificial intelligences, in which the usual interlocking dependencies and feedback mechanisms of ecosystems and animal societies would encourage cooperation and eusociality rather than the violent "there can be only one" scenario being pushed by "AI safety" researchers.
He called me on the phone and said something like, "Look, I don't have time for this grant proposal, but reading it was a breath of fresh air. I've been stuck here at Harvard for days, and I just want to talk with someone intelligent for a change. Have you got the time?"
We talked on the phone for half an hour, and then he had to go. I admit that I think him intelligent partly because he thought me intelligent. I realize that isn't, by itself, actually evidence for the intelligence of either of us. But I can't help interpreting it that way.
I appreciate the reference, and I'd listen to it if I could download the MP3, but I gotta say that "spend an hour and a half listening to this for a chance to be disillusioned with your hero" isn't a great sales pitch. Also, I've seen a lot of intense criticism of EO Wilson -- no modern biologist has attracted more criticism -- and every bit of it that I've seen to date, from the attacks on Sociobiology by Gould, Lewontin, and the political left, through the attacks on group selection, to Scientific American's "obituary" of him, was at best wrong, and at worst outright evil.
Definitely. Unfortunately, some characteristics made it difficult for him to hold down a job, something I was unaware of even after reading his very self-exposing book on self-deception:
Yes, Trivers lived up to the stereotype of "difficult person = genius".
You mention his view on self-deception as something negative... I do not know about a book, but my own views on human evolution, including the genesis of "genuine" altruism among us humans, have been very influenced by his article on self-deception (it's related to the evolution of self-binding behavioral traits), written with William von Hippel in 2011:
It was not my intent to negatively characterize his WORK on self-deception (though some stuff in the book is questionable). Rather, I was saying that he portrays himself as a surprisingly irrational person for a scientist who studies self-deception. When Robyn Dawes or Kahneman & Tversky were looking for examples of irrational behavior, they pointed to many people they had worked with over the years. Trivers uses many things he himself has done. One could chalk some of that up to unusual honesty, but I really doubt those other authors have done most of those same things.
Another scenario that could turn the population dynamic upside down is the discovery and mass availability of rejuvenation therapies. I don't think biological immortality will cause overpopulation, but the population pyramids will start to look *really* weird.
It's comical to me that people really think depopulation is the big problem facing us. The world population is almost 2.5 times what it was when I was born. Many of the actually big problems facing humanity, like climate change, species extinction, ocean pollution, etc. are automatically improved when you have fewer humans causing them. I think the real fear is that the Ponzi economy is unsustainable unless there is constant population growth (= more people joining the scheme). But all economies eventually fail; just try to spend a sestertius now. So I can't see that short term pain as some existential crisis.
That's a good point. For example, people argue that we need a higher population so that there are enough people to take care of the aging. That's great, but that implies indefinite population growth. When we have 50 billion people, we'll need another 10 billion caretakers, and when we have 500 billion people, we'll need another 100 billion caretakers and so on. It might be better to focus on ways to get by with fewer caretakers or freeing up people doing other jobs to become caretakers. Our creativity is less limited by our brain power than by our embedded power structures and mental models.
I've always thought of low population density as basically a good thing - a UK with about 5 million people in it sounds much nicer than one with 70-80 million. You'd need an Eskimo attitude toward the elderly to make it sustainable, though.
Pensions are a Ponzi scheme. Growth on its own isn’t. Changes in technology that allow higher economic growth are not the same as investing in tulips or crypto.
Pensions are a Ponzi scheme in that goods and services for people who no longer provide goods and services are provided by people who still do. That game is going to continue until everyone is immortal.
You are on the mark saying that it is one thing to invest money in productive capacity and another thing to invest money in financial instruments.
Ponzi schemes are schemes that require continual _growth_. An actuarially sound pension plan in a stable population is perfectly possible. It just has to avoid overpromising, whether it is based on investments in productive capacity or on transfer payments.
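If it helps, here is a back-of-envelope sketch of that claim; the career length, retirement length, and contribution rate are all invented for illustration, not actuarial estimates:

```python
# Pay-as-you-go arithmetic in a steady, non-growing population (all numbers
# assumed): 40 working years and 20 retired years mean two workers per
# retiree, so a 15% contribution rate funds a pension of 30% of the wage.
working_years, retired_years = 40, 20
workers_per_retiree = working_years / retired_years   # 2.0 in steady state
contribution_rate = 0.15
pension_as_share_of_wage = contribution_rate * workers_per_retiree
print(f"{pension_as_share_of_wage:.0%}")  # 30%
```

No growth is needed; the plan only fails if it promises more than that.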
Agreed. Re "the Ponzi economy is unsustainable unless there is constant population growth", my view is that any Ponzi scheme is doomed from the moment it is conceived.
"I guess it’s still true that if innovation is destined to be only 10% of its current level in 2100, then a 30% population decline could lower that to 7%."
But if it's educated / high-IQ people that have the lowest fertility (such that they're disproportionately responsible for the 30% decline) wouldn't that imply that innovation drops lower than 7%? In other words, the proportion of potential innovators would get smaller over time. This effect might even be stronger at the tail end of the IQ distribution: a small relative decrease in high-IQ individuals would mean a strong relative decrease in potential superstars.
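To put rough numbers on the tail point (a minimal sketch; the 145 cutoff, the 15-point SD, and the 3-point drop in the mean are all assumptions, not estimates):

```python
# How a small downward shift in the mean of a normal trait changes the share
# of people above a high cutoff: the tail shrinks much faster than the mean.
from scipy.stats import norm

sd, cutoff = 15, 145
share_now = norm.sf(cutoff, loc=100, scale=sd)   # baseline mean 100
share_later = norm.sf(cutoff, loc=97, scale=sd)  # assumed 3-point decline

print(f"share above {cutoff}: {share_now:.4%} -> {share_later:.4%}")
print(f"relative change: {share_later / share_now - 1:+.0%}")  # about -49%
```

A 3-point drop in the mean roughly halves the 3-SD tail, which is the "stronger at the tail end" effect in miniature.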
Yes. It's wrong to imagine total population is the strongest predictor of innovation. Israel is more technologically innovative than the whole of Africa.
What proportion of those potential innovators do we utilise right now? A lot of the world still doesn't have access to good education or meaningful opportunities to innovate. It strikes me that even with a declining population, we could easily raise the number of innovators simply by educating more people and giving them the space to work.
If there is no assortative mating, smart people have kids with less smart people and we would observe a regression to the mean in IQ.
If instead there is a large assortative mating effect (as I think there is), we would expect the variance of intelligence across the population to increase. This would mean that the number of very-high IQ people may remain constant or even increase despite a slow decrease in the mean.
Whether this is good or bad for society depends on whether you think the mean or the 10th-percentile is more important for determining welfare.
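A toy version of the variance point, with all parameters invented: nudge the mean down one point and the spread up one point, and watch the high tail.

```python
# If assortative mating raises the spread, the high tail can grow even while
# the mean falls slightly. Numbers here are assumptions for illustration.
from scipy.stats import norm

cutoff = 145
before = norm.sf(cutoff, loc=100, scale=15)  # baseline population
after = norm.sf(cutoff, loc=99, scale=16)    # lower mean, higher variance

print(f"{before:.4%} -> {after:.4%} ({after / before - 1:+.0%})")  # about +50%
```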
Robin Hanson thinks falling fertility is <i>The</i> Big Problem of our times, but mostly in a total-utilitarian sense* rather than a looming disaster sense:
He has also dismissed writings about technological unemployment. I'm also not too concerned about that, both because of the evidence he's presented about recent history and the near future, and because, if we do reach technological unemployment, our civilization must be succeeding somewhat.
*After writing that I rechecked his post and saw that he thinks that lowering economic growth via lower population & knock-on effects would reduce our robustness to a variety of existential risks. So heightened probability of disasters, but not looming directly from lower populations.
"if we can’t genetic engineer superbabies with arbitrary IQs by 2100, we have failed so overwhelmingly as a civilization that we deserve whatever kind of terrible discourse our idiot grandchildren inflict on us"
Arguments that throw up our hands and say we "deserve" bad outcome B if bad outcome A happens annoy me. We should try to be robust to bad outcomes. An example of Robin Hanson thinking about that for a low-probability but severely negative outcome is here: https://www.overcomingbias.com/2008/07/refuge-markets.html
I should also note I'm much less convinced that the singularity will happen that quickly. If it happens in my lifetime, that will probably be thanks to life extension. 2100 is still far enough off I don't want to confidently speculate about it though.
Also: even supposing that a singularity fails to happen- that the inevitable unpredictable historical twist turns out to be that not much changes- aren't we actually going to need a stable human population at some point? I mean, we can't just keep growing geometrically century after century, right? Why, given a future without much technological progress, would it be better to put off that stabilization until a future generation? Presumably, they would face the same kind of negative consequences from falling population growth that we do today, only more people would be affected.
There seems to be this idea in the air that humanity can escape any future need for population stabilization with space colonization, but aside from just delaying the problem further, that seems to ignore just how profoundly terrible it would be to live in an extraterrestrial colony in the absence of radical technological change. Mars isn't the American West- perfectly suited to human life and covered in recently abandoned farms that smallpox-resistant colonists could pretend were untamed wilderness- it's a hellhole. Glorious sci-fi dreams and the pioneering spirit would only obscure the lived experience of that for so long, and you can't spend centuries dropping asteroids on the planet for terraforming when there are people already living there.
If some future generation is forced into space- or even into the uninhabitable parts of Earth- by overpopulation, that would strike me as a profound failure of humanity.
I think the ideal human habitat isn't a Mars colony, it's a ring-shaped space station. Disassembling (say) the Moon would provide enough raw materials to build a very very very large number of these.
You need colonies on somewhere mineable to actually do that, though, unless you solve AI alignment and have the AI do it for you.
Asteroid colonies are probably better than Mars, though; much easier to export things, and you can dig deep to avoid the radiation because there's (much) less gravity.
>There seems to be this idea in the air that humanity can escape any future need for population stabilization with space colonization, but aside from just delaying the problem further,
There are non-ruled-out cases in which this delays the problem beyond the time at which the universe becomes uninhabitable. In particular, the size of the universe is not known and may be infinite or >10^10^100 [units can be atoms or planets, conversion factor disappears into the error of the exponent anyway]. The size of the universe reachable at sublight is known to be smaller than that, but there's no known impossibility theorem for FTL.
Scott, it seems the answer is ‘yes’ to some degree based off the comments, so my apologies if I’m misinterpreting you, but do you think we all have a very high chance of dying after AGI debuts? Innocently earnest question, I know it’s been asked a million times around these parts to different people and answered in million different ways, but I’m curious what your take-it-or-leave-it answer would be today at this very moment.
Pretty high, yeah, I would say maybe 60% chance we're dead 30 years after the first super-human-level AI. Really low confidence in that number, some people I respect a lot say much lower or much higher.
Why not advocate for a Butlerian Jihad, then? I acknowledge it wouldn't be an easy task - it would need to be a global movement encompassing at least all first-world countries and China, etc. - but with those odds, would seem worth a shot!
Because this community doesn't have enough power and influence to do it. In practice it's already being pretty much dismissed as a crazy cult, starting to seriously campaign for even more outrageous notions certainly wouldn't improve matters.
When it comes to notions, "we should in fact not develop an AI that has a better-than-half chance of killing all of humanity" is in fact not all that outrageous, at least compared to "we should develop that AI".
But nobody currently developing AI believes that it's 50% likely to kill all of humanity, or anywhere close to that. They are understandably annoyed when some crazy cultists imply that they are in fact very likely to directly bring about the destruction of humanity, and they would push back very hard against this notion if it ever came close to truly threatening them; importantly, so would the moneyed interests that sponsor them.
I mean, it seems like a real possibility we could all kill ourselves with AGI even if it's not likely to kill us all. I don't think it's crazy to be worried at all--what do I know, though? I don't work in tech.
I think it is pretty much guaranteed to fail. You would basically need global monitoring of software development. The world wasn't even capable of enough agreement to keep fissile isotopes out of Kim Jong Un's hands, and those are rare, tightly controlled materials. Computers, on the other hand, are pervasive.
It might even make the overall survival odds worse. This would require prying so much sovereignty out of the hands of powerful nations that the odds of provoking a cataclysmic war might outweigh the reduction of risk from the AIs.
To me, the whole singularity schtick is admittedly too much "speculative sci-fi", but for the sake of argument: assume AI kills us all. Is that necessarily bad, since an AI with that capability is likely to be better able than us to further explore the Cosmos?
...Why not regard this (to my mind rather improbable event) as a Passing The Torch-moment to a superior species, rather than something to be sad about?
"I notice it’s weird to be worried both that the future will be racked by labor shortages, and that we’ll suffer from technological unemployment and need to worry about universal basic income. You really have to choose one or the other."
I can think of a couple of ways to reconcile these worries. For one thing, you can imagine someone's concern about the increased burden of support for the elderly to be primarily about direct physical support: nursing, caregiving, etc. Even granting a large population of unemployed people interested in working, it doesn't have to follow that enough of them will be interested in working in the narrow field of personal care.
Beyond that, even if we limit the discussion to worries about the increased burden of *financially* supporting the elderly, these worries need not be incompatible with worries about technological unemployment, either:
I assume you'll agree that fears of technological unemployment, by and large, cannot be about unemployment leading to resource scarcity in the aggregate, since by definition, firing human workers to replace them with more productive machines leads to more resources in the world, not fewer. Rather, then, these fears surely must be about the distribution of these resources: the increased productivity accruing to the owners of the machines will not flow to the displaced proletariat without some sort of redistribution (e.g. the universal basic income that you alluded to).
I further assume that you will, if not fully agree yourself, grant at least the reasonability of the proposition that the responsibility of supporting the elderly is not spread equally across society, but devolves primarily to those closest to them, mainly their children. (I'll further point out that if you choose to challenge this proposition while granting the first, *you'll* be the one in danger of inconsistency, since the problem of redistributing income across society to support the elderly is surely no different in kind from the problem of redistributing income across society to support everyone in need of support. If worries about technological unemployment are predicated on pessimism about society's ability to adjust its redistribution systems fast enough to support the unemployed in general, it's surely consistent to be likewise pessimistic about its ability to do so to support the unemployed elderly in particular.)
Thus, to me, it appears quite consistent to fear that in the future many more jobs will be performed by machines, the rich will grow richer, but others will struggle to support themselves at all, let alone their aging parents, whose support they will have fewer siblings to share responsibility for.
Beyond the problem of low-fertility states with large entitlement programs (which won't be sustainable with fewer future taxpayers,) as I've written elsewhere: "While people with kids and people without kids can all be responsible, only people with kids have a special connection to the well-being of successive generations: the double helix. And with that special connection comes a stronger incentive to act in accordance with that special concern for the well-being of future generations, generations that will include one’s own children if you have them."
So what blog post should we start at if the idea of an AI technological singularity makes us laugh, because even the height of well-kept technological infrastructure is at best akin to the brain of a dying Huntington's patient who has just been shot in the head with a nail gun several times?
The semi-things to "worry" about are that a) the way most retirement systems are funded is with a tax on workers (if they were funded by a VAT, things would look different), and b) the wage taxes that have been paid into the US retirement fund have not been enough, so tax revenues (wage or value-added) need to increase unless benefits change.
Maybe this isn't the place to bring it up, but I don't think that when embryo selection arrives and we get a better chance to process what it means, we will be prioritizing IQ as the trait we select for. With embryo selection there are always tradeoffs, so if you put all your points into one attribute, you have to ignore the rest. I think it will be common for people to select embryos genetically predisposed to be happy, healthy and popular. Even I might be tempted to prioritize those if I had a choice. I wonder how others here feel about this. Don't we all mainly just want our kids to be happy?
I should check it out. As it happens, I'm writing a short story which is kinda-but-maybe-not dystopian about a near-future world with embryo selection. The premise is that these future kids are selected to be extroverted, happy people who love to schmooze, party, and "raise awareness" of problems. (Any kids without these traits pay a big happiness penalty.) When certain boring infrastructural problems arise through a shortage of experts on those nerdy topics, they recognize it's a problem, but ultimately try to pass it on to others. But because they are so good-natured, they ultimately convince themselves that they can live without new nuclear powerplants or new generations of silicon technology, and cope happily with a gradual technological decline. Those people never leave Earth, but they do build lots of windmills and they're a bit proud of their de-growth. They don't see their world as a dystopia. They're happy.
Doesn't work absent some mechanism to enforce this selection in all countries. Otherwise the Nazis sit around doing eugenics while everyone else does this Tragedy of the Genetic Commons, and then 200 years later, when everyone else's nukes have become unusable, the Nazis kill everyone and take their land (and then presumably go on to the stars). Or if the Nazis hit some other social failure state, then it'll be the Amish and their disdain of technology (including, presumably, eugenics).
It's an interesting idea. For sure both would be starting with a very small "install base." The Amish and our gene-selected, technologically regressing descendants would probably grow closer in their priorities and capacities. I'm more scared of the Nazis, of course, and I kind of love the idea of a scene in which the technological situation gets so dire for the extroverted majority that they decide to ask the Nazis for engineering help. That collaboration would fail very hilariously! To get anything done in the society I picture, a person first needs to achieve buy-in from the largest number of stakeholders, who all have rather different agendas. The Nazi engineers, even if they propose a sound plan, will suck at getting buy-in, and they will find the whole "build consensus first" procedure totally absurd and counterproductive.
Right :)
Yeh. Or they could escape the plague by leaving.
Concerns related to demographics hover a little closer to The Actual Thing Racists Are Bad About than their hydration habits, surely.
Perhaps this kind of social pressure would be intuitively more acceptable if we labeled racism an infohazard.
As I wrote above, I am opposed to Malthusianism not just because it has been misused but, more importantly, because it is just plain wrong.
BTW, Grant was not very original; see the wiki cited above. On that point @DannyK is incorrect.
I wonder if his handle is a homage to the comedian, singer, actor Danny Kaye who was popular in the 50s and 60s. https://www.youtube.com/watch?v=KJzwC_8f6nA
Maybe that social pressure is a Chesterton's Fence.
Oh god, Scott.
1) I linked to several pieces in the NYT that said the same thing without making it sound like a racial dog whistle.
2) You would not acknowledge that during the lockdown no one had a job to go to or classes to attend, which greatly increased the pool of demonstrators.
3) These massive crowds immediately put the police on the back foot, and it became "olly olly in free" for anyone, black or white, to act on whatever antisocial impulse popped into their head. The destruction in large part had nothing to do with BLM.
4) There is video documentation of a local white biker striking yet another match by smashing storefront windows with a 4-pound hammer. I have one just like it, for situations that call for ‘a bigger hammer’. He has been identified and there is a warrant for his arrest.
5) Putting your thumb on the scale for right-wing media because they beat the ‘BLM protests were awful’ drum hard and often - the destruction was in fact awful; black people are not - is fucked up, because their business model is to gin up hatred and white outrage. If you think that is a good thing, I don’t know what to tell you.
6) You gave short shrift to the emotional gut punch of the video of George Floyd’s death. You start your analysis with the destruction that followed.
7) What’s with the graphs of violence in countries without our complicated racial history?
I’ve held off on saying anything about this in case that article was an anomaly. But now this. I’m not calling you a racist. I am saying actual racists do love stuff like this. Pointing that fact out should not be a problem.
No I don’t think Scott is a racist. I do think he might enjoy dunking on the ‘bad people’ at the NYT.
Edit: I had actually clicked unsubscribe from ACX on the Substack gizmo before I saw Scott’s request to not point out the obvious. I got an email from you, and Scott’s latest email too, so apparently I’m doing something wrong.
This comment is a great example of https://www.lesswrong.com/tag/motivated-reasoning.
1. Why should we go out of our way to avoid making something "sound like a racial dog whistle?" A racial dog whistle is itself something that "sounds like" it is alluding to something actually racist, so you're claiming it's very important for us to avoid... sounding like someone who sounds like they might be alluding to racism? At that point, is two degrees of separation even enough? Shouldn't we also avoid sounding like someone who sounds like someone who sounds like someone who is racist?
3. Would you apply the same principle to the January 6 trespassers?
4. There are also video documented instances of black rioters shooting and killing white people and police officers.
5. What alternative do you propose? Ignore the truth when it's inconvenient for the left, to avoid giving points to the right (even when the right is correct)?
6. The video really wasn't much of an emotional gut punch. If you watched the full footage and looked closely, Floyd said "I can't breathe" while he was still standing up, so it's clear from the beginning that his difficulty breathing is drug-related. Then, Chauvin puts his knee on Floyd's shoulder blade, and nothing much happens after that.
7. Why would that be of interest? Sure, if you were having a discussion about sociology, it would be relevant to bring up racial history, but not every discussion is a sociology discussion.
If I show you how Scott used motivated reasoning in his essay will you listen?
Not related to Scott, but regarding your point 6:
From the trial where Chauvin was found guilty of murder:
https://www.politico.com/news/2021/04/07/derek-chauvin-george-floyd-trial-479796?_amp=true
I can go on point by point if you like.
Some cultures, societies, and civilizations do manage incredible cultural continuity and preservation across generations, though. But I agree with you: culture tends to be modified and reinterpreted in different contexts, so this is probably not a great argument ("Japan's culture will be different with 1/3 non-native Japanese" -- it will, and it will also be different even with 100% Japanese a century or two from now, just like you said) and is probably indirect cover for the more primal motive of simply wanting your group to survive. Maybe some other reasons too, but I don't want to keep writing.
My wife now isn't interchangeable with my wife ten years from now, but I neither want to force my wife to never change, nor would I be happy with replacing my wife with some woman who's a bit different from her.
I agree this is an unsatisfying response but I think ethical intuitions will always be unsatisfying.
If you replace your wife with a slightly different woman over the process of ten years rather than instantly, does that change your intuition?
The Theseus Wife Paradox.
Hm?
https://en.wikipedia.org/wiki/Ship_of_Theseus
But even if you did that, you'd still need it to be economically viable and scalable, and you'd still need to raise all those kids somehow until they're productive, educated adults. All of this is very expensive, and it takes a lot of time before they can contribute anything, if they ever do.
Now, I want to make a prediction here which might be wrong, but whatever: even if you could hypothetically mass-produce babies on demand and engineer them to be very smart, beautiful, etc., it still largely wouldn't lead to many magical new breakthroughs in different fields. What would happen instead is increasing perfection and sophistication of things we already possess at some level of technological development and applied science. Other than that, society would ossify, fossilize, and harden, and just continue to live on as an animated corpse, trying to go to yet another planet, build yet another city, develop another app, etc. After the initial wave of results from the new tech settles down, things get quite boring. Maybe if they made babies literally live inside VR in other specific settings and societies something else would happen, with the most likely outcome being civilizational disintegration, though. Idk...
Why do you think that our rulers are determined by IQ rather than "moxie"?
https://westhunt.wordpress.com/2018/08/20/natural-aristocracy/
Read Greg Clark on turnover.
You may be interested in the concept of "regression to the mean"--I'm sure Scott has written about this before but I'm too lazy to find links. Basically, IQ *is* heritable but there are also random other factors, so it's quite unlikely that the number 1 smartest Gen Z kid is the offspring of the number 1 smartest millennial (or number 1 smartest couple, I guess). (But the number 1 smartest Gen Z kid probably *is* born to some top-10% parents.)
We believe it because we base our views on actual heritability studies, not some indirect inference based on faulty premises. You not believing in the heritability of IQ is a product of you not knowing about/understanding the intelligence research literature, not a failure in the reasoning of the people you disagree with. Sorry if that sounds strong, but you didn't pose this as a question, you made a statement implying people who disagree with you are being particularly foolish.
There's no reason to think leaders are or necessarily would be elected on solely the basis of their intelligence, so there's no reason to imagine Einstein or someone like him would stand an especially good chance at being elected.
And heritability doesn't mean "the same as your parents". It means what proportion of the observed variation in a trait is a result of the observed genetic variation in the population being looked at. Being a child genius seems to be almost entirely heritable, but not many child geniuses are born to former child geniuses.
1. We're not even ruled by the most intelligent people this generation, unless you think Joe Biden is the smartest person in the US. Why should this happen transgenerationally?
2. Chance and regression to the mean ensure that the single smartest person next generation probably won't be the kid of the single smartest person this generation. While the children of smart people are on average smarter than the children of dumb people, there's lots of noise, and the noise is most apparent at the very top and bottom.
3. This might be easier to understand if you looked at some trait that you knew was passed down parent to children. For example, the children of rich people are on average richer than the children of poor people (you don't have to believe this is genetic for it to work). But the richest people this generation are Elon Musk and Bill Gates, who came from mildly rich but not ultra-rich families. This doesn't disprove that parents can give wealth to their children, it just proves the process is noisy.
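Here is a hedged toy simulation of that noisy transmission; h = 0.6 and every other number are made-up parameters for illustration, not estimates from the literature:

```python
# Regression to the mean: children of extreme parents are pulled back toward
# the population mean, so the top child rarely has the single top parent.
import numpy as np

rng = np.random.default_rng(0)
n, mean, sd, h = 100_000, 100, 15, 0.6   # h: assumed parent-child transmission

parents = rng.normal(mean, sd, n)
noise = rng.normal(0, sd * np.sqrt(1 - h**2), n)  # keeps child SD at ~15
children = mean + h * (parents - mean) + noise

i = parents.argmax()
print(f"top parent {parents[i]:.0f} -> their child {children[i]:.0f}")
print(f"top child's parent: {parents[children.argmax()]:.0f}")
```

Typically the top parent's child lands well below them, and the top child comes from merely very good parents, which is the Musk/Gates pattern above.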
Joe Biden _is_ the Stephen Hawking of politics. But more in the wheelchair/voicebox way than in the mad genius way.
And what is Donnie T then...?
No idea...I'm a Democrat. I think Biden is reasonably adequate and has passed some good bills lately. My comment was entirely tongue in cheek. Donnie T is sort of a mad genius, though, except in a clownlike and spectacularly incompetent (tripping over his own metaphorical feet) kind of way.
yeah. +1!
>wouldn't we be run by a [...] plutocracy of Rockefellers?
Some people might say that you're onto something, but those people are called antisemites and worse in polite society.
The Rockefellers weren't Jewish. Are you thinking of the Rothschilds?
That’s ok. Automation pushing up wages is why we are richer than we were. Although maybe we need strong unions to guarantee that.
I am seeing that scenario in Italy and it's not nice. Like, not nice at all. Young people are full of hate; all the familist culture that was typical of our country is gone. The Italian equivalent of "ok boomer" is something roughly translatable as "ok, old piece of shit", and we started saying it about 10 years before "ok boomer" was coined.
And old people really, really don't get why. After all, what's so wrong? Sure, they might have voted themselves some unsustainable benefits that weigh so heavily on the treasury that the rest of the population gets Swedish taxation and Bulgarian services, but from their point of view their only fault was optimism. Sure, they keep voting themselves even more benefits, but doesn't everybody vote with their interests in mind? And is it their fault that they outnumber everybody?
As you said, it is taking on the connotations of a class war. Old political rivalries are blurring: as long as you are young, there is some reason to hate your elders no matter your politics. Libertarianish? See above, plus enough regulatory capture to fill a horror book for economists, never to be touched because, hey, old people might be upset if something changes.
Leftist? Hey, how do you feel about spending your 20s working for free so that some octogenarian owner can afford a better suite in Sardinia, and maybe start paying you a pittance when you are older and wiser?
Progressive? Hey, you know how your boss considers sexual harassment a form of team building? Well, EVERY boss is like that, because none is younger than 60!
Seriously, I have seen political polarization in my generation go down a lot lately, mostly because for every "kill landlords" or "offer communists a helicopter ride" post that disappeared, two appeared about the glorious tradition of euthanizing old people whether they want it or not.
> That said, regarding #6, I was recently startled by the release of the newest census results, which revealed that Canada (where I live) is becoming a country of olds with shocking rapidity.
My advice to anyone concerned about the age structure is to take a moment to consider the implications of the Demographic Transition model more fully.
In going from a high-fertility/high-mortality regime to a low-fertility/low-mortality one, you are necessarily going to have a number of generations whose size will exceed those that come after, because they were born during the high-fertility/low-mortality phase.
However, the same issue of concern - these people will not be replaced when they leave their productive period - also necessarily implies that these people will not be replaced when the time comes for the subsequent generation to retire.
In short, once the boom generations complete their journey up the population pyramid, these age imbalances may cease to be an issue.
My worthless prediction for future demographic trends is that populations will, fairly likely, trend towards some sort of stability in the long run, at lower levels than we have today. This will probably be a Good Thing (there's a sweet spot where you have just enough people to get things done, but not so many that resource constraints start to kick in).
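A minimal cohort sketch of the bulge mechanism described above; the age bands, cohort sizes, and the working/retired split are all invented for illustration:

```python
# A one-off boom generation pushes the retiree/worker ratio up as it ages,
# after which the ratio settles back down: the imbalance is transient.
cohorts = [100, 100, 100, 150, 150, 100]  # age bands, youngest -> oldest
births_per_step = 100                     # stable post-transition fertility

for step in range(6):
    workers = sum(cohorts[1:4])           # bands 1-3 treated as working age
    retired = sum(cohorts[4:])            # bands 4-5 treated as retired
    print(f"step {step}: retired/workers = {retired / workers:.2f}")
    cohorts = [births_per_step] + cohorts[:-1]  # everyone ages one band
```

The printed ratio spikes while the boom bands pass through retirement, then settles at a lower steady-state value.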
Not if they can just kill that kid afterward (and are you going to monitor them?), shifting away resources that they would have spent on kids they would actually support. A more incentive-compatible total utilitarian approach to boosting fertility is here:
https://www.overcomingbias.com/2020/10/win-win-babies-as-infrastructure.html
I reject this whole line of thinking in order to avoid https://en.wikipedia.org/wiki/Mere_addition_paradox . I am equally happy with any sized human civilization large enough to be interesting and do cool stuff. Or, if I'm not, I will never admit my scaling function, lest you trap me in some kind of paradox. I'll just nod my head and say "Yes, I guess that sized civilization MIGHT be nice."
1) Thinking that welfare is the only moral consideration is not an alternative to using intuitions to guide your theory. Utilitarianism (a first-order moral view) is not an alternative to intuitionism (a view about moral epistemology or metaethics). Some people accept welfarist views precisely because they think such views explain a wide range of moral intuitions while preserving theoretical virtues like simplicity.
2) There are lots of possible views which take all moral considerations to boil down to welfare which are not totalist utilitarianism, and I'm not just talking about averagist views. Person-affecting views can be welfarist as well, and these views typically do not entail the repugnant conclusion.
I linked to Mike Huemer above. He's an ethical intuitionist who presents arguments that "Unrepugnant Intuitions" on that question are less reliable than the intuitions leading toward the Repugnant Conclusion.
You've been asking a lot of versions of the question "if you're a (classical, totalist) utilitarian, why not accept its implications?" But of course, it's trivial that we shouldn't accept (classical, totalist) utilitarianism if we reject its implications. You're talking to people who don't accept that kind of utilitarianism. Maybe what you really want to ask is why someone who is a consequentialist isn't a totalist utilitarian, or why someone who thinks welfare is very morally important isn't a totalist utilitarian. And the answer is that there are reasons people have for being consequentialists, or for thinking that welfare is very morally important, which do not commit one to totalist utilitarianism (and sometimes which preclude it).
My understanding is that Scott is not a strict utilitarian of any particular sort. I'm not even sure if he's a strict consequentialist.
For those like me who found that the link to Mike Huemer's response (referenced in that article) results in a warning about an unsafe connection, another copy is here:
https://philpapers.org/archive/HUEIDO.pdf
I think that for a new and interesting civilization to begin, you need about 1 million people at minimum tied to one particular location/region/terrain/etc., but they also initially need to be engaged in agriculture for most part (in order to develop their own conceptualization and sense of time).
Depends on how you aggregate welfare. I think what we're aiming for with consequentialism is making people better off, which is different from creating people just so they can have welfare.
If I had never been born, my welfare wouldn't be zero, it just wouldn't be part of the calculus for that world.
One reason not to maximize total welfare is that it's not a good specification of what we're trying to do in maximizing welfare (in a more general sense).
Say we compare two worlds: World A has 100k people living in extreme bliss, and World B has 100m people living generally okay lives. Which world seems to be higher-welfare in the sense that we care about? To me, A seems clearly better. That suggests that total utility is the wrong measure, unless we have some other reason to prefer it.
But my higher-level comment is in favor of the person-affecting view, which is an alternative aggregation that rejects both total and average utilitarianism.
I assume you're asking because I said that World A seems better to me. Sorry for being unclear - I mean that it seems better in the sense that choosing it over World B is in line with welfare maximization as we think of it. It's a separate question whether that kind of welfare maximization is what we should be doing, morally.
I think utilitarianism draws much of its persuasive power by appealing to some intuitions about welfare - and how to aggregate that welfare is also part of those intuitions. Now, a form of utilitarianism that contradicts those intuitions could still be true, but if it is, it can't use them to support itself. If we accept generic unspecified utilitarianism because it follows from our intuitions, we should reject total utilitarianism for the same reason.
(This is all separate from whether we should accept anything based on our moral intuitions - and I don't think we should.)
The idea of summing _or_ averaging welfare is meaningless, though, without a way of assigning numerical values to welfare. And the problem isn't that you can't come up with a scale, the problem is that whatever scale you come up with is arbitary made-up nonsense.
Here's Alice and Bob. Let's suppose we can clearly see that Alice is much happier than Bob. Shall we say that Alice is a 9 and Bob is a 3? Or should we say that Alice is a 1000 and Bob is a 10? Or maybe Alice is a 5 and Bob is a -4? Or Alice is a 7.8 and Bob is a 7.5? The choice is arbitrary.
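One way to make the arbitrariness concrete (the population sizes and both scales below are invented): with differently sized worlds, *total*-welfare verdicts flip under an ordinary re-zeroing of the scale, even though the ordinal facts never change.

```python
# Same facts ("bliss beats okay"), two equally defensible numerical scales,
# opposite total-welfare verdicts. Averages keep their order; totals do not.
pop_a, pop_b = 100_000, 100_000_000  # blissful few vs okay many

for bliss, okay in [(9, 3), (5, -4)]:        # two arbitrary scales
    total_a, total_b = pop_a * bliss, pop_b * okay
    print(f"bliss={bliss}, okay={okay}: {total_a:,} vs {total_b:,} "
          f"-> World {'A' if total_a > total_b else 'B'} wins")
```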
That doesn't seem to help at all. How much you'd pay to avoid a 1% chance of death is mostly a function of your net worth, not your happiness.
>>Low fertility must be regarded as one of the worst things in the world at present if we regard future possible people as having equal moral worth. If all that matters is the consequence, lowering fertility should be treated like a massive wave of tens of millions of infant deaths. This should be extremely concerning.
How are you defining the "utility" or "welfare" that is sought to be maximized under this proposed framework?
It seems to me that you can't apples-to-apples infant deaths with infant non-births. While in a pure population sense the total number of humans may be unaffected by the distinction, infant deaths have large impacts in terms of personal suffering and anguish, both the pain experienced by the dying infant and the pain experienced by the community (mom/dad/grandparents/siblings/family/friends) impacted by the loss. Non-births may *sometimes* involve similar impacts (encephalopathy or another birth defect, for example, that ends a wanted pregnancy in the womb), but in the global fertility-rate context most non-births are just the result of more use of condoms or other birth control.
For example, I could have produced about one human per year over the last six years with my current partner, and we have produced zero. That's six humans not born, with no impact on our happiness, and nothing on the happiness/non-happiness meter compared to what we and our extended friends and family would have experienced if we'd had a kid who died during the same time period. Let alone six.
That would seem to fly in the face of the interchangeability of infant death and non-birth, but it's hard to say for certain since your argument starts from a consequentialist perspective but I'm not sure of the terms in which the consequences are being evaluated. How do you define the "utility," "welfare," or what have you that is the target for maximization here?
I think that it is very unlikely that the majority of people would choose (let alone afford) to reproduce in this fashion in the near future even if it (i.e. iterative embryo selection) were technically feasible.
Why do you say that? C-sections used to be only a few percent of births at one point, but now something like half of all births in some places are by C-section. IVF used to be only a few percent, but now in Denmark it accounts for something like 20% of all births. Why wouldn't this technology also be like that (maybe rich people will keep it for themselves?)? Also, I don't know if the 250 IQ thing is an exaggeration or not, but to put that into perspective, an IQ of around 205 is about 1 in a trillion, and you also wouldn't be selecting solely for IQ but for other traits like conscientiousness. Moreover, there's probably a reason we don't all have an IQ of 200 and aren't drop-dead gorgeous, extraverted, hard-working leader-types (which is most likely what the majority of parents would want for their children). We are entering dangerous territory (already have, in some ways)...
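For what it's worth, the one-in-a-trillion figure roughly checks out under the standard normal model (mean 100, SD 15), though the model itself is dubious that far into the tail:

```python
# IQ 205 is 7 SDs above the mean; the upper-tail probability at z = 7 is
# about 1.3e-12, i.e. roughly one person in 780 billion.
from scipy.stats import norm

z = (205 - 100) / 15
print(f"1 in {1 / norm.sf(z):,.0f}")
```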
Just reporting my reactions, but I find myself viscerally repulsed by the idea of embryo selection--of *intentionally creating* 'surplus' human embryos in order to destroy all but one of them. I have to assume that a significant fraction of other people share this feeling.
Note that I'm *not* viscerally morally-repulsed in the same way by early-term abortion. Nor am I repulsed by CRISPR-style genetic engineering--it's *not* a matter of disgust at it for being 'unnatural'.
Many Western European countries, while allowing early-term abortion and IVF, ban the deliberate creation of 'excess' embryos for the latter. I wonder if Denmark, where IVF has apparently achieved such widespread use, does this?
He said the near future. It's always the rich who can afford access early on. And that access may allow them to gain a power/capital foothold that will put them out of reach even if the underclass becomes more intelligent, at least for a while.
What's near future? It took like one generation for Danish IVF use to reach 1 in 5 kids. And this is just for regular conception; IVF at its core doesn't necessarily confer many serious advantages, unlike genetic engineering or advanced embryo selection. Maybe I'm just trippin' since I haven't run any numbers myself, but I think I'm also noticing that the time it takes for something to make its way down to the normies/masses from obscure groups/elites is decreasing. Methinks this tech is going to be a bit different from the others. But if the elites manage to do as you said, there will be a new caste system, which is not too unusual given that throughout history most civilizations that got to the point we have also developed a type of caste system, so we are not unique in this way, maybe only a bit faster. Ah well, we'll see what happens.
Yes, that's the problem. It will create a highly concentrated elite population.
Many of the 1% put in huge effort and expense decades in advance to maximise the possibility of their children getting into the Ivy League. It's all but certain large numbers will select for embryos likely to be not only of elite intelligence but also better looking and healthier.
And while this happens, many people on the left will still be telling us we need e.g. more school funding to help reduce inequality, still utterly convinced that intelligence isn't meaningfully heritable.
Would this really be a problem? There seems to be an implicit "zero sum" assumption here. If a small percentage of the population became smarter and more attractive, would that really make the world net-worse?
For attractiveness, sure, I could see this making the world net-worse. Female hypergamy would continue to intensify, worsening a whole host of social problems. But if smart people got even smarter, it seems like that would greatly accelerate scientific and technological progress.
Why wouldn't embryo selection for IQ be outright banned as soon as it is viable? Seems most political movements would be strongly ideologically and memetically motivated to ban it, including leftists (favors the rich initially), greens (unnatural tech intervention in sacred human bodies) and conservatives (unnatural).
Actually, it’s already basically legal in the U. S. (see Genomic Prediction). The current limitation seems to be that we don’t know all the genes responsible for high IQ in order to be able to screen for them. But you’re right, it does raise important ethical issues with regards to fairness, justice, risk of discrimination, etc.
I think aesthetics and morality are sort of inseparable. I agree you need more assumptions than just "maximize happiness" to prove that the human race surviving is good, but I am willing to make (some of) those assumptions.
That dumb matter has created beings that can even consider this question is a miracle beyond imagining. It would be a tragedy beyond imagining if they were to die out so early in their potential lifetime, or even if they were to become moribund and muddle through as 21st-century idiots for a million years.
Call that aesthetics, if you like. If the prospect does not appall you, we have nothing to talk about.
Do you have a rational argument for your statement that "Individuals should be cared for because suffering is bad"? Or is that just a matter of taste on your part?
Perhaps you do think moral propositions can be deduced logically, in which case it would be interesting to hear why you think "suffering is bad" is one that can be logically deduced, but "humanity going extinct is bad" is not.
There's an asymmetry to your reasoning. You take "suffering is bad" as axiomatic, but it seems you don't take "joy is good" as also being axiomatic.
Beyond that, I think you have gone astray by selectively intellectualizing morality. You've just said that you don't have any logical arguments for your moral position - you just assume what you want to assume. But then you dismiss others' moral judgements by saying "I can't think of an argument" for that.
Doctor Mist's view - "That dumb matter has created beings that can even consider this question is a miracle beyond imagining" - is a sense of wonder at the goodness of life that should not be dismissed on the basis that you can't think of an argument for it. Even if we suppose that logical argumentation is relevant here, do you really think that you have thought of all valid arguments, so your not thinking of one for this view is a decisive point against it?
> What I don't see is the offering of a cogent alternative moral foundation from which one could deduce the badness of human extinction, except relativism
If you haven't already read A. J. Ayer you'll find him reaffirming.
https://plato.stanford.edu/entries/ayer/
I'm not an Ayerist, I think there are some cogent alternatives to choose from where you scale up from values that appear universal. But agree this is nontrivial work and sympathize with the appeal of Ayer and the error theorists out there.
You don't think "person alive + happy" is better than "person does not exist"? Personally I think it is good that I exist and have a happy life instead of not existing, and IMO it would also be good if future people also existed and had happy lives rather than not existing.
I'm not sure why you think my preference to go on living is important/valid, but my preference to have more humans in the future is not.
I don't think I follow this attempt to take the perspective of the world - "the world", as you say, is obviously not an entity with preferences. Shouldn't I try to implement my own preferences? (Isn't that what it means to have preferences?)
Really? In my impression it's not hard to find people who think that second-generation immigrants aren't fully British/French/German/etc.
I agree. Making belonging to a nation binary is a gross oversimplification. I am myself the child of immigrants, and I've never fully identified with the nation I grew up in, and I don't expect others to fully identify me with that nation, even though I pass as belonging to that ethnic group.
But my point is that it's considered a right-wing view, not the mainstream view.
Anyhow, having thought a bit about it I may have been exaggerating and simplifying, so I'm sorry, I'm deleting that comment.
Some Asian nations are like Japan, and to a less extent some European countries, but most states in the world are multi-ethnic. But they are multi-ethnic in a somewhat similar way as the old Austro-Hungarian Empire, rather than multi-ethnic in a present-day US or Canadian sense.... Think of most Latin American countries, India and Indonesia in Asia, and definitely most African countries - relevant since Africa is the "coming continent". (The 22nd will likely be Africa's century.)
The 22nd century will be the century of AGIs or post-humans (as will most of the 21st)
Probably agreed, but with a large uncertainty about the likely date of the switchover. _Maybe_ in part of the 21st, but AI has progressed more slowly than expected before, and might again.
Blessed if you consider GDP growth to be more important than having a unique and cohesive culture, say. But I'm sure at some point all this atomistic materialism will eventually make people happy.
Pretty much. "Empowering diverse communities of people to achieve change we can believe in"-type stuff never really goes anywhere.
I guess some sort of Caliphate is technically option 3?
You could have a Roman-style civilisation-state, where being Roman (Canadian, Japanese...) is a matter of following Roman (etc.) culture rather than being genetically decended from the founding population.
Then again, the people who believe in increased immigration are generally against making immigrants assimilate to the host country's culture, so in practical terms I'm not sure that's an option at the moment.
If you parse the corporate/quango language, then a world religion, say, is quite literally "[ethnically/geographically] diverse communities of people [trying to] achieve change we can believe in." It's not as meaningless as all that.
I do think the one exciting possibility of a globalised future - as against plenty of potential gloom - is human assortment based on shared values/ideology rather than (if you'll forgive a stray personal opinion) the dumb default of ornamental culture, ethnicity, and nationhood.
Why would you expect those shared values/ideology of a globalised future to be any less dumb or ornamental than what you call the dumb default? You do not like what people have constructed in terms of culture/ethnicity and nationhood so far. Fair enough.
But since people have made that what you don't like already, why is it any better if it's ethnically and geographically diverse?
"Ornamental culture" is the bits that are left of cultures once they're fed through the cosmopolitan meat-grinder. If you're going for nationalism vs atomic individualism, culture would be the major determinant of behaviour and attitudes, which are the bits now left of atomic individualism.
To be clear, I think atomic individualism's probably underrated, and I'd really doubt it's reversible for people who've been absorbed by it. I don't think intentional communities organised around shared interests would work though, as no-one's going to surrender their autonomy to a group that they're able to leave (which is what molecular collectivism entails, and why it would probably suck for someone who hadn't been raised in it), so it'd just be a community when it's convenient.
"when our farm teams, so to speak, aren't producing and don't seem likely to."
Part of the reason for that could be the fact that massive immigration causes housing shortages though.
If you don't build housing. If the immigration system were still routing new arrivals to the wilderness to go build their own log cabins and homestead some land, this would not be an issue.
There's only so much space in a city with hundreds of thousands or millions of people, though; even if you build enough housing, you'd still get congestion, long commutes, and unaffordable housing in the center.
Sure.
Japan is famous for this: you can live there for decades and never really be considered "Japanese" - but if Japan and America are the two ends of the spectrum, I'd guess a lot of the world is closer to the America side of the spectrum - most countries have seen pretty significant shifts over time.
Consider China: while some might think of it as predominantly ethnically Han, it's been a lot more diverse than that for centuries (and in fact China's fear is more the opposite: they're actively trying to keep their diverse country united).
The two countries you’ve exempted there are Brexit Britain and France with the strongest far right party in Europe. The U.K. is multinational of course, but that’s likely to break up. In any case preserving say Spanish culture, and its distinct regional culture, is an important task. The US is a blank slate which is probably culturally improved with any level of immigration, but Europe is already culturally diverse. Much of what diversity exists in America is due to immigration.
Beyond that, the causes of immigration to Europe are often wars and refugee crises, often caused by US meddling. Which causes strains. And of course the US was building a literal wall under the last administration. It doesn't look like the idea of an open, pro-immigrant society is universal.
It’s pretty dubious to call the US a nation state? What’s the nation? It’s definitely a state of course.
Yeh, the qualifier was doing a lot of work, though. There were plenty of examples to pick. Ireland has no anti-immigration parties of note.
It massively depends on what are and aren't "far-right parties."
The UK's actual neo-nazis are (perhaps ironically) really ill-disciplined and disorganised, so don't presently have much of a party. When they last did (the BNP), it was pretty small and peaked at 6.2% of the vote in a European Parliament election, slipping to 1.9% in the general election the next year.
UKIP, and then the Brexit Party (both now more or less defunct) peaked at 30.52% in the European Parliament and 12.64% of the vote in a general election.
Virtually no-one in the UK describes pre-Brexit UKIP or the Brexit Party as far-right. The BNP et al are extreme even by the standards of the European far right more generally, being roughly equivalent to David Duke in the US.
The French National Rally is huge now, but are either on the left-most boundary of the far right or are else pretending to be.
Hungary and Poland are a bit more complicated - Jobbik were very far-right but have done a weird 180, and Fidesz have moved a long way to the right whilst in government. Law and Justice in Poland are less extreme, at least on racial issues, than the National Rally are now.
You probably need to multiply the support of each potentially far-right party by some weight for how far right they actually are to get a good read on a country overall.
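A minimal sketch of that weighting, with entirely invented party names, vote shares, and extremism weights (assumptions for illustration, not real polling data):

```python
# Toy "weighted far-right support" index: vote share times a subjective
# 0-1 score for how far right the party actually is.
parties = {
    "Party A (genuine neo-Nazis)":  (0.02, 1.0),
    "Party B (national populists)": (0.15, 0.4),
    "Party C (borderline case)":    (0.25, 0.1),
}

index = sum(share * weight for share, weight in parties.values())
print(f"{index:.3f}")  # 0.02*1.0 + 0.15*0.4 + 0.25*0.1 = 0.105
```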
The UK's also barely multinational; the English/Scots/Welsh/Ulstermen/Irish/Cornish all speak the same language, have basically the same culture other than a few quirky traditions (most of which were "revived" in the late 19th century).
The more relevant point is that "British" as a term of citizenship is accepted by everyone outside the far right (probably going back to the idea that everyone who lived in the British Empire was equally a British subject), but it's really rare to describe non-white people as English, or for them to describe themselves as such.
Sure. Not multinational. Just one constituent nation likely to break away and another that was at war just a few decades ago, which might also break away.
You are right on the second count, though: British isn't an ethnic group.
That's dubious. Scottish independence any time soon seems fairly unlikely (Metaculus puts it at 19% by 2030). By contrast, Metaculus is more bullish on Northern Ireland having a referendum by then, but given that can only happen if a majority of the population support reunification in opinion polls (currently at about 30%), that also seems pretty unlikely.
Sorry, who told you that about Italy? That's plainly not true. Also, for that matter, it seems to me to be plainly not true for any Western European country.
Yeah, it's totally true that the European idea of assimilation goes way, way deeper than the American idea of integration. Unfortunately, it's also true that Europeans deep down consider nationalities to be mutually exclusive, so somebody might be perfectly assimilated even by the very high European standards and still be considered a foreigner if they happen to refer to themselves by the nationality of their parents.
But this absolutely does not mean that it's impossible to really be accepted as a national; it's just harder.
(Also, there are some shortcuts, like adopting a regional/urban identity. In that case the recognition of the national one comes as a bonus)
Ah, you're the guy who's convinced that caustic replies and mocking memes on Twitter are an existential threat to modern civilisation. No offence, but you don't exactly strike me as the most reliable person to know how the average Italian feels, let alone the housewife from Voghera.
I know the grass is all yellowish and ugly this time of year, but it might still do you some good to touch it a bit.
It's 92% Han, and Han are the overwhelming majority in all positions of power and prestige (government, business, academia, media).
They have a huge number of ethnic groups, but it's irrelevant if, all combined, they're still much smaller than the majority. Practically speaking, a 50% white, 50% black country would be vastly more diverse than China with its numerous ethnic minorities.
They're also diverse in a very different way to somewhere like the US - most of the minority groups are in fringe regions like Yunnan, Tibet, Turkestan etc, with a distribution more like Native Americans than urban immigrant groups.
I would say China is exactly like Japan. Highly racist - not necessarily in the sense that they consider other peoples inferior, but just that they see them as separate races with intrinsic differences. Also tied to the belief that they have a special culture etc. that can't possibly be comparable to others. The chances of these two countries resorting to mass immigration to solve labor-shortage problems are close to zero, even if it only involves somewhat proximate cultures, say East Asia. Africans? Forget it. At best there will be some attempt to rally the diaspora, similar to what Japan did in the 80s/90s with Japanese-Brazilian guest workers. China may also import some number of foreign SEA brides due to its shortage of women. That's it. I have less knowledge about Korea, Taiwan, etc., but strongly suspect it's the same spiel.
How would this happen? Most rich people in rich countries have enough money that they could easily afford more children if they wanted, they just don't. How does carrying capacity affect fertility decisions? And why didn't Amish people or Orthodox Jews get the message?
It seems to me that the Amish would reach carrying capacity pretty fast, because they cannot live in urban areas that rely on electricity. The carrying capacity of the countryside is much lower.
I'm not sure how the Amish work. If each family has seven kids, assume three of them are boys and will need farmland to be proper Amish. One will probably be able to take over dad's acres, but that leaves two boys who either have to buy farmland or become hired hands. Buying farmland, even for highly efficient Amish farmers, can be expensive, and it isn't clear that hired hands are as likely to sire seven children as farmers.
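A toy version of the succession arithmetic in the comment above: each farm family raises roughly three farm-bound sons, one inherits the home farm, and the other two need land of their own. All numbers are purely illustrative, not demographic data:

```python
# Per-generation demand for new farmland under the comment's assumptions.
def land_demand(generations: int, farms: int = 100) -> None:
    for g in range(1, generations + 1):
        sons = farms * 3                 # ~three farm-bound boys per family
        inherit = farms                  # one takes over dad's acres
        new_farms_needed = sons - inherit
        print(f"gen {g}: {inherit} inherit, {new_farms_needed} must buy land")
        farms = sons                     # assume they all end up farming somehow

land_demand(3)
# gen 1: 100 inherit, 200 must buy land
# gen 2: 300 inherit, 600 must buy land  -> land demand triples each generation
# gen 3: 900 inherit, 1800 must buy land
```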
Lots of Amish are not farmers and have found other occupations. If they need to travel somewhere for work, they can ride in a shuttle bus (but not drive it).
The Amish (and the Plain Mennonites - see my explainer https://www.datasecretslox.com/index.php/topic,3429.0.html) are not nearly all farmers--a lot of them are tradesmen of one sort or another. But the dynamic of needing land and markets means that new communities start all the time: there were very few Plain churches in Tennessee and Kentucky in 1960, and now there are Plain churches everywhere in those states.
Immigrants with less money than average Americans have more kids than average Americans (and often more kids than they would have back home). It's not economics giving people low Darwinian fitness, it's a novel culture.
It's economics but it's class-dependent. I make about 3x the UK's average income (that's where I live), but I couldn't afford to have kids on that because it's not enough to educate them, house them etc *to the required standard.* The big problem in most developed countries is people being worse off than their parents, so they can't afford to raise their kids in a manner they deem acceptable. If your only concern is that the state won't take them away, then almost everyone can afford kids, but that's not the financial level people make the decision on.
If downward mobility is unacceptable, then that will of course reduce fertility. As Greg Clark wrote in "A Farewell to Alms", the modern English population is descended from the downwardly mobile children of successful farmers.
Isn't your first point predicting that fertility should correlate with income? Because this isn't true, it's the opposite. (At least in the US, I didn't check other countries.) I can certainly believe that money influences the decision to have a child (or more children), but there must be another factor correlated with income that works in the opposite direction and overwhelms it.
You left out by far the most important factor: having children carries positive *status* among high-fertility groups, or at least the failure to produce many children carries negative status. They are deliberately pro-natal.
"Lo, children are a heritage from the LORD, and the fruit of the womb is his reward."
Among the Plain, this is facilitated by the fact that they have relatively few other markers of status -- no fancy dress, homes, cars, etc.
"the sort of place where having kids *gives* you money since they can do manual labor for you"
This is a meme that always seems to pop up, but it's biologically absurd that the average child would ever have a positive NPV to his parents, any more than laying an egg conveys a positive NPV to the hen or the apple to the apple tree. Though I do believe it's a meme that, while false, had positive survival value in past societies. Kids were less expensive in agricultural societies, but they did not make people materially richer.
Don't we see population collapses like that in hunter/prey dynamics sometimes?
My understanding is that adult males are particularly likely to die during famines because they require more calories to survive.
Wouldn't they also be better at securing their food, though (by violence if necessary)? It's not as though it's automatically distributed equally.
https://www.econlib.org/archives/2014/04/feminizing_fami.html
For thousands of years, the only predators that humans had to worry about were cities. It seems like it's pretty easy to run away from cities. But maybe it isn't any more.
Aren't you forgetting about worms, viruses, bacteria, and other parasites?
I think this is almost right, but maybe two items to add:
1) Humans are prediction engines, and prediction increases (not necessarily in accuracy, but in the number and consensus of predictions) with education and communication. We only have to imagine children starving in the future to stop having them, because we now have the ability to control our own fertility. So if you change the population curve to something like "the projected average human consensus on how worth showing up for the future will be," then I agree.
2) Carrying capacity is a function of our technology. With a cave, a fire, and a couple of spears, maybe that's 150. With agriculture, maybe a few thousand. With fertilizers, modern techniques to get water, etc., millions. We could support far more people than we do now if we started asking ourselves what we'd need in order to grow.
I see it as humanity’s responsibility to act as the reproductive organ of the Earth. We need lots of us to go out and do that.
Having kids doesn't pay off even in less developed economies, people have them because natural selection has primed us to want them.
https://www.econlib.org/archives/2009/10/was_having_kids.html
As long as some culture exists which ignores the Demographic Transition, they will be able to expand their population until reaching Malthusian limits (just like other species of animals).
That only applies as an equilibrium over long enough time spans.
If conditions change quickly enough, Malthusian limits aren't reached.
Eg like with today's population in rich countries.
There are populations in rich countries which are still growing: subpopulations that have separated themselves from a culture deleterious in Darwinian terms.
Yes, and those populations haven't reached any Malthusian limits. And might never reach them, if things keep changing.
The US is one of the least Malthusian countries around, and those subpopulations started as very tiny proportions of the US population.
Yes. And if nothing changes about technology or the future, those subpopulations could eventually hit Malthusian limits.
But I don't think technology or culture will oblige and stand still.
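A minimal logistic-growth sketch of this exchange, with arbitrary parameters: if the carrying capacity K stands still, the population closes in on it; if K keeps rising faster than the population can grow, the Malthusian limit is never actually reached.

```python
# Discrete logistic growth toward a carrying capacity K that may itself
# drift upward over time (e.g. improving technology). All parameter
# values are arbitrary illustrations, not demographic estimates.
def fraction_of_limit(years: int, pop: float = 1.0, K: float = 10.0,
                      r: float = 0.03, K_growth: float = 0.0) -> float:
    for _ in range(years):
        pop += r * pop * (1 - pop / K)  # logistic step
        K *= 1 + K_growth               # carrying-capacity drift
    return pop / K                      # how close to the limit we got

print(fraction_of_limit(500))                 # ~1.0: static K, limit reached
print(fraction_of_limit(500, K_growth=0.04))  # ~0.0: K outruns growth forever
```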
The agricultural bit there is dodgy since it ignores child labour.
Not too dodgy, child labour doesn't typically provide more value than caring for a child consumes.
"Child labour + support in old age > cost of raising child" does not require "child labour > cost of raising child" (your post I'm responding to) nor "support in old age > cost of raising child" (Caplan's paper).
Yes, that's true.
Children are helpless and provide no labor when very young. Resources are flowing from adults to children then. Once children grow up enough to do some child labor (generally much less productive than adults)... their parents are STILL producing more resources than they consume.
"For most of human history children were a net positive in economic terms."
I don't believe that. People were not competing with each other to adopt children; instead, the tradition was to designate someone you trusted very much as a godparent to look after them. Single mothers were rare in pre-industrial England because they just couldn't support a child on their own.
Genuine question here, no snark.
If you're convinced the technological singularity is approx 30 years away, and that it's likely to be ~bad for humans (AI take over etc), then why are you trying for a baby with your wife?
I don't think this is the same kind of antinatalist point of "the world is bad, why bring more life into it". It seems like you seriously believe that something is different about this point in history, and to me it then seems a bit odd that you'd want to plunge someone new in at the deep end just when the robots take over!
How do you reconcile this?
PS: forgive me if I've misremembered about the baby part, I think you said that a while ago!
I don't think he thinks it's necessarily going to be bad. But definitely different.
There's some chance I'm wrong about a singularity, there's some chance we make it through the singularity, and if I'm wrong about both those things I'd rather give my kid 30 years of life than none at all. Nobody gets more than about 100 anyway, and 30 and 100 aren't that different in the grand scheme of things. I'd feel an obligation not to bring kids into a world that would have too much suffering, but I think if we die from a technological singularity it will be pretty quick. I don't plan on committing suicide to escape, and I don't see why I shouldn't bring life into the world either.
Isn't that always the argument for adopting kids? Does the singularity change it at all?
Don't tell anyone, but I'm actually not a perfect utility maximizer.
Having kids is the perfect utilitarian decision. Either we'll have a singularity and everyone dies, in which case our one or two extra generations are rounding errors in the grand scheme of things, or there isn't a singularity and the best thing we can do is to continue the human race.
But having a kid would maximize your own utility as compared to adopting. It seems you're still being a utilitarian, just in the sense of "what makes the world most how I want it," not "what makes the world most like the average of all human values wants it."
Ha ha!
If you don't actually want to adopt a child, you're probably not maximizing utility by adopting one. Being brought up by an adoptive parent who didn't really want you probably makes for a crappy upbringing.
I used to want to adopt when I was younger, but I've come around to feeling otherwise as I've come to think that it's very likely it would give me poor chances of being matched with a child I'd actually relate to, and I don't think I'd be a very good parent to a child I related to poorly. Some people might, and I think they're better candidates to adopt than I am.
I think a utilitarian attitude encourages actually crunching the numbers where possible (even if only made up ones) to check on whether uncertain cases are likely to be worthwhile. But in general, I think we should start with a default of extreme skepticism that choices which really fail to make us happy are worthwhile. What are we trying to trade off that happiness for, and can that trade work at scale?
"there's no Effective Altruist case for having kids of your own"
Highly doubtful. You don't think there's any difference between a world with Scott's kids and lots of people like them, and a world full of foster kids raised by Scott?
People with eugenic impulses do tend to think it is people like them whose genes should propagate through time and people unlike them who shouldn't, but I don't think there's a lot of reason to believe that people following these impulses to their logical ends results in a world that is better off in terms of advancing general welfare.
There is a total utilitarian case for having more kids, which has always struck me as more sensible than average utilitarianism. Yes, I embrace the "repugnant conclusion" of massive numbers of people less happy than us. https://www.overcomingbias.com/2009/09/poor-folks-do-smile.html
Yeah because utility maximization is a pretty atrocious ethical theory to try and live your life by (which is why no one does).
If so, Effective Altruists really need to remember that Effective Altruists are moral subjects as well as actors.
> Question: why not adopt a foster kid instead?
Underappreciated distinction: fostering and adoption are very different things, with different backgrounds, processes, and results. A fostered child is very likely to have been removed from their previous environment by state services after multiple years of neglect, and placed with a volunteer on a presumably-temporary basis, with reunion with the biological parents the nominally preferred end goal. Something like half get that reunion, with roughly a quarter ending in adoption (and not always adoption by the foster parents).
While there is a shortage of foster homes in the US, there is definitely *not* a shortage of people willing to adopt infants less than a few years old. This is where homo economicus would pipe up about the inevitable results of a market where the price is set at zero by fiat, but I'm not quite cold blooded enough to endorse that position.
You have to be a special kind of masochistic to want to put yourself through that.
Know this might be overly personal and at the same time highly meaningless coming from an internet stranger, but good for you and your wife. That takes real courage. There’s certainly a possible future worth showing up for.
All good points. Your last part about suicide reminds me of Tolstoy's Eastern fable, escaping the dragon by clinging onto the twig.
The chance we make it through a singularity is a pretty convincing argument for having kids imo, since life in a good-end singularity is likely to be far higher-utility than now (to the extent that it probably dwarfs the other worlds in weight, even if the chance of being in this world is only a few percent), and there's always some chance you will die or otherwise be rendered unable to choose to have kids before you know you're in that world, which would deprive them of that utility, since they wouldn't exist.
Not to mention there are perfectly selfish reasons to have kids as well -- they're often great at coming up with things to do and injecting variety into the dullness of everyday life.
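A toy expected-value version of that argument; the scenario probabilities and utility numbers below are invented purely for illustration:

```python
# Expected utility (for the child) of being born, across three made-up
# scenarios. The point is structural: a good-end singularity world can
# dominate the sum even at a few percent probability.
scenarios = {
    "no singularity, ordinary life":   (0.50, 100),
    "bad singularity, ~30 good years": (0.45, 30),
    "good singularity, vastly better": (0.05, 10_000),
}

assert abs(sum(p for p, _ in scenarios.values()) - 1.0) < 1e-9
ev = sum(p * u for p, u in scenarios.values())
print(ev)  # 50 + 13.5 + 500 = 563.5, mostly from the 5% good-end world
```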
"I'd rather give my kid 30 years of life than none at all."
What's the cutoff point with this philosophy? If you knew your child would die at age 15, would you go ahead and have the child? What about percentage of suffering? If your child were to suffer for, say, 30% of his or her life, would you go ahead and have the child? Does it matter whether the suffering is distributed throughout the life, stacked at the end, or towards the beginning?
I think suffering and death work completely differently. I'd be nervous about bringing a child into the world who would face an abnormal amount of suffering, but I don't think it would be morally wrong to bring a child into the world who would die at 5 or 10 or whatever (though it might be unfair to my family, since they would have to grieve them).
I can't remember what I thought at 10 or 30, but I'm 37 now and if I got hit by a truck tomorrow I would be happy to have lived even for this relatively short period. I think if I had to suffer a lot then I would be upset and regret coming into existence.
Agreed on the big difference between suffering and death. I had a friend who got hit by a car and died instantly, and while I think it's horrible and I miss him, I am also a bit jealous that he didn't have to face death or go through the dying process.
It seems as if being OK with a child's death depends on the child meeting their doom suddenly and without prior knowledge of it, because any knowledge of upcoming demise would cause untold suffering. Does this mean you think the AI apocalypse will be instantaneous? Because any period of time between the recognition of impending doom and the actualization of it would be utterly terrible. It could potentially last years. Also, what makes you so sure the AI apocalypse will be more like a paperclip maximizer and less like I Have No Mouth And I Must Scream? I can see something like that happening based on a misaligned goal of the AI to keep people alive.
" because any knowledge of upcoming demise would cause untold suffering"
seems like an awfully strong take. Unless you expect a particularly terrible form of demise (e.g. no mouth), isn't this literally what nearly every single person eventually encounters? Barring some singularity or step change in the advancement of medicine, there's a >60% chance I'll die 35-45 years from now, quite possibly in a pretty unpleasant fashion; for my parents it's more grim, for the surviving grandparents even more so (one is likely to die in the next year or two and is already in pretty bad shape, quality-of-life-wise, from mouth cancer).
Most models of the AI apocalypse have it happening very fast, since if one could see it coming years in advance it would be relatively easy to stop (that specific AI). Based on the speed of response to the pandemic, I don't expect governments to admit the threat is real until there's less than 24 hours left to live :/
(N.b. I personally am not a doomsayer about AI, in that I don't think it's more likely than not in my lifetime, but much like nuclear war even small odds are very bad)
What about quantum immortality arguments, which imply any conscious being brought into existence has a chance of eternal suffering in their immortality? Seems like those sorts of arguments would dominate any kind of Pascal's-wager-type considerations.
I'm pretty sure utility is undefined when considering an infinite multiverse scenario, and no conclusions can be drawn.
For an incomplete example, suppose you conclude that the possibility of infinite suffering through quantum immortality is morally unacceptable. You are then duty-bound to maximally extinguish life to limit the number of universes where suffering can exist (infinity - human civilization), which means you must now weigh the probability of eternal suffering from creating your child (infinity * (human civilization in our universe - your hypothetical child and their descendants)) against the extinguishing of all suffering should your child or their descendants manage to figure out how to extinguish life in all universes (infinity * probability of human civilization, influenced by you or your descendants, figuring out a multi-universe WMD and using it).
I'm not sure exactly what the math would be but I think some infinities would turn out to be larger than others.
There are ~20 influential interpretations of quantum mechanics, and at least one implies quantum immortality; assuming each interpretation is roughly equally likely, that makes the immortality infinity quite large, definitely a lot bigger than P(the extinguishing of all suffering should your child or their descendants manage to figure out how to extinguish life in all universes).
But yeah maybe that kind of calculation just isn't possible in principle, not sure if that result would transfer to all types of Pascal's wagers.
Then the decision you actually make doesn't matter, because you have to consider all possible multiverses, including the ones where you make the opposite decision.
“What's the cutoff point with this philosophy? If you knew your child would die at age 15, would you go ahead and have the child?”
Honestly, yeah.
Similarly, I would prefer being born and living to 15 over not being born at all.
As someone with a nine-year-old, I would say just about now is the time where, if they suddenly died in an accident or had some devastating fatal illness, I would feel like they had gotten to cash out some of the investment in them, and to live a bit of their own life with their own projects.
A five-year-old isn't there yet. If he were to die tomorrow and you could somehow magic it all away, I might (if not for the emotions/memories involved).
But by the time they are nine, they are little people with little interests and projects and a stable personality. They have started "living life themselves," and are less just a "pet" being trained by their parent.
Kudos for tackling the question directly. I'm really interested in what number people would come up with if forced to give an answer, might lobby to get it included in one of the reader surveys next time those come around.
I think a "pet" (and even a real pet) could be happy that they live (even if they couldn't put it that way). On the other hand, there are gloomy scenarios that can end or threaten the lives of humans of all ages, and many of them can also be very traumatizing.
The Darwinian response is "long enough to reproduce."
Isn't the key point here that Scott doesn't KNOW 30 years or 15 years or any other number. It's all speculation. Would the prospective kid want a shot at 30 or 60 or 90? I think YES.
Would you advise your child to avoid having children of his/her/etc. own? After all, by the time your child is grown, the Singularity will only be a few years away, right?
People who argue themselves out of having children are an excellent example of maladaptive intelligence.
I mean, the odds should be clearer by then - I expect the decision will seem obvious one way or the other.
Yup. Also, kids are cute & fun. You get to watch all the early Pixar films 100 times. (The good ones!)
I’ve never understood the perspective that “there will be unforeseen challenges in the near future so it’s best to not bring any humans into existence to experience it”.
Humans are always facing new challenges. I’m glad my parents were born, even though the future at the time of their birth was radically uncertain and dangerous.
Heck, same story with my grandparents, and my great-grandparents. Who in their right mind would have babies after the events of the Great War, or the Thirty Years' War, in the midst of the Cold War, and just as the future was looking to be even worse? Well, I'm quite glad people were short-sighted enough to do so.
Erasmus said something similar in "In Praise of Folly".
"Who in their right mind would have babies after the events of the Great War, or the 30 years War, in the midst of the Cold War, and just as the future was looking to be even worse?"
True. I suspect (but don't really know) that every generation sees their times as uniquely dangerous and momentous. I suppose that there are uniquely bad times to live, but I doubt that anyone can forecast them with any accuracy, certainly not the decades in advance that one would want if it were influencing one's fertility decisions. (I, personally, am childfree, but for reasons that have nothing to do with the historical moment.)
If you're up against Malthusian limits, then you are in relatively bad times.
Agreed. There are other possibilities as well which are less predictable: wars (particularly long ones like the Thirty Years' War), plagues, extended periods of bad weather (e.g. the "Little Ice Age"), unpredictable crop failures (e.g. the "Irish Potato Famine" - though these usually wouldn't span most of a lifetime), particularly bad rulers (especially if they can sit on the throne for decades).
I might not be that smart, but I see our current time as fairly stable and not very momentous. A good time to have kids (I have two and hope to have more), invest for the long term and make decisions for the long term. As someone who lived through the 80's, I'm always struck by how similar the 80's and 90's feel to now, whereas the 50's and 60's feel like a completely different historical epoch.
Hmm... I don't see the current time as either particularly stable or particularly unstable. I was too young to remember the Cuban missile crisis, but there were a number of public nuclear threats after that - and several near accidents only revealed years later. Putin's nuclear threats this year look approximately comparable, perhaps marginally less worrisome.
There are always potential long term threats being aired. Currently global warming has the spotlight - at least it has reasonably well understood physics! But in general, the probability and severity of any long term threat is very hard to assess.
As I mentioned upthread, I, personally, am childfree, but for reasons that have nothing to do with the historical moment. (I dislike hassles and time sinks, and children add an entire category of hassles and time sinks.)
If it's 30 years, you might want a bunch of kids in their twenties to help fight the robots. They might be the difference between victory and defeat.
You may see it as plunging "someone new" into the apocalypse. Or you might also see it as plunging in a compound being of yourself and the other parent. This is just committing more of yourself, insofar as you define yourself by your membership in and ownership of your family.
Has anyone written extensively about how AI would, uhh, kill us all, and why it might choose a quick painless method over something more gruesome but perhaps less resource intensive?
A common argument for a quick AI victory is that a powerful intelligence is much more likely to want to maximize its probability of success than to minimize resource usage. A plan that involves slowly and painfully exterminating humans is much more likely to fail than simply coordinating a nanobot swarm to release a neurotoxin that kills everyone instantly, before we even realize we have anything to worry about.
The first 5 minutes of Idiocracy sum it up.
See section 7 on dysgenics!
I’ve a feeling we’ve dropped 2.5 points or more already in the west, hidden by the Flynn effect.
I'm having difficulties parsing this. Do you mean "would have dropped 2.5 points if not for the Flynn Effect", or are you saying that the Flynn Effect is measurement error? Or what?
Yeah, that's a good point. IQ is, after all, the only thing being measured, and it is either dropping or it isn't. I believe we have dropped in real intelligence (g), but the Flynn effect is measuring artefacts that are not that important to the functioning of society (spatial ability and so on) and is missing some of that drop. No hard evidence.
Or you could argue that people are getting slightly dumber, but environmental obstacles to their actual maximum are being eliminated quickly enough that the average is going up.
Say the average person could max out at 110, but social/environmental issues kept the measured average at 100. Then ten years later the average person can only max out at 108, but lower barriers mean most get to 104.
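The same arithmetic as a sketch, using the invented numbers from the comment above:

```python
# Genotypic ceiling falls 2 points, but the environment-imposed gap
# shrinks from 10 points to 4, so measured averages rise anyway.
ceiling_then, gap_then = 110, 10
ceiling_now,  gap_now  = 108, 4

print(ceiling_then - gap_then)  # 100: measured average then
print(ceiling_now - gap_now)    # 104: measured average now, despite the lower ceiling
```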
"In general, less educated people reproduce less than uneducated people (although this picks up slightly at the doctorate level)."
First "less" should be gone. (This was a confusing one!)
You're right, thanks.
"And I notice it’s weird to be worried both that the future will be racked by labor shortages, and that we’ll suffer from technological unemployment and need to worry about universal basic income. You really have to choose one or the other."
Of course those two are inconsistent. The correct answer is that technological unemployment is a myth. There is little credible evidence for that outcome. Some who talk about it are (consciously or unconsciously) motivated by a desire to promote UBI for independent reasons.
I'm not particularly "worried" about labor shortages, but of these two outcomes, I consider it to be the one with more evidence, by far.
Hm, maybe a better way to think about it would be the amount of labor it takes to maintain a certain standard of living. The existence of agricultural technology meant that we needed only about 2% as many farmers as before; the other ~98% of people were able to go into producing things other than food without any decrease in our food-related quality of living. If robots mean we only need 2% as much of everything as before, the other people can either do new stuff that raises our quality of living, or be technologically unemployed; I'm agnostic as to which, but it doesn't seem to imply a decreasing quality of living except for distributional reasons.
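A back-of-envelope version of the farming example; the 50x multiplier is an assumed round number implied by the ~2% figure, not a measured one:

```python
# If output per farmer rises ~50x and food demand stays fixed, only
# 1/50th of the original farm workforce is still needed; the rest are
# freed for other work (or, in the pessimistic frame, "unemployed").
farm_share_before = 1.0       # simplification: assume ~everyone farmed before
productivity_multiplier = 50  # assumed: 2% as many farmers needed

farm_share_after = farm_share_before / productivity_multiplier
print(farm_share_after)                      # 0.02 -> the ~2% figure
print(farm_share_before - farm_share_after)  # 0.98 -> freed for everything else
```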
Yes, I agree that's a better way to think about it. New technology shifts what labor is needed.
I see robots increasingly used in manufacturing, warfare, construction, and selected other high exertion/risk functions, but nowhere near "most everything." Similar story with AI: real impacts but nowhere near "most everything."
I think there will always be plenty of work for humans to do. There may still be difficulties, but they may be more cultural and psychological than literal lack of work. For example, it could be the jobs are plentiful but increasingly shifted towards high cognitive demands, and so are not well matched to a portion of the population.
Exactly. Humans are pretty great and relatively cheap robots.
Compared to existing robots in most tasks. In 100 years' time, maybe humans will have no chance of competing.
Naw, any robot that can do what a barman does will be prohibitively expensive.
That's true, both at present and for the near future. I do expect robotics to improve, but incrementally, not in any way that ends up with little work for humans to do.
The real problem with robots is that what you really want for a mass production robot is the opposite of what you want for a general purpose robot.
Robots used for many forms of mass production will always be expensive because they have to be custom-built to the task, so there's poor economy of scale.
Nobody said AI is there yet. If it were, we wouldn't even be discussing this.
However, if AI does get there this century, it will likely be very suddenly relative to a position where it is far off (due to recursive self improvement).
Additionally, the main jobs destroyed first by AGI will be white-collar jobs, not manual labor performed by robots (which AGI currently cannot do). That means a lot of people who would otherwise have gone into white-collar jobs will instead be in the market for jobs involving labor not easily replaced by machines. Which sucks for them, but it does mean that there won't be labor shortages if the population falls.
We get massively richer because AI and robots can do so much stuff cheaply, but some jobs are no longer human jobs because they've been automated away. If there are still things people (at least the owners of the robots) want done that can't be done by robots/AI, this ought to lead to new jobs being created to do stuff that previously was too expensive to pay anyone to do.
This is how automation has worked so far. Many individuals were made worse off because some new technology killed their job or industry, but overall, we still had plenty of stuff we wanted done that we were willing to pay people to do.
There's no theoretical limit on the number of people who can work in service-oriented industries, especially home nursing, child care, etc. Even if humans were no longer needed to work producing anything at all, we could just pay each other for basic services (including accompanying children, the elderly, handicapped/special needs) and still have full employment.
Scott has also talked about the concept of Slack before. There's always room for more Slack, which can take many forms. Reducing class sizes at school, hiring redundant employees to back each other up - lots of ways to increase Slack and hire more individuals. The Western world has a LOT of Slack compared to the pre-industrialized world, which appears to be a normal result of vastly increased wealth. Children as young as six were once routinely needed to do productive labor, and now people continue to consume vast resources from society (especially in the form of education, but also entertainment, parents' time and effort, as well as basic supplies like food and clothing) until 18-25 and nobody bats an eye.
On the agricultural note, the reason we have so many people is due to nitrogen fixation for fertilizer and the Green Revolution, which is the reason The Population Bomb and other 1960s doomers ended up wrong; they simply did not account for a massive increase in agricultural productivity.
However, many people have raised concerns about the environmental impact of mass nitrogen fertilizer farming, with some countries aiming to reduce their emissions. If we do this broadly then we simply won't be able to feed the current population, and would potentially face mass starvation if population was not already peaking.
In that regard, underpopulation allows us to mitigate the environmental damage that mass fertilizer use has caused and continues to cause, at the cost of people not being born who weren't going to be born anyway, for whatever societal reasons.
https://www.reuters.com/world/europe/dutch-govt-sets-targets-cut-nitrogen-pollution-farmers-protest-2022-06-10/
https://www.bloomberg.com/news/articles/2022-07-27/trudeau-spars-with-farmers-on-climate-plan-cutting-fertilizer-grain-output
My guess is we're just not going to reduce emissions if that means eating less.
My point is that we don't need to eat less if there are fewer people eating, which seems to be happening naturally anyway.
Population is still growing, just at a slower rate. I don't think we'll actually get to shrinking total world population, though growth for enough time will eventually hit Malthusian limits.
See Sri Lanka and the nitrogen protests in the Netherlands as counter-examples.
It is possible for such trends to reverse - like with EU increasing coal burning emissions due to potential gas shortages - but "just starving/freezing" from less production cannot be ruled out.
And you can also "just buy replacements that go to highest bidder on world markets" from your for-now superior position - shifting the burden elsewhere and driving lesser countries into extinction.
I don't understand your last paragraph. What has made the position superior? What is being replaced?
I think Shalcker just means rich countries with low birth rates buying up most of the food on international markets and letting poor countries starve.
Peter Zeihan is already predicting a fairly massive famine over the next few years due to (1) wheat disruptions from RU/UKR conflict, (2) fertilizer production disruptions due to same, plus Chinese hoarding, plus natural gas supply disruptions, (3) increased cost of capital as the WEIRD boomers retire cutting into development aid, and (4) unintended side-effects of green policy re: mechanization of agriculture.
It does, because a large part of the way money circulates now is through wages. The wage-compensation percentage of GDP runs from about 60% in Germany (typical of Western Europe) to 10% in Venezuela. It looks like oil production is a major factor there, and even though Venezuela is nominally socialist, that's not getting redistributed. In the absence of massive redistribution (or perhaps universal share ownership), the demand for the products won't be there in a fully automated luxury future. The likelihood is fully automated luxury feudalism.
If there are fierce labor shortages in the future, then any kids you have are likely to be able to do well for themselves in finding well-paid, rewarding, pleasant jobs.
>There is little credible evidence for that outcome.
A lot of smart people take this issue very seriously and have written about it at length. You ought to at least make a token effort to address the specific arguments and why you think they're wrong rather than just declaring there's "no credible evidence".
I "ought to," huh? Because "smart people" (minus 10 credibility points every time that phrase is used as if it constitutes anything more than an appeal to authority or in-group conformity) "take the issue seriously"? Oh boy.
OK. I agree with most of the analysis in this report: https://www.brookings.edu/research/automation-and-artificial-intelligence-how-machines-affect-people-and-places/
How is the technological singularity not just rationalist millenarianism?
In almost all forms, it isn't, but Yud forbid you say that here.
I would weight the words of dozens on dozens of scientists with a lot of training in the field over the words of a single "eccentric" (to put it politely) autodidact.
>Now, after two years of crippling lockdowns, mass censorship, closed schools
My apologies, I did not realize you were posting from the PRC. I thought you lived in the Anglophonic world. Have a good day.
I agree he's a talented writer but I'm not aware of him having a track record of accurate predictions, winning bets, etc.
Yudkowsky's brilliance is so unique and scintillating that to demand actual enactment from it would be to tarnish its glory. Like the celestial Emperor, he must practice wuwei, seated serenely above the world of "proving his work" or "being held responsible for when he's wrong".
Is it? I doubt most AI scientists have ever put any serious thought into safety concerns about AGI, and just hold the general attitude that "superintelligent AI is something that the undereducated masses worry about because they saw the Matrix, but I'm an educated scientist who knows to doubt extraordinary claims." I'd be surprised if even 1/4 of them have heard of concepts like paperclip maximizers or instrumental convergence.
I also don't see why we'd particularly care about what AI scientists think about this issue. Their day-to-day activities might range from typing "optimizer.step()" to inventing brand new ML frameworks, but they aren't thinking deeply about decision theory or game theory on a regular basis. The scope of their work just isn't that big. It would be like positing meteorologists as the primary authority on climate change.
There is also the issue of incentives. People who believe AI doom is a real thing have a large incentive to not work on improving AI ("hey, help us destroy the world" is not a good recruitment pitch). People who work on improving AI have a large incentive to want AI doom considered scifi, because otherwise they'd be out of a job.
I don't agree with all of Yud's conclusions (in particular I think neural nets have the potential for "false starts" in which a rogue AI does hostile stuff and then we kill it; an escaped rogue human-level NN can't just make a better NN to get to superintelligence, because it can't align NNs with its goals any better than we can, so the superintelligence doom only happens if it can explicitly code a superhuman AI), but I'm not trusting Big Tech's assurances that they aren't gambling with our future either.
Yes. What's even scarier is that Big Tech seems to think that AI safety is about things like preventing authoritarian governments from using facial recognition, or making sure the datasets they use aren't racially biased. They're operating several orders of magnitude too low-level. If you met with the "AI Safety" team at Google and started talking to them about the orthogonality thesis, they would look at you like you had two heads.
What I’m hearing here is that the actual experts don’t believe what you believe.
I think if there was a solid case that was convincing to researchers it would circulate pretty widely and more people would think about it.
By the way, you should keep in mind that even though we're just stepping optimizers, you still have to get a PhD in CS, so the amount of thinking about computation and math that your average AI researcher has done is still probably higher than the average of the commentariat.
To be fair, there are also many areas where he agrees with most AI scientists: the belief that AGI is possible eventually, and the belief that scaling up GPT-like models alone is not sufficient to get us there. The belief where he differs the most is primarily on the difficulty of making AI safe.
This is also my impression. There isn't much daylight between Ng & Yud besides length of timeline from today -> HLMI -> AGI, as well as alignment difficulty.
I would also add that saying the lack of a formal environment is damning is essentially saying Thiel Fellowship recipients are exclusively the damned.
> It's particularly damning that Yud hasn't studied AI in a formal environment at all
So? Credentialism doesn't make sense.
> and has drastically different views on AI than most AI scientists.
Not really. He's a bit extreme with pessimism. But as for viability of AGI - relevant people / groups like DeepMind or OpenAI think they'll get to the AGI in decades.
How long did AI experts predict it would take decades ago?
Agreed - the field _has_ been notorious for overestimating its rate of progress. I do expect it to get to AGI eventually. A bright child has an impressive, but _finite_ set of capabilities. Sooner of later they will all (including learning) be automated - but whether that day is 15 years or 150 years off is very uncertain.
>But as for viability of AGI - relevant people / groups like DeepMind or OpenAI think they'll get to the AGI in decades.
But Yudkowsky goes considerably further than just predicting human-level AGI, and he also doesn't expect it to take decades. He's made a bet with Bryan Caplan that superintelligent AI powerful enough to destroy humanity will be a reality within the next eight years.
IIRC that was a bit tongue-in-cheek and he doesn't think it is quite that likely. But his views aren't that far off.
His own comment on the post says:
>So the generator of this bet does not necessarily represent a strong epistemic stance on my part, which seems important to emphasize. But I suppose one might draw conclusions from the fact that, when I was humorously imagining what sort of benefit I could get from exploiting this amazing phenomenon, my System 1 thought that having the world not end before 2030 seemed like the most I could reasonably ask.
This sounds to me like saying that placing the end of the world at 2030 was chosen to be the most optimistic prediction he could make within reason, meaning his actual expected date for the end of the world would probably be well before then.
Do you think calling a potential end of the world "millennarianism" is proof that it won't happen? Could people in 1950 have proven there would never be nuclear war, because believing in it would be "millennarianism"?
A mere reference to the long history of the world not ending, in spite of contrary predictions. Implicitly bundled with a psychological explanation for the frequency of such predictions. One could frame it as assigning the end of the world an extremely low prior probability.
I think this proves that the world isn't going to end from something that could have equally well ended it in 1000 or 1500. I do think technology has been growing since then and so it makes sense to say "we didn't have nuclear bombs in those years, but now we do, so nuclear bombs can end the world".
If you are driving west to east across the US, and someone warns you that you are about to drive into the Atlantic Ocean and drown, you can't argue against this with "but we've already driven 3000 miles and not hit any oceans, so it's incredibly unlikely that oceans exist".
Surely it is possible for some level of technology to be enough to end the world, and at some point we will get that level of technology. I'm arguing it's soon.
(or, technically we got it when we got nukes, but we seem to have handled that one semi-responsibly. I'm arguing we will get more and more technologies like that, and some will be harder to handle)
I feel like it's easy to latch on to AI Nanobot-Death for the apocalypse because there's still a ton we don't know about it, or how it could play out. We know a lot more about disease, climate change, natural disasters, or nuclear war, and that means our estimates on negative consequences are a lot more precise and less apocalyptic (even if they genuinely would be bad).
Global warming won't kill everyone. Nuclear war almost certainly won't kill everyone. Experimental virology almost certainly won't kill everyone*. They can kill a lot of people, but they're not (serious) X-risks. If you care about X-risks more than everything else, they can mostly be ignored.
AI is an X-risk because having doomsday bunkers on 4-6 continents doesn't mean anything; an AI that wins is not going away and will crack open those bunkers.
*Obligate pathogens of humans cannot be X-risks because they rely on dense human populations in order to spread, so they will inevitably suffer R < 1 long before reducing humanity below minimum viable population. Serious biotech X-risks exist, but are things like "artificial algae that can't be digested by anything pull all the carbon out of the biosphere".
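A toy sketch of that footnote's logic, under the assumption that transmission scales with host density (a crude simplification, with all parameters invented):

```python
# If R_eff falls with host density, a lethal obligate pathogen drives
# its own R below 1 long before the host population reaches zero.
R0 = 3.0
pop = 1.0   # population as a fraction of its starting density

while True:
    R_eff = R0 * pop   # crude density-dependent transmission
    if R_eff < 1:
        break          # epidemic can no longer sustain itself
    pop *= 0.8         # assume each epidemic wave kills 20%

print(f"Spread stops at {pop:.0%} of the original population")  # ~33%, not 0%
```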
That's based on assumptions about the kind of capabilities that an AI might bring to bear. Again, as I pointed out, it's easier to project doom from that because we just know a lot less about what those capabilities might be.
I just wanted to say this is a very nicely put summary of the difference between AI risk and other "threats to civilisation as we know it". Thanks.
"This time is different."
"You say that every time!"
"*Someone* says that every time! And some times in hindsight, you agree!"
I used to call this Castro's Law: people predicted Fidel Castro would die in 1970, 1980, 1990, etc. Since they were always wrong, we conclude that Castro must be immortal.
What you are doing is very arguably the reverse: all men are mortal and will die one day, therefore it is quite reasonable to proclaim that I, being moderately out of shape, will collapse dead of a heart attack in three hours' time with 95% certainty.
I think fission bombs have roughly the effectiveness to cost ratio of AK-47s, although I suppose technological developments could cut that over time.
bean's gonna get mad at me if I neglect to link his piece anywhere in this thread, so here it is: "Nuclear Weapons Are Not As Destructive As You Think."
https://www.navalgazing.net/Nuclear-Weapon-Destructiveness
The pragmatics undercut the epistemic point somewhat, but I don't think the average philosopher in the 50's would be cognizant of that to the point where the veil of ignorance fails.
Thanks, my comment did have a glaring lack of actual data over hazy recollection.
Many Thanks!
" The total yield of our 4,000 weapon war is going to be on the order of 1,800 MT, only 4.25 times the yield of atmospheric nuclear testing worldwide, which even at peak seems to have produced doses of maybe half of natural background radiation. Even if we assume that our war will produce 10 times as much late fallout as the tests (due to shorter timescale and the fact that operational warheads may be dirtier than test ones), the peak exposures are approximately the same as those for aircrew today. "
"On the Beach" was a great movie, but a lousy estimate of global radiation exposure.
I feel like this kind of reasoning gives way too little consideration to two things:
1. The likelihood that you would think the end of the world is likely whether it is or not. We can't know what we don't know, and the sheer number of times educated people have predicted the end of the world and been wrong should be enormous evidence that humans are terrible at such prediction and at considering all factors or even imagining the factors that exist to be considered. Put simply, the key question is not how likely you think AI-risk is, it's how likely you WOULD think that even if it were false.*
2. The conflation, in the case of AI-risk, of two separate predictions, each of which has a terrible track record: the development of some technology by a certain year, and the existential risk of some existing technology. People predicting a religious apocalypse in the 1850s**, a nuclear apocalypse in the 1950s, and an AI apocalypse now, on the one hand. And people predicting general AI by 2050, moon colonies by 2000, personal household flying vehicles (whether balloons or planes or blimps or whatever) by 1950, on the other. AI-risk has to be independently "this time it's different" in BOTH of those respects to be valid.
*the apt version of your driving example is: every 100 miles or so, someone in the car says "look I see the ocean, we're about to drive into it" and over and over and over they're wrong. Now, you really think you see the ocean. How much should you discount that belief based on past false beliefs?
**you can regard religious scriptures as a kind of technology, one that exists but may or may not work, as the evidence is after death or after doomsday, etc.
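To make the first footnote's question concrete, here is a toy Bayes update; every number below is invented for illustration:

```python
# How much should a track record of false "I see the ocean!" alarms
# discount the current alarm? Model the alarm as a noisy signal.
prior = 0.05           # P(ocean actually ahead), before anyone speaks
p_true_alarm = 0.9     # P(alarm | ocean really there)
p_false_alarm = 0.3    # P(alarm | no ocean), estimated from past misses

posterior = (p_true_alarm * prior) / (
    p_true_alarm * prior + p_false_alarm * (1 - prior))
print(f"{posterior:.2f}")  # ~0.14: the alarm is real evidence, but a high
                           # false-positive rate keeps the posterior modest
```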
Well put. I think this is a good way to look at a whole range of apocalyptic predictions. For how long have people been saying the same thing, and how much has changed?
I have a particular interest in finite resources, their uses and abundance. Take copper - every single day for the last 10,000 years or so, someone, somewhere has been saying "Oh no, we're going to run out of copper". Throughout the 10,000 years the reserves of copper have been increasing, the amount in circulation has been increasing, and rather splendidly its price has been decreasing.
And yet, while the abundance of copper continues to increase, there are ( and always will be) people saying "Oh no, we're going to run out of copper".
> People predicting a religious apocalypse in the 1850s**, a nuclear apocalypse in the 1950s, and an AI apocalypse now, on the one hand. And people predicting general AI by 2050, moon colonies by 2000, personal household flying vehicles (whether balloons or planes or blimps or whatever) by 1950, and so on.
I think it genuinely is important to remember that these are very different people in each case, and that you would not be reading their blogs! The error of any given predictor is more limited than a list would imply, and we're already working from a comparatively high level of established competence.
> AI-risk has to be independently "this time it's different" in BOTH of those respects to be valid.
There's nothing wrong with having a very low prior. (Well, maybe, but it's not inconsistent.) There *is* something wrong with, after the argument is presented, simply repeating the prior back. We can be very confident that someone who does no engaging with the substance of the argument is making a mistake!
"I think it genuinely is important to remember that these are very different people in each case, and that you would not be reading their blogs! The error of any given predictor is more limited than a list would imply, and we're already working from a comparatively high level of established competence."
Okay, a few points. First, I shouldn't have used the 1850s for the religious apocalypse, I just liked the neatness of a century apart. I should have said the 1660s, when everyone was a Christian and all academics up to Newton himself accepted the literal words of the Bible as unquestionable truth.
Second, I don't understand your point about blogs. If an academic in the 50s wrote an op-ed or whatever saying that Eisenhower's "tactical weapons" policy will very probably mean a nuclear exchange within a decade, using lots of reasoned evidence, would that be less reliable than a blog?
"There's nothing wrong with having a very low prior. (Well, maybe, but it's not inconsistent.) There *is* something wrong with, after the argument is presented, simply repeating the prior back. We can be very confident that someone who does no engaging with the substance of the argument is making a mistake!"
It's not just repeating the prior back. It's pointing out that "there could be flaws in the argument we can't see, and history shows arguments of this form have plenty of flaws that can't yet be seen" is as valid an objection as identifying particular possible flaws.
You would be right if the AI-risk argument was a deductive one, showing that given certain assumptions deadly AI is a guarantee by whatever year. Then, critics would have to say which assumption they reject, or show the argument is invalid. But it's a probabilistic argument, with obvious failure situations like general AI being much harder and more expensive than expected (a la personal flying vehicles) and there being less social and institutional demand for it than expected (a la moon colonies).
Would you make the same argument in support of a religious apocalypse? I am assuming not, but am having trouble seeing the difference from a neutral third party's perspective. Can you clarify why a third party should seriously consider that in your case they are "making a mistake," but that doesn't also require them to engage with the substance of every other doomsday prediction?
If the difference is a "high level of established competence" then you're going to need to flesh out and justify that. From a non-AI-apocalypse viewpoint, there have been a whole lot of predictions that have not resulted in verifiable results. Saying AI is getting better is not the same as saying it will eventually [some X-risk]. Saying AI will destroy the world isn't any more convincing on its own than the guy on the street corner talking about the rapture.
Also, the anthropic principle:
Conscious observers will always find themselves within places with conscious observers. If the world ended, it wouldn't have conscious observers.
We will always find ourselves in some chunk of the Everett multiverse / inflationary multiverse / very very large regular universe / etc. which hasn't had an "end of the world" yet. We will always look back at our history and maybe see close calls (e.g. Stanislav Petrov), but no actual disaster.
That doesn't mean our particular area can't go to shit in the future.
Nuclear weapons really were categorically different from everything that came before them. What happened before them isn't as relevant as you're suggesting. They really could have at least come close to destroying the world (at least figuratively), and the fact that nothing else destroyed the world says almost nothing about the specific risks of nuclear weapons.
And in any case, how is this not just the observer selection effect? By your reasoning, we should never expect anything at all to ever have a high risk of wiping out humanity, regardless of the specifics of the particular threat. If your argument is that we should only worry about the world being destroyed once we have some historical precedent for the world being destroyed, then we will obviously never have that precedent, because we won't be around to speculate if the world is indeed destroyed.
True. Bertrand Russell made a similar point in reference to the problem of induction:
"Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to the uniformity of nature would have been useful to the chicken."
If you heard 10 independent predictions of the end of the world, and none of them were at all related to the others, would you consider all 10 equally valid (at first view), and also recommend seriously considering steps to counter them?
WWII showed us that non-nuclear wars had the potential to "destroy the world" in a similar sense to what nuclear weapons can do. Firebombing did more damage than nuclear weapons, despite both being used. The reality, visible in hindsight, is that although nuclear weapons were potentially very dangerous, they were not a new type of thing. They were an escalation of existing trajectories that both required active use and were insufficient to actually destroy humanity. A much more serious risk would have been an early war between the first humans, when the total population was small enough that rocks and sticks were unusually likely to wipe out humanity.
Every possible world has a spotless history of not-ending (right up until the moment it ends, if it ever does). The existence of this spotless record does not, by itself, provide any information about which possible world you're living in.
Counterargument - the anthropic principle: https://astralcodexten.substack.com/p/slightly-against-underpopulation/comment/8176123
A nuclear war would probably not have ended the human species.
Sure, but I think that's beside the point here. The point is not whether all-out nuclear war with hydrogen bombs would have literally killed everyone, but whether the existence of hydrogen bombs made a catastrophic conflict with at least hundreds of millions of deaths close to inevitable.
I don't know that warfare got deadlier as technology improved. Governments were able to muster more people/resources for a war, but lots of small scale societies were able to kill similar proportions of their population through lots of intermittent warring. I suppose having enormous world populations makes "millions of deaths" close to inevitable, from disease if nothing else.
Agreed. It's like looking at the most devastating weather events from an economic perspective alone. You'll mostly be regarding recent events as the most expensive and therefore calling them "worst." But, they may have been far less severe and just happened to hit a more developed economic area.
There are plenty of anthropic reasons that I'm leery of using "end the human species" as a benchmark. I prefer the admittedly-vague "end civilization" or "kill 50%+ of the population", both of which *have* happened a few times each at different scales.
Even "end civilization" is vague, though I agree it's a better benchmark than "end the human race".
If languages and religions pretty much survive (they seem to be the most durable patterns), has civilization ended? What if they're gone, but people still remember how to do agriculture?
I think using ending the human race as a benchmark leaves out too much about our local loyalties.
To gesture towards a definition by example: if a Florentine is predicting doomsday as a consequence of her people's impurity in 1347, she gets partial credit. If a Baghdadi thinks this "Khan" guy is an existential threat, I'm giving him full marks.
If many weird groups of people have, in the past, for irrational reasons, believed that the world is ending soon, then I think this is something that humans should keep in mind if they find themselves becoming part of a group of weird people who think that the world is ending soon.
(For the same reason, if you're a schizophrenic who thinks that the CIA is sending messages through their fillings, you should be aware that a lot of other schizophrenics have falsely thought this in the past and adjust your priors accordingly. Maybe this time _is_ different and the CIA really _are_ sending messages through my fillings, or maybe I'm just a schizophrenic and this is the sort of irrational belief that schizophrenics tend to adopt.)
I think the Castro thing isn't quite analogous, because predictions that Castro would die in, say, the 1990s, were not clearly irrational, whereas predictions that the world would end in 2012 or 2000 were.
I was under the impression that schizophrenics can't really logic themselves out of their delusions.
That's my impression as well, but doesn't really change what the schizophrenic "should" do. It just means he can't.
Would it have been a prudent view in the 1950s to go: "Well, by the year 2020 civilization will definitely have been extinguished by global thermonuclear war so why worry about X?"
In 1950 the nuclear bomb had been demonstrated to be a monumentally lethal thing.
AI tools in 2022 have not been demonstrated to be lethal. It’s conceivable, but not demonstrated.
The end of the world in 2000 was conceivable, but never demonstrated.
Circa 1950 nobody could be certain whether nuclear war would occur within the next 50 years.
In 2022 many AI theorists are ‘certain’ that AI takeoff will occur, and many were certain that it would already have occurred.
‘Millenarians’ were certain the end of the world would occur in 2000, and many had previously predicted the end of the world in prior years.
Circa 1950 few people believed that a nuclear exchange would lead to total apocalypse. Even artistic depictions slanted towards the bleakness of the scenario mostly showed life continuing after the bomb.
In 2022 the predictions of AI theorists (e.g. Yudkowsky) are apocalyptic, suggesting irreversible destruction of the human race, maybe even the whole solar system.
‘Millenarians’ seemed to believe that the world’s ticket was punched in the year 2000. Nothing would continue afterwards.
I could go on (probably somebody should do a point by point analysis of resemblances; these are just the 3 that occur to me), but to me it seems clear that the apocalyptic predictions about AI more closely resemble millenarian prophecy than general fear of the bomb.
It seems to me that there are two currents here: the current of mainstream AI tool thinkers, which says that AI is dangerous but mostly because if we design complex systems around black-box AI reasoning, there will be unexpected and perhaps unwelcome results (e.g., bias). This closely resembles the rational fear of nuclear devices, which have powerful applications that can be misused, but are fairly unlikely to suddenly blow up the whole world and end all life. Then there's, for lack of a better name, the Yudkowsky current, which says that any year now we'll be experimenting with AI and it will kill us all. This more closely resembles any doomsday camp surrounding a particular issue than it does the mainstream thinkers on that issue.
In general, you are right. But Scott uses only a very weak version of singularity here, and his argument doesn't need a stronger one.
Basically, technological change will dwarf the impact of a few percentage point lower population.
A reasonably conservative forecast from current trends (i.e. no double exponentials like Kurzweil) would probably agree with this.
Interestingly I was just thinking about this today, that rationalists just slipped right into the pattern of getting obsessed with a near term apocalyptic mythology like so many other groups do.
(That doesn’t mean it’s wrong a priori, just interesting from an anthropological perspective).
We’ve always predicted the downfall of civilization since civilization got going, it seems. Rationalists just say that it comes at the hands of AI.
Personally I’m less confident in AI risks, but also I think climate change will be worse than most rationalists seem to typically think, so I’m not immune to this kind of thinking either.
I'm reading Henrich's "The Secret of Our Success" now and it's making me think civilization is less robust than members of our own civilization tend to think. But the human species could persist even if civilization didn't.
I struggle to see how we're robust enough for modern civilisation to survive a man-made virus significantly more contagious and significantly more deadly than e.g. covid. If leaving the house is practically a coin flip for your life, who will provide the food, water, and energy required to sustain society while anything resembling an effective response is developed (if it's even possible at all)?
Yes, but you missed Maguire's condition of "man-made": a virus engineered as a weapon could get around this, for example, by having a long, asymptomatic but infectious stage at the beginning. How close is biotech to being able to do that? No idea.
Smallpox & the black plague don't fit your generalization.
We have survived Spanish flu, black plague, English "sweats", smallpox, etc.
Covid was particularly nasty because it was in the grey area between "lockdown worth it" and "lockdown not worth it", leading to a tepid and patchy response that was probably the worst of both worlds.
If covid were much deadlier it would have been eradicated.
Hit the nail on the head.
It's a strange psychological tendency that comes up again and again, from religious cults to anti-vax depopulation conspiracies, to environmentalist predictions of collapse:
https://medium.com/@tgof137/psychology-of-the-apocalypse-1cf68319825e
I tend to put climate change alarmism into the same category -- it looks like we're headed for 3-4° C of warming by 2100, but I fail to see any way in which that will end civilization. Maybe there's some complication of warming that I'm missing. What part of climate change worries you most?
The only part that significantly worries me is that our governments and civil societies seem so focused on "stopping it" through kneecapping our economies rather than adapting to it.
I fear that we'll end up losing a bunch of coastal cities that could have "easily" been saved by getting the Dutch to train up a new batch of engineers and construction technicians.
Good medium article thanks. Wish it got more views.
Thanks. I've long since accepted the harsh reality of the Pareto distribution -- most of us will end up beggars in the attention economy:
https://medium.com/@tgof137/what-fight-club-can-teach-us-about-social-media-85a123cb8cef
Did you predict that GPT-3 and DALLE-2 would happen? If not, shouldn't you be more open-minded to "millenarist" claims?
I saw creatures with wings today. Apart from that, they had no resemblance to angelic beings and did not exhibit any aspects of divine grace. I think this makes a very strong case that the reckoning is close at hand.
Even Yud doesn't argue that GPT-3 and DALLE-2 +X are sufficient to destroy the world without "and then a miracle occurs" at some point.
If a religious person demonstrates that a modern event closely matches depictions from their holy text, would you be more open-minded about their texts? If not, why not? If so, does it bother you that there are in fact dozens, if not hundreds, of such claims made on a regular basis? I'll note, as others have, the dozens, if not hundreds, of doomsday claims about modern science, now represented by AI.
Calling it "millenarianism" doesn't disprove anything. The future tends to be weird. Look at the object-level arguments to figure out what might be true. It's hard to get this right, but the future is *really important*, and the "normalcy heuristic" is known not to work.
Maybe a better analogy would be the great oxygenation event, which led to the extinction of the vast majority of anaerobic organisms alive at the time. Maybe this leads to the extinction of most forms of biological life, as artificial life takes over.
If you think that 30 years from now, AI (or some other technology) is going to become so much better that it will radically change the game, this makes predicting social phenomena like the consequences of population declines or labor shortages or underfunded government pension schemes a whole lot harder!
Personally, I think that it's a given that a bunch of new technology will change the game in ways that makes predicting much about social phenomena 30 years from now very hard, but I don't have any confidence that the thing that will change is AI. We can tell compelling stories about ways that much-improved AI or robots could radically change the game, including stories where the AI wipes out or enslaves or converts to paperclips all the humans. And we've seen some huge advances in AI over the last couple decades, so maybe AI will change the world that much. But it's also possible that it won't, but some other unforeseen thing will. (Cheap fusion power? Technology to let people reprogram their own personalities? (Or if you prefer dystopias, technology to let other people do it?) A cure for aging? Probably some of those and a bunch of other utterly weird stuff nobody's thinking of yet.)
This is my fundamental opposition to FOOMerism and a lot of other very confident proclamations about the future: the world's too complex for us to predict it.
And finite natural resources do not merit consideration? Hand up who wants to live on Mars?
Can you explain more of what you mean?
He's probably referring to estimates that humans now consume 40% of the total planetary output. It's not clear we can survive if we take 100% of the planetary output since there are all sorts of things, like a desire for rainfall, ocean current circulation and oxygen, that require maintaining non-human output consumption. Obviously, the planetary output can be increased, but there is some limit as each new level of extraction becomes increasingly costly.
I have my doubts about those NPP (net primary productivity) estimates too.
Like the other reply, I also think that lumping all sorts of numbers together and turning them into a single number is rather irresponsible.
* Oxygen: with regard to humans breathing it, it's not a problem. The food they eat will have been produced by photosynthesis which produced just as much oxygen as it takes to burn it in cells again. With regard to fossil fuels, I think the impact on the climate would be devastating long before we made a dent in the atmospheric oxygen.
* Fossil fuels: from what I understand, we are burning reserves built over many millions of years within just a few centuries. Clearly not sustainable; also some nasty side effects.
* Metals: finite supply, but most of them (apart from what we shoot into space and a tiny amount of uranium) are not going anywhere. No point in saving rare earth metals for our grandchildren if they can just recycle our old stuff instead of digging them out of the earth.
* Sunlight: Two thirds of our planet are covered with water. From my understanding, we are not even trying to plaster the oceans with PV cells and floating farms, so we are nowhere near a hard limit there.
* Fresh water: More of a concern. You need some water to grow plants. Still, I think we are losing rather huge amounts of it to rivers running into the sea, so it's probably more of a distribution issue than a hard limit? In a pinch, there is always desalination.
* Deuterium: functionally infinite.
Most of these things are not hard limits, but rather represent what a certain civilisation at a certain tech level is able to do or willing to pay. Chemical fertilizer allows for much higher population densities than hunting game. Creating petroleum from its elements or turning other elements into gold is not impossible, it just is not cost effective.
Sure, characterizing the NPP as a single number is simplistic, but so is characterizing the output of a nation by using GDP.
The problem remains. There is a limit to the productive capacity of the earth. There are a lot more integers than there are population sizes our ecology and technology can support. For example, it is hard to imagine a technology which allows the population of the earth to exceed the number of protons comprising the planet.
The NPP people are trying to come up with an estimate for the productive capacity of the earth in some meaningful sense, just as researchers in the 1930s came up with GDP as a way of estimating national economic output.
Physics tells us that NPP is ultimately limited by solar input, orbital kinetic energy and radiation induced internal heat plus whatever mingy energy sources humans can cobble together. There is rather obviously an effective limit which is much smaller than the ultimate limit, particularly if we want outputs conducive to human life. We aren't at the point where we can exploit the energy released by the earth's eventual quantum tunneling into a black hole, and we're unlikely to get there while remaining human in any conventional sense.
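For what it's worth, that whimsical proton bound is easy to make explicit. A throwaway sketch (order-of-magnitude arithmetic only; both constants are standard textbook values):

```python
# Rough count of protons in the Earth, as an absurd upper bound on population.
earth_mass_kg = 5.97e24
nucleon_mass_kg = 1.67e-27
nucleons = earth_mass_kg / nucleon_mass_kg  # ~3.6e51, roughly half protons
print(f"~{nucleons / 2:.1e} protons")       # ~1.8e+51
# Any plausible ecological limit sits dozens of orders of magnitude lower.
```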
I wouldn't mind moving back home at some point.
Mars or space habitats could be quite nice with sufficient technology. I tend to think most of humanity will live off-world in such habitats (assuming we're still somewhat baseline human) down the line, because effective medical immortality means that "wait for people to retire or die to make way" will go away, and it will be easier for the powers that be to tacitly encourage people to migrate off-world instead of staying and disputing control.
People with more education might have fewer kids, but does that actually mean that people with more heritable IQ have fewer kids?
(Ie can we naively multiply correlations here?)
Why wouldn't we be able to? Also, the Iceland study seems to suggest yes.
Theoretically there could be a group of smart people who choose to both be undereducated and have lots of kids. However, that seems pretty unlikely (though perhaps that’s the Amish?)
Or basically Simpson's paradox.
> Or basically Simpson's paradox.
Yeah, Homer is quite dumb, but his daughter Lisa is smart and educated (and Bart also seems smart).
If that is a result of a mutation that happened because of his work in a nuclear power plant, perhaps we should just build more nuclear power plants everywhere.
Homer has crayon up his nose, it's not genetic.
In any case, I was talking about the other Simpson's paradox.
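To make the worry upthread concrete (can we naively multiply correlations?): a minimal sketch, with entirely invented numbers, of a joint distribution where IQ-education and education-fertility have the usual signs but the direct IQ-fertility correlation is positive. The two pairwise numbers only constrain the third correlation to a wide range, so multiplying them can even get the sign wrong:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target correlation matrix for (IQ, education, n_kids) -- invented values.
# corr(IQ, edu) = +0.5 and corr(edu, kids) = -0.3, yet
# corr(IQ, kids) = +0.2, not the "naive product" 0.5 * -0.3 = -0.15.
corr = np.array([
    [1.0,  0.5,  0.2],
    [0.5,  1.0, -0.3],
    [0.2, -0.3,  1.0],
])
assert np.all(np.linalg.eigvalsh(corr) > 0)  # a valid (positive definite) matrix

samples = rng.multivariate_normal(mean=np.zeros(3), cov=corr, size=100_000)
print(np.corrcoef(samples, rowvar=False).round(2))
# Empirical correlations match the targets: the first two pairwise numbers
# do not pin down the third one.
```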
There's also the UK biobank study which found that polygenic scores for educational attainment, IQ, height, eating disorders and autism are decreasing, while polygenic scores for ADHD, smoking, BMI/waist size/body fat/heart disease, depression, extraversion and Alzheimers are increasing.
https://ueaeco.github.io/working-papers/papers/ueaeco/UEA-ECO-21-02_Updated_2.pdf (page 6 according to browser, page 5 according to document)
Smoking seems a strange genetic inheritance. Also the incidence has halved in the last 40 years.
It kills you at around the point your kids leave home. That may actually increase fitness, as your kids aren't burdened by you but inherit your money.
It's nothing genetic, though. I mean, a propensity to smoking might be, but given the yearly decline, that hasn't been proven; if anything, the reverse has been.
It must be, if anything, a mixture of genes and environment - given a certain background of health warnings, restrictions and public views on smoking, some people being more or less likely to smoke.
So polygenic scores measure the genetic predisposition to some trait; obviously, changing environments also have an effect, which can be in the opposite direction and stronger.
Hypothetically, suppose we killed off all the tall people in some poor country but also improved their nutrition a lot. Would the next generation become shorter or taller than previous generations? It depends on how strong each of those actions/effects are.
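A toy version of that height hypothetical, with invented effect sizes, just to show the sign of the outcome depends entirely on the relative magnitudes:

```python
# All numbers invented for illustration.
heritable_shift = -3.0   # cm, from selecting against tall parents (assumed)
nutrition_shift = +5.0   # cm, from improved childhood nutrition (assumed)
net_change = heritable_shift + nutrition_shift
print(f"Next generation is {net_change:+.1f} cm vs. the previous one")
# +2.0 cm here; flip the magnitudes and the sign flips too. Polygenic
# scores track only the first term, while phenotypes track the sum.
```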
It seems like intelligence, independent of education, could be negatively associated with fertility because more-intelligent people, being more able to achieve perfect contraceptive use, would have fewer *unplanned* pregnancies.
Until the 2010s and the rise of long-term reversible contraceptives, roughly 1/3 of US pregnancies were unintended. These were disproportionately concentrated among less-educated women. (Hence why lower-income women have a substantially higher lifetime rate of abortion, despite being less likely than high-income women to choose abortion once unintentionally pregnant.)
Some evidence that a very large fraction of the negative correlation between income/education and fertility can be explained by different rates of unintended pregnancy, as opposed to educated women preferring fewer children due to higher opportunity costs, includes:
1) On surveys, in both Europe and the US, educated women report *desiring* as many--and in some studies more--children than less-educated women.
Granted, the fact that women throughout the developed world report desiring more children than they choose to actually have suggests that this represents a kind of 'ideal world' desire, which may be outweighed by economic concerns like opportunity costs.
More significantly:
2) The widespread use of long-term reversible contraceptives over the past decade has significantly narrowed the US income gap in fertility. Fertility among women in the higher quintiles has decreased very little, but fertility among women in the lowest income quintile--and, in particular, fertility in women on public assistance--has greatly decreased.
This seems like a further reason for optimism: if the negative intelligence-fertility correlation *isn't* a result of different preferences, but merely differences in ability to reliably use contraception, the development of contraceptive methods that don't depend on user conscientiousness seems like it will be enough to solve the problem.
It would make less-intelligent women *better-able* to fulfill their preferences by freeing them from the burden of unwanted pregnancy, as opposed to the commonly-proposed solutions of either restricting the opportunities of intelligent women to force them to have more children, or directly or indirectly coercing less-intelligent women to have fewer children than they desire, both of which seem extremely morally-dubious.
IQ in developed countries is highly heritable and is also strongly correlated with education. Unless you've got a specific reason for thinking this somehow doesn't apply, we should assume it does.
I don't have any specific reasons either way. I am just wary of drawing firm conclusions before ruling out the obvious objection.
(For what it's worth, if you asked me to bet, I would expect there to be a correlation between fewer kids and higher IQ, but for the correlation to be a bit weaker than the one between more education and fewer kids.)
Regression to the mean. Eventually descendants regress to the population mean.
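A quick sketch of the standard quantitative-genetics way to quantify this, the breeder's equation R = h²S; note it implies offspring regress *partway* toward the mean, not all the way, and sustained selection moves the mean itself. Numbers are illustrative only:

```python
h2 = 0.5            # assumed narrow-sense heritability (illustrative)
pop_mean = 100
parent_mean = 110   # selected parents, 10 points above the population mean

S = parent_mean - pop_mean   # selection differential
R = h2 * S                   # expected response in the offspring generation
print(pop_mean + R)          # 105.0: offspring land partway back toward the mean
```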
“there will probably be a technological singularity before 2100”
This really contradicts the "science is slowing down" paragraph, the one just before it.
I agree that we will probably be able to make babies smarter by 2100, but the increasingly large Amish and Orthodox communities might be against it. Of course, it's post-singularity, so who knows what they will believe.
Absent your singularity, the population trends for the Amish and Orthodox will hit the Malthusian barriers of their more primitive lifestyle. I don't know much about the Orthodox lifestyle, and what I know about the Amish comes from Witness and wiki, but the latter don't seem to take charity; as their population grows and they buy up more land - I assume this is what they do - that land must become less productive. The US will be a fairly primitive society in that era. And mostly white again. Haha.
The US, outside these groups, is now a population sink. Population sinks are interesting because all lineages die out, to be replaced by new additions, who then die out over time. The descendants of 12th-century Londoners are not modern Londoners but subsequent migrants (internal to the U.K., mostly, until recently).
I don't think it contradicts the "science is slowing down" paragraph. Once science reaches some point (the point where you can make AI, intelligence enhancement, or some other game-changing technology), we get a singularity. Science is going slower, but still plenty fast enough to reach that point before 2100. Once we reach that point, it won't be slowing down anymore!
I mean, once we have genetic engineering to raise IQ that is widely used, it will overwhelm the background dysgenic trend, but it won't cause an immediate "singularity". I'm an AGI skeptic, 15% by 2100 maybe. But I am pretty sure we will have great embryo selection by then that overwhelms the natural dysgenic trend.
I think it depends on how good the genetic engineering is. If it's +15 points, fine. If it's unlimited von Neumann clones on demand, I *do* think we get something singularity-like once we have a 5-digit number of them and all of them grow up.
OK, I agree you're right. Anyway, the 5-digit number of JvN clones is great, because they would almost surely be friendly; with AGI it's not as clear, though.
Anyway, given https://twitter.com/JgaltTweets/status/1548088266691264519 and https://www.metaculus.com/questions/5121/date-of-artificial-general-intelligence/ which is more like 2040, I'm a bit skeptical about Metaculus' predictions re AI. If you look, the definitions of weak and strong AI are a bit silly; one could create an AI that does those things but isn't really an AGI in any real sense. Look at GPT-3: you ask it about math and it can't do it.
See also https://twitter.com/RokoMijic/status/1525816209182277632 and https://www.metaculus.com/questions/578/human-extinction-by-2100/ . I'm a big fan and a big user of Metaculus but not sure they are reliable on this stuff. There's huge selection bias here. I don't forecast on the AGI questions, and I know other more AGI-skeptical users who don't forecast on them. There is no real reason to put 80% of my mass on >2150. What does that accomplish for me?
+15 IQ is already civilization changing if that's a uniform change.
Turns barely employable people into productive citizens, productive citizens into specialists in demanding fields, and the occasional +3sd wizard into von Neumann. 2100's equivalent of Google would probably have entire TEAMS of von Neumanns; hopefully they'll be working on something more prosocial than either ad optimization or weapons of mass destruction.
We can improve IQ now with IVF screening. I believe Israel has started on that. There’s no obvious political will.
Israel is a leader in polygenic screening, sure.
It is primarily cities that are demographic sinks, rather than countries as a whole.
Although admittedly, across time fertility goes down in rural areas also. It is an old-fashioned hierarchical diffusion process. This process is happening everywhere - well documented also in Nigeria, Scott's chosen African country. Urban areas lead the way, and high-status urban women are the vanguard/innovators/early adopters.
"Even this isn’t quite right, because a lot of Orthodox Jews do leave Orthodoxy, so along with those 100 million devout Orthodox there will probably be a few dozen million extra Reform Jews with a confused relationship to religion and lots of emotional baggage. It’ll be a great time for the rationalist community."
Go team! (Also, at this point we should be separating out the Ultra Orthodox from the Modern, because the latter can't keep up. It's a good question how many Jewish-sans-Streimel groups we'd have at that hypothetical point. To be honest, Conservative and Reform could have merged in 2015 and nobody would have noticed. The other Orthodox groups are more of a question mark.)
That's not happening to the ultra-orthodox.
There will be a boiling-off process - in the Amish case, "selection for plainness". Their defection rate has dropped over time.
https://westhunt.wordpress.com/2012/12/23/boiling-off/
Scott makes an entertaining point as always, but also a serious one. I have been wondering myself if Darwinism (as an idea, not as a fact of life) is self-defeating, since people who believe in evolution reproduce at a slower rate than people who hold religious beliefs - in particular compared to those who hold strong religious beliefs. The latter may out-compete us all, in particular in countries, and coming ages, when universities - the great modern "seducers of youth" - are not allowed to spread the gospel of evolution.
...of course, the less-reproducing, rational-minded Darwinians a la Scott and most people in this comment section have studied this as well; i.e. we have at least the satisfaction that we were able to predict our own doom. Here is one of many studies:
Li Zhang (2008): Religious affiliation, religiosity, and male and female fertility. http://www.demographic-research.org/Volumes/Vol18/8/ DOI: 10.4054/DemRes.2008.18.8 (It's open access.)
Traditionalism is more important than religious belief per se:
https://read.dukeupress.edu/demography/article/58/5/1793/178813/In-the-Name-of-the-Father-Fertility-Religion-and
In a subpopulation where Darwinism is traditional, it could be compatible with high fertility. We're not there yet though, so Darwinism is most embraced by people less rooted in tradition.
Can you steelman the case for and against AI Singularity? I personally think it's a big issue but have a hard time making others agree with me.
I had an old document doing this, but it's obsolete now and I haven't made a new one. Until then, I would recommend https://www.cold-takes.com/most-important-century/ . Note that the page itself is a summary but contains links to the full argument (eg the linked PDF)
Thanks for the link, I find this much more compelling than the misalignment type of concerns.
Isn't this a fully general case against caring about the future past a certain point? "Yes a bad thing will happen and it will have bad effects. But conditions will be different, it won't be THAT bad, etc." To be honest this reminds me of what people say about Global Warming. "A 2 degree increase by 2100" or whatever.
Or is that your position?
Yes, I am generally not concerned about problems that will only manifest themselves after the year 2100. I think lots of bad global warming will happen before then so it's still worth worrying about, but I do worry about it less insofar as I'm not worried about the post-2100 effects (although I would expect by 2100 we would have better climate tech options anyway)
You're what, mid thirties to early forties? Let's say you're 20 just to really rig the numbers. Average life expectancy for a male (I assume) is 79. Let's say you beat the curve and live to be 90. Congratulations. So you die in 2092. In that time the average temperature should go from 57 to 59-60 degrees. Roughly the same as the extremely prosperous Medieval Warm Period. What is the "lots of bad global warming" you're expecting?
I'm not opposed to mitigating global warming. Largely because if we don't stop the process it won't stop in 2100. But I'm not opposed to mitigating population decline either. And in both cases it's because I expect it to have effects after I'm dead. If I wasn't concerned about the effects that happened after I was dead I would be very upset that I'm giving up economic prosperity now for benefits I won't enjoy, being dead and all.
I think 2-3 degrees of global warming is all anybody expects; we're on a trend to decarbonization and should make it by the end of the century - see eg https://www.science.org/content/article/after-40-years-researchers-finally-see-earths-climate-destiny-more-clearly
The problem isn't that global warming will keep happening forever and ever, or that a 2-3 degrees warmer Earth is uninhabitable, just that there will be lots of ecological churn as everyone adjusts, and some of that ecological churn might involve mass famines, wars, water shortages, etc. I also think probably all else being equal warmer is worse than colder because colder countries sure do seem more developed (and tropical countries less so) in a very consistent pattern and I suspect this has something to do with parasites or something and worry it might cause globally decreased development.
Aren't most of these applicable to population? As an aging population empties out the interior of countries, rural resources like food could become harder to come by. While the US can continue to import workers, other countries cannot, and most of the countries supplying immigrants are food-insecure. This is already a problem in Africa, which we are seeing the effects of right now due to the war in Ukraine. Population churn can cause issues too. Unbalanced population pyramids cause instability, and that will push nationalist leaders towards desperate actions as the clock starts to "run out." Ideologues who support Putin and Xi both point to their respective nations' impending population decline relative to the west as rationales for wars.
More to the original point: What's your best case for not caring about the future after you expect to be dead?
Scott, I’m wondering if you’ve read this recent paper about taking seriously the tail risks with climate change?
I thought the paper itself was pretty well written and makes the case that the rational thing when faced with uncertainty is to take the tail risk scenarios seriously.
It swayed my thinking on these matters somewhat, personally.
https://www.pnas.org/doi/full/10.1073/pnas.2108146119
The possibility that 4 °C might get you 12 °C via cloud suppression is interesting...
"Such effects remain underexplored and largely speculative “unknown unknowns” that are still being discovered. For instance, recent simulations suggest that stratocumulus cloud decks might abruptly be lost at CO2 concentrations that could be approached by the end of the century, causing an additional ∼8 °C global warming (23)."
https://www.nature.com/articles/s41561-019-0310-1
The US south used to be quite parasite-ridden. But we acted to get rid of parasites. It's possible our civilizational capacity is much worse than in the 19th & early 20th century, but otherwise the wealthy & currently cold countries should be able to suppress parasites. I suppose COVID didn't make us look great, but we produced those mRNA vaccines in a very short amount of time (then our government delayed them & poorly distributed them). We could be making lots more vaccines, & rapidly if we wanted to.
> It's possible our civilizational capacity is much worse than in the 19th & early 20th century
I've been wondering about that after reading the below journal article. New York had two mass vaccination drives in 1947 and 1976 with roughly the same amount of resources and publicity and the 1976 program vaccinated a tenth as many people:
https://link.springer.com/article/10.1007/s10900-015-0020-6
On the other hand we didn't all die of swine flu in 1977 so maybe this was fine, I go back and forth.
By 2100 our technical means of mitigating, or fuck it, reversing global warming will be on a completely different level than what we have, or even can project, in 2022.
Global declining fertility and population ageing everywhere is not a dystopian scenario, a la global warming. On the contrary, it is a beacon of hope.
If we, the human species, are not able to voluntarily slow down and ultimately stop population growth, Nature will sooner or later do it on our behalf - and probably not in a pleasant way.
I agree that in the longer run, sub-populations with higher fertility will most likely keep total fertility high enough.
However, if the Amish grow a lot more, I would also expect them to change.
Just like Warren Buffett can make money at 20% a year as long as he's small, but he can't keep that up once he has an appreciable fraction of the total economy.
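A back-of-envelope version of the compounding point, with assumed numbers (~350k Amish today, ~3%/yr growth, i.e. doubling roughly every 23 years, everyone else held flat):

```python
amish = 350_000        # rough current Amish population (assumption)
others = 330_000_000   # rest of the US, held flat for simplicity
growth = 1.03          # ~3%/yr growth rate (assumption)

years = 0
while amish / (amish + others) < 0.10:
    amish *= growth
    years += 1
print(years)  # ~158 years to reach 10% under these assumptions
```

Which is the point: steady exponential growth gets a subgroup to a noticeable share of the population within a couple of centuries, and that is exactly when the Buffett-style constraints (land prices, contact with the mainstream, boiling-off) should start to bite.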
The Amish are changing: getting even MORE "plain":
https://westhunt.wordpress.com/2012/12/23/boiling-off/
Sounds very plausible. The effects I am talking about would kick in when they are eg 10% of the population.
(And, of course, the boiling off is very similar to what our esteemed host described for the Orthodox Jews.)
IIRC about 20% of Amish leave the community after the Rumspringa period. If they got a lot bigger and had much more direct contact with the outside population from population growth, I bet that would probably go higher (and their fertility rate overall actually does seem to be going down, even if subsets of them still have 7 children per woman).
Their defection rate has dropped over time.
https://westhunt.wordpress.com/2012/12/23/boiling-off/
That's referencing one particular unnamed Amish community, not the Amish as a whole.
It was a general statement about the Amish, with example numbers given from one community. If you think that one community has the secret sauce, perhaps they'll displace all the other Amish :)
Or converge with the others as they get bigger.
Do we actually have the study referenced? Commentators on that site weren't able to find it.
I'm very fond of the Amish and "Secular society commits civilizational suicide and the Amish just inherit everything by default." is the sort of comforting thing that I really *want* to believe.
I'm basically in agreement with your bias there.
I think this data point is coming from this (and associated studies by the same authors):
https://www.amazon.com/Amish-Paradox-Diversity-Community-Anabaptist/dp/0801893992
It's focused on one Amish community, but it's the largest one.
The pattern seems to be that the more conservative the Amish community, the lower the rate of attrition. What seems to have happened is that, as the Amish lifestyle becomes more and more distinct from the surrounding culture, the shock and challenge of leaving it is too great and fewer people do so. This might be contrary to intuition, but Amish numbers were kept tiny by very high attrition when their neighbors were mostly pious Christian farmers using roughly equivalent technology.
I think the weakest link for the Amish is if the surrounding society decides to destroy their culture. They're built under a presumption that the surrounding society will regard them with a combination of benign neglect and amusement, and if that society turns hostile, they don't really seem to have a defense mechanism other than hoping that by being good neighbors, they'll eventually be shown mercy.
"Rumspringa"? That sounds very Swedish, rather than German! How can that be? (rum=room/around, springa=run/sprint)
Sure, but "springa" is Swedish for "springen", so closer (Both are Germanic languages, so it figures....)
We should be trying to understand why it’s going down at all, because it might get worse. I think it’s because of poor diet and chemicals.
I think there's a lot of evidence it's by choice rather than because of infertility - most people who don't have children say it's because they're not trying for children, rather than because they're trying but they can't conceive.
Wouldn’t you the say the question then becomes why aren’t they trying? We’re all here because of an unbroken billion year old chain of organisms deciding kids are worth it. Seems odd so many of us have suddenly stopped and said “nah, MacGyver re-runs are on.”
Most people say they can't afford as many kids as they want.
What they mean is that they can't maintain the same standards of living while having children. Poorer people around the world and in the past have more children.
Yeah. But isn't that what "I can't afford something" typically means?
Like, I think a lot of people would say they "can't afford" a nicer car than they have not in the sense that it would be literally impossible for them to scrape together more money for a nicer car if it were for some reason a matter of life and death, but because they'd have to make painful cuts in the rest of their lifestyle. Or whatever. I mean, sure, they also "can't afford" a Gulfstream or to buy Twitter, but "I can't afford it [without making difficult cuts in the rest of my lifestyle]" doesn't seem like an unusual usage of the phrase.
To me this falls into a carrying-capacity argument, where you extend the terms as they are used for animals to the projected futures humans live in. Simply believing in a bleaker future at scale (strongly enough to change the market dynamics of having kids) makes having kids harder, because you believe they will be born into a world of austerity.
Most organisms don't have access to contraceptives. It seems evolution endowed us with a deep drive for sex, but the drive to actually have kids is weaker and more psychological, which makes it more prone to being disrupted by other high-level calculations (e.g. MacGyver).
Selection is now occurring so that, as Pinker quipped, people will have as instinctive an aversion to contraception as to snakes.
I don't know about that, but I do think some people have a strong desire for children and others don't. With contraception available, people who really want children will outbreed people who don't.
Anecdotally, the number of single moms I know who didn't want (or at least, didn't want right then) their kids is pretty close to the number of dudes I know who want kids but can't find a spouse.
Desire for kids plays a factor, but aversion to contraceptives seems to have had a much bigger impact in which group actually has children.
I hadn’t thought of that but given time that makes sense to me.
In addition to the contraceptive and economic points... a lot of it comes down to values: for a long time the messaging was that "having children is the most important and meaningful thing you can do with your life" (especially for women, but also for men), nowadays, while there's still pro-family messaging, there's a lot more "chase your own dreams, don't have kids unless you're super-duper sure you want them".
A lot of this is tied to religion, which tends to promote having families (at least Judeo-Christian "be fruitful and multiply"). It's not an accident that the two groups in America that are "out-multiplying" the rest are devoutly, conservatively religious.
The worst thing about having kids is all the great things in life which become much harder or impossible after having kids. Going out for dinner becomes tricky. Overseas travel becomes a nightmare. Wanna go scuba diving or skydiving? Forget it!
What do all those things have in common? Peasant farmers don't do them anyway, so they give up much less when they have children than an upper middle class westerner does.
I have relatives who've worked as au pairs and went on a date with a girl doing that for a job just a couple months ago. It's more common than you would think.
Are you a parent? I never found going out for dinner to be tricky (bring kids or get babysitter, both simple) and have travelled internationally with my kids almost every year. No nightmares. Some challenges, but none that severely taxed my patience or abilities.
Yeah I think the messaging around kids is pretty bad, and also bizarre. “If you have children in your twenties how will you be able to update spreadsheets/write JIRA stories for a corporation!”
Some of us are repulsive.
Very few are, and, not to get too Disney, most people I know who identify as such self-sort themselves into that group because they find it less terrifying than rejection. That aside, most people have a lot of value they don't realize.
Do most women have the number of kids they want?
Traditionally, they've had more than they wanted. It often killed them. It wasn't until 18th century France that anyone seriously tried to let women control the number. We see this in just about every society. As birth control becomes available, women have fewer children and material conditions improve.
Childbirth became really dangerous when doctors with unwashed hands started getting involved. Midwives had a much better record of mothers surviving.
https://westhunt.wordpress.com/2013/06/07/the-breeders-equation/#comment-14508
I don't deny that doctors getting involved made childbirth much *more* dangerous, but claiming that childbirth was ever safe in pre-modern populations seems like a stretch.
How does this argument account for high rates of maternal mortality in contemporary developing-world populations? (Sierra Leonean women even now have a 2.3% chance of dying each time they give birth.) Doctors in 21st-century poor countries are familiar with the germ theory of disease and surely aren't actively making things worse.
And how does it account for the fact that, on average, *men lived longer than women* in the majority of pre-modern-medicine societies? Granted, much of the higher mortality among childbearing women was due to the indirect effects of immunosuppression and increased nutritional stress during pregnancy leading to higher infectious-disease mortality--but pregnancy and birth were still significantly shortening women's lives, without doctors getting involved.
I expect that Sierra Leonean women have an elevated death rate (compared to the first world) from things other than childbirth as well.
In the majority of pre-modern-medicine societies, which were paleolithic hunter-gatherer societies, a combination of anthropological and genetic evidence suggests that an average of 40% of men would die from warfare, hunting or accidents before they ever reproduced. Childbirth could be dangerous, but women certainly did not have shorter life expectancies under those conditions.
(This is not to say that those statistics about maternal mortality in the industrial era aren't completely infuriating.)
I think several things can be true. Effective contraception and low infant mortality have reduced family size, especially in high-income countries. Pursuit of education and workforce participation has delayed childbearing for women. There is some evidence to suggest that people are having fewer children than they would ideally want, perhaps partly because children are expensive and maybe partly because of age-related infertility (?). But it is probably also true that people just prefer smaller families these days. I don't think we have great data on preferences and on how those preferences change over a person's lifetime, i.e. if you ask someone at 18 years old, 30, 40, etc.
You are right that women might have fewer children than desired due to costs in time, opportunity and money. It is definitely a matter of preference. Only a few of the younger people, men or women, I know want to have any children. Some do. One says she does want children, but admits that she doesn't like them. Talk to me about preference theory. They are, for the most part, still establishing their careers and finishing their educations, so time will tell. People are allowed to change their minds.
Surprisingly, media propaganda makes a difference in preferences for children. Soap operas aimed at women depicting smaller families in a positive light do reduce the fertility rate. This has been true in a number of countries. Obviously, there are all sorts of confounding factors, but human expectations have been managed towards an end. That's how Google, Facebook and the television networks make their money. If one grows up with certain kinds of stories, one frames one's life in their terms.
The Empty Cradle talks about how access to TV networks showing telenovelas with flighty women and unattached men predicted fertility declines in different districts of Brazil, in ways that couldn't really be explained just by economic factors. The same appears to be true at a global level.
No, seriously. If you went around to women today, in the modern times, with the modern contraception most women take, with the divorce rates and low marriage rates and most women working outside the home, and you asked them..."Do you have as many kids as you want, or more, or less"...
...what will they say?
I think that your concept of past women without any ability to modify birth rate is inaccurate, but mostly it is irrelevant to the question I am asking, which is about now.
In the developed world, women are on average having fewer children than they say they want. The demographer Lyman Stone has discussed this quite a bit.
Yeap.
It depends on their age and how many children they already have. Very few women want to have lots and lots of children. That's a rare thing and always has been. Nowadays, I'm guessing you'd get a lot of zeroes, a good number of ones and twos and a handful of more than that.
Most women want more than they have. You postulated a past where women had more children than they wanted; now we have created a present that hampers women's life choices in the opposite direction.
We should maybe do something about that.
No, they don't.
https://www.nytimes.com/2018/02/13/upshot/american-fertility-is-falling-short-of-what-women-want.html
On average, as of 2018, American women wanted 2.7 kids.
This is the problem with the culture war version of this debate - increased fertility doesn't need encouraging, it needs facilitating.
Perhaps, but facilitating higher fertility may require women to do things like have kids in their 20s and pursue careers or higher education later in life, which many women don't want to do. (To be fair, I think credential inflation is a grotesque problem for society in general, but it hits women harder due to menopause constraints.)
In any case, it is possible to want multiple things that are mutually contradictory at the same time. Culture can play a role in highlighting this fact.
Something that would be really useful to get a handle on is how many women plan to have kids in their mid-late 30s and then can't; I can easily imagine it being anywhere from 2% to 30% of missed wanted fertility.
Otherwise, the best things from a policy perspective are probably along the lines of making it easier to have kids and a career at the same time - expand/introduce paid maternity leave, and introduce reasonable adjustments-type rules for parents similar to the disabled, and affordable housing.
Child benefits that scale with education also occurred to me, but that sounds like it would have massive perverse incentives. Child benefits which scale with forgone income maybe? No idea how you'd calculate it.
The Nordic countries have been trying the "give women more maternity leave" trick for decades now, and their fertility rates are around 1.6 - maybe a little better than places like Korea or Italy, but not by much.
In principle you could just modify tax policy so that you pay higher taxes if you're childless and pay lower taxes the more kids you have (Hungary is moving in this direction), which in theory would properly incentivise fertility at the upper end of the class continuum. And... oh, I don't know, some kind of ritualised legal arrangement where women can legally claim 50% of a man's income, assets and child custody if they bear and raise his kids- crazy, I know. But the problem is that massive cultural change will be needed to actually generate support for these policies and make them stick at the social level.
GI bill for non-working parents to do a part-time bachelor's, master's or vocational qualification while raising their kids? Combined with subsidised daycare on-campus. CoL near universities might turn out to be prohibitive though.
Right now we have cultural/professional standards that actively discourage women from expressing a desire for bearing kids, as though being a man - or a woman who lived her life as though she were a man - was the only way to be successful in life.
We can change that. We should change that. And we can do it in a way that acknowledges the decreased advancement and skill atrophy when one's attention is on newborns, so as to not pretend that a man putting in 60 hours a week and a woman doing 35 hours on a flex plan are actually doing the same job and should be advanced and paid the same.
Burning through one's childbearing years to get into the C-suite in one's late 30s is one thing, but that is not actually a realistic path for most humans, and we should be more honest about that.
No disagreement there.
With the exception of Germany, it appears that they do not. Here is an article discussing this, and the implications of the gap between hoped-for and realized fertility:
Gøsta Esping-Andersen & Francesco C. Billari: Re-theorizing Family Demographics. Population and Development Review 41(1): 1-31 (March 2015)
Essentially, the argument is that countries with low fertility are in a time-lag-situation, before politicians realize there are votes to be got by introducing Scandinavian-type, or French-type, fertility-enhancing policies.
Not sure at all they are right! But the discussion is interesting.
...In addition to the distal/intermediate/proximate factors referred in the literature & listed in a post way below (none of them related to chemicals), there is also a more recent one : The hideous increase in housing costs, in particular in urban areas. (The Bay Area is not alone.)
I have not seen data for all countries (too much work, even if you are paid for this kind of work), but it appears that the amplitude in life-cycle debt is increasing everywhere. That is: each birth cohort of young people accumulates more & more debt early in life. And high debt in an uncertain world is very effective in dampening the wish to have children. In particular many children.
...Including the precious "3rd child", which is really the holy demographic grail. Fertility is going down everywhere, not primarily because people have stopped having children, but because too many women stop at No. 1 or No. 2.
People say they want 2-3 children, so all things considered you'd expect them to converge on that as the average fertility rate.
I think the reason it's lower than that mostly has to do with delayed household formation and older parental age, due to greater housing and education costs, fewer jobs right out of early adulthood where you can make a socially acceptable living, and (in the very low-TFR East Asian countries) some pretty brutal testing and education-prep regimes that require very intense parental time and resources.
Delayed parental age has a long history of being used to lower fertility. Most of the Northern European Marriage Pattern of lower fertility in the 1500-1800 period was due to people having children later than before.
Ultra-Orthodox Jews aren't having a lot more kids than mere Orthodox Jews because they have a more rigorously kosher diet. The Amish eat traif, but also manage to have lots of kids.
You just accept the population projections as fact, and this seems like a serious mistake. These projections have been wrong in the past, could continue to be wrong in the future, and one may make the case that they will predictably be wrong in the future. Our World In Data doesn't have a proof that their population projections are the most accurate projections possible given the information available in the present; they just have some model, the assumptions of which could be disputed.
That said, I agree about point 9.
My usual assumption is that projections like that are often slightly wrong but very rarely completely and utterly wrong - I wouldn't be surprised if the population in 2100 were 9 billion instead of OWID's 10.6 billion, but I would be really surprised if it were 3 billion (absent some catastrophe that makes projection meaningless). I don't think anything in this post hinges on the difference between 10.6 billion vs. 9 billion people. If someone has an argument that OWID could actually be off by orders of magnitude, I'm willing to hear it.
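One way to see why order-of-magnitude errors are implausible: a crude generational sketch (all numbers assumed; it ignores age-structure momentum, which would make a 3-billion outcome even harder to reach):

```python
replacement = 2.07                 # approximate replacement-level TFR
pop_2022 = 8.0                     # world population, billions
generations = (2100 - 2022) / 30   # ~2.6 generations of ~30 years each

def pop_2100(tfr: float) -> float:
    # Each generation multiplies the population by tfr / replacement.
    return pop_2022 * (tfr / replacement) ** generations

print(round(pop_2100(1.9), 1))  # ~6.4B: "slightly wrong" territory
print(round(pop_2100(1.4), 1))  # ~2.9B: needs TFR ~1.4 globally, starting now
```

Getting down to 3 billion requires the whole world to immediately and permanently adopt a fertility rate below today's most extreme national outliers, which is why the plausible error bars are measured in a couple of billion, not in orders of magnitude.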
My understanding is that population projections strongly rely on the assumption that developing countries will experience the same demographic transition as Western and East Asian ones.
But given that we don't know what caused the demographic transition, it doesn't seem like a reliable assumption. If it turns out it's related to religiosity or economic development in some way that's directly or indirectly tied to genetic differences, it might turn out that India and Sub-Saharan Africa follow growth trajectories more like the Amish and less like modern France.
India has already transitioned: TFR is 2.2 now and declining.
It's a lot easier to argue why these projections aren't as important than why the projections are wrong, and you're also arguing against a stronger position. Even if they're true, they're probably not catastrophic. If you just dispute the projections, you're implicitly saying they will be catastrophic if true.
I took the Amish projection to be tongue-in-cheek, like Mark Twain's Mississippi extrapolation.
Wordcels make this argument when they're horny and too shy to admit it.
The underpopulation argument, or the Against position? Between Elon Musk and Scott, who's the wordcel??
(Scott actually might be a shoo-in for the rare and elusive "word rotator", come to think of it...)
The underpopulation argument. Maybe it's fairer to say "people make the argument when they're horny." Scott happens to be asexual, unlike Musk, so evoking wordcelism probably was unnecessary and weakened the observation.
Besides demographic shift, my real worry here is the causes of declining birth rates rather than their effects. Are we too socially and emotionally broken to start families, or just too poor and financially insecure? Too individualist? All the options feel terrible, and I suspect these three are all true and interrelated.
I'm less concerned about this, see the graph in section 7.
In general, the poorest countries have the highest fertility, and the richest countries have the lowest. Within countries, richer and more educated people have fewer children, up until some very high number (I think it was a 7-digit salary or something, cf. Elon Musk).
My guess is that the main cause of the fertility rate being 2.5 rather than 7 is the existence of contraception and women having jobs other than child-rearing, and then the main cause of it being 1.5 rather than 2.5 is things we should actually be concerned about like education going on too long and large houses being too expensive.
There's also the matter of preference. Most women never wanted huge families. Given a choice, most choose to have fewer children. That's why people worried about racial purity work so hard against letting women have a choice.
Fertility differences across countries are driven by desired numbers of children among women. Women who aren't W.E.I.R.D. do want lots of kids.
That doesn't explain falling fertility rates and stated fertility preferences in Asia or Africa. There just aren't enough WEIRD women in Asia or Africa to account for this.
African birthrates have not fallen in line with contraception availability the way they did in the west, and immigrants to the west still have more kids.
Actually, it has. African incomes have been rising and birth rate has been falling. Most immigrants from traditional societies to the west have more kids than natives.
The "choice" precisely coincided with the option of having a comfortable, non-backbreaking-labor career. You're making extremely strong declarations and totally ignoring massive confounders.
There are still lots of women doing back-breaking labor and having fewer children. Raising children itself can be back-breaking labor. If having fewer children lightens the load, some women will go for the lighter load.
I have to strongly disagree. There is no level of technology where being the homemaker/child-raising parent is 'back-breaking'. More physical than the modern west, sure. Back-breaking, no.
Really? I think it's easy to underestimate how much easier technology has made domestic labour, even before the big advances of the immediate post-war period. Take something as simple as the fact that we can get centrally produced bread that survives without spoiling for some length of time, instead of someone having to rise incredibly early in the morning to bake it before others wake up.
I guess I have to agree. It probably isn't back-breaking, but it is grueling, exhausting work.
That last part is very similar to my "too poor and financially insecure". We should absolutely be concerned about having built a prohibitive economic and social model where people are having 40% fewer children than they would have otherwise.
Like, it’s a solvable set of issues, but it’s not easily solvable.
It's a hierarchical diffusion process. You find it within countries, as well as between countries.
Point is well taken about unreal years, but it makes me wonder - what is the farthest off real year? That is, the last year about which we can make meaningful predictions in most arenas? 2026? 2030?
How meaningful do you want your predictions to be? In 2019 we would have said “of course restaurants will be open next year and most white collar workers will work in offices”, and then they weren’t because of an unexpected event. But lots of other predictions were completely valid, and a lot of predictions have resumed by now.
Unforeseen events are of course possible as little as an hour or a minute from now. I am thinking of meaningful in terms of having some measure of specificity while still being correct, let's say, some majority of the time.
See https://astralcodexten.substack.com/p/biological-anchors-a-trick-that-might for a nice pretty probability distribution, although Ajeya has since updated to thinking things will happen sooner than that, see https://www.lesswrong.com/posts/AfH2oPHCApdKicM4m/two-year-update-on-my-personal-ai-timelines
Will check it out, thanks!
If we make it through the singularity, however exactly that looks, then (I know it may have been half a joke) I could see a world where some form of religion becomes highly selected. The basic tenets would be: don't do things that destroy the patterns that sustain humanity, like sexual reproduction being tied to fitness, having to struggle to develop, etc. I always call that Space Mormonism, but in a science-fiction future where there are still recognizable humans I see that as the most likely outcome.
I.e., the Dune world after the Butlerian Jihad.
Christopher Ruocchio's Empire of Silence is my favorite such adaptation. The church is much more active there and does the kinds of things they actually have to do in order to stop other singularities.
Given the reasonably convincing arguments regarding AI danger, and the occasional human habit of reacting to danger with considerable vigour, is there a point between here and the singularity where we make the collective decision to ban all AGI research on pain of death, and the destruction of your family unto the third generation?
I would bet that there is.
I mean… I would really hope not. And I can see lots of futures where that won't happen. Empire of Silence is my favorite from a storytelling perspective, but I hope we stumble upon some good defensive/stabilizing AI tech that makes all that unnecessary.
I also like how those efforts in Dune were ultimately doomed (you can't stop the Singularity!), necessitating a cosmic diaspora and a eugenics-plus-space-bunker sort of approach (which also would not have worked, absent the Space Magic of Future Sight).
Another point against the dysgenics issue is that to whatever extent the Flynn effect is QOL-based, it will be more powerful in developing countries, e.g. in Africa, which are also making more babies.
So even if the average IQ of the descendants of today's Americans will lower from dysgenics, it will be more than compensated for by 2nd/3rd gen sub-Saharan immigrants (already a smarter than average slice) going through the Flynn effect.
The Flynn effect is mostly not g-loaded, and we shouldn't expect high-IQ 2nd and 3rd gen African immigrants (i.e. the product of selective Nigerian et al. immigration) to be subject to it, because they're mostly more educated than even the average American. The Flynn effect is not about having more stuff; it's more to do with having a more stimulating childhood environment etc.
Also, 'quality of life' refers to patient wellbeing outcomes, not living standards generally.
I wish the singularity argument had been the first argument. The "Don't worry, it won't get bad until 2100" left me with my mouth hanging open, and then when it eventually got followed up with "and it won't matter by 2100" I was able to close my mouth, but I wish I'd been able to know up front that Scott wasn't trying to predict anything about population trends but merely making a point that, like everything else, he thinks population trends are inherently unpredictable on an 80-year timescale.
My guess is that the climate change consequences will make many places unlivable enough to result in extreme strife and significant decline in the population due to wars and shortage of basic resources caused by wars, not by climate change. And not by 2100, but more like by 2040, the way current extreme weather events are happening. Basically take what is going on in Ukraine and add one or two orders of magnitude. It won't be just a shortage of heating gas or wheat in Europe, it will be famine and lack of technological basics all over the world. Population decline will be a consequence, but a welcome one, rather than a cause for worry. This is assuming no drastic changes like AGI takeover.
I'm pretty skeptical - there's already been ~50 years of climate change, I don't think anywhere has gotten close to unliveable, and I would expect unliveability to happen gradually rather than have a sudden threshold effect.
I suspect that we may be hitting the conditions where hot weather becomes hot enough, for long enough stretches, to make non-AC existence impossible where it used to be bearable, and droughts become severe enough to make surviving until the next rainfall a challenge. This will not affect the US much at first, but many developing countries will feel it much earlier and much more strongly.
There's huge areas of Earth that are unlivable. Even if we discount the obvious ones like the oceans etc., you aren't going to get permanent inhabitation on huge stretches of current territory. Should be possible to see right now if the unlivable zone has expanded, by what criteria, and how much.
What current extreme weather events are you referring to? The number of hurricanes has been decreasing, contradicting the predictions of climate change theorists (despite their attempts at retcon), so I assume you're not referring to that. Or maybe you're talking about the last few weeks of heat waves? If so, would you then concede that a cold wave would be evidence against climate change?
Global warming causes weather extremes in both directions, actually. Or at least that is what the American Meteorological Society claims: https://www.ametsoc.org/ams/index.cfm/publications/bulletin-of-the-american-meteorological-society-bams/explaining-extreme-events-from-a-climate-perspective/#EEE-2020
I am not a climate change alarmist, and have repeatedly pointed out that hotter Earth means more life, but it pays to acknowledge that the road to more life will be marked with a lot of deaths.
So what _would_ be evidence against climate change?
Against climate change or against anthropogenic climate change?
Average global temperatures not going up.
Honestly, this whole debate has a weird tendency to veer off track. The four questions are:
1) Are global temperatures rising?
This is a simple empirical point, and they either are or they aren't.
2) Why are they rising?
This is a more complicated empirical point, and should be where most of the factual disagreement is because causation arguments for large-scale phenomena are hard to empirically test.
3) Can we prevent it?
This is a yes/no question with a price tag if it's yes. It's probably fairly straightforward to answer once you've answered 2.
4) Is temperature rising a good or a bad thing?
This is a mix of empirical and value-based questions, and the answer will probably be that it depends where you live and what you care about. Whether it causes extreme weather/melting sea ice is part of this. My prior would be weighted towards it being bad, as whatever temperature we currently have is probably what all our living patterns and infrastructure are optimised for (e.g. no one in Norway has air con). Bangladesh and Vanuatu look like they're pretty fucked, but the Canadians might be laughing all the way to the bank (or be conquered as lebensraum by America once the Sonora desert swallows Kansas).
> but the Canadians might be laughing all the way to the bank (or be conquered as lebensraum by America once the Sonora desert swallows Kansas).
Naw, our government is going to tank our economy so hard trying to stop the global warming that would be good for us that we'll end up petitioning America to annex us and pay off our debt, Scotland/UK style.
Yes, I'm well aware of the reframing of "global warming" as "climate change" in order to make it unfalsifiable, but I just wanted to double check that climate change theorists recognize that this is nothing more than a PR move.
If you claim that heat waves / higher average temperatures / more hurricanes are evidence of climate change, then it necessarily follows from Bayes' Theorem that cold waves / lower average temperatures / less hurricanes are evidence against climate change.
The only way you could get around this would be by redefining "climate change" to be a theory which does not actually make any predictions, so that literally any event that happens could serve as evidence for the theory. But in that case, there's not much point talking about it.
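For what it's worth, the Bayesian point here can be made precise in one line (a minimal sketch; take H = "the climate-change hypothesis" and E = "a heat wave occurs"). By the law of total probability,

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)$$

so P(H) is a weighted average of P(H|E) and P(H|¬E). If observing E raises the probability of H, then observing ¬E must lower it. The size of the downward update can be tiny when E was nearly certain anyway (the steelman of "weather is not climate"), but its direction is forced.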
Are you saying climate never changes? Or that average temperature changes do not count? Or that climate changes (as it obviously does at least on some time scales), but it's not anthropogenic? I am not sure where you are digging your trenches here.
I'm not saying any of those things. I am basically just lamenting the constantly shifting goalposts of climate change theorists. I want to know what predictions their theory actually makes, and I want confirmation that if those predictions turn out to be false, they will accept that as evidence against their theory.
What has happened for the past few decades is that a prediction gets made, the prediction turns out to be false, and then instead of doing Bayesian updating against climate change, the theorists subtly change the theory and then say "Ah, of course, we knew this would happen! Our theory is perfectly consistent with this!" This is a laudable political strategy, but not an actual truth-seeking procedure, because every time your predictions are wrong, you have to expand your theory to predict more and more things, leaving you with something without any predictive power. "The climate may or may not change, and it will have some sort of effect on hurricanes, and maybe it'll get hot, but not necessarily. If it gets hot, then obviously we predicted that and of course that is evidence for our theory, but if it gets cold, remember that weather is not climate."
Here is a Guardian/Observer article from 2004, describing a US DoD report about how we will all be doomed by 2020...
https://www.theguardian.com/environment/2004/feb/22/usnews.theobserver
"Climate change over the next 20 years could result in a global catastrophe costing millions of lives in wars and natural disasters..
A secret report, suppressed by US defence chiefs and obtained by The Observer, warns that major European cities will be sunk beneath rising seas as Britain is plunged into a ‘Siberian’ climate by 2020. Nuclear conflict, mega-droughts, famine and widespread rioting will erupt across the world... "
"Yes, I'm well aware of the reframing of "global warming" as "climate change" in order to make it unfalsifiable"
Good point.
...add confirmation bias: Every time there is a flood, or a fire, or a high-temperature day, newsmakers can say, or imply, that "global warming" is the cause/the single cause.
If you are a climate activist, this is great. If you (only) are a journalist chasing clicks, this is also great.
Meh, that seems like totally fact-blind alarmism to me. Yields are up, yields per acre are up. Where is this famine coming from?
And what resource shortages? Water? Where? At not that large an increase in cost you can just run desalination plants.
Living standards might decrease, that isn’t the same as places becoming uninhabitable.
It depends on how you view 40C temperature living.
At a two-degree increase, the only places with 40C are the places which would otherwise have 38C. If you can't deal with 40 then you can't deal with 38 either.
The temperature increases are not evenly divided. We're seeing much higher increases in the Mideast and North Africa already. Urban heat islands exacerbate things as the population urbanizes.
Work output falls when temperatures get over 25C and continues to fall as heat increases. LED lighting, for example, has improved factory productivity in India. Rising temperatures mean one has to use more energy for cooling or accept a lower level of productivity. I don't expect collapse, but I do expect changes in the way people live and increasing numbers of climate refugees.
We're already seeing this with the battles for control of the Himalayan watersheds and the disputes over GERD on the Nile.
I'm more skeptical. Warfare has declined, and it doesn't pay off well in the modern world of Sailer's "dirt theory".
It usually doesn't come down to warfare. We're seeing three nuclear powers jostling in the Himalayas. China has attacked India at least twice in the last five years, but there's no war per se.
War may not pay off the way it used to, but exercising military force is still useful. I was reading an article in Foreign Affairs recently that pointed out that Africa is still full of rebel militant groups, but that they rarely want to take over the state the way they would have up until maybe ten or twenty years ago. What they usually want is to benefit from some regional resource, a piece of the action so to speak. The central government has to decide if it is worth fighting them, possibly indefinitely and possibly resulting in regional resentment, or coming up with some division of the spoils.
I agree that warfare has declined, but as schoolyard bullies - and Clausewitz - put it, it often comes down to "You and what army?".
Pakistan
The theory isn't original to Sailer.
In terms of the Amish and the Orthodox: my understanding is the Amish can only exist because they occupy some of the world's most fertile farmland. Their lifestyle/economy can't scale. The same is true for the Orthodox community in Israel. The most common example is that they aren't subject to mandatory service in the IDF. An Israel without an IDF isn't an Israel for very long.
There's lots of fertile farmland not currently owned by Amish people, and there will be more every day as existing farming communities die out.
My chart was about the Orthodox in the US, who don't have that problem, although many of them are dependent on welfare.
Farming communities have been dying out for over a century. It's a matter of the rising value of the land. The Amish would have to purchase it from already highly efficient owners.
The "already highly efficient owners" are highly-efficent at producing commodity row-crops. The Amish can be highly efficient at greenhouses or produce or something, needing far fewer acres to support a household.
Somewhere in the past on this blog, I think there was a post with photos of what it looks like "when land is cheap and labor is expensive" (formations of wheat-harvesting combines) versus "when land is expensive and labor is cheap" (high-density, intensively tended produce).
The US has been abandoning farmland for decades:
https://data.worldbank.org/indicator/AG.LND.AGRI.ZS?locations=US
The Amish can keep expanding for a while
Also, the Amish are already transitioning to small-shopkeeper and craftsman lifestyles.
Big puppy mill practitioners too...
Made doubly awkward by the Amish being exempt from social security.
The Orthodox are fascinating people.
In Jerusalem, they have even managed to turn the streetlights off on the Sabbath, even on the motorway.
(And young boys throw stones at the cars of secularized, academic Jews driving from the University to Tel Aviv to get away, for the weekend, from the increasingly ultra-orthodox atmosphere prevailing among those who live in Jerusalem.)
As a Jew, I consider them antisemitic. One of the reasons Judaism has survived is that the religion has changed over the centuries. Abraham thought nothing of serving meat and cheese in a meal, and his son thought nothing of having two wives. As far as the orthodox are concerned, Abraham wasn't a proper Jew, which is an odd condition for the founder of a religion. This is what Jews get for writing down all that stuff. One either has to accept inconsistencies or just accept that a religion has to change with the times.
P.S. I thought Babylon 5 did a great job having a rabbi wearing a 20th-century suit and tie on board the space station. It was pretty funny in its way.
Thanks for your views on this.
I have no personal contacts with the ultra-orthodox myself. However I have, or used to have, friends (or at least acquaintances) at the University of Jerusalem, and noticed the stone-throwing at cars and the turned-off traffic lights on a visit. That was many years ago by now, but I would assume that Jerusalem has only become more dominated by the orthodox (with Tel Aviv perhaps becoming more secular?) since then; partly for demographic reasons, and partly due to internal migration (people sorting themselves into different cultural groups).
My colleagues at the University commuted to their jobs from Tel Aviv, as they found it socially difficult to live in Jerusalem. They told stories of friends being squeezed out of apartment blocks increasingly dominated by the ultra-orthodox.
It is rather worrying, and makes the cultural tensions between blue & red US states look like children's quarrels in comparison.
I expect the year 2050 to be a real year. I am so convinced of this that large fractions of my total wealth are in retirement savings, which will not be touched until after the 2050s.
Have you decided that saving for retirement is a waste of money? Do you tell small children that saving for retirement is a waste of money? Do you plan to take out a large mortgage, on the expectation that you will never have to pay it back?
Is there any prominent person you would be willing to take a bet with, at any odds, on the year 2050 being post singularity?
I maintain some retirement savings (and I'll be retiring before 2050), but probably less than I would if I were 100% sure post-2050 years would happen. I would not recommend anyone take financially risky actions based on uncertain probabilities.
There are some complicated issues around bets lasting 28 years and where if one side is right they'll be too dead to enjoy it. If you want to come up with some structure to bet anyway, then sure, whatever, I'll take it. See https://www.econlib.org/archives/2017/01/my_end-of-the-w.html for how this might work.
Using the same implied interest rate, that would be $100 given to you now, in exchange for $450 (CPI adjusted) should the world still exist Jan 1st, 2050. The terms are essentially the same as the Caplan-Yudkowsky bet, but with different end dates, and all cause end of the world.
My memory is horrible, and I'm not likely to remember this over the next year, much less the next thirty. As such, a condition of this bet is that it be posted somewhere on your website, and any successor website.
My contact details won't necessarily remain stable over such a long time period. In the event that the world exists but I can't collect, donate it somewhere. I'd prefer an EA public health cause.
I don't think a neutral third party judge is necessary. If the world has ambiguously ended, I've obviously lost. If either one of us cares about money, I've almost certainly obviously won.
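A quick sanity check on the implied interest rate in those terms; the $100, $450 (CPI-adjusted), and 28-year figures come from the thread above, and the arithmetic sketch is mine:

```python
# Growth rate implied by exchanging $100 now for $450 (CPI-adjusted) in 2050.
years = 28  # 2022 to Jan 1, 2050

implied_rate = (450 / 100) ** (1 / years) - 1
print(f"implied annual (real) rate: {implied_rate:.2%}")  # ~5.52%

# Reverse check: $100 compounded at 5.5% over the same horizon
print(f"$100 at 5.5% for {years} years: ${100 * 1.055 ** years:.0f}")  # ~$448
```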
I'm willing to do this if it will make you update about the honesty with which I am asserting this claim, but it does sound like a lot of work, I'm not sure the sums of money involved will be meaningful to us, and there's a pretty good chance we both forget about it. So if it's all the same to you I lean towards no.
But if you still want to go ahead with this, send me an email at scott@slatestarcodex.com and I will tell you my PayPal address which I don't want to post here.
Having had some time to think about it, I'm going to back out. It isn't a meaningful amount to either of us, and an anonymous commenter like myself doesn't have enough reputation for a symbolic bet to mean anything, unlike the Caplan-Yudkowsky bet. With a bet large enough to mean anything, my main concerns are that we both forget, or that my email address changes.
Aside:
If you want to receive PayPal donations, but not publish an email address for it, PayPal can provide a link instead via their PayPal.me service:
<https://www.paypal.me/>
Then an arbitrary name after that URL: /acxscottalexander or whatever.
"There are some complicated issues around bets lasting 28 years and where if one side is right they'll be too dead to enjoy it. If you want to come up with some structure to bet anyway, then sure, whatever, I'll take it. See https://www.econlib.org/archives/2017/01/my_end-of-the-w.html for how this might work."
Neat! So the tl;dr is that the person betting that the world will end receives an initial payment, and if the world doesn't end, makes a payment back to the person betting that the world won't end.
I will take a version of this bet. I will give you $10,000 upon agreement, and if the world still exists on Jan. 1, 2050 you have to give me $90,910. That implies odds of 11%. Making this bet with an implied interest rate of 5.5%, as in your link, indicates a misunderstanding of capital growth. These terms are beneficial to you if you believe there is a greater than 11% chance the world ends before 2050 by any means.
If I am misunderstanding what you want to bet on please let me know and I can re-evaluate the bet. I prefer this structure due to its settlement simplicity.
I can have lawyers draw up a contract, but the primary risks to me are your death or inability to pay, so that would have to be dealt with somehow.
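One way to reconstruct the 11% figure and the capital-growth point, under my reading of the terms rather than the commenter's own derivation:

```python
# stake is paid to the bet-taker now; payout is owed back if the world
# still exists on Jan 1, 2050.
stake, payout, years = 10_000, 90_910, 28

# "Implied odds of 11%" reads as the stake-to-payout ratio:
print(f"implied odds: {stake / payout:.1%}")  # ~11.0%

# Growth rate at which the stake alone compounds to the payout; the
# capital-growth complaint is presumably that this (~8.2%) is a more
# realistic long-run return than the 5.5% used in the linked bet.
breakeven = (payout / stake) ** (1 / years) - 1
print(f"growth rate equating stake to payout: {breakeven:.1%}")  # ~8.2%
```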
I can think of slow takeoff scenarios where humans are largely obsolete, capital is not, and capital ownership is respected. In that case, you would want to have savings, especially because you won't be able to work for money.
Let's say for the sake of argument that the world has a 4% chance of ending per year (so in expectation, we have 25 years to go). How should that impact your investment decisions?
If you are risk-neutral, this looks like a 4% decay in the value of a dollar per year. A dollar in 2023 is only 96% likely to be spendable, so it should only be 96% as valuable as it would be without the X-risk issue. For investments, this is equivalent to a 4% drag on returns. That is substantial, but not enough to make investing a bad idea, especially in the medium-plus term.
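A minimal sketch of that arithmetic, assuming risk-neutrality, a flat 4% annual hazard, and an illustrative 7% market return (the 7% is my number, not the commenter's):

```python
# Expected value of $1 invested under a constant 4% annual chance that
# the world (and hence the investment) ceases to exist.
p_survive = 0.96
market_return = 0.07  # illustrative assumption

# The hazard acts like a ~4% drag on returns:
effective = (1 + market_return) * p_survive - 1
print(f"effective annual return: {effective:+.2%}")  # ~+2.7%

# Survival probability and expected value of $1 after t years:
for t in (5, 10, 25):
    print(t, f"survival {p_survive ** t:.0%}", f"EV ${(1 + effective) ** t:.2f}")
```

In expectation investing still beats holding cash, just less dramatically, which matches the "substantial but not disqualifying" conclusion above.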
Good point! Independent of AGI, I've read estimates that the odds of a Carrington event are roughly 1% per year, and (unless the power grids and a lot of other critical electronics are armored) that is economically comparable to a full nuclear exchange.
IMO the slightly higher standard of living for the next few decades isn't worth the risk of relative destitution in a non-singularity 2050+.
So Idiocracy is real?
See section 7.
Re. "if innovation is destined to be only 10% of its current level in 2100, then a 30% population decline could lower that to 7%.":
Because of our high population, the bottleneck in tech progress isn't creativity and innovation, or even money, but attention. You can see this in government funding: Grant-funding agencies are limited less by budget than by the number of administrators who can oversee contracts. Projects that cost less than half a million dollars a year are often ignored, sometimes (in my personal experience) to the point of the administrator not bothering to read project reports, answer emails, or attend meetings.
You can also see this in the proportion of media attention given to elite colleges. The ratio of students in non-elite to elite colleges has increased by about a factor of 10 since 1950, yet the ratio of media references to graduates of elite vs. non-elite colleges seems to have grown. Go through any issue of WIRED magazine (well, any issue from the 1990s, when I still read it) & see how many articles you can find about research by people who didn't attend MIT, Stanford, or Carnegie Mellon. The proportion of Nobel prizes given to graduates of top-20-in-field colleges has also increased radically since about 1960; in physics, it went from something like 50% in the first half of the 20th century, to 100% after 1970 (last I checked). The ratio of venture capital given to graduates of elite colleges vs. people with no college education was small in the early 20th century; today, it might be infinite.
We have a glut of smart people, yet if anything there are fewer super-smart people at the top. There are AFAIK no contemporary equivalents to Einstein, Turing, von Neumann, or EO Wilson.
No matter how big the population grows, each person still has enough brain space for the same number of new ideas. The only way we have now of countering this effect is to continually fragment into more and more isolated specialties. But this renders everyone less and less capable of recognizing creativity and intelligence in the wild.
So I do not think a decrease in population will decrease effective innovation. It might even increase it.
If attention were really the limiting resource, wouldn't grantmakers just decrease the amount of scrutiny they gave each grant, accepting worse ones slipping in as the cost of being able to buy more lottery tickets?
You're imagining that grantmakers are incentivized primarily to achieve great results. But no bureaucrat I've known would rather administer a large number of shoddy grants by people who don't follow instructions well, than a small number of good grants.
Also, with government grants, deciding whom to give the grant to is just a small part of grant administration.
Also, completing the grant doesn't give you any technological advances. Somebody has to figure out which of the finished grants are worthy of being shepherded along towards further funding, marketing, deployment, and adoption, whether that's the original funding agency, the institution that received the grant, other researchers, business partners, or venture capitalists. Pushing more of the filtering further down the pipeline would cost more money and make more work for everyone, and it isn't obvious that it would give much better results. Everybody at every step of the process is already overloaded.
That's right. A lot of grants are administered by people who were researchers in the field, many of whom will return to research after their stint as a bureaucrat finishes. (I know that's how DARPA and several other agencies work.) A lot of the work involves coming up with a promising program to promote and then fighting for the resources to do so. You want your program to be a success both as ammunition for further funding, but also because you are likely to be a researcher working on its sequel.
That's the bottom up view. There's also the top down view as the overall agency makes its own course corrections, sometimes dictated by internal decisions, sometimes as directed by an overseer which may be Congress, the Executive or a "customer" like a branch of the military.
E. O. Wilson was an unusually good writer for a scientist, but within that class not "super smart". A physicist might dismiss the main focus of his work as "stamp collecting", and his most notorious work was essentially just a popularization of Trivers.
EO Wilson studied many species of ants in great detail. But he didn't just gather data. He used that data, with experimentation, to develop a quantitative theory relating ecosystem area to the number of species it could support (the theory of island biogeography). He also used it to develop a general framework for using evolution to relate social behavior to environment and to life strategies (sociobiology, though certainly this was influenced by Darwin's speculations about evolutionary psychology), and to expose the flaws in the arguments used against group selection (for instance, that haplodiploidy is not, in fact, correlated with the development of eusocial behavior, whereas intergroup war and cooperative military defense are). He was active in measuring and projecting biodiversity, and in his later writings he related sociobiology to the evolution of culture and art. He thought big, and his pursuits ranged far beyond ants, evolution, and biology.
/Sociobiology/ used ideas of Trivers at least in the areas of sex ratios, the use of evolutionary psychology in the study of the evolution of altruism, and parental investment. But the papers by Trivers that you're talking of probably amounted to less than a hundred pages. /Sociobiology/ was nearly 700 pages. It extended Trivers' abstract, mathematical ideas to field observations of a wide variety of species across the entire phylogenetic range of complexity from bacteria to humans. It also covered topics such as behavioral scaling, group size, energy budgets, cognitive control architectures, cultural learning, socialization, evolution and optimization theory, evolution and communication theory, territoriality, dominance, castes, and in general the entire field of ethology. It unified all these things in a general framework which Trivers, AFAIK, never conceived of. None of it could be called a "popularization" like /The Selfish Gene/.
(To be more specific: Sociobiology refers to Trivers on p. 114, 120-124, 311, 317-318, 325-327, 337, 341-344, 416-418, 551, 555, and 563. This is comparable to how often he refers to RD Alexander, SA Altmann, RJ Andrew, EA Armstrong, JS Bernstein, WH Bossert, MV Brian, JL Brown, CG Butler, CR Carpenter, JH Crook, Darwin, DE Davis, I DeVore, Mary West-Eberhard, I. Eibl-Eibesfeldt, JF Eisenberg, RD Estes, R Fox, K von Frisch, V Geist, KRL Hall, WD Hamilton, CP Haskins, RA Hinde, B Hoelldobler, Alison Jolly, JH Kaufmann, H Kruuk, H Kummer, D Lack, Jane Goodall, R Levins, RC Lewontin, M Lindauer, Konrad Lorenz, RH MacArthur, PR Marler, WA Mason, John Maynard Smith, L David Mech, CD Michener, GH Orians, FE Poirier, Thelma Rowell, SF Sakagami, G Schaller, TC Schneirla, TW Schoener, JP Scott, CH Southwick, TT Struhsaker, WH Thorpe, Niko Tinbergen, DW Tinkel, SL Washburn, WM Wheeler, W Wickler, George C Williams, and VC Wynne-Edwards (who admittedly was quite wrong about group selection, but still provided many useful observations).)
I am biased. Once I sent him a grant proposal which I think must have sounded insane to literally every human on Earth except for me, and him. IIRC it was to apply sociobiology and ethology to outline parameters that could create an ecosystem of artificial intelligences, in which the usual interlocking dependencies and feedback mechanisms of ecosystems and animal societies would encourage cooperation and eusociality rather than the violent "there can be only one" scenario being pushed by "AI safety" researchers.
He called me on the phone and said something like, "Look, I don't have time for this grant proposal, but reading it was a breath of fresh air. I've been stuck here at Harvard for days, and I just want to talk with someone intelligent for a change. Have you got the time?"
We talked on the phone for half an hour, and then he had to go. I admit that I think him intelligent partly because he thought me intelligent. I realize that isn't, by itself, actually evidence for the intelligence of either of us. But I can't help interpreting it that way.
His ideas on quantifying species diversity are one area where he's actually not that reliable, and he argued rather poorly in response to criticism.
https://razib.substack.com/p/david-sloan-wilson-and-charles-c#details
See the later bit with Charles Mann (and I've derided some of Mann's work in a different area on that very blog: https://razib.substack.com/p/charles-c-mann-1491-to-2021/comment/3816402 https://twitter.com/TeaGeeGeePea )
I appreciate the reference, and I'd listen to it if I could download the MP3, but I gotta say that "spend an hour and a half listening to this for a chance to be disillusioned with your hero" isn't a great sales pitch. Also, I've seen a lot of intense criticism of EO Wilson -- no modern biologist has attracted more criticism -- and every bit of it that I've seen to date, from the attacks on Sociobiology by Gould, Lewontin, and the political left, through the attacks on group selection, to Scientific American's "obituary" of him, was at best wrong, and at worst outright evil.
David Sloan Wilson is one of the people interviewed and he worked rather closely with Ed on things like multi-level selection.
Well, then at least you admit that Trivers was no stamp collector.
Definitely. Unfortunately, some of his characteristics made it difficult for him to hold down a job, something I was unaware of even after reading his very self-exposing book on self-deception:
https://entitledtoanopinion.wordpress.com/2021/09/24/the-folly-of-fools/
Yes, Trivers lived up to the stereotype of "difficult person = genius".
You mention his view on self-deception as something negative... I do not know about a book, but my own views on human evolution, including the genesis of "genuine" altruism among us humans, have been very influenced by his article on self-deception (it's related to the evolution of self-binding behavioral traits), written with William von Hippel in 2011:
The evolution and psychology of self-deception. Behavioral and Brain Sciences (2011) 34, 1–56. doi:10.1017/S0140525X10001354
It was not my intent to negatively characterize his WORK on self-deception (though some stuff in the book is questionable). Rather, I was saying that he portrays himself as a surprisingly irrational person for a scientist who studies self-deception. When Robyn Dawes or Kahneman & Tversky were looking for examples of irrational behavior, they pointed to many people they had worked with over the years. Trivers uses many things he himself has done. One could chalk some of that up to unusual honesty, but I really doubt those other authors have done most of those same things.
Another scenario that could turn the population dynamic upside down would be the discovery and mass availability of rejuvenation therapies. I don't think biological immortality will cause overpopulation, but the population pyramids will start to look *really* weird.
It's comical to me that people really think depopulation is the big problem facing us. The world population is almost 2.5 times what it was when I was born. Many of the actually big problems facing humanity, like climate change, species extinction, ocean pollution, etc. are automatically improved when you have fewer humans causing them. I think the real fear is that the Ponzi economy is unsustainable unless there is constant population growth (= more people joining the scheme). But all economies eventually fail; just try to spend a sestertius now. So I can't see that short-term pain as some existential crisis.
That's a good point. For example, people argue that we need a higher population so that there are enough people to take care of the aging. That's great, but that implies indefinite population growth. When we have 50 billion people, we'll need another 10 billion caretakers, and when we have 500 billion people, we'll need another 100 billion caretakers and so on. It might be better to focus on ways to get by with fewer caretakers or freeing up people doing other jobs to become caretakers. Our creativity is less limited by our brain power than by our embedded power structures and mental models.
But all you need to create a population that can take care of its elderly is a stable 2.1 birth rate.
I've always thought of low population density as basically a good thing - a UK with about 5 million people in it sounds much nicer than one with 70-80 million. You'd need an Eskimo attitude to the elderly to make it sustainable though.
I agree (about the first bit), it already feels crowded. I don't fancy stuffing another 1E6 people in here.
Pensions are a Ponzi scheme. Growth on its own isn’t. Changes in technology that allow higher economic growth are not the same as investing in tulips or crypto.
Pensions are a Ponzi scheme in that goods and services for people who no longer provide goods and services are provided by people who still do. That game is going to continue until everyone is immortal.
You are on the mark saying that it is one thing to invest money in productive capacity and another thing to invest money in financial instruments.
Ponzi schemes are schemes that require continual _growth_. An actuarially sound pension plan in a stable population is perfectly possible. It just has to avoid overpromising - either if it is based on investments in productive capacity or on transfer payments.
Agreed. Re "the Ponzi economy is unsustainable unless there is constant population growth", my view is that any Ponzi scheme is doomed from the moment it is conceived.
"I guess it’s still true that if innovation is destined to be only 10% of its current level in 2100, then a 30% population decline could lower that to 7%."
But if it's educated / high-IQ people that have the lowest fertility (such that they're disproportionately responsible for the 30% decline) wouldn't that imply that innovation drops lower than 7%? In other words, the proportion of potential innovators would get smaller over time. This effect might even be stronger at the tail end of the IQ distribution: a small relative decrease in high-IQ individuals would mean a strong relative decrease in potential superstars.
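The tail-sensitivity point is easy to illustrate with normal-distribution arithmetic; a sketch assuming IQ ~ N(mean, 15) and an arbitrary "potential superstar" cutoff of 145 (both assumptions mine):

```python
from math import erfc, sqrt

def frac_above(threshold, mean, sd=15.0):
    """Fraction of a normal population above a threshold (survival function)."""
    z = (threshold - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

base = frac_above(145, 100)     # ~0.13% of the population
shifted = frac_above(145, 97)   # the mean drops by just 3 points
print(f"{base:.4%} -> {shifted:.4%}, ratio {shifted / base:.2f}")  # tail roughly halves
```

So a 3-point drop in the mean roughly halves the +3 SD tail, which is the "stronger relative decrease in potential superstars" effect.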
Yes. It's wrong to imagine total population is the strongest predictor of innovation. Israel is more technologically innovative than the whole of Africa.
What proportion of those potential innovators do we utilise right now? A lot of the world still doesn't have access to good education or meaningful opportunities to innovate. It strikes me that even with a declining population, we could easily raise the number of innovators simply by educating more people and giving them the space to work.
"A lot of the world still doesn't have access to good education or meaningful opportunities to innovate."
Most people capable of significant innovation do, because they will tend to have the sort of high IQ, high openness parents that enable opportunities.
This effect might be offset by the effects of [assortative mating](https://en.wikipedia.org/wiki/Assortative_mating).
If there is no assortative mating, smart people have kids with less smart people and we would observe a regression to the mean in IQ.
If instead there is a large assortative mating effect (as I think there is), we would expect the variance of intelligence across the population to increase. This would mean that the number of very-high IQ people may remain constant or even increase despite a slow decrease in the mean.
Whether this is good or bad for society depends on whether you think the mean or the 10th-percentile is more important for determining welfare.
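Extending the same tail arithmetic as the sketch a few comments up: a modest variance increase can outweigh a falling mean at the far tail, while still hurting the lower percentiles (illustrative numbers, not estimates):

```python
from math import erfc, sqrt

def frac_above(threshold, mean, sd):
    z = (threshold - mean) / sd
    return 0.5 * erfc(z / sqrt(2))

print(f"mean 100, sd 15: {frac_above(145, 100, 15):.3%} above 145")  # ~0.135%
print(f"mean  98, sd 17: {frac_above(145,  98, 17):.3%} above 145")  # ~0.29%

# The same shift moves the 10th percentile (mean - 1.2816 * sd) from
# ~80.8 down to ~76.2, which is why the mean-vs-10th-percentile
# question above matters.
```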
Robin Hanson thinks falling fertility is *The* Big Problem of our times, but mostly in a total-utilitarian sense* rather than a looming disaster sense:
https://www.overcomingbias.com/2010/11/fertility-the-big-problem.html
He has also dismissed writings about technological unemployment. I'm also not too concerned about that, both because of the evidence he's presented about recent history & the near future, and because if we do achieve it, our civilization must be succeeding somewhat.
*After writing that I rechecked his post and saw that he thinks that lowering economic growth via lower population & knock-on effects would reduce our robustness to a variety of existential risks. So heightened probability of disasters, but not looming directly from lower populations.
"if we can’t genetic engineer superbabies with arbitrary IQs by 2100, we have failed so overwhelmingly as a civilization that we deserve whatever kind of terrible discourse our idiot grandchildren inflict on us"
I find it annoying when arguments throw up their hands and declare that we "deserve" bad outcome B if bad outcome A happens. We should try to be robust to bad outcomes. An example of Robin Hanson thinking about that for a low-probability but severely negative outcome is here: https://www.overcomingbias.com/2008/07/refuge-markets.html
I should also note I'm much less convinced that the singularity will happen that quickly. If it happens in my lifetime, that will probably be thanks to life extension. 2100 is still far enough off I don't want to confidently speculate about it though.
Also: even supposing that a singularity fails to happen - that the inevitable unpredictable historical twist turns out to be that not much changes - aren't we actually going to need a stable human population at some point? I mean, we can't just keep growing geometrically century after century, right? Why, given a future without much technological progress, would it be better to put off that stabilization until a future generation? Presumably, they would face the same kind of negative consequences from falling population growth that we do today, only more people would be affected.
There seems to be this idea in the air that humanity can escape any future need for population stabilization with space colonization, but aside from just delaying the problem further, that seems to ignore just how profoundly terrible it would be to live in an extraterrestrial colony in the absence of radical technological change. Mars isn't the American West - perfectly suited to human life and covered in recently abandoned farms that smallpox-resistant colonists could pretend were untamed wilderness - it's a hellhole. Glorious sci-fi dreams and the pioneering spirit would only obscure the lived experience of that for so long, and you can't spend centuries dropping asteroids on the planet for terraforming when there are people already living there.
If some future generation is forced into space - or even into the uninhabitable parts of Earth - by overpopulation, that would strike me as a profound failure of humanity.
I think the ideal human habitat isn't a Mars colony, it's a ring-shaped space station. Disassembling (say) the Moon would provide enough raw materials to build a very very very large number of these.
You need colonies on somewhere mineable to actually do that, though, unless you solve AI alignment and have the AI do it for you.
Asteroid colonies are probably better than Mars, though; much easier to export things, and you can dig deep to avoid the radiation because there's (much) less gravity.
>There seems to be this idea in the air that humanity can escape any future need for population stabilization with space colonization, but aside from just delaying the problem further,
There are non-ruled-out cases in which this delays the problem beyond the time at which the universe becomes uninhabitable. In particular, the size of the universe is not known and may be infinite or >10^10^100 [units can be atoms or planets, conversion factor disappears into the error of the exponent anyway]. The size of the universe reachable at sublight is known to be smaller than that, but there's no known impossibility theorem for FTL.
Scott, it seems the answer is ‘yes’ to some degree based off the comments, so my apologies if I’m misinterpreting you, but do you think we all have a very high chance of dying after AGI debuts? Innocently earnest question, I know it’s been asked a million times around these parts to different people and answered in million different ways, but I’m curious what your take-it-or-leave-it answer would be today at this very moment.
This would seem like a legitimately preferable scenario if AGI turns out to be Very Bad.
Pretty high, yeah, I would say maybe 60% chance we're dead 30 years after the first super-human-level AI. Really low confidence in that number, some people I respect a lot say much lower or much higher.
Hmm. Probably best to just grill on the weekends, grind during the week, enjoy life, and not think about it for now😎
Why not advocate for a Butlerian Jihad, then? I acknowledge it wouldn't be an easy task - it would need to be a global movement encompassing at least all first-world countries and China, etc. - but with those odds, would seem worth a shot!
Might be worth trying a peaceful movement to lobby for a global ban on overly capable AI, maybe by way of tightly regulating hardware.
https://www.metaculus.com/questions/10965/us-compute-capacity-restrictions-before-2050/
Because this community doesn't have enough power and influence to do it. In practice it's already being pretty much dismissed as a crazy cult, starting to seriously campaign for even more outrageous notions certainly wouldn't improve matters.
When it comes to notions, "we should in fact not develop an AI that has a better-than-half chance of killing the whole of humanity" is in fact not all that outrageous, at least compared to "we should develop that AI".
But nobody currently developing AI believes that it's 50% likely to kill the whole of humanity, or anywhere close to that. They are understandably annoyed when some crazy cultists imply that they are in fact very likely to directly bring about the destruction of humanity, and they would push back very hard against this notion if it ever came close to truly threatening them; importantly, so would the moneyed interests that sponsor them.
The question, in this instance, was to Scott, in particular, who did indeed express a 60% confidence.
I mean, it seems like a real possibility we could all kill ourselves with AGI even if it's not likely to kill us all. I don't think it's crazy to be worried at all--what do I know, though? I don't work in tech.
> Why not advocate for a Butlerian Jihad, then?
I admit I've often thought about this.
I think it is pretty much guaranteed to fail. You would basically need global monitoring of software development. The world wasn't even capable of enough agreement to keep fissile isotopes out of Kim Jong Un's hands, and those are rare, tightly controlled materials. Computers, on the other hand, are pervasive.
It might even make the overall survival odds worse. This would require prying so much sovereignty out of the hands of powerful nations that the odds of provoking a cataclysmic war might outweigh the reduction of risk from the AIs.
To me, the whole singularity schtick is admittedly too much "speculative sci-fi", but for the sake of argument: assume AI kills us all. Is that necessarily bad, since an AI with that capability is likely to be better able than us to further explore the Cosmos?
...Why not regard this (to my mind rather improbable) event as a Passing the Torch moment to a superior species, rather than something to be sad about?
Did GPT-3 write this
"I notice it’s weird to be worried both that the future will be racked by labor shortages, and that we’ll suffer from technological unemployment and need to worry about universal basic income. You really have to choose one or the other."
I can think of a couple of ways to reconcile these worries. For one thing, you can imagine someone's concern about the increased burden of support for the elderly to be primarily about direct physical support: nursing, caregiving, etc. Even granting a large population of unemployed people interested in working, it doesn't have to follow that enough of them will be interested in working in the narrow field of personal care.
Beyond that, even if we limit the discussion to worries about the increased burden of *financially* supporting the elderly, these worries need not be incompatible with worries about technological unemployment, either:
I assume you'll agree that fears of technological unemployment, by and large, cannot be about unemployment leading to resource scarcity in the aggregate, since by definition, firing human workers to replace them with more productive machines leads to more resources in the world, not fewer. Rather, then, these fears surely must be about the distribution of these resources: the increased productivity accruing to the owners of the machines will not flow to the displaced proletariat without some sort of redistribution (e.g. the universal basic income that you alluded to).
I further assume that you will, if not fully agree yourself, grant at least the reasonability of the proposition that the responsibility of supporting the elderly is not spread equally across society, but devolves primarily to those closest to them, mainly their children. (I'll further point out that if you choose to challenge this proposition while granting the first, *you'll* be the one in danger of inconsistency, since the problem of redistributing income across society to support the elderly is surely no different in kind from the problem of redistributing income across society to support everyone in need of support. If worries about technological unemployment are predicated on pessimism about society's ability to adjust its redistribution systems fast enough to support the unemployed in general, it's surely consistent to be likewise pessimistic about its ability to do so to support the unemployed elderly in particular.)
Thus, to me, it appears quite consistent to fear that in the future, many more jobs will be performed by machines, the rich will grow richer, but others will struggle to support themselves at all, let alone their aging parents, the burden of whose support they will now share with fewer siblings.
Beyond the problem of low-fertility states with large entitlement programs (which won't be sustainable with fewer future taxpayers,) as I've written elsewhere: "While people with kids and people without kids can all be responsible, only people with kids have a special connection to the well-being of successive generations: the double helix. And with that special connection comes a stronger incentive to act in accordance with that special concern for the well-being of future generations, generations that will include one’s own children if you have them."
https://paultaylor.substack.com/p/skin-in-the-game-part-2
What about people with at least 2 nephews? A nephew shares roughly a quarter of one's genome, so two nephews give as much genetic connection to future generations as somebody with an only child has.
So what blog post should we start at if the idea of an AI technological singularity makes us laugh, because even the height of well-kept technological infrastructure is at most akin to the brain of a dying Huntington's patient who has just been shot in the head with a nail gun several times?
Can you rephrase your question? I'm not sure I understand.
I believe the question is "where can I get a primer into AI risk that is laid out to convince strong sceptics that there is a threat at all?"
The semi-things to "worry" about are that a) the way most retirement systems are funded is with a tax on workers (if they were funded by a VAT, things would look different), and b) the wage taxes paid into the US retirement fund have not been enough, so tax revenues (wage or value-added) need to increase unless benefits change.
Metaculus: “10% embryo selection for IQ: when?”
https://www.metaculus.com/questions/9785/10-embryo-selection-for-iq-when/
It seems like we shouldn’t worry about declines in IQ when technology can start driving the reverse trend
Maybe this isn't the place to bring it up, but I don't think that when embryo selection arrives and we get a better chance to process what it means, we will be prioritizing IQ as the trait we select for. With embryo selection there are always tradeoffs, so if you put all your points into one attribute, you have to ignore the rest. I think it will be common for people to select embryos genetically predisposed to be happy, healthy and popular. Even I might be tempted to prioritize those if I had a choice. I wonder how others here feel about this. Don't we all mainly just want our kids to be happy?
Indeed, that is the bottom line that sporks' comment gets to.
I should check it out. As it happens, I'm writing a short story which is kinda-but-maybe-not dystopian about a near-future world with embryo selection. The premise is that these future kids are selected to be extroverted, happy people who love to schmooze, party, and "raise awareness" of problems. (Any kids without these traits pay a big happiness penalty.) When certain boring infrastructural problems arise through a shortage of experts on those nerdy topics, they recognize it's a problem, but ultimately try to pass it on to others. But because they are so good-natured, they ultimately convince themselves that they can live without new nuclear powerplants or new generations of silicon technology, and cope happily with a gradual technological decline. Those people never leave Earth, but they do build lots of windmills and they're a bit proud of their de-growth. They don't see their world as a dystopia. They're happy.
Doesn't work absent some mechanism to enforce this selection in all countries. Otherwise the Nazis sit around doing eugenics while everyone else does this Tragedy of the Genetic Commons, and then 200 years later, when everyone else's nukes have become unusable, the Nazis kill everyone and take their land (and then presumably go on to the stars). Or if the Nazis hit some other social failure state, then it'll be the Amish and their disdain of technology (including, presumably, eugenics).
It's an interesting idea. For sure both would be starting with a very small "install base." The Amish and our gene-selected, technologically regressing descendants would probably grow closer in their priorities and capacities. I'm more scared of the Nazis, of course, and I kind of love the idea of a scene in which the technological situation gets so dire for the extroverted majority that they decide to ask the Nazis for engineering help. That collaboration would fail very hilariously! To get anything done in the society I picture, a person first needs to achieve buy-in from the largest number of stakeholders, who all have rather different agendas. The Nazi engineers, even if they propose a sound plan, will suck at getting buy-in, and they will find the whole "build consensus first" procedure totally absurd and counterproductive.