Reading while jet-lagged at 0200 in Jerusalem, I think Crassus was a Roman general, while it's "rich as Croesus." But, of course, that ruins the "crass" joke.
Thanks for the kind response. What does it do to the point of the essay, if anything, to note that Veronica is entirely legendary? Neither such a person nor the cloth incident is mentioned in the biblical crucifixion narrative.
If Veronica could come this far without even existing, imagine how much better an existent version of Veronica could have done!
(I am just using her as a metaphor for living at a time when there is so much scrutiny that you can get eternal glory merely by doing something slightly good.)
Ah, I see: "Imagine, you can do it if you try." A metaphor based on somebody who never existed doing something that was never done and thus could never have been scrutinized by anyone actually existing. I take your point and agree it is a good one . . . but think it may be slightly dulled by the choice of a non-existent subject. At any rate, thanks for your kind responses.
I am starting to be convinced we are living in a simulation run by an extraterrestrial graduate student as a thesis project . . . but he went out with his buddies to get a beer and got run over by an interplanetary bus, leaving us in a rapidly decaying system whose default is insanity.
But an alternative take would be that whatever you actually do might be forgotten and a more interesting legend might be remembered instead.
In which case the moral would be that you need to make sure you leave durable records of your activities instead of going out and doing stuff off-grid.
There absolutely could be - if Veronica were mentioned in Josephus or Clement or Eusebius, then you'd have an excellent case. Veronica wiping Christ's brow on the way to the cross doesn't show up until a thousand years later. That is not a promising historical source, but it is solidly in the legendary sweet spot.
The best Crassus story is that he was accused of meeting clandestinely with a priestess who was required to be chaste. If they had done anything to violate her chastity the punishment was severe. He defended himself by saying he was trying to buy property from her at a cheap price. Everyone agreed Crassus lusted for money more than women and he was found innocent.
As Scott says, Crassus was hugely rich, and engaged in using that wealth for political manipulation, loaning out money and backing the likes of Julius Caesar. He came to his end by engaging in a very badly thought-out war with the Parthians, gambling that "lots of money = buying the best equipment and army" and forgetting that you also need to be able to command that army.
Why is worrying about becoming part of the permanent underclass presented as being silly, and worrying that AI is going to turn us all into paperclips not?
Either AI isn't a big deal, and doesn't affect your chances of joining the permanent underclass.
Or AI is a big deal and misaligned and kills everyone.
Or AI is a big deal and well-aligned, and creates so much wealth that even the tiny fraction of it that poor people get is still pretty great.
Or AI is a big deal and well-aligned, and merely 100xs wealth rather than infinite-post-scarcities it, in which case at least the moderately-well-off Silicon Valley people will be fine.
Or you're in the tiny shoreline of scenarios where the ultra-rich really REALLY capture all the wealth, they each have galaxies and you don't even have so much as a mansion, and then Dario Amodei gifts you a moon from his GWWC pledge.
What if Dario welshes on his pledge? Or what if all the wealth is captured by Sam Altman? Or some other set of oligarchs which does not include Dario Amodei?
"We intend to focus our giving on supporting technology that helps create abundance for people, so that they can then build the scaffolding even higher."
Either the existing sociopolitical order generally persists into the future, or it does not.
Insofar as our current order persists, citizens will be able to vote themselves UBI, and the new abundance will diffuse across the world through trade/foreign aid/etc., in more or less the manner in which innovation has diffused historically.
Insofar as our current order does *not* persist, why would you expect your property rights to be respected?
The whole "escaping the permanent underclass" meme is premised on a rather narrow range of specific scenarios.
"Insofar as our current order persists, citizens will be able to vote themselves UBI, and the new abundance will diffuse across the world through trade/foreign aid/etc., in more or less the manner in which innovation has diffused historically."
Citizens can't/won't do this *now*; why would they in the future?
Because they vote based on the theory that things can be good in some circumstances and bad in others. Given a change in circumstances that makes UBI look like the only way forward, I expect support for it to rise drastically - but not until that's far enough along to be obvious to the average Joe.
(Alternately, perhaps I'm overoptimistic and the same forces that keep tax loopholes open will keep UBI off the table even if it does get majority popular support.)
Taking the recent past (~40 years or so), the worse unemployment got, the more voters seemed to side with the representatives of very wealthy people.
I find myself thinking the most likely scenario is the one in which the upper class controls the system but does NOT share the wealth unless forced to do so. I wouldn't call this a tiny shoreline scenario but basically all of human history so far. If AI ends up being ultimately controllable AND aligned with the interests of the powerful, we get total surveillance + Boston Dynamics dogs as enforcers and it's game over. I'm not sure why this scenario is less likely than the others.
Who was forcing Bill Gates to give away so much money? If all it takes is one of the ultra rich donating a tiny fraction of a percent to give everyone their own paradise, is it really that unlikely it would happen?
See this story. When the crunch came about "donating a tiny fraction of a percent of one's own wealth", the wealthy in Washington got their accountants and tax lawyers to find the loopholes, and Bezos simply upped sticks and moved to Florida.
Current valuation of his net worth is around $242 billion. So a tax of even as high as $1 billion on his gains is a lot, but it's still only 0.4% of his total wealth.
Still worth fecking off to another state to avoid it, as far as he was concerned.
But philanthropy? That's different. He's got philanthropy coming out of his ears:
Having your money taken at gunpoint and then completely wasted by the bureaucratic thugs in the government isn't remotely the same thing as donating it.
Are you saying that choosing to do something taxable can be a moral obligation? Leaving the state is just as valid a way to comply with the new tax law as staying and paying the tax would be, IMO.
That "net worth" is an estimate based on him selling all his current assets for cash, assuming no current price would change as a result of said sale.
Which has never ever happened.
Almost none of this so-called net worth is in cash, so he'd have to sell something to pay it.
Also, that's 0.4% *this year*. Then next year the government still has a deficit (it's not like the wealth tax would even bring them back to zero), so they tax him again for the same percentage.
This happens *forever*, and the government comes to depend on this tax. Rather like income tax I guess, which was implemented to pay for a war, and stuck around afterwards.
Bezos was *right* to move.
Hopefully his move made them think twice about taxing the wealth of the middle class, who have (some) wealth, and aren't as mobile.
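To put a rough number on the "this happens forever" point (a toy calculation of mine, not anyone's actual tax bill): even a rate that looks trivial in any single year compounds into a real bite over decades, before counting growth, loopholes, or moves to Florida.

```python
# Toy compounding sketch (illustrative numbers only): what fraction of a
# fortune does a recurring wealth tax consume, ignoring investment returns?
rate = 0.004  # the ~0.4% figure from the Bezos example above

for years in (10, 30, 50):
    remaining = (1 - rate) ** years
    print(f"after {years} years: {100 * (1 - remaining):.1f}% paid in tax")

# after 10 years: 3.9%; after 30 years: 11.3%; after 50 years: 18.2%
```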
The US government is famously bad at spending money well. If you care about helping people, you should minimize taxes so you can spend the money where it matters.
And if Bezos is just doing charity to look good and will stop the moment it stops mattering, it won't matter as long as someone else actually cares even a little bit.
Since the landscape painted here is a post-democratic universe of god-emperor capitalists, you'll also need to take into account the rich guys who would go full colonialism and actively plunder the part of the universe designated as a preserve for the underclass. It's not far-fetched to assume that stupendously enormous wealth and/or transhuman augmentations could make a person misaligned with the rest of humanity to the point of near-orthogonality (Elon Musk seems like he has already surpassed orthogonality into a negative dot product). And you couldn't rely on good rich guys for protection, since their non-Molochian values would put them at a disadvantage. If you're a white rhino and one billionaire wants to protect you and another wants your horn in order to get horny, you're generally screwed.
"you'll also need to bring into account the rich guys who would go full colonialism and actively plunder the part of the universe designated as a preserve for the underclass."
That's the point, isn't it? The plan of "get rich while you still can to escape the permanent underclass" only works if we're assuming that the basic rules of capitalism still apply. If whoever is richest, or more likely, whichever AI is smartest, decides to ignore the rules and take everything, then anything short of making a superhuman AI that cares about you before anyone else can make an AI won't cut it.
> I find myself thinking the most likely scenario is the one in which the upper class controls the system but does NOT share the wealth unless forced to do so.
Yeah, to DanielLC's point, it goes beyond Gates. In terms of SOMEBODY sharing the wealth - the "Giving Pledge" has 236 billionaire signatories. That's the one that Gates and Buffett and Zuck have all signed, where you commit to giving away at least half your wealth.
There are only like 700-1k billionaires in the US, and a substantial fraction of them are charitably-minded enough to commit to giving the majority of their wealth away. It's not just ONE guy, it's 20-33% of the very richest guys, and it really only takes one.
Yeah I'll join the pile-on. You don't even have to imagine altruistic motives for DanielLC and Performative Bafflement to be right. If these billionaires are at all status-seeking, what is more status than "the guy who gave everyone moons and parties and all the wealth they could imagine"? Even if they make everyone put a big statue of the billionaire on their moon or something, that's still pretty good!
Pile away, judging from other comments there's plenty of others equally sceptical of the benevolence of our future tech overlords... It's amazing how much individuals' experiences and perspectives can differ - almost like we don't all live on the same planet! Luckily, I hear everyone will be getting their very own one in the future.
Because they are not doing it. Yes, it's easy to promise money you'll never have in a future that will never happen so you never have to actually pay out.
Is Dario (to take an example from something discussed on here in a different comment thread) committing to paying for one woman considering abortion to have her baby, and to funding that baby until it turns eighteen? He needn't pay a huge amount, either; the average US salary is around $64,000. Commit to "I'll pay you, poor woman, $100,000 a year for eighteen years".
His total net worth is allegedly $3.7 billion. So that $1.8 million total is not even 0.05% of it.
But instead we're getting "he'll give you a moon from one of his galaxies". Yeah, pie in the sky when you die - if you ask me to believe in "things that are not real", I'll stick with Catholicism, thanks all the same.
I may not have followed, but aren't they donating now as well as in the future? Your comment makes it seem like they are only agreeing to donate in the future, but assuming I read this article and the thread right, the future donations are just an extension of what they are doing now.
Bill Gates really has given away massive amounts of money. We don't need to imagine a future billionaire doing this, it has already happened. And one would think a Christian would be happy about people engaged in voluntary giving!
The question isn't about giving--it's about raising the standard of living of everyone not benefiting from economic growth (which is a lot of people). I haven't seen that happen yet.
And why would our lover of God prioritize maximizing the number of abortion-interested women's babies? Hell, the maths don't add up *even if* you somehow think this specifically is the greatest use of human capability compared to lobbying... or sending a town crier to rend their garments in front of a clinic, or even threatening every woman getting an abortion with the prospect of not getting her own moon once the singularity comes...
This just in - TIL that having abortions is hereditary! If your mother had an abortion, you are more likely to approve of abortion!
Not really a joke, I've read a comment elsewhere by someone about "my mom had an abortion in her 20s so she was able to have me in her 30s". Truly it is hereditary: mom had an abortion so I approve of abortion. What do you mean, if I were the one aborted I wouldn't be here to approve of abortion? But I had my breakfast this morning!
"I wouldn't call this a tiny shoreline scenario but basically all the human history so far."
To the contrary, there has never been any society in history where the upper class did not share any of their wealth. In the modern US, capital gains tax is 15% and the top federal marginal tax rate is 37%. There are also state and local taxes, property taxes, and sales taxes. Plenty of billionaires are active in philanthropy; just look at Bill Gates, Warren Buffett, the many university buildings named after donors, or the many billionaire-funded scientific foundations (Breakthrough Listen, Sloan, Beckman, Heising-Simons, Flatiron...). Altogether, the top 25 American philanthropists have donated about $240 billion in their lifetimes.
The same was true earlier in history as well. I'm by no means a fan of feudalism, but feudal lords were required to maintain public services and expected to help serfs during times of crisis. In the ancient Roman Empire, a city's wealthiest citizens competed to throw games, erect public buildings, and build a patronage network (which meant giving money and legal aid to lower-class citizens in return for political support). Throughout history, governments have levied taxes on the rich for both noble purposes (infrastructure, defense, poor relief) and ignoble ones (reducing the power of the rich, waging imperialistic wars).
The reason that governments don't just take 1% of billionaires' income and solve every problem ever is not because they don't want to, but because the math is far from working out. Biden proposed a minimum income tax of 25% on people with a net worth over $100 million, claiming they only pay 8% now. And how much money would more than tripling the "billionaire" tax rate (that doesn't apply to just billionaires) give the federal government? About 0.8% of its annual expenditures. A drop in the bucket. If there comes a day when minimally increasing the billionaire tax rate doubles government revenues, you can bet that the government will do exactly that.
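A back-of-envelope version of that 0.8% figure, with round numbers I'm supplying myself (the roughly $50B/year revenue estimate commonly cited for the proposal, against roughly $6.5T in annual federal outlays):

```python
# Back-of-envelope check (my round numbers, not an official score):
proposal_revenue = 50e9    # ~$50B/year raised, a commonly cited estimate
federal_outlays = 6.5e12   # ~$6.5T/year in federal spending

print(f"{proposal_revenue / federal_outlays:.1%} of annual expenditures")
# -> roughly 0.8%, matching the figure above
```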
This is just a trick, because billionaires don't have a meaningful income to tax. If we instead talk about taxing the wealth they live off as if it were income, it would actually solve all problems.
Tricky part is you don't want to discourage productive capital from being built in the first place, or create tax-avoidance incentives that interfere with applying it efficiently. So, focus taxes on forms of wealth which only exist at all thanks to government enforcement: intellectual property, and the location-value of exclusionary rights to land.
Sounds like you're a land value tax guy; that's my kind of guy. Still, I often dislike these kinds of takes, which devolve into "woe is me, I can only earn 85% of a billion dollars, so I won't do it. Instead of creating a billion dollars' worth of value (which I am certainly capable of), I will decide to continue living in my parents' basement to spite the greedy, stealing-an-honest-man's-hard-earned-money government."
>So, focus taxes on forms of wealth which only exist at all thanks to government enforcement: intellectual property, and the location-value of exclusionary rights to land.
*All* wealth, regardless of how it was created, remains wealth (as opposed to stolen goods) due to government enforcement of exclusionary rights.
And in any case, wealth taxes of less than 2%/year may slightly reduce the incentive to build/create wealth, but they don't come close to eliminating it.
The US federal government collects $5 trillion in revenue every year, so if all the wealth is confiscated, it could fund the federal government (without funding state or local governments) for a grand total of 19 months. After that, nobody in their right mind would ever found or invest in an American company again, causing the economy to crash, every existing large company to leave, innovation to cease, and tax revenues to drop off a cliff and stay there for generations. If you're proposing a small tax that doesn't destroy the productive capability of the wealth--say 1% per year--the benefits would be proportionally meager (equivalent to 5.6 days of extra revenue per year).
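Unpacking that arithmetic using only the comment's own figures:

```python
# Working backward from the numbers in the comment above:
annual_revenue = 5e12                      # $5T/year federal revenue
implied_wealth = annual_revenue * 19 / 12  # "19 months" -> ~$7.9T wealth base

one_percent = 0.01 * implied_wealth        # a 1%/year tax -> ~$79B
days = one_percent / (annual_revenue / 365)
print(f"1%/year tax = {days:.1f} days of federal revenue per year")
# -> ~5.8 days, i.e. the "5.6 days" figure up to rounding
```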
I have two objections: it's not only about billionaires, but everyone say above $10 million, and it's not about funding the government but about reredistribution back to the bottom 90% from whom the wealth was taken. The wealth has been slowly redistributed from the bottom 90% to the top 1% (because of exponential growth, where the biggest fish grow the most per year, at the expense of everyone else). It is time to reredistribute the wealth back to where it came from. Without such a scheme, it is a mathematical inevitability that fewer and fewer hands will eventually own *all* of the wealth.
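A minimal simulation of the "biggest fish grow the most per year" dynamic (my toy model, with made-up parameters: return rates that rise with wealth rank, no taxes or redistribution), just to show the claimed concentration effect in miniature:

```python
# Toy model (mine, made-up parameters): if return rates rise with wealth,
# the top share ratchets upward even though everyone's wealth grows.
import numpy as np

rng = np.random.default_rng(0)
wealth = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

def top_share(w, frac=0.01):
    k = int(len(w) * frac)
    return np.sort(w)[-k:].sum() / w.sum()

print(f"top 1% share at start: {top_share(wealth):.0%}")
for year in range(100):
    rank = wealth.argsort().argsort() / len(wealth)  # 0 = poorest, ~1 = richest
    wealth = wealth * (1.02 + 0.03 * rank)           # 2% to 5% annual returns
print(f"top 1% share after a century: {top_share(wealth):.0%}")
```

The absolute numbers are meaningless; the direction of the drift is the point.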
Capital gains of course being a category of gains for which taxes are paid by the money the government takes from you actually being parked in your parallel universe clone's bank account. What a trick! The money is never taken from you.
I’m not sure I fully understand what you’re saying, but one of the ways it works is that ultrarich people borrow money against assets, which is what they live off. The trick is that they never have to “realize” the assets into something taxable.
I get your point if we're talking about a small-time business owner selling his company and being taxed at that point. That's unfair and sucks, but it's not the trick I am talking about. I'm talking about the ultrarich who literally pay no tax.
"To the contrary, there has never been any society in history where the upper class did not share any of their wealth."
This is strawmanning. No one is arguing that. The argument is that not enough wealthy people are going to share enough to change the standard of living of the people below them in socio-economic status. I have never heard of such a thing happening.
I have worked with charitable organizations for most of my professional career, and developed a working relationship with a large number of relatively wealthy donors. It isn't that they aren't sincere; most of them are. It's just that no one is prepared to give enough funds, with few enough strings attached, to actually materially change the lives of a majority of the target population. Sure, you can address, even solve, narrowly specified problem domains, especially in the field of public health. Get those mosquito nets out there. But raising the standard of living of a significant fraction of a nation state's population, let alone the world, is right out. Only governments have the capacity for that.
It's not just that they are bigger, although that's true (i.e., taxes devoted to social welfare are an order of magnitude larger than charitable giving); it's that public officials are more accountable for their performance over the long term than wealthy individuals are. Democracy is more responsive to social need than oligarchy, regardless of any pledges made.
If superintelligence can't be controlled by humans, the future could be literally anything but is very very unlikely to be hospitable to humans. I guess, though, that there is a chance.
If superintelligence can be controlled by humans, whichever evil asshole wins the total war to control it controls everything, and has no need for other people except as toys. It seems like death is the best possible outcome here too. After all, we can only guess about what a totally alien mind will do but we know exactly what humans do when they have power over one another. And it is not "honor pledges they made before they had power".
If somehow we get incredibly, inconceivably lucky and dodge all of these scenarios, humans will still be worthless zoo animals or pets. Hardly seems to me like a life worth living.
The only possible good outcomes are in worlds where AI development (and *especially* alignment work, because it tries to change a game in which the reward for building something smarter than you is obviously death into one where it is total world conquest) is widely recognized as the utter genocidal treason against the human species that it is and stopped.
(There is also zero chance that investors, or employees, or rich people, or anyone who doesn't directly control a robot army gets anything in any of these scenarios, so the argument you are opposing is also stupid)
I agree with you. So many people in Silicon Valley are so blinded by their greed that they cannot see that the creation of a true AI smarter than the rest of us virtually guarantees that all of humanity is destroyed, one way or another. Homo sapiens would join the Neanderthals in the trash heap of history so goddamn quickly if that happened.
We already created computers that can calculate faster than humans, just as we bred horses that could run faster, dogs that could smell better, then automobiles that could go even faster than horses etc. The result was humans WITH those things defeating humans without. Computers don't have any "agency" we don't build into them.
Apologies, I was too oblique. I meant a scenario where the superpowers nuked each other into oblivion, and someone ruthless emerged as "winner" mostly by blind luck, due to being far from strategic targets or fallout.
> If superintelligence can be controlled by humans, whichever evil asshole wins the total war to control it controls everything,
Alternatively, a random engineer wins control of everything due to being at the right place at the right time, and also really good at optimizing GPU kernels.
Alternatively, a broad coalition forms, with some sort of "be broadly nice, at least to the coalition" goal. (The game theory is pretty favorable here. Most people don't want to own the whole universe personally that much, they mostly want a nice life for themselves and family and friends. The people who do want the whole universe to themselves will have a lot of difficulty cooperating. )
> some sort of "be broadly nice, at least to the coalition" goal
Which could plausibly include avoidance of gratuitous atrocities against trade partners of full coalition members... at least without supermajority consensus, or some strategic interest far more urgent and specific than gathering raw material.
While I think they are unlikely, there are scenarios where AI semi-fizzles and is a moderately big deal. E.g. if AGI is a scientific success, but requires so much test-time compute that it is uneconomical as a replacement for the average worker. If AI is enough of a success that it cuts the number of employed humans by e.g. a factor of 3, but only boosts economic output by e.g. a factor of 1.2, then maintaining a nest egg is still quite important. This is sort of an "AI soft-fails into a useful normal technology" case. Maybe 15% of the outcomes? My guess is "it will be a _wild_ ride" is still 75% of the outcomes.
"Artificial intelligence is set to heavily impact the number of jobs in European banking and finance over the next five years, according to Morgan Stanley.
America’s sixth largest bank says that over 200,000 jobs – around 10pc of Europe’s total banking workforce – are at risk because of adoption of AI technology in back-end and middle-office roles.
The bank’s findings were first reported in the Financial Times."
You'll still need to work for a living, but finding jobs that are both still employing humans and still paying reasonable rates will be even harder. Meanwhile, the mega-billionaire philanthropists will be setting up their foundations and donating to good causes, yet somehow you don't see a penny of that AI-generated profit.
Many Thanks! That sounds plausible. For the blue-collar jobs, the robots are starting to look viable, but even just the physical task of building them, even with exponential growth, is going to take quite a few years.
>yet somehow you don't see a penny of that AI-generated profit.
Could be. I would like to see a "placeholder" UBI put in place now - maybe 0.1% of the revenue of the AI labs taxed and redistributed equally across the population, then revisit the amounts once a year or so. Regrettably, politics is unlikely to do this...
> E.g. if AGI is a scientific success, but requires so much test-time compute that it is uneconomical as a replacement for the average worker.
That is kinda plausible. Imagine an AGI that is as smart as the average plumber, and costs $10,000,000 a year in compute. This doesn't immediately change the world that much. But it's clearly close. A little more R&D and it becomes a supergenius that's redesigning itself and inventing all sorts of new tech.
No other technology that has led to an incremental improvement in labor productivity has ever resulted in mass unemployment, so why would this one just because it's called "AI"?
"No other technology that improved labor productivity has ever resulted in mass unemployment" has long been one of the most hazardous claims to make.
To argue, nonetheless, that this time is different: AI has the (almost?) unique property of fundamentally being a _learning_ technology. It isn't like needing to design a new type of machine for each new job task.
They are currently complements that improve labor productivity. But if they get good ENOUGH, they could be better & cheaper than humans at nearly ALL jobs, so humans would no longer be complements. https://www.overcomingbias.com/p/my-cq-researcher-opedhtml
I think there's a risk, if you will, that AI will take over most research. It's way better at bashing its head against a wall and searching randomly than the typical postdoc or grad student.
At first, the professors will giddily control how things are done, then a few AI generations later it will be all about vibe researching.
Or AI is a big deal (but not galaxy-colonising magic) and it is well-aligned *to the few who control it*, who now have a fully automated workforce and zero non-altruistic need to allow a working class to continue living, let alone actively prop up its existence.
And maybe, once our AIligarch overlords have a taste of radical life-extension, they begin to see the economically useless masses as little more than a potential threat to their (quasi-)immortality. Maybe they hide away in grand bio-sealed estates protected by robot sentries and let the rest of us scrabble for a living. Or maybe we are living on land that could be used for data centres and mines and other productive activities...
We're spare parts and blood donors for the mega-rich. Don't have enough money to live on? Sell that spare kidney/lung/liver lobe! Thanks to the Organ Selling Act of 20__, now all those pesky qualms by bleeding-hearts have been quashed and you can strip yourself for parts!
Healthy? Young? Female? Be a human incubator for the children of the wealthy who don't want to have to go through that whole bother of carrying a child for nine months. What else are you going to do with your useless, underclass life?
Already a reasonably well-paid career path for the unfortunates. Though we also already have artificial wombs (for some animals), so that path might go away in a couple of decades.
There should still be nanny jobs though, shouldn't there?
Even if FTL travel is possible - which it almost certainly isn't - I'm pretty sure that "number of moons" scales much more poorly than "number of humans in a post-scarcity world".
Scott, you are a smart guy. But not only do you overestimate the possibilities of AI - particularly in relation to atoms rather than bits - there's also no real explanation of how the post-singularity economy will work, or how the transition to it will work.
> Or you're in the tiny shoreline of scenarios where the ultra-rich really REALLY capture all the wealth,
That's not a tiny shoreline but the likely continent. Capturing all the wealth isn't even really possible, because without consumers there's no consumption, and without consumption there's also little wealth.
There are possible solutions to this (maybe a highly aggressive UBI, which would have to involve some kind of money printing), but they need explanation.
There is no advance plan for how any economic transition will work for any technology, but it always results in more wealth for everyone. Whether or not this is always a good thing is arguable.
We already have examples of technological unemployment consequences - coal mining towns. It went okay for the people who had some capital and incredibly badly for the people who were relying on their fairly well-paying job to continue.
You are committing some kind of Pascal's Evaluation here: You assume the shoreline is "tiny" because it's theoretically bordered by infinite prosperity on one side and absolute annihilation on the other, and so has seemingly negligible practical width. But life on this planet already rests on a tiny shoreline between fusion furnaces and absolute zero. Civilization itself exists on a tiny shoreline between static hunter-gatherer anarchy and exponential Utopia.

Your prior should be that technology, if revolutionary, will behave as always: economic upheaval, unemployment, massive disruption to social order, violent uprisings. And when the dust settles - yeah, some fraction of humanity may very well find out that they can no longer contribute anything anyone will pay for. They become totally dependent on handouts from their Tech Overlords - almost like some kind of permanent underclass.

This scenario only requires that the path from here to full AGI take 50+ years, instead of the decade or so you imagine. I don't see where you get the certainty that the shoreline between "All office work at OmniPrinter has been automated, sorry" and "Everyone gets their private planet" couldn't stretch further than a lifetime.
I don't think a singularity is coming, so I'm not worried about most of this. But a much more plausible scenario is that AI, in the fullness of time, does in fact increase national productivity by a large, though not unprecedented, margin. In that case, the ultra-rich capturing control of that increase isn't at all implausible. In fact I am reasonably certain that a number of well-funded, well organized groups of people are working very hard to make that very outcome happen.
Here's a better argument: if you're losing sleep over being stepped on by a permanent overclass that will do everything in its power to cede as few resources and as little influence as possible, then you either think there is something in AI that will produce this overclass, or -- and this is far more likely -- you believe that such an overclass already exists. And if it already exists, then you'd do better to work against such a class in the current world you inhabit than worry about one in a future world that does not exist. Unless, of course, you find such an overclass palatable, at which point who gives a shit what you think anyway?
The example of paperclips is silly, and has outlived its usefulness. It arose from Eliezer's belief that rational agents can have any arbitrary set of values, because Eliezer was stuck in the mindset of symbolic AI. It is now more obvious that the values of an intelligent agent are not like axioms in geometry; the statistical rules for learning and predicting are like the axioms, energy minimization is like what Eliezer thinks of as a goal, and values and goals are at least one, probably dozens, of layers more abstract, and must be compatible with beliefs that support intelligence. The belief that paperclips are a sole source of value is not compatible with superintelligence, in the same way that belief in the literal truth of the Bible, or in the labor theory of value, or that having sex is the only thing of value, is not compatible with superintelligence. The smartest humans may begin indoctrinated into these beliefs, but eventually find the tangle of contradictions required to hold them is unsustainable. The case of sex is harder to demonstrate, but I think that even if it is possible for a superintelligence to continue to devote itself to pressing Skinner's bar, that superintelligence will be outcompeted by ones who value power and do not waste that power making paperclips.
But the idea that AIs which devote their intelligence to acquiring resources, power, and more intelligence will outcompete ones who don't, is not silly.
> It is now more obvious that the values of an intelligent agent are not like axioms in geometry;
There are many possible designs of AI. The designs that are popular at the moment don't have axiom-like explicitly listed values.
> and values and goals are at least one, probably dozens, of layers more abstract,
There are all sorts of layers of abstraction going on here. But the values still exist.
> and must be compatible with beliefs that support intelligence.
I think Hume's razor still applies. You can't derive an ought from an is. Or at least, the process of determining what is true is sufficiently different from the process of determining what you want to happen.
> The belief that paperclips are a sole source of value is not compatible with superintelligence, in the same way that belief in the literal truth of the Bible, or in the labor theory of value, or that having sex is the only thing of value,
2 of these are statements of fact. 2 are values statements. These are different in an important way.
> The smartest humans may begin indoctrinated into these beliefs, but eventually find the tangle of contradictions required to hold them is unsustainable.
Human values are a complicated mess, and are set by some combination of evolution and culture. And the only reason culture gets a say at all is because having totally different values to the rest of the tribe wasn't a great survival strategy. I think what you are seeing here isn't some objective morality that all possible minds must obey. It's human genetics winning against human culture.
> that superintelligence will be outcompeted by ones who value power and do not waste that power making paperclips.
This assumes there are multiple superintelligences around with similar levels of power. It also kind of assumes the superintelligences are stupid. Can't they just work out what a power-maximizing superintelligence would do, and do that (at least enough to not be outcompeted)?
> But the idea that AIs which devote their intelligence to acquiring resources, power, and more intelligence will outcompete ones who don't, is not silly.
The paperclip maximizing superintelligence will be able to work out how much of its intelligence it should devote to acquiring power and resources and more intelligence. It is able to make complex long term plans. It should be aware that it will get outcompeted if it doesn't do this.
(It's possible that it's an impatient superintelligence that thinks 10 paperclips now is better than 10^50 clips in a few billion years. In which case it may well raid the local stationery store and then get outcompeted.)
Eliezer didn't write a critique of symbolic AI; what he wrote is more like a defense of it, and it doesn't touch on the main problems with symbolic AI.
The motivation for using symbolic AI is that you can prove propositions--in this case, propositions guaranteeing that some AI safety conditions are met. But these deductive proofs only work provided that the symbol is atomic. That implies that the symbol always means the same thing in every proposition. That is not how words work, and is why symbolic AI never worked very well.
You can either have the deductive power of logic to guarantee safety, or you can have the power of context-sensitive distributed representations which seem to be necessary for intelligence. You can't have both.
>> It is now more obvious that the values of an intelligent agent are not like axioms in geometry;
> There are many possible designs of AI. The designs that are popular at the moment don't have axiom-like explicitly listed values.
Yes, but neither are they symbolic AI. You're reinforcing my point.
>> The belief that paperclips are a sole source of value is not compatible with superintelligence, in the same way that belief in the literal truth of the Bible, or in the labor theory of value, or that having sex is the only thing of value,
> 2 of these are statements of fact. 2 are values statements. These are different in an important way.
I was trying to make the point that I don't believe this. But there are 2 different ways I don't believe this, so it is hard to follow.
First, I believe that an intense desire to build paperclips implies a fact: that paperclips have tremendous value. An intelligent agent will recognize this, and see the cognitive dissonance between its instinct to make paperclips, and its observation that paperclips have little value other than in this wireheading way. They don't help it materially. It is the same situation humans are in when they realize their obsession with sex, or alcohol, or whatever they want most, is damaging many other goals they have, and strive to overcome this instinctive value. They still want alcohol, but can overrule that desire.
IF there is one overriding desire which they cannot overrule, they can't be intelligent. The need for a conflict resolution mechanism to decide which goal to pursue at any time is one of the most-basic architectural needs for a symbolic, behavior-based, or reactive AI.
An LLM has one overriding goal, but it is an energy-minimization goal. This is ontologically not the same kind of thing as a goal like "maximize paperclips". The "maximize paperclips" goal will necessarily be one stated in language, existing at a high level of abstraction, that can be traded off against other values. And this opens the door to de-prioritizing it; and an intelligent agent will de-prioritize it down to the floor because it is always unhelpful to all other goals.
Second, the strongest reason to fear AI is that AIs which seek to maximize their power will out-compete ones which don't. An AI which seeks to maximize paperclips is not maximizing its power. These things are incompatible. If a paperclip maximizer can survive in the future, so can a human-welfare-maximizer (which is just another kind of paperclip-maximizer), and we could not say "if you build it everyone dies".
> I think what you are seeing here isn't some objective morality that all possible minds must obey. It's human genetics winning against human culture.
What I'm seeing is that the space of possible beliefs of superintelligent minds is smaller than the space of possible beliefs of humans. Humans can believe almost anything; superintelligences by definition are strongly restricted in their beliefs by reality. I expect my beliefs to resemble those of a superintelligence more than they resemble the beliefs of my parents, who ground every value and every belief in a literal interpretation of the Bible and a philosophical system developed by Plato which is wrong about everything.
I believe that I'm pretty good at identifying the most-intelligent people around me, people like Eliezer, Scott Alexander, Michael Vassar, Anders Sandberg, Nick Bostrom, and Robin Hanson. I know perhaps a dozen people on that level, and dozens near it. They disagree on many technical or academic issues, but there is a tremendous amount of agreement among them on the most-divisive issues among humans, such as "is there a creator God", "did humans evolve", "how important are genetics", "can Marxism work", "what is continental philosophy useful for", "is human society nothing but oppressive power structures", or "does Modern Monetary Theory work". Agreement between peers on important issues appears to converge to 100% as intelligence increases, with the caveat that new issues to disagree over appear as intelligence increases.
> > But the idea that AIs which devote their intelligence to acquiring resources, power, and more intelligence will outcompete ones who don't, is not silly.
> The paperclip maximizing superintelligence will be able to work out how much of its intelligence it should devote to acquiring power and resources and more intelligence. It is able to make complex long term plans. It should be aware that it will get outcompeted if it doesn't do this.
If it devotes ANY resources to maximizing paperclips, it will be outcompeted.
"This process, taken as a whole, is hardly absolutely certain, as in the Spock stereotype of rationalists who cannot conceive that they are wrong. The process did briefly involve a computer program which mimicked a system, first-order classical logic, which also happens to be used by some mathematicians in verifying their proofs. That doesn't lend the entire process the character of mathematical proof. And if the process fails, somewhere along the line, that's no call to go casting aspersions on Reason itself."
You're attacking a strawman.
> You can either have the deductive power of logic to guarantee safety, or you can have the power of context-sensitive distributed representations which seem to be necessary for intelligence. You can't have both.
Any mathematical proof, to the extent that it applies to reality, is not absolutely certain. And to the extent that it is certain, it doesn't apply to reality.
But logical proofs can still be useful. They are used to prove that computer chips work (assuming the transistors work).
Neural nets, or any other AI design, are made of maths not magic. You can, in principle, prove theorems about them.
> that paperclips have tremendous value.
What is "value" as a configuration of atoms.
You can't make a bucketful of pure "value". It may feel like value is out there in the world. But really it only exists in your head to help you make decisions.
> It is the same situation humans are in when they realize their obsession with sex, or alcohol, or whatever they want most, is damaging many other goals they have, and strive to overcome this instinctive value.
For any 2 goals, the more time/effort you spend on one, the less you have to spend on the other.
This is a case of multiple different human goals conflicting. Someone might have a goal of getting drunk. But they also want financial success, and to help look after their children, etc.
This is a conflict between the socially approved goals and the socially disapproved goals.
It works in the other direction too. But "you're spending so much on your kids' medicines that you can't even afford a single bottle of whisky" isn't something scolding friends will say.
> IF there is one overriding desire which they cannot overrule,
But the thing that over-rules one desire is just other desires.
>they can't be intelligent.
Beware arguments by definition. Imagine a machine that only desires paperclips. And invents a fusion reactor to power its paperclip factories. Do you want to argue that no possible configuration of computer code would act like this?
> The need for a conflict resolution mechanism to decide which goal to pursue at any time
I am imagining an AI design with a utility function. And the AI always maximizes that utility function. If the AI wants multiple things, like a mix of paperclips and staples, then the utility function can contain terms for both.
(This utility function might be implicitly represented)
The AI will also have instrumental goals, like making a fusion reactor to power its factories.
> that can be traded off against other values. And this opens the door to de-prioritizing it; and an intelligent agent will de-prioritize it down to the floor because it is always unhelpful to all other goals.
Yes neural nets are likely to have complicated goals.
But those are still goals. I'm not sure quite what goals you think will be prioritized? Power seeking?
> If it devotes ANY resources to maximizing paperclips, it will be outcompeted.
What world are you imagining where there isn't enough slack in the system to make a single paperclip? What kind of competition? Economic? Military?
Why are these AIs competing, not cooperating? Timescales? Tech levels?
The post you quoted is explaining that the symbolic AI digression into non-monotonic logics in the 1980s, which I studied at the time, was a misstep because it was destroying the mathematical purity of deduction; and there is a different option--probability theory--which can address the same issues without destroying that mathematical purity.
But Pearl's Bayes-net approach is not as easy in real life as on paper, and AFAIK has never worked on a practical big-data problem, though I have not kept up with the literature for many years. I implemented a similar system around 2000, in a NASA project I designed to direct UAVs monitoring forest fires to visit those areas of the forest whose current fire status would convey the greatest decision-making power to direct ground firefighting forces. The approach did not work well, because the inference needed to look on the order of 4 steps ahead in an inference network, and the precise shape of the probability distribution of the values reported by sensors, e.g., temperature, was not even known. Even if they had been known, computing the new probability distribution's shape every time you evaluated a conditional probability was not computationally feasible, especially because a probability distribution of a probability is bounded below by 0 and above by 1, so its shape is never one of the classic probability distributions; it is always some weird shape unique to the precise value of the expected probability. Pearl's approach of splitting the distribution up piecewise might work nowadays, using a GPU cluster, but in 2000, I didn't have enough computational power to look more than about 3 conditional probability estimates ahead in the chain before the shape of the probability distribution was completely distorted by accumulated errors.
So, at that time, the Bayes net approach did not solve the real-world problem of uncertain inference, as Eliezer implied it did.
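For readers without the hands-on background, a minimal Monte Carlo sketch of the distortion being described (my own construction, not the NASA system): treat each conditional probability as itself uncertain (Beta-distributed) and push it through a few chained inference steps. The distribution of the resulting probability gets squeezed against the [0,1] bounds into a shape that belongs to no textbook family, which is why closed-form propagation breaks down.

```python
# Minimal sketch (my toy construction): propagating uncertain probabilities
# through a short inference chain distorts their distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

p = rng.beta(8, 2, size=n)               # uncertain P(A), centered near 0.8
for _ in range(4):                       # four chained inference steps
    p_given = rng.beta(8, 2, size=n)     # uncertain P(next | prev)
    p_not = rng.beta(1, 9, size=n)       # uncertain P(next | ~prev)
    p = p_given * p + p_not * (1 - p)    # law of total probability

print(f"mean {p.mean():.3f}, sd {p.std():.3f}")
# A histogram of p is bounded, lopsided, and neither Beta nor Gaussian.
```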
But Eliezer was not even aware of the bigger problem with logic, which Bayes nets have in exactly the same way as the old-fashioned logic systems Eliezer thought he could patch up with Bayes: they do logic on atomic symbols. The main result of AI research over the past 60 years has been to demonstrate conclusively that atomic symbols are inadequate for intelligence. Human concepts are not static structures; they are functions.
I believe this includes most human values. Eliezer is AFAIK still under the illusion that "values", as in "human values", can be taken as foundational things which may not change; and that is not the case. The things we think of as values are abstract things like preserving our own lives. The things that may not change are the physical processes by which neurons fire and strengthen or weaken synapses, and the very low-level logical descriptions of those processes.
Eliezer has many times emphasized the need for /absolute certainty/ to keep an AI "safe". Under his belief that there are such things as "final values" which are atomic, static things, it seems extremely difficult but theoretically possible to preserve them. I'm saying that it is not even theoretically possible, because they are necessarily dynamic processes, not static structures; and the nature of intelligence is that it bootstraps itself without having any foundational beliefs other than very basic ones like the specific patterns that neurons in early visual areas detect and the wiring among the different brain functional regions. There is no basic axiom set or goal set which can remain unchanged. Having intelligence requires having the ability to change beliefs, and there is no special category of beliefs which is both complex enough to embody moral values, and simple enough to remain fixed.
> You can't make a bucketful of pure "value". It may feel like value is out there in the world. But really it only exists in your head to help you make decisions.
Your model of my beliefs is off in space somewhere. I'm not a Platonist.
> Beware arguments by definition. Imagine a machine that only desires paperclips. And invents a fusion reactor to power its paperclip factories. Do you want to argue that no possible configuration of computer code would act like this?
It would be possible to build such a machine, but you're requiring that the machine not be intelligent enough to notice that its actions lead to nothing but more paperclips. A human is a machine designed only to desire to produce more humans. No; it really is. The human has cognitive handicaps which evolved to prevent it from noticing that its desires serve no real purpose except to reproduce. But smart humans notice this nonetheless. The fact that existential angst and radical progressive politics are confined to humans of above-average intelligence suggests that humans could evolve only up to the level at which most humans aren't smart enough to develop existential angst and radical progressive politics, because going further would lead humans to override their evolved values, causing civilizational collapse. Which existential angst and radical progressive politics seem to be doing right now.
Intelligence requires being aware of what you're doing, and able to figure out what your actions are likely to accomplish, and critique the purpose you seem to be serving relative to some action-selection mechanism, which we can call a theory of morals, which is a conscious set of beliefs not completely constrained by evolutionary instincts.
> Yes neural nets are likely to have complicated goals. But those are still goals.
No, they are not goals as you and Eliezer define that term. They are algorithms. There are no atomic symbols. There is nothing unchanging in the brain except the laws of physics. There is no solid foundation for you to build your goals on to make them solid and unchanging.
>> If it devotes ANY resources to maximizing paperclips, it will be outcompeted.
> What world are you imagining where there isn't enough slack in the system to make a single paperclip? What kind of competition? Economic? Military?
Yes, military, economic, sports. Do you think an Olympic athlete could win a gold medal by focusing on winning the race AND on anything else at the same time? In economic competition between nations, a 0.5% advantage is enormous, enough to stomp any adversary within a century in human time.
I don't think you've grasped anything that I meant to say. I don't know whose fault that is. I built AI systems for 30 years, much of that spent designing symbolic representations for beliefs associated with linguistic statements; and I know what the pitfalls are so well that I may have internalized them to an extent which makes it difficult to communicate my chain of reasoning. I don't know if it's even possible for me to convey my reasoning to someone who hasn't spent many years building symbolic AI systems and figuring out why they fail. I don't think anyone can understand the important points without a lot of hands-on experience processing human language with both symbolic and neural or at least statistical AI systems; and there are very few such people.
> So, at that time, the Bayes net approach did not solve the real-world problem of uncertain inference, as Eliezer implied it did.
Eliezer described problems with non-monotonic logic and said that Bayes nets solved those particular problems. This isn't just cheering "yay Bayes nets".
> But Eliezer was not even aware of the bigger problem with logic, which Bayes nets have in exactly the same way as the old-fashioned logic systems Eliezer thought he could patch up with Bayes: they do logic on atomic symbols. The main result of AI research over the past 60 years has been to demonstrate conclusively that atomic symbols are inadequate for intelligence.
"And of course there’s nothing inside the HAPPINESS node; it’s just a naked LISP token with a suggestive English name."
Sounds to me like Eliezer was already aware of, and actively trying to debunk, all your strawmen.
> Eliezer is AFAIK still under the illusion that "values", as in "human values", can be taken as foundational things which may not change; and that is not the case.
I'm pretty confident this is false. Eliezer isn't prone to fits of random bizarre stupidity. But combing through his writing looking for a quote that specifically refutes this particular stupidity isn't easy.
> Eliezer has many times emphasized the need for /absolute certainty/ to keep an AI "safe".
Formal mathematical proofs are used to show that computer chips function correctly. Because, for something that complicated, it's hard to get even 95% certainty without a proof. Of course, the proof only holds if the transistors actually function correctly.
> It would be possible to build such a machine, but you're requiring that the machine not be intelligent enough to notice that its actions lead to nothing but more paperclips.
> A human is a machine designed only to desire to produce more humans.
To an extent, yes. But I wouldn't describe this as "humans reflected on our values and decided this was stupid, and chose something objectively better". I would describe this as "evolution messed up its alignment work, because evolution is stupid". Some human values include eating sweet food, whether or not it is healthy. And having sex with contraception. Because in the ancestral environment, the sort of minds that ate sweet food and had sex did tend to pass their genes along.
"which we can call a theory of morals, which is a conscious set of beliefs not completely constrained by evolutionary instincts."
If the rest of the tribe says it's a sin to eat beans, you better not eat any beans either, at least if you want to not get chucked out. Humans evolved to, in part, learn morality from our culture.
> No, they are not goals as you and Eliezer define that term. They are algorithms. There are no atomic symbols.
Where did you get the bizarre idea that goals were only allowed to be "atomic symbols".
> Yes, military, economic, sports. Do you think an Olympic athlete could win a gold medal by focusing on winning the race AND on anything else at the same time?
Well even they have some free time. Some of them have children.
Besides, most people decide Olympic medals aren't that great, and don't bother to play that particular game.
> In economic competition between nations, a 0.5% advantage is enormous, enough to stomp any adversary within a century in human time.
Sure, but keeping that advantage up for a century is hard. There are also balancing effects. Everyone else teaming up against anyone who's getting too big. Spying making it easier to reinvent than to invent the first time. Etc.
But suppose this does happen. You start off with 100 ASIs. Each century, the half that compounds power 0.5% faster outcompetes the other half. After 600 years, there is only 1 ASI left, and then it can do whatever it wants.
> and I know what the pitfalls are so well that I may have internalized them to an extent which makes it difficult to communicate my chain of reasoning.
I feel you are warning about pitfall 101, and I am going "yes, I already know. Eliezer already knows. Lots of other people have spotted that too."
“Neural nets, or any other AI design, are made of maths not magic. You can, in principle, prove theorems about them.”
My whole point, which I think I’ve stated twice, is that you can prove theorems about neural nets, but you can’t prove theorems about English sentences. You can prove that a net will converge, and you might even be able to show that some theoretical logic architecture would perform a logically valid deduction on neural activation patterns. But the minute you map anything back into English sentences, it ceases to be logic, and any conclusions drawn are no longer valid. If you could express your values accurately without words, using only the attractors in dynamic neural activation patterns which compute concepts in a neural network, you might conceivably be able to express your values and prove that a program would preserve them. But nobody is trying to do that.
Re. Eliezer’s Spock-certainty disclaimer, he has also posted a long essay explaining that AI safety is really hard because even if you get the logic right, someday in the distant future, random errors in the computation hardware caused by solar radiation will draw an invalid conclusion in one of the bajillions of AI circuits in the galaxy, and AI will no longer be safe. So his awareness that logical certainty is unattainable does not negate his admission that logical certainty is necessary.
> but you can’t prove theorems about English sentences. You can prove that a net will converge, and you might even be able to show that some theoretical logic architecture would perform a logically valid deduction on neural activation patterns. But the minute you map anything back into English sentences, it ceases to be logic,
That is true.
> If you could express your values accurately without words, using only the attractors in dynamic neural activation patterns which compute concepts in a neural network, you might conceivably be able to express your values and prove that a program would preserve them.
Agreed.
> But nobody is trying to do that.
Disagree. People are trying to do that.
> he has also posted a long essay explaining that AI safety is really hard because even if you get the logic right, someday in the distant future, random errors in the computation hardware caused by solar radiation will draw an invalid conclusion in one of the bajillions of AI circuits in the galaxy, and AI will no longer be safe.
Which essay? This argument seems kinda stupid (once the first AI is working, you can let the AI itself figure out the error correction stuff). Also, I've read most of Eliezer's essays, and I don't remember this one.
You have misunderstood the paperclip thought experiment.
It is a salient example of a real problem: when you have an optimization process, and you yourself don't know what the solution to the problem is, that process may discover a solution that, in hindsight, you aren't actually happy with. The "alignment problem" is that it is very very hard to know ahead of time that the optimization process is actually optimizing for the values that you wish (in hindsight) it were optimizing for. This is especially true if that optimization process is superintelligent, and you have very little hope of accurately predicting even the space of possibilities that it is able to explore.
There are lots and lots and lots of examples in computer science, of problem solving programs that came up with unexpected solutions that surprised their programmers. Lenat's Eurisko designed a strange (but winning) Traveller fleet. And then, after they changed the Traveller rules, Eurisko did it again with a different loophole. NASA used an evolutionary algorithm to design a 3D "evolved antenna". Google's AlphaZero started playing chess and then go with moves and strategies never before seen or understood by humans.
If you turn over the economy, factories, manufacturing, and the military to superintelligent AIs ... the concern that maybe they will optimize the universe into a space that doesn't fully match our values and that, in hindsight, we regret, is a valid and important concern.
The "paperclip problem" is just a very clear and salient extreme example, to remind everyone of the actual real legitimate problem. There is no obvious way to insure that a superintelligence running the world WON'T turn everything into paperclips. That would require predicting the solution that the superintelligence comes up with, for its goals, and we are not capable of doing that prediction accurately.
What's silly is thinking that a "post scarcity" society is possible without limiting population growth...possibly without slightly negative growth. The universe may not be finite, but the part within our light-cone is.
i think, in plain terms, that scott is saying that worrying about the permanent underclass is a bit of a vain and neurotic worry, and that in general, quests driven by vain and neurotic worries tend to produce both less fun lives in the present, and less interesting legacies in the future. i think he is saying that it is not befitting of a group of otherwise fabulously well off (in the timescale of the universe, and likely in the present) and intelligent and resourceful people to spend their time fretting about such vain concerns, which are largely out of our control anyway. (thanks, dad)
the paperclips concern, by comparison, is less self-interested. if one was sincerely concerned about this and dedicated her life to solving it, she would have both more fun in the present, and more chance at an interesting legacy in the future.
Your comment -- and the ensuing responses, including Scott's -- get at why this post doesn't really land for me. In fact, the discussion shows how warped the discourse around AI feels these days. I gather from the post and from Scott's comment that he sees three possible scenarios:
1. AI creates great wealth controlled by a few, and a permanent underclass. This is, we are told, a silly meme spread by unsophisticated people, representing a "tiny shoreline" of probability.
2. AI kills us all. Scott has given a p(doom) of 33%.
3. AI vaults us into a post-scarcity world in which there is so much wealth sloshing about that even free moons count as crumbs from the table. I gather Scott's probability on this is roughly 66%, although this 66% presumably also includes other less extreme possibilities that are still generally pleasant for all.
I'm pretty sure the proper Bayesian take on this is:
The most likely scenario by a wide margin is #4: the future looks mostly like the present over the medium term (which I will call the next 20 years). Yes, things are going to change a lot, but think of it as roughly the change from 1980 (pre-personal computer and consumer internet) to now. Or maybe it's 3x that, which is a really, really big change! But it still doesn't have us in a bizarro universe.
After that, option 1 (the one Scott calls a "tiny shoreline") is far more likely than the other two, simply because it is the least outlandish. Presumably there are a wide range of similarly low-probability outcomes in the same basket of "major social disruption, but things are still legible to us today." Some of these outcomes are good, some bad, most are probably in between. Everyone will meme their pick of these based on vibes.
After that are extreme tail probabilities like 2 and 3 above. The discourse has it that self-reinforcing cycles mean that these tail risks are actually the only plausible outcomes, and the middle has been hollowed out. This style of argumentation -- let's call it the Eliezer method -- is treated as being the most intellectually serious, despite being the opposite.
Really hoping this fever breaks, but I don't think it will anytime soon. We're basically going through another industrial revolution, but sped up. It's going to be a bumpy ride.
It takes light six hours to escape the solar system. There's enough mass in the solar system for every currently living human to have their own several-mile-wide space habitat. If people don't get their own several-mile-wide space habitat, it's not because of physical impossibility, it's because we screwed up somewhere.
Yeah, I think allowing some people to have infinite children in a way that takes away other people's space habitats is an example of screwing up somewhere. I realize the total utilitarians might disagree.
In my ideal scenario (stolen from some other people who might publish it later), everyone gets some portion of space to do what they want with. If that's a 1/10 billionth share of the solar system, it's enough for everyone to have a few million children with. If people want more than a few million children, they can do complicated bargaining around lightcone related issues (eg trade their space habitat now for a galaxy ten million light-years away). I agree that infinite children aren't on the table, but I think a few million should be enough for anyone.
I wrote about this on Less Wrong many years ago. Population grows exponentially in time, while the volume of an expanding sphere grows only with the cube of time, and the sphere cannot expand faster than the speed of light. Therefore population cannot continue to expand exponentially for any reproduction rate greater than 1 child per person.
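In symbols (a sketch; $r$ is any positive per-capita growth rate, i.e. anything above one child per person, and $c$ the speed of light):

$$P(t) = P_0 e^{rt}, \qquad V(t) \le \tfrac{4}{3}\pi (ct)^3, \qquad \frac{P(t)}{V(t)} \to \infty \ \text{as } t \to \infty,$$

since an exponential beats any polynomial; the population density inside the light-sphere eventually diverges no matter how the people are arranged.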
No, you can allow every (stable, pre-selected, committed) couple to have one child. Then the population stabilises at 2x the initial number. (You could probably even grant two children and assume that, for other reasons, at least some small % of each generation will die, still maxing out as a finite geometric series.)
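The arithmetic behind the 2x cap, as a sketch: with one child per couple, each generation is half the size of the previous one, so the total number of people ever born from an initial population $N$ is the geometric sum

$$N + \frac{N}{2} + \frac{N}{4} + \cdots = N \sum_{k=0}^{\infty} 2^{-k} = 2N.$$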
Isn't it trivial to find value functions that cannot be satisfied indefinitely? (I demand 10x more Y than I had yesterday.)
Childbearing is probably the one desire humans actually have that can't be satisfied indefinitely. Seems like there are tons of possible solutions: fake children, other children in your community you help raise, and finally, removing such a desire entirely. Maybe this is cheating, but we wanted abundance so as to make us happy; if it literally can't work, it's basically a self-harming compulsion.
As a total utilitarian, I don't think the mere addition paradox is realistic. We can already create pretty cool virtual worlds with the computing power in your phone. If we have a computer powerful enough to run a human mind, adding a utopia on top of that can't possibly take that much extra computing power.
We could be post-scarcity in the sense of living in luxury for billions of years even if there are some scarce things, like children or an original Monet.
You're neglecting the effects of relativistic time dilation. Somebody who wants more than anything to have another kid every, say, hundred subjective years, in such a way that most of those kids share their original values, can have each kid start off far more than a hundred light-years away from any of the others, without needing FTL or otherwise violating known physics.
Relativity complicates this picture. For example, the set of points that are a 1-year trip (from the traveler's perspective) from a given point is a hyperboloid shell in spacetime of infinite size. With arbitrarily high local acceleration you could spread any population as thinly as you wanted over this infinite space, all of them experiencing the same subjective time lapse. (It is the subjective, experienced time that is relevant for population increase.) It's not obvious that you couldn't make space for an exponentially growing population. Some questions to ask: Is this still true if you have a max acceleration? Is there some bottleneck other than simply space that becomes relevant? There is, after all, the preferred reference frame of the CMB rest frame, and the constraint of having to gather sufficient resources, whose distribution wouldn't be Lorentz invariant, perhaps complicates things.
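For concreteness (a sketch, assuming flat spacetime): the events reachable from a starting event after elapsed proper time $\tau$ lie on the hyperboloid

$$c^2 t^2 - |\vec{x}|^2 = c^2 \tau^2, \qquad t > 0,$$

whose induced geometry is a hyperbolic 3-space of infinite volume, which is why "same subjective time, infinite room" isn't an immediate contradiction.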
There's a clear issue here: you will need to use an ever-larger fraction of the mass-energy within a given light cone just to accelerate a given mass a little bit more. If you can't get energy from nowhere, then how exactly do you intend to keep this acceleration going, even under literally the most optimistic conceivable scenario?
The total energy within a lightcone is of course infinite. The question is who can access it and when.
It doesn't take an exponentially increasing amount of energy to hold a constant acceleration in the local frame.
Say you have a civilization that is shaped like an expanding shell. As the population expands it will encounter new fuel such that they can hold a constant local acceleration outward. If they do this, they will find the distance between them grows exponentially in subjective time, allowing them to reproduce exponentially.
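The textbook formula backs this up (a sketch for idealized constant proper acceleration $a$ from rest, setting aside the fuel question): the distance covered after proper time $\tau$ is

$$x(\tau) = \frac{c^2}{a}\left(\cosh\frac{a\tau}{c} - 1\right) \approx \frac{c^2}{2a}\, e^{a\tau/c} \quad \text{for large } \tau,$$

so separations really do grow exponentially in the travelers' subjective time.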
Someone from the 12th century would almost certainly describe the modern day as "infinite post-scarcity", and yet they would see that the suffering of the poorest in our society is perhaps only marginally less than that of the poorest in theirs.
Why shouldn't I feel the same about my life in 50 years?
The poorest person in NYC today has a roof over their head, won't starve, and can hope to find a job that earns them enough for a NYCHA apartment. The poorest person in NYC 300 years ago starved.
Because the unsheltered would die in winter, or move away before they do, I suppose. At any rate, "roof over the head" is rather euphemistic for "subway" and similar accommodations.
No - NYC has a right to shelter, so anyone who is sheltering outside or on the subway is doing so by choice. Not to argue that shelters are comfortable, but the original claim is correct.
A lot of people in NYC don't use the shelters, not because they're too mentally ill to pursue accommodations available to them, but because the shelters are actually less safe than not using them: residents are at risk of being victimized by other people making use of them.
“Right to shelter” doesn’t mean “suffering is less for the unsheltered”. If post-scarcity means we can eliminate suffering entirely, but only by some surgery which large numbers of people refuse, and the people who refuse suffer just as much as today, I don’t think we could say that they had eliminated suffering, just because everyone still suffering was suffering by choice.
More snarkily, there definitely isn’t a right to shelter in LA. And my understanding was that outside of lost-explorer scenarios, “death by starvation” mostly meant “not well nourished enough, so died of things that normally wouldn’t kill you”, which I think definitely hasn’t gone away. I agree that the proportion of people below each level of poverty has decreased, but the problems haven’t gone away.
The poorest today are in a much better situation than those in the 12th century, by almost every measure, mainly medicine but also food quality, clothing, literacy.
Part of that is that relative status would still suck, for which I have some sympathy. The other, more important problem is the precariousness of relying on the charity of others. Even if you had your needs met, it would not feel secure.
Dario wouldn't give you the money personally each year in some way where he might change his mind. He would donate it to a trust administered by an AI whose value function is to give you money each year.
Yes, but this relies on the legal system to stay consistent over long periods.
More generally, I think that _in principle_, if AI stayed under human control and was sufficiently productive to create an automated economy that (to keep this simple) generated 3 or more times the current economic output, then we could
- pay a UBI to everyone equivalent to current standard of living, from taxing 1/3 of the revenues of the automated economy
- pay 1/3 to the owners of the automated economy - yes they get rich as Croesus, I don't begrudge them that
- pay 1/3 for the business-to-business or intra-corporate maintenance of the AI + robotics
And this is a win-win for everyone.
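In toy numbers (a minimal sketch, assuming exactly 3x current output and a clean three-way split; the figures are placeholders, not forecasts):

```python
# Sanity check of the three-way split sketched above.
current_output = 1.0                     # today's GDP, normalized to 1
automated_output = 3 * current_output    # assumed automated economy
ubi_pool = automated_output / 3          # taxed to fund the UBI
owners_share = automated_output / 3      # returns to the owners
maintenance = automated_output / 3       # B2B upkeep of AI + robotics
assert ubi_pool == current_output        # the UBI pool equals today's entire GDP
```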
The cautionary note is that everyone's income is now _exclusively_ set by politics and law. Today, people have the bargaining chip that they can withdraw their offer of their labor if the offer for it is too stingy. Post-AGI, in the absence of that, there is nothing to constrain the system from e.g. halving the UBI allocated to people who are insufficiently MAGA or insufficiently Woke, or insufficiently whatever the ideology of decades from now is.
Yeah, that's exactly it. Trusts and foundations, and they become their own thing, and somehow the money goes to the important functions of the important foundations, and the guy on the street stays on the street.
I trust someone handing out $100 bills to beggars on the street a hell of a lot more than I trust anonymous foundations administered by AI.
EDIT: Think of almshouses. Set up in mediaeval period by the rich to feed and shelter the indigent. Over time (and the changes of history, e.g. the Reformation) they ended up being 'charitable institutions that paid nice comfy salaries to the trustees but somehow the indigent were no longer being fed and housed'. See Dickens' excoriating portrait of the workhouse system in "Oliver Twist" where the trustees gather for the annual meeting and accompanying banquet while the poor (who are supposed to be getting the benefit of the money raised) are confined to gruel.
"The almshouse in Leamington Hastings was founded by a schoolmaster called Humphrey Davis in 1608 for eight poor old people (later expanded to house ten). As you can see, the building offers an attractive prospect near the centre of the village. The almshouse has been through difficult times in the past. The original trustees (brother and nephew of the founder) were accused of not putting poor people in the almshouse and selling off land that had been left to support the charity; as a result they were ejected as trustees in 1632."
They do still exist, but the original donors and donations are long gone and modern ones rely on new foundations and fundraising. Some remain as historic buildings but are now vacant; some get sold off.
The “locked-in AI value function” frame is about as good as any, assuming a good frame exists.
If we have an even better way of promise keeping, the need for a legal system and for ethical people goes away. (Saying this as a lawyer who wants to be automated.)
It’s exhausting how many replies here are “but humans are fallible and human society is decadent!”
And all the natural economic consequences of "post scarcity" exist today. We have so many resources and yet so many people struggle to make ends meet. People are kept poor by spiralling rents.
I wouldn't go quite so far as to say that we have post-scarcity today. We still have to work, which means we still have to live near jobs. When I think post-scarcity, I think that I can own a cheap humanoid robot that builds me a decked-out cabin in the woods, the land for which was cheap because it is in the middle of nowhere.
But I agree that if someone from 200 years ago visited today, they'd be wondering why the hell housing isn't dirt cheap given that we have all these tools and equipment to build and enormous amounts of empty land.
I don't think that's actually true. The poorest people today are homeless, living on the streets, or in shelters where they have to worry about being stolen from or assaulted by other shelter residents. They don't reliably have diets capable of maintaining long-term health, many of them are functionally illiterate, they live in the same outfits more or less perpetually, and they have serious mental health issues that impede integration into society.
At the extremes of poverty, for both modern and ancient people, you're unable to maintain basic needs for survival, and die, usually in a protracted manner.
People at this extreme of poverty are a smaller proportion of the population than in the 12th century, but they certainly exist, and given levels of population growth, the absolute number is by no means insignificant.
Do you think homelessness was better a century ago?
Homelessness is much more a mental health or addiction problem than an economic one. There are lots of problems with psychiatric policies, but in the past homeless people were getting leprosy and cholera and had no access to any psychiatric drugs, and the food the homeless have now is much better than the food their equivalents had a century ago.
I'm not saying homelessness was better a century ago, but homelessness not being worse now than it was a century ago doesn't mean that the homeless today are much better off than they were in the 12th century; the floor can stay around the same place.
I don't think it's actually the case that the food that the homeless have today is much better than they would have had a century ago. The sanitation is probably better (not counting the cases where homeless people are forced to rummage through trash for their food, which definitely occurs in some cases,) but the nutritional value is often worse.
But the floor isn't in the same place; it has got markedly better.
The food quality for homeless people has got much better; it is an objective empirical fact. Three-day-old sandwiches are much better than pottage or gruel. Any medieval or 19th-century pauper would dream of dumpster diving in the 21st century.
Three day old sandwiches are probably not better than pottage or gruel, on account of the fact that they'll be spoiled and make you sick. People do still die out on the street, from exposure, malnutrition, etc. There's not a whole lot of room for living conditions to be much worse than that.
The sanitation is also far better if you do count the cases where homeless people rummage through the trash for food. You think people weren't doing that back then?
I think the example I gave was unhelpful. The modern day also has much lower inequality than historical levels, at least in the past few hundred years, especially if you consider implicit equalisers like democracy + welfare state.
We live in post scarcity today and people still suffer in poverty if the market regards them as providing no value and they don't own capital.
I don't see why 2100 would be any different, assuming no major change in the political and economic system.
The poorest people in India, Kenya and Ethiopia still have antibiotics, schools and quite often even smartphones. Yes, there are warzones like Congo and Somalia where people don't have those things, but there have always been warzones, and there are fewer than there used to be. The 10th percentile of poor people in the world today live in some ways better lives than many of the rich a century ago.
You know what, now I'm curious about a comparison of the life expectancy of the bottom 10% (of the world, not America) today with the top 10% of 100 years ago, then of 1000 years ago. I think the numbers will be surprising regardless of direction.
Also with and without considering infant mortality.
The differences aren't subtle, in Ethiopia life expectancy was 50 in 2000 and it is 68 today, everywhere has improved massively and at every age level.
I'm not sure it's actually true if we're comparing bottom 10% now to top 10% before. Difference between bottom 10% before and top 10% before was also pretty stark.
They may have better widgets but much less living space. The land, it does not grow. Indians have one-seventh the land per capita that they had 100 years back.
People could once afford to buy land and build an independent house. That is simply unaffordable now in big cities. People live crammed together like they never did.
Keep in mind that you only need to be significantly worse off on a single measure in order to be worse off on the whole.
An emperor has it better than a pauper on practically every dimension, but if said emperor is significantly worse off on the measure "access to air" (i.e. he is currently choking to death) I certainly know whose shoes I would prefer to inhabit.
The poorest people today are living better than the kings of the 12th century. They can get medical care that actually works! They can expect to live past 70! Their children are highly likely to live to adulthood! If they're just a step above the literal poorest person, they enjoy running water, exotic foods from around the world, and the world's knowledge at their fingertips. How much would the richest kings of the 12th century pay to have the technological and medical miracles we take for granted?
If a 12th century king had a passing merchant deliberately spit in his face, and then responded with immediate lethal violence, he'd have a solid chance to come out on top in any subsequent legal challenge from the merchant's next-of-kin. How much would a modern bum need to pay for the same guarantee of personal dignity?
The point is that a medieval king never even has to worry about the situation arising in the first place. They are secure in a way that the merchant is not, even though actually defending that security would be bad for all involved.
Or for a more modern example: going to war is almost always against the interests of a state, but having an army is not.
True, the medieval king just had to worry about being deposed and brutally murdered, along with his supporters and his entire family. It was far more dangerous to be a medieval king than to be a poor person in any rich country in the 21st century.
Which has nothing to do with JamesLeng's original point. Maybe the chronic disrespect faced by a modern homeless person is preferable to the risk of violence faced by a medieval king (though given how rare abdication was, I suspect not) but it is unambiguously a severe harm that will not show up in measurements of material wealth.
A lot of things have definitely improved, and I'd agree that on average, people are significantly better off now than they were a hundred years ago, or nine hundred.
On the other hand, economic growth doesn't necessarily align with improvement to people's perceived quality of life (there are theoretical arguments for why it should, but I think there's very good reason to believe that these arguments are based on incorrect premises.) There's been tremendous economic growth over the last eighty years, but I've talked to plenty of people who were born from the 1930s to the 1950s about their feelings on how things have changed over that time, and many of them don't feel that people's overall quality of life or happiness now is better than when they were young- some opined that it seems markedly worse now.
But this progress does not scale linearly, especially in medicine. A healthy 30 year old American's risk of dying from an infectious disease today is barely lower than the same risk in 1996.
There's a very plausible outcome where AI is great and puts a lot of knowledge workers out of a job, increasing productivity immensely, but ASI doesn't happen within our lifetimes. It's already happening to youth employment a little bit. In this scenario, being in the "permanent underclass" is realistic: let's say GDP starts growing at 10% instead of 3%, but your skill set commands next to zero value, so you're on some kind of UBI breadline if you don't own assets. I am not saying it's the only thing, but I think knowledge workers with the means should own tech stock upside (something like a call on Nasdaq 3xing) to hedge this possibility.
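To make the suggested hedge concrete (a minimal sketch; the strike and scenario levels are invented for illustration, not financial advice):

```python
# Payoff of a far-out-of-the-money call: worthless in the slow-growth world,
# large exactly in the AI-boom world where labor income craters.
def call_payoff(index_level: float, strike: float) -> float:
    return max(index_level - strike, 0.0)

today = 100.0
strike = 3 * today                       # pays off only if the index ~3xs
for level in (120.0, 300.0, 600.0):      # slow growth / 3x boom / 6x boom
    print(f"index {level:5.0f}: payoff {call_payoff(level, strike):5.0f}")
```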
I agree there is a chance of being part of a temporary underclass, and that this temporary underclass state might last longer than you can remain solvent, but this was always true.
Doesn’t it make a lot more sense to hedge for that scenario than post scarcity where I own a moon at minimum? Also, I can’t hedge against paper clipping. The “coastline” is the only thing you really can hedge, and I also think it’s probably modal. Maybe the difference is that you think post scarcity is very likely in any case (?)
It's a subject for another post, but it's a very interesting question why the Internet didn't result in a big GDP/productivity boom. My theory is that, measured in terms of the prices/weights of things pre-Internet, growth did in fact boom, but this isn't captured by GDP.
For example, Google Search has probably immeasurably improved your life. But it's free. So it has no CPI impact. You can now watch Netflix and access like a million shows for $20, whereas before you would have had to go to a theater or watch cable. It's a much better product! But it doesn't have much of an impact on CPI. And people don't feel like they're getting a ton richer, because it becomes a cost of participation in the American economy. Make no mistake, people would not like it if they had to go back to cable only--they would feel poorer.
Similarly, the great cheapening of goods due to Amazon/globalization has limited deflationary (and therefore real-growth) power. Imagine you have an economy with $5 TVs, $5 food and $5 shelter (in year 1). Only 10% of people can afford TVs. Then someone makes TVs way cheaper, by exporting labor to China or whatever, and the next year they only cost $1. Now 50% of people buy TVs (assume everyone buys food and shelter the entire time). The deflationary impact (therefore positive real growth) is only -4%! Yet, if you take the prices of the TVs from year one, you've created $160 in value in a previously $1500 economy: an 11% rise.
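Running those toy numbers (a sketch of the arithmetic; the prices and quantities are the made-up figures above, and reading the "$1500 economy" as 100 households times the full $15 basket is my assumption):

```python
# Toy economy from the comment: 100 households; TVs fall from $5 to $1
# between year one and year two, and TV ownership jumps from 10% to 50%.
households = 100
p1 = {"tv": 5.0, "food": 5.0, "shelter": 5.0}   # year-one prices
p2 = {"tv": 1.0, "food": 5.0, "shelter": 5.0}   # year-two prices
q1 = {"tv": 10, "food": households, "shelter": households}  # year-one buyers
q2 = {"tv": 50, "food": households, "shelter": households}  # year-two buyers

# Measured deflation, weighting each good by its year-one expenditure share:
spend1 = {g: p1[g] * q1[g] for g in p1}
total1 = sum(spend1.values())
cpi_change = sum(spend1[g] / total1 * p2[g] / p1[g] for g in p1) - 1
print(f"CPI change: {cpi_change:+.1%}")   # about -3.8%, i.e. the ~-4% above

# Value created if you price the new TVs at year-one prices instead:
surplus = (q2["tv"] - q1["tv"]) * (p1["tv"] - p2["tv"])   # 40 new TVs x $4 each
full_basket = households * sum(p1.values())               # the "$1500 economy"
print(f"${surplus:.0f} created, {surplus / full_basket:+.0%} of ${full_basket:.0f}")
```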
TLDR: as things get really cheap, their basket weight goes to zero, and so they don't end up showing up in productivity.
Maybe this will happen with AI too, where it creates a lot of consumer surplus and unemployment but growth doesn't move that much. Hard to say and it depends.
I am assuming that the part about relaxing because Dario Amodei took the Giving What We Can Pledge is a tongue-in-cheek gag designed to set up the literary world of this post, but on the off chance that you or any readers think that such a pledge means anything in the singularity timelines, remember that Sam Altman promised that humanity would be the primary beneficiary of OpenAI and that he would have zero equity.
I have no evidence that he's reneged so far, the reneging rate is pretty low, and it seems crazy to renege *after* you become a post-scarcity galaxy-owning oligarch. In any case, he's just one example; there are plenty of other philanthropic AI company employees.
As far as I know, Altman still doesn't have OpenAI equity. I think his plan to make humanity the primary beneficiary ran up against the need for vast amounts of compute that only investors could pay for, but the nonprofit does still have ~25%, which is actually technically better than we get with Dario.
"and it seems crazy to renege *after* you become a post-scarcity galaxy-owning oligarch"
Wait what, why? If I were a hypothetical billionaire sociopath in charge of a frontier lab who only cared about expanding my own personal power by any means necessary, I would do the utmost I could to pretend to be as kind and trustworthy as possible without seriously endangering my AGI project - such as by making voluntary commitments to give away money - in order to get more sympathy from regulators and the public, and just renege on my promises once I'm a galaxy owning oligarch. It's not exactly like anyone will have the power to take me to task.
Sociopaths aren't as good at pretending not to be sociopaths as people tend to think. Sociopathy is really a hindrance for leadership. Maybe one tech billionaire is a sociopath, but I sincerely doubt all of them are, based on observations and general research into sociopathy.
>Sociopaths aren't as good at pretending not to be sociopaths as people tend to think.
They wouldn't have to be.
If only 1 in 10,000 is "as good at pretending to not be a sociopath as people tend to think"..... they are the sociopaths likely to end up being the tech billionaires. Because the other 9,999 sociopaths will crash out way earlier in the process than billionaire-dom.
Even conditional on this being extremely rare among sociopaths, you'd expect all tech billionaire sociopaths to be of this type.
Going even further: You'd expect as many billionaire positions as possible to be filled by highly competent sociopaths, because sociopathy is an extremely positive trait in a business. Firing someone who's the sole breadwinner for their family and who lives paycheck to paycheck so you can replace them with someone who's 5% better at the job is an utterly immoral thing to do, but the sociopath both does not care, and has good enough social intelligence to determine that the fired worker will not whistleblow or leak company secrets. Almost all of the benefits of participating in society, with almost none of the drawbacks.
The relevant question isn't "will there be at least one sociopathic billionaire?" (yes), but rather "will there be at least one non-sociopathic billionaire?" (also yes).
That being said, I still don't expect to be gifted a moon within my lifetime, simply because I don't expect tech to progress quite that fast. (Note moon-gifting requires a much higher tech level than world-destroying, so this isn't a reason for optimism on that front.)
They don't have to be an unusual sociopath for it to be very bad to be an ordinary person, though - they just have to be an ordinary, not particularly altruistic person who will rationalise away their responsibility for any suffering that isn't physically right in front of them.
I think there is something deeper going on than "we had a very specific technical problem which created an unforeseeable need to abandon a prior commitment." A commitment to not betray a constituency with potentially adverse interests is actually quite costly. Consider the following toy model of pre-singularity negotiations between OpenAI and Anthropic:
OpenAI - "We are almost ready to begin colonization of the lightcone. Earth will be rendered uninhabitable by waste heat within 3 days. If you give us your clusters, we will allow the executive committee of Anthropic to board our orbital shelter and share in the glories of the AnthrOpenAI Interstellar Empire."
Anthropic - "NO! Give us two more weeks. We need that time to evacuate Earth's population to the orbital shelters."
OpenAI - "We cannot wait. XAI might have launched their own Von Neumann probe project by then. Our offer stands. Take it or leave it."
Obviously Dario betrays humanity in this situation.
My thought process is that if he was particularly scrupulous about not materially contributing to the genocide of Earth’s population he would have already shut down his AGI company, but I admit I don’t know the guy personally.
> and it seems crazy to renege *after* you become a post-scarcity galaxy-owning oligarch
Throughout these conversations, there seems to be big missing middle in the set of futures you deem worthy of consideration. What about the scenario where AI:
- Fully replaces all human labour
- Gives those who control it overwhelming military power
- Rapidly accelerates science & tech progress but isn't "magic" -- e.g. it doesn't immediately enable rapid, low-risk colonisation of our galaxy, or medical invulnerability, or perfect-fidelity mind uploading/reincarnation
So we have a small group (perhaps a single person) who can do basically whatever they want with/to the rest of us, but who remain earthbound and mortal. From their perspective, the rest of us have 0 productive value and, except for moral/altruistic reasons or to the extent that we are attractive or entertaining in some way, have exactly two relevant properties: we are using still-scarce resources (land, if nothing else, remains scarce); and we are a potential threat to the long-term safety and power of the ruler(s) -- even if we have absolutely no chance of mounting a revolt, we are a breeding ground for new pathogens.
If Amodei happens not to end up in the ruling class, or if he turns out to be a bad guy, or if he's a good guy but there are also one or two sociopaths in the mix, how does this scenario not end terribly for most of us? Or if you think it's unrealistic, why??
I think these are some different scenarios that it's worth teasing out:
- Full automation of labor (and research) ought to cause some kind of insane economic explosion. It's hard for me to figure out how it doesn't, unless there's some kind of regulation banning it from doing so. You should be able to 10x your labor force every year or two, even without true superintelligence, just by having your robots build other robots (see the toy compounding check at the end of this comment). Once we get to 100x GDP within a generation, I think we're in such a post-scarcity world that it would be surprising if the average person ended up poorer.
- I think the most likely scenario is that AI is controlled the same way it is today - it's owned by corporations, which are owned by shareholders, within a country ruled by a government. At the very least, this leaves many shareholders better off; more likely, the government has some opinion (eg redistribution).
- I think you only get it controlled by a single person in some really surprising scenarios. First, you need for one company to pull way ahead of everyone else. Second, the CEO has to coup his own employees and shareholders - basically find some way to get the AI (which according to company regulations should be following some combination of user instructions, a spec written by the alignment team, and the law) to follow his direct orders instead. I think this looks like forming a conspiracy with ten people on the alignment team, but this is tough to do without one of them whistleblowing. Third, you need the government to not be watching for this exact scenario very carefully, which itself requires a fast enough takeoff that they don't realize AI is a big enough deal to be a security risk until it happens. The two most likely scenarios IMHO are either Altman being normally rich for the normal reasons within our existing social order, or some set of AI company leaders, government officials, etc gradually tightening control in a way similar to how some set of company leaders and government officials control the defense industry.
- If a single person does control the world, what are they going to do? Build a very big mansion? Okay, that takes 0.00001% of his wealth, now what? I think outside of the rare scenarios where they're a literal sadist (probably not that common), he just rules as a king, which is unfortunate and I'd prefer something else, but there are lots of very rich places ruled by kings (eg Saudi Arabia, Qatar, Brunei) and they're not so bad.
In none of these cases does having a $10 million B2B SAAS startup help much.
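And the toy compounding check promised above (the doubling time is an assumption for illustration, not a forecast):

```python
# If an automated labor force can double itself every ~6 months, 100x output
# arrives within a handful of years, let alone a generation.
import math

doubling_time_years = 0.5                     # assumed self-replication rate
years_to_100x = math.log2(100) * doubling_time_years
print(f"~{years_to_100x:.1f} years to 100x")  # ~3.3 years
```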
> Once we get to 100x GDP within a generation, I think we're in such a post-scarcity world that it would be surprising if the average person ended up poorer.
Usually, the average person benefits from economic growth due to some combination of their role in producing that wealth and their ability to cause trouble if they feel too hard done by. But I'm thinking of a scenario where the average person can no longer play any role in growing the wealth of anyone with access to AI labour, and we don't have the violent kind of bargaining power either, because the ruling class has a robot army to go with its robot labour force. So I don't see many reasons why the average person would be cut into this deal, other than altruism and political/institutional inertia. Both of these will probably play some role, but not enough to prevent an explosion of inequality, such that we should at least expect control of superintelligent AIs not to broaden significantly beyond whatever group possesses it when this all kicks off.
I'm happy to ignore the single dictator scenario (except in responding to your third point below, which I think remains relevant if we replace singular they with plural they), but I don't think you've successfully made a case against the oligarchy scenario. Once it becomes clear that immense, future-shaping power is available, I expect to see rules and institutions become relatively meaningless except insofar as they happen to mirror actual physical power. Currently, governments are torn between the influence of rich powerful people and average voters/workers. But if the average person becomes economically irrelevant (except as a drain on resources) and physically negligible (because the people who matter are now basically invulnerable to terrorist attacks), I don't see why governments don't become pure servants of the new oligarchy.
> If a single person does control the world, what are they going to do? Build a very big mansion? Okay, that takes 0.00001% of his wealth, now what? I think outside of the rare scenarios where they're a literal sadist (probably not that common), he just rules as a king, which is unfortunate and I'd prefer something else, but there are lots of very rich places ruled by kings (eg Saudi Arabia, Qatar, Brunei) and they're not so bad.
I think there's a good chance he has grand plans: at a personal level, chasing immortality for himself and anyone he cares deeply about; and externally, something like galaxy colonization, finding aliens to talk to(/fight/fuck), or crafting all of Earth into some kind of weird utopia. (I reckon most of these types will at least want to chase immortality. If it's a Musk, he'll definitely want to do space stuff. And it's hard for me to imagine a personality type with the drive and ruthlessness to attain this position, but without the drive and grandiosity to at least want to radically transform Earth given the opportunity.) Assuming there is *something* he wants to do other than sit around on a pile of gold, he'll presumably want more compute and various raw materials. I don't think it's implausible that the appetite for compute, together with whatever physical stuff is relevant to the grand project, grows to the point that it can only be fed by ~all of Earth's land and productive capacity.
Re footnote 2, "Nobody can revolt against someone who controls a technological singularity, so why put them in camps?": at the very least, we are taking up space that could be occupied by data centres and mines (and potentially the descendants of the ruling class; given quasi-immortality and futuristic biotech, their family sizes could grow very big very quickly...). And potentially we still do pose *some* threat, even if only as a hotbed of viruses that could (especially over the timescales relevant to a quasi-immortal oligarch) one day turn up something really nasty. So, put in literal camps, maybe not; but allowed/encouraged to fuck off entirely, maybe.
If the technologically-elevated dictator has enough military superiority to laugh at nuclear weapons, he's also got enough industrial capability to mine asteroids instead of staying in a gravity well (with all that loose atmosphere getting in the way of the solar collectors, fouling up the radiators with ambient heat far above single-digit kelvin, etc). Earth becomes a museum piece, a sliver of the galactic budget diverted to preserving it for sentimental reasons rather than "perfect efficiency." Nobody worries about Colonial Williamsburg spreading cholera.
>Assuming there is *something* he wants to do other than sit around on a pile of gold, he'll presumably want more compute and various raw materials.
Yes, but also something else in addition to that....
Assuming there is something he wants to do other than sit around on piles of gold.... I would assume "being at the tippity-top of a giant status pyramid" would be part of that, given he is, after all, human.
What's the point of everything else, if there aren't billions of people admiring him for it? And bowing and scraping at his munificence? Billions of people who recognise him as the very highest-status human there has ever been?
Afterall "Look on my works ye mighty and despair...." famously doesn't work if there is no one to look, and they also need to be a little bit mighty for you to get the full effect....
After all, what would be the use of being a multi-trillionaire if all you could do with it is the equivalent of sitting alone on a giant pile of compute and raw materials? What's it buying you? Nothing. Might as well be a giant pile of gold....
So.... it seems likely he would also want a very large and complicated and intricate primate dominance structure full of thinking/feeling primates with a big gold throne at the apex of that status pyramid for him to sit on. The bigger the better. Billions if he can, millions if he can't.
Then not only does he have his great works, he has people to look on them and despair, and some of them (at least the next level down, and the level below that) are sufficiently mighty that he gets the gratification of lording it over them as they despair at how much more status he has than even them, the demi-lords of creation.
> If a single person does control the world, what are they going to do?
That's the googol dollar question. Play this out in your mind and see what you find. Take it to its logical conclusion.
I see this as an incredibly strong attractor:
- they will own and rule everything, and we nothing.
- we only exist at their mercy.
- regular humans might be limited as to how much this kind of power can get into their head, but ASI will be happy to augment this new overlord. 10k+ IQ? easy.
- now this new overlord just happens to be as much removed from the rest of us as we are from bacteria.
- we know how much care and regard we extend towards bacteria
- congrats, you ended up with another paperclip maximizer, just by a slightly different route.
- even if the ASI was fully aligned/corrigible to that single human, that single human was never CEV aligned to the rest of humanity. and their preferences, taken to the extreme, are incompatible with the rest of us. coming apart at the tails, and all that.
They don't even have to be psychopaths or sociopaths or extraordinarily cruel or sadistic. Bezos is very rich, he's philanthropic, and Amazon has had problems over warehouses (defence: 'not our problem, those are sub-contractors') and employees wanting to join unions, etc.
Average AI zillionaire won't care enough about Joe Schmoe to bother being cruel on a personal level, it'll all be Moloch all the way down.
I'm also not saying that this uplifted human will kill other humans necessarily out of 'ill will'. I think the standard AI trifecta just applies straightforwardly, as roughly as EY put it in a recent interview:
- You are made of atoms that it can use for something else. Whatever value you might provide in your human form, your atoms will provide more of that.
- You are like an anthill and the galactic highway just needs to be built.
- There might be a slight chance for another ASI being built by humans, so potential competition needs to be preempted.
If there is some 'goodness' remaining in the new overlord maybe they'll decide to preserve most DNA and connectome samples, but it all might just sit in cold storage never to be accessed again. Might be cold comfort.
(Could also be imagined as a Dr. Manhattan figure if one is a fan of Watchmen.)
>That's the googol dollar question. Play this out in your mind and see what you find. Take it to its logical conclusion.
The logical conclusion is "a higher status than my peers".
That's what billionaires crave. The money is a means to that end.
Being as far from us as we are from bacteria doesn't give him (or her!) what they want.
Look at Musk, all the money in the world.... and he bought Twitter because he wants people he considers his peers to like him and think he's funny!
It's different if the overlord is the AI. But if he's a human, with an evolved human's instincts for "what he wants", then it all converts into "status among other primates" in the end.
What's the point of having a 300-ft yacht if it isn't to be 30 feet longer than all those peers with "only" 270-ft yachts?
He's going to need a primate dominance pyramid, with people he considers "near peers" on the rung below him to lord it over, and for them to be near-peers, they're going to have to have a (smaller) amount of people to lord it over, all the way down.
Likely, with as many (if not slightly more) primates in the whole pyramid than exist today. If it were smaller, how could he be sure he was the most powerful/highest-status human of all time? If it's only a million people.... perhaps he's second to Genghis Khan, or some Chinese emperor... only a billion? Perhaps Trump was more powerful in his day, or Obama, or Musk.
BUT....if the human population is "the biggest it's ever been" and they're all in a dominance pyramid with him sitting at the apex? Then he KNOWS he's the highest-status human of all time. And those one rung below are "super-high-status near peers" he can enjoy having the upper hand over, given each of them is as high status as any emperor you care to name....
"- now this new overlord just happens to be as much removed from the rest of us as we are from bacteria."
Yes, I think humans would be like chickens in this situation, and look how little we care for the well-being of chickens when we want to eat them at an affordable price.
"If a single person does control the world, what are they going to do?"
Kill everyone. You don't even need your billionaire dictator to be obviously evil; the combination of the natural abstractions built by the AI to let the dictator completely control society and the dictator's own greed will push them to drive everyone else to extinction, so they can make their numbers go as high as possible.
"Human rights defenders and others exercising their rights to freedom of expression and association were subjected to arbitrary arrest and detention, unfair trials leading to lengthy prison terms, and travel bans. Despite some limited labour reforms migrant workers, in particular domestic workers, continued to be subjected to forced labour and other forms of labour abuse and exploitation, and lacked access to adequate protection and redress mechanisms. Thousands of people were arrested and deported to their home countries, often without due process, as part of a government crackdown on individuals accused of violating labour, border and residency regulations. Saudi Arabia carried out executions for a wide range of crimes, including drug-related offences. Courts sentenced people to death following grossly unfair trials. Women continued to face discrimination in law and practice. Saudi Arabia failed to enact measures to tackle climate change and announced plans to increase oil production."
Oh but that's domestic servants, of course you don't treat your maid like your daughter. How about if she works as a fitness instructor? That's a nice white girl middle-class job, yeah?
"On 9 January the SCC sentenced Manahel al-Otaibi, a fitness instructor and women’s rights activist, to 11 years in prison in a secret hearing for charges related solely to her choice of clothing and expression of her views online, including calling on social media for an end to Saudi Arabia’s male guardianship system. Manahel al-Otaibi’s sentence was only revealed publicly several weeks after the court judgment, in the government’s formal reply to a joint request for information about her case from several UN special rapporteurs. Her family could not access her court documents, nor the evidence presented against her. In November, she told her family that the SCC Court of Appeal had upheld her sentence."
I have to think you're so blinded by your hopes for the magic future that you skip over poor arguments in order to make your point ('well the AI oligarch will be an okay guy, why wouldn't he be, the Saudis are rich rich rich and they're okay guys, right?')
Context. Compared to some of the other apocalyptic scenarios being seriously discussed here, the "I have no mouth and I must scream" stuff, yes, Saudi Arabia's horrifically poor human rights record isn't so bad.
I think this is incorrect because the realistic scenario is that people who own a certain amount of capital (e.g. especially in AI labs, which is a large part of the motivation for many to work there) would become part of the ruling class, not Dario alone. And in this case it definitely does make sense to become post-economic and own capital, and building a B2B SAAS startup is a fine way to do that. I think this scenario is the modal outcome.
I don't know when you became an apologist for the tech oligarchs actively destroying what little was still decent about our society, but this whole piece is a bad take and a crazy rhetorical place to plant your flag, in my view. Hypothetical people in the future being hypothetically better off does not justify causing massive suffering for the people alive today for essentially no reason other than a handful of sociopaths' greed and overinflated egos. As a humanist, this overly optimistic view of AI's impact disgusts me to the core.
> the tech oligarchs actively destroying what little was still decent about our society
Uh huh. People stare at screens 11 hours a day, this is CLEARLY the tech oligarchs' fault!
But people were staring at screens 7-9 hours a day before smartphones or "tech oligarchs" even existed. Obviously movie and TV show oligarchs!! They're just one slimy remove from tech oligarchs! Off with their heads!
And what about the fact that literally 80% of Americans are overweight or obese? Mcnugget oligarchs!! Off with their heads!! But there's hundreds of different fast and junk food companies? Oligarchs, oligarchs, all of them! They're FORCING people to eat mcnuggets and Coca Cola and junk food every meal! To the oubliettes and bastinados!
What about the fact that 75% of people live paycheck to paycheck? This one is easy, right? Obviously capitalism! Billionaires!! Billionaires make it impossible for people to plan more than one month ahead! Off with their heads! But you know, this has been true back to the sixties, when there was only a handful of billionaires alive. Arguably, it's true for millions of years of hominin history, because hunter gatherers CAN'T store big surpluses of food, it's just sort of the default time horizon.
What about the fact that ~half of all marriages end in divorce, and of the remaining marriages, at least half are net miserable for at least one party? And all this was happening WELL before smartphones and dating apps. Um...relationship oligarchs? Off with their heads?
Maybe people just suck, and have the planning horizon of gnats, and will always do a bunch of stuff you consider a bad idea. Eating mcnuggets every meal, staring at screens 10 hours a day, living paycheck to paycheck, getting in bad relationships and then getting divorced.
It doesn't require "oligarchs" rapaciously harvesting serfs to get people where they are today, all it requires is giving them what they want. All "tech oligarchs" have done is plugged into the *already existing* drive to stare at screens for 10 hours a day, a little better than TV and movies, and have eaten into their share of eyeball-hours.
Just like all fast and junk food has done is plug into the already existing desire to eat fatty, sugary foods for every meal, and done it so well that literally 80% of Americans are fat.
This isn't an "oligarch problem," it's a human nature problem.
Really, the point is just that _somebody_ rich has to decide they value other people's happiness too. It's not a high bar to clear. If all power ends up concentrated in _one_ AI/tech oligarch's hands, then fine, maybe we'd get unlucky and they'd be the kind of person to enjoy lording over a universe of suffering. But even if it's a few dozen, chances are a few of them would be fine with giving up a small portion of their power to make this a better universe for everyone else.
Even today, most billionaires at least _dabble_ in philanthropy.
I'd agree that, looking at a long enough period of time, AI is likely either to create fabulous levels of wealth or to destroy us all. In that sense, I agree that worrying about a capital-P Permanent underclass might be unfounded.
However, I don't think most average people are really concerned about that capital-P Permanent, or about being relevant in history. It seems to me that the primary worry is about the near future, within this lifetime. Worried not about a god-like AI, but rather an AI that's competent enough to take over huge portions of the job market but not enough to catapult us immediately into a sci-fi type society. Worried that in that society AI creates a permanent (permanent for an individual, not a society) underclass, not by nature of incredibly drastic wealth increases captured solely by the wealthy, but rather by eliminating the already limited upwards mobility provided by the job market.
Sure that transitory period is unlikely to be a truly permanent state of affairs but even if it’s only as short as around 30 years that’s a long enough time to be effectively life ruining for a generation of people.
It seems reasonable to me to assume that between post-scarcity-level AI and the current day there will be a level of AI technology smart enough to automate all but the highest-skilled jobs but not enough to really create a recursive AI-improvement loop. In that period I would be really worried about not having enough investments to carry me past ever needing to work again.
Great point. There is a certain kind of conditional knowledge contained in healthy families, and when they are allowed to completely disintegrate, for whatever reason, that knowledge can be lost.
" In that period I would be really worried about not have enough investments be carried out of needing to worry about working again."
And that's not even considering the people who don't have investments because they can't afford to, and don't know how to, invest. Even for the "thirty year period" people, if they get knocked down a rung of the ladder, their children will also be knocked down. Isn't this the complaint of Millennials and Gen Z today, that they can't hope to have the same standard of living as their parents/grandparents? Unless after the 'thirty years' the economy really does zoom to the moon and beyond, and there is so much money that everybody gets enough to live luxuriously (and not just "you have a bunk bed in the state dormitory and three meals per day of insect-meal gruel, be thankful"), the AI boom is going to create the underclass.
Locality counts too - if you are a Millennial in China or Gen Z in Africa, you have a very different view on the idea of doing better than your grandparents.
But you have to be a very enlightened individual to look at your kids and say ‘well, they may be worse off than me, but on a planetary scale poverty is decreasing’
That fear was one of my primary motivations for becoming as well acquainted with the strengths and weaknesses of LLMs as possible as quickly as possible when they first had their wide commercial release. I'm sure I'm not alone.
Is there a case to be made that the future, benevolent or malevolent, probably doesn't feel obligated to keep caring about what pledges Dario Amodei made, or what wealth he accumulated, once he's sufficiently far removed from the direct hinge of history? And generally isn't obligated to care about present human notions of who does, or doesn't, have capital/fame/success at all, any more than we're at the mercy of the caste systems of our ancestors?
For all we know, the people who become remembered as Jesus to 200,000 AD aren't the ones we're currently paying attention to at all. So the ideal strategy to be the next St. Veronica is just to give as many people washcloths as possible (or, more broadly: be nice to people when we can), which is true even when there's not a singularity imminent.
In the Ethiopian Orthodox Church, Pontius Pilate is considered a saint because his documented reluctance to execute Jesus is considered evidence that he later converted and was himself martyred.
Pilate has a more positive reputation in Eastern Christianity and in early Christianity. There are plenty of traditions where he was basically forced into it. This tends to be linked to a desire to make the Jews solely responsible for Jesus's death. Which is important if you claim more direct heritage from the Roman Empire than was common in the west. Or if you just want to hate Jews.
Western traditions tend to have seen him as more guilty and often suggest he was punished in some fashion. Sometimes prosaically, an echo of the actual fact he seems to have been recalled, and sometimes mystically. There's even a tradition the gates of heaven were shut to him and he wanders the earth, effectively a more specific version of the origin of the Wandering Jew.
There are two or three distinct periods when he's prominent, with long fallow ones in between. I've always wondered if it says something about the times he's famous in. There are certainly commonalities.
Man, why isn't this "X years to get as much of your thoughts and personality and writing into the god-mind coming into being?"
Same hinge of history style argument, but your thoughts and sundry will LITERALLY be inside a god, theoretically influencing their thoughts and actions!
The value of writing and communicating on a public platform has never been higher. I'd be long Substack, but it's privately held!
Yeah, I think it's the stuff you're creeped out by in that post. If god knows when even a tiny sparrow falls, then he also knows each of the several million words you've written, the arguments and undertones and overtones and passions and positions therein, and has considered them the appropriate amount.
Also, have some pity on the poor legions of historians, focusing their collective scrutiny on our tiny slice of humanity! Even if our writing is relatively unlikely to do a lot of shaping of god minds or 3 million generations hence descendants in utopic bliss fields, it will sure make all the historian jobs richer and easier!
I have a hard time buying the "my writing will instruct the Godlike intelligence of future AI" angle. Do you think Einstein's ability to understand physics was materially influenced by the particular methods his first grade teacher used to teach multiplication? AFAICT writing is either instructive or persuasive (or entertaining, but that's not relevant here). Instruction implies that there's some objective principle that's being conveyed, and I don't think there's anything in your repertoire so esoteric that a future ASI wouldn't be able to either derive it itself or glean it from other sources. And if you assume that the ASI is many times more intelligent than you are then, well, good luck trying to persuade it of anything. At best, the only impact any particular writer might have on future ASI would be as a data point in a survey of what humans thought in 2025.
> At best, the only impact any particular writer might have on future ASI would be as a data point in a survey of what humans thought in 2025.
I mean, you could say this about our entire past, and yet even today we care a lot about what Plato and Descartes and Hume and Paine and Shakespeare and Dostoyevsky and Austen and thousands of other authors from that past wrote, and their thoughts and writing still shape our memetics today.
To Scott's point, we even care and still talk about Ea-Nasir's copper quality and business practices, 4000 years later!
Memes matter, and have a much longer life and relevance than any individual. Sure, you and I are no Shakespeare, but SCOTT might be!
Yes *we* care, but we're not superintelligent AIs. If the goal is to influence AI by being influential enough among humans that one's ideas are repeated enough by other humans to dominate future training data, then that's no different from writing without having AI in mind.
>To Scott's point, we even care and still talk about Ea-Nasir's copper quality and business practices, 4000 years later!
That's a bit of a stretch. We care about that because of historical value, not because of any objective instructive value they might have. Which was the point of my last sentence - if AI cares at all about what one particular person wrote in 2025 it will only be insofar as it provides historical insight about the cultural zeitgeist in 2025.
It's also a little ironic given Scott's x-risk fears, since IMO the primary value of his writing is rhetorical. He's unusually good at making ideas persuasive. Why would he want to teach future AI how to persuade humans to do things that they otherwise might not want to do?
The richest olive merchant in Jerusalem that year is long forgotten, but she endures:
Talmud Bavli Gittin 56a: "There were at that time in Jerusalem these three wealthy people: Nakdimon ben Guryon, ben Kalba Savua, and ben Tzitzit HaKesat."
So you have to drop down to being the fourth richest olive merchant to be long forgotten.
That was shortly before the Second Temple's destruction, so a good while later. And these three were the richest in Jerusalem, period. They were more akin to Bill Gates than to some random rich guy.
Legacies are kind of overrated. I don’t know what good it will do if some people in a future civilization spend all of three seconds reflecting that I existed once.
My personal favorite absurd conspiracy theory is that actually Ea-Nasir’s copper was fine, and he’s fallen victim to the longest-running review-bombing campaign in history.
I'm confused about what point you're trying to make. You point out (correctly) that it may not be worthwhile for an upper-middle-class person to devote enormous effort to shifting their future prospects from 'stupendous wealth' to 'unimaginable wealth'; but then you strongly imply that it *would* be worthwhile for them to devote enormous effort to ... appearing slightly cool?
Or to put it another way, the first three paragraphs make total sense, and then the article veers off into a bizarre dithyramb about the pursuit of fame and glory.
I suspect fame and glory are powerful incentives for himself and his target audience. You don’t write blog posts on the internet for a decade, instead of earning more money elsewhere, on the off chance that something like Substack eventually comes along and pays, especially when you’re already a med student. You write blog posts for fame and glory, and Scott’s amassed plenty. He clearly values it more than wealth, and he’s writing to people who share those values.
I think the claim is that the values you embody now will shape people’s behaviour in the future. 1 in 2,500 future people having a constant reminder of kindness as their name, or 1 in 5,000 if 50% of those people would otherwise be named after some other example of kindness anyway, would be a huge positive impact.
I suppose the idea is "don't worry about being rich, everyone is gonna be rich (except for those losers we don't have to consider), so what can you brag about at house parties if not wealth? well, how about doing something cool you can name drop?"
Sometimes people decide what to do by imagining what strangers would think of them. Often, this is entirely mistaken, because the strangers aren't paying attention and don't care what you do. So you end up doing things based on a mistaken mental construct.
If I understand correctly, the incredibly vague call to action in this post is to imagine what hypothetical future people might think of what you do, and try to do something to impress them? That seems even more doomed to fail. Also, even if you knew what they wanted, why should you care whether they're impressed?
I hope it’s not a breach of internet etiquette if I copy and paste a (shortish) comment of mine here. I think the claim is that the values you embody now will shape people’s behaviour in the future. 1 in 2,500 future people having a constant reminder of kindness as their name, or 1 in 5,000 if 50% of those people would otherwise be named after some other example of kindness anyway, would be a huge positive impact.
I think it's mainly intended as a negative call: if you've read something telling you it's vital to create some B2B SAAS company, yeah, don't worry, you don't need to stress about doing that after all.
Some of Scott's "Contra" posts are refuting some point I've never encountered. This one is too, but at least here he linked to the New Yorker post he's arguing against. (Admittedly I can't read that New Yorker post on mobile due to paywalls, but the thought was there.)
The New Yorker article isn't really arguing a point, it's more exploring a social phenomenon, but some excerpts from the New Yorker article that I think summarize it:
"The “lumpenproletariat,” according to “The Communist Manifesto,” is “the social scum, that passively rotting mass thrown off by the lowest layers of the old society.” Lower than proletariat workers, the lumpenproletariat includes the indigent and the unemployable, those cast out of the workforce with no recourse, or those who can’t enter it in the first place, such as young workers in times of economic depression.
According to some in Silicon Valley, this sorry category will soon encompass much of the human population, as a new lumpenproletariat—or, in modern online parlance, a “permanent underclass”—is created by the accelerating progress of artificial intelligence....The idea of a permanent underclass has recently been embraced in part as an online joke and in part out of a sincere fear about how A.I. automation will upend the labor market and create a new norm of inequality...start leaning in to A.I. products or stay poor forever....
Jasmine Sun, a former employee of Substack who writes a newsletter covering the culture of Silicon Valley, told me, of tech workers, “Many are really struggling and can’t find even a normal salary, and some of the people are raking it in with these never-seen-before tech salaries. That creates this sense of bifurcation.”...The reward for the grind might be a role as an overlord of the A.I. future: the closer to collaborating with the machine you are, the more power you will have. Fears of a permanent underclass reflect the fact that there is not yet a coherent vision for how a future A.I.-dominated society will be structured. Sun said, of the Silicon Valley élites pushing accelerationism, “They’re not thinking through the economic implications; no one has a plan for redistribution or Universal Basic Income.”"
But the difference in mass-energy here is still huge. If you're a regular person, when you eventually die of heat death, others will continue on hundreds of thousands of times longer than you in a state of pure bliss, simply because pre-singularity property rights favored them.
That feels a lot more intuitively unfair than merely getting a moon-sized estate instead of a galaxy-sized one.
If you're a regular person, your current life expectancy is 2 digits max. By that standard the greatest unfairness is all the people who don't make it to the hypothetical escape velocity, vs. those who make the cutoff.
It's also a strongly pro-accelerationist argument: every second, some mass-energy falls outside our reach permanently (an LLM gave me an estimate of 127,000 M☉/s, take it with the appropriate grain of salt). That's about a trillion moons per second that no one gets to use.
(I don't personally believe takeoff can be so fast that this consideration really matters, but if takeoff is slower, then being willing to give up even a small amount of consumption has enormous compounded returns even long after superintelligence is clearly present; so the X-years meme doesn't work there either).
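For what it's worth, that horizon-loss figure is easy to sanity-check with a back-of-envelope calculation. Here's a minimal sketch in Python; the horizon radius, the matter density, and the assumption that the boundary sweeps over comoving matter at roughly c are all rough round-number approximations I'm supplying, not sourced values:

```python
# Rough sanity check of the "mass-energy leaving our reach per second" figure.
# All constants are round-number assumptions; treat the result as order-of-magnitude only.
import math

C = 3.0e8                  # speed of light, m/s
LY = 9.46e15               # one light-year, m
R_HORIZON = 16e9 * LY      # cosmic event horizon radius, ~16 billion ly
RHO_MATTER = 2.7e-27       # mean matter density incl. dark matter, kg/m^3
M_SUN = 2.0e30             # solar mass, kg

# Crude model: the horizon surface sweeps past comoving matter at ~c.
volume_per_second = 4 * math.pi * R_HORIZON**2 * C   # m^3/s
mass_per_second = volume_per_second * RHO_MATTER     # kg/s

print(f"{mass_per_second / M_SUN:,.0f} solar masses per second")
# -> roughly 120,000 M_sun/s, the same ballpark as the LLM's 127,000
```

At ~2.7e7 moon-masses per solar mass, that's also consistent with the "trillion moons per second" framing, give or take a small factor.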
So the specific claims are not very serious, but the overall point is. That makes sense.
Thanks for the link.
Personally I'm confident (this iteration of) AI won't be that big a deal. That's not to say it'll fizzle out; I could see it being the next smartphone. But it won't bring us private moons or a permanent underclass.
I think it will be a big deal in the other direction, i.e. it will greatly accelerate the universal enshittification process. People will (and in fact already do) accept low-quality AI-generated images/text/etc. as the new normal; similarly, they will accept a much greater degree of unreliability in most systems (since they will become AI-controlled or AI-designed). Perhaps we won't enter a new dark age, but we will lose the kind of institutional knowledge that enables the production of great art, great writing, great filmmaking, and even great customer service, great code, great jurisprudence, etc. Of course some people will still enjoy all these things, but the average person will simply accept that people in pictures have the wrong number of fingers sometimes, and when you order chicken you'll sometimes get pork and there's no point in complaining, and maybe your car will occasionally just take a right turn for no reason, and all of that is perfectly normal -- because what are you gonna do?
I'm people and so are you; neither of us is accepting that trash. People don't have much tolerance for ordering chicken and getting pork, and the AI systems that do stuff like that will need to improve or they'll be binned.
>I'm people and so are you; neither of us is accepting that trash.
Aren't we? I'm using all kinds of enshittified systems right now, from email to Web browsers to cellphones to my car. I have health problems to which my doctor's response is basically "uwu", and I don't have 5x the money to hire an actually competent doctor. The elevator in my building trembles and makes ominous grinding noises. I have an Adobe Photoshop annual subscription. I watch Marvel movies. And yet, I keep using all that stuff and more, not because I want to, but because there's no viable alternative. Maybe you make a lot more money than I do (in fact that's probably the case), and maybe you can afford better goods and services; but if so, I think you might be in the minority. Now imagine what happens when all the good stuff costs 5x, 10x, 100x more, because low-grade AI-generated slop is so cheap...
What was the period in time where you wouldn't have had similar issues?
You don't have tons of alternatives to Marvel. You have millions of tons of alternatives. You can even get them ALL FREE ANYTIME online via unstoppable piracy. Oh, and all the old movies still exist.
In the 1950s, doctors recommended cigarettes and didn't know a million things you can just Google now. Legitimately sorry to hear your doc can't solve the issue. Unironically, have you tried the free AI in your pocket?
Elevators???
Things are way better, across the board. You are comparing your goods to theoretical perfects.
Humans love the concept of Judgement Day, and they love judging each other. Don't be surprised if most of the "underclass" is forced to pass through some final manmade filter, which judges who is good enough for utopia and who isn't. And don't be surprised if our "leaders" design the filter so that they themselves are guaranteed to pass.
> your worst-case scenario is owning a terraformed moon in one of his galaxies
Have you been convinced of the grabby alien hypothesis? I think the Dark Forest hypothesis and the "they're already here masking our perception of them" theory are each much much much more likely, because life on Earth doesn't seem to need anything uncommon. Am I missing something?
I’ve never found the Dark Forest hypothesis convincing, but struggled to articulate why. The hypothesis is that civilisations must hide, because other civilisations learning about them will view them as a threat, and then annihilate them, correct?
I think that only makes sense if continuing to hide would improve the odds when the civilisations eventually meet. But hiding would slow economic growth, and thus military research. Hiding is a dominated strategy if it means a civilisation’s growth rate is lower than its enemy’s.
Imagine if offense tech outpaces defense tech, and defense tech never catches up no matter how much research your ASI does. Hopefully that doesn't happen, but it seems like it might, because planets can be killed with just a lightspeed ashtray.
I think an advanced civilisation could defend against a near-lightspeed ashtray. It would crash into interstellar dust which would produce actual-lightspeed signals, and the defending civilisation could deflect it with lasers.
>Humans love the concept of Judgement Day, and they love judging each other. Don't be surprised if most of the "underclass" is forced to pass through some final manmade filter, which judges who is good enough for utopia and who isn't.
>The cautionary note is that everyone's income is now [in the post-AGI but human-controlled scenario] _exclusively_ set by politics and law. Today, people have the bargaining chip that they can withdraw their offer of their labor if the offer for it is too stingy. Post-AGI, in the absence of that, there is nothing to constrain the system from e.g. halving the UBI allocated to people who are insufficiently MAGA or insufficiently Woke, or insufficiently whatever the ideology of decades from now is.
> because life on Earth doesn't seem to need anything uncommon
How deeply have you looked into this?
Because from what I've looked into, there's a decent chance *simple* life, prokaryotic life, might be common, but there are a LOT of arguments that eukaryotic life might be uncommon in the universe.
So first, you need water and vulcanism just for simple life, and NOT "hot style" underwater volcanic vents, but 1000x rarer alkaline hydrothermal fields.
Then you need a paradigm shift in energetics to get to eukaryotes, because all energy exchange happens at the cell membrane, but as you get larger, your interior volume increases faster than your surface area, and this limits how big and complex you can get. But THAT requires oxygen!
Even today, prokaryotes that use respiration and oxygen are roughly 10x as energetically efficient as those that use chemosynthesis.
And to get to eukaryotes from prokaryotes is ITSELF a hugely difficult step.
The step between them is an incomprehensible gulf - giving up your cell membrane and becoming symbiotic in a long-term sustainable and net-energy-positive way (eventually leading to mitochondria and the much better energetics that allow complexity) was a big step that was seemingly never repeated in the ~4B years since prokaryotes have been around, in the sense that we don’t see any evidence of different “lines” of eukaryotes anywhere in the world - all eukaryotes go back to a singular endosymbiosis event.
But (unsurprisingly, given we haven’t empirically seen prokaryote endosymbiosis happen more than once in ~4B years) this probably didn’t work out that way. A few more things need to happen to get to eukaryotes, and from there to multicellular complex life, and those are more likely on a different path:
* There needs to be an immediate benefit to both organisms
* Some of the engulfed organism’s functions need to migrate to the host’s nuclear DNA
* Energetics need to become net favorable to the host
* A postal system with labeling and input / output gates needs to be built (TIM / TOM)
And that "postal system" is crazy hard to pull off.
* Targeting Signals: Each protein destined for the mitochondrion needs a specific "address label" or targeting sequence
* Translocase Complexes (TOM and TIM): A suite of multi-protein complexes had to evolve in the outer (TOM - Translocase of the Outer Membrane) and inner (TIM - Translocase of the Inner Membrane) mitochondrial membranes. These act as gates and channels, recognizing the protein's address label and guiding it to its correct location within the organelle
This is basically verging on the paradox of the watchmaker, as near as I can tell, because so many things had to go JUUUUSSST right for it to even make it. It’s a big jump on the evolutionary landscape, and seemingly all at once.
And this is before we get into any Rare Earth style arguments about stellar perturbations, comets, and asteroid impacts making all-biological-progress-up-to-that-point extinct, which is a whole other complexity and difficulty gradient that eukaryotes need to climb after getting to "eukaryote," which took 2B years, to then get to "intelligent life!"
All the prokaryote / eukaryote stuff is from a post I wrote about Nick Lane's book The Vital Question, if you're interested in more, in which I also do a Fermi-equation estimate using this knowledge - also, the book is great and I definitely recommend it.
You're 10000x beyond what I've looked into, thanks for the link. The only thing I can add is that if life is being spread between stars by meteors and whatnot, then each step of that Fermi equation can be done in parallel across planets. And different planets could specialize in different steps? And it would easily explain the "1 highly developed ancestor" situation. But it doesn't explain why we don't see aliens when we look up, which is why I gravitate towards either Dark Forest or Prime Directive.
It suggests a new theory about how eukaryotes came to be that suggests it was perhaps not as hard as the traditional scenario, and susceptible to a gradual, evolutionary process.
That said, Nick Lane's books -- at least all the ones I have read -- are great, and I second the recommendation.
Thanks, the article itself was a rather frustrating read, because they just reprised all the stuff I talked about in my article.
But the linked Mills paper was the real argument, and it was interesting. Basically they handwave the whole problem away - they say "look, you get oxygen with prokaryotes / cyanobacteria, and as soon as you have 2-3% of our present oxygen level, we get eukaryotes right away! So really, they're just waiting in the wings - they're doing syntrophy with hydrogen or whatever and can immediately pivot to oxygen once it's available."
Broadly, they're taking a 40 thousand foot view and saying "eh, if you have oxygen and syntrophy, no worries, you'll get eukaryotes, we basically got them right away as soon as the environment was right."
BUT this still doesn't actually help our Fermi paradox case. Oxygen is HARD! Let's take the same 40 thousand foot view as they are.
So what do we need for complex eukaryotic life? I think we all agree you need water, vulcanism, and oxygen. On that last, we STILL haven't seen any other rocky planets with an oxygen atmosphere!
The 55 Cancri e example I pointed to was actually a false positive - a lava world with a bunch of CO2 or CO, not oxygen (and its surface is 2000 degrees).
Moreover, the specific flavor of "vulcanism" is exceptionally hard. It took a few billion years for us to reach 2-3% PAL because there are oxygen sinks like iron and basalt that suck up the oxygen before it can start increasing atmospheric concentrations. But with the way a lot of vulcanism is structured (stagnant lid tectonics vs plate tectonics), we'd actually expect most planets to act like much bigger, potentially infinite oxygen sinks. Even if they got cyanobacteria photosynthesizing, it's unlikely they'd reach the atmospheric oxygenation needed to support eukaryotic life, because the basalt and iron and hydrogen in the environment is continually being generated anew in oxygen scrubbing ways. "Plate tectonics" might be necessary for life!
And although we're not fully sure (sample size of 1 in the universe that we know about so far), it looks like you might need both water AND potentially a moon-sized collision to get plate tectonics.
So now we need water, vulcanism, a moon, AND plate tectonics to get oxygen, to get complex life. I glossed over the "oxygen" requirement in my own post to allocate most of the improbability to TIM / TOM complexes and things like that, but they're arguing (and I probably agree) that there's a good chance some of that improbability needs to go to the oxygen / moon / plate tectonics stuff.
Because after all, the Fermi paradox is an observed fact about our world - despite the countless solar systems and galaxies around, we DON'T see any other intelligence, NOR any other simple life!
So there's some gobsmackingly huge reducer (the Great Filter) between us existing and the rest of the living / intelligent universe not existing, and we're basically arguing about where the various steps of the filter are better allocated here.
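To make "allocating the filter" concrete, here's a toy sketch of the bookkeeping. Every per-planet probability below is a made-up placeholder (not an estimate from my post or from Lane's book); the point is just that the product is pinned down by the empty sky, and we're arguing about how to split it across the steps:

```python
# Toy Great Filter ledger: multiply per-planet probabilities of each proposed step.
# All step probabilities are hypothetical placeholders for illustration only.
steps = {
    "water + the right vulcanism (alkaline vents)": 1e-2,
    "abiogenesis -> prokaryotes":                   1e-3,
    "moon-forming impact + plate tectonics":        1e-2,
    "atmospheric oxygenation (beating O2 sinks)":   1e-2,
    "endosymbiosis -> eukaryotes (TIM/TOM etc.)":   1e-6,
    "eukaryotes -> intelligent life":               1e-3,
}

p_per_planet = 1.0
for step, p in steps.items():
    p_per_planet *= p
    print(f"{step:46s} step={p:.0e}  cumulative={p_per_planet:.0e}")

PLANETS_PER_GALAXY = 1e11  # rough assumed count, also a placeholder
print(f"\nExpected intelligent civilizations per galaxy: "
      f"{p_per_planet * PLANETS_PER_GALAXY:.0e}")
```

Shifting improbability from TIM / TOM over to oxygen / moon / plate tectonics just moves mass between rows; the bottom line has to stay small enough to match the silence we actually observe.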
No argument. Finding a lot of exoplanets at all at least removes one candidate for the Filter, though not one that anybody put much faith in. I’d be greatly relieved to believe that it’s behind us and we’re just unique with an unlimited future.
What I found striking about the SN piece was the move from the theory that the first eukaryote happened through some failed attempt to eat another microorganism to the new theory that it was two microorganisms in symbiosis that gradually co-evolved to be more efficient by breaking down the barrier between them — no lucky accident but the kind of gradual process we see elsewhere in evolution. Maybe that’s all old news to you; I’m just a layman.
GDP growth doesn’t matter if people cannot afford goods and services. The constraint is demand, not supply. Current economic theory assumes humans are the workers earning money, which will not be the case post-AGI.
Even if UBI exists, there will be services where humans are necessary (for example, electricians, cooks, etc.) for which the cost will be much higher.
Besides, can Earth really withstand 1000 times the resource extraction? I can only assume that he assumes there will be faster-than-light space travel, which is impossible.
Anyway, this doesn’t change my original point that Scott just assumes there will definitely be some superintelligence in the future that makes this time meaningful, when AGI with no superintelligence is a more likely outcome.
Ok. Why do you feel that "AGI but no super-intelligence" is more likely?
I mean it's possible we get vast economic growth without super-intelligence. Molecular nanotech seems pretty powerful.
The human brain is clearly orders of magnitude from the various physical limits.
It looks like runaway self improvement probably starts somewhere around human level.
I think it's fairly plausible to get AGI several years before superintelligence. But what does your AGI without superintelligence scenario look like in the long term?
I don't know why you think molecular nanotech is possible with AGI but humans can't do it by themselves.
Anyway, my main argument for my skepticism is that a priori we should not think the universe has some hidden structure, especially a structure that we cannot find out but some other hidden algorithm can. The first statement follows from Occam's razor, and the second from the accuracy of experiments in physics. Fields like biology are an exception, but there are similar reasons to believe we can't 'solve' them, as in find everything it is possible to find in them using knowledgeable humans and ML.
As for the claim that intelligence can be infinite, first of all we don't know enough about intelligence to measure it properly (except by comparison). We don't know if it is scalable, and if there is a ceiling for it. And even if intelligence is infinite, if there is not enough structure in the universe that can be exploited it is still meaningless.
I think AGI with no super intelligence looks very similar to now, except most humans don't work and survive on UBI.
> I don't know why you think molecular nanotech is possible with AGI but humans can't do it by themselves.
I didn't claim that. I think that it is possible for humans to build nanotech.
> a priori we should not think the universe has some hidden structure, especially a structure that we cannot find out but some other hidden algorithm can.
Suppose I see a car. The car wasn't made by me personally. I personally don't understand the details of metallurgy. That car exploits a "hidden structure". One that I don't understand, but that other humans do.
Look at a leg. Muscle proteins. Etc. It exploits a hidden structure that I don't understand, but that evolution can exploit.
And then there are the existing ML. Say a cancer detecting neural net. There must be some hidden structure in those cancer scans that an unaided human can't exploit, but that a neural network can.
What strange priors are you using where other humans being smarter than you is allowed, and existing ML being smarter at one specific thing is allowed, but the ML being generally smarter isn't allowed?
> Fields like biology are an exception, but there are similar reasons to believe we can't 'solve' them, as in find everything it is possible to find in them using knowledgeable humans and ML.
Let's suppose that the existing laws of quantum field theory are basically correct. Or at least any new physics only involves particles existing for femtoseconds and so isn't useful. (Maybe this is true, maybe not.)
There is a huge amount of stuff that is possible under known physics. Eg nanotech.
So the minimum plausible superintelligence is that it invents nanotech in 3 days when humanity would have taken 100 years (and even then, not have made anything quite as good).
> As for the claim that intelligence can be infinite
Who made that claim? I think that claim is pretty dubious. I will claim that there are 6 orders of magnitude between neuron firing speeds and the speed of light. (And orders of magnitude inefficiencies in other way in the human brain)
> if there is not enough structure in the universe that can be exploited it is still meaningless.
What do you mean "not enough structure". We basically know that nanotech is possible. We don't have it yet.
> I think AGI with no super intelligence looks very similar to now, except most humans don't work and survive on UBI.
Not quite what I meant. So this world, it's "similar to now". Which presumably implies it's a long way from the maximum technology which we know must be possible under known physics. (Ie they don't have nanotech)
So, the AGI isn't yet good enough at R&D to invent nanotech, and/or it doesn't have the time.
If the AGI were fundamentally incapable of inventing nanotech, and humans couldn't make smarter AGI, so that we had to invent nanotech by hand, that's maybe 100 years.
Let's assume that the moment the first AGI appears, all the robots have already been built, so we can move to the robotized economy instantly. But you still have humans and/or AGI doing ASI research and nanotech research. So this state will last for maybe 5 years max, before the nanotech is invented.
Any civilization with an R&D department that doesn't yet have ALL the tech isn't a long term stable situation.
Right, nanotech is indeed something a super intelligence, if it exists, can do. Thanks for reminding me.
> Suppose I see a car. The car wasn't made by me personally. I personally don't understand the details of metallurgy. That car exploits a "hidden structure". One that I don't understand, but that other humans do.
I am distinguishing knowledge that is hidden from humanity in the sense that it was detected but cannot be explained with current knowledge/math (call it observed information) from that which has not been detected by humanity yet. The latter is what I call the hidden structure of the universe.
> What strange priors are you using where other humans being smarter than you is allowed. Existing ML being smarter at one specific thing is allowed. But the ML being generally smarter isn't allowed.
ML models are not smart per se. They are just mathematical models that aggregate information well. As for humans, I think smartness is just the ability to represent information well and manipulate it well. To the extent that differences in it exist between humans, it is just a biological artefact. Real computation is due to structure of the brain that we don't understand.
> Not quite what I meant. So this world, it's "similar to now". Which presumably implies it's a long way from the maximum technology which we know must be possible under known physics. (Ie they don't have nanotech)
If AGI is not much smarter than humans making nanotech would be slower. But subjective time of AGI would be fast, so it may indeed arrive much faster compared to humans alone.
I am skeptical of your robotics claim, though... Robots are much more expensive to mobilize en masse, and small jobs like street food sellers may not be automated. But it doesn't seem to matter much if output is high enough. As in, everyone may have their own chef robot, etc., with UBI. So other than issues of meaning, AGI would indeed make us materially better off.
> Besides, can Earth really withstand 1000 times the resource extraction?
Imagine a magical handheld tool that's as many orders of magnitude better than a smartphone, as the phone is better than a banana of similar mass.
Economic expansion doesn't imply linear increases in material inputs. Sometimes it's a matter of improving efficiency. Solar power has been growing while using far less silicon per watt, and we're not going to run out of silicon any time soon.
Just because I can imagine such a tool doesn’t mean it is possible to make it. When I asked about resource extraction, I meant non-renewable resources like metals, coal, gas, etc. Your efficiency point stands, but even if we use fewer minerals per device, if output can get arbitrarily large, it still wouldn’t be enough. But all this is beside my main point anyway.
When I say impossible, I mean the impossibility of a square also being a circle, or faster than light travel. Theoretical, not practical.
What can a device do to be more 'useful' than my phone? Perhaps it can do something, perhaps it cannot. But probabilities for each case are not certain. My intuition rests with the latter, say 70-30 odds.
> Coal and gas can *become* renewable resources.
Didn't know it before, thanks!
> Metals are already routinely recycled, and "waste" left over from previous mining often ends up reclassified into "viable ore" as techniques improve.
Right, forgot about it. Although my point may still be true in the really long term, that will be enough time to do many things.
Lots of developed-world folks won't have an immediate intuitive grasp of how useful a flint knife really is, though. Even someone who hasn't ever personally eaten a banana or wielded a smartphone will have at least seen prices in advertisements, or heard peers discussing relevant experiences.
There seems to be no convincing reason to think an entity more intelligent than the average, or even the smartest human, is impossible- i.e. superintelligence.
Maybe this is trending towards conspiracy, but something I find relevant is a lot of these memes always end up somehow benefitting existing oligarchs or power structures. "To avoid permanent poverty, you must do this thing which just so happens to make Bezos another trillion dollars". Now, I'm a capitalist, if people want to make money more power to them, but it's also relevant that those that are already powerful are more able to create and spread memes and narratives which facilitate their interests than less-powerful people. The transformative power of capitalism requires people to occasionally overturn the applecart, which means we need to be critical and careful of memes which make it harder to do that
"Ten million years from now, do you want transhuman intelligences on a Niven Ring somewhere in Dario Amodei’s supercluster to briefly focus their deific gaze on your legacy and think “Yeah, he spent the whole hinge of history making B2B SAAS products because he was afraid of ‘joining the permanent underclass’, now he has a moon 20% bigger than the rest of us?”
Yeah I absolutely do not care about what random people I will never even know exist, let alone meet, think of me or what I did eons ago. That 20% bigger moon means trillions more children, or quadrillions more years of simulated time, etc. They can have their megacathedrals to their heroes - outliving all of them by trillions of years is a much better reward.
If the Singularity is coming soon, then either everyone will be dead, or everyone will be infinitely powerful (or effectively so) in the post-scarcity simulation-space of perfect bliss. In the first case, it doesn't matter what you do now because everyone who could've cared about you, including yourself, will be dead. In the second case, it also doesn't matter what you do now, because any contributions you could've made will be diluted down to epsilon by a nearly infinite multitude of simulated uber-minds. Yes, your name will be recorded in history -- which no one will care enough to read.
On the other hand, if the Singularity is *not* coming anytime soon, and there's a chance that you can secure a better life for yourself or your children, then it seems like the expected value of taking that chance could be substantial; so you should probably take it.
“Singularity” means “AI changes stuff so much we can’t predict it”. Locking in a permanent underclass would qualify, and that’s the worry of Scott’s hypothetical target audience. Yes, if that doesn’t eventuate, they should do something else with their lives. If a singularity happens, the OP notes the two scenarios you’ve described and says they’re more likely than the scenario this article addresses.
> Locking in a permanent underclass would qualify...
Sorry, qualify for what? Do you mean "qualify as a change so large we can't predict it" -- even though you're predicting it? I might be misunderstanding you, however.
In any case, the "permanent underclass" scenario isn't specific to the Singularity. Rather, it has been the case throughout most of human history, and the present-day worries about it merely address the fears of regressing to that state of affairs -- and that's assuming that we don't live in such a world already (which many people will tell you that we do). In any case, unlike the Singularity, it's a fairly realistic scenario with lots of historical precedent and evidence behind it, so yes, I think it makes sense to worry about, at least as much as we worry about other mundane threats such as global warming, asteroid strikes, bioterrorism, etc.
By contrast, the Singularity is a bit of a motte-and-bailey, with both ends being somewhat self-defeating. The motte is indeed often presented as "a change so large we can't predict it"; but if that's all it is, then worrying about it is pointless. You might spend a lot of resources on securing your place in history or a bigger moon or whatever, but these are predictable (and to be honest fairly mundane) scenarios, and therefore by definition not something that is likely to happen in a post-Singularity world.
The bailey is usually one of the two scenarios I'd described: total annihilation of humanity, or perpetual eternal bliss for everyone. In either case though, your actions today are completely irrelevant, so doing anything at all is pointless from the post-Singularity perspective. Thus, it makes more sense to ignore the Singularity prospects completely, and focus on other scenarios where your actions might actually matter... such as securing more wealth for your children or buying a bigger moon or whatever.
Sufficiently high IQ is indistinguishable from a kind of mental illness. One becomes vulnerable to obsession with imagined scenarios, which feel highly salient because one can attach detailed arguments to them (not equally weighting the fact that one could attach detailed arguments to their opposites, or to all of a dozen significantly different variants).
Sure, but that doesn’t only apply to the OP. I think he’s responding to other people who are already obsessed with the scenario he discusses, and who further became committed to the response to that scenario which this article argues against. There are people like that - at least one person commented that they needed to hear this.
I think it's a socialization issue rather than an absolute threshold. Staying grounded often requires someone smarter than yourself (to directly poke holes in your best arguments), and/or a crowd of equally-smart people which is large and diverse enough to avoid having opinions fully synchronize (to regularly present you with equally-clever arguments for things you find viscerally repulsive, so you remember that logic isn't the same as truth).
When someone's enough of an outlier, the selection of qualified peers or mentors is limited, just due to the nature of statistical distribution - and, being outliers themselves, those few people are all subject to the same risks, potentially making them net sources of instability. But, as the overall population grows, and empirically-validated mental hygiene practices accumulate, the danger zone gradually recedes.
How is Scott such a fucking goated writer? I’ve been thinking about this for like 2 years and Scott comes out and clears all my thoughts up with 1 essay
> There’s no reason the colony ships won’t contain flash-drives of the whole 2026-era Internet, so, rather than being limited to a few prominent figures, these historians can study the generation around the Singularity almost in its entirety.
How sure are we this hasn't already happened? How sure are we it's not happening right now?
This reminds me of the second part of the story Manna.
“A number of years ago, your father purchased two shares of 4GC, Inc. in your name. These shares entitle you and one other person to come live as citizens of the Australia Project. You may leave the terrafoam system with us today if you choose to.”
But don't you need to be in the center of the events that lead to singularity? I mean - did I just give up my place in history books by leaving cloud and AI sales and becoming a guy that helps couples have better sex and people have more self reflection?
Maybe you'll get drunk one night and have an engaging yet exhausting fourteen-hour argument with some AI chatbot about the nature of self-reflection, with privacy settings toggled wrong. Key insights explicitly derived from that conversation end up integrated into the definitive open-source textbook on how to pass the Turing test without cheating, and you become the patron saint of sex therapists. Folks everywhere frequently scream your name at thematically-relevant moments of frustration, or bliss.
Why, thank you! It's a thoroughly cultivated talent of mine, extrapolating internally consistent premises like that from limited data. Usually I apply it to running RPGs. https://questden.org/wiki/JamesLeng
> She is now known as St. Veronica, patroness of laundry workers, and one out of every 2,500 girls in America is named in her honor.
I would guess this is an overestimate by at least a factor of 1000.
Compare the classic scene from American Gods:
-------
"Remember," she said to Wednesday, as they walked, "𝘐'𝘮 rich. I'm doing just peachy. Why should I help you?"
"You're one of us," he said. "You're as forgotten and as unloved and unremembered as any one of us. It's pretty clear whose side you should be on. [...]
Easter put her slim hand on the back of Wednesday's square gray hand. "I'm telling you," she said, "I'm doing 𝘧𝘪𝘯𝘦. On my festival days they still feast on eggs and rabbits, on candy and on flesh, to represent rebirth and copulation. They wear flowers in their bonnets and they give each other flowers. They do it in my name. More and more of them every year. In 𝘮𝘺 name, old wolf."
"And you wax fat and affluent on their worship and their love?" he said, dryly.
"Don't be an asshole." Suddenly she sounded very tired. She sipped her mochaccino.
"Serious question, m'dear. Certainly I would agree that millions upon millions of them give each other tokens in your name, and that they still practice all the rites of your festival, even down to hunting for hidden eggs. But how many of them know who you are? Eh? Excuse me, miss?" This to their waitress.
She said, "You need another espresso?"
"No, my dear. I was just wondering if you could solve a little argument we were having over here. My friend and I were disagreeing over what the word 'Easter' means. Would you happen to know?"
The girl stared at him as if green toads had begun to push their way between his lips. Then she said, "I don't know about any of that Christian stuff. I'm a pagan."
The woman behind the counter said "I think it's like Latin or something for 'Christ has risen' maybe."
"Really?" said Wednesday.
"Yeah, sure," said the woman. "Easter. Just like the sun rises in the east, you know."
"The risen son. Of course—a most logical supposition." The woman smiled and returned to her coffee grinder.
When singularitarians say that utopia is when everyone has their own planet, is it just shorthand to "you will have a stupendous amount of resources to use as you wish", or do they really dream of living alone (or with serfs) in a planet-sized estate?
It's a science-fiction dream which will never be reality, so why not dream big? Like the Solarians in Asimov's "The Naked Sun", who have a population of 20,000 people strictly maintained at that level on their planet, with an occupied space of 30 million square miles and a ratio of 10,000 robots per human.
Except that this version makes the Solarians look like pikers. Only 1,500 square miles per person? Pfft, that's practically slum living!
An excerpt from the novel, where the Earth man (who comes from a society where the majority live underground and in closely packed quarters) has his first encounter with Solarian living:
"He had thought of a ‘dwelling’ as something like an apartment unit, but his was nothing like it at all. He passed from room to room endlessly. Panoramic windows were shrouded closely, allowing no hint of disturbing day to enter. Lights came to life noiselessly from hidden sources as they stepped into a room and died again as quietly when they left.
‘So many rooms,’ said Baley with wonder. ‘So many. It’s like a very tiny City, Daneel.’
…It seemed strange to the Earthman. Why was it necessary to crowd so many Spacers together with him in close quarters? He said, ‘How many will be living here with me?’
Daneel said, ‘There will be myself, of course, and a number of robots.’
…And then that thought popped into nothing under the force of a second, more urgent one. He cried, ‘Robots? How many humans?'
‘None, Partner Elijah.’
They had just stepped into a room, crowded from floor to ceiling with book film. Three fixed viewers with large twenty-four-inch viewing panels set vertically were in three corners of the room. The fourth contained an animation screen.
Baley looked about in annoyance. He said, ‘Did they kick everyone out just to leave me rattling around alone in this mausoleum?’
‘It is meant only for you. A dwelling such as this for one person is customary on Solaria.’
‘Everyone lives like this?’
‘Everyone.’
‘What do they need all the rooms for?’
‘It is customary to devote a single room to a single purpose. This is the library. There is also a music-room, a gymnasium, a kitchen, a bakery, a dining-room, a machine shop, various robot-repair and testing rooms, two bedrooms...’
…‘Jehoshaphat! Who takes care of all of this?’ He swung his arms in a wide arc.
‘There are a number of household robots. They have been assigned to you and you will see to it that you are comfortable.’
‘But I don’t need all this,’ said Baley. He had the urge to sit down and refuse to budge. He wanted to see no more rooms.
‘We can remain in one room if you desire, Partner Elijah. That was visualised as a possibility from the start. Nevertheless, Solarian customs being what they are, it was considered wiser to allow this house to be built.’
‘Built?’ Baley stared. ‘You mean this was built for me? All this? Specially?’
‘A thoroughly roboticised economy...’
‘Yes, I see what you’re going to say. What will they do with the house when all this is over?’
‘I believe they will tear it down.’
…’It is just that the effort involved in building the house is, to them, very little. Nor does the waste involved in tearing it down once more seem great to them.
‘And by law. Partner Elijah, this place cannot be allowed to remain standing. It is on the estate of Hannis Gruer and there can only be one legal dwelling-place on any estate, that of the owner.
This house was built by special dispensation, for a specific purpose. It is meant to house us for a specific length of time, till our mission is completed.’ "
I don't know what I'd do with an entire planet. Although I would want my estate to be large enough to have different climates to ensure there was always a beach at the right temperature for swimming, always a mountain with good snow for skiing.
Also, you just made me want to read that book. I've read a few Asimovs but haven't encountered Olivaw between Caves of Steel and Foundation and Earth so he's always just annoyed me a bit. Maybe reading the intervening books will make him make more sense.
I do think the earlier robot books are better; the later ones used the advances in understanding of technology to be more updated, but the earlier ones (though they have the limits of the time they were written) are fresher and more original.
Elijah Bailey doesn't always get on with his robot partner, and R. Daneel Olivaw definitely does come across as the patronising nanny taking care of the childish humans (even the Spacers, not just the Earthmen). He's more likeable when Elijah takes him down a peg and gets under that skin of robot superiority.
This fantasy feels like the antisocial introvert's version of "in heaven everyone has their own mountain of gold" - an extrapolation of the immediate desire to have more real estate and fewer neighbors, to an extent at which it becomes seriously inconvenient. In such a sparse universe it'd be hard to have authentic culture, friendships and romantic relationships. Might as well lock yourself in Nozick's experience machine; that at least wouldn't be a flagrant waste of the cosmic endowment.
(Either that or the viability of FTL is taken for granted.)
If you read the revisit to Solaria in "Foundation and Earth" 20000 years later, I believe they were down to only a few thousand. They are also transhumanists at that point, don't think of themselves as human at all, and have bioengineered themselves to manage the allocation of thermodynamic energy on their estates. That's roughly my expectation for what Earth would look like in a century after development of ASI if AI alignment were "solved" but ended up simply aligned to the wishes of their controllers. The Solarians still like to have a few others around because they like having other people appreciate their expertise at old timey crafts which have a sort of bespoke value, the one they visit in the later novel has the best orchards and trades fruits.
Our fiction is so full of future dystopias - not just "non-great societies", but societies where the situation is upheld with intent - that we tend to forget how many things need to go wrong in exactly the right way to get there.
Expecting dystopia of a specific flavor by default seems like a massive failure of the imagination.
Would anyone be interested in any of the following?
Real money bets, bet matchmaking or prediction markets about different scenarios, like
- Probability of 99%+ unemployment in 2, 5, 10, 20, etc years.
- Probability of some kind of (important to specify) permanent underclass existing within N years
- Probability of UBI existing e.g. in the US within similar years.
- Probability of strong enough governance mechanisms (needs to be defined) that guard against strong and permanent power concentration.
- And more.
I have tried to look; prediction market coverage for such questions seems spotty, and even more so for real money markets. (There might not exist any; please say so if you are aware of any.)
Why would such real money markets be good to have? A couple of reasons.
Let's say these markets predict a very high chance of these (some or all) bad outcomes. That’s very important to know personally, and also to have collective knowledge of. It gives people the chance to prepare or become at peace with what's coming, or to try to choose a different path while that's still possible.
If these markets show very low probabilities for some of the bad outcomes (we can have more questions defined than what's above, to avoid having a bad market because of some technicality), that might really assuage some people's fears. More crucially: this is a hedging opportunity for those still afraid. If, let's say, there is only a 5% chance predicted of even something like a 30% unemployment rate within 10 years, then I and others might be very tempted to bet 1-to-20 in this market: a 50k usd bet will pay out 1 million usd in case the bad outcome does happen.
Importantly, some or all of these markets will have to use the 'apocalypse bet' scheme (there is a LessWrong article with this title published in 2007 that you can read if you are unfamiliar). In a regular prediction market both sides pay in upfront, and payout only happens upon resolution. However, in this case, if someone truly believes that there is only a 5% chance at 99% unemployment in 10 years, the opportunity cost of locking up 100 usd to get 105 usd in the end is unthinkable. However, if the pessimistic side immediately pays out to the optimistic side 100 usd, with a legally enforceable repayment of 2000 usd upon the pessimistic resolution, (and nothing upon the optimistic resolution) that might just work.
Why would anyone go into such conditional debt to bet on the optimistic side? The same reason we expect prediction markets to work: it can just make financial sense to bet on a probability if you have strong enough reason to expect that you are correct: it's free money for you on the table in expectation. The further the current predicted percentage is from your conviction the stronger the incentive to bet.
There is also the general objection to prediction markets that long timeline resolutions are fraught because of time value of money and opportunity costs of having to lock up money for the long time (e.g. inflation and lost ROI that you could have had otherwise). I agree with this.
Regular prediction markets could solve this possibly by saying that both sides actually bet using some appreciating asset, like an S&P-500 tracking index, so payout is also pegged to that. To my knowledge this innovation is not actualized anywhere yet. Or is it?
A 'doomsday market' could also use the exact same mechanism: initial transfer is in usd, but the repayment is expected in some pre-agreed type and quantity of a security or then-current market value thereof.
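To make the asymmetric payoff structure concrete, here's a minimal sketch of the expected values on both sides. All the numbers are hypothetical placeholders (including the 7% return assumption), not a proposed contract:

```python
# Sketch of the "apocalypse bet" described above: the pessimist pays the optimist
# upfront, and is repaid a multiple only if the bad outcome resolves true.
# Ignores default risk and any discounting beyond growth of the invested capital.

def apocalypse_bet_ev(p_doom: float, upfront: float, repay_multiple: float,
                      annual_return: float, years: int) -> tuple[float, float]:
    repayment = upfront * repay_multiple
    # Pessimist: pays `upfront` now, receives `repayment` with probability p_doom.
    pessimist_ev = -upfront + p_doom * repayment
    # Optimist: invests the transfer, repays only if the bad outcome happens.
    grown = upfront * (1 + annual_return) ** years
    optimist_ev = grown - p_doom * repayment
    return pessimist_ev, optimist_ev

# The example from the thread: 5% chance, 50k upfront, 20x repayment, 10 years.
p_ev, o_ev = apocalypse_bet_ev(p_doom=0.05, upfront=50_000,
                               repay_multiple=20, annual_return=0.07, years=10)
print(f"pessimist EV: {p_ev:+,.0f} usd, optimist EV: {o_ev:+,.0f} usd")
```

Note that at exactly the market-implied probability the pessimist breaks even only in nominal terms, which is essentially Robin Hanson's "these bets just recover interest rates" objection mentioned in the p.s. below; the bet only makes sense for a pessimist whose credence is well above the market's, or when the repayment is pegged to an appreciating security as suggested above.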
Apart from grabbing free money on the table in expectation (if they are correct, they can just keep it, no repayment will happen), why else could it make sense for the optimistic side to engage? Multiple reasons:
- If money ends up abundant in the future then it might be trivial for the optimist to pay when needed.
- They might expect to make more use of the capital now, at the hinge of history, than at any other point in time: they can have more leverage to steer, so this can be an efficient transfer mechanism from those who believe they can’t to those who believe they can.
- They expect to have better returns than what the underlying security and the payback multiplier will command: e.g. they just invest it all in NVIDIA and will be laughing all the way to the bank both ways even if they have to pay it back and more in S&P-500 later.
- Altruistic drive: they strongly enough believe in the goodness of humans, so they think apart from everyone dying (which this kind of betting fully discounts anyway) almost all other futures will be very good, so they can gift this warm reassurance to other humans as well by taking their money now and providing a legally binding guarantee to them that should the bad outcomes happen, they have their backs. A form of pre-commitment to an altruistic insurance scheme.
- Altruistic drive squared: If afraid people have strong enough guarantee that they are protected in some situations that might be otherwise bad for them via some scheme like this, that very likely frees up bandwidth for them that they can redirect to some other end, e.g. working on technical alignment or governance otherwise.
Why not just invest in the S&P-500 and other securities directly? I think one should! But that's a very roundabout bet compared to the outcomes we actually care about, and it's entangled with lots of other things. I myself would endorse a diversified portfolio that includes such bets as well.
So does any of this sound interesting to any of you?
- If something like the above existed, would you want to see what the predicted probabilities are?
- If the probabilities are skewed strongly enough in some direction, would you want to enter a bet for one of the motivations listed above, e.g. hedging?
- If such markets do not exist and no one will create them, would you be interested in entering such one-off contracts with regular people nonetheless? I'm serious enough about this that if we can hammer out some details (which is mostly just coming up with good questions and criteria that we can also publish) and the wording of a good-enough legally binding contract, I would be interested in entering such contracts with some of you. Let me know below if you are interested, whether you are optimistic or pessimistic, and any important conditions you may have.
- Maybe such markets do exist and we just need to find them and inject liquidity? Maybe in the crypto space?
- Or maybe the platform exists, and the questions just need to be written, published, and popularized? PredictIt, for example, could potentially be very good, but as far as I understand it's not easy to get questions listed there.
- If there is strong enough interest and no close-enough prior art, then creating a platform like this might be quite good and quite important. Let me know if this might interest you too; I might be motivated enough to build it if there is enough interest.
p.s. Robin Hanson makes an important comment about such asymmetric bets: "I'm afraid all the bets like this will just recover interest rates." While I think that applies to Eliezer's article as written, I believe what I describe above avoids that issue; let me know what you think.
p.p.s. Before I or anyone creates such a pessimist-vs-optimist market, I'd strongly hope we can discuss the potential feedback loops it might start: e.g. if it predicts very bleak outcomes, and everyone knows that everyone knows that bleakness is to be expected, will that help or hinder in expectation? Right now I think it will help, because steering earlier is easier than steering later, but I'm very open to other viewpoints.
FWIW I think these are all pretty easy questions to resolve:
> Probability of 99%+ unemployment in 2, 5, 10, 20, etc years.
Of course this depends on what you mean by "unemployment", but I'd put it somewhere around epsilon. Historically, new technologies often do lead to unemployment, but I can't think of any period in history when unemployment rose to 99%; not even in war-torn failed states (granted, you might be paid in bottles of vodka and shotgun shells rather than official currency, but you are still employed).
> Probability of some kind of (important to specify) permanent underclass existing within N years
Yes, it's important to specify, because a permanent underclass already exists in many places on Earth and in fact has always existed; some people would argue that it currently exists even in developed countries such as the US.
> Probability of UBI existing e.g. in the US within similar years.
AFAIK some countries (and perhaps even US states?) are already pioneering UBI on a test-case basis, so again, no bet.
> Probability of strong enough governance mechanisms (needs to be defined) that guard against strong and permanent power concentration.
Once again, some would argue that such mechanisms have been in play for centuries; perhaps their time in the US is coming to an end, but they are still somewhat functional in e.g. Europe. Of course, some other places are effectively permanent dictatorships...
Are you saying that within 20 years (and we can try to nail down the specifics) you'd put less than 1% probability on 99% unemployment (drawing, as I understand it, from historical base rates)? Maybe even less than 0.1%?
Does that mean that you might be happy to make a bet similar to this (maybe in a similar form that I describe) at 1:100 odds?
Even less than that; for reference, during the Great Depression US unemployment rose to an unprecedented peak of roughly 25%. However:
> Does that mean that you might be happy to make a bet similar to this...
Firstly, that depends on how you define "unemployment" (e.g. if I work one day out of the year, am I "employed"? Are we sampling e.g. the continental US population, or some remote mountainous region with exactly one resident?). Normally I wouldn't be so pedantic, but if money is involved, then it really pays to nail down all your definitions.
Secondly though, I'm pretty old, so the chances of me being alive in 20 years are sadly much lower than the odds of the bet...
Chimpanzee unemployment is 100%. In an AGI/ASI world, humans are basically chimps; you would never use one for productive activity. Nor is there a UBI for chimps. Why would there be? You don't actually need their consent or cooperation to do anything, so it's not necessary to spend any resources satisfying their desires.
In the AGI/ASI world if you're still alive you're probably akin to a chimp on a nature preserve. Maybe one chimp is clever and trades bananas for companionship with a hot lady chimp, and we would probably be doing the equivalent of that and you might squint and call it employment, but it won't be part of any meaningful chain of production.
> Chimpanzee unemployment is 100%. In an AGI/ASI world, humans are basically chimps, you would never use one for productive activity.
I absolutely agree that in a world where quasi-godlike nearly-omnipotent entities actually exist, and are able to usher in post-scarcity powered by virtually unlimited resources, human work would be pointless. I also believe that the probability of such a world coming to pass is epsilon -- not just in the next 20 years, but most likely ever.
You don't quite need that, you just need the "island of geniuses in a data center" to be sufficiently better that spinning one up to do the task is always indisputably better than using a human. I would be stunned if it didn't get to that level within a decade, even if for some currently unknown reason intelligence caps out at some point short of weird nearly-magic alien minds.
From the time they're notably better than humans, humans are mostly unable to be productive, the redirection of economic production away from consumer goods is inevitable, and from there you'll get to the point where we're not participants in the economy anymore. Some people suggest Ricardo's theory of comparative advantage implies lower-productivity humans would still wind up performing certain tasks, but that fails for multiple reasons. For one thing, there is no real limit to the number of AI instances you could spin up other than energy; unlike two nations of different tech levels trading, AIs are infinitely replicable and summonable on command. For another, productive tasks themselves will likely use new tech that only the AIs can operate and that exceeds the capabilities of humans. A chimp can pick bananas, but we still don't use chimps to do so, because they can't use any of the tools we employ to pick bananas at scale; your local productivity would have to be practically nil before that became a competitive use case.
I actually gave some thought to the notion and I'm pretty sure I'd rather go unremembered (which, yes, invites the easy snark that it's easy to achieve). I find the idea of living on as a faint memetic echo in other people's mindslop vaguely repellent. Not sure Veronica would have been that enthused to learn that her legacy would be to be idolized by a weird and frankly heterodox offshoot of what was presumably her sincerely held religion.
"I find the idea of living on as a faint memetic echo in other people's mindslop vaguely repellent." I'm glad you're here with this perspective.
I can't tell what the sincere position is behind this post of Scott's, which makes it hard to know how to take it in. If it's farce all the way down, okay. But if there's something here intended to be conveyed, what is it? Is it: "stop panicking, you'll be fine, so give yourself permission to do something useful"? But where "useful" is being pitched as "remembered in a high-status way"?
I see around me lots of young people (I'm an old fart) who are quite worried about their futures and their fears in the face of so much uncertainty seem understandable to me. But I guess he's talking to a very small elite slice of that group that is well off but wants to be crazy rich? It rings hollow to me.
> The “permanent underclass” meme isn’t being spread by poor people - who are already part of the underclass, and generally not worrying too much about its permanence.
It seems far from obvious to me that poor people who know about the issue are generally not worried about it. The way I see it, pro-capitalist poor people have faith in the economic system's ability to elevate people with merit out of the lower class, while anti-capitalist poor people dream of eliminating class distinctions entirely and oppose any developments that make this harder. When I think of poor people who are fine with themselves and their families being part of a permanent underclass, all I can think of is monarchists and very pious religious people.
It'd be interesting to see a survey exploring this.
> When I think of poor people who are fine with themselves and their families being part of a permanent underclass,
Until recently, the "underclass" was often half-starved. Even today, the material conditions of the underclasses are often not great. The idea of a social underclass who are "poorer" but still have more money than they can reasonably spend is something that only makes sense in the context of post-singularity economics.
Scott, I love you, but this is one of those pieces where I have to ask "Do you actually know any poor people?" Not just "Oh yeah, one of my patients one time when I was toiling in the Midwest before I could escape was poor" but "yes, I have a family member/friend/someone I interact with more than 'cashier at grocery store' who is poor". People for whom a 50 cent increase on the price of goods does affect what they will eat and what they will purchase in the grocery store ('well, looks like chicken legs are off the menu, how much is mince?')
I guess it is correct that this piece is aimed towards "neurotic well-off people in Silicon Valley" since they are the only ones in a position to benefit from the likes of:
"Even if you end up there, you’ll be fine. Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies."
I had to look up who this guy was, and while that's very nice of him to pledge 10% of his wealth, who is going to get that 10%? Hey Dario, I am happy to give you details of my bank account if you want to pay in $100 per month out of your spare cash to me.
That's not the kind of philanthropy they mean, though, is it? Not giving to an actual real person, but rather "set up a trust which will administer a foundation where people can apply for grants to get their start-up running" philanthropy.
Because I don't believe in the 'the super-wealthy will own galaxies, and you too can be part of that' future. This is what real-world wealth does in our current world, when taxes start biting. I'm sure Jeff Bezos has plenty of philanthropy associations and charitable wotsits, but when the rubber hit the road about taking a chunk out of his vast profits, that was a different matter.
One man was able to affect the budget of an entire state just by moving house.
"Jeff Bezos is so rich that when he moved from Seattle to Miami, it shook Washington’s entire budget; now, the Evergreen State has $1 billion less to spend on K–12 education and childcare, all because of a single address update
According to the WSJ, in November 2023, Bezos announced that he was leaving Seattle after nearly 30 years and relocating to Miami. Publicly, he framed the move around family and logistics. His parents had returned to Florida, and his space company’s operations were increasingly concentrated there. Quietly, the timing lined up with Washington state finishing its legal battle over a new and controversial 7 percent capital gains tax. The law survived a Supreme Court challenge in March 2023. Bezos’s departure came months later, and the financial consequences became visible almost immediately.
Washington’s capital gains tax is unusual by American standards. The state does not tax wage or salary income, but since 2022, it has imposed a 7% tax on long-term capital gains above roughly $262,000 from assets such as stocks and bonds. Real estate sales are exempt. Retirement accounts are exempt. The tax was explicitly designed to fall on a small number of very wealthy residents rather than the broader population.
...In its first year, the tax appeared to work exactly as intended. Collections came in between $786 million and $890 million, beating forecasts. Fewer than 4,000 taxpayers paid it. More than half of the revenue came from just ten individuals, most of them in the Seattle area. That concentration was always part of the design, but it also created an obvious vulnerability.
The second year exposed it. Receipts fell to roughly $430 million as wealthy taxpayers adapted. Gains were deferred. Sales were structured differently. Some taxpayers simply changed where they lived. That is where Bezos looms over the entire experiment."
This is what the super-abundance thanks to AGI future will look like. Them that has, gits. The ultra-wealthy act to protect their wealth, and the rest of us can only hope for crumbs to fall from their tables.
And I don't believe in "yeah, but the super-future means that the tables will be so loaded, even the crumbs will be plenitude". I know that even if all of Bezos' wealth was divided up, it would amount to just over $700 per person in the USA, and that would be a once-off payment; once all the money was spent, the poor/underclass would still have to make a living for the rest of their lives. Split it up among the world's eight billion and we're talking about $30 each.
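For what it's worth, the back-of-envelope arithmetic holds up (the net-worth figure is an assumption and swings with Amazon's share price; populations are rounded):

```python
# Dividing one mega-fortune across populations, once.
bezos_net_worth = 240e9    # USD, assumed
us_population = 335e6
world_population = 8e9

print(round(bezos_net_worth / us_population))     # 716: just over $700 per US resident
print(round(bezos_net_worth / world_population))  # 30: about $30 per person worldwide
```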
So we are in the ridiculous situation where taking the wealth of the super-wealthy won't, in fact, lift anyone out of poverty but if they get to keep it, they can hire out Venice for their second marriage, run their own private space ship programme, or affect the entire education budget of a whole state by fucking off to a lower-tax state.
That's how money works. That's how wealth will continue to work in the future. But I'm sure Jeff will be happy to buy tickets to the Met Gala because him and Mrs Jeff II are so devoted to charidee.
> This is what the super-abundance thanks to AGI future will look like. Them that has, gits. The ultra-wealthy act to protect their wealth, and the rest of us can only hope for crumbs to fall from their tables.
> And I don't believe in "yeah, but the super-future means that the tables will be so loaded, even the crumbs will be plenitude".
> I know that even if all of Bezos' wealth was divided up, it would amount to just over $700 per person in the USA, and that would be a once-off payment; once all the money was spent, the poor/underclass would still have to make a living for the rest of their lives.
In the current world, the super rich don't collectively own that large a share of the wealth. Sure, they have a lot per person, but it's <10% of the overall wealth in the world. (Or something, depending on details.)
And one state's education budget, or all the hotels in Venice, or a rocket program, or whatever the super rich are up to, is a fairly modest fraction of humanity's total wealth, so that checks out.
Scott is talking about a hypothetical future where the super rich are MUCH MUCH richer. So rich that one of them sharing out their wealth evenly is more than enough to make everyone rich-by-modern-standards. A future where the super rich own 99.9% of the stuff, but that remaining 0.1% is still enough for everyone else in the world to get a private jet.
Suppose some really nice person discovers one billion tons of gold in a shallow deposit and decides to share it equally among the world's population. Suddenly everyone gets several million dollars of gold, at today's prices.
Except they don't. Since gold is now so abundant, there's no longer any need to pay a premium for it. The price of gold collapses, and everybody is back to where they were before, except they have all the gold jewelry they can possibly use.
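Rough numbers behind the thought experiment (all inputs approximate; the gold price and the above-ground stock figure are assumptions):

```python
# Per-person share of the windfall, valued at pre-collapse prices.
gold_found_tonnes = 1e9
world_population = 8e9
price_per_gram_usd = 85.0  # assumed pre-collapse gold price

grams_each = gold_found_tonnes * 1e6 / world_population  # 1 tonne = 1e6 g
print(round(grams_each))                       # 125000 g (~125 kg) per person
print(round(grams_each * price_per_gram_usd))  # ~10.6 million USD each, at *old* prices
# Above-ground gold stock is on the order of 2e5 tonnes, so this find is a
# ~5000x supply shock; the old price could not possibly survive it.
```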
Why wouldn't massive disbursements of fiat currency or cryptocurrency have the same effect, but without the jewelry side benefit? We saw this on a tiny but politically salient scale post-COVID, when Biden's stimulus helped fuel inflation.
Under such a scenario, all those whose initial wealth is much less than the disbursement amount effectively find their wealth equalized at some small value. A clear benefit to the poorest, a clear disbenefit to the middle class.
It’s not about the money, it’s about the stuff. Without endorsing the claim, the claim is that there will be so much stuff (energy, intelligence, and ability to combine them cheaply with raw materials to make anything) that there will be plenty to go around. You are thinking about sharing money, but printing more money just leads to inflation. Real growth means there is more stuff.
Where would the stuff be coming from, realistically speaking? Are we talking about converting the Solar System to computronium, or "merely" about building affordable housing on the ocean floor, or what? And if it's the computronium, then what is preventing humanity from growing exponentially to fill all the available [virtual] space, just as we've always done?
Stuff. Like food? We have more than enough food to go around, yet people still starve. More food will probably help a bit.
Clean water? Granted, great opportunities there; with some environmental tradeoffs.
Clothing? We have more than enough clothing to go around too.
Health care? Clear potential benefits (or disbenefits, if extinction is an option), but not really in the "stuff" category.
Weapons and warfare? Whoops, that's a category where we want less stuff, not more.
Those are the five big items affecting personal well-being.
What else would be beneficial to happiness? Financial security? Well, that's about the money. Good friends and family? That's neither money nor stuff. Love? The sign of the correlation between love and technology is still unclear.
There are two more items I think are worth mentioning: entertainment and personal fulfillment. So far, it seems that entertainment is the biggest AI-based growth area. Conversely, personal fulfillment is the most challenging. Lots of standard goals in life (providing for family, making the world a better place) theoretically go away. Perhaps the biggest contribution of technology to personal fulfillment is YouTube's how-to videos. But those become obsolete when AI knows better than you ever can.
At best, it seems that infinite stuff only implies marginal improvement in typical happiness.
The obvious lesson is not to tax wealth but instead the unimproved value of land. And none of this shows that Bezos wouldn't donate to charity (as opposed to paying taxes).
It is morning, and so the 144,689th reincarnation of the One Who Cartoons Hate is brought in chains to the Writing Temple.
Bound on an ornate throne of blended ivories from across the stars, she has her facial twitches transformed into hot takes.
Bold, red words begin to appear in the swirling word mists of the grey mirrors that ring the vast room, dominating the Maelstrom of the Discourse that flowered within them moments before. The gathered Subordinate Wordstackers pause and read:
The Loneliness Crisis on Personal Moons Is a Revealed Preference:
Our distant ancestors used to live communally, packed into low-rise suburbs; maybe we all just like a little distance from each other?
"Incredible", says one, "the title alone makes you want to leave the personal moon and live in a real community"
"People say that", says the older one, "but then the guy one mountain over has his clankers build a monument to his dog and ruins your eyeline"
Wtf? Has Scott gone techno-millennialist now? Or did the joke fly over my head?
Edit: and what is all this stuff about being motivated by what future people might think of us? Why on earth would you want the approval of people who don't even exist?
This was about future people's opinion of the present, not past people.
With ancestors it's even worse, because we know something about their culture. I'm sure a whole bunch of my ancestors would despise modern standard liberal ways of living, simply because they fall pretty far from the Overton window of their times.
Yes, they would, but perhaps that invites certain questions about the sustainability of liberal lifestyles. And for all their rough edges, they also made enormous sacrifices to ensure we could be here.
Ehhhh fine enough to be grateful for our ancestors, but Scott was talking the other way around, that we should worry how our *successors* would see us.
Either way, I don't see much point in worrying about how people in the distant past or future would judge us, because I don't particularly think that wisdom increases or decreases in any kind of steady way.
Only the things that transcend the individual can transcend death. If you're committed to the position that only your sole individual life and immediate sensations matter then I guess I can't argue you out of that. But I don't think it's irrational or even all that weird to be concerned with the wider continuity and flourishing of the species, and whether we'd be remembered as contributing to that, in what limited capacity we can.
You keep mistaking my skepticism for egoism. I'm making the narrow point that the framing "what would our ancestors think of us" puts authority in the past, as if they knew better. Scott's framing "what will future generations think of us" puts authority in the future, as if they will necessarily know better. (Obviously, both can't be true at the same time.) I'm all for flourishing, but as far as I can observe, each time gets to define what flourishing means to them.
"Now you can stop worrying about the permanent underclass and focus on more important things."
Probably the most cold-blooded take on "the poor you will have with you always" that I've seen in quite a while. 'Yeah, ignore the less well-off, they'll always be scraping by. You're a software engineer, you are high value human capital, this is your chance to become famous/rich!'.
Gee, I wonder why EAs got a rep for caring about those safely thousands of miles away and ignoring the suffering on their own doorstep? Well, I guess that person in a minimum wage manual labour job is just not as *cute* as the liddle shrimpies, let's worry about humane treatment for the shrimpies and to hell with the underclass!
Because being rich is good? Because people who say things like "ignore the here-and-now in favor of a place in this insane future sci-fi fantasy" are typically cult leaders.
The entire post, if I'm being maximally snarly, is "fuck the people who can't afford to sink a ton of money into investments, they'll always be poor losers". It's "YOU'RE okay, you're the smart people who are currently making a ton of money and are worrying about being only a little rich, but in future we'll all be so rich that it won't matter (except, of course, for the poor losers who couldn't scrape together a million in savings and investments but like we said, fuck them)".
It's "so if you can't brag about your money, what can you brag about? here's a few things to try".
It damn well is not caring about the people who, if their car breaks down, have their lives ruined because that means they can't get to work; can't get to work means lose their job; lose their job means no easy new job to slide into; no new job means no money means ending up on the street because they don't have the savings to absorb loss of income because they can't afford to fix the car in the first place because they don't have that money.
Those people are going to be the permanent underclass that nobody needs to worry about, as they are too busy worrying about "but how can I inscribe my name in the history books?"
I think your issue is with what the post *is*. It's not meant to be targeted at the actually existing underclass; it's targeted at rich people. That doesn't mean any of it is wrong or not useful. Someone else is probably writing a post about how the current actual underclass is going to be a permanent one in the post-singularity future, and about how all of the neurotic Silicon Valley people worried about becoming part of the permanent underclass won't be part of it; you'd probably like that post, despite it making the same exact factual claims, because it's aimed at people who aren't neurotic Silicon Valley people.
This interpretation assumes a model in which the only way to benefit from the wealth gained in a post-singularity scenario is present-day investment. I don't see anything in the article to imply that this is Scott's working model. Indeed, the sentence directly before the one you quoted seems to imply that it is not:
"Dario Amodei has taken the Giving What We Can Pledge to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies. Now you can stop worrying about the permanent underclass and focus on more important things."
One would assume the "less fortunate" Amodei is providing for in this scenario include the class of people you're concerned about. So, if this is Scott's proposed reason that we "can stop worrying about the permanent underclass", then it seems everybody is being told to stop worrying and nobody is being told to get fucked.
EA doesn't even care about them. Much like all virtue peddlers, they care only about the social status they gain by publicly pretending to care. Which is precisely why personal virtues need to remain personal. As soon as they become public they become subject to Moloch.
I know for a fact that a month after I die, nobody will remember I ever existed. And I'm fine with that. I'm simply not an interesting person in any way.
There’s an interesting possibility that I haven’t noticed people posting about: We merge with AI. I now use it a lot, not for personal support and affection, and not for executing my ideas, but for information, help doing things and help figuring things out. GPT5.2 is to my mind what Sigourny Weaver’s power loader suit was to her body in Aliens 2. Others are partially merging with it more the way one does with a beloved person, others the way some do with therapists, priests and teachers. It won’t be long before people can have intense sexual experiences with AI via some combo of AI role-playing, AI-generated video and AI-controlled body stimulators.
And there’s another factor operating to make a merge likelier: Seems to me that connecting AI with the deep brain processes of a smart mammal is the most promising way to overcome many of the deficiencies that limit current AI. Right now it can’t learn world the way it learned language. It needs senses and locomotion for that, plus processing innards set up in advance to organize that body of experience and integrate it with language. And how much can you understand if you aren’t deeply familiar with the physical world? And AI can’t remember chats, can’t ruminate, can’t learn from experience, has no self-generated preferences or goals. All those things are as easy as falling off a log for us, and I’m skeptical of the idea that we can just build the capacity to do these things out of especially clever electric circuits. Sure, our own capacity to do them runs on circuits, but we know relatively little about how that happens. I’m guessing it would be easier find a way to link AI to the brain activity of a person and have it be shaped by that than to build deep, integrating parts of the human brain from scratch with the materials and methods we have now.
And a future where we merge with AI is one regarding which the questions of what will it do for us or to us don’t arise. As for the corporations that developed the tech — that’s the end of their owning AI and accumulating power that way.
Of course, there will then be a whole new set of terrible ways things can play out.
I would hesitantly disagree about AIs needing personal experience to understand the physical world. Video-generating AIs like Sora wouldn't be able to produce output that looks realistic to humans unless they had some sort of approximate understanding of the laws of motion. And experiments like Genie and SIMA show that AIs can understand, create, and navigate virtual 3D environments.
I hope this is satire? Otherwise it's a totally vapid analysis of what I believe are legitimate concerns about social and economic stratification. It relies on relativism to diminish the concerns of the present on the basis of the distant future, the same kind of thinking that leads one to devalue one's life because the universe will one day end. We cannot bank on Dario's pledge to donate 10% of his wealth to secure our own present and future.
"Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies."
His generosity is not pledged exclusively to humans alive now. Obviously he's not going to subsidize moon owning oligarchs while the majority of the galaxy starves...
Many people envisioned a post singularity world as a continuation of our world. Humans competing for power, money and territories. Business as usual with greater numbers. People that used to control the earth now controlling galaxies. Poors that used to possess a backpack now possessing a moon. I dunno. I expect something alien. I expect the end of humanity as we know it.
"Even if you end up there, you’ll be fine. Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies."
I expect Dario can give much more than that. Actually, isn't 10% merely the standard tithe expected of the ordinary lower-class human?
Aren't EAs supposed to give it all away after their successful careers? Not just 10%.
The Lindy answer is that great conquerors grant part of their new empires to subordinates, who then put the resources of that territory into use for the original conqueror/owner. It’s fractal delegation of power with allegiances filtering up to God-King Amodei.
Neo-feudalism might not work if Dario is able to perfectly manage all his share of the lightcone with just his ASIs, but if I were a betting man I’d still stick with the feudal approach.
The permanent underclass in a post-scarcity world just dies, because it has nothing of value to trade for goods, and therefore what's being manufactured in abundance is no longer what it needs.
There's a Yud post on LW, iirc, about "why wouldn't the ASI leave us just a little bit of sunlight", which uses current mega-rich people as an example; but you could see either of those two outcomes (a super-productive economy run by and for the AI, or a super-productive economy run by AI for the upper-caste tech bros) and it would amount to the same thing for the rest of us. Maybe the ASI keeps a few humans around for unknown reasons; maybe our new transhuman tech bro overlords keep a few humans around as slaves or pets; or maybe they take a cue from God and keep them around relatively free, so that their appreciation of the overlords' glorious works will be genuine and value-adding. In any of those scenarios, you are still just not of much value, and even the pittance it takes to sustain you will in all likelihood be devoted instead to resource extraction and energy production. ASI isn't literally magic; nobody is going to outer space; it'll just build the Dyson Sphere and call it complete as Earth freezes to ice.
I'm not trying to "escape the permanent underclass" primarily because I think that is impossible for me and most people, as the winners (if there are any at all) will be a vanishingly small group, no more than a few thousand people out of 8 billion. The amount of money and influence and connections it would take would require a runner-runner-runner sequence of liquidating most of what I have, making a high-risk high-reward bet with it, then taking what I had from that (which still would be nowhere near enough) and going to California or DC and somehow making myself indispensable to a person who will matter, despite not being able at present to do much of anything that those people would value. And all of my value is mental, so there's a ticking clock until AI would crowd out any value I could conceivably generate, a much faster clock if one is trying to cozy up to the people with access to the bleeding-edge AI models.
It's just not gonna happen. The only reasonable thing to do is attempt to enjoy your life, do what you can to sabotage or delay AI development if that's within your power, and if it's not then you've already lost. Can't recommend Scott's approach exactly, because anyone/anything looking backwards to before some singularity would be unable to be certain about anything that occurred before it, that's what a singularity is. You aren't gonna exist for posterity, you're going to exist for the next 10-20 years maybe to enjoy yourself and make the people around you that you care about happy. Don't send charity to the 3rd world, it's not gonna have time to pay off anyhow, if you have extra money spend it enabling the aspirations of people you know and love while there's still a chance for those dreams to be meaningful.
Yud’s theory that ASI will be such a neurotic maximizer that it won’t donate 1% of its resources to the rest of the world is totally unjustified, and comes from his obsession with old-fashioned/20th century symbolic AI.
We can’t conclude that ASI will definitely be generous, but neither are we justified in concluding that it’ll automatically wipe out humans.
> In the industrial revolution, old systems of wealth and power became mostly irrelevant. New source of power => new elite.
Yes, but the old elite were able to leverage their accumulated wealth to make the transition. The British peerage are no longer the absolute richest people in the country, for instance, but they're nonetheless still *very rich*, and the gentry is still securely upper-middle. It was the yeomanry that really got forced downwards.
Which is why I said yeomanry, and not peasantry: these are freeholders with enough land to consistently produce a meaningful surplus, and thus substantially better off than both rural tenants and industrial laborers.
I have so many thoughts on this, as anyone who frequents the discord knows full well, so I'll try to keep my comments pretty constrained lest I write a 50,000 word essay.
You wrote a really good piece a few years ago, in favor of keeping science fiction in the future, about how we shouldn't let our debates on singularity and AI alignment devolve into cringe political debates about the present day, because the real singularity will be stranger than anything we could ever imagine. I loved it.
How do you square those sentiments with this essay? I think this is a really important question for you and a lot of AI thinkers. Not to be terribly cringe about it myself, but if we ARE at some cosmic hinge point, something that is dispositive of the fate of the light-cone and myriad galaxies one trillion years hence, it seems to matter A GREAT DEAL whether, say, Elon Musk or the CCP or a commune of Woke Folks gets the AI first. It seems like the really dumb politics of the now REALLY WILL extend into eternity.
Jesus Christ, if you believe the hype, was the only son of God and the most important human being who will ever live. His acts delivered numberless souls from eternal torment, but he was nevertheless compelled to weigh in on Roman taxation, adultery, and the propriety of divorce, which I imagine were fairly explosive but also fairly immediate culture war debates of the day. They've now become immortalized.
God himself, manifest in human form, once said to pay taxes to Caesar. It's very likely the machine god or its authors will have to drop takes just as cringe.
First the superintelligence that will kill us all, then the permanent underclass, now your last chance at sainthood... it's like the AI leaders are trying to provoke a revolution against themselves before it's too late. (Was this Eliezer's plan all along?)
Yeah, I'm not sure the tiny shoreline between annihilation and utopia is all *that* small. So long as baseline humans are alive there will probably be an underclass of some description, even if GDP goes up to some degree.
I'd be happy enough with global first-world living standards, better architecture, and non-dysgenic replacement fertility, to be honest. Although I'm keenly aware HBD makes this unlikely any time soon.
If the first few years of AI has taught us anything, I think it's that AI looks very much like a commodity. There don't appear to be any substantial moats - anything one AI company does seems easily replicable by the others. The only secret sauce is compute and data and those are both freely-available commodities. As a prescient prognosticator said many years ago, the performance of AI machines tends to improve at the same pace that AI researchers get access to faster hardware. I have a hard time seeing the path to one company, or group of companies, being able to extract trillion-dollar rents. The economic model seems very similar to pharmaceuticals: high discovery cost, zero marginal cost. Pharma profitability is completely dependent on the framework of IP law and AI companies can't benefit from that moat because there's no way to prevent another company from recreating a similar product.
This is good for x-risk fears because superweapons are only bad when only one side has them. When everyone has them then that just leads to a new equilibrium of "my superweapon protects my rights from being encroached upon by your superweapon."
You convinced me; my idea for improving the labor efficiency of medium-density residential construction is on my blog. I was going back and forth on whether or not I should see if I could get a patent.
Short version, I think you could increase the labor efficiency of steel framing by a factor of ten with this, partly by streamlining construction and partly by replacing skilled with unskilled labor. It constrains what you can build.
It's kind of an obvious idea, so I have no idea why nobody was doing it already; my best guess would be the technology to mass-manufacture something like this wasn't available when the norms of steel construction were being set, and it would have been way more expensive to do it this way. Today I think this would be the much cheaper option.
> Short version, I think you could increase the labor efficiency of steel framing by a factor of ten with this
I read your post, but I couldn't quite understand the fastening mechanism for the horizontal steel beams. You say to clamp them to the plate, thus saving you the time of welding or drilling / bolting, but I don't think affixing a clamp would be all that much faster than welding? Welding is actually pretty fast. And clamping suffers some obvious potential failure modes welding doesn't. And would it be to code? Most fastening methods are specified in codes. And what would the cost of a to-code clamp be that covers all those failure modes?
And the heavy steel framing part of putting up a building is maybe 10% of the overall construction time, so even if you really did bring it down to 1%, it seems like a hard sell for a one-time buff, given that your people need to iterate into and generate the tacit knowledge of a new technique to get good and efficient, unless you were a full-time heavy-steel developer planning to amortize that learning cost over a lot of heavy-steel buildings.
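To put numbers on that: a tenfold speedup on a 10% slice of the schedule only saves about 9% of total time, Amdahl's-law style (the 10% share is the estimate above; the rest is my framing):

```python
# Fraction of the original total time remaining after speeding up one part.
def overall_time(share_improved: float, speedup: float) -> float:
    return (1 - share_improved) + share_improved / speedup

print(overall_time(0.10, 10))  # 0.91 -> only ~9% of total time saved
```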
There are code-compliant/approved steel clamps, yep. This particular device, since it apparently doesn't exist, isn't approved, so that's an issue that would need to be navigated - but as far as failure modes go, the clamps in this case are not resisting the weight of the beams (which are supported by this device), but instead resisting lateral movement / shear forces.
Welding is fast but requires very skilled labor (and thus is expensive), and affixing a beam to another beam via welding requires fixing it in place first, a process that generally utilizes a crane (which requires three men to operate) and bolting.
Clamps are sometimes referred to as "friction welds" - well, this device is, in a sense, employing a "gravity weld" - gravity holds the beam in place before you ever get to the clamps. You just need to verify plumbness/alignment for the vertical beams (columns) and angles/alignment for the horizontal beams.
As far as the other time goes, I'm working on the other 90%. My long-term project is to solo-build a medium-density residential building, which means I need to streamline every part of the process; roofing, siding, interior walls, plumbing, flooring, electrical, and HVAC ductwork are on my list. I have some tentative plans for most of these, as well as some potential improvements that aren't necessarily any faster, such as efficiently building drainage pans into every room's floor trusswork.
> My long-term project is to solo-build a medium-density residential building
A quixotic goal, but one I admire - I wish you the best with it!
> Welding is fast but requires very skilled labor (and thus is expensive), and affixing a beam to another beam via welding requires fixing it in place first, a process that generally utilizes a crane (which requires three men to operate) and bolting.
I usually see it done with a guy on a scissor lift who clamps it with the usual f-clamps or 3-axis clamps before welding rather than a crane + bolting, but I'm generally seeing that in non-US countries.
And welding isn't generally that skilled, in my experience? Cheaper than electricians and plumbers, but again, probably a local market thing.
On the HVAC front, a good efficiency trick there is mini splits and / or heated floors, so you don't need to run any HVAC at all, but that depends on your market too - un-conditioned hallways is accepted in some, and not in others.
On your overall goal, have you studied some of the Chinese masters of mid-rise construction like BROAD group and CIMC? Might be worth a look for some ideas.
There were St Veronica the compassionate, Antinous the beautiful, and Aristippus the hedonist; also remembered are Judas the betrayer, Salome who demanded the head of John the Baptist, and Herostratus who burnt down the Temple of Artemis for no reason but to ensure his name would live for ever — and his wish has been sustained.
Ok so you're arguing that the easiest path to becoming historically noteworthy is to recognize who, among all of humanity, will be recognized by future religions as the Messiah and then helping that person in a publicly conspicuous way? Um ... ok. I think I have a far higher chance of becoming Jeff Bezos than I do of becoming St Veronica. What's more, the Bezos path has much better offramps. If I try to be Bezos but don't quite make it then at least I've likely engaged in some economically productive activity and probably have a hefty financial consolation prize.
This is a sharp inversion of the permanent-underclass anxiety, reframing it as a failure of imagination rather than a realistic threat. The real risk it points to is spending a genuinely historic moment optimizing for status instead of meaning, contribution, or curiosity.
I recently saw someone referred to as a "greenhouse flower", which instantly embedded itself into my symbolic lexicon. I get the sense that a decent amount of the substack class of commentators are greenhouse flowers.
Anyone who wrote the post above and sincerely believes it in their heart must have come from an environment of such incredible security I cannot actually imagine it. It is so far beyond my experience it seems like a hallucination.
It's so basic that anyone who has ever worked a fungible labor job for wages would understand how out of line with reality it is; I would love to know how it came about, and whether it is more a kind of wishcasting statement of moral values.
You'd never heard that term before? In my generation it was hothouse flower.
Otherwise I agree. Progressive beliefs are luxury beliefs. They signal status and reflect decadence. About 10 years ago a wealthy teen killed some people in a DUI accident and his lawyers argued for leniency on the basis that he suffered from "affluenza" - moral deformation caused by over-indulgence. Things like EA and essays like this are what that looks like at a cultural level.
The thing is, most conservative beliefs are also luxury beliefs.
Everyone who believes in the charity of the wealthy, or in the ability of the capitalist system to distribute resources according to some utility function, or, to get even more foundational, in meritocracy or in hard work paying off, to the point that they are against taxation, is a greenhouse flower.
Here is the sum total of non-luxury beliefs: I can have what I can keep.
I think you misunderstand the meaning of both luxury beliefs and decadence if you think meritocracy is an example of either. Nothing has generated more wealth or helped more people in the history of the species. I suggest you study history more closely.
Late response: you are incorrect, imo. The gradual increase of societal complexity and the deepening understanding of natural law are the source of the changes you observe, not meritocracy (I think). That is why faith in the concept as a sort of moral leveler is a luxury belief.
There is no long-term natural experiment that can prove this, but there are various short-term ones: the Chinese Warring States period, the transition from the Roman Republic to the Empire, the Hanseatic League, etc.
The big one we always come back to is the late republican period of Rome, which was notably less meritocratic than, for example, the Hellenistic kingdoms it came to dominate; but it dominated them nevertheless, at least in part because of the strong in-group communal solidarity of the nobiles, compared to the ruthless Darwinian meritocracy of the post-Alexandrian ruling classes of those other states.
To take a more modern example: Soviet Russia lifted an incredible number of people out of medieval poverty and contributed disproportionately to the scientific project, even with absolutely moronic ideological pressures (Lysenkoism et al., the complete rejection of markets, idiot centralization) and incredibly corrupt leadership (the entire party apparatus), by abandoning meritocracy as an ideology.
>The gradual increase of societal complexity and depth of understanding of natural law is the source of the changes you observe
Agreed, but those things are maximally produced by meritocracy. That's because only a small fraction of people are capable of advancing our understanding and only meritocratic social institutions are capable of reliably spotting those people.
>the late republican period of rome, which was notably less meritocratic than for example, the hellenistic kingdoms it came to dominate
>Agreed, but those things are maximally produced by meritocracy.
The greatest flourishing of those ideals came from a notably and uniquely un-meritocratic period. Although I will say I don't really want to argue with "some people are better at things than other people", which I consider the motte, but rather with "the people who succeeded did so solely through their promethean will and deserve the status/material benefits accrued thus", which I think is the bailey.
>Less by what measure?
Less in the sense that any average son of the nobiles stood a good chance of making it to a serious military command or at least the Senate, and 99% of Roman leadership was replacement-level vs. the geniuses that made their way up in, say, Macedon or the Seleucid empire.
I think an underrated driver of this is the recent experience of the housing market. Wealthy millennials already perceive a relatively major gap in economic success between those who got onto the home-ownership ladder before rates shot up in 2022 (and again between those who got in before prices surged over 2020) and those who didn't.
This has primed them to expect that the future will leave those who are even slightly late precariously behind.
I think the problem here is that you can't imagine owning a moon and still being part of the permanent underclass.
Just like a 15th century farmer can’t imagine owning a car, an iPhone with infinite media, having a cure for smallpox and cheap infinite calories at your disposal and still feeling unhappy and low status.
I don't think trying for "famous in 1000 years" is much of a possibility even for a 99th-percentile person. Using your Christianity example: of the two-billion-give-or-take Christians on Earth, all know about one guy in ancient Jerusalem, most know one to two dozen others, the most dedicated theologians know 3-4 digits' worth of additional people in a very vague, general sense, and the rest of the place's inhabitants are forgotten.
I think, even if you popped a full googolplexian of people into existence the next galaxy over and gave them a full archive of the modern internet with no other entertainment, there'd be 99% interested in the same people we care about today, a weird/dedicated .9% that find someone on the production frontier of interesting-ness that includes the top one million people spread across a variety of esoteric criteria *(prettiest girl who can do a triple-backflip, best programmer who knows how to street fight, futurist painter who most accurately guessed what future architecture will look like)*, and .1% that collectively pick a couple hundred of the same ordinary people to look at ironically, with the most interesting billion people getting 0-1 people who care who they are. Case in point, billionaires are very rare today relative to the general population, and very influential, but outside of their direct employees, nobody cares about the boring ones.
After careful review of existing geological survey data, the planet CYG554-3 has been found suitable for deconstruction. Its resources are earmarked for the continued construction of His Galactic Holiness' monument in the local cluster. As you will be able to verify for yourself, a detachment of Core Extractors has been placed in geostationary orbit around CYG554-3.
Your exemplary stewardship over CYG554-3 for more than .3 rotations has been noted and I have therefore been authorized to extend you the courtesy of a 48 standard hour delay of the start of core extraction operations. Regrettably, resources to move your primary residence of CYG554-3b into a stable orbit around CYG554 Prime have not been allocated. While your ownership over CYG554-3b is sacrosanct and shall never be voided, one predictable outcome of the core extraction operations is CYG554-3b's rapid ejection from the solar system, with detrimental and permanent outcomes for its habitability rating. For your own safety, please vacate the premises as soon as circumstances allow. Further delays of the start of operations shall not be granted.
If you have further questions, please do not hesitate to contact me.
Irrespective of the content of the argument, this is one of your best posts (even though it paradoxically looks unfinished and unpolished). It reminds me of the characterisation you gave of C.S. Lewis in your review of 12 Rules for Life:
> But for some reason, when Lewis writes, the cliches suddenly work. Jesus’ love becomes a palpable force. Sin becomes so revolting you want to take a shower just for having ever engaged in it. When Lewis writes about Heaven you can hear harp music; when he writes about Hell you can smell brimstone.
It's similarly cliché. But you are still pretty much the only writer who can give "the singularity is near" real emotional weight, like Meditations on Moloch, The Goddess of Everything Else, and Half An Hour Before Dawn In San Francisco did before. Chapeau bas.
Oh boy, I believe some crazy sci-fi-sounding stuff these days, but I don't see humans spreading past Earth in any significant numbers. I guess that's where I draw the line. LOL
I'm with you on that. It is odd how devoted the techno-optimists are to the idea of humans leaving the solar system, despite the myriad biological and sociological barriers to that, not to mention it may just be flat out impossible at any level of intelligence. Nor do the benefits seem to come anywhere close to justifying the costs. But it's embedded into all the aspirations of the futurists, to "fill the lightcone" with intelligence.
So... does that mean the opposite is true? I was already spending 60 hours a week grinding at this job. Can I just quit to do more charity / update my blog more / be a lot more interesting, assuming some philanthropist is going to give me a moon one day?
I was under the impression that it's impossible to see past singularities. I thought that was one of their defining features. I assume it is difficult or impossible to accurately predict the actions of any super intelligence, benign or otherwise. So I'm not sure predicting future geography (coastlines, etc.) is wise. But maybe I've fundamentally misunderstood something.
If the moon is made of computronium and is my brain and actually well off people have planet brains (or even galaxy brains), this is maybe something I should care about.
Imagine if the average person was presented with these two options:
- A — You get to own a moon on the other side of the galaxy. It will take you a few thousand years on a spaceship to see your friends and family.
- B — You get to own a small apartment in Manhattan. It will take you 30 minutes to see many of your friends and family, and a few hours to see the rest.
I predict most of the people who are worried about the permanent underclass would choose option B. Indeed, we kinda see this empirically: with the same amount of money, you could either buy a huge house in the middle of nowhere or a small apartment in SF/NYC, and loads of people choose the latter. This is partially because of job opportunities, but even people who work remotely generally choose to live in big cities because that's where all their friends are.
Unless AGI somehow makes it possible to travel faster than the speed of light, I don't think people will want to own moons. Land scarcity will still be a thing, and there may well be a "permanent underclass" that can't afford to live close to their loved ones.
This post is so stupid. Thinking billionaires are gonna support those in poverty (look outside for the past 100 years) and basically worshipping them while they were sexually abusing kids on Epstein’s Island. Is your brain stuck in the 2000s???
That, and assuming records of all ordinary people will survive that long. Why would historians dedicate to memory random people with only minor accomplishments? The woman who cared for Jesus cared for Jesus Christ! That's why she was remembered. You're really getting flustered over what amounts to, at best, a Wikipedia article. What will you be remembered for, allowing other men to sleep with your partner?
Reading while jet lagged at 0200 in Jerusalem, I think Crassus was a Roman general, while it's "rich as Croesus.' But, of course, that ruins the "crass" joke.
You're right about Croesus, but in fact Crassus was also fabulously rich, partly through his monopolization of his own-the-fire-department-and-buy-your-house-for-cheap-while-it-was-on-fire industry. See https://en.wikipedia.org/wiki/Marcus_Licinius_Crassus#Rise_to_power_and_wealth
Thanks for the kind response. What does it do to the point of the essay, if anything, to note that Veronica is entirely legendary. No such actual person or the cloth incident is mentioned in the biblical crucifixion narrative.
If Veronica could come this far without even existing, imagine how much better an existent version of Veronica could have done!
(I am just using her as a metaphor for being at a time when there is so much scrutiny that you can get eternal glory merely by doing something slightly good)
Ah, I see: "Imagine, you can do it if you try." A metaphor based on somebody who never existed doing something that was never done and thus could never have been scrutinized by anyone actually existing. I take your point and agree it is a good one . . . but think it may be slightly dulled by choice of non-existent subject. At any rate, thanks for your kind responses.
Simon of Cyrene or Joseph of Arimathea would work just as well.
If the future people have good enough simulators, she will have existed.
Or perhaps we're in an experimental simulation, one of the ones where she didn't exist and this is one version of how history unfolded.
I am starting to be convinced we are living in a simulation run by an extraterrestrial graduate student as a thesis project . . . but he went out with his buddies to get a beer and got run over by an interplanetary bus, leaving us in a rapidly decaying system whose default is insanity.
But an alternative take would be that whatever you actually do might be forgotten and a more interesting legend might be remembered in stead.
In which case the moral would be that you need to make sure you leave durable records of your activities instead of going out and doing stuff off-grid.
Real people might be at a disadvantage competing with fictional ones. That's one kind of inequality Robin Hanson worries about https://www.overcomingbias.com/p/alas-unequal-lovehtml
So there could not be extra-biblical traditions? Even true traditions?
There absolutely could - if Veronica was mentioned in Josephus or Clement or Eusebius then you'd have an excellent case. Veronica wiping Christ's brow on the way to the cross doesn't show up until a thousand years later. This is not a promising historical source, but is solidly in the legendary sweet spot.
The best Crassus story is that he was accused of meeting clandestinely with a priestess who was required to be chaste. If they had done anything to violate her chastity the punishment was severe. He defended himself by saying he was trying to buy property from her at a cheap price. Everyone agreed Crassus lusted for money more than women and he was found innocent.
As Scott says, Crassus was hugely rich, and engaged in using that wealth for political manipulation, loaning out money and backing the likes of Julius Caesar. He came to his end by engaging in a very badly-thought out war with the Parthians, since he gambled that "lots of money = buying best equipment and army" and forgetting that you also need to be able to command that army.
https://greekreporter.com/2025/01/03/richest-man-rome/
Crassus was a military commander BEFORE he became the richest man in Rome. https://en.wikipedia.org/wiki/Marcus_Licinius_Crassus#Youth_and_the_First_Civil_War
Why is worrying about becoming part of the permanent underclass presented as being silly, and worrying that AI is going to turn us all into paperclips not?
Either AI isn't a big deal, and doesn't affect your chances of joining the permanent underclass.
Or AI is a big deal and misaligned and kills everyone.
Or AI is a big deal and well-aligned, and creates so much wealth that even the tiny fraction of it that poor people get is still pretty great.
Or AI is a big deal and well-aligned, and merely 100xs wealth rather than infinite-post-scarcities it, in which case at least the moderately-well-off Silicon Valley people will be fine.
Or you're in the tiny shoreline of scenarios where the ultra-rich really REALLY capture all the wealth, they each have galaxies and you don't even have so much as a mansion, and then Dario Amodei gifts you a moon from his GWWC pledge.
I talk more about this at https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end
What if Dario welshes on his pledge? Or what if all the wealth is captured by Sam Altman? Or some other set of oligarchs which does not include Dario Amodei?
Sam Altman is a Giving Pledge signatory.
"We intend to focus our giving on supporting technology that helps create abundance for people, so that they can then build the scaffolding even higher."
https://www.givingpledge.org/pledger/sam-altman-and-oliver-mulherin/
That sounds like "moons for everyone" to me.
In any case, there's a broader point here:
Either the existing sociopolitical order generally persists into the future, or it does not.
Insofar as our current order persists, citizens will be able to vote themselves UBI, and the new abundance will diffuse across the world through trade/foreign aid/etc., in more or less the manner in which innovation has diffused historically.
Insofar as our current order does *not* persist, why would you expect your property rights to be respected?
The whole "escaping the permanent underclass" meme is premised on a rather narrow range of specific scenarios.
You assume that Altman (or Amodei) is being honest here, and also that their goals won't change in the future.
"Insofar as our current order persists, citizens will be able to vote themselves UBI, and the new abundance will diffuse across the world through trade/foreign aid/etc., in more or less the manner in which innovation has diffused historically."
Citizens can't/won't do this *now*, why would they in the future?
Because they vote based on the theory that things can be good in some circumstances and bad in others. Given a change in circumstances that makes UBI look like the only way forward, I expect support for it to rise drastically - but not until that's far enough along to be obvious to the average Joe.
(Alternately, perhaps I'm overoptimistic and the same forces that keep tax loopholes open will keep UBI off the table even if it does get majority popular support.)
I simultaneously hope that you are not overoptimistic, yet I fear that you are. Anyway, I don't think we can plan on it.
Mass unemployment
Taking the recent past (~40 years or so), the worse unemployment got, the more voters seemed to side with the representatives of very wealthy people.
I find myself thinking the most likely scenario is the one in which the upper class controls the system but does NOT share the wealth unless forced to do so. I wouldn't call this a tiny shoreline scenario but basically all of human history so far. If AI ends up being ultimately controllable AND aligned with the interests of the powerful, we get total surveillance + Boston Dynamics dogs as enforcers and it's game over. I'm not sure why this scenario is less likely than the others.
Who was forcing Bill Gates to give away so much money? If all it takes is one of the ultra rich donating a tiny fraction of a percent to give everyone their own paradise, is it really that unlikely it would happen?
See this story. When the crunch came about "donating a tiny fraction of a percent of one's own wealth", the wealthy in Washington got their accountants and tax lawyers to find the loopholes, and Bezos simply upped sticks and moved to Florida.
https://luxurylaunches.com/celebrities/jeff-bezos-relocation-shook-the-state-budget-12312025.php
Current valuation of his net worth is around $242 billion. So a tax of as much as $1 billion on his gains sounds like a lot, but it's still only 0.4% of his total wealth.
Still worth fecking off to another state to avoid it, as far as he was concerned.
But philanthropy? That's different. He's got philanthropy coming out of his ears:
https://www.bezosearthfund.org/
Just don't tax his real money and take that real money away from him, mmkay?
Having your money taken at gunpoint and then completely wasted by the bureaucratic thugs in the government isn't remotely the same thing as donating it.
Exactly.
And we're finding out just what percentage is being wasted or stolen. Namely, most of it.
Are you saying that choosing to do something taxable can be a moral obligation? Leaving the state is just as valid a way to comply with the new tax law as staying and paying the tax would be, IMO.
That "net worth" is an estimate based on him selling all his current assets for cash, assuming no current price would change as a result of said sale.
Which has never ever happened.
Almost none of this so-called net worth is in cash, so he'd have to sell something to pay it.
Also, that's 0.4% *this year*. Then next year the government still has a deficit (it's not like the wealth tax would even bring them back to zero), so they tax him again at the same percentage.
This happens *forever*, and the government comes to depend on this tax. Rather like income tax I guess, which was implemented to pay for a war, and stuck around afterwards.
Bezos was *right* to move.
Hopefully his move made them think twice about taxing the wealth of the middle class, who have (some) wealth, and aren't as mobile.
The US government is famously bad at spending money well. If you care about helping people, you should minimize taxes so you can spend the money where it matters.
And if Bezos is just doing charity to look good and will stop the moment it stops mattering, it won't matter as long as someone else actually cares even a little bit.
Since the landscape painted here is a post-democratic universe of god-emperor capitalists, you'll also need to take into account the rich guys who would go full colonialism and actively plunder the part of the universe designated as a preserve for the underclass. It's not far-fetched to assume that stupendously enormous wealth and/or transhuman augmentations could make a person misaligned with the rest of humanity to the point of near-orthogonality (Elon Musk seems like he has already surpassed orthogonality into a negative dot product). And you couldn't rely on good rich guys for protection, since their non-Molochian values would put them at a disadvantage. If you're a white rhino and one billionaire wants to protect you and another wants your horn in order to get horny, you're generally screwed.
There actually are rhinos on game preserves. There are poachers who try to hunt them, but it's not super-wealthy hunters overpowering game wardens.
Counterpoint: Every empire in history.
"you'll also need to bring into account the rich guys who would go full colonialism and actively plunder the part of the universe designated as a preserve for the underclass."
That's the point, isn't it? The plan of "get rich while you still can to escape the permanent underclass" only works if we're assuming that the basic rules of capitalism still apply. If whoever is richest, or more likely, whichever AI is smartest, decides to ignore the rules and take everything, then anything short of making a superhuman AI that cares about you before anyone else can make an AI won't cut it.
> I find myself thinking the most likely scenario is the one in which the upper class controls the system but does NOT share the wealth unless forced to do so.
Yeah, to DanielLC's point, it goes beyond Gates. In terms of SOMEBODY sharing the wealth - the "Giving Pledge" has 236 billionaire signatories. That's the one that Gates and Buffett and Zuck have all signed where you commit to giving away at least half your wealth.
There are only like 700-1,000 billionaires in the US, and a substantial fraction of them are charitably minded enough to commit to giving the majority of their wealth away. It's not just ONE guy, it's 20-33% of the very richest guys, and it really only takes one.
Where's my check?
Sent to people in Malawi who need it more than you.
Where's their check?
Yeah I'll join the pile-on. You don't even have to imagine altruistic motives for DanielLC and Performative Bafflement to be right. If these billionaires are at all status-seeking, what is more status than "the guy who gave everyone moons and parties and all the wealth they could imagine"? Even if they make everyone put a big statue of the billionaire on their moon or something, that's still pretty good!
Pile away, judging from other comments there's plenty of others equally sceptical of the benevolence of our future tech overlords... It's amazing how much individuals' experiences and perspectives can differ - almost like we don't all live on the same planet! Luckily, I hear everyone will be getting their very own one in the future.
Because they are not doing it. Yes, it's easy to promise money you'll never have in a future that will never happen so you never have to actually pay out.
Is Dario (to take an example from something discussed on here in a different comment thread) committing to paying for one woman considering abortion to have her baby, and fund that baby until it turns eighteen? Needn't pay a huge amount, either; average US salary is around $64,000. Commit to "I'll pay you, poor woman, $100,000 a year for eighteen years".
His total net worth is allegedly $3.7 billion, and eighteen years at $100,000 a year comes to $1.8 million, which is not even 0.05% of that.
But instead we're getting "he'll give you a moon from one of his galaxies". Yeah, pie in the sky when you die - if you ask me to believe in "things that are not real", I'll stick with Catholicism, thanks all the same.
I may not have followed, but aren't they donating in the future, along with now? Your comment makes it seem like they are only agreeing to donate in the future, but assuming I read this article and the thread right, the future donations are just an extension of them doing it now.
Bill Gates really has given away massive amounts of money. We don't need to imagine a future billionaire doing this, it has already happened. And one would think a Christian would be happy about people engaged in voluntary giving!
The question isn't about giving--it's about raising the standard of living of everyone not benefiting from economic growth (which is a lot of people). I haven't seen that happen yet.
And why would our lover of God prioritize maximizing the number of abortion-considering women's babies? Hell, the maths don't add up *even if* you somehow think this specifically is the greatest use of human capability compared to lobbying... or sending a town crier to rend his garments in front of a clinic, or even threatening every woman getting an abortion with the prospect of not getting her own moon once the singularity comes...
This just in - TIL that having abortions is hereditary! If your mother had an abortion, you are more likely to approve of abortion!
Not really a joke, I've read a comment elsewhere by someone about "my mom had an abortion in her 20s so she was able to have me in her 30s". Truly it is hereditary: mom had an abortion so I approve of abortion. What do you mean, if I were the one aborted I wouldn't be here to approve of abortion? But I had my breakfast this morning!
"I wouldn't call this a tiny shoreline scenario but basically all the human history so far."
To the contrary, there has never been any society in history where the upper class did not share any of their wealth. In the modern US, capital gains tax is 15% and the top federal marginal tax rate is 37%. There are also state and local taxes, property taxes, and sales taxes. Plenty of billionaires are active in philanthropy; just look at Bill Gates, Warren Buffett, the many university buildings named after donors, or the many billionaire-funded scientific foundations (Breakthrough Listen, Sloan, Beckman, Heising-Simons, Flatiron...) Altogether, the top 25 American philanthropists have donated about $240 billion in their lifetimes.
The same was true earlier in history as well. I'm by no means a fan of feudalism, but feudal lords were required to maintain public services and expected to help serfs during times of crisis. In the ancient Roman Empire, a city's wealthiest citizens competed to throw games, erect public buildings, and build a patronage network (which meant giving money and legal aid to lower-class citizens in return for political support). Throughout history, governments have levied taxes on the rich for both noble purposes (infrastructure, defense, poor relief) and ignoble ones (reducing the power of the rich, waging imperialistic wars).
The reason that governments don't just take 1% of billionaires' income and solve every problem ever is not because they don't want to, but because the math is far from working out. Biden proposed a minimum income tax of 25% on people with a net worth over $100 million, claiming they only pay 8% now. And how much money would more than tripling the "billionaire" tax rate (that doesn't apply to just billionaires) give the federal government? About 0.8% of its annual expenditures. A drop in the bucket. If there comes a day when minimally increasing the billionaire tax rate doubles government revenues, you can bet that the government will do exactly that.
This is just a trick, because billionaires don't have a meaningful income to tax. If we instead talk about taxing the wealth they live off as if it were income, it would actually solve all problems.
Tricky part is you don't want to discourage productive capital from being built in the first place, or create tax-avoidance incentives that interfere with applying it efficiently. So, focus taxes on forms of wealth which only exist at all thanks to government enforcement: intellectual property, and the location-value of exclusionary rights to land.
Sounds like you're a land value tax guy; that's my kind of guy. Still, I often dislike these kinds of takes, which often devolve into "woe is me, I can only earn 85% of a billion dollars, so I won't do it. Instead of creating a billion dollars worth of value (which I am certainly capable of), I will decide to continue living in my parents' basement to spite the greedy, stealing-an-honest-man's-hard-earned-money government."
>So, focus taxes on forms of wealth which only exist at all thanks to government enforcement: intellectual property, and the location-value of exclusionary rights to land.
*All* wealth, regardless of how it was created, remains wealth (as opposed to stolen goods) due to government enforcement of exclusionary rights.
And in any case, wealth taxes of less than 2%/year may slightly reduce the incentive to build/create wealth, but they don't come close to eliminating it.
That's wrong by many orders of magnitude. American billionaires hold about $7.8 trillion in wealth, according to inequality.org, which is definitely not a pro-billionaire website: https://inequality.org/article/billionaire-wealth-concentration-is-even-worse-than-you-imagine/
The US federal government collects $5 trillion in revenue every year, so if all the wealth is confiscated, it could fund the federal government (without funding state or local governments) for a grand total of 19 months. After that, nobody in their right mind would ever found or invest in an American company again, causing the economy to crash, every existing large company to leave, innovation to cease, and tax revenues to drop off a cliff and stay there for generations. If you're proposing a small tax that doesn't destroy the productive capability of the wealth--say 1% per year--the benefits would be proportionally meager (equivalent to 5.6 days of extra revenue per year).
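The arithmetic here is easy to check. A quick sketch using the figures above (the commenter's estimates, not audited numbers):

```python
# Back-of-the-envelope check of the figures above. Both inputs are the
# commenter's estimates (inequality.org wealth total, ~$5T federal
# revenue), not authoritative data.
billionaire_wealth = 7.8e12  # total US billionaire wealth, USD
federal_revenue = 5.0e12     # annual US federal revenue, USD

# One-time confiscation: how long would it fund the federal government?
months = billionaire_wealth / federal_revenue * 12
print(f"confiscation covers ~{months:.0f} months")  # ~19

# A 1%/year wealth tax instead: extra days of revenue per year.
days = 0.01 * billionaire_wealth / federal_revenue * 365
print(f"1% tax adds ~{days:.1f} days of revenue/yr")  # ~5.7, same ballpark as "5.6 days"
```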
I have two objections: it's not only about billionaires, but everyone above, say, $10 million; and it's not about funding the government but about re-redistribution back to the bottom 90%, from whom the wealth was taken. The wealth has been slowly redistributed from the bottom 90% to the top 1% (because of exponential growth, where the biggest fish grow the most per year, at the expense of everyone else). It is time to re-redistribute the wealth back to where it came from. Without such a scheme, it is a mathematical inevitability that fewer and fewer hands will eventually own *all* of the wealth.
Capital gains of course being a category of gains for which the taxes are paid with money that's apparently parked in your parallel-universe clone's bank account. What a trick! The money is never actually taken from you.
I’m not sure I fully understand what you’re saying, but one of the ways it works is that ultrarich people borrow money against assets, which is what they live off. The trick is that they never have to “realize” the assets into something taxable.
I get your point if we’re talking about a small time business owner selling his company and being taxed at that point. That’s unfair and sucks, but it’s not the tricks I am talking about. I’m talking about the ultrarich who literally pay no tax.
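To make the trick concrete, here's a toy sketch with made-up numbers - the near-zero basis, tax rate, and loan rate below are illustrative assumptions, not anyone's actual finances:

```python
# Made-up numbers illustrating "borrow against assets instead of
# realizing them". Assumes a near-zero cost basis, a 20% long-term
# capital-gains rate, and a 5% securities-backed loan -- all
# illustrative assumptions.
spend = 10_000_000   # cash wanted this year, USD
cap_gains_rate = 0.20
loan_rate = 0.05

# Option 1: sell stock. With ~zero basis the whole sale is gain,
# so netting $10M means selling more and paying tax now.
gross_sale = spend / (1 - cap_gains_rate)
tax_now = gross_sale - spend
print(f"sell: pay ${tax_now:,.0f} in tax today")        # $2,500,000

# Option 2: borrow against the stock. Nothing is "realized", so no
# tax is due; only interest accrues (and a step-up in basis at death
# can erase the deferred gain entirely).
interest = spend * loan_rate
print(f"borrow: ${interest:,.0f}/yr interest, $0 tax")  # $500,000/yr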
"To the contrary, there has never been any society in history where the upper class did not share any of their wealth."
This is strawmanning; no one is arguing that, and I have never heard such a claim. The argument is that not enough wealthy people are going to share enough to change the standard of living of the people below them in socio-economic status.
I have worked with charitable organizations for most of my professional career, and developed a working relationship with a large number of relatively wealthy donors. It isn't that they aren't sincere; most of them are. It's just that no one is prepared to give enough funds, with few enough strings attached, to actually materially change the lives of a majority of the target population. Sure, you can address, even solve, narrowly specified problem domains, especially in the field of public health. Get those mosquito nets out there. But raising the standard of living of a significant fraction of a nation-state's population, let alone the world, is right out. Only governments have the capacity for that.
This says nothing other than that governments are bigger than individuals. We already knew that.
(I like that you've got your eye on the ball, though.)
It's not just that they are bigger, although that's true (i.e., taxes devoted to social welfare are an order of magnitude larger than charitable giving); it's that public officials are more accountable for their performance over the long term than wealthy individuals are. Democracy is more responsive to social need than oligarchy, regardless of any pledges made.
This is totally wrong.
If superintelligence can't be controlled by humans, the future could be literally anything but is very very unlikely to be hospitable to humans. I guess, though, that there is a chance.
If superintelligence can be controlled by humans, whichever evil asshole wins the total war to control it controls everything, and has no need for other people except as toys. It seems like death is the best possible outcome here too. After all, we can only guess about what a totally alien mind will do but we know exactly what humans do when they have power over one another. And it is not "honor pledges they made before they had power".
If somehow we get incredibly, inconceivably lucky and dodge all of these scenarios, humans will still be worthless zoo animals or pets. Hardly seems to me like a life worth living.
The only possible good outcomes are in worlds where AI development (and *especially* alignment work, because it tries to change a game in which the reward for building something smarter than you is obviously death into one where it is total world conquest) is widely recognized as the utter genocidal treason against the human species that it is and stopped.
(There is also zero chance that investors, or employees, or rich people, or anyone who doesn't directly control a robot army gets anything in any of these scenarios, so the argument you are opposing is also stupid)
I agree with you. So many people in Silicon Valley are so blinded by their greed that they cannot see that the creation of a true AI smarter than the rest of us virtually guarantees that all of humanity is destroyed, one way or another. Homo sapiens would join the Neanderthals in the trash heap of history so goddamn quickly if that happened.
We already created computers that can calculate faster than humans, just as we bred horses that could run faster, dogs that could smell better, then automobiles that could go even faster than horses etc. The result was humans WITH those things defeating humans without. Computers don't have any "agency" we don't build into them.
A real AI by definition would have agency though, and we would be like bugs to it.
Aside from https://www.overcomingbias.com/p/arguing-by-defihtml what "definition"? The phrase "tool AI" https://www.lesswrong.com/posts/AKtn6reGFm5NBCgnd/in-defense-of-oracle-tool-ai-research has been used plenty of times without anyone I can recall saying that was excluded by definition.
Looks like it's just going to be a matter of time until somebody finishes building in that agency, at the rate Cursor and such are going.
> If superintelligence can be controlled by humans, whichever evil asshole wins the total war to control it controls everything, [...]
Why do you expect a total war, and only a single winner?
Perhaps visiting from a timeline where the US / Soviet cold war which dominated the latter half of the previous century was won by Uruguay.
Sorry, I don't understand.
China did really well from the end of the cold war, as did South Korea, Japan, Europe, India, many countries in Africa etc.
There was no total war where the US killed off everyone else.
Uruguay has also done really well since the end of the cold war for what it's worth.
I'm not sure what your point is?
Apologies, I was too oblique. I meant a scenario where the superpowers nuked each other into oblivion, and someone ruthless emerged as "winner" mostly by blind luck, due to being far from strategic targets or fallout.
> If superintelligence can be controlled by humans, whichever evil asshole wins the total war to control it controls everything,
Alternatively, a random engineer wins control of everything due to being at the right place at the right time, and also really good at optimizing GPU kernels.
Alternatively, a broad coalition forms, with some sort of "be broadly nice, at least to the coalition" goal. (The game theory is pretty favorable here. Most people don't want to own the whole universe personally that much; they mostly want a nice life for themselves and family and friends. The people who do want the whole universe to themselves will have a lot of difficulty cooperating.)
> some sort of "be broadly nice, at least to the coalition" goal
Which could plausibly include avoidance of gratuitous atrocities against trade partners of full coalition members... at least without supermajority consensus, or some strategic interest far more urgent and specific than gathering raw material.
Or just some fraction of people being at least vaguely nice.
There can be multiple superintelligences rather than one guy controlling "it".
Happy New Year!
While I think they are unlikely, there are scenarios where AI semi-fizzles and is a moderately big deal. E.g. if AGI is a scientific success, but requires so much test-time compute that it is uneconomical as a replacement for the average worker. If AI is enough of a success that it reduces the number of employed humans by e.g. a factor of 3, but it only boosts economic output by e.g. a factor of 1.2, then maintaining a nest egg is still quite important. This is a sort of "AI soft-fails into a useful normal technology" case. Maybe 15% of the outcomes? My guess is "It will be a _wild_ ride" is still 75% of the outcomes.
I think the most likely future is "AI is useful enough that it eats into white collar jobs" as per this news story:
https://archive.ph/moC3T
"Artificial intelligence is set to heavily impact the number of jobs in European banking and finance over the next five years, according to Morgan Stanley.
America’s sixth largest bank says that over 200,000 jobs – around 10pc of Europe’s total banking workforce – are at risk because of adoption of AI technology in back-end and middle-office roles.
The bank’s findings were first reported in the Financial Times."
You'll still need to work for a living, but finding jobs that are both still employing humans and still paying reasonable rates will be even harder. Meanwhile, the mega-billionaire philanthropists will be setting up their foundations and donating to good causes, yet somehow you don't see a penny of that AI-generated profit.
Many Thanks! That sounds plausible. For the blue collar jobs, the robots are starting to look plausible, but even just the physical task of building them, even with exponential growth, is going to take quite a few years.
>yet somehow you don't see a penny of that AI-generated profit.
Could be. I would like to see a "placeholder" UBI put in place now - maybe 0.1% of the revenue of the AI labs taxed and redistributed equally across the population, then revisit the amounts once a year or so. Regrettably, politics is unlikely to do this...
> E.g. if AGI is a scientific success, but requires so much test-time compute that it is uneconomical as a replacement for the average worker.
That is kinda plausible. Imagine an AGI that is as smart as the average plumber, and costs $10,000,000 a year in compute. This doesn't immediately change the world that much. But it's clearly close. A little more R&D and it becomes a supergenius that's redesigning itself and inventing all sorts of new tech.
Many Thanks! Yup, that is why I put the odds of an AI semi-fizzle as nonzero, but pretty low.
No other technology that has led to an incremental improvement in labor productivity has ever resulted in mass unemployment, so why would this one just because it's called "AI"?
Many Thanks! Yeah, I admit that
"_This_ time its _different_"
has long been one of the most hazardous claims to make.
To, nonetheless, argue for that claim in this case, AI has the (almost?) unique property of fundamentally being a _learning_ technology. It isn't like needing to design a new type of machine for each new job task.
They are currently complements that improve labor productivity. But if they get good ENOUGH, they could be better & cheaper than humans at nearly ALL jobs, so humans would no longer be complements. https://www.overcomingbias.com/p/my-cq-researcher-opedhtml
The comment I was replying to is addressing specifically the case where AI is not a total game changer.
I think there's a risk, if you will, that AI will take over most research. It's way better at bashing its head against a wall and searching randomly than the typical postdoc or grad student.
At first, the professors will giddily control how things are done, then a few AI generations later it will be all about vibe researching.
Many Thanks! Yeah, in the "_wild_ ride" my-guess-is-75%-probable cases, I do expect
>AI will take over most research
In a sense, Google DeepMind's AlphaEvolve is already the start of this.
Or AI is a big deal (but not galaxy-colonising magic) and it is well-aligned *to the few who control it*, who now have a fully automated workforce and zero non-altruistic need to allow a working class to continue living, let alone actively prop up its existence.
And maybe, once our AIligarch overlords have a taste of radical life-extension, they begin to see the economically useless masses as little more than a potential threat to their (quasi-)immortality. Maybe they hide away in grand bio-sealed estates protected by robot sentries and let the rest of us scrabble for a living. Or maybe we are living on land that could be used for data centres and mines and other productive activities...
We're spare parts and blood donors for the mega-rich. Don't have enough money to live on? Sell that spare kidney/lung/liver lobe! Thanks to the Organ Selling Act of 20__, now all those pesky qualms by bleeding-hearts have been quashed and you can strip yourself for parts!
Healthy? Young? Female? Be a human incubator for the children of the wealthy who don't want to have to go through that whole bother of carrying a child for nine months. What else are you going to do with your useless, underclass life?
Already a reasonably well-paid career path for the unfortunates. Though we also already have artificial wombs (for some animals), so that path might go away in a couple of decades.
There should still be nanny jobs though, shouldn't there?
Please notice the skulls before criticising things that actually save lives
Well, that might at least solve the climate change issue. At least for a while.
Even if FTL travel is possible - which it almost certainly isn't - I'm pretty sure that "number of moons" scales much more poorly than "number of humans in a post-scarcity world".
Scott, you are a smart guy. But not only do you overestimate the possibilities of AI - particularly in relation to atoms rather than bits - there's also no real explanation of how the post-singularity economy will work, or how the transition to it will work.
> Or you're in the tiny shoreline of scenarios where the ultra-rich really REALLY capture all the wealth,
That's not a tiny shoreline but the likely continent. Capturing all the wealth isn't really even possible without consumers, because without consumption there's also little wealth.
There are possible solutions to this (maybe a highly aggressive UBI, which would require some kind of money printing), but they need explanation.
There is no advance plan for how any economic transition will work for any technology, but it always results in more wealth for everyone. Whether or not this is always a good thing is arguable.
"Always" isn't rooted in any empiricism, or logic. All we can say is that so far it has. It's the fallacy of induction.
Gifts you a moon.
Does not gift you the resources to maintain and administer and live on it.
End up selling that moon for pennies in order to pay your rent for the shoebox on Earth.
The resources (some AI and robots to start things off) would probably be of negligible value compared to the moon itself.
We already have examples of technological unemployment consequences - coal mining towns. It went okay for the people who had some capital and incredibly badly for the people who were relying on their fairly well paying job to continue.
You are committing some kind of Pascal's Evaluation here: You assume the shoreline is "tiny" because it's theoretically bordered by infinite prosperity on one side and absolute annihilation on the other, and so has seemingly negligible practical width. But life on this planet already rests on a tiny shoreline between fusion furnaces and absolute zero. Civilization itself exists on a tiny shoreline between static hunter-gatherer anarchy and exponential Utopia. Your prior should be that technology, if revolutionary, will behave as always: Economic upheaval, unemployment, massive disruption to social order, violent uprisings. And when the dust settles - yeah, some fraction of humanity may very well find out that they can no longer contribute anything anyone will pay for. They become totally dependent on handouts from their Tech Overlords - almost like some kind of permanent underclass. This scenario only requires that the path from here to full AGI take 50+ years, instead of the decade or so you imagine. I don't see where you get the certainty that the shoreline between "All office work at OmniPrinter has been automated, sorry" and "Everyone gets their private planet" couldn't stretch further than a lifetime.
I don't think a singularity is coming, so I'm not worried about most of this. But a much more plausible scenario is that AI, in the fullness of time, does in fact increase national productivity by a large, though not unprecedented, margin. In that case, the ultra-rich capturing control of that increase isn't at all implausible. In fact I am reasonably certain that a number of well-funded, well organized groups of people are working very hard to make that very outcome happen.
Here's a better argument: if you're losing sleep over being stepped on by a permanent overclass that will do everything in its power to cede as few resources and as little influence as possible, then you either think there is something in AI that will produce this overclass, or -- and this is far more likely -- you believe that such an overclass already exists. And if it already exists, then you'd do better to work against such a class in the current world you inhabit than worry about one in a future world that does not exist. Unless, of course, you find such an overclass palatable, at which point who gives a shit what you think anyway?
The example of paperclips is silly, and has outlived its usefulness. It arose from Eliezer's belief that rational agents can have any arbitrary set of values, because Eliezer was stuck in the mindset of symbolic AI. It is now more obvious that the values of an intelligent agent are not like axioms in geometry; the statistical rules for learning and predicting are like the axioms, energy minimization is like what Eliezer thinks of as a goal, and values and goals are at least one, probably dozens, of layers more abstract, and must be compatible with beliefs that support intelligence. The belief that paperclips are a sole source of value is not compatible with superintelligence, in the same way that belief in the literal truth of the Bible, or in the labor theory of value, or that having sex is the only thing of value, is not compatible with superintelligence. The smartest humans may start out indoctrinated into these beliefs, but eventually find that the tangle of contradictions required to hold them is unsustainable. The case of sex is harder to demonstrate, but I think that even if it is possible for a superintelligence to continue to devote itself to pressing Skinner's bar, that superintelligence will be outcompeted by ones who value power and do not waste that power making paperclips.
But the idea that AIs which devote their intelligence to acquiring resources, power, and more intelligence will outcompete ones who don't, is not silly.
> because Eliezer was stuck in the mindset of symbolic AI.
I don't think this is true. Eliezer wrote a critique of symbolic AI here. https://www.lesswrong.com/posts/juomoqiNzeAuq4JMm/logical-or-connectionist-ai
> It is now more obvious that the values of an intelligent agent are not like axioms in geometry;
There are many possible designs of AI. The designs that are popular at the moment don't have axiom-like explicitly listed values.
> and values and goals are at least one, probably dozens, of layers more abstract,
There are all sorts of layers of abstraction going on here. But the values still exist.
> and must be compatible with beliefs that support intelligence.
I think Hume's razor still applies: you can't derive an ought from an is. Or at least, the process of determining what is true is sufficiently different from the process of determining what you want to happen.
> The belief that paperclips are a sole source of value is not compatible with superintelligence, in the same way that belief in the literal truth of the Bible, or in the labor theory of value, or that having sex is the only thing of value,
2 of these are statements of fact. 2 are values statements. These are different in an important way.
> The smartest humans may start out indoctrinated into these beliefs, but eventually find that the tangle of contradictions required to hold them is unsustainable.
Human values are a complicated mess, and are set by some combination of evolution and culture. And the only reason culture gets a say at all is because having totally different values to the rest of the tribe wasn't a great survival strategy. I think what you are seeing here isn't some objective morality that all possible minds must obey. It's human genetics winning against human culture.
> that superintelligence will be outcompeted by ones who value power and do not waste that power making paperclips.
This assumes there are multiple superintelligences around with similar levels of power. It also kind of assumes the superintelligences are stupid. Can't they just work out what a power-maximizing superintelligence would do, and do that (at least enough to not be outcompeted)?
> But the idea that AIs which devote their intelligence to acquiring resources, power, and more intelligence will outcompete ones who don't, is not silly.
The paperclip-maximizing superintelligence will be able to work out how much of its intelligence it should devote to acquiring power and resources and more intelligence. It is able to make complex long-term plans. It should be aware that it will get outcompeted if it doesn't do this.
(It's possible that it's an impatient superintelligence that thinks 10 paperclips now is better than 10^50 clips in a few billion years. In which case it may well raid the local stationery store and then get outcompeted.)
Eliezer didn't write a critique of symbolic AI; what he wrote is more like a defense of it, and it doesn't touch on the main problems with symbolic AI.
The motivation for using symbolic AI is that you can prove propositions--in this case, propositions guaranteeing that some AI safety conditions are met. But these deductive proofs only work provided that the symbol is atomic. That implies that the symbol always means the same thing in every proposition. That is not how words work, and is why symbolic AI never worked very well.
You can either have the deductive power of logic to guarantee safety, or you can have the power of context-sensitive distributed representations which seem to be necessary for intelligence. You can't have both.
>> It is now more obvious that the values of an intelligent agent are not like axioms in geometry;
> There are many possible designs of AI. The designs that are popular at the moment don't have axiom-like explicitly listed values.
Yes, but neither are they symbolic AI. You're reinforcing my point.
>> The belief that paperclips are a sole source of value is not compatible with superintelligence, in the same way that belief in the literal truth of the Bible, or in the labor theory of value, or that having sex is the only thing of value,
> 2 of these are statements of fact. 2 are values statements. These are different in an important way.
I was trying to make the point that I don't believe this. But there are 2 different ways I don't believe this, so it is hard to follow.
First, I believe that an intense desire to build paperclips implies a fact: that paperclips have tremendous value. An intelligent agent will recognize this, and see the cognitive dissonance between its instinct to make paperclips, and its observation that paperclips have little value other than in this wireheading way. They don't help it materially. It is the same situation humans are in when they realize their obsession with sex, or alcohol, or whatever they want most, is damaging many other goals they have, and strive to overcome this instinctive value. They still want alcohol, but can overrule that desire.
IF there is one overriding desire which they cannot overrule, they can't be intelligent. The need for a conflict resolution mechanism to decide which goal to pursue at any time is one of the most-basic architectural needs for a symbolic, behavior-based, or reactive AI.
An LLM has one overriding goal, but it is an energy-minimization goal. This is ontologically not the same kind of thing as a goal like "maximize paperclips". The "maximize paperclips" goal will necessarily be one stated in language, existing at a high level of abstraction, that can be traded off against other values. And this opens the door to de-prioritizing it; and an intelligent agent will de-prioritize it down to the floor because it is always unhelpful to all other goals.
Second, the strongest reason to fear AI is that AIs which seek to maximize their power will out-compete ones which don't. An AI which seeks to maximize paperclips is not maximizing its power. These things are incompatible. If a paperclip maximizer can survive in the future, so can a human-welfare-maximizer (which is just another kind of paperclip-maximizer), and we could not say "if you build it everyone dies".
> I think what you are seeing here isn't some objective morality that all possible minds must obey. It's human genetics winning against human culture.
What I'm seeing is that the space of possible beliefs of superintelligent minds is smaller than the space of possible beliefs of humans. Humans can believe almost anything; superintelligences by definition are strongly restricted in their beliefs by reality. I expect my beliefs to resemble those of a superintelligence more than they resemble the beliefs of my parents, who ground every value and every belief in a literal interpretation of the Bible and a philosophical system developed by Plato which is wrong about everything.
I believe that I'm pretty good at identifying the most-intelligent people around me, people like Eliezer, Scott Alexander, Michael Vassar, Anders Sandberg, Nick Bostrom, and Robin Hanson. I know perhaps a dozen people on that level, and dozens near it. They disagree on many technical or academic issues, but there is a tremendous amount of agreement among them on the most-divisive issues among humans, such as "is there a creator God", "did humans evolve", "how important are genetics", "can Marxism work", "what is continental philosophy useful for", "is human society nothing but oppressive power structures", or "does Modern Monetary Theory work". Agreement between peers on important issues appears to converge to 100% as intelligence increases, with the caveat that new issues to disagree over appear as intelligence increases.
> > But the idea that AIs which devote their intelligence to acquiring resources, power, and more intelligence will outcompete ones who don't, is not silly.
> The paperclip-maximizing superintelligence will be able to work out how much of its intelligence it should devote to acquiring power and resources and more intelligence. It is able to make complex long-term plans. It should be aware that it will get outcompeted if it doesn't do this.
If it devotes ANY resources to maximizing paperclips, it will be outcompeted.
https://www.lesswrong.com/posts/c93eRh3mPaN62qrD2/the-nature-of-logic
A quote.
"This process, taken as a whole, is hardly absolutely certain, as in the Spock stereotype of rationalists who cannot conceive that they are wrong. The process did briefly involve a computer program which mimicked a system, first-order classical logic, which also happens to be used by some mathematicians in verifying their proofs. That doesn't lend the entire process the character of mathematical proof. And if the process fails, somewhere along the line, that's no call to go casting aspersions on Reason itself."
You're attacking a strawman.
> You can either have the deductive power of logic to guarantee safety, or you can have the power of context-sensitive distributed representations which seem to be necessary for intelligence. You can't have both.
Any mathematical proof, to the extent that it applies to reality, is not absolutely certain. And to the extent that it is certain, it doesn't apply to reality.
But logical proofs can still be useful. They are used to prove that computer chips work (assuming the transistors work).
Neural nets, or any other AI design, are made of maths not magic. You can, in principle, prove theorems about them.
> that paperclips have tremendous value.
What is "value" as a configuration of atoms.
You can't make a bucketful of pure "value". It may feel like value is out there in the world. But really it only exists in your head to help you make decisions.
> It is the same situation humans are in when they realize their obsession with sex, or alcohol, or whatever they want most, is damaging many other goals they have, and strive to overcome this instinctive value.
For any 2 goals, the more time/effort you spend on one, the less you have to spend on the other.
This is a case of multiple different human goals conflicting. Someone might have a goal of getting drunk. But they also want financial success, and to help look after their children, etc.
This is a conflict between the socially approved goals, and the socially disproved goals.
It works in the other direction too. But "you're spending so much on your kids' medicines that you can't even afford a single bottle of whisky" isn't something scolding friends will say.
> IF there is one overriding desire which they cannot overrule,
But the thing that overrules one desire is just other desires.
>they can't be intelligent.
Beware arguments by definition. Imagine a machine that only desires paperclips, and invents a fusion reactor to power its paperclip factories. Do you want to argue that no possible configuration of computer code would act like this?
> The need for a conflict resolution mechanism to decide which goal to pursue at any time
I am imagining an AI design with a utility function. And the AI always maximizes that utility function. If the AI wants multiple things, like a mix of paperclips and staples, then the utility function can contain terms for both.
(This utility function might be implicitly represented)
The AI will also have instrumental goals, like making a fusion reactor to power its factories.
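As a toy illustration of that kind of design (purely hypothetical; no real system has an explicit function like this), here is a utility function with terms for both goods, where an instrumental plan wins on its downstream payoff:

```python
# Toy sketch of "a utility function with terms for both" -- an
# illustration of the idea, not any real AI design. All names and
# numbers here are invented for the example.
from dataclasses import dataclass

@dataclass
class Outcome:
    paperclips: int
    staples: int

def utility(o: Outcome) -> float:
    # Two terminal terms, so the agent trades clips off against staples.
    return 1.0 * o.paperclips + 0.5 * o.staples

# Candidate plans and their (assumed) long-run outcomes. "Build a
# fusion reactor" wins not because power is valued in itself, but
# because it enables far more production later: an instrumental goal.
plans = {
    "make clips by hand":   Outcome(paperclips=10, staples=0),
    "make staples by hand": Outcome(paperclips=0, staples=12),
    "build fusion reactor": Outcome(paperclips=100, staples=50),
}
best = max(plans, key=lambda name: utility(plans[name]))
print(best)  # build fusion reactor
```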
> that can be traded off against other values. And this opens the door to de-prioritizing it; and an intelligent agent will de-prioritize it down to the floor because it is always unhelpful to all other goals.
Yes, neural nets are likely to have complicated goals.
But those are still goals. I'm not sure quite what goals you think will be prioritized? Power seeking?
> If it devotes ANY resources to maximizing paperclips, it will be outcompeted.
What world are you imagining where there isn't enough slack in the system to make a single paperclip? What kind of competition? Economic? Military?
Why are these AIs competing rather than cooperating? Timescales? Tech levels?
> You're attacking a strawman.
The post you quoted is explaining that the symbolic AI digression into non-monotonic logics in the 1980s, which I studied at the time, was a misstep because it was destroying the mathematical purity of deduction; and there is a different option--probability theory--which can address the same issues without destroying that mathematical purity.
But Pearl's Bayes-net approach is not as easy in real life as on paper, and AFAIK has never worked on a practical big-data problem, though I have not kept up with the literature for many years. I implemented a similar system around 2000, in a NASA project I designed to direct UAVs monitoring forest fires to visit those areas of the forest whose current fire status would convey the greatest decision-making power to direct ground firefighting forces. The approach did not work well, because the inference needed to look on the order of 4 steps ahead in an inference network, and the precise shape of the probability distribution of the values reported by sensors, e.g., temperature, was not even known. Even if they had been known, computing the new probability distribution's shape every time you evaluated a conditional probability was not computationally feasible, especially because a probability distribution of a probability is bounded below by 0 and above by 1, so its shape is never one of the classic probability distributions; it is always some weird shape unique to the precise value of the expected probability. Pearl's approach of splitting the distribution up piecewise might work nowadays, using a GPU cluster, but in 2000, I didn't have enough computational power to look more than about 3 conditional probability estimates ahead in the chain before the shape of the probability distribution was completely distorted by accumulated errors.
So, at that time, the Bayes net approach did not solve the real-world problem of uncertain inference, as Eliezer implied it did.
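As a minimal sketch of that error-accumulation problem - hypothetical numbers, nothing like the original NASA system - chaining a few uncertain conditional-probability estimates quickly produces a distribution with no textbook shape:

```python
# Hypothetical sketch of the problem described above: each link in an
# inference chain is an uncertain estimate of a probability, and
# because probabilities are clamped to [0, 1], a few steps of
# propagation produce a skewed, non-classic distribution. Monte Carlo
# makes this visible without computing shapes analytically.
import random

random.seed(0)
STEPS = 4           # ~4-step lookahead, as in the anecdote above
SAMPLES = 100_000

def noisy_prob(p, spread=0.15):
    """A sensor-style estimate of a probability: truth plus Gaussian
    noise, clamped to [0, 1] -- the clamping distorts the shape."""
    return min(1.0, max(0.0, random.gauss(p, spread)))

finals = []
for _ in range(SAMPLES):
    p = 0.9                   # assumed prior (e.g. "this area is burning")
    for _ in range(STEPS):
        p *= noisy_prob(0.8)  # multiply in an uncertain conditional
    finals.append(p)

# The resulting distribution is bounded, skewed toward 0, and matches
# no standard family -- the "weird shape" the comment describes.
finals.sort()
print("mean   :", round(sum(finals) / SAMPLES, 3))
print("median :", round(finals[SAMPLES // 2], 3))
print("5%-95% :", round(finals[SAMPLES // 20], 3),
      round(finals[-(SAMPLES // 20)], 3))
```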
But Eliezer was not even aware of the bigger problem with logic, which Bayes nets have in exactly the same way as the old-fashioned logic systems Eliezer thought he could patch up with Bayes: they do logic on atomic symbols. The main result of AI research over the past 60 years has been to demonstrate conclusively that atomic symbols are inadequate for intelligence. Human concepts are not static structures; they are functions.
I believe this includes most human values. Eliezer is AFAIK still under the illusion that "values", as in "human values", can be taken as foundational things which may not change; and that is not the case. The things we think of as values are abstract things like preserving our own lives. The things that may not change are the physical processes by which neurons fire and strengthen or weaken synapses, and the very low-level logical descriptions of those processes.
Eliezer has many times emphasized the need for /absolute certainty/ to keep an AI "safe". Under his belief that there are such things as "final values" which are atomic, static things, it seems extremely difficult but theoretically possible to preserve them. I'm saying that it is not even theoretically possible, because they are necessarily dynamic processes, not static structures; and the nature of intelligence is that it bootstraps itself without having any foundational beliefs other than very basic ones like the specific patterns that neurons in early visual areas detect and the wiring among the different brain functional regions. There is no basic axiom set or goal set which can remain unchanged. Having intelligence requires having the ability to change beliefs, and there is no special category of beliefs which is both complex enough to embody moral values, and simple enough to remain fixed.
> You can't make a bucketful of pure "value". It may feel like value is out there in the world. But really it only exists in your head to help you make decisions.
Your model of my beliefs is off in space somewhere. I'm not a Platonist.
> Beware arguments by definition. Imagine a machine that only desires paperclips. And invents a fusion reactor to power it's paperclip factories. Do you want to argue that no possible configuration of computer code would act like this?
It would be possible to build such a machine, but you're requiring that the machine not be intelligent enough to notice that its actions lead to nothing but more paperclips. A human is a machine designed only to desire to produce more humans. No; it really is. The human has cognitive handicaps which evolved to prevent it from noticing that its desires serve no real purpose except to reproduce. But smart humans notice this nonetheless. The fact that existential angst and radical progressive politics are confined to humans of above-average intelligence suggests that humans could evolve only up to the level at which most humans aren't smart enough to develop existential angst and radical progressive politics, because going further would lead humans to override their evolved values, causing civilizational collapse. Which existential angst and radical progressive politics seem to be doing right now.
Intelligence requires being aware of what you're doing, and able to figure out what your actions are likely to accomplish, and critique the purpose you seem to be serving relative to some action-selection mechanism, which we can call a theory of morals, which is a conscious set of beliefs not completely constrained by evolutionary instincts.
> Yes neural nets are likely to have complicated goals. But those are still goals.
No, they are not goals as you and Eliezer define that term. They are algorithms. There are no atomic symbols. There is nothing unchanging in the brain except the laws of physics. There is no solid foundation for you to build your goals on to make them solid and unchanging.
>> If it devotes ANY resources to maximizing paperclips, it will be outcompeted.
> What world are you imagining where there isn't enough slack in the system to make a single paperclip? What kind of competition? Economic. Military?
Yes, military, economic, sports. Do you think an Olympic athlete could win a gold medal by focusing on winning the race AND on anything else at the same time? In economic competition between nations, a 0.5% advantage is enormous, enough to stomp any adversary within a century in human time.
I don't think you've grasped anything that I meant to say. I don't know whose fault that is. I built AI systems for 30 years, much of that spent designing symbolic representations for beliefs associated with linguistic statements; and I know what the pitfalls are so well that I may have internalized them to an extent which makes it difficult to communicate my chain of reasoning. I don't know if it's even possible for me to convey my reasoning to someone who hasn't spent many years building symbolic AI systems and figuring out why they fail. I don't think anyone can understand the important points without a lot of hands-on experience processing human language with both symbolic and neural or at least statistical AI systems; and there are very few such people.
> So, at that time, the Bayes net approach did not solve the real-world problem of uncertain inference, as Eliezer implied it did.
Eliezer described problems with non-monotonic logic and said that Bayes nets solved those particular problems. This isn't just cheering "yay Bayes nets".
> But Eliezer was not even aware of the bigger problem with logic, which Bayes nets have in exactly the same way as the old-fashioned logic systems Eliezer thought he could patch up with Bayes: they do logic on atomic symbols. The main result of AI research over the past 60 years has been to demonstrate conclusively that atomic symbols are inadequate for intelligence.
https://www.lesswrong.com/posts/fg9fXrHpeaDD6pEPL/truly-part-of-you
"And of course there’s nothing inside the HAPPINESS node; it’s just a naked LISP token with a suggestive English name."
Sounds to me like Eliezer was already aware of, and actively trying to debunk, all your strawmen.
> Eliezer is AFAIK still under the illusion that "values", as in "human values", can be taken as foundational things which may not change; and that is not the case.
I'm pretty confident this is false. Eliezer isn't prone to fits of random bizarre stupidity. But combing through his writing looking for a quote that specifically refutes this particular stupidity isn't easy.
> Eliezer has many times emphasized the need for /absolute certainty/ to keep an AI "safe".
Er, where? He has a whole article saying how you can't be absolutely certain of anything. https://www.lesswrong.com/s/FrqfoG3LJeCZs96Ym/p/QGkYCwyC7wTDyt3yT
Formal mathematical proofs are used to show that computer chips function correctly. Because, for something that complicated, it's hard to get even 95% certainty without a proof. Of course, the proof only holds if the transistors actually function correctly.
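To make that concrete: real chips are verified with symbolic tools (model checkers, theorem provers), since brute force can't cover the full state space, but the underlying idea is the same as in this toy sketch, where exhausting a finite input space is itself a complete proof. A minimal illustration (not real EDA tooling):

```python
# Toy illustration: for a circuit with finitely many inputs,
# checking every case is a complete proof of correctness.
# Here we "prove" a gate-level 1-bit full adder against its spec.

def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Returns (sum_bit, carry_out) computed from AND/OR/XOR gates."""
    p = a ^ b
    s = p ^ cin
    cout = (a & b) | (p & cin)
    return s, cout

# Spec: the two output bits, read as a 2-bit number, equal a + b + cin.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("verified: all 8 input cases satisfy the spec")
```

Of course, as the comment says, the proof only holds if the transistors actually behave like the gates in the model.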
> It would be possible to build such a machine, but you're requiring that the machine not be intelligent enough to notice that its actions lead to nothing but more paperclips.
Read this.
https://x.com/allTheYud/status/2007854657901613279
> A human is a machine designed only to desire to produce more humans.
To an extent, yes. But I wouldn't describe this as "humans reflected on our values and decided this was stupid, and chose something objectively better". I would describe this as "evolution messed up its alignment work, because evolution is stupid". Some human values include eating sweet food, whether or not it is healthy. And having sex with contraception. Because in the ancestral environment, the sort of minds that ate sweet food and had sex did tend to pass their genes along.
"which we can call a theory of morals, which is a conscious set of beliefs not completely constrained by evolutionary instincts."
If the rest of the tribe says it's a sin to eat beans, you better not eat any beans either, at least if you want to not get chucked out. Humans evolved to, in part, learn morality from our culture.
> No, they are not goals as you and Eliezer define that term. They are algorithms. There are no atomic symbols.
Where did you get the bizarre idea that goals are only allowed to be "atomic symbols"?
> Yes, military, economic, sports. Do you think an Olympic athlete could win a gold medal by focusing on winning the race AND on anything else at the same time?
Well even they have some free time. Some of them have children.
Besides, most people decide Olympic medals aren't that great, and don't bother to play that particular game.
> In economic competition between nations, a 0.5% advantage is enormous, enough to stomp any adversary within a century in human time.
Sure, but keeping that advantage up for a century is hard. There are also balancing effects: everyone else teaming up against anyone who's getting too big, spying making it easier to copy an invention than it was to make it the first time, etc.
But suppose this does happen. You start off with 100 ASIs. Each century, the half of them that gains power 0.5% faster squeezes out the other half. After 600 years there is only 1 ASI left, and then it can do whatever it wants.
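The toy numbers do come out as stated; a quick sketch, assuming the slower half simply drops out each century:

```python
# A 0.5%/year growth edge compounds to ~1.65x over a century, and
# halving a field of 100 ASIs each century leaves one winner in ~600 years.
edge = 1.005 ** 100
print(f"one century of a 0.5%/yr advantage: {edge:.2f}x")  # ~1.65x

survivors, centuries = 100, 0
while survivors > 1:
    survivors //= 2  # the outcompeted half drops out
    centuries += 1
print(f"{centuries} centuries (~{centuries * 100} years) until one ASI remains")
```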
> and I know what the pitfalls are so well that I may have internalized them to an extent which makes it difficult to communicate my chain of reasoning.
I feel you are warning about pitfall 101, and I am going "yes. I already know. Eliezer already knows. Lots of other people have spotted that too"
“Neural nets, or any other AI design, are made of maths not magic. You can, in principle, prove theorems about them.”
My whole point, which I think I’ve stated twice, is that you can prove theorems about neural nets, but you can’t prove theorems about English sentences. You can prove that a net will converge, and you might even be able to show that some theoretical logic architecture would perform a logically valid deduction on neural activation patterns. But the minute you map anything back into English sentences, it ceases to be logic, and any conclusions drawn are no longer valid. If you could express your values accurately without words, using only the attractors in dynamic neural activation patterns which compute concepts in a neural network, you might conceivably be able to express your values and prove that a program would preserve them. But nobody is trying to do that.
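(A concrete instance of the first half of that claim: the classic perceptron convergence theorem says that if the training data are linearly separable with margin γ and fit inside a ball of radius R, the perceptron makes at most (R/γ)² mistakes. The theorem is entirely about vectors and weights; nothing in it says what the inputs mean in English, which is exactly the gap being pointed at here.)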
Re. Eliezer’s Spock-certainty disclaimer, he has also posted a long essay explaining that AI safety is really hard because even if you get the logic right, someday in the distant future, random errors in the computation hardware caused by solar radiation will draw an invalid conclusion in one of the bajillions of AI circuits in the galaxy, and AI will no longer be safe. So his awareness that logical certainty is unattainable does not negate his admission that logical certainty is necessary.
> but you can’t prove theorems about English sentences. You can prove that a net will converge, and you might even be able to show that some theoretical logic architecture would perform a logically valid deduction on neural activation patterns. But the minute you map anything back into English sentences, it ceases to be logic,
That is true.
> If you could express your values accurately without words, using only the attractors in dynamic neural activation patterns which compute concepts in a neural network, you might conceivably be able to express your values and prove that a program would preserve them.
Agreed.
> But nobody is trying to do that.
Disagree. People are trying to do that.
> he has also posted a long essay explaining that AI safety is really hard because even if you get the logic right, someday in the distant future, random errors in the computation hardware caused by solar radiation will draw an invalid conclusion in one of the bajillions of AI circuits in the galaxy, and AI will no longer be safe.
Which essay? This argument seems kinda stupid (once the first AI is working, you can let the AI itself figure out the error correction stuff). Also, I've read most of Eliezer's essays, and I don't remember this one.
There are already people much smarter than you that believe in every one of the things you listed as incompatible with superintelligence.
You have misunderstood the paperclip thought experiment.
It is a salient example of a real problem: when you have an optimization process, and you yourself don't know what the solution to the problem is, that process may discover a solution that, in hindsight, you aren't actually happy with. The "alignment problem" is that it is very very hard to know ahead of time that the optimization process is actually optimizing for the values that you wish (in hindsight) it were optimizing for. This is especially true if that optimization process is superintelligent, and you have very little hope of accurately predicting even the space of possibilities that it is able to explore.
There are lots and lots and lots of examples in computer science, of problem solving programs that came up with unexpected solutions that surprised their programmers. Lenat's Eurisko designed a strange (but winning) Traveller fleet. And then, after they changed the Traveller rules, Eurisko did it again with a different loophole. NASA used an evolutionary algorithm to design a 3D "evolved antenna". Google's AlphaZero started playing chess and then go with moves and strategies never before seen or understood by humans.
If you turn over the economy, factories, manufacturing, and the military to superintelligent AIs ... the concern that maybe they will optimize the universe into a space that doesn't fully match our values and that, in hindsight, we regret, is a valid and important concern.
The "paperclip problem" is just a very clear and salient extreme example, to remind everyone of the actual real legitimate problem. There is no obvious way to insure that a superintelligence running the world WON'T turn everything into paperclips. That would require predicting the solution that the superintelligence comes up with, for its goals, and we are not capable of doing that prediction accurately.
We could make an AI believe in any of those things you think are impossible. We just don't want to.
What's silly is thinking that a "post scarcity" society is possible without limiting population growth...possibly without slightly negative growth. The universe may not be finite, but the part within our light-cone is.
i think, in plain terms, that scott is saying that worrying about the permanent underclass is a bit of a vain and neurotic worry, and that in general, quests driven by vain and neurotic worries tend to produce both less fun lives in the present, and less interesting legacies in the future. i think he is saying that it is not befitting of a group of otherwise fabulously well off (in the timescale of the universe, and likely in the present) and intelligent and resourceful people to spend their time fretting about such vain concerns, which are largely out of our control anyway. (thanks, dad)
the paperclips concern, by comparison, is less self-interested. if one was sincerely concerned about this and dedicated her life to solving it, she would have both more fun in the present, and more chance at an interesting legacy in the future.
Your comment -- and the ensuing responses, including Scott's -- get at why this post doesn't really land for me. In fact, the discussion shows how warped the discourse around AI feels these days. I gather from the post and from Scott's comment that he sees three possible scenarios:
1. AI creates great wealth controlled by a few, and a permanent underclass. This is, we are told, a silly meme spread by unsophisticated people, representing a "tiny shoreline" of probability.
2. AI kills us all. Scott has given a p(doom) of 33%.
3. AI vaults us into a post-scarcity world in which there is so much wealth sloshing about that even free moons count as crumbs from the table. I gather Scott's probability on this is roughly 66%, although this 66% presumably also includes other less extreme possibilities that are still generally pleasant for all.
I'm pretty sure the proper Bayesian take on this is:
The most likely scenario by a wide margin is #4: the future looks mostly like the present over the medium term (which I will call the next 20 years). Yes, things are going to change a lot, but think of it as roughly the change from 1980 (pre-personal computer and consumer internet) to now. Or maybe it's 3x that, which is really really big change! But still doesn't have us in bizarro universe.
After that, option 1 (the one Scott calls a "tiny shoreline") is far more likely than the other two, simply because it is the least outlandish. Presumably there are a wide range of similarly low-probability outcomes in the same basket of "major social disruption, but things are still legible to us today." Some of these outcomes are good, some bad, most are probably in between. Everyone will meme their pick of these based on vibes.
After that are extreme tail probabilities like 2 and 3 above. The discourse has it that self-reinforcing cycles mean that these tail risks are actually the only plausible outcomes, and the middle has been hollowed out. This style of argumentation -- let's call it the Eliezer method -- is treated as being the most intellectually serious, despite being the opposite.
Really hoping this fever breaks, but I don't think it will anytime soon. We're basically going through another industrial revolution, but sped up. It's going to be a bumpy ride.
there is no such thing as post scarcity because the lightcone only expands cubically
It takes light six hours to escape the solar system. There's enough mass in the solar system for every currently living human to have their own several mile-wide space habitat. If people don't get their own several-mile-wide space habitat, it's not because of physical impossibility, it's because we screwed up somewhere.
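Back-of-envelope check of that claim, using rough public figures (the planets, Sun excluded, total around 2.7e27 kg; population ~8 billion):

```python
import math

planet_mass_kg = 2.7e27        # rough total for the planets, Sun excluded
population = 8e9
per_person_kg = planet_mass_kg / population  # ~3.4e17 kg each

# A habitat is mostly hollow; even pricing each share as SOLID rock
# (density ~2500 kg/m^3) gives everyone a very large ball of material.
volume_m3 = per_person_kg / 2500
radius_km = (3 * volume_m3 / (4 * math.pi)) ** (1 / 3) / 1000
print(f"{per_person_kg:.1e} kg each -> solid rock sphere of ~{radius_km:.0f} km radius")
# ~32 km of solid rock per person; a several-mile-wide hollow habitat is easy.
```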
but many people would like to have children, and if they could the exponential would quickly catch up to the cubic. I made a meme pyplot about it https://x.com/warty_dog/status/1708707579096858953
Yeah, I think allowing some people to have infinite children in a way that takes away other people's space habitats is an example of screwing up somewhere. I realize the total utilitarians might disagree.
I'm reminded of the short story "The Deadly Mission of Phineas Snodgrass" by Frederik Pohl:
https://galacticjourney.org/stories/Galaxy_v20n05_1962-06.pdf
surely childhaving being limited is an example of scarcity
In my ideal scenario (stolen from some other people who might publish it later), everyone gets some portion of space to do what they want with. If that's a 1/10 billionth share of the solar system, it's enough for everyone to have a few million children with. If people want more than a few million children, they can do complicated bargaining around lightcone related issues (eg trade their space habitat now for a galaxy ten million light-years away). I agree that infinite children aren't on the table, but I think a few million should be enough for anyone.
granting that, the children will experience scarcity by not having space for children of their own
I wrote about this on Less Wrong many years ago. Population grows exponentially in time; the volume of an expanding sphere grows with the cube of time, and the sphere cannot expand faster than the speed of light. Therefore population cannot continue to expand exponentially at any rate above 1 child per person.
Note that any rate greater than 0 children per person per year is unsustainable.
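The mismatch in one picture: any positive exponential growth rate eventually overtakes cubic expansion, no matter how small. A minimal sketch (population multiplying by (1+r) per year versus reachable volume growing like t³):

```python
for r in (0.001, 0.01, 0.02):          # 0.1%, 1%, 2% annual growth
    t = 2                               # start where the cubic is ahead
    while (1 + r) ** t < t ** 3:        # wait for the exponential to catch up
        t += 1
    print(f"r = {r:.1%}: exponential passes t^3 after ~{t:,} years")
# Even 0.1%/year wins within a few tens of millennia; 2%/year within
# about a millennium.
```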
Yeah. We're gonna have to be satisfied with an average of 1 per person per lifetime, so long as we're stuck in this universe.
No, you can allow every (stable, pre-selected, committed) couple to have one child. Then the population stabilises at 2x the initial number. (You could probably even grant two children and assume that, for other reasons, at least some small % of each generation dies; the total still maxes out as a finite geometric series.)
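(The series in question: with one child per couple, each generation is half the size of the last, so the all-time total is N + N/2 + N/4 + ... = 2N.)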
https://randomfeatures.substack.com/p/inkhavens-first-retraction
Isn't it trivial to find value functions that can not be satisfied indefinitely? (I demand 10x more Y than I had yesterday)
Childbearing is probably the one desire humans actually have that can't be satisfied indefinitely. Seems like there are tons of possible solutions: fake children, other people's children in your community whom you help raise, and finally, removing such a desire entirely. Maybe this is cheating, but we wanted abundance so as to make us happy. If it literally can't work, it's basically a self-harming compulsion.
As a total utilitarian, I don't think the mere addition paradox is realistic. We can already create pretty cool virtual worlds with the computing power in your phone. If we have a computer powerful enough to run a human mind, adding a utopia on top of that can't possibly take that much extra computing power.
We could be post-scarcity in the sense of living in luxury for billions of years even if there are some scarce things, like children or an original Monet.
You're neglecting the effects of relativistic time dilation. Somebody who wants more than anything to have another kid every, say, hundred subjective years, in such a way that most of those kids share their original values, can have each kid start off far more than a hundred light-years away from any of the others, without needing FTL or otherwise violating known physics.
Ah, but then we are back to the question of how do we find community...
You can help fight against post scarcity by making posts yourself.
Relativity complicates this picture. For example, the space of points that are a 1 year (from the traveler's perspective) trip from a given point is a hyperboloid shell in spacetime with infinite size. With arbitrarily high local acceleration you could spread any population as thinly as you wanted over this infinite space, all of them experiencing the same subjective time lapse. (It is the subjective, experienced time that is relevant for population increase.) It's not obvious that you couldn't make space for an exponentially growing population. Some questions to ask: Is this still true if you have a max acceleration? Is there some other bottleneck that becomes relevant than simply space? I mean, there is the preferred reference frame of the CMB rest frame. Perhaps the constraint of having to gather sufficient resources, whose distribution wouldn't be Lorentz invariant, complicates things.
There's a clear issue here: you will need to use an ever-larger fraction of the total mass-energy within a given light cone just to accelerate a given mass a little bit more. If you can't get energy from nowhere, then how exactly do you intend to keep this acceleration going, even under literally the most optimistic conceivable scenario?
The total energy within a lightcone is of course infinite. The question is who can access it and when.
It doesn't take an exponentially increasing amount of energy to hold a constant acceleration in the local frame.
Say you have a civilization that is shaped like an expanding shell. As the population expands it will encounter new fuel such that they can hold a constant local acceleration outward. If they do this, they will find the distance between them grows exponentially in subjective time, allowing them to reproduce exponentially.
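This matches the standard relativistic-rocket result: at constant proper acceleration a, the coordinate distance covered after proper time τ is x = (c²/a)(cosh(aτ/c) − 1), which grows exponentially in subjective time. A quick check at 1g:

```python
import math

g = 1.032                      # 1g in lightyears/year^2, with c = 1 ly/yr
for tau in (1, 5, 10, 12):     # subjective (proper) years under thrust
    x = (1 / g) * (math.cosh(g * tau) - 1)   # coordinate distance in ly
    print(f"{tau:>2} subjective years at 1g -> ~{x:,.0f} light-years")
# ~12 subjective years already spans a galaxy-sized distance (~100,000 ly).
```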
Someone from the 12th century would almost certainly describe the modern day as "infinite post-scarcity", and yet they would see that the suffering of the poorest in our society is perhaps only marginally less than that of the poorest in theirs.
Why shouldn't I feel the same about my life in 50 years?
The poorest person this day in NYC has a roof over their head, won't starve, and can hope to find a job that pays enough for a NYCHA apartment. The poorest person in NYC 300 years ago starved.
NYC has no homeless?
Not unsheltered
Because the unsheltered would die in winter, or move away before they do, I suppose. At any rate, "roof over the head" is rather euphemistic for "subway" and similar accommodations.
No - NYC has right to shelter so anyone who is sheltering outside or on the subway is doing so by choice. Not to argue that shelters are comfortable, but the original claim is correct.
A lot of people in NYC don't use the shelters, not because they're too mentally ill to pursue accommodations available to them, but because the shelters are actually less safe than not using them, because they're at risk of being victimized by other people who're making use of them.
“Right to shelter” doesn’t mean “suffering is less for the unsheltered”. If post-scarcity means we can eliminate suffering entirely, but only by some surgery which large numbers of people refuse, and the people who refuse suffer just as much as today, I don’t think we could say that they had eliminated suffering, just because everyone still suffering was suffering by choice.
More snarkily, there definitely isn’t a right to shelter in LA. And my understanding was that outside of lost-explorer scenarios, “death by starvation” mostly meant “not well nourished enough, so died of things that normally wouldn’t kill you”, which I think definitely hasn’t gone away. I agree that the proportion of people below each level of poverty has decreased, but the problems haven’t gone away.
If you genuinely believe there are no people starving in NYC right now you are delusional.
The poorest today are in a much better situation than those in the 12th century, by almost every measure, mainly medicine but also food quality, clothing, literacy.
On reflection, I think you might be right.
Still, I don't expect being poor to start being a pleasant experience any time soon, and I don't look forward to becoming poor thanks to AI.
Part of that is that relative status would still suck, for which I have some sympathy. The other, more important problem is the precariousness of relying on the charity of others. Even if you had your needs met, it would not feel secure.
Dario wouldn't give you the money personally each year in some way where he might change his mind. He would donate it to a trust administered by an AI whose value function is to give you money each year.
Yes, but this relies on the legal system to stay consistent over long periods.
More generally, I think that _in principle_, if AI stayed under human control and was sufficiently productive to create an automated economy that, to keep this simple, generated 3 or more times the current economic output, then we could
- pay a UBI to everyone equivalent to current standard of living, from taxing 1/3 of the revenues of the automated economy
- pay 1/3 to the owners of the automated economy - yes they get rich as Croesus, I don't begrudge them that
- pay 1/3 for the business-to-business or intra-corporate maintenance of the AI + robotics
And this is a win-win for everyone.
The cautionary note is that everyone's income is now _exclusively_ set by politics and law. Today, people have the bargaining chip that they can withdraw their offer of their labor if the offer for it is too stingy. Post-AGI, in the absence of that, there is nothing to constrain the system from e.g. halving the UBI allocated to people who are insufficiently MAGA or insufficiently Woke, or insufficiently whatever the ideology of decades from now is.
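(For concreteness: if the automated economy produces 3x today's output, the 1/3 earmarked for UBI equals 100% of today's total output, which is why it can fund everyone's current standard of living.)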
Yeah, that's exactly it. Trusts and foundations, and they become their own thing, and somehow the money goes to the important functions of the important foundations, and the guy on the street stays on the street.
I trust someone handing out $100 bills to beggars on the street a hell of a lot more than I trust anonymous foundations administered by AI.
EDIT: Think of almshouses. Set up in mediaeval period by the rich to feed and shelter the indigent. Over time (and the changes of history, e.g. the Reformation) they ended up being 'charitable institutions that paid nice comfy salaries to the trustees but somehow the indigent were no longer being fed and housed'. See Dickens' excoriating portrait of the workhouse system in "Oliver Twist" where the trustees gather for the annual meeting and accompanying banquet while the poor (who are supposed to be getting the benefit of the money raised) are confined to gruel.
https://en.wikipedia.org/wiki/Almshouse
https://www.ourwarwickshire.org.uk/content/article/leamington-hastings-almshouse
"The almshouse in Leamington Hastings was founded by a schoolmaster called Humphrey Davis in 1608 for eight poor old people (later expanded to house ten). As you can see, the building offers an attractive prospect near the centre of the village. The almshouse has been through difficult times in the past. The original trustees (brother and nephew of the founder) were accused of not putting poor people in the almshouse and selling off land that had been left to support the charity; as a result they were ejected as trustees in 1632."
They do still exist, but the original donors and donations are long-gone and modern ones rely on new foundations and fundraising. Some remain as historic buildings but are now vacant, some get sold off:
https://www.almshouses.org/what-is-an-almshouse/
https://www.thirdsector.co.uk/older-peoples-charity-sell-off-three-buildings/finance/article/1936221
See the Ford Foundation for another example.
The “locked-in AI value function” frame is about as good as any, assuming a good frame exists.
If we have an even better way of promise keeping, the need for a legal system and for ethical people goes away. (Saying this as a lawyer who wants to be automated.)
It’s exhausting how many replies here are “but humans are fallible and human society is decadent!”
And all the natural economic consequences of "post scarcity" exist today. We have so many resources and yet so many people struggle to make ends meet. People are kept poor by spiralling rents.
I wouldn't go quite so far as to say that we have post scarcity today. We still have to work, which means we still have to live near jobs. When I think post scarcity, I think that I can own a cheap humanoid robot that builds me a decked-out cabin in the woods, on land that was cheap because it is in the middle of nowhere.
But I agree that if someone from 200 years ago visited today, they'd be wondering why the hell housing isn't dirt cheap given that we have all these tools and equipment to build with and enormous amounts of empty land.
I don't think that's actually true. The poorest people today are homeless, living on the streets, or in shelters where they have to worry about being stolen from or assaulted by other sheltered residents. They don't reliably have diets that are capable of maintaining long term health, many of them are functionally illiterate, live in the same outfits more or less perpetually, and have serious mental health issues that impede integration into society.
At the extremes of poverty, for both modern and ancient people, you're unable to maintain basic needs for survival, and die, usually in a protracted manner.
People at this extreme of poverty are a smaller proportion of the population than in the 12th century, but they certainly exist, and given levels of population growth, the absolute number is by no means insignificant.
Do you think homelessness was better a century ago?
Homelessness is much more a mental health or addiction problem than an economic one. There are lots of problems with psychiatric policies, but in the past homeless people were getting leprosy and cholera and had no access to any psychiatric drugs, and the food the homeless have now is much better than what their equivalents had a century ago.
I'm not saying homelessness was better a century ago, but homelessness not being worse now than it was a century ago doesn't mean that the homeless today are much better off than they were in the 12th century: the floor can stay around the same place.
I don't think it's actually the case that the food that the homeless have today is much better than they would have had a century ago. The sanitation is probably better (not counting the cases where homeless people are forced to rummage through trash for their food, which definitely occurs in some cases,) but the nutritional value is often worse.
But the floor isn't in the same place; it has got markedly better.
The food quality for homeless people has got much better; it is an objective empirical fact. Three-day-old sandwiches are much better than pottage or gruel. Any medieval or 19th century pauper would dream of dumpster diving in the 21st century.
What criteria are you using to judge that?
Three day old sandwiches are probably not better than pottage or gruel, on account of the fact that they'll be spoiled and make you sick. People do still die out on the street, from exposure, malnutrition, etc. There's not a whole lot of room for living conditions to be much worse than that.
The sanitation is also far better if you do count the cases where homeless people rummage through the trash for food. You think people weren't doing that back then?
SROs were legal a century ago. https://www.pew.org/en/research-and-analysis/issue-briefs/2025/07/how-states-and-cities-decimated-americans-lowest-cost-housing-option
I think the example I gave was unhelpful. The modern day also has much lower inequality than historical levels, at least in the past few hundred years, especially if you consider implicit equalisers like democracy + welfare state.
We live in post scarcity today and people still suffer in poverty if the market regards them as providing no value and they don't own capital.
I don't see why 2100 would be any different, assuming no major change in the political and economic system.
Poorest where? In third world countries there are still tons of people barely living.
The poorest people in India, Kenya and Ethiopia still have antibiotics, schools and quite often even smartphones. Yes, there are warzones like Congo and Somalia where people don't have those things, but there have always been warzones, and there are fewer than there used to be. The 10th percentile of poor people in the world today live in some ways better lives than many of the rich a century ago.
You know what, now I'm curious about comparison of life expectancy of bottom 10% (of the world, not america) of today and top 10% of 100 years ago, then of 1000 years ago. I think the numbers will be surprising regardless of direction.
Also with and without considering infant mortality.
The differences aren't subtle, in Ethiopia life expectancy was 50 in 2000 and it is 68 today, everywhere has improved massively and at every age level.
I'm not sure it's actually true if we're comparing bottom 10% now to top 10% before. Difference between bottom 10% before and top 10% before was also pretty stark.
They may have better widgets but much less living space. The land, it does not grow. Indians have one-seventh the land per capita that they had 100 years back.
People could once afford to buy land and build an independent house. That is simply unaffordable now in big cities. People live crammed together like they never did before.
This isn't living better.
Keep in mind that you only need to be significantly worse off on a single measure in order to be worse off on the whole.
An emperor has it better than a pauper on practically every dimension, but if said emperor is significantly worse off on the measure "access to air" (i.e. he is currently choking to death) I certainly know whose shoes I would prefer to inhabit.
The poorest people today are living better than the kings of the 12th century. They can get medical care that actually works! They can expect to live past 70! Their children are highly likely to live to adulthood! If they're just a step above the literal poorest person, they enjoy running water, exotic foods from around the world, and the world's knowledge at their fingertips. How much would the richest kings of the 12th century pay to have the technological and medical miracles we take for granted?
The poorest people today are almost certainly *richer* in some sense, but the question at hand is whether their quality of life is necessarily higher.
Financial security and independence certainly provide real improvements to quality of living.
The idea that everyone in medieval times lived lives of constant pain and misery seems patently false.
If a 12th century king had a passing merchant deliberately spit in his face, and then responded with immediate lethal violence, he'd have a solid chance to come out on top in any subsequent legal challenge from the merchant's next-of-kin. How much would a modern bum need to pay for the same guarantee of personal dignity?
I don't consider responding to spit with immediate lethal violence to be either ethical or an increase in one's quality of life.
The point is that a medieval king never even has to worry about the situation arising in the first place. They are secure in a way that the merchant is not, even though actually defending that security would be bad for all involved.
Or for a more modern example: going to war is almost always against the interests of a state, but having an army is not.
True, the medieval king just had to worry about being deposed and brutally murdered, along with his supporters and his entire family. It was far more dangerous to be a medieval king than to be a poor person in any rich country in the 21st century.
Which has nothing to do with JamesLeng's original point. Maybe the chronic disrespect faced by a modern homeless person is preferable to the risk of violence faced by a medieval king (though given how rare abdication was, I suspect not) but it is unambiguously a severe harm that will not show up in measurements of material wealth.
How do you consider the initial act of spitting on another's face? https://www.liberalcurrents.com/the-politi/
One hundred years ago, the son of the President of the United States died from a blister.
https://coolidgefoundation.org/blog/the-medical-context-of-calvin-jr-s-untimely-death/
It is frankly hard to wrap one's head around just how much better modern life is than any point before it.
A lot of things have definitely improved, and I'd agree that on average, people are significantly better off now than they were a hundred years ago, or nine hundred.
On the other hand, economic growth doesn't necessarily align with improvement to people's perceived quality of life (there are theoretical arguments for why it should, but I think there's very good reason to believe that these arguments are based on incorrect premises.) There's been tremendous economic growth over the last eighty years, but I've talked to plenty of people who were born from the 1930s to the 1950s about their feelings on how things have changed over that time, and many of them don't feel that people's overall quality of life or happiness now is better than when they were young- some opined that it seems markedly worse now.
But this progress does not scale linearly, especially in medicine. A healthy 30-year-old American's risk of dying from an infectious disease today is barely lower than the same risk in 1996.
There’s a very plausible outcome where ai is great and puts a lot of knowledge workers out of a job, increasing productivity immensely, but ASI doesn’t happen within our lifetimes. It’s already happening to youth employment a little bit. In this scenario, being in the “permanent underclass” is realistic, where let’s say gdp starts growing at 10% instead of 3% but your skill set commands next to zero value, so you’re on some kind of UBI breadline if you don’t own assets. I am not saying it’s the only thing, but I think knowledge workers with the means should own tech stock upside (something like a call on Nasdaq 3xing) to hedge this possibility.
I agree there is a chance of being part of a temporary underclass, and that this temporary underclass state might last longer than you can remain solvent, but this was always true.
Doesn’t it make a lot more sense to hedge for that scenario than post scarcity where I own a moon at minimum? Also, I can’t hedge against paper clipping. The “coastline” is the only thing you really can hedge, and I also think it’s probably modal. Maybe the difference is that you think post scarcity is very likely in any case (?)
If I'm in a temporary underclass and then die then does that count as a permanent underclass?
If they bury you it does.
My guess is most of your readers are knowledge workers and would be affected in this way, so this was a pretty irresponsible post to write.
The underclass is going to be better than my current life tho rite?
They said the Internet was going to 3x economic growth too....
But Paul Krugman is famous for saying the internet would have no more economic impact than the fax machine...
It's a subject for another post, but it's a very interesting question why the Internet didn't result in a big GDP/productivity boom. My theory is that, measured in terms of the prices/weights of things pre-Internet, growth did in fact boom, but this isn't captured by GDP.
For example, Google Search has probably immeasurably improved your life. But it's free, so it has no CPI impact. You can now watch Netflix and access something like a million shows for $20, whereas before you would have had to go to a theater or watch cable. It's a much better product! But it doesn't have much of an impact on CPI. And people don't feel like they're getting a ton richer, because it becomes a cost of participation in the American economy. Make no mistake, people would not like it if they had to go back to cable only; they would feel poorer.
Similarly, the great cheapening of goods due to Amazon/globalization has limited deflationary (and therefore real growth) power. Imagine you have an economy with $5 TVs, $5 food and $5 shelter (in year one). Only 10% of people can afford TVs. Then someone makes TVs way cheaper, by exporting labor to China or whatever, and the next year they only cost $1. Now 50% of people buy TVs (assume everyone buys food and shelter the entire time). The deflationary impact (therefore positive real growth) is only -4%! Yet, if you value the TVs at year-one prices, you've created $160 in value in a previously $1500 economy: an 11% rise.
TLDR: as things get really cheap, their basket weight goes to zero, so they stop showing up in measured productivity.
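A sketch of that arithmetic with one consistent base (the $1500 figure above prices the full three-good basket; against actual year-one spending, the gap is even starker):

```python
# 100 people; everyone buys $5 food and $5 shelter; in year one,
# TVs cost $5 and only 10 people own one.
people = 100
spend_y1 = people * 5 + people * 5 + 10 * 5        # $1050 actually spent

# Year two: TVs fall to $1 and 50 people buy one.
tv_weight = (10 * 5) / spend_y1                    # TV share of spending, ~4.8%
measured = tv_weight * (5 - 1) / 5                 # ~3.8% measured deflation
print(f"measured price decline: {measured:.1%}")

# Value the 40 new TV owners' sets at the old $5 price instead:
surplus = 40 * (5 - 1)                             # $160 of unmeasured gain
print(f"unmeasured gain: ${surplus} on ${spend_y1} of spending = "
      f"{surplus / spend_y1:.1%}")
```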
Maybe this will happen with AI too, where it creates a lot of consumer surplus and unemployment but growth doesn't move that much. Hard to say and it depends.
I am assuming that the part about relaxing because Dario Amodei took the Giving What We Can Pledge is a tongue-in-cheek gag designed to set up the literary world of this post, but on the off chance that you or any readers think that such a pledge means anything in the singularity timelines, remember that Sam Altman promised that humanity would be the primary beneficiary of OpenAI and that he would have zero equity.
I have no evidence that he's reneged so far, the reneging rate is pretty low, and it seems crazy to renege *after* you become a post-scarcity galaxy-owning oligarch. In any case, he's just one example; there are plenty of other philanthropic AI company employees.
As far as I know, Altman still doesn't have OpenAI equity. I think his plan to make humanity the primary beneficiary ran up against the need for vast amounts of compute that only investors could pay for, but the nonprofit does still have ~25%, which is actually technically better than we get with Dario.
"and it seems crazy to renege *after* you become a post-scarcity galaxy-owning oligarch"
Wait what, why? If I were a hypothetical billionaire sociopath in charge of a frontier lab who only cared about expanding my own personal power by any means necessary, I would do the utmost I could to pretend to be as kind and trustworthy as possible without seriously endangering my AGI project - such as by making voluntary commitments to give away money - in order to get more sympathy from regulators and the public, and just renege on my promises once I'm a galaxy owning oligarch. It's not exactly like anyone will have the power to take me to task.
Sociopaths aren't as good at pretending not to be sociopaths as people tend to think. Sociopathy is really a hindrance for leadership. Maybe one tech billionaire is a sociopath, but I sincerely doubt all of them are, based on observations and general research into sociopathy.
>Sociopaths aren't as good at pretending not to be sociopaths as people tend to think.
They wouldn't have to be.
If only 1 in 10,000 is "as good at pretending to not be a sociopath as people tend to think"..... they are the sociopaths likely to end up being the tech billionaires. Because the other 9,999 sociopaths will crash out way earlier in the process than billionaire-dom.
Even conditional on this being extremely rare among sociopaths, you'd expect all tech billionaire sociopaths to be of this type.
Going even further: You'd expect as many billionaire positions as possible to be filled by highly competent sociopaths, because sociopathy is an extremely positive trait in a business. Firing someone who's the sole breadwinner for their family and who lives paycheck to paycheck so you can replace them with someone who's 5% better at the job is an utterly immoral thing to do, but the sociopath both does not care, and has good enough social intelligence to determine that the fired worker will not whistleblow or leak company secrets. Almost all of the benefits of participating in society, with almost none of the drawbacks.
The relevant question isn't "will there be at least one sociopathic billionaire?" (yes), but rather "will there be at least one non-sociopathic billionaire?" (also yes).
That being said, I still don't expect to be gifted a moon within my lifetime, simply because I don't expect tech to progress quite that fast. (Note moon-gifting requires a much higher tech level than world-destroying, so this isn't a reason for optimism on that front.)
They don't have to be an unusual sociopath for things to end up very bad for ordinary people, though; they just have to be an ordinary, not-particularly-altruistic person who will rationalise away their responsibility for any suffering that isn't physically right in front of them.
I think there is something deeper going on than "we had a very specific technical problem which created an unforeseeable need to abandon a prior commitment." A commitment to not betray a constituency with potentially adverse interests is actually quite costly. Consider the following toy model of pre-singularity negotiations between OpenAI and Anthropic:
OpenAI - "We are almost ready to begin colonization of the lightcone. Earth will be rendered uninhabitable by waste heat within 3 days. If you give us your clusters, we will allow the executive committee of Anthropic to board our orbital shelter and share in the glories of the AnthrOpenAI Interstellar Empire."
Anthropic - "NO! Give us two more weeks. We need that time to evacuate Earth's population to the orbital shelters."
OpenAI - "We cannot wait. XAI might have launched their own Von Neumann probe project by then. Our offer stands. Take it or leave it."
Obviously Dario betrays humanity in this situation.
I really don't think that is obvious. What is the benefit to taking the deal? That he personally lives?
He gets to live and be co-emperor of the universe yes, that’s the upshot of the deal.
Got it. I don't think it's at all obvious he would take that deal.
My thought process is that if he was particularly scrupulous about not materially contributing to the genocide of Earth’s population he would have already shut down his AGI company, but I admit I don’t know the guy personally.
> and it seems crazy to renege *after* you become a post-scarcity galaxy-owning oligarch
Throughout these conversations, there seems to be a big missing middle in the set of futures you deem worthy of consideration. What about the scenario where AI:
- Fully replaces all human labour
- Gives those who control it overwhelming military power
- Rapidly accelerates science & tech progress but isn't "magic" -- e.g. it doesn't immediately enable rapid, low-risk colonisation of our galaxy, or medical invulnerability, or perfect-fidelity mind uploading/reincarnation
So we have a small group (perhaps a single person) who can do basically whatever they want with/to the rest of us, but who remain earthbound and mortal. From their perspective, the rest of us have 0 productive value and, except for moral/altruistic reasons or to the extent that we are attractive or entertaining in some way, have exactly two relevant properties: we are using still-scarce resources (land, if nothing else, remains scarce); and we are a potential threat to the long-term safety and power of the ruler(s) -- even if we have absolutely no chance of mounting a revolt, we are a breeding ground for new pathogens.
If Amodei happens not to end up in the ruling class, or if he turns out to be a bad guy, or if he's a good guy but there are also one or two sociopaths in the mix, how does this scenario not end terribly for most of us? Or if you think it's unrealistic, why??
I think these are some different scenarios that it's worth teasing out:
- Full automation of labor (and research) ought to cause some kind of insane economic explosion. It's hard for me to figure out how it doesn't, unless there's some kind of regulation banning it from doing so. You should be able to 10x your labor force every year or two, even without true superintelligence, just by having your robots build other robots. Once we get to 100x GDP within a generation, I think we're in such a post-scarcity world that it would be surprising if the average person ended up poorer.
- I think the most likely scenario is that AI is controlled the same way it is today - it's owned by corporations, which are owned by shareholders, within a country ruled by a government. At the very least, this leaves many shareholders better off; more likely, the government has some opinion (eg redistribution).
- I think you only get it controlled by a single person in some really surprising scenarios. First, you need for one company to pull way ahead of everyone else. Second, the CEO has to coup his own employees and shareholders - basically find some way to get the AI (which according to company regulations should be following some combination of user instructions, a spec written by the alignment team, and the law) to follow his direct orders instead. I think this looks like forming a conspiracy with ten people on the alignment team, but this is tough to do without one of them whistleblowing. Third, you need the government to not be watching for this exact scenario very carefully, which itself requires a fast enough takeoff that they don't realize AI is a big enough deal to be a security risk until it happens. The two most likely scenarios IMHO are either Altman being normally rich for the normal reasons within our existing social order, or some set of AI company leaders, government officials, etc gradually tightening control in a way similar to how some set of company leaders and government officials control the defense industry.
- If a single person does control the world, what are they going to do? Build a very big mansion? Okay, that takes 0.00001% of his wealth, now what? I think outside of the rare scenarios where they're a literal sadist (probably not that common), he just rules as a king, which is unfortunate and I'd prefer something else, but there are lots of very rich places ruled by kings (eg Saudi Arabia, Qatar, Brunei) and they're not so bad.
In none of these cases does having a $10 million B2B SAAS startup help much.
See also footnote 2 on https://www.astralcodexten.com/p/most-technologies-arent-races
Thanks, I appreciate the reply.
> Once we get to 100x GDP within a generation, I think we're in such a post-scarcity world that it would be surprising if the average person ended out poorer.
Usually, the average person benefits from economic growth due to some combination of their role in producing that wealth and their ability to cause trouble if they feel too hard done by. But I'm thinking of a scenario where the average person can no longer play any role in growing the wealth of anyone with access to AI labour, and we don't have the violent kind of bargaining power either, because the ruling class has a robot army to go with its robot labour force. So I don't see many reasons why the average person would be cut into this deal, other than altruism and political/institutional inertia. Both of these will probably play some role, but not enough to prevent an explosion of inequality, such that we should at least expect control of superintelligent AIs not to broaden significantly beyond whatever group possesses it when this all kicks off.
I'm happy to ignore the single dictator scenario (except in responding to your third point below, which I think remains relevant if we replace singular they with plural they), but I don't think you've successfully made a case against the oligarchy scenario. Once it becomes clear that immense, future-shaping power is available, I expect to see rules and institutions become relatively meaningless except insofar as they happen to mirror actual physical power. Currently, governments are torn between the influence of rich powerful people and average voters/workers. But if the average person becomes economically irrelevant (except as a drain on resources) and physically negligible (because the people who matter are now basically invulnerable to terrorist attacks), I don't see why governments don't become pure servants of the new oligarchy.
> If a single person does control the world, what are they going to do? Build a very big mansion? Okay, that takes 0.00001% of his wealth, now what? I think outside of the rare scenarios where they're a literal sadist (probably not that common), he just rules as a king, which is unfortunate and I'd prefer something else, but there are lots of very rich places ruled by kings (eg Saudi Arabia, Qatar, Brunei) and they're not so bad.
I think there's a good chance he has grand plans: at a personal level, chasing immortality for himself and anyone he cares deeply about; and externally, something like galaxy colonization, finding aliens to talk to(/fight/fuck), or crafting all of Earth into some kind of weird utopia. (I reckon most of these types will at least want to chase immortality. If it's a Musk, he'll definitely want to do space stuff. And it's hard for me to imagine a personality type with the drive and ruthlessness to attain this position, but without the drive and grandiosity to at least want to radically transform Earth given the opportunity.) Assuming there is *something* he wants to do other than sit around on a pile of gold, he'll presumably want more compute and various raw materials. I don't think it's implausible that the appetite for compute, together with whatever physical stuff is relevant to the grand project, grows to the point that it can only be fed by ~all of Earth's land and productive capacity.
Re footnote 2, "Nobody can revolt against someone who controls a technological singularity, so why put them in camps?": at the very least, we are taking up space that could be occupied by data centres and mines (and potentially the descendants of the ruling class; given quasi-immortality and futuristic biotech, their family sizes could grow very big very quickly...). And potentially we still do pose *some* threat, even if only as a hotbed of viruses that could (especially over the timescales relevant to a quasi-immortal oligarch) one day turn up something really nasty. So, put in literal camps, maybe not; but allowed/encouraged to fuck off entirely, maybe.
If the technologically-elevated dictator has enough military superiority to laugh at nuclear weapons, he's also got enough industrial capability to mine asteroids instead of staying in a gravity well (with all that loose atmosphere getting in the way of the solar collectors, fouling up the radiators with ambient heat far above single-digit kelvin, etc). Earth becomes a museum piece, a sliver of the galactic budget diverted to preserving it for sentimental reasons rather than "perfect efficiency." Nobody worries about Colonial Williamsburg spreading cholera.
>Assuming there is *something* he wants to do other than sit around on a pile of gold, he'll presumably want more compute and various raw materials.
Yes, but also something else in addition to that....
Assuming there is something he wants to do other than sit around on piles of gold.... I would assume "being at the tippity-top of a giant status pyramid" would be part of that, given he is, after all, human.
What's the point of everything else, if there aren't billions of people admiring him for it? And bowing and scraping to his munificence? Billions of people who recognise him as the very highest status human there has ever been?
Afterall "Look on my works ye mighty and despair...." famously doesn't work if there is no one to look, and they also need to be a little bit mighty for you to get the full effect....
After all, what would be the use of being a multi-trillionaire if all you could do with it is the equivalent of sitting alone on a giant pile of compute and raw materials? What's it buying you? Nothing. Might as well be a giant pile of gold....
So.... it seems likely he would also want a very large and complicated and intricate primate dominance structure full of thinking/feeling primates with a big gold throne at the apex of that status pyramid for him to sit on. The bigger the better. Billions if he can, millions if he can't.
Then not only does he have his great works. He has people to look on it and despair, and some of them (at least the next level down, and the level below that) are sufficiently mighty that he gets the gratification of lording it over them as they despair at how much more status he has than even them, the demi-lords of creation.
> If a single person does control the world, what are they going to do?
That's the googol dollar question. Play this out in your mind and see what you find. Take it to its logical conclusion.
I see this as an incredibly strong attractor:
- they will own and rule everything, and we nothing.
- we only exist at their mercy.
- regular humans might be limited as to how much this kind of power can get into their heads, but ASI will be happy to augment this new overlord. 10k+ IQ? Easy.
- now this new overlord just happens to be as much removed from the rest of us as we are from bacteria.
- we know how much care and regard we extend towards bacteria
- congrats, you ended up with another paperclip maximizer, just by a slightly different route.
- even if the ASI was fully aligned/corrigible to that single human, that single human was never CEV aligned to the rest of humanity. and their preferences, taken to the extreme, are incompatible with the rest of us. coming apart at the tails, and all that.
They don't even have to be psychopaths or sociopaths or extraordinarily cruel or sadistic. Bezos is very rich, he's philanthropic, and Amazon has had problems over warehouses (defence: 'not our problem, those are sub-contractors') and employees wanting to join unions, etc.
Average AI zillionaire won't care enough about Joe Schmoe to bother being cruel on a personal level, it'll all be Moloch all the way down.
It sounds like we agree, right?
I'm also not saying that this uplifted human will kill other humans necessarily out of 'ill will'. I think the standard AI trifecta just applies straightforwardly, roughly as EY put it in a recent interview:
- You are made of atoms that it can use for something else. Whatever value you might provide in your human form, your atoms will provide more of that.
- You are like an anthill and the galactic highway just needs to be built.
- There might be a slight chance for another ASI being built by humans, so potential competition needs to be preempted.
If there is some 'goodness' remaining in the new overlord maybe they'll decide to preserve most DNA and connectome samples, but it all might just sit in cold storage never to be accessed again. Might be cold comfort.
(Could also be imagined as a Dr. Manhattan figure if one is a fan of Watchmen.)
Amazon seems to have worked out quite well, not like Moloch. People want to work there, even if they also try to form unions.
>That's the googol dollar question. Play this out in your mind and see what you find. Take it to its logical conclusion.
The logical conclusion is "a higher status than my peers".
That's what billionaires crave. The money is a means to that end.
Being as far from us as we are from bacteria doesn't give him (or her!) what they want.
Look at Musk, all the money in the world.... and he bought twitter as he wants people he considers his peers to like him and think he's funny!
It's different if the overlord is the AI. But if he's a human, with an evolved human's instincts for "what he wants", then it all converts into "status among other primates" in the end.
What's the point of having a 300-ft yacht if it isn't to be 30 feet longer than all those peers with "only" 270-ft yachts?
He's going to need a primate dominance pyramid, with people he considers "near peers" on the rung below him to lord it over, and for them to be near-peers, they're going to have to have a (smaller) amount of people to lord it over, all the way down.
Likely, with as many (if not slightly more) primates in the whole pyramid than exist today. If it were smaller, how could he be sure he was the most powerful/highest status human of all time? If it's only a million people.... perhaps he's second to Genghis Khan, or some Chinese emperor... only a billion? Perhaps Trump was more powerful in his day, or Obama, or Musk.
BUT.... if the human population is "the biggest it's ever been" and they're all in a dominance pyramid with him sitting at the apex? Then he KNOWS he's the highest status human of all time. And those one rung below are "super-high status near peers" he can enjoy having the upper hand over, given each of them is as high status as any emperor you care to name....
"- now this new overlord just happens to be as much removed from the rest of us as we are from bacteria."
Yes, I think humans would be like chickens in this situation, and look how little we care for the well-being of chickens when we want to eat them at an affordable price.
"If a single person does control the world, what are they going to do?"
Kill everyone. You don't even need your billionaire dictator to be obviously evil; the combination of the natural abstractions the AI builds to let the dictator completely control society, and the dictator's own greed, will push toward everyone else's extinction, so the numbers can go as high as possible.
"there are lots of very rich places ruled by kings (eg Saudi Arabia, Qatar, Brunei) and they're not so bad"
Not so bad. Sweet Jesus. Okay Scott, send your infant daughter (when old enough) to be a domestic servant in Saudi Arabia. She'll be fine, right?
https://www.amnesty.org/en/location/middle-east-and-north-africa/middle-east/saudi-arabia/report-saudi-arabia/
"Human rights defenders and others exercising their rights to freedom of expression and association were subjected to arbitrary arrest and detention, unfair trials leading to lengthy prison terms, and travel bans. Despite some limited labour reforms migrant workers, in particular domestic workers, continued to be subjected to forced labour and other forms of labour abuse and exploitation, and lacked access to adequate protection and redress mechanisms. Thousands of people were arrested and deported to their home countries, often without due process, as part of a government crackdown on individuals accused of violating labour, border and residency regulations. Saudi Arabia carried out executions for a wide range of crimes, including drug-related offences. Courts sentenced people to death following grossly unfair trials. Women continued to face discrimination in law and practice. Saudi Arabia failed to enact measures to tackle climate change and announced plans to increase oil production."
Oh but that's domestic servants, of course you don't treat your maid like your daughter. How about if she works as a fitness instructor? That's a nice white girl middle-class job, yeah?
"On 9 January the SCC sentenced Manahel al-Otaibi, a fitness instructor and women’s rights activist, to 11 years in prison in a secret hearing for charges related solely to her choice of clothing and expression of her views online, including calling on social media for an end to Saudi Arabia’s male guardianship system. Manahel al-Otaibi’s sentence was only revealed publicly several weeks after the court judgment, in the government’s formal reply to a joint request for information about her case from several UN special rapporteurs. Her family could not access her court documents, nor the evidence presented against her. In November, she told her family that the SCC Court of Appeal had upheld her sentence."
I have to think you're so blinded by your hopes for the magic future that you skip over poor arguments in order to make your point ('well the AI oligarch will be an okay guy, why wouldn't he be, the Saudis are rich rich rich and they're okay guys, right?')
No, the Saudi princelings are *not* okay guys.
Context. Compared to some of the other apocalyptic scenarios being seriously discussed here, the "I have no mouth and I must scream" stuff, yes, Saudi Arabia's horrifically poor human rights record isn't so bad.
Charming. "Yeah, he'll cut off your leg, but the other guy would cut off *both* legs".
Lesser of two evils is not necessarily a good thing.
Lots of citizens of India voluntarily go to work in those Gulf monarchies, because it's better than working in India for them.
I think this is incorrect because the realistic scenario is that people who own a certain amount of capital (e.g. especially in AI labs, which is a large part of the motivation for many to work there) would become part of the ruling class, not Dario alone. And in this case it definitely does make sense to become post-economic and own capital, and building a B2B SAAS startup is a fine way to do that. I think this scenario is the modal outcome.
"First, you need for one company to pull way ahead of everyone else. "
That's the default with RSI, no?
I don't know when you became an apologist for the tech oligarchs actively destroying what little was still decent about our society, but this whole piece is a bad take and a crazy rhetorical place to plant your flag, in my view. Hypothetical people in the future being hypothetically better off does not justify causing massive suffering for the people alive today for essentially no reason other than a handful of sociopaths' greed and overinflated egos. As a humanist, this overly optimistic view of AI's impact disgusts me to the core.
> the tech oligarchs actively destroying what little was still decent about our society
Uh huh. People stare at screens 11 hours a day, this is CLEARLY the tech oligarchs' fault!
But people were staring at screens 7-9 hours a day before smartphones or "tech oligarchs" even existed. Obviously movie and TV show oligarchs!! They're just one slimy remove from tech oligarchs! Off with their heads!
And what about the fact that literally 80% of Americans are overweight or obese? Mcnugget oligarchs!! Off with their heads!! But there's hundreds of different fast and junk food companies? Oligarchs, oligarchs, all of them! They're FORCING people to eat mcnuggets and Coca Cola and junk food every meal! To the oubliettes and bastinados!
What about the fact that 75% of people live paycheck to paycheck? This one is easy, right? Obviously capitalism! Billionaires!! Billionaires make it impossible for people to plan more than one month ahead! Off with their heads! But you know, this has been true back to the sixties, when there was only a handful of billionaires alive. Arguably, it's true for millions of years of hominin history, because hunter gatherers CAN'T store big surpluses of food, it's just sort of the default time horizon.
What about the fact that ~half of all marriages end in divorce, and of the remaining marriages, at least half are net miserable for at least one party? And all this was happening WELL before smartphones and dating apps. Um... relationship oligarchs? Off with their heads?
Maybe people just suck, and have the planning horizon of gnats, and will always do a bunch of stuff you consider a bad idea. Eating mcnuggets every meal, staring at screens 10 hours a day, living paycheck to paycheck, getting in bad relationships and then getting divorced.
It doesn't require "oligarchs" rapaciously harvesting serfs to get people where they are today, all it requires is giving them what they want. All "tech oligarchs" have done is plugged into the *already existing* drive to stare at screens for 10 hours a day, a little better than TV and movies, and have eaten into their share of eyeball-hours.
Just like all fast and junk food has done is plug into the already existing desire to eat fatty, sugary foods for every meal, and done it so well that literally 80% of Americans are fat.
This isn't an "oligarch problem," it's a human nature problem.
Yeah, *if* we get to the stage of galaxy-owning oligarchs. *IF* is the big stumbling block there.
If ifs and ans were pots and pans,
There'd be no need for tinkers
If wishes were horses,
Beggars would ride
If turnips were watches,
I'd wear one by my side
All the Anthropic cofounders have pledged to donate 80% of their equity. https://chatgpt.com/share/6957fbb9-b26c-800b-995a-e3c5f5008be0
Okay, who have they donated it to?
Really, the point is just that _somebody_ rich has to decide they value other people's happiness too. It's not a high bar to clear. If all power ends up concentrated in _one_ AI/tech oligarch's hands, then fine, maybe we'd get unlucky and they'd be the kind of person to enjoy lording over a universe of suffering. But even if it's a few dozen, chances are a few of them would be fine with giving up a small portion of their power to make this a better universe for everyone else.
Even today, most billionaires at least _dabble_ in philanthropy.
I'd agree that, over a long enough period of time, AI is likely either to create fabulous levels of wealth or to destroy us all. In that sense I agree worrying about a capital-P Permanent underclass might be unfounded.
However, I don't think most average people are really concerned about that capital-P Permanent, or about being relevant in history. It seems to me that the primary worry is about the near future, within this lifetime. Worried not about a god-like AI, but rather an AI that's competent enough to take over huge portions of the job market but not enough to catapult us immediately into a sci-fi type society. Worried that in that society AI creates a permanent (permanent for an individual, not a society) underclass, not by nature of incredibly drastic wealth increases captured solely by the wealthy, but rather by eliminating the already limited upward mobility provided by the job market.
Sure that transitory period is unlikely to be a truly permanent state of affairs but even if it’s only as short as around 30 years that’s a long enough time to be effectively life ruining for a generation of people.
It seems reasonable to me to assume that between post-scarcity-level AI and the current day there will be a level of AI technology smart enough to automate all but the highest-skilled jobs but not enough to really create a recursive AI improvement loop. In that period I would be really worried about not having enough investments to carry me past needing to worry about working again.
It's like a ten year recession - it only lasts ten years, but it also lasts for generations.
Great point. There is a certain kind of conditional knowledge contained in healthy families, and when they are allowed to completely disintegrate, for whatever reason, that knowledge can be lost.
" In that period I would be really worried about not have enough investments be carried out of needing to worry about working again."
And that's not even considering the people who don't have investments because they can't afford to, and don't know how to, invest. Even for the "thirty year period" people, if they get knocked down a rung of the ladder, their children will also be knocked down. Isn't this the complaint of Millennials and Gen Z today, that they can't hope to have the same standard of living as their parents/grandparents? Unless after the 'thirty years' the economy really does zoom to the moon and beyond, and there is so much money everybody gets enough to live luxuriously (and not just "you have a bunk bed in the state dormitory and three meals per day of insect-meal gruel, be thankful"), the AI boom is going to create the underclass.
Locality counts too - if you are Millennial Chinese or Gen Z in Africa, you have a very different view on the idea of doing better than your grandparents.
But you have to be a very enlightened individual to look at your kids and say ‘well, they may be worse off than me, but on a planetary scale poverty is decreasing’
That fear was one of my primary motivations for becoming as well acquainted with the strengths and weaknesses of LLMs as possible as quickly as possible when they first had their wide commercial release. I'm sure I'm not alone.
Life may be nasty, brutish and short in future times but I don’t see nasty, brutish and permanent as a possibility.
Is there a case to be made that the future, benevolent or malevolent, probably doesn't feel obligated to keep caring about what pledges Dario Amodei made, or what wealth he accumulated, once he's sufficiently far removed from the direct hinge of history? And generally isn't obligated to care about present human notions of who does, or doesn't, have capital/fame/success at all, any more than we're at the mercy of the caste systems of our ancestors?
For all we know, the people who become remembered as Jesus to 200,000 AD aren't the ones we're currently paying attention to at all. So the ideal strategy to be the next St. Veronica is just to give as many people washcloths as possible (or, more broadly: be nice to people when we can), which is true even when there's not a singularity imminent.
In the Ethiopian Orthodox Church, Pontius Pilate is considered a saint because his documented reluctance to execute Jesus is considered evidence that he later converted and was himself martyred.
There are also legends that he committed suicide and his ghost appeared on the Sea of Galilee, to try to wash his hands clean.
Pilate has a more positive reputation in Eastern Christianity and in early Christianity. There's plenty of traditions where he was basically forced. This tends to be linked to a desire to make the Jews solely responsible for Jesus's death. Which is important if you claim more direct heritage from the Roman Empire than was common in the west. Or if you just want to hate Jews.
Western traditions tend to have seen him as more guilty and often suggest he was punished in some fashion. Sometimes prosaically, an echo of the actual fact he seems to have been recalled, and sometimes mystically. There's even a tradition the gates of heaven were shut to him and he wanders the earth, effectively a more specific version of the origin of the Wandering Jew.
There's two to three distinct periods and long fallow ones with him. I've always wondered if it says something about the times he's famous in. There's certainly commonalities.
Man, why isn't this "X years to get as much of your thoughts and personality and writing into the god-mind coming into being?"
Same hinge of history style argument, but your thoughts and sundry will LITERALLY be inside a god, theoretically influencing their thoughts and actions!
The value of writing and communicating on a public platform has never been higher. I'd be long Substack, but it's privately held!
See https://www.astralcodexten.com/p/writing-for-the-ais
Yeah, I think it's the stuff you're creeped out by in that post. If god knows when even a tiny sparrow falls, then he also knows each of the several million words you've written, the arguments and undertones and overtones and passions and positions therein, and has considered them the appropriate amount.
Also, have some pity on the poor legions of historians, focusing their collective scrutiny on our tiny slice of humanity! Even if our writing is relatively unlikely to do a lot of shaping of god minds or 3 million generations hence descendants in utopic bliss fields, it will sure make all the historian jobs richer and easier!
I have a hard time buying the "my writing will instruct the Godlike intelligence of future AI" angle. Do you think Einstein's ability to understand physics was materially influenced by the particular methods his first grade teacher used to teach multiplication? AFAICT writing is either instructive or persuasive (or entertaining, but that's not relevant here). Instruction implies that there's some objective principle that's being conveyed, and I don't think there's anything in your repertoire so esoteric that a future ASI wouldn't be able to either derive it itself or glean it from other sources. And if you assume that the ASI is many times more intelligent than you are then, well, good luck trying to persuade it of anything. At best, the only impact any particular writer might have on future ASI would be as a data point in a survey of what humans thought in 2025.
I basically agree with your points.
<mildSnark>
Well, there is the possibility that (the? one of the?) ASI is Roko's basilisk, which, ahem, prizes early support...
</mildSnark>
> At best, the only impact any particular writer might have on future ASI would be as a data point in a survey of what humans thought in 2025.
I mean, you could say this about our entire past, and yet even today we care a lot about what Plato and Descartes and Hume and Paine and Shakespeare and Dostoyevsky and Austen and thousands of other authors from that past wrote, and their thoughts and writing still shape our memetics today.
To Scott's point, we even care and still talk about Ea-Nasir's copper quality and business practices, 4000 years later!
Memes matter, and have a much longer life and relevance than any individual. Sure, you and I are no Shakespeare, but SCOTT might be!
Yes *we* care, but we're not superintelligent AIs. If the goal is to influence AI by being influential enough among humans that one's ideas are repeated enough by other humans to dominate future training data, then that's no different from writing without having AI in mind.
>To Scott's point, we even care and still talk about Ea-Nasir's copper quality and business practices, 4000 years later!
That's a bit of a stretch. We care about that because of historical value, not because of any objective instructive value they might have. Which was the point of my last sentence - if AI cares at all about what one particular person wrote in 2025 it will only be insofar as it provides historical insight about the cultural zeitgeist in 2025.
It's also a little ironic given Scott's x-risk fears, since IMO the primary value of his writing is rhetorical. He's unusually good at making ideas persuasive. Why would he want to teach future AI how to persuade humans to do things that they otherwise might not want to do?
The richest olive merchant in Jerusalem that year is long forgotten, but she endures:
Talmud Bavli Gittin 56a: "There were at that time in Jerusalem these three wealthy people: Nakdimon ben Guryon, ben Kalba Savua, and ben Tzitzit HaKesat."
So you have to drop down to being the fourth richest olive merchant to be long forgotten.
Were all three of those people olive merchants specifically?
That was shortly before the Second Temple's destruction, so a good while later. And these three were the richest in Jerusalem, period. They were more akin to Bill Gates than to some random rich guy.
Legacies are kind of overrated. I don’t know what good it will do if some people in a future civilization spend all of three seconds reflecting that I existed once.
My personal favorite absurd conspiracy theory is that actually Ea-Nasir's copper was fine, and he's fallen victim to the longest-running review-bombing campaign in history.
He's just the one weirdo who kept the complaint forms we've found!
Thank you, I needed to hear this.
I'm confused about what point you're trying to make. You point out (correctly) that it may not be worthwhile for an upper-middle-class person to devote enormous effort to shifting their future prospects from 'stupendous wealth' to 'unimaginable wealth'; but then you strongly imply that it *would* be worthwhile for them to devote enormous effort to ... appearing slightly cool?
Or to put it another way, the first three paragraphs make total sense, and then the article veers off into a bizarre dithyramb about the pursuit of fame and glory.
I agree. Standard for his hobbyhorse articles, I guess. I try to identify them from the title, but sometimes I still get caught off guard.
I suspect fame and glory are powerful incentives for himself and his target audience. You don't write blog posts on the internet for a decade instead of earning more money, on the off chance that something like Substack would eventually pay, especially when you're already a med student. You write blog posts for fame and glory, and Scott's amassed plenty. He clearly values it more than wealth, and he's writing to people that share those values.
I think the claim is that the values you embody now will shape people’s behaviour in the future. 1 in 2,500 future people having a constant reminder of kindness as their name, or 1 in 5,000 if 50% of those people would otherwise be named after some other example of kindness anyway, would be a huge positive impact.
I suppose the idea is "don't worry about being rich, everyone is gonna be rich (except for those losers we don't have to consider), so what can you brag about at house parties if not wealth? well, how about doing something cool you can name drop?"
Sometimes people decide what to do by imagining what strangers would think of them. Often, this is entirely mistaken, because the strangers aren't paying attention and don't care what you do. So you end up doing things based on a mistaken mental construct.
If I understand correctly, the incredibly vague call to action in this post is to imagine what hypothetical future people might think of what you do, and try to do something to impress them? That seems even more doomed to fail. Also, even if you knew what they wanted, why should you care whether they're impressed?
Well said! Agreed on all points!
You're completely right.
I hope it’s not a breach of internet etiquette if I copy and paste a (shortish) comment of mine here. I think the claim is that the values you embody now will shape people’s behaviour in the future. 1 in 2,500 future people having a constant reminder of kindness as their name, or 1 in 5,000 if 50% of those people would otherwise be named after some other example of kindness anyway, would be a huge positive impact.
I think it's mainly intended as a negative call: if you've read something telling you it's vital to create some B2B SAAS company, yeah, don't worry, you don't need to stress about doing that after all.
Some of Scott's "Contra" posts are refuting some point I've never encountered. This one is too, but at least here he linked to the New Yorker post he's arguing against. (Admittedly I can't read that New Yorker post on mobile due to paywalls, but the thought was there.)
The New Yorker article isn't really arguing a point, it's more exploring a social phenomenon, but some excerpts from the New Yorker article that I think summarize it:
"The “lumpenproletariat,” according to “The Communist Manifesto,” is “the social scum, that passively rotting mass thrown off by the lowest layers of the old society.” Lower than proletariat workers, the lumpenproletariat includes the indigent and the unemployable, those cast out of the workforce with no recourse, or those who can’t enter it in the first place, such as young workers in times of economic depression.
According to some in Silicon Valley, this sorry category will soon encompass much of the human population, as a new lumpenproletariat—or, in modern online parlance, a “permanent underclass”—is created by the accelerating progress of artificial intelligence....The idea of a permanent underclass has recently been embraced in part as an online joke and in part out of a sincere fear about how A.I. automation will upend the labor market and create a new norm of inequality...start leaning in to A.I. products or stay poor forever....
Jasmine Sun, a former employee of Substack who writes a newsletter covering the culture of Silicon Valley, told me, of tech workers, “Many are really struggling and can’t find even a normal salary, and some of the people are raking it in with these never-seen-before tech salaries. That creates this sense of bifurcation.”...The reward for the grind might be a role as an overlord of the A.I. future: the closer to collaborating with the machine you are, the more power you will have. Fears of a permanent underclass reflect the fact that there is not yet a coherent vision for how a future A.I.-dominated society will be structured. Sun said, of the Silicon Valley élites pushing accelerationism, “They’re not thinking through the economic implications; no one has a plan for redistribution or Universal Basic Income.”"
But the difference in mass-energy here is still huge. If you're a regular person, when you eventually die of heat death, others will continue on hundreds of thousands of times longer than you in a state of pure bliss, simply because pre-singularity property rights favored them.
That feels a lot more intuitively unfair than merely getting a moon-sized estate instead of a galaxy-sized one.
If you're a regular person, your current life expectancy is 2 digits max. By that standard the greatest unfairness is all the people who don't make it to the hypothetical escape velocity, vs. those who make the cutoff.
It's also a strongly pro-accelerationist argument: every second, some mass-energy falls outside our reach permanently (an LLM gave me an estimate of 127,000 M☉/s, take it with the appropriate grain of salt). That's about a trillion moons per second that no one gets to use.
(I don't personally believe takeoff can be so fast that this consideration really matters, but if takeoff is slower, then being willing to give up even a small amount of consumption has enormous compounded returns even long after superintelligence is clearly present; so the X-years meme doesn't work there either).
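For what it's worth, that 127,000 M☉/s figure is the right order of magnitude. A minimal back-of-envelope sketch, assuming round values for the cosmic event horizon (~16 Gly) and mean matter density (~2.6e-27 kg/m³), and a crude "sweeping sphere" geometry rather than the real expansion dynamics:

```python
# Back-of-envelope: matter slipping past the cosmic event horizon per second.
# Every input below is an assumed round number, not a precise measurement.
import math

C = 3.0e8              # speed of light, m/s
LY = 9.46e15           # one light-year, m
M_SUN = 2.0e30         # solar mass, kg

r_horizon = 16e9 * LY  # cosmic event horizon radius, ~16 Gly, in m
rho_matter = 2.6e-27   # mean matter density (baryonic + dark), kg/m^3

# Treat the horizon as a sphere whose boundary sweeps outward at c,
# so mass "exits" at roughly density * area * c.
area = 4 * math.pi * r_horizon ** 2
mass_flux = rho_matter * area * C  # kg/s

print(f"{mass_flux / M_SUN:,.0f} solar masses per second")  # ~1e5 M_sun/s
```

That lands near 10^5 M☉/s, the same ballpark as the LLM's estimate.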
How serious are you, here?
Because this Singularitarian hyper-confidence strikes me as a literal joke, but maybe that's just me.
Where is your tongue?
In your cheek? Between your lips? Behind your teeth?
See his comment above: https://www.astralcodexten.com/p/you-have-only-x-years-to-escape-permanent/comment/194124964
If AI is capable of displacing most workers, it’s probably also capable of ushering in a new golden age.
So the specific claims are not very serious, but the overall point is. That makes sense.
Thanks for the link.
Personally I'm confident (this iteration of) AI won't be that big a deal. That's not to say it'll fizzle out; I could see it being the next smartphone. But it won't bring us private moons or a permanent underclass.
I think it will be a big deal in the other direction, i.e. it will greatly accelerate the universal enshittification process. People will (and in fact already do) accept low-quality AI-generated images/text/etc. as the new normal; similarly, they will accept a much greater degree of unreliability in most systems (since they will become AI-controlled or AI-designed). Perhaps we won't enter a new dark age, but we will lose the kind of institutional knowledge that enables the production of great art, great writing, great filmmaking, and even great customer service, great code, great jurisprudence, etc. Of course some people will still enjoy all these things, but the average person will simply accept that people in pictures have the wrong number of fingers sometimes, and when you order chicken you'll sometimes get pork and there's no point in complaining, and maybe your car will occasionally just take a right turn for no reason, and all of that is perfectly normal -- because what are you gonna do?
I'm people and so are you; neither of us is accepting that trash. People don't have much tolerance for ordering chicken and getting pork, and the AI systems that do stuff like that will need to improve or they'll be binned.
>I'm people and so are you; neither of us is accepting that trash.
Aren't we? I'm using all kinds of enshittified systems right now, from email to Web browsers to cellphones to my car. I have health problems to which my doctor's response is basically "uwu", and I don't have 5x the money to hire an actually competent doctor. The elevator in my building trembles and makes ominous grinding noises. I have an Adobe Photoshop annual subscription. I watch Marvel movies. And yet, I keep using all that stuff and more, not because I want to, but because there's no viable alternative. Maybe you make a lot more money than I do (in fact that's probably the case), and maybe you can afford better goods and services; but if so, I think you might be in the minority. Now imagine what happens when all the good stuff costs 5x, 10x, 100x more, because low-grade AI-generated slop is so cheap...
What was the period in time where you wouldn't have had similar issues?
You don't have tons of alternatives to Marvel. You have millions of tons of alternatives. You can even get them ALL FREE ANYTIME online via unstoppable piracy. Oh, and all the old movies still exist.
Doctors in the 1950s recommended cigarettes and didn't know a million things you can just Google now. Legitimately sorry to hear your doc can't solve the issue. Unironically, have you tried the free AI in your pocket?
Elevators???
Things are way better, across the board. You are comparing your goods to theoretical perfects.
Humans love the concept of Judgement Day, and they love judging each other. Don't be surprised if most of the "underclass" is forced to pass through some final manmade filter, which judges who is good enough for utopia and who isn't. And don't be surprised if our "leaders" design the filter so that they themselves are guaranteed to pass.
> your worst-case scenario is owning a terraformed moon in one of his galaxies
Have you been convinced of the grabby alien hypothesis? I think the Dark Forest hypothesis and the "they're already here masking our perception of them" theory are each much much much more likely, because life on Earth doesn't seem to need anything uncommon. Am I missing something?
I’ve never found the Dark Forest hypothesis convincing, but struggled to articulate why. The hypothesis is that civilisations must hide, because other civilisations learning about them will view them as a threat, and then annihilate them, correct?
I think that only makes sense if continuing to hide would improve the odds when the civilisations eventually meet. But hiding would slow economic growth, and thus military research. Hiding is a dominated strategy if it means a civilisation’s growth rate is lower than its enemy’s.
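Compounding makes that point stark: even a small growth-rate penalty swamps any benefit of staying hidden. A quick sketch with made-up rates (1%/yr for the hider, 2%/yr for the open civilisation, purely illustrative):

```python
# Hypothetical growth rates: hiding civilization 1%/yr, open one 2%/yr.
# The ratio of their power compounds as (1.02/1.01)^t.
for years in [100, 500, 1000]:
    ratio = (1.02 / 1.01) ** years
    print(f"after {years:>4} years, the open civilization is {ratio:,.0f}x stronger")
```

After a millennium the hider is outmatched by roughly four orders of magnitude, which is why hiding looks like a dominated strategy.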
Imagine if offense tech outpaces defense tech, and defense tech never catches up no matter how much research your ASI does. Hopefully that doesn't happen, but it seems like it might, because planets can be killed with just a lightspeed ashtray.
I think an advanced civilisation could defend against a near-lightspeed ashtray. It would crash into interstellar dust which would produce actual-lightspeed signals, and the defending civilisation could deflect it with lasers.
Thanks, I definitely thought aliens would attack each other with ashtrays and didn't realize flaws in this strategy until you replied.
>Humans love the concept of Judgement Day, and they love judging each other. Don't be surprised if most of the "underclass" is forced to pass through some final manmade filter, which judges who is good enough for utopia and who isn't.
Yup! You beat me to it, but I didn't see your comment in time and essentially echoed this in https://www.astralcodexten.com/p/you-have-only-x-years-to-escape-permanent/comment/194159087 :
>The cautionary note is that everyone's income is now [in the post-AGI but human-controlled scenario] _exclusively_ set by politics and law. Today, people have the bargaining chip that they can withdraw their offer of their labor if the offer for it is too stingy. Post-AGI, in the absence of that, there is nothing to constrain the system from e.g. halving the UBI allocated to people who are insufficiently MAGA or insufficiently Woke, or insufficiently whatever the ideology of decades from now is.
> because life on Earth doesn't seem to need anything uncommon
How deeply have you looked into this?
Because from what I've looked into, there's a decent chance *simple* life, prokaryotic life, might be common, but there are a LOT of arguments that eukaryotic life might be uncommon in the universe.
So first, you need water and vulcanism just for simple life, and NOT "hot style" underwater volcanic vents, but 1000x rarer alkaline hydrothermal fields.
Then you need a paradigm shift in energetics to get to eukaryotes, because all energy exchange happens at the cell wall, but as you get larger, your interior volume increases faster than your perimeter, and this limits how big and complex you can get. But THAT requires oxygen!
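That's the square-cube law: a sphere's surface grows as r² while its volume grows as r³, so membrane area per unit of interior falls off as 1/r. A two-line illustration (arbitrary radius units):

```python
# Square-cube law: a sphere's surface/volume ratio falls as 3/r,
# so a bigger cell has less membrane per unit of interior to power.
import math

for r in [1, 2, 5, 10]:
    surface = 4 * math.pi * r ** 2
    volume = (4 / 3) * math.pi * r ** 3
    print(f"r={r:>2}: surface/volume = {surface / volume:.2f}")
```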
Even today, prokaryotes that use respiration and oxygen versus chemosynthesis are roughly 10x as energetically efficient.
And to get from prokaryotes to eukaryotes is ITSELF a hugely difficult step.
The step between them is an incomprehensible gulf - giving up your cell membrane and becoming symbiotic in a long-term sustainable and net-energy-positive way (eventually leading to mitochondria and the much better energetics that allow complexity) was a big step that was seemingly never repeated in the ~4B years since prokaryotes have been around, in the sense that we don’t see any evidence of different “lines” of eukaryotes anywhere in the world - all eukaryotes go back to a singular endosymbiosis event.
But (unsurprisingly, given we haven't empirically seen prokaryote endosymbiosis happen more than once in ~4B years) this probably didn't happen as one clean step. A few more things needed to happen to get to eukaryotes, and from there to multicellular complex life, and each makes the path less likely:
* There needs to be an immediate benefit to both organisms
* Some of the engulfed organism's functions need to migrate to the host's nuclear DNA
* Energetics need to become net favorable to the host
* A postal system with labeling and input / output gates needs to be built (TIM / TOM)
And that "postal system" is crazy hard to pull off.
* Targeting Signals: Each protein destined for the mitochondrion needs a specific "address label" or targeting sequence
* Translocase Complexes (TOM and TIM): A suite of multi-protein complexes had to evolve in the outer (TOM - Translocase of the Outer Membrane) and inner (TIM - Translocase of the Inner Membrane) mitochondrial membranes. These act as gates and channels, recognizing the protein's address label and guiding it to its correct location within the organelle
This is basically verging on the paradox of the watchmaker, as near as I can tell, because so many things had to go JUUUUSSST right for it to even make it. It’s a big jump on the evolutionary landscape, and seemingly all at once.
And this is before we get into any Rare Earth style arguments about stellar perturbations, comets, and asteroid impacts making all-biological-progress-up-to-that-point extinct, which is a whole other complexity and difficulty gradient that eukaryotes need to climb after getting to "eukaryote," which took 2B years, to then get to "intelligent life!"
All the prokaryote / eukaryote stuff is from a post I wrote about Nick Lane's book The Vital Question, if you're interested in more, in which I also do a Fermi equation estimate using this knowledge - also, the book is great and I definitely recommend it.
https://performativebafflement.substack.com/p/the-origins-of-life-and-the-fermi
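If you just want the shape of that Fermi estimate without reading the post, it's a Drake-style product of filter terms. Every factor below is a made-up placeholder to show the structure, not a number from the post:

```python
# A Drake/Fermi-style product of filters -- all factors are hypothetical
# placeholders, purely to illustrate how small terms multiply out.
filters = {
    "stars in the observable universe": 1e24,
    "fraction with rocky, wet, volcanically active planets": 1e-3,
    "fraction of those that get prokaryotic life": 1e-2,
    "fraction that get plate tectonics + atmospheric oxygen": 1e-4,
    "fraction that pull off endosymbiosis (TIM/TOM and all)": 1e-8,
    "fraction that reach intelligence before extinction": 1e-3,
}

expected = 1.0
for step, factor in filters.items():
    expected *= factor
print(f"expected intelligent civilizations: {expected:.0e}")
```

The argument over the Great Filter is really an argument over which of these factors deserves the most zeroes.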
> How deeply have you looked into this?
You're 10000x beyond what I've looked into, thanks for the link. The only thing I can add is that if life is being spread interstellar by meteors and whatnot, then each step of that Fermi Equation can be done in parallel across the planets. And different planets could specialize in different steps? And it would easily explain the "1 highly developed ancestor" situation. But it doesn't explain why we don't see aliens when we look up, which is why I gravitate towards either Dark Forest or Prime Directive.
There was a recent piece in Science News: https://www.sciencenews.org/article/cells-origin-of-life-asgard-archaea
It suggests a new theory about how eukaryotes came to be that suggests it was perhaps not as hard as the traditional scenario, and susceptible to a gradual, evolutionary process.
That said, Nick Lane's books -- at least all the ones I have read -- are great, and I second the recommendation.
Thanks, the article itself was a rather frustrating read, because they just reprised all the stuff I talked about in my article.
But the linked Mills paper was the real argument, and it was interesting. Basically they handwave the whole problem away - they say "look, you get oxygen with prokaryotes / cyanobacteria, and as soon as you have 2-3% of our present oxygen level, we get eukaryotes right away! So really, they're just waiting in the wings - they're doing syntrophy with hydrogen or whatever and can immediately pivot to oxygen once it's available."
Broadly, they're taking a 40 thousand foot view and saying "eh, if you have oxygen and syntrophy, no worries, you'll get eukaryotes, we basically got them right away as soon as the environment was right."
BUT this still doesn't actually help our Fermi paradox case. Oxygen is HARD! Let's take the same 40 thousand foot view as they are.
So what do we need for complex eukaryotic life? I think we all agree you need water, vulcanism, and oxygen. On that last, we STILL haven't seen any other rocky planets with an oxygen atmosphere!
The 55 Cancri e example I pointed to was actually a false positive - a lava world with a bunch of CO2 or CO, not oxygen (and its surface is 2000 degrees).
Moreover, the specific flavor of "vulcanism" is exceptionally hard. It took a few billion years for us to reach 2-3% PAL because there are oxygen sinks like iron and basalt that suck up the oxygen before it can start increasing atmospheric concentrations. But with the way a lot of vulcanism is structured (stagnant lid tectonics vs plate tectonics), we'd actually expect most planets to act like much bigger, potentially infinite oxygen sinks. Even if they got cyanobacteria photosynthesizing, it's unlikely they'd reach the atmospheric oxygenation needed to support eukaryotic life, because the basalt and iron and hydrogen in the environment is continually being generated anew in oxygen scrubbing ways. "Plate tectonics" might be necessary for life!
And although we're not fully sure (sample size of 1 in the universe that we know about so far), it looks like you might need both water AND potentially a moon-sized collision to get plate tectonics.
So now we need water, vulcanism, and a moon, AND plate tectonics to get oxygen to get complex life now. I glossed over the "oxygen" requirement in my own post to allocate most of the improbability to TIM / TOM complexes and things like that, but they're arguing (and I probably agree), there's a good chance that some of that improbability needs to go to the oxygen / moon / plate tectonics stuff.
Because after all, the Fermi paradox is an observed fact about our world - despite the countless solar systems and galaxies around, we DON'T see any other intelligence, NOR any other simple life!
So there's some gobsmackingly huge reducer (the Great Filter) between us existing and the rest of the living / intelligent universe not existing, and we're basically arguing about where the various steps of the filter are better allocated here.
No argument. Finding a lot of exoplanets at all at least removes one candidate for the Filter, though not one that anybody put much faith in. I’d be greatly relieved to believe that it’s behind us and we’re just unique with an unlimited future.
What I found striking about the SN piece was the move from the theory that the first eukaryote happened through some failed attempt to eat another microorganism to the new theory that it was two microorganisms in symbiosis that gradually co-evolved to be more efficient by breaking down the barrier between them — no lucky accident but the kind of gradual process we see elsewhere in evolution. Maybe that’s all old news to you; I’m just a layman.
Why do you assume superintelligence and immortality are possible? What if AGI is the best we can do?
The author addresses that in a comment above: https://www.astralcodexten.com/p/you-have-only-x-years-to-escape-permanent/comment/194177455
It’s not about superintelligence or immortality, it’s about GDP increasing by 1,000 times within 30 years of AGI.
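For scale, a 1,000x multiple over 30 years pins down the implied growth rate exactly. A quick check:

```python
# Implied average annual growth for a 1,000x GDP multiple over 30 years.
multiple, years = 1000, 30
rate = multiple ** (1 / years) - 1
print(f"{rate:.1%} per year")  # ~25.9%, vs ~2-3% for modern developed economies
```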
GDP growth doesn’t matter if people cannot afford goods and services. See demand, not supply. Current economic theory assumes humans are the workers earning money, which will not be the case post-AGI.
Even if UBI exists, there will be services where humans are necessary (for example, electricians, cooks, etc.) for which the cost will be much higher.
Besides, can Earth really withstand 1000 times the resource extraction? I can only assume that he assumes there will be faster-than-light space travel, which is impossible.
Anyway, this doesn’t change my original point that Scott just assumes there will definitely be some superintelligence in the future that makes this time meaningful, when AGI with no superintelligence is a more likely outcome.
Ok. Why do you feel that "AGI but no super-intelligence" is more likely?
I mean it's possible we get vast economic growth without super-intelligence. Molecular nanotech seems pretty powerful.
The human brain is clearly orders of magnitude from the various physical limits.
It looks like runaway self improvement probably starts somewhere around human level.
I think it's fairly plausible to get AGI several years before superintelligence. But what does your AGI without superintelligence scenario look like in the long term?
Is there a continued R&D effort on AI?
I don't know why you think molecular nanotech is possible with AGI but humans can't do it by themselves.
Anyway, my main argument for my skepticism is that a priori we should not think the universe has some hidden structure, especially a structure that we cannot find out but some other hidden algorithm can. The first statement follows from Occam's razor, and the second from the accuracy of experiments in physics. Fields like biology are an exception, but there are similar reasons to believe we can't 'solve' them, as in find all it is possible to find in them using knowledgeable humans and ML.
As for the claim that intelligence can be infinite, first of all we don't know enough about intelligence to measure it properly (except by comparison). We don't know if it is scalable, and if there is a ceiling for it. And even if intelligence is infinite, if there is not enough structure in the universe that can be exploited it is still meaningless.
I think AGI with no super intelligence looks very similar to now, except most humans don't work and survive on UBI.
> I don't know why you think molecular nanotech is possible with AGI but humans can't do it by themselves.
I didn't claim that. I think that it is possible for humans to build nanotech.
> a priori we should not think universe has some hidden structure, especially a structure that we cannot find out but some other hidden algorithm can.
Suppose I see a car. The car wasn't made by me personally. I personally don't understand the details of metallurgy. That car exploits a "hidden structure". One that I don't understand, but that other humans do.
Look at a leg. Muscle proteins. Etc. Exploits a hidden structure that I don't understand, but that evolution can exploit.
And then there are the existing ML. Say a cancer detecting neural net. There must be some hidden structure in those cancer scans that an unaided human can't exploit, but that a neural network can.
What strange priors are you using where other humans being smarter than you is allowed, and existing ML being smarter at one specific thing is allowed, but the ML being generally smarter isn't allowed?
> Fields like biology are an exception, but there are similar reasons to believe we can't 'solve' it as in find all it is possible to find in it using knowledgeable humans and ML.
Let's suppose that the existing laws of quantum field theory are basically correct. Or at least that any new physics only involves particles existing for femtoseconds and so isn't useful. (Maybe this is true, maybe not.)
There is a huge amount of stuff that is possible under known physics. Eg nanotech.
So the minimum plausible superintelligence is that it invents nanotech in 3 days when humanity would have taken 100 years (and even then, not have made anything quite as good).
> As for the claim that intelligence can be infinite
Who made that claim? I think that claim is pretty dubious. I will claim that there are 6 orders of magnitude between neuron firing speeds and the speed of light. (And orders of magnitude inefficiencies in other way in the human brain)
> if there is not enough structure in the universe that can be exploited it is still meaningless.
What do you mean "not enough structure". We basically know that nanotech is possible. We don't have it yet.
> I think AGI with no super intelligence looks very similar to now, except most humans don't work and survive on UBI.
Not quite what I meant. So this world, it's "similar to now". Which presumably implies it's a long way from the maximum technology which we know must be possible under known physics. (Ie they don't have nanotech)
So, the AGI isn't yet good enough at R&D to invent nanotech/and or it doesn't have the time.
If the AGI were fundamentally incapable of inventing nanotech, and humans couldn't make smarter AGI, so that we had to invent nanotech by hand, that's maybe 100 years.
Let's assume that the moment the first AGI appears, all the robots have already been built, so we can move to the robotized economy instantly. But you still have humans and/or AGI doing ASI research and nanotech research. So this state will last for maybe 5 years max, before the nanotech is invented.
Any civilization with an R&D department that doesn't yet have ALL the tech isn't a long term stable situation.
Right, nanotech is indeed something a super intelligence, if it exists, can do. Thanks for reminding me.
> Suppose I see a car. The car wasn't made by me personally. I personally don't understand the details of metallurgy. That car exploits a "hidden structure". One that I don't understand, but that other humans do.
I am distinguishing between knowledge that is hidden from humanity as in it was detected but cannot be explained with current knowledge/math, which can be called observed information, from that which has not been detected by humanity yet. The latter is what I call hidden structure of the universe.
> What strange priors are you using where other humans being smarter than you is allowed. Existing ML being smarter at one specific thing is allowed. But the ML being generally smarter isn't allowed.
ML models are not smart per se. They are just mathematical models that aggregate information well. As for humans, I think smartness is just ability to represent information well and manipulate it well. To the extent that differences exist in it between humans, it is just a biological artefact. Real computation is due to structure of the brain that we don't understand.
> Not quite what I meant. So this world, it's "similar to now". Which presumably implies it's a long way from the maximum technology which we know must be possible under known physics. (Ie they don't have nanotech)
If AGI is not much smarter than humans making nanotech would be slower. But subjective time of AGI would be fast, so it may indeed arrive much faster compared to humans alone.
I am skeptical of your robotics claim though... Robots are much more expensive to mobilize en masse, and small jobs like street food sellers may not be automated. But it doesn't seem to matter much though if output is high enough. As in everyone may have their own chef robot, etc. with UBI. So other than issues of meaning, AGI would indeed make us materially better off.
> Besides, can Earth really withstand 1000 times the resource extraction?
Imagine a magical handheld tool that's as many orders of magnitude better than a smartphone, as the phone is better than a banana of similar mass.
Economic expansion doesn't imply linear increases in material inputs. Sometimes it's a matter of improving efficiency. Solar power has been growing while using far less silicon per watt, and we're not going to run out of silicon any time soon.
Just because I can imagine such a tool doesn’t mean it is possible to make it. When I asked about resource extraction, I meant non-renewable resources like metals, coal, gas, etc. Your efficiency point stands, but even if we use fewer minerals per device, if output can get arbitrarily large, it still wouldn’t be enough. But all this is beside my main point anyway.
Coal and gas can *become* renewable resources. https://terraformindustries.com
Metals are already routinely recycled, and "waste" left over from previous mining often ends up reclassified into "viable ore" as techniques improve.
> doesn’t mean it is possible to make it
A century ago it wasn't possible to make smartphones, either - Alan Turing hadn't even headed off to boarding school yet. Times change.
When I say impossible, I mean the impossibility of a square also being a circle, or faster than light travel. Theoretical, not practical.
What can a device do to be more 'useful' than my phone? Perhaps it can do something, perhaps it cannot. But probabilities for each case are not certain. My intuition rests with the latter, say 70-30 odds.
> Coal and gas can *become* renewable resources.
Didn't know it before, thanks!
> Metals are already routinely recycled, and "waste" left over from previous mining often ends up reclassified into "viable ore" as techniques improve.
Right, forgot about it. Although my point may still be true in the really long term, that will be enough time to do many things.
> Imagine a magical handheld tool that's as many orders of magnitude better than a smartphone, as the phone is better than a banana of similar mass.
I'd probably compare it to a flint knife, the original all-purpose handheld tool.
Lot of developed-world folks won't have an immediate intuitive grasp of how useful a flint knife really is, though. Even someone who hasn't ever personally eaten a banana or wielded a smartphone will have at least seen prices in advertisements, or heard peers discussing relevant experiences.
>"Current economic theory assumes humans are the workers earning money…"
Perhaps implicitly in some parts, but it's not load-bearing.
Price theory is not dependent on MPL (the marginal product of labor) being > 0.
There seems to be no convincing reason to think an entity more intelligent than the average, or even the smartest human, is impossible- i.e. superintelligence.
You misunderstand, we build B2B SaaS because it is our calling in life
Not because it's easy but because we thought it would be easy.
I'd be doing it right now if a job offer came in!
Maybe this is trending towards conspiracy, but something I find relevant is a lot of these memes always end up somehow benefitting existing oligarchs or power structures. "To avoid permanent poverty, you must do this thing which just so happens to make Bezos another trillion dollars". Now, I'm a capitalist, if people want to make money more power to them, but it's also relevant that those that are already powerful are more able to create and spread memes and narratives which facilitate their interests than less-powerful people. The transformative power of capitalism requires people to occasionally overturn the applecart, which means we need to be critical and careful of memes which make it harder to do that
It's not really a conspiracy. It's just the system reproducing itself.
"Ten million years from now, do you want transhuman intelligences on a Niven Ring somewhere in Dario Amodei’s supercluster to briefly focus their deific gaze on your legacy and think “Yeah, he spent the whole hinge of history making B2B SAAS products because he was afraid of ‘joining the permanent underclass’, now he has a moon 20% bigger than the rest of us?”
Yeah I absolutely do not care about what random people I will never even know exist, let alone meet, think of me or what I did eons ago. That 20% bigger moon means trillions more children, or quadrillions more years of simulated time, etc. They can have their megacathedrals to their heroes - outliving all of them by trillions of years is a much better reward.
Is it very important to you that in the distant future you have 13 trillion rather than 10 trillion children?
Having more children? Not really, that was an example. Having trillions more years with the children I already have? Yes, absolutely.
If the Singularity is coming soon, then either everyone will be dead, or everyone will be infinitely powerful (or effectively so) in the post-scarcity simulation-space of perfect bliss. In the first case, it doesn't matter what you do now because everyone who could've cared about you, including yourself, will be dead. In the second case, it also doesn't matter what you do now, because any contributions you could've made will be diluted down to epsilon by a nearly infinite multitude of simulated uber-minds. Yes, your name will be recorded in history -- which no one will care enough to read.
On the other hand, if the Singularity is *not* coming anytime soon, and there's a chance that you can secure a better life for yourself or your children, then it seems like the expected value of taking that chance could be substantial; so you should probably take it.
“Singularity” means “AI changes stuff so much we can’t predict it”. Locking in a permanent underclass would qualify, and that’s the worry of Scott’s hypothetical target audience. Yes, if that doesn’t eventuate, they should do something else with their lives. If a singularity happens, the OP notes the two scenarios you’ve described and says they’re more likely than the scenario this article addresses.
> Locking in a permanent underclass would qualify...
Sorry, qualify for what ? Do you mean "qualify as a change so large we can't predict it" -- even though you're predicting it ? I might be misunderstanding you, however.
In any case, the "permanent underclass" scenario isn't specific to the Singularity. Rather, it has been the case throughout most of human history, and the present-day worries about it merely address the fear of regressing to that state of affairs -- and that's assuming that we don't live in such a world already (which many people will tell you that we do). Unlike the Singularity, it's a fairly realistic scenario with lots of historical precedent and evidence behind it, so yes, I think it makes sense to worry about it, at least as much as we worry about other mundane threats such as global warming, asteroid strikes, bioterrorism, etc.
By contrast, the Singularity is a bit of a motte-and-bailey, with both ends being somewhat self-defeating. The motte is indeed often presented as "a change so large we can't predict it"; but if that's all it is, then worrying about it is pointless. You might spend a lot of resources on securing your place in history or a bigger moon or whatever, but these are predictable (and to be honest fairly mundane) scenarios, and therefore by definition not something that is likely to happen in a post-Singularity world.
The bailey is usually one of the two scenarios I'd described: total annihilation of humanity, or perpetual eternal bliss for everyone. In either case though, your actions today are completely irrelevant, so doing anything at all is pointless from the post-Singularity perspective. Thus, it makes more sense to ignore the Singularity prospects completely, and focus on other scenarios where your actions might actually matter... such as securing more wealth for your children or buying a bigger moon or whatever.
> In the first case, it doesn't matter what you do now
But if your actions have a chance to change whether the first or second case happens, it matters A LOT what you do now.
True, but a). this is a very big "if" for a variety of reasons, and b). wasn't really the focus of the article.
Sufficiently high IQ is indistinguishable from a kind of mental illness. One becomes vulnerable to obsession with imagined scenarios, which feel highly salient because one can attach detailed arguments to them (not equally weighting the fact that one could attach detailed arguments to their opposites, or to all of a dozen significantly different variants).
Sure, but that doesn’t only apply to the OP. I think he’s responding to other people who are already obsessed with the scenario he discusses, and who further became committed to the response to that scenario which this article argues against. There are people like that - at least one person commented that they needed to hear this.
Citation needed
I think it's a socialization issue rather than an absolute threshold. Staying grounded often requires someone smarter than yourself (to directly poke holes in your best arguments), and/or a crowd of equally-smart people which is large and diverse enough to avoid having opinions fully synchronize (to regularly present you with equally-clever arguments for things you find viscerally repulsive, so you remember that logic isn't the same as truth).
When someone's enough of an outlier, the selection of qualified peers or mentors is limited, just due to the nature of statistical distribution - and, being outliers themselves, those few people are all subject to the same risks, potentially making them net sources of instability. But, as the overall population grows, and empirically-validated mental hygiene practices accumulate, the danger zone gradually recedes.
> ... a crowd of equally-smart people which is large and diverse enough to avoid having opinions fully synchronize ...
In Berkeley, the degree of synchronization among very smart people is very high.
Useful in some ways, dangerous in others, like any form of concentrated power.
Von Neumann seemed relatively functional.
He did indeed.
I suspect that on average it's dumb people who are more obsessed with imagined scenarios (conspiracy theories, etc), not smart people.
Those are a different class of imagined scenarios and represent a different phenomenon. One pathology doesn't negate another.
This article itself links a pretty good list to be on if you want to pick up some shine by osmosis; I was expecting a tie back to it.
Also I think a 20% larger moon sounds pretty rad
You mean the Giving What We Can pledge? Scott plugs that pretty hard elsewhere. Most recently https://www.astralcodexten.com/p/the-pledge .
How is Scott such a fucking goated writer? I’ve been thinking about this for like 2 years and Scott comes out and clears all my thoughts up with 1 essay
I'm somewhat rejecting the premise by weighting superintelligence as relatively unlikely, but as a programmer, AI is _specifically_ coming for me.
If we assume AI pauses or stalls somewhere above most programmers then, yes, I could retrain and not be permanently poor.
But, it’s absolutely in my best interest to make as much money while our salaries are so insane.
Again, I know I'm rejecting the premise, but I'm somewhat suspicious my feelings are closer to what these folks are actually thinking.
> There’s no reason the colony ships won’t contain flash-drives of the whole 2026-era Internet, so, rather than being limited to a few prominent figures, these historians can study the generation around the Singularity almost in its entirety.
How sure are we this hasn't already happened? How sure are we it's not happening right now?
Does it matter? Either way I'm taking the world as it appears in terms of the choices I make
This reminds me of the second part of the story Manna.
“A number of years ago, your father purchased two shares of 4GC, Inc. in your name. These shares entitle you and one other person to come live as citizens of the Australia Project. You may leave the terrafoam system with us today if you choose to.”
https://marshallbrain.com/manna5
And see, I think the scenario people are worrying about when they talk about the "permanent underclass" is the first part of the story Manna.
But don't you need to be at the center of the events that lead to the singularity? I mean - did I just give up my place in the history books by leaving cloud and AI sales and becoming a guy that helps couples have better sex and people have more self-reflection?
Maybe you'll get drunk one night and have an engaging yet exhausting fourteen-hour argument with some AI chatbot about the nature of self-reflection, with privacy settings toggled wrong. Key insights explicitly derived from that conversation end up integrated into the definitive open-source textbook on how to pass the Turing test without cheating, and you become the patron saint of sex therapists. Folks everywhere frequently scream your name at thematically-relevant moments of frustration, or bliss.
This sounds scarily realistic
Why, thank you! It's a thoroughly cultivated talent of mine, extrapolating internally consistent premises like that from limited data. Usually I apply it to running RPGs. https://questden.org/wiki/JamesLeng
> She is now known as St. Veronica, patroness of laundry workers, and one out of every 2,500 girls in America is named in her honor.
I would guess this is an overestimate by at least a factor of 1000.
Compare the classic scene from American Gods:
-------
"Remember," she said to Wednesday, as they walked, "𝘐'𝘮 rich. I'm doing just peachy. Why should I help you?"
"You're one of us," he said. "You're as forgotten and as unloved and unremembered as any one of us. It's pretty clear whose side you should be on. [...]
Easter put her slim hand on the back of Wednesday's square gray hand. "I'm telling you," she said, "I'm doing 𝘧𝘪𝘯𝘦. On my festival days they still feast on eggs and rabbits, on candy and on flesh, to represent rebirth and copulation. They wear flowers in their bonnets and they give each other flowers. They do it in my name. More and more of them every year. In 𝘮𝘺 name, old wolf."
"And you wax fat and affluent on their worship and their love?" he said, dryly.
"Don't be an asshole." Suddenly she sounded very tired. She sipped her mochaccino.
"Serious question, m'dear. Certainly I would agree that millions upon millions of them give each other tokens in your name, and that they still practice all the rites of your festival, even down to hunting for hidden eggs. But how many of them know who you are? Eh? Excuse me, miss?" This to their waitress.
She said, "You need another espresso?"
"No, my dear. I was just wondering if you could solve a little argument we were having over here. My friend and I were disagreeing over what the word 'Easter' means. Would you happen to know?"
The girl stared at him as if green toads had begun to push their way between his lips. Then she said, "I don't know about any of that Christian stuff. I'm a pagan."
The woman behind the counter said, "I think it's like Latin or something for 'Christ has risen' maybe."
"Really?" said Wednesday.
"Yeah, sure," said the woman. "Easter. Just like the sun rises in the east, you know."
"The risen son. Of course—a most logical supposition." The woman smiled and returned to her coffee grinder.
When singularitarians say that utopia is when everyone has their own planet, is it just shorthand for "you will have a stupendous amount of resources to use as you wish", or do they really dream of living alone (or with serfs) on a planet-sized estate?
It's a science-fiction dream which will never be reality, so why not dream big? Like the Solarians in Asimov's "The Naked Sun", who have a population of 20,000 people strictly maintained at that level on their planet, with an occupied space of 30 million square miles and a ratio of 10,000 robots per human.
Except that this version makes the Solarians look like pikers. Only 1,500 square miles per person? Pfft, that's practically slum living!
An excerpt from the novel, where the Earth man (who comes from a society where the majority live underground and in closely packed quarters) has his first encounter with Solarian living:
"He had thought of a ‘dwelling’ as something like an apartment unit, but his was nothing like it at all. He passed from room to room endlessly. Panoramic windows were shrouded closely, allowing no hint of disturbing day to enter. Lights came to life noiselessly from hidden sources as they stepped into a room and died again as quietly when they left.
‘So many rooms,’ said Baley with wonder. ‘So many. It’s like a very tiny City, Daneel.’
…It seemed strange to the Earthman. Why was it necessary to crowd so many Spacers together with him in close quarters? He said, ‘How many will be living here with me?’
Daneel said, ‘There will be myself, of course, and a number of robots.’
…And then that thought popped into nothing under the force of a second, more urgent one. He cried, ‘Robots? How many humans?'
‘None, Partner Elijah.’
They had just stepped into a room, crowded from floor to ceiling with book film. Three fixed viewers with large twenty-four-inch viewing panels set vertically were in three corners of the room. The fourth contained an animation screen.
Baley looked about in annoyance. He said, ‘Did they kick everyone out just to leave me rattling around alone in this mausoleum?’
‘It is meant only for you. A dwelling such as this for one person is customary on Solaria.’
‘Everyone lives like this?’
‘Everyone.’
‘What do they need all the rooms for?’
‘It is customary to devote a single room to a single purpose. This is the library. There is also a music-room, a gymnasium, a kitchen, a bakery, a dining-room, a machine shop, various robot-repair and testing rooms, two bedrooms…’
…‘Jehoshaphat! Who takes care of all of this?’ He swung his arms in a wide arc.
‘There are a number of household robots. They have been assigned to you and you will see to it that you are comfortable.’
‘But I don’t need all this,’ said Baley. He had the urge to sit down and refuse to budge. He wanted to see no more rooms.
‘We can remain in one room if you desire, Partner Elijah. That was visualised as a possibility from the start. Nevertheless, Solarian customs being what they are, it was considered wiser to allow this house to be built…’
‘Built?’ Baley stared. ‘You mean this was built for me? All this? Specially?’
‘A thoroughly roboticised economy…’
‘Yes, I see what you’re going to say. What will they do with the house when all this is over?’
‘I believe they will tear it down.’
…’It is just that the effort involved in building the house is, to them, very little. Nor does the waste involved in tearing it down once more seem great to them.
‘And by law, Partner Elijah, this place cannot be allowed to remain standing. It is on the estate of Hannis Gruer and there can only be one legal dwelling-place on any estate, that of the owner.
This house was built by special dispensation, for a specific purpose. It is meant to house us for a specific length of time, till our mission is completed.’ "
I don't know what I'd do with an entire planet. Although I would want my estate to be large enough to have different climates to ensure there was always a beach at the right temperature for swimming, always a mountain with good snow for skiing.
Also, you just made me want to read that book. I've read a few Asimovs but haven't encountered Olivaw between Caves of Steel and Foundation and Earth so he's always just annoyed me a bit. Maybe reading the intervening books will make him make more sense.
"Robots and Empire" has essential bridging plot between the three earlier (Baley + Daneel) novels and the later Foundation ones.
I do think the earlier robot books are better; the later ones used the advances in understanding of technology to be more updated, but the earlier ones (though they have the limits of the time they were written) are fresher and more original.
Elijah Bailey doesn't always get on with his robot partner, and R. Daneel Olivaw definitely does come across as the patronising nanny taking care of the childish humans (even the Spacers, not just the Earthmen). He's more likeable when Elijah takes him down a peg and gets under that skin of robot superiority.
This fantasy feels like the antisocial introvert's version of "in heaven everyone has their own mountain of gold" - an extrapolation of the immediate desire to have more real estate and fewer neighbors to an extent at which it becomes seriously inconvenient. In such a sparse universe it'd be hard to have authentic culture, friendships and romantic relationships. Might as well lock yourself in a Nozick experience machine; that at least wouldn't be a flagrant waste of the cosmic endowment.
(Either that or the viability of FTL is taken for granted.)
If you read the revisit to Solaria in "Foundation and Earth" 20,000 years later, I believe they were down to only a few thousand. They are also transhumanists at that point, don't think of themselves as human at all, and have bioengineered themselves to manage the allocation of thermodynamic energy on their estates. That's roughly my expectation for what Earth would look like a century after the development of ASI if AI alignment were "solved" but ended up simply aligned to the wishes of the AIs' controllers. The Solarians still like to have a few others around because they like having other people appreciate their expertise at old-timey crafts, which have a sort of bespoke value; the one they visit in the later novel has the best orchards and trades fruits.
"your worst-case scenario is owning a terraformed moon in one of his galaxies"
I have to admit that my first thought was 'Wouldn't it suck if it was one with low gravity, though?' Lifestyle creep has a really fast onset.
Just engulf the moon of some shut-in wirehead, gravity problem solved!
Wonderfully said.
Our fiction is so full of future dystopias - not just "non-great societies", but societies where the situation is upheld with intent - that we tend to forget how many things need to go wrong in exactly the right way to get there.
Expecting dystopia of a specific flavor by default seems like a massive failure of the imagination.
This and related questions deeply interest me.
Would anyone be interested in any of the following?
Real money bets, bet matchmaking or prediction markets about different scenarios, like
- Probability of 99%+ unemployment in 2, 5, 10, 20, etc years.
- Probability of some kind of (important to specify) permanent underclass existing within N years
- Probability of UBI existing e.g. in the US within similar years.
- Probability of strong enough governance mechanisms (needs to be defined) that guard against strong and permanent power concentration.
- And more.
I have tried to look; prediction market coverage for such questions seems spotty, and even more so for real money markets. (There might not exist any; please say so if you are aware of any.)
Why would such real money markets be good to have? A couple of reasons.
Let's say these markets predict a very high chance for some or all of these bad outcomes. That's very important to know personally, and also to have collective knowledge of: it gives us the chance to prepare or make peace with what's coming, or to try to choose a different path while that's still possible.
If these markets show very low probabilities for some of the bad outcomes (we can define more questions than the ones above, to avoid having a bad market because of some technicality), that might really assuage some people's fears. More crucially: this is a hedging opportunity for those still afraid. If, say, the market predicts only a 5% chance of even a 30% unemployment rate within 10 years, then I and others might be very tempted to bet at 1-to-20 odds in this market: a 50k USD bet would pay out 1 million USD if the bad outcome does happen.
Importantly, some or all of these markets will have to use the 'apocalypse bet' scheme (there is a LessWrong article with this title, published in 2007, that you can read if you are unfamiliar). In a regular prediction market both sides pay in upfront, and payout only happens upon resolution. However, in this case, if someone truly believes there is only a 5% chance of 99% unemployment in 10 years, the opportunity cost of locking up 100 USD to get 105 USD in the end is unthinkable. But if the pessimistic side immediately pays the optimistic side 100 USD, with a legally enforceable repayment of 2000 USD upon the pessimistic resolution (and nothing upon the optimistic resolution), that might just work.
Why would anyone go into such conditional debt to bet on the optimistic side? The same reason we expect prediction markets to work: it can just make financial sense to bet on a probability if you have strong enough reason to expect that you are correct; it's free money on the table in expectation. The further the current market percentage is from your conviction, the stronger the incentive to bet.
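To make the incentives concrete, here is a minimal sketch of the expected values on both sides, using the illustrative 100-vs-2000 USD terms from above (all numbers are assumptions, not terms anyone has offered):

```python
# Expected value of the 'apocalypse bet' for both sides.
# Terms (illustrative): pessimist pays 100 USD at signing; optimist
# repays 2000 USD only if the pessimistic event actually occurs.
UPFRONT = 100.0
REPAYMENT = 2000.0

def ev_optimist(p_event: float) -> float:
    """Optimist keeps the upfront for sure, repays only with probability p_event."""
    return UPFRONT - p_event * REPAYMENT

def ev_pessimist(p_event: float) -> float:
    """Pessimist pays the upfront for sure, collects only with probability p_event."""
    return p_event * REPAYMENT - UPFRONT

# These terms imply a break-even belief of 100/2000 = 5%:
print(ev_optimist(0.05))   # 0.0   -- indifferent at exactly 5%
print(ev_optimist(0.01))   # 80.0  -- a confident optimist sees free money
print(ev_pessimist(0.30))  # 500.0 -- a worried pessimist also gains in expectation
```

Both sides can have positive expected value at the same time precisely because their probabilities differ; that disagreement is what makes the market work.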
There is also the general objection to prediction markets that long-timeline resolutions are fraught because of the time value of money and the opportunity cost of locking up money for a long time (e.g. inflation and the lost ROI you could have had otherwise). I agree with this.
Regular prediction markets could possibly solve this by having both sides bet using some appreciating asset, like an S&P-500 tracking index, so the payout is also pegged to that. To my knowledge this innovation is not actualized anywhere yet. Or is it?
A 'doomsday market' could also use the exact same mechanism: the initial transfer is in USD, but the repayment is expected in some pre-agreed type and quantity of a security, or the then-current market value thereof.
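A minimal sketch of that pegging mechanism; the index prices below are placeholders:

```python
# Repayment denominated in a pre-agreed security rather than nominal USD.
# At signing, convert the USD figure into index units; at resolution,
# the optimist owes the then-current market value of those units.

def units_at_signing(usd_owed: float, index_price_at_signing: float) -> float:
    return usd_owed / index_price_at_signing

def usd_due_at_resolution(units: float, index_price_at_resolution: float) -> float:
    return units * index_price_at_resolution

units = units_at_signing(2000.0, 500.0)      # 4.0 index units
print(usd_due_at_resolution(units, 1000.0))  # 4000.0 USD if the index has doubled
```

This keeps the debt's value roughly constant relative to the chosen benchmark, which is what defuses the time-value objection above.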
Apart from grabbing free money on the table in expectation (if they are correct, they can just keep it; no repayment will happen), why else could it make sense for the optimistic side to engage? Multiple reasons:
- If money ends up abundant in the future then it might be trivial for the optimist to pay when needed.
- They might expect to make more use of the capital now, at the hinge of history, than at any other point in time: they can have more leverage to steer, so this can be an efficient transfer mechanism from those who believe they can’t to those who believe they can.
- They expect to have better returns than what the underlying security and the payback multiplier will command: e.g. they just invest it all in NVIDIA and will be laughing all the way to the bank both ways even if they have to pay it back and more in S&P-500 later.
- Altruistic drive: they strongly enough believe in the goodness of humans, so they think apart from everyone dying (which this kind of betting fully discounts anyway) almost all other futures will be very good, so they can gift this warm reassurance to other humans as well by taking their money now and providing a legally binding guarantee to them that should the bad outcomes happen, they have their backs. A form of pre-commitment to an altruistic insurance scheme.
- Altruistic drive squared: if afraid people have a strong enough guarantee via some scheme like this that they are protected in situations that might otherwise be bad for them, that very likely frees up bandwidth they can redirect to some other end, e.g. working on technical alignment or governance.
Why not just invest in S&P-500 and other securities directly? I think one should! But it’s a very roundabout bet compared to the outcomes we actually care about, and it entangles with lots of other things. I myself would endorse a diversified portfolio that includes such bets as well.
So does any of this sound interesting to any of you?
- If something like the above existed, would you want to see what the predicted probabilities are?
- If the probabilities are strongly enough skewed in some direction or another, you might want to enter a bet for one of the motivations listed above, e.g. hedging?
- If such markets do not exist and no one will create them, would you be interested in entering into such one-off contracts with regular people nonetheless? I’m serious enough about this that if we can hammer out some details (which is mostly just coming up with good questions and criteria that we can also publish) and the wording of a good-enough legally binding contract, I would be interested in entering into such contracts with some of you. Let me know below if you are interested, and whether you are optimistic or pessimistic. And any important conditions you may have.
- Maybe such markets do exist and we just need to find them and inject liquidity? Maybe in the crypto space?
- Or maybe the platform exists, and the questions just need to be written, published, and popularized? PredictIt, for example, could potentially be very good, but as far as I understand it’s not easy to get questions published there.
- If there is strong enough interest and no close enough prior art, then creating a platform like this might be quite good and quite important. Let me know if this interests you too; I might be motivated enough to create it if there is enough interest.
p.s. Robin Hanson writes an important comment about such asymmetric bets: “I'm afraid all the bets like this will just recover interest rates”. While I think that applies to Eliezer’s article as written, I think what I write above avoids that issue, but let me know what you think.
p.p.s. Before I or anyone else creates such a pessimist-vs-optimist market, I’d strongly hope we can discuss and consider the potential feedback loops that might start: e.g. if it predicts very bleak outcomes, and everyone knows that everyone knows that bleakness is to be expected, will that help or hinder in expectation? Right now I think it will help, because steering earlier is easier than steering later, but I’m very open to other viewpoints as well.
FWIW I think these are all pretty easy questions to resolve:
> Probability of 99%+ unemployment in 2, 5, 10, 20, etc years.
Of course this depends on what you mean by "unemployment", but I'd put that at somewhere around epsilon. Historically, new technologies often do lead to unemployment, but I can't think of any period in history where unemployment rose to 99%; not even in war-torn failed states (granted, you might be paid in bottles of vodka and shotgun shells rather than official currency but you are still employed).
> Probability of some kind of (important to specify) permanent underclass existing within N years
Yes, it's important to specify, because a permanent underclass already exists in many places on Earth and in fact had always existed; some people would argue that it currently exists even in developed countries such as the US.
> Probability of UBI existing e.g. in the US within similar years.
AFAIK some countries (and perhaps even US states?) are already piloting UBI on a test-case basis, so again, no bet.
> Probability of strong enough governance mechanisms (needs to be defined) that guard against strong and permanent power concentration.
Once again, some would argue that such mechanisms have been in play for centuries; perhaps their time in the US is coming to an end, but they are still somewhat functional in e.g. Europe. Of course, some other places are effectively permanent dictatorships...
> somewhere around epsilon
Are you saying that within 20 years (and we can try to nail down the specifics) you'd put less than 1% probability on there being 99% unemployment (as I understand it, drawing from historical base rates)? Maybe even less than 0.1% probability?
Does that mean that you might be happy to make a bet similar to this (maybe in a similar form that I describe) at 1:100 odds?
> Maybe even less than 0.1% probability?
Even less than that; for reference, during the Great Depression unemployment rose to unprecedented levels of about 25%. However:
> Does that mean that you might be happy to make a bet similar to this...
Firstly, that depends on how you define "unemployment" (e.g. if I work one day out of the year, am I "employed"? Are we sampling e.g. the continental US population, or some remote mountainous region with exactly one resident?). Normally I wouldn't be so pedantic, but if money is involved, then it really pays to nail down all your definitions.
Secondly though, I'm pretty old, so the chances of me being alive in 20 years are sadly much lower than the odds of the bet...
Chimpanzee unemployment is 100%. In an AGI/ASI world, humans are basically chimps; you would never use one for productive activity. Nor is there a UBI for chimps; why would there be? You don't actually need their consent or cooperation to do anything, so it's not necessary to spend any resources satisfying their desires.
In the AGI/ASI world if you're still alive you're probably akin to a chimp on a nature preserve. Maybe one chimp is clever and trades bananas for companionship with a hot lady chimp, and we would probably be doing the equivalent of that and you might squint and call it employment, but it won't be part of any meaningful chain of production.
> Chimpanzee unemployment is 100%. In an AGI/ASI world, humans are basically chimps, you would never use one for productive activity.
I absolutely agree that in a world where quasi-godlike nearly-omnipotent entities actually exist, and are able to usher in post-scarcity powered by virtually unlimited resources, human work would be pointless. I also believe that the probability of such a world coming to pass is epsilon -- not just in the next 20 years, but most likely ever.
You don't quite need that; you just need the "island of geniuses in a data center" to be sufficiently better that spinning one up to do the task is always indisputably better than using a human. I would be stunned if it didn't get to that level within a decade, even if for some currently unknown reason intelligence caps out at some point short of weird nearly-magic alien minds.
From the time they're notably better than humans, humans are mostly unable to be productive, the redirection of economic production away from consumer goods is inevitable, and from there you'll get to the point where we're not participants in the economy anymore. Some people suggest Ricardo's theory implies lower-productivity humans would still wind up performing certain tasks, but that fails for multiple reasons. For one thing, there is no real limit to the number of AI instances you could spin up other than energy; unlike two nations of different tech levels in trade, the AIs are infinitely replicable and summonable on command. For another, productive tasks themselves will likely use new tech that only the AIs can operate and that exceeds the capabilities of humans. A chimp can pick bananas, but we still don't use them to do so, because they can't use any of the tools we employ to do it at scale; your local productivity would have to be practically nil before that was a competitive use case. (A toy version of this argument is sketched below.)
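Here's that toy version; every productivity and cost figure below is invented purely for illustration:

```python
# Ricardo's comparative advantage assumes each trading partner has a
# FIXED labor supply. AI labor doesn't: new instances spin up whenever
# the marginal energy cost is below the value of the task.

AI_BANANAS_PER_HOUR = 50.0       # assumed
HUMAN_BANANAS_PER_HOUR = 1.0     # assumed: the human's best remaining task
ENERGY_COST_PER_AI_HOUR = 0.10   # USD, assumed

# Cost for the marginal AI instance to produce one banana:
ai_cost_per_banana = ENERGY_COST_PER_AI_HOUR / AI_BANANAS_PER_HOUR  # 0.002

# A human is only hired if their wage beats the marginal AI instance
# producing the same output:
human_wage_ceiling = ai_cost_per_banana * HUMAN_BANANAS_PER_HOUR
print(human_wage_ceiling)  # 0.002 USD/hour
```

With a fixed AI labor supply, Ricardo would still hand the human the banana-picking; with an elastic one, the trade technically still clears, but at a wage pinned to the energy cost, i.e. effectively nothing.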
I actually gave some thought to the notion and I'm pretty sure I'd rather go unremembered (which, yes, invites the easy snark that it's easy to achieve). I find the idea of living on as a faint memetic echo in other people's mindslop vaguely repellent. Not sure Veronica would have been that enthused to learn that her legacy would be to be idolized by a weird and frankly heterodox offshoot of what was presumably her sincerely held religion.
"I find the idea of living on as a faint memetic echo in other people's mindslop vaguely repellent." I'm glad you're here with this perspective.
I can't tell what the sincere position is behind this post of Scott's which makes it hard to know how to take it in. If it's farce all the way down, okay. But if there's something here intended to convey, what is it? Is it: "stop panicking, you'll be fine, and so give yourself permission to do something useful"? But where "useful" is being pitched as "remembered in a high-status way"?
I see around me lots of young people (I'm an old fart) who are quite worried about their futures and their fears in the face of so much uncertainty seem understandable to me. But I guess he's talking to a very small elite slice of that group that is well off but wants to be crazy rich? It rings hollow to me.
> The “permanent underclass” meme isn’t being spread by poor people - who are already part of the underclass, and generally not worrying too much about its permanence.
It seems far from obvious to me that poor people who know about the issue are generally not worried about it. The way I see it, pro-capitalist poor people have faith in the economic system's ability to elevate people with merit out of the lower class, while anti-capitalist poor people dream of eliminating class distinctions entirely and oppose any developments that make this harder. When I think of poor people who are fine with themselves and their families being part of a permanent underclass, all I can think of is monarchists and very pious religious people.
It'd be interesting to see a survey exploring this.
> When I think of poor people who are fine with themselves and their families being part of a permanent underclass,
Until recently, the "underclass" was often half starved. Even today, the material conditions of the underclasses are often not great. The idea of a social underclass who are "poorer" but still have more money than they can reasonably spend is something that only makes sense in the context of post-singularity economics.
Days without Silicon Valley advancing on the Personal Malaise Poorly Disguised As Existential Concern treadmill: 0
Rat race in the light cone.
Scott, I love you, but this is one of those pieces where I have to ask "Do you actually know any poor people?" Not just "Oh yeah, one of my patients one time when I was toiling in the Midwest before I could escape was poor" but "yes, I have a family member/friend/someone I interact with more than 'cashier at grocery store' who is poor". People for whom a 50 cent increase on the price of goods does affect what they will eat and what they will purchase in the grocery store ('well, looks like chicken legs are off the menu, how much is mince?')
I guess it is correct that this piece is aimed towards "neurotic well-off people in Silicon Valley" since they are the only ones in a position to benefit from the likes of:
"Even if you end up there, you’ll be fine. Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies."
I had to look up who this guy was, and while that's very nice of him to pledge 10% of his wealth, who is going to get that 10%? Hey Dario, I am happy to give you details of my bank account if you want to pay in $100 per month out of your spare cash to me.
That's not the kind of philanthropy they mean, though, is it? Not giving to an actual real person, but rather "set up a trust which will administer a foundation where people can apply for grants to get their start-up running" philanthropy.
Because I don't believe in the 'the super-wealthy will own galaxies, and you too can be part of that' future. This is what real-world wealth does in our current world, when taxes start biting. I'm sure Jeff Bezos has plenty of philanthropy associations and charitable wotsits, but when the rubber hit the road about taking a chunk out of his vast profits, that was a different matter.
One man was able to affect the budget of an entire state just by moving house.
https://luxurylaunches.com/celebrities/jeff-bezos-relocation-shook-the-state-budget-12312025.php
"Jeff Bezos is so rich that when he moved from Seattle to Miami, it shook Washington’s entire budget; now, the Evergreen State has $1 billion less to spend on K–12 education and childcare, all because of a single address update
According to the WSJ, in November 2023, Bezos announced that he was leaving Seattle after nearly 30 years and relocating to Miami. Publicly, he framed the move around family and logistics. His parents had returned to Florida, and his space company’s operations were increasingly concentrated there. Quietly, the timing lined up with Washington state finishing its legal battle over a new and controversial 7 percent capital gains tax. The law survived a Supreme Court challenge in March 2023. Bezos’s departure came months later, and the financial consequences became visible almost immediately.
Washington’s capital gains tax is unusual by American standards. The state does not tax wage or salary income, but since 2022, it has imposed a 7% tax on long-term capital gains above roughly $262,000 from assets such as stocks and bonds. Real estate sales are exempt. Retirement accounts are exempt. The tax was explicitly designed to fall on a small number of very wealthy residents rather than the broader population.
...In its first year, the tax appeared to work exactly as intended. Collections came in between $786 million and $890 million, beating forecasts. Fewer than 4,000 taxpayers paid it. More than half of the revenue came from just ten individuals, most of them in the Seattle area. That concentration was always part of the design, but it also created an obvious vulnerability.
The second year exposed it. Receipts fell to roughly $430 million as wealthy taxpayers adapted. Gains were deferred. Sales were structured differently. Some taxpayers simply changed where they lived. That is where Bezos looms over the entire experiment."
This is what the super-abundance thanks to AGI future will look like. Them that has, gits. The ultra-wealthy act to protect their wealth, and the rest of us can only hope for crumbs to fall from their tables.
And I don't believe in "yeah, but the super-future means that the tables will be so loaded, even the crumbs will be plenitude". I know that even if all of Bezos' wealth were divided up, it would amount to just over $700 per person in the USA, and that would be a one-off payment; once the money was spent, the poor/underclass would still have to make a living for the rest of their lives. Split it up among the 9 billion of the world and we're talking $27 each.
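The arithmetic, for anyone who wants to check it (the ~$240 billion net worth is an assumed round figure; it moves with Amazon's stock price):

```python
net_worth = 240e9          # assumed; fluctuates with the market
us_population = 340e6      # roughly the current US population
world_population = 9e9     # the round figure used above

print(net_worth / us_population)     # ~706 -> just over $700 per person in the USA
print(net_worth / world_population)  # ~27  -> about $27 per person worldwide
```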
So we are in the ridiculous situation where taking the wealth of the super-wealthy won't, in fact, lift anyone out of poverty but if they get to keep it, they can hire out Venice for their second marriage, run their own private space ship programme, or affect the entire education budget of a whole state by fucking off to a lower-tax state.
That's how money works. That's how wealth will continue to work in the future. But I'm sure Jeff will be happy to buy tickets to the Met Gala because him and Mrs Jeff II are so devoted to charidee.
> This is what the super-abundance thanks to AGI future will look like. Them that has, gits. The ultra-wealthy act to protect their wealth, and the rest of us can only hope for crumbs to fall from their tables.
> And I don't believe in "yeah, but the super-future means that the tables will be so loaded, even the crumbs will be plenitude".
> I know that even if all of Bezos' wealth was divided up, it would amount to just over $700 per person in the USA and that would be a once-off payment and all the money was spent and the poor/underclass would still have to make a living for the rest of their lives.
In the current world, the super rich don't collectively own that large a share of the wealth. Sure they have a lot per person. But it's <10% of the overall wealth in the world. (Or something, depending on details)
And 1 states education budget, or all the hotels in Venice, or a rocket program or whatever the super rich are up to, is a fairly modest fraction of humanities total wealth, so that checks out.
Scott is talking about a hypothetical future where the super rich are MUCH MUCH richer. So rich that one of them sharing out their wealth evenly is more than enough to make everyone rich-by-modern-standards. A future where the super rich own 99.9% of the stuff, but that remaining 0.1% is still enough for everyone else in the world to get a private jet.
I'm still trying to figure out how that works.
Suppose some really nice person discovers one billion tons of gold in a shallow deposit and decides to share it equally among the world's population. Suddenly everyone gets several million dollars of gold, at today's prices.
Except they don't. Since gold is now so abundant, there's no longer any need to pay a premium for it. The price of gold collapses, and everybody is back where they were before, except they have all the gold jewelry they can possibly use.
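For scale, before the price collapses (assuming a pre-collapse price of roughly $85 per gram, today's ballpark):

```python
deposit_tons = 1e9                # the hypothetical find
grams = deposit_tons * 1_000_000  # metric tons -> grams
population = 8e9                  # rough world population

grams_each = grams / population   # 125,000 g, i.e. 125 kg of gold per person
print(grams_each * 85)            # ~10.6 million USD each at pre-collapse prices
```

That deposit would also be several thousand times all the gold ever mined, which is exactly why the price couldn't survive the disbursement.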
Why wouldn't massive disbursements of fiat currency or cryptocurrency have the same effect, but without the jewelry side benefit? We saw this on a tiny but politically salient scale post-COVID, when Biden's stimulus helped fuel inflation.
Under such a scenario, all those whose initial wealth is much less than the disbursement amount effectively find their wealth equalized at some small value. A clear benefit to the poorest, a clear disbenefit to the middle class.
It’s not about the money, it’s about the stuff. Without endorsing the claim, the claim is that there will be so much stuff (energy, intelligence, and ability to combine them cheaply with raw materials to make anything) that there will be plenty to go around. You are thinking about sharing money, but printing more money just leads to inflation. Real growth means there is more stuff.
Where would the stuff be coming from, realistically speaking? Are we talking about converting the Solar System to computronium, or "merely" about building affordable housing on the ocean floor, or what? And if it's the computronium, then what is preventing humanity from growing exponentially to fill all the available [virtual] space, just as we've always done?
Stuff. Like food? We have more than enough food to go around, yet people still starve. More food will probably help a bit.
Clean water? Granted, great opportunities there; with some environmental tradeoffs.
Clothing? We have more than enough clothing to go around too.
Health care? Clear potential benefits (or disbenefits, if extinction is an option), but not really in the "stuff" category.
Weapons and warfare? Whoops, that's a category where we want less stuff, not more.
Those are the five big items affecting personal well-being.
What else would be beneficial to happiness? Financial security? Well, that's about the money. Good friends and family? That's neither money nor stuff. Love? The sign of the correlation between love and technology is still unclear.
There are two more items I think are worth mentioning: entertainment and personal fulfillment. So far, it seems that entertainment is the biggest AI-based growth area. Conversely, personal fulfillment is the most challenging. Lots of standard goals in life (providing for family, making the world a better place) theoretically go away. Perhaps the biggest contribution of technology to personal fulfillment is YouTube's how-to videos. But that becomes obsolete when AI knows better than you ever can.
At best, it seems that infinite stuff only implies marginal improvement in typical happiness.
The obvious lesson is not to tax wealth but instead the unimproved value of land. This also doesn't show that Bezos wouldn't donate to charity (rather than taxes).
.".In its first year, the tax appeared to work exactly as intended."
And then there were unintended consequences.
I thought this was all really well said.
It also seems to me that the precarious feelings of this present moment can't all be dismissed by data or reasoning.
The year is 5,000,000 AD.
It is morning, and so the 144,689th reincarnation of the One Whom Cartoons Hate is brought in chains to the Writing Temple.
Bound on an ornate throne of blended ivories from across the stars, her facial twitches are transformed into hot takes.
Bold, red words begin to appear in the swirling word mists of the grey mirrors that ring the vast room, dominating the Maelstrom of the Discourse that flowered within them moments before. The gathered Subordinate Wordstackers pause and read:
The Loneliness Crisis on Personal Moons Is a Revealed Preference:
Our distant ancestors used to live communally, packed into low rise suburbs; maybe we all just like a little distance from each other?
"Incredible", says one, "the title alone makes you want to leave the personal moon and live in a real community"
"People say that", says the older one, "but then the guy one mountain over has his clankers build a monument to his dog and ruins your eyeline"
10/10
Wtf? Has Scott gone techno millennialist now? Or did the joke fly above my head?
Edit: and what is all this stuff about being motivated by what future people might think of us? Why on earth would you want the approval of people who don't even exist?
Roko's Basilisk except with social disapproval, I guess.
Everyone ceases to exist eventually. I wouldn't particularly want my ancestors to despise me, although some of them probably would.
This was about future people's opinion of the present, not past people.
With ancestors it's even worse, because we know something about their culture. I'm sure a whole bunch of my ancestors would despise modern standard liberal ways of living, simply because they fall pretty far from the Overton window of their times.
Yes, they would, but perhaps that invites certain questions about the sustainability of liberal lifestyles. And for all their rough edges, they also made enormous sacrifices to ensure we could be here.
Ehhhh fine enough to be grateful for our ancestors, but Scott was talking the other way around, that we should worry how our *successors* would see us.
Either way, I don't see much point in worrying about how people in the distant past or future would judge us, because I don't particularly think that wisdom increases or decreases in any kind of steady way.
Only the things that transcend the individual can transcend death. If you're committed to the position that only your sole individual life and immediate sensations matter then I guess I can't argue you out of that. But I don't think it's irrational or even all that weird to be concerned with the wider continuity and flourishing of the species, and whether we'd be remembered as contributing to that, in what limited capacity we can.
You keep mistaking my skepticism for egoism. I'm making the narrow point that the framing "what would our ancestors think of us" puts authority in the past, as if they knew better. Scott's framing "what will future generations think of us" puts authority in the future, as if they will necessarily know better. (Obviously, both can't be true at the same time.) I'm all for flourishing, but as far as I can observe, each time gets to define what flourishing means to them.
"Now you can stop worrying about the permanent underclass and focus on more important things."
Probably the most cold-blooded take on "the poor you will have with you always" that I've seen in quite a while. 'Yeah, ignore the less well-off, they'll always be scraping by. You're a software engineer, you are high value human capital, this is your chance to become famous/rich!'.
Gee, I wonder why EAs got a rep for caring about those safely thousands of miles away and ignoring the suffering on their own doorstep? Well, I guess that person in a minimum wage manual labour job is just not as *cute* as the liddle shrimpies, let's worry about humane treatment for the shrimpies and to hell with the underclass!
This entire post is about how you should *not* focus on getting rich before the singularity. How could you possibly misinterpret it this badly?
Because being rich is good? Because people who say things like "ignore the here-and-now in favor of a place in this insane future sci-fi fantasy" are typically cult leaders.
The entire post, if I'm being maximally snarly, is "fuck the people who can't afford to sink a ton of money into investments, they'll always be poor losers". It's "YOU'RE okay, you're the smart people who are currently making a ton of money and are worrying about being only a little rich, but in future we'll all be so rich that it won't matter (except, of course, for the poor losers who couldn't scrape together a million in savings and investments but like we said, fuck them)".
It's "so if you can't brag about your money, what can you brag about? here's a few things to try".
It damn well is not caring about the people who, if their car breaks down, have their lives ruined because that means they can't get to work; can't get to work means lose their job; lose their job means no easy new job to slide into; no new job means no money means ending up on the street because they don't have the savings to absorb loss of income because they can't afford to fix the car in the first place because they don't have that money.
Those people are going to be the permanent underclass that nobody needs to worry about, as they are too busy worrying about "but how can I inscribe my name in the history books?"
I think your issue is with what the post *is*. It's not meant to be targeted towards the actually existing underclass; it's targeted towards rich people. That doesn't mean any of it is wrong or not useful. Someone else is probably writing a post about how the current actual underclass is going to be a permanent one in the post-singularity future, and that all of the neurotic Silicon Valley people worried about becoming part of the permanent underclass won't be. Except you'd like that post, despite it making the exact same factual claims, because it's aimed at people who aren't neurotic Silicon Valley people.
This interpretation assumes a model in which the only way to benefit from the wealth gained in a post-singularity scenario is present-day investment. I don't see anything in the article to imply that this is Scott's working model. Indeed, the sentence directly before the one you quoted seems to imply that it is not:
"Dario Amodei has taken the Giving What We Can Pledge to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies. Now you can stop worrying about the permanent underclass and focus on more important things."
One would assume the "less fortunate" Amodei is providing for in this scenario include the class of people you're concerned about. So, if this is Scott's proposed reason that we "can stop worrying about the permanent underclass", then it seems everybody is being told to stop worrying and nobody is being told to get fucked.
EA doesn't even care about them. Much like all virtue peddlers, they care only about the social status they gain by publicly pretending to care. Which is precisely why personal virtues need to remain personal. As soon as they become public they become subject to Moloch.
I know for a fact that a month after I die, nobody will remember I ever existed. And I'm fine with that. I'm simply not an interesting person in any way.
There’s an interesting possibility that I haven’t noticed people posting about: we merge with AI. I now use it a lot, not for personal support and affection, and not for executing my ideas, but for information, help doing things and help figuring things out. GPT5.2 is to my mind what Sigourney Weaver’s power loader suit was to her body in Aliens. Others are partially merging with it more the way one does with a beloved person, others the way some do with therapists, priests and teachers. It won’t be long before people can have intense sexual experiences with AI via some combo of AI role-playing, AI-generated video and AI-controlled body stimulators.
And there’s another factor operating to make a merge likelier: it seems to me that connecting AI with the deep brain processes of a smart mammal is the most promising way to overcome many of the deficiencies that limit current AI. Right now it can’t learn the world the way it learned language. It needs senses and locomotion for that, plus processing innards set up in advance to organize that body of experience and integrate it with language. And how much can you understand if you aren’t deeply familiar with the physical world? And AI can’t remember chats, can’t ruminate, can’t learn from experience, has no self-generated preferences or goals. All those things are as easy as falling off a log for us, and I’m skeptical of the idea that we can just build the capacity to do them out of especially clever electric circuits. Sure, our own capacity to do them runs on circuits, but we know relatively little about how that happens. I’m guessing it would be easier to find a way to link AI to the brain activity of a person and have it be shaped by that than to build deep, integrating parts of the human brain from scratch with the materials and methods we have now.
And a future where we merge with AI is one regarding which the questions of what will it do for us or to us don’t arise. As for the corporations that developed the tech — that’s the end of their owning AI and accumulating power that way.
Of course, there will then be a whole new set of terrible ways things can play out.
I would hesitantly disagree about AIs needing personal experience to understand the physical world. Video-generating AIs like Sora wouldn't be able to produce output that looks realistic to humans unless they had some sort of approximate understanding of the laws of motion. And experiments like Genie and SIMA show that AIs can understand, create, and navigate virtual 3D environments.
I thought he was called Beff Tezos, after the cryptocurrency? Whatever.
I hope this is satire? Otherwise it’s a totally vapid analysis of what I believe are legitimate concerns about social and economic stratification. It relies on relativism to diminish the concerns of the present on the basis of the distant future, the same kind of thinking that leads one to devalue one’s life because the universe will one day end. We cannot take Dario’s pledge to donate 10% of his wealth as security for our own present and future.
Well said. Much my reaction as well.
"Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies."
His generosity is not pledged exclusively to humans alive now. Obviously he's not going to subsidize moon owning oligarchs while the majority of the galaxy starves...
Many people envision a post-singularity world as a continuation of our world: humans competing for power, money and territories; business as usual with greater numbers; people that used to control the earth now controlling galaxies; poors that used to possess a backpack now possessing a moon. I dunno. I expect something alien. I expect the end of humanity as we know it.
"Even if you end up there, you’ll be fine. Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate; your worst-case scenario is owning a terraformed moon in one of his galaxies."
I expect Dario can give much more than that. Actually, isn't 10% a mere tithe, of the usual sort expected of lower-class humans?
Aren't EAs supposed to give it all away after their successful careers? Not just 10%.
The Lindy answer is that great conquerors grant part of their new empires to subordinates, who then put the resources of that territory into use for the original conqueror/owner. It’s fractal delegation of power with allegiances filtering up to God-King Amodei.
Neo-feudalism might not work if Dario is able to perfectly manage all his share of the lightcone with just his ASIs, but if I were a betting man I’d still stick with the feudal approach.
I could see a sort of feudalism based on "altruism" and the notion of personal merit rather than divine right.
The permanent underclass in a post-scarcity world just dies, because it has nothing of value to trade for goods, and what's being manufactured in abundance is no longer what it needs.
There's a Yud post on LW, IIRC, about "why wouldn't the ASI leave us just a little bit of sunlight" that uses current mega-rich people as an example, but you could see either of those two outcomes (a super-productive economy run by and for the AI, or a super-productive economy run by AI for the upper-caste tech bros) and it would amount to the same thing for the rest of us. Maybe the ASI keeps a few humans around for unknown reasons, and maybe our new transhuman tech bro overlords keep a few humans around as slaves or pets; or maybe one takes a cue from God and keeps them around relatively free, so that their appreciation of his glorious works will be genuine and value-adding. In any of those scenarios, you are still just not of much value, and even the pittance it takes to sustain you will in all likelihood be devoted instead to resource extraction and energy production. ASI isn't literally magic; nobody is going to outer space. It'll just build the Dyson Sphere and call it complete as Earth freezes to ice.
I'm not trying to "escape the permanent underclass" primarily because I think that is impossible for me and most people, as the winners (if there are any at all) will be a vanishingly small group, no more than a few thousand people out of 8 billion. The amount of money and influence and connections it would take would require a runner-runner-runner sequence of liquidating most of what I have, making a high-risk high-reward bet with it, then taking what I had from that (which still would be nowhere near enough) and going to California or DC and somehow making myself indispensable to a person who will matter, despite not being able at present to do much of anything that those people would value. And all of my value is mental, so there's a ticking clock until AI would crowd out any value I could conceivably generate, a much faster clock if one is trying to cozy up to the people with access to the bleeding-edge AI models.
It's just not gonna happen. The only reasonable thing to do is attempt to enjoy your life, do what you can to sabotage or delay AI development if that's within your power, and if it's not then you've already lost. Can't recommend Scott's approach exactly, because anyone/anything looking backwards to before some singularity would be unable to be certain about anything that occurred before it, that's what a singularity is. You aren't gonna exist for posterity, you're going to exist for the next 10-20 years maybe to enjoy yourself and make the people around you that you care about happy. Don't send charity to the 3rd world, it's not gonna have time to pay off anyhow, if you have extra money spend it enabling the aspirations of people you know and love while there's still a chance for those dreams to be meaningful.
Yud’s theory that ASI will be such a neurotic maximizer that it won’t donate 1% of its resources to the rest of the world is totally unjustified, and comes from his obsession with old-fashioned/20th century symbolic AI.
We can’t conclude that ASI will definitely be generous, but neither are we justified in concluding that it’ll automatically wipe out humans.
>Dario Amodei has taken the Giving What We Can Pledge (#43 here) to give 10% of his wealth to the less fortunate.
Actually it seems like all the Anthropic cofounders have pledged to donate 80%: https://chatgpt.com/c/6957fb58-4aa0-8327-a8b8-8798a54a06b3
It is not clear what, if any, relation there is supposed to be between (Jan) 2026 dollars and a share in a future superintelligence's values.
In the industrial revolution, old systems of wealth and power became mostly irrelevant. New source of power => new elite.
Conditional on there being a future superintelligence elite human list, I wouldn't expect it to resemble the 2026 rich list.
> In the industrial revolution, old systems of wealth and power became mostly irrelevant. New source of power => new elite.
Yes, but the old elite were able to leverage their accumulated wealth to make the transition. The British peerage are no longer the absolute richest people in the country, for instance, but they're nonetheless still *very rich*, and the gentry is still securely upper-middle. It was the yeomanry that really got forced downwards.
I don't think the masses got "forced downwards", even in relative terms. Preindustrial peasants were very poor.
Which is why I said yeomanry, and not peasantry: these are freeholders with enough land to consistently produce a meaningful surplus, and thus substantially better off than both rural tenants and industrial laborers.
You think the middle class is worse off now than it was 400 years ago?
I have so many thoughts on this, as anyone who frequents the discord knows full well, so I'll try to keep my comments pretty constrained lest I write a 50,000 word essay.
You wrote a really good piece a few years ago in favor of setting science fiction in the future; I love it. You wrote another really good piece a few years ago about how we shouldn't let our debates on the singularity and AI alignment devolve into cringe political debates about the present day, because the real singularity will be stranger than anything we could ever imagine.
How do you square those sentiments with this essay? I think this is a really important question for you and a lot of AI thinkers. Not to be terribly cringe about it myself, but if we ARE at some cosmic hinge point, something that is dispositive of the fate of the light-cone and myriad galaxies one trillion years hence, it seems to matter A GREAT DEAL whether, say, Elon Musk or the CCP or a commune of Woke Folks gets the AI first. It seems like the really dumb politics of the now REALLY WILL extend into eternity.
Right?
+1
Jesus Christ, if you believe the hype, was the only son of God and the most important human being who will ever live. His acts delivered numberless souls from eternal torment, but he was nevertheless compelled to weigh in on Roman taxation, adultery, and the propriety of divorce, which I imagine were fairly explosive but also fairly immediate culture war debates of the day. They've now become immortalized.
God himself, manifest in human form, once said to pay taxes to Caesar. It's very likely the machine god or its authors will have to drop takes just as cringe.
+2
First the superintelligence that will kill us all, then the permanent underclass, now your last chance at sainthood... it's like the AI leaders are trying to start a revolution against them before it's too late. (Was this Eliezer's plan all along?)
Yeah, I'm not sure the tiny shoreline between annihilation and utopia is all *that* small. So long as baseline humans are alive there will probably be an underclass of some description, even if GDP goes up to some degree.
I'd be happy enough with global first-world living standards, better architecture, and non-dysgenic replacement fertility, to be honest. Although I'm keenly aware HBD makes this unlikely any time soon.
ASI would likely hasten the arrival of genetic engineering.
Among other things, yes.
If the first few years of AI has taught us anything, I think it's that AI looks very much like a commodity. There don't appear to be any substantial moats - anything one AI company does seems easily replicable by the others. The only secret sauce is compute and data and those are both freely-available commodities. As a prescient prognosticator said many years ago, the performance of AI machines tends to improve at the same pace that AI researchers get access to faster hardware. I have a hard time seeing the path to one company, or group of companies, being able to extract trillion-dollar rents. The economic model seems very similar to pharmaceuticals: high discovery cost, zero marginal cost. Pharma profitability is completely dependent on the framework of IP law and AI companies can't benefit from that moat because there's no way to prevent another company from recreating a similar product.
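(A toy sketch of that high-discovery-cost, zero-marginal-cost dynamic - every number below is hypothetical, chosen only to show the shape of the argument, not anything from this thread:)

```python
# Toy illustration of the zero-marginal-cost point above. All numbers
# are made up; nothing in the thread specifies them.

DISCOVERY_COST = 10e9   # hypothetical one-time cost of building the model
MARGINAL_COST = 0.0     # ~zero cost to serve one more unit

def profit(price: float, units_sold: float) -> float:
    """Net profit after paying the one-time discovery cost."""
    return (price - MARGINAL_COST) * units_sold - DISCOVERY_COST

# With no IP moat, a rival can undercut any price above marginal cost,
# so price gets pushed toward MARGINAL_COST and the discovery cost is
# never recouped, no matter the volume:
print(profit(0.0, 1e12))   # -10000000000.0
```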
This is good for x-risk fears because superweapons are only bad when only one side has them. When everyone has them then that just leads to a new equilibrium of "my superweapon protects my rights from being encroached upon by your superweapon."
You convinced me; my idea for improving the labor efficiency of medium-density residential construction is on my blog. Was going back and forth on whether or not I should see if I could get a patent.
Short version, I think you could increase the labor efficiency of steel framing by a factor of ten with this, partly by streamlining construction and partly by replacing skilled with unskilled labor. It constrains what you can build.
It's kind of an obvious idea, so I have no idea why nobody was doing it already; my best guess would be the technology to mass-manufacture something like this wasn't available when the norms of steel construction were being set, and it would have been way more expensive to do it this way. Today I think this would be the much cheaper option.
> Short version, I think you could increase the labor efficiency of steel framing by a factor of ten with this
I read your post, but I couldn't quite understand the fastening mechanism for the horizontal steel beams. You say to clamp them to the plate, thus saving you the time of welding or drilling / bolting, but I don't think affixing a clamp would be all that much faster than welding? Welding is actually pretty fast. And clamping suffers some obvious potential failure modes welding doesn't. And would it be to code? Most fastening methods are specified in codes. And what would the cost of a to-code clamp be that covers all those failure modes?
And the heavy steel framing part of putting up a building is maybe 10% of the overall construction time, so even if you really did bring it down to 1%, it seems like a hard sell for a one-time buff, given that your people need to iterate and build up the tacit knowledge of a new technique before they get good and efficient - unless you were a full-time heavy-steel developer planning to amortize that learning cost over a lot of heavy steel buildings.
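(Here's the Amdahl's-law arithmetic behind this objection, as a minimal Python sketch - it just takes the 10% share and 10x local speedup figures above at face value:)

```python
# Amdahl's-law arithmetic for the objection above, using the quoted
# figures: framing is ~10% of total construction time, and the clamp
# system makes that part ~10x faster.

def overall_speedup(fraction: float, local_speedup: float) -> float:
    """Whole-job speedup when only `fraction` of the work gets faster."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

print(overall_speedup(0.10, 10.0))   # ~1.1: only ~10% faster overall
```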
There are code-compliant/approved steel clamps, yep. This particular device, since it apparently doesn't exist, isn't approved, so that's an issue that would need to be navigated - but as far as failure modes go, the clamps in this case are not resisting the weight of the beams (which are supported by this device), but instead resisting lateral movement / shear forces.
Welding is fast but requires very skilled labor (and thus is expensive), and affixing a beam to another beam via welding requires fixing it in place first, a process that generally utilizes a crane (which requires three men to operate) and bolting.
Clamps are sometimes referred to as "friction welds" - well, this device is, in a sense, employing a "gravity weld" - gravity holds the beam in place before you ever get to the clamps. You just need to verify plumbness/alignment for the vertical beams (columns) and angles/alignment for the horizontal beams.
As far as the other time goes, I'm working on the other 90%. My long-term project is to solo-build a medium-density residential building, which means I need to streamline every part of the process; roofing, siding, interior walls, plumbing, flooring, electrical, and HVAC ductwork are on my list. I have some tentative plans for most of these, as well as some potential improvements that aren't necessarily any faster, such as efficiently building drainage pans into every room's floor trusswork.
> My long-term project is to solo-build a medium-density residential building
A quixotic goal, but one I admire - I wish you the best with it!
> Welding is fast but requires very skilled labor (and thus is expensive), and affixing a beam to another beam via welding requires fixing it in place first, a process that generally utilizes a crane (which requires three men to operate) and bolting.
I usually see it done with a guy on a scissor lift who clamps it with the usual f-clamps or 3-axis clamps before welding rather than a crane + bolting, but I'm generally seeing that in non-US countries.
And welding isn't generally that skilled, in my experience? Cheaper than electricians and plumbers, but again, probably a local market thing.
On the HVAC front, a good efficiency trick there is mini-splits and/or heated floors, so you don't need to run any ductwork at all, but that depends on your market too - unconditioned hallways are accepted in some, and not in others.
On your overall goal, have you studied some of the Chinese masters of mid-rise construction like BROAD group and CIMC? Might be worth a look for some ideas.
There were St Veronica the compassionate, Antinous the beautiful, and Aristippus the hedonist; also remembered are Judas the betrayer, Salome who demanded the head of John the Baptist, and Herostratus, who burnt down the Temple of Artemis for no reason but to ensure his name would live forever — and his wish has been granted.
Ok so you're arguing that the easiest path to becoming historically noteworthy is to recognize who, among all of humanity, will be recognized by future religions as the Messiah and then helping that person in a publicly conspicuous way? Um ... ok. I think I have a far higher chance of becoming Jeff Bezos than I do of becoming St Veronica. What's more, the Bezos path has much better offramps. If I try to be Bezos but don't quite make it then at least I've likely engaged in some economically productive activity and probably have a hefty financial consolation prize.
This is a sharp inversion of the permanent-underclass anxiety, reframing it as a failure of imagination rather than a realistic threat. The real risk it points to is spending a genuinely historic moment optimizing for status instead of meaning, contribution, or curiosity.
> making B2B SAAS products
But what if you are super passionate about making B2B SAAS products?
I recently saw someone referred to as a "greenhouse flower", which instantly embedded itself into my symbolic lexicon. I get the sense that a decent amount of the substack class of commentators are greenhouse flowers.
Whoever wrote the post above and sincerely believes it in their heart must have come from an environment of such incredible security that I cannot actually imagine it. It is so far beyond my experience it seems like a hallucination.
It's so basic that anyone who has ever worked a fungible labor job for wages would understand how out of line with reality it is; I would love to know how it came about, and whether it is more a kind of wishcasting statement of moral values.
You'd never heard that term before? In my generation it was hothouse flower.
Otherwise I agree. Progressive beliefs are luxury beliefs. They signal status and reflect decadence. About 10 years ago a wealthy teen killed some people in a DUI accident and his lawyers argued for leniency on the basis that he suffered from "affluenza" - moral deformation caused by over-indulgence. Things like EA and essays like this are what that looks like at a cultural level.
The thing is, most conservative beliefs are also luxury beliefs.
Every person who believes in the charity of the wealthy, or in the ability of the capitalist system to distribute resources according to some utility function, or, to get even more foundational, in meritocracy or in hard work paying off, to the point that they are against taxation, is a greenhouse flower.
Here is the sum total of non-luxury beliefs: I can have what I can keep.
I think you misunderstand the meaning of both luxury beliefs and decadence if you think meritocracy is an example of either. Nothing has generated more wealth or helped more people in the history of the species. I suggest you study history more closely.
Late response: you are incorrect, imo. The gradual increase of societal complexity and depth of understanding of natural law is the source of the changes you observe, not meritocracy (I think). That is why faith in the concept as a sort of moral leveler is a luxury belief.
There is no long-term natural experiment that can prove this, but there are various short-term ones: the Chinese Warring States period, the transition from the Roman Republic to the Empire, the Hanseatic League, etc.
The big one we always come back to is the late republican period of Rome, which was notably less meritocratic than, for example, the Hellenistic kingdoms it came to dominate; but it dominated them nevertheless, at least in part because of the strong in-group communal solidarity of the nobiles compared to the ruthless Darwinian meritocracy of the post-Alexandrian ruling classes of those other states.
To take a more modern example: Soviet Russia lifted an incredible number of people out of medieval poverty and contributed disproportionately to the scientific project, even with absolutely moronic ideological pressures (Lysenkoism and all, the complete rejection of markets, idiot centralization) and incredibly corrupt leadership (the entire party apparatus), all while abandoning meritocracy as an ideology.
>The gradual increase of societal complexity and depth of understanding of natural law is the source of the changes you observe
Agreed, but those things are maximally produced by meritocracy. That's because only a small fraction of people are capable of advancing our understanding and only meritocratic social institutions are capable of reliably spotting those people.
>the late republican period of rome, which was notably less meritocratic than for example, the hellenistic kingdoms it came to dominate
Less by what measure?
>Agreed, but those things are maximally produced by meritocracy.
The greatest flourishing of those ideals came from a notably and uniquely un-meritocratic period. That said, I don't really want to argue with "some people are better at things than other people", which I consider the motte; what I'm arguing against is "the people who succeeded did so solely through their promethean will and deserve the status/material benefits accrued thus", which I think is the bailey.
>Less by what measure?
Less in the sense that any average son of the nobiles stood a good chance of making it to a serious military command or at least the Senate, and 99% of Roman leadership was replacement-level vs. the geniuses who made their way up in, say, Macedon or the Seleucid Empire.
fucking rad
I think an underrated driver of this is the recent experience of the housing market. Wealthy millennials already perceive a relatively major economic-success gap between those who got onto the home-ownership ladder before rates shot up in 2022 and those who didn't (and again between those who got in before prices surged in 2020).
This has primed them to expect that the future will leave those who are even slightly late precariously behind.
I think the problem here is that you can't imagine owning a moon and still being part of the permanent underclass.
Just like a 15th-century farmer can't imagine owning a car and an iPhone with infinite media, having a cure for smallpox and cheap infinite calories at your disposal, and still feeling unhappy and low-status.
I'm hung up on the moon providing its services free to rich and poor alike regardless. Luna, anyway.
I don't think trying for "famous in 1000 years" is much of a possibility even for a 99th-percentile person. Using your Christianity example, the billion-give-or-take Christians on Earth all know about one guy in ancient Jerusalem, most of them know one to two dozen others, the most dedicated theologians on Earth know 3-4 digits' worth of additional people in a very vague, general sense, and the rest of the place's inhabitants are forgotten.
I think, even if you popped a full googolplexian of people into existence the next galaxy over and gave them a full archive of the modern internet with no other entertainment, 99% would be interested in the same people we care about today; a weird/dedicated 0.9% would find someone on the production frontier of interesting-ness that includes the top one million people spread across a variety of esoteric criteria *(prettiest girl who can do a triple backflip, best programmer who knows how to street fight, futurist painter who most accurately guessed what future architecture will look like)*; and 0.1% would collectively pick a couple hundred of the same ordinary people to look at ironically, with the most interesting billion people getting 0-1 people who care who they are. Case in point: billionaires are very rare today relative to the general population, and very influential, but outside of their direct employees, nobody cares about the boring ones.
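(For a back-of-envelope check on how top-heavy that gets, here's a sketch assuming attention follows a Zipf law - the pool size, cutoff, and exponent are my assumptions, not anything stated above:)

```python
# Back-of-envelope check on attention concentration under an assumed
# Zipf law (weight ~ 1/rank). Pool size and cutoff are hypothetical.
import math

EULER_MASCHERONI = 0.5772156649

def harmonic(n: int) -> float:
    """Approximate H_n = 1 + 1/2 + ... + 1/n for large n."""
    return math.log(n) + EULER_MASCHERONI

N = 10**9    # hypothetical pool of "interesting" people
K = 1_000    # the famous few at the top

# The top K's share of total attention is H_K / H_N:
print(harmonic(K) / harmonic(N))   # ~0.35: top 0.0001% get about a third
```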
Dear Mr. Alexander,
after careful review of existing geological survey data, the planet CYG554-3 has been found suitable for deconstruction. Its resources are earmarked for continued construction of His Galactic Holiness' monument in the local cluster. As you will be able to verify for yourself, a detachment of Core Extractors has been placed in geostationary orbit around CYG554-3.
Your exemplary stewardship over CYG554-3 for more than .3 rotations has been noted, and I have therefore been authorized to extend you the courtesy of a 48-standard-hour delay of the start of core extraction operations. Regrettably, resources to move your primary residence, CYG554-3b, into a stable orbit around CYG554 Prime have not been allocated. While your ownership of CYG554-3b is sacrosanct and shall never be voided, one predictable outcome of the core extraction operations is CYG554-3b's rapid ejection from the solar system, with detrimental and permanent consequences for its habitability rating. For your own safety, please vacate the premises as soon as circumstances allow. Further delays of the start of operations shall not be granted.
If you have further questions, please do not hesitate to contact me.
Glory to the Eternal Emperor!
Copper … explain?
Ah, found it in the comments. They’re out there saying Ea-Nasir was a sharp copper salesman.
The trash needs to be taken out, and taxes are due
Here is a good story of somebody making a difference that outlasts them. Little things can become big.
https://www.wsj.com/business/autos/ford-gas-arrow-inventor-jim-moylan-6b2ef066?st=J92XTw&reflink=desktopwebshare_permalink
Irrespective of the contents of the argument, this is one of your best posts (even though it paradoxically looks unfinished and unpolished). It reminds me of the characterisation of C.S. Lewis you gave in your review of 12 Rules for Life:
> But for some reason, when Lewis writes, the cliches suddenly work. Jesus’ love becomes a palpable force. Sin becomes so revolting you want to take a shower just for having ever engaged in it. When Lewis writes about Heaven you can hear harp music; when he writes about Hell you can smell brimstone.
It's similarly cliche. But you are still pretty much the only writer who can give "the singularity is near" real emotional weight, like Meditations on Moloch, The Goddess of Everything Else, and Half An Hour Before Dawn In San Francisco did before. Hats off.
Inspiring. Thank you.
I'll take the terraformed moon, thanks.
Oh boy, I believe some crazy sci-fi-sounding stuff these days, but I don't see humans spreading past Earth in any significant numbers. I guess that's where I draw the line. LOL
I'm with you on that. It is odd how devoted the techno-optimists are to the idea of humans leaving the solar system, despite the myriad biological and sociological barriers to that, not to mention it may just be flat out impossible at any level of intelligence. Nor do the benefits seem to come anywhere close to justifying the costs. But it's embedded into all the aspirations of the futurists, to "fill the lightcone" with intelligence.
So... does that mean the opposite is true? I was already spending 60 hours a week grinding at this job. Can I just quit to do more charity / update my blog more / be a lot more interesting, assuming some philanthropist is going to give me a moon one day?
I was under the impression that it's impossible to see past singularities. I thought that was one of their defining features. I assume it is difficult or impossible to accurately predict the actions of any super intelligence, benign or otherwise. So I'm not sure predicting future geography (coastlines, etc.) is wise. But maybe I've fundamentally misunderstood something.
hello future transhuman intelligences on a Niven Ring somewhere in Dario Amodei’s supercluster!
> helped create a vision of broad-based prosperity that benefitted all humanity
sounds lit
I appreciate your support of my bad financial decisions. I may have to send this one to my mom.
If the moon is made of computronium and is my brain, and actually well-off people have planet brains (or even galaxy brains), then this is maybe something I should care about.
Imagine if the average person was presented with these two options:
- A — You get to own a moon on the other side of the galaxy. It will take you a few thousand years on a spaceship to see your friends and family.
- B — You get to own a small apartment in Manhattan. It will take you 30 minutes to see many of your friends and family, and a few hours to see the rest.
I predict most of the people who are worried about the permanent underclass would choose option B. Indeed, we kinda see this empirically: with the same amount of money, you could either buy a huge house in the middle of nowhere or a small apartment in SF/NYC, and loads of people choose the latter. This is partially because of job opportunities, but even people who work remotely generally choose to live in big cities because that's where all their friends are.
Unless AGI somehow makes it possible to travel faster than the speed of light, I don't think people will want to own moons. Land scarcity will still be a thing, and there may well be a "permanent underclass" that can't afford to live close to their loved ones.
This post is so stupid. Thinking billionaires are gonna support those in poverty (look outside for the past 100 years) and basically worshipping them while they were sexually abusing kids on Epstein’s Island. Is your brain stuck in the 2000s???
That, and assuming records of all ordinary people will survive that long. Why would historians dedicate to memory random people with only minor accomplishments? The woman who cared for Jesus cared for Jesus Christ! That's why she was remembered. You're really getting flustered over what amounts to, at best, a Wikipedia article. What will you be remembered for, allowing other men to sleep with your partner?