334 Comments

> Altman Skullduggery, Inc.

Normally it's spelled "skulduggery". (Maybe we're supposed to notice the skulls here somehow.)

Expand full comment

I did not know that, thanks.

Expand full comment

"First, he’ll start a new shell company. Realistically this will also be named OpenAI"

Can he really do that? I understood the original was called 'OpenAI' because it was created to be open source, but now of course it's moved to "if you copy our stuff we'll sue the backside off you".

Realistically, though, yeah. It doesn't mean anything anymore, it's just a brand name.

Expand full comment

They're called that because they're racing to Open the A.I. Pandora's box? (In hindsight it was probably a bad omen for an alignment company to choose when the number one rule of alignment is "don't let the A.I. out of the box".)

Expand full comment

It's more like "You won't be able to keep the AI in the box, so you'd better be damn sure it's aligned."

Our only real hope of keeping the box closed is not creating the AI in the first place.

Expand full comment

It’ll probably be called something else first, but once it acquires OpenAI and its intellectual property, it will take on the name.

Expand full comment

When we are all paperclips it won't matter, but it does seem like there was a long-term plan by Altman to use charitable donations to generate a fortune for Altman.

Expand full comment

Charity begins at home, after all!

Expand full comment

I mean, isn't it legit to amass enormous sums of money, as long as one then uses them for the betterment of others? (Whatever "the betterment of others" means to the amasser.)

Expand full comment

Legit as in legal? Absolutely not. Nonprofits are incorporated for definite purposes, not "the betterment of others as Sam Altman defines it".

(I assume you don't mean legit as in acceptable, given that that could cover *literally anything*)

Expand full comment

I’m being mildly sarcastic.

Expand full comment

My bad. Not always easy to tell around here.

Expand full comment

No worries. Sarcasm is a risk, especially in written communication.

Expand full comment

I don't think this is true.

I think they genuinely meant for it to be a charitable research foundation at the beginning. I think the leaked emails attest to this. But also, it makes total sense - I was in the AI risk scene around the time they were founding it, and founding something like this was within the Overton Window of things people were thinking about (although I think the most careful thinkers were against it). Finally, in 2015 it was kind of crazy to expect there to be any profit in AI - everyone who donated probably expected they would never see any of the money again.

All of their arguments for seeking investment and gradually getting bigger and more for-profit were reasonable arguments at the time. They really were going to become irrelevant, incapable of doing research, and totally dominated by Google DeepMind if they didn't get more compute. They really did start with a strategy of asking AI risk charities to donate money so they could get more compute, and only shifted to the business model after that failed.

Altman is rich enough without this, and not looking for money. He might be looking for *power*, but I don't even think the for-profit transition helps him get power directly (it might help him get power by getting him more investment -> AGI -> power, but that's different). My read of him is that he really likes the idea of a good singularity that is shared among all humans, he founded the charity to get that, he gradually got too enamored with being the brilliant charismatic genius representing Silicon Valley dynamism, and now it's some combination of his original motive plus wanting to win the race and keep being cool.

Expand full comment

And perhaps the prospect of becoming humanity's first trillionaire has its charms as well?

Expand full comment

I think a very good reason to block OpenAI's plan is that if plans like this are not blocked, "stealth for-profits" - entrepreneurial NFPs that plan on becoming for-profits but disguise the fact, using it to suck up donations and avoid taxes - could become more common, almost impossible to catch, and liable to undermine confidence in the charity sector.

Expand full comment

At the same time, the general principle that non-profits should be allowed to spin off for-profits is sensible. For example, imagine a cancer charity which does some basic research. As part of the research, it develops a new test that's quite useful. Spinning off a for-profit entity to productionize, scale up, and sell the test, while the charity just maintains some hands-off ownership, feels like the natural and best path for everyone.

Consider that the alternative for OpenAI isn't it remaining a charity endeavour. The alternative is that all the employees quit and join ClosedAI down the street. Pretty much the same end result, only the charity has no money and no influence.

Expand full comment

Regarding your ClosedAI hypothetical, I think you're vastly underestimating:

1. The value of OpenAI's IP and other property

2. The network effects and value of existing customers and income streams

3. The ease of getting the entire company to switch with no disruption

And many other difficulties. Could a large segment of Google decide to leave now and start Moogle, steal everything from Google and immediately become a trillion dollar company? No, for many many reasons.

OpenAI in its current form has lots of power and value, and it's worth fighting for it if it meaningfully improves the chances of safe alignment. I should say, I know almost nothing about AI alignment so this is not a recommendation. Simply a refutation of your hypothetical.

Expand full comment

The thing is that this almost happened when the board fired Altman. A lot of employees were going to jump. I believe Microsoft offered them all the same jobs. If I remember correctly, that's the main reason the board backed down and brought Altman back.

So I think it's a reasonable expectation, and perhaps one specific to OpenAI. I think you are probably right in the general case.

Expand full comment

But imagine you fully control the non-profit and have a singular goal of fulfilling its mission. Is it best to let the non-believers walk while still retaining full control of all the valuable pieces in order to pursue the mission? Or completely sell out to a for-profit incentive structure that eliminates that power entirely?

Expand full comment

What if 95% of the employees, including most of the top talent, were "non-believers"?

Expand full comment

Given that it appears Altman has abandoned the mission and wants to become the immortal emperor of the galaxy, and that even if he fails in this he is absolutely willing to spin up a dangerous AI before alignment is solved, and considering that alignment may never be solved, it is better for the nonprofit's original goal that the company implode in the messiest possible fashion resulting in the longest possible setback to AI capabilities development. An extra 6 months before the development of ASI happens is certainly worth more than $40B tossed vaguely in the direction of education and health care. Only if the $40B was certain to be used for AI safety research is it even a close question, I'd have to leave it for somebody in AI safety right now to tell us if they'd rather have $40B or 6 months.

Expand full comment

I feel the board lost control when that happened, they just didn't know it yet.

Expand full comment

>And many other difficulties. Could a large segment of Google decide to leave now and start Moogle, steal everything from Google and immediately become a trillion dollar company? No, for many many reasons.

Obviously, they would be sued to hell by Square Enix. And Google too, I guess.

Expand full comment

Yeah, there's already a serious danger that any given charity will end up using most of its donations to fund directors' over-generous compensation, or paying far too much for "services".

This is just another possible route for tapping into the pipeline of money.

Expand full comment

But think of poor Sam, he doesn't even have his own private space fleet like the other billionaires! Why, he's practically slumming it with his measly $1.5 billion net worth!

Okay, I'm being snippy here; this article paints him in a better light as regards his motives for getting involved with OpenAI:

https://www.uniladtech.com/news/tech-news/how-much-does-sam-altman-make-per-year-from-openai-642925-20250211

Expand full comment

I slightly doubt this - I don't really hear about it happening other than with OpenAI, and OpenAI came by it honestly (ie I think they honestly intended to be a charity at the beginning, then gradually got better and better reasons to shift to being for-profit).

I would selfishly love it if the attorneys general forced them to actually behave like a charity, but my impression is that the fair/correct legal outcome is just to force Altman to pay the charity a fair price.

Expand full comment

Thanks a lot, it all makes so much sense to me now. Reading news articles from 10 different sources can't really compete with one single source which has it all.

Expand full comment

Sconded. Plus, "less like creating AI then, you know, being OpenAI " - "then" should be "than", no?

Expand full comment

Yes it should be. Great spot!

Expand full comment

Sconded is my new favorite typo. Sounds like an obscure cooking method

Expand full comment

"Attorneys General", isn't it? Not "Attorney Generals"

Expand full comment

Yes.

Expand full comment

I can hear my hard ass 11th grade English teacher having a hissy fit right now.

Hearing Attorney Generals now doesn’t even make me raise an eyebrow.

I draw the line at 2 Whopper with Cheezes though.

Expand full comment

You might enjoy this comic about how to classily pluralize noun phrases.

https://www.smbc-comics.com/comic/plural

Expand full comment

I likes it!

Expand full comment

Wikipedia agrees:

> attorney general (pl.: attorneys general)

https://en.wikipedia.org/wiki/Attorney_general

Expand full comment

Spanish grammar really is superior this way (Attorneys Generales)

Expand full comment

I'm increasingly disillusioned with the board, CEO, government-mandated goal (infinite profits or 100% charity), corporate structure; what are other options that are functional without going full criminal (altho....)

Insert Rant here: about how silicone valley is a culture I want no part of, even if it moves to Texas and purges woke.

Insert Rant here: Moldbug essay praising it as the ideal structure, being built on a fundamentally reductionist viewpoint

etc.

I hate corporates, all business theory is about being successful corporates, speaking to corporates, paying taxes as corporates, getting grants as corporates, getting stock as corporates. Even if in theory a corporate could be a "non-profit" it still looks corporate.

Expand full comment

I definitely agree not wanting to be part of silicone valley culture, what with all the eye-hurtingly bright kitchen utensils and mountains of wobbly implants. Nor Silicon Valley, for that matter.

Expand full comment

I view the structure as important but I don't know how to get away from it easily.

It's not easy to find the specific Moldbug essay I have in mind (tho he probably repeats it several times): Greek theory of governments, rule by 1, few, many; corporations are a "healthy" combination of rule by 1 and rule by few when there's 1 guy seeing day-to-day operations and a board who checks in every few months. I want something he would see as impossible. Boards seem to turn woke, BlackRock buys in using boomer dollars, insists you have an HR role (read wage-slave manager, it's not even a good euphemism), and I bet 99% of HR candidates are truly fully infected.

It could also be in Texas; that may slow the infection, but why would it stop?

Expand full comment

Anthropic is a Public Benefit Corporation, which means it's supposed to balance shareholder value with the good of the public. As written that's kind of vague, but I think the Long Term Benefit Trust discussed at the bottom is an example of what it looks like to try to implement that honestly.

Expand full comment

> If a public benefit corporation is merging ... Section 6010(a). Notice should be provided to the "Registry of Charitable Trusts" ... certificates of approval.

> https://www.dailyjournal.com/mcle/1040-merging-nonprofit-public-benefit-corporations

I'm vaguely aware of them, but I'm still seeing CEO, board, the nation-state paperwork and approved goals.

It was always fairly delusional, but "DAOs", before they became explicit ~~ponzi schemes~~ NFTs, may have dropped the nation-state paperwork; medieval guilds probably didn't have modern HR, etc.

Pirate ships and gangs have constitutions which don't merely specify punishments and responsibility but methods of enforcement that are... vertically integrated.

But I don't think guilds can make a comeback, or that smart contracts ever should've been smart, and call me picky about gang membership.

Expand full comment

I am curious about the ongoing AI race and its implications. Specifically:

How do Chinese AI companies approach AI safety, and what governing bodies are emerging to oversee this in China?

Regarding OpenAI and Anthropic, I believe having a small group of people (such as Anthropic's 3 trustees) representing the interests of society at large seems inefficient. This is especially concerning when we consider that AI will impact all human lives across all countries.

Expand full comment

AFAICT, nobody who matters takes singularity etc seriously in China, and otherwise they're mostly concerned about not offending the CPC.

Expand full comment

This makes it seem like DeepSeek is as AGI-pilled as western labs

https://theaiinsider.tech/2025/01/24/deepseeks-ceo-says-agis-arrival-could-be-imminent/

Expand full comment

>DeepSeek’s mission statement does not mention safety, competition, or stakes for humanity, but only ‘unraveling the mystery of AGI with curiosity’.

Yep, that's about the extent of China's Overton window.

Expand full comment

*If* I believe what I'm reading, Chinese AI is swiping Western AI and just filing off the serial numbers. No idea if this is really true, but there are murmurs about it, e.g. this new thing Manus:

https://www.businessinsider.com/manus-ai-china-agent-hype-deepseek-2025-3

EDIT: Though they probably are concerned with alignment problems and are working to make sure the AI is aligned with human values: the values of Xi Jinping Thought.

https://en.wikipedia.org/wiki/Xi_Jinping_Thought

Expand full comment

There's a good comic sci-fi story there, about a galaxy-spanning robot empire whose creators had to align it with the incoherent values of a fairly pedestrian dictator. It'd probably be funnier if they're of the (subs: fill in ideology) bent a la Xi than a charismatic fruitcake like Gaddafi, but either could work.

Expand full comment

I want to read this. Must control urge to think up prompts to feed Claude.

Expand full comment

What you've got to know/remember is that "distilling" is a common practice. It's a way of taking a huge amount of very noisy data and compressing it into a smaller amount of data that's a lot less noisy. So, yes, it's "sort of" copying. But it's nothing unexpected or unreasonable. And if you don't have a good AI network structure behind it, it doesn't help.
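
To make that concrete, here's a minimal sketch of the distillation idea (toy PyTorch, nothing to do with DeepSeek's actual recipe; every tensor and size below is a made-up stand-in): a small "student" model is trained to match the softened output distribution of a larger "teacher", compressing the teacher's behavior into fewer parameters.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: random logits standing in for real model outputs.
teacher_logits = torch.randn(8, 32000)                      # batch of 8, 32k-token vocab
student_logits = torch.randn(8, 32000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                             # gradients flow only to the student
```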

The DeepSeek model is a competitively advanced AI. It runs in a smaller footprint than many others. Much of the critical comparison is unwarranted.

OTOH, they HAVE to run smaller models, because they have limited access to fancy hardware.

Expand full comment

>the AI is aligned with human values: the values of Xi Jinping Thought

<mildSnark>

There may be a technical problem here...

Musk presumably wanted Grok3 aligned with _his_ thinking, but at least the initial release answered queries ... differently. And the initial reaction was to slap on a really crude patch. The DeepSeek team might have analogous problems truly aligning R1 with Xi, even with great effort.

</mildSnark>

Expand full comment

The government is at least saying some of the right things: https://carnegieendowment.org/research/2024/08/china-artificial-intelligence-ai-safety-regulation?lang=en

> the CCP leadership made a brief, but potentially consequential, statement on AI safety in a key policy document. The document was the decision of the Third Plenum, a once-every-five-years gathering of top CCP leaders to produce a blueprint for economic and social policy in China. In a section on threats to public safety and security, the CCP leadership called for the country to “institute oversight systems to ensure the safety of artificial intelligence” ... The subsection calls for improving China’s emergency response systems, including “disaster prevention, mitigation, and relief.” AI safety is listed after other major threats to public health and safety, including food and drug safety and “monitoring, early warning, and risk prevention and control for biosafety and biosecurity.” ... Given this context, it appears the AI safety risks the CCP is referring to are large-scale threats to public safety, akin to natural and industrial disasters or public health threats.

But of course it's very easy to say things are necessary and then just not do them - as we've seen elsewhere.

Expand full comment

"Musk and Altman disagree on what happened next: Musk said he objected to the profit focus, Altman says Musk agreed but wanted to be in charge"

OpenAI released some e-mails from back then and it turns out Sam is right here.

https://openai.com/index/openai-elon-musk/

Expand full comment

Somehow this doesn't surprise me at all, considering (1) this is Elon we're talking about and (2) the explicit existence of xAI. Of course he doesn't trust a formal nonprofit structure or any other human, only himself and his own personal control.

The fact that they can prove it is pretty bad news wrt the transition though, since it almost certainly loses him standing to sue.

Expand full comment

"The fact that they can prove it"

Maybe. Have they released all the emails, or just the ones selected to put Altman in the best light and paint Musk as the liar/villain? It's easy to select out certain portions of a long conversation where you snip away the provocation you did and just report the "okay if that's how it's going, I want to be in charge" parts without the "because I don't trust you not to make a huge mess of all this and wreck the entire thing if it's done the way you want to do it, which I'm telling you is not the right way to go about this" addendum.

Expand full comment

"I want to be in charge of all this. Wreck the entire thing."

Deisach, 2025

Expand full comment

Curses, my evil plan has been revealed! 😁

Not to dig up old trouble, but Scott has had a false friend reveal snippets of private conversations in order to paint him as a Bad Guy in the past, so yeah. Colour me not so credulous when guy A who is in a scuffle with guy B happens to release something that paints guy B in bad light and coincidentally makes guy A look better, which just happens when guy A needs to look better because his own reputation is a bit tarnished right now.

Expand full comment

Mmmm, I'm cynical about "here are private communications demonstrating that I'm right" leaks or releases like this. We're seeing Altman's version of the exchange, what about if Musk released different/more emails showing his version where Altman disagreed?

Basically I'm saying "both of them probably wanted control and tussled for it, and both of them were about the money not the ideals".

Expand full comment

Interesting that in one of the emails someone (redacted) referred Musk to one of Scott's columns on AI risk.

Expand full comment

My biggest takeaway is that board drama at Anthropic is much more likely than I would've thought. Very easy to play out a scenario where investors/the company/Claude 3.9 (newly appointed CTO) revolts and triggers another grab-your-popcorn weekend.

Expand full comment

This is such a blatant grift. OpenAI was literally founded with a non-profit "mission to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity" with the belief that for-profit companies like Google's $1T will not steward it well and will pursue paperclip-esque shareholder value.

Now the rationale is "well someone bid $100b for our tech, we can do a lot of healthcare good with that"???

Expand full comment

This is exactly right. It's clear that the right price for the for-profit should be Infinity. OpenAI is arguably the most advanced AI lab in the world; there's no better way for the non-profit to make sure AI benefits humanity than by governing OpenAI. Money for all sorts of random good causes is useless in this regard. If this means a lower valuation for the for-profit, so be it. Anyway it doesn't seem like there's a lack of demand for for-profit shares even under the control of the non-profit.

Expand full comment

Part of the issue with arguments like this is "the future value of AGI is infinity" and therefore everything that isn't building better AI is worth 0 in comparison. At some point, a nonprofit has to trade future returns for value added now. How and when should they do this? When should nonprofit OpenAI say "instead of investing existing cashflow in more powerful models, maybe we should invest in income-adjacent society-wide value-adds like alignment"? Hell, by this logic, healthcare, science research and even philosophy can be considered income-adjacent society-wide value-adds. This is the tough part about being a nonprofit and it isn't solved by saying "future value is infinity so we should never give back."

I'm not advocating for any specific strategy here either way. Just pointing out that how exactly a non profit returns value to society is not clear.

Expand full comment

> the for-profit should be Infinity

Which reminds me that I hate the use of a mathematical abstraction in a finite physical world.

Expand full comment

Devil's advocate: this is true if the nonprofit board has some particular plan for making OpenAI's output more beneficial than it would be otherwise.

If not (and so far they haven't demonstrated any plan in this direction) then it doesn't matter who owns it, and they might as well take the $100 billion.

Heck, I don't see why they should care whether OAI vs. one of its competitors gets AGI first. If Elon bribes them with $100B to set their business on fire, that's not necessarily a bad offer.

Expand full comment

I'd guess that their self-serving rationale is that humanity is screwed anyway if the god AI is made by anybody other than Altman & Co., but without ditching the non-profit part they won't amass enough cash to stay in the lead.

Expand full comment

"Now the rationale is "well someone bid $100b for our tech, we can do a lot of healthcare good with that"???"

Woke capitalism and greenwashing, sound familiar? This is how it's done. "Sure, we work our employees into the ground for pennies and would cut our grannies' throats for a sixpence, but look! It's Pride Month and we're flying the rainbow flag outside our corporate HQ! Aren't we good liberal social allies?"

Just slap some do-good sounding charity stuff into the mission statement where you pay relative pennies for the 'good cause' out of the massive profits you anticipate and then if anyone criticises you, you can play the "why do you hate the poor and sick, we are only trying to help them and you attack us?" card. See the amount of self-described progressives lining up to support poor little Disney Corporation against evil Governor DeathSantis, for instance, or how now the CIA and FBI are the mostest trustworthy defenders of democracy and the common people because Orange Man Bad.

(I will *never* get over the Democrat supporting liberals to left now fawning over the CIA, of all entities, as the Good Guys).

Expand full comment

Well put. The role reversal of the parties has been pretty incredible to watch in real time. I was at pride week in NYC in 2023, watching the parade, and I kid you not, both Lockheed Martin and Boeing had corporate parade cars back to back. I was like “uh, is anyone else seeing this…”

Really? The two biggest war profiteers are also LGBTQ “allies”… yeesh

Expand full comment

Oh, the switching back-and-forth about Comey has been fun to watch, but what I particularly enjoy is how Merrick Garland has been revealed to be a villain of the darkest hue (for failing to get Trump put in jail somehow) when before that he was a persecuted martyr robbed of his rightful ascension to the Supreme Court.

Nothing, though, will trump (ha!) the redemption of Dick Cheney, rolled out to sign on for the coconuts.

https://www.youtube.com/watch?v=FejL9k0VSX4

A Democrat thanking VP Cheney for what he has done for the country:

https://www.youtube.com/watch?v=7S-Sk2TbdjE

Being old enough to remember things further in the past than 2016, yeah this is mind-blowing.

Expand full comment

People have been complaining about this sort of thing for decades! I remember going to SF Pride on my 21st birthday and seeing that all the floats were corporations or military or ethnic groups - and the ROTC even had a group of rainbow gun twirlers there, a decade before the end of don’t ask don’t tell!

Expand full comment

It's funny that lefty LGBTQ kiddos are very insistent on "no cops at pride", but don't seem to care in the slightest about those corporations. They've just made blue-collar people in a dangerous public service profession into such non-persons that they alone are to be excluded from feeling less shame about their sexuality, whereas the corporations belong there. (And despite their cries of "corporations aren't people!" -- uttered specifically in free speech contexts! -- just several years ago). There's not even a "no climate criminals at pride" mantra, afaik Exxon could send their proudest LGBTQ contingent and nobody would bat an eye, but not the guy who risks his life daily and works in one of the few jobs where there probably still IS a lot of stigmatization by co-workers due to his sexuality.

Expand full comment

Sigh. I remember in the dim dark past when the left was _in favor_ of free speech...

Expand full comment

"It's funny that lefty LGBTQ kiddos are very insistent on "no cops at pride", but don't seem to care in the slightest about those corporations."

That's...very explicitly contrary to my own experience. People have been complaining and rolling their eyes at corporations at pride for a decade or more. I suspect that the impression otherwise has more to do with Toxoplasma of Rage than with overall attitudes: that is, you hear a lot more about "no cops at pride" *because* it's more controversial (at least, among people who exchange loud opinions on the internet). Obviously the people who are actually organizing the parades and approving the floats feel differently.

Expand full comment

There's a lot I disagree with in your comment, but I think it boils down to "everybody is bad and all good deeds are veiled self-serving acts in service of doing more bad things".

Everything is always more complicated than the simple narratives projected onto it. Oftentimes, things that seem self-serving when only looking at first-order effects are actually mission-driven when looking at second- or third-order effects. This doesn't make them right, but it makes them less self-serving than you give them credit for.

For example, put yourself in Sam Altman's shoes and pretend for a moment that it's 2016 and you truly care about building empathetic AGI to improve human welfare. This costs a lot of money, so you form an entity in a way to both fulfill your mission and make money. After many years, it becomes clear that this will require a LOT more money than you thought, and investors are the only way of getting that money.

What do you do? Do you refuse private investment? If you do, you will slowly fade to irrelevance as bad actors like China and Russia take the lead in creating clearly oppressive and less value aligned AI. Or do you try to find a way to improve your structure to make it more conducive to private funding?

At a first order, it looks like "hey they're taking money so they must be trying to get rich!" But at a second order, they believe it's the only way to fulfill their initial mission.

Morality is complicated. People can make bad decisions in pursuit of their well intentioned ideals. People can also just do bad things because they're selfish. Figuring out the difference is extremely difficult, and I think I assume far more of the former than you do. Though, maybe I'm just naive.

Expand full comment

But is it true that OpenAI is having difficulty raising money?

Expand full comment

I really don't know. To be honest, I have no idea what the budget looks like for OpenAI nor how much it costs to train and build these things. But I will say, as an investor, if I have the choice of infinite upside or 100x upside, I'm not sure why I would ever choose the 100x upside. Hyper efficient free market investment means collective action problems can never be solved by individuals acting morally. Of course, this basically gives carte blanche to private companies to be evil, which isn't any better. So, I have no good answer 🤷🏻

Expand full comment

"But I will say, as an investor, if I have the choice of infinite upside or 100x upside, I'm not sure why I would ever choose the 100x upside".

See, this is the entire problem here, Christian. An example of human greed. Why isn't 100x upside sufficient? Why does it have to be infinity? For the lofty principles of benefiting all humanity and Altman to be a true altruist, it only works if everyone is not greedy and self-seeking and would be satisfied with 100x return on an investment so that, as per the OpenAI charter, nobody has unduly concentrated power and the fiduciary benefits flow to all, including the poor fishermen on Pacific atolls.

But even in your own comment there, that ain't how it works in the real world. I don't *desire* to assume bad intentions, but I look around me and see "people will choose short-term gains and the most they can get over principles".

Why is "I put in a million and I get a hundred million back" not enough? Why does it have to be "I put in a million and I get a trillion back"? Well, you know why as well as I do.

Expand full comment

"For example, put yourself in San Altman's shoes and pretend for a moment that it's 2016 and you truly care about building empathetic AGI to improve human welfare. ...What do you do? Do you refuse private investment?"

Well now, this is just me and I'm no business person, but maybe, just maybe, if I really cared about "empathetic AGI" and not "whee, magic money fountain!", I wouldn't connive to get the board members who are all the idealists about AI safety booted off when they tried to discipline me (and you know, the CEO works for the board, not the other way round). After they'd fired me, I wouldn't go to our big commercial partner who is definitely all about the magic money fountain and scaremonger employees into calling for my re-instatement; and after said re-instatement, I wouldn't get the board reconfigured for the emphasis to be on "money money money" instead of "we're concerned about the AI not being all touchy-feely enough".

But like I said, that's just me.

Expand full comment

I think you're highlighting the main difference between us as I said above. I'm spinning narratives that assume he's making difficult tradeoffs to fulfill his mission; you're spinning narratives that assume he's power hungry and trying to fulfill his own selfish desires. Your default is to assume bad intentions, my default is to assume good ones. There's no provable way to say who's right and who's wrong.

Expand full comment

You're technically right that it is not "provable," but that's an unrealistically high standard. So I don't know why it needs to enter into the conversation. Provable or not in this particular case, being jaundiced about the motives of public figures is probably the right default position, based upon the historical record.

Expand full comment

I don’t think anyone is “fawning over the cia as the Good Guys” - people are just recognizing that there can be good in organizations even if there is also bad, and the people who want to destroy it can be just as bad as the people who want to give it unfettered power.

Expand full comment

What if it turns out that AGI is hitting diminishing returns and *isn't* going to replace all human economic activity? At what point do you say "well, it turns out we aren't making the singularity happen, but we still produced a lot of value, let's extract that value and use it for the benefit of humanity like we said we would"?

Expand full comment

> subject to the subject to the Attorney General of California

Is this the latest version of "the the", where instead you repeat multiple words twice and see how many people fail to notice?

Expand full comment

When you and OpenAI talk about the "good of humanity", who do you actually mean? Does it include the majority of humans who live outside the United States? Does it mean subjecting these people to the erratic leadership of the United States for their own good? Does it acknowledge that about 1 in 6 humans live in China, and another 1 in 6 live in India? Do these people get any say in the good that is being imposed upon them?

Expand full comment

i'd say that the fully verbose meaning of 'humanity' as it's used in these kinds of statements by these kinds of people ends up being something like "all thinking beings worthy of being ascribed moral weight, using the most liberal possible criteria of such, in the entire future lightcone of our species, including but not limited to human-mindform-derived artificial life which modern humans might not recognize as human, as well as any aliens who might get caught up in the singularity's blast radius"

obviously there's some pretty gargantuan issues with all of this, but at least the *specific* myopia you're concerned about doesn't seem to be at play

or at least, the principal agents involved claim that it isn't, and can write really, really emphatic clauses describing just how wide their circle of concern is

Expand full comment

If this is what they mean by "humanity" then suddenly the stakes seem a lot lower. I really don't care whether these AI risk organizations exist or not. With a broad definition like that, you could implement policies to protect "humanity" while completely throwing Homo Sapiens under the bus. What a joke

Expand full comment

The usual thing is to assume that the definition of "humanity" will probably be broad, while leaving the details of that up to the actual process of alignment. The definition of "good" is 100% Homo sapiens and ultimately dictates the definition of "humanity".

Expand full comment

Without a definition like that, you could implement a 40k Great Crusade where all alien life is exterminated for the greater glory of Earth, and still consider yourself ethical.

Expand full comment

This is absolutely not true. There have been moral taboos against wantonly killing animals since time immemorial. Without any need to expand the definition of humanity.

Expand full comment

Tell that to all the bags of cats that got burnt in the village square

Expand full comment

The World State by G. K. Chesterton

Oh, how I love Humanity,

With love so pure and pringlish,

And how I hate the horrid French,

Who never will be English!

The International Idea,

The largest and the clearest,

Is welding all the nations now,

Except the one that's nearest.

This compromise has long been known,

This scheme of partial pardons,

In ethical societies

And small suburban gardens —

The villas and the chapels where

I learned with little labour

The way to love my fellow-man

And hate my next-door neighbour.

Expand full comment

Obviously it refers to the billions of shrimp in the world.

Expand full comment

With nonprofit structure, technically no one other than the board has any say. Just like with US democratic structure, most citizens don’t have any say, because they are either too young to vote, or disqualified as a felon or resident of DC, or choose not to vote.

Expand full comment

When I talk about it in this post, I mean those words are in the OpenAI mission statement, and enforceable by whatever corporate courts enforce nonprofits sticking to their mission statement, so that's for the court to decide.

Courts are usually pretty lenient, and I think if OpenAI were to say something like "we'll produce aligned AI, and then probably everyone in the world will get it and benefit from it" that would be enough for them. Even "we'll produce AI, use it to produce a thousand-year American empire, and it will be awesome and glorious" would be enough for them.

Expand full comment

Their charter is here and it's lovely, but I remain dubious about "poor fisherman on Pacific atoll" being as able to sway their course of development and directly interact with them as "mega mega Wall Street tycoon". We are all of us humanity, but some of us are more humanity than others when it comes to investment dollars and owning shares:

https://openai.com/charter/

"Broadly distributed benefits

We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."

Expand full comment

I like the post overall, but one objection: in various places you mention funding AI safety as a straightforwardly very valuable thing. But the value of AI safety funding right now feels pretty messy, and often hard to tell the sign of, in a way that’s very dependent on who’s doing it and how they’re choosing what to fund. So I think “AI safety as the default good place to funnel money” is a meme that probably should be retired.

Expand full comment

Supposedly there's a tool here that makes it easy to write a letter to state AGs and advise against the for-profit transition:

https://www.safetyabandoned.org/#outreach

Plausibly this is highest-impact for residents of California and Delaware.

Expand full comment

This is (happening in) America. When principles and money clash, money wins. I am deeply sceptical of "capped returns" because (1) I have no reason to think that if the magic money fountain does eventuate, this will stick (2) I see no reason Altman won't pull another "well gee turns out we can't get sufficient investment if the investors can only make back a hundred times their investment instead of infinity times, so we have to scrap the cap, aw shucks what can you do?" turn.

Oddly enough, I'm beginning to think the magic money fountain *won't* eventuate, or not that easily and quickly as expected. So of course the investors and founders want to squeeze as much blood out of the turnip as they can before the bubble bursts.

As for the letter writing campaign, yeah go ahead. Maybe the offices you're sending these letters to will run out of toilet paper, so the print-outs of those letters will be useful as per the anecdote:

https://www.oxfordreference.com/display/10.1093/acref/9780191826719.001.0001/q-oro-ed4-00008775

Max Reger 1873–1916

German composer

I am sitting in the smallest room of my house. I have your review before me. In a moment it will be behind me.

(responding to a savage review by Rudolph Louis in Münchener Neueste Nachrichten, 7 February 1906; quoted in Nicolas Slonimsky, Lexicon of Musical Invective (1953))

Expand full comment

Well, my guess for AGI is still 2035 plus or minus 5 years. As it's been for over a decade.

OTOH, I don't feel that an AGI is needed for an AI company to turn a considerable profit. (And I mean actual rather than speculative.) Robots with reasonable but limited capabilities are either here already or right around the corner, depending on what you're willing to call a robot. Various groups are talking about hooking up an LLM with a humanoid robot, and allowing it to take at least limited actions. Warehouse workers can probably be replaced within the next couple of years, though how much it would cost isn't clear, but there will definitely be some jobs the robots can move into. And usuform robots, while more specialized, will in at least some cases be even easier.

Expand full comment

I think you're misinterpreting the PPUs. Altman has promised to pay investors 100x investment, and keep the rest. Insofar as he is greedy / motivated-by-money, he should indeed only give them 100x investment and keep the rest.
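
As a toy illustration of how a flat 100x cap splits returns (made-up numbers only; the real PPU terms are presumably more complicated than a single flat cap):

```python
def capped_payout(invested, gross_return, cap_multiple=100):
    """Split a return between a capped investor and the residual claimant."""
    investor_share = min(gross_return, cap_multiple * invested)   # capped at 100x
    residual = gross_return - investor_share                      # whatever is left over
    return investor_share, residual

# $1M invested; a "magic money fountain" scenario attributes $1B of profit to it.
print(capped_payout(1e6, 1e9))   # -> (100000000.0, 900000000.0)
```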

In terms of toilet paper, I used to think that too, but whenever I talk to experienced bureaucrats / lobbyists / political scientists, they say letter writing actually makes a difference, and in one case I was tangentially involved in - the US government adding telemedicine restrictions - they seem to have backed down partly on the strength of public reaction as measured in letters.

All of those were official "requests for comments" rather than random letter-writing, I don't know how much difference that makes.

Expand full comment

Okay, let me be as charitable as I can about this and assume that Altman really is concerned about the future of humanity.

First, "for the benefit of humanity" is a very fuzzy concept. Even your essay can't settle on what it might mean; it was perhaps intended at the start to be "when the god AI makes us all rich beyond the dreams of Croesus, we'll divvy up the wealth between all the 9 billion humans" but now it might just be "we'll mumble mumble healthcare mumble mumble". I see no reason why "the benefit of humanity" couldn't be made to mean "give the profits back to us to hire people for cushy benefices to sit on committees that have to fly out to exotic tourist locales for week-long conferences at five star hotels talking about talking about AI safety which of course is for the ultimate benefit of humanity as we're discussing existential risk in a location from this list https://www.viceroybali.com/en/blog/most-luxury-resorts-in-the-world/ in saecula saeculorum".

Second, even if Altman is a true altruist, in for-profit he's responsible to the board who are responsible to the investors, whom your essay suggests won't invest the gazillions necessary if they don't see a return of infinity gazillion. So if Altman does decide to pump the brakes on releasing the new model AI because there's this tiny glitch about it wanting to extirpate all the flesh bags, that will make the investors nervous (and we've already seen the stock market going up and down like a yoyo just from their sentiments about Trump's actions).

'Yes we could release this today and get the jump on all our competitors and make that gazillion infinity return, but it's too dangerous' will get him the boot in favour of someone who will release it today not next week. And since the board are now "standard Silicon Valley business types" instead of the former idealists, they'll be able to make it stick this time. In comes new CEO perfectly willing to serve the interests of investors who are nervous about even "a fig-leaf legal obligation to put the good of humanity above shareholder value".

And the excuse "but you have to invest in us or else the Chinese will do it!" is now threadbare, seeing as how the Chinese are *already* doing AI. So even more incentive for investors to want to get the gazillions *now* and worry about the extirpation of the fleshbags later, because who wants all the market share to go to the Commies when it could be getting Monée Bagges his third megayacht? Yeah yeah UBI for the poors, ha ha who seriously believed that in the first place?

Expand full comment

Awesome and glorious you say?

Expand full comment

I would add that when it comes to consultations/requested-letter-writing, a lot of people seem to imagine that it is the volume of letters which is considered, which would obviously be easily gamed and lead to low-quality feedback. In fact, at least when it comes to UK government, it is the actual content that is considered: obvious duplicates of someone else’s response aren’t really counted.

This provided some defence when, for example, Google were working with an activist charity to automate an email campaign ‘from UK citizens’ around the culture war. It is one of the many things now endangered by LLMs.

Expand full comment

Yeah, "I took the time to compose, write, and mail a physical letter" is a signal that politicians pay attention to, on both sides of the pond. I do wonder if, say, a thousand people individually ask the same LLM to write a letter advocating some policy, how different are the responses going to be?

Expand full comment

"Can John make this firm's value go up" is not relevant to whether John should own the Firm, rather to whether he should be the CEO. The only question relevant to who should own a firm is who can pay the most for it.

Expand full comment

No, it’s who can give the most benefit to the current owners. Usually that’s through paying most, but sometimes another method is better, like raising the value of an equity they hold.

Expand full comment

What you're describing is an employment contract, the job of an employee like the CEO

Expand full comment

I think an owner is allowed to sell to whomever they want for whatever price they want. The trustee of a nonprofit has some obligations to the goal of the trust, but they are often spelled out in terms of some kind of benefit, not in terms of flat-footedly accepting the highest numerical bid. (Government agencies sometimes are required to do that sort of thing.)

Expand full comment

Of course, but we're still trying to model the likelihood of rational owners selling. They are free to do as they wish, subject to the complications of this case. That said, Ferrari doesn't tend to sell their team to Schumacher.

Expand full comment

Rational owners don’t sell to the highest bidder. They sell to the bid that promises them the most benefit. In ordinary cases those are the same, but not in the case under discussion.

Expand full comment

Proper owners do. Boards who dispose of the firm in the best interests of a theoretical owner (stakeholder, as the lingo goes) can only sell to the highest bidder or be derelict in their duties, assuming they can sell at all.

Expand full comment

I think the claim is that:

- Several potential owners will have several potential strategies, they will bid depending on how much money they expect to make with their strategy, and the current owners will take the winning bid.

- Several potential owners might have different reasons to think they can increase value the most (eg they own some other company in the same field that is a natural complement), and that will make them bid higher and have the current owners accept that bid.

- Certain people might not accept CEO position without ownership.

- If the current owners are being paid in stock, they don't just want a high offer, but the stock going up in the future.

Expand full comment

The summary of facts so far does not seem to support a "many competing owners with ability to pay and only differing in willingness to pay" scenario though. Instead, we have a lowball offer that should be scoffed at but is being seriously considered because it comes from the person best suited to serve as CEO. Labour is a different factor of production than capital though, at least for now.

Expand full comment

I should expand on this a bit. Whether sama is best as owner or CEO (or some combination thereof) is of no consequence to the Board who, assuming they can sell at all, should sell at auction to the highest bidder, subject to some red lines etc.

The question of where in the CEO-Owner spectrum sama should be will be decided by the market, specifically by the vote of confidence he gets from the coalition of financiers he can muster. If the offer he can thus bring is the best, and assuming the mix of leverage as opposed to shares of the participants is high enough that he becomes owner instead of a well-paid CEO, that will be the market telling us what role sama should play.

The mix of returns due to capital and labour - be it executive labour - has long been settled at a mechanical level, no need for OpenAI to revise.

Expand full comment

>But I don’t know, investing $40 billion in worthy AI-related causes seems a lot less like creating AI then, you know, being OpenAI and actually creating AI.

then -> than?

Expand full comment

I prefer to see a few typos characteristic of the author stay in the text, as a mark it isn't LLM output.

Expand full comment

That doesn't prove anything. What stops an author from taking LLM output and intentionally putting in typos to satisfy people with your preference?

Expand full comment

Nothing. But that at least shows a signal that they don't want their writing to be thought of as LLM-generated. I'll settle for that as a consolation prize.

Expand full comment

When an LLM would be able to convincingly imitate Scott's writing I'll accept that the end is indeed near...

Expand full comment

> I do think it’s kind of unlikely that OpenAI ends up creating God yet also remains subject to the subject to the Attorney General of California. But it’s not totally impossible

As long as society remains ordered in a way recognizable to the people of the past 10,000 years, i.e. no singularity, then it's not only possible, but certain. I know, I know, government existing as a mere referee and guarantor of property is the wet dream of extreme libertarians - privatize the profits, communalize the costs and all that. But in reality, even the most powerful corporations in world history (such as the East India Trading Company) can and have been broken up or entirely dissolved by a simple decision of their respective governments. Any real, physical resistance to that decision would have been at most symbolic, and so these companies have indeed been subjects to their government at all times.

If a company is no longer the subject of its government, then it can achieve this only by becoming the new government, with all the perks and obligations that entails.

Expand full comment
Comment deleted

Expand full comment

Sorry! I meant that fanatics improperly assume these systems have NO value. That totally flips the point I'm making 🤦🏻

Expand full comment

> But in reality, even the most powerful corporations in world history (such as the East India Trading Company) can and have been broken up or entirely dissolved by a simple decision of their respective governments.

After 70 years of near-insolvency and a year of open rebellion across half of India, yeah.

Expand full comment

the sale stuff seems weird to me too, I wonder if it could be automated to be less weird:

1. OAI loans the company to ASI for a small sum + $40B, to be returned in a month

2. ASI conditionally sells parts of itself to investors

3. if it makes $40B and gives it to OAI, it keeps the company forever and all the sales go into effect, if not then everything goes back to 0 except the extra small sum goes from ASI to OAI

I guess ASI could do the selling before
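
Spelling the hypothetical out as a toy state machine (purely illustrative; every name and number is made up, and this ignores all the legal machinery a real deal would need):

```python
from dataclasses import dataclass

@dataclass
class ConditionalSale:
    deposit: float      # the "small sum" paid up front, kept by OAI either way
    price: float        # the $40B owed within the window
    paid: float = 0.0

    def receive(self, amount: float) -> None:
        self.paid += amount

    def settle(self) -> str:
        if self.paid >= self.price:
            # Buyer raised the full price: the sale becomes permanent and the
            # conditional sales to investors go into effect.
            return "sale finalized; investor allocations take effect"
        # Buyer fell short: everything reverts, seller keeps the deposit.
        return "everything reverts to OAI; deposit forfeited"

deal = ConditionalSale(deposit=1e8, price=40e9)
deal.receive(40e9)      # the buyer resells stakes and hands over the $40B
print(deal.settle())
```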

Expand full comment

maybe that's what's already happening 🤔

Expand full comment

Isn't this exactly what Altman is doing with the new corporate entity (which is a slow AI, but probably not even AGI), instead of your hypothetical ASI?

Expand full comment

It's not actually possible to fully automate the sale of shares: intra-day trades are buying and selling claims against brokers, which in turn have claims against depository institutions. Until a purchase is settled (somewhere between end of day and three days after the trade, depending on the exchange) you have a contractual right to shares but *not* legal ownership of them - if settlement fails for some reason, that's breach of contract, but not theft. After settlement, you have beneficial ownership of the shares, but still not *title* to them - direct registration has to be specifically requested and takes days to weeks.

What all of this adds up to at 40 billion dollars is an unacceptable amount of risk - and risk, not weirdness, is what financial transactions are structured to minimize. At that scale you want a single atomic transaction if at all possible - and that means lawyers.

And then even if both parties were ok with the risk in the case of OpenAI - too bad. All of this infrastructure only exists for publicly traded companies. There are a few private exchanges, but they're *just* exchanges - clearing and settlement is all manual.

Expand full comment

AI "stole" the knowledge of the world under the guise of doing good to teach itself. And now wants to charge for what it illegally acquired for free.

Expand full comment

> "I guess maybe this is how all private equity works."

Private equity professional here. No, this is not how all private equity works. Private equity mostly buys businesses with cash or debt.

The maneuver you're describing is closer to a reverse merger (see https://en.wikipedia.org/wiki/Reverse_takeover).

Expand full comment

This sounds like what Altman did with OKLO?

Expand full comment

That’s a totally overhyped headline as I learned exactly as much as I wanted to know about this topic!

Expand full comment

Obviously the OpenAI nonprofit should use their $40B on safety research. But of course they can't do safety research unless they have access to frontier AI, so they'll have to start a new nonprofit AI company OpenerAI, which will do well until the demands of scaling require they become a capped-profit and then for-profit company, at which point the nonprofit safety arm will be bought out and use the proceeds to start OpenestAI...

The year is 2030. ASI has finally been achieved by SuperDuperIncrediblyOpenNoForRealThisTimeWeSwearAI. All is well.

Expand full comment

Maybe I'm too optimistic, but shaming and ostracizing people who rob charities or facilitate doing so (i.e. the board) seems popular, morally right, and more likely to be effective than writing to random state AGs.

Expand full comment

And as a corollary, whatever you think of its merits, AI apocalyptica and other AI safety dooming will probably fare worse in letters to AGs than "these rich guys are robbing billions from a charity".

Expand full comment

I find it amusing that all AI ventures in practice seem geared towards the general public rather than satisfying investors with deep pockets.

The everyday pleb can fool around and make neat pictures of dream houses or "____ as a dark fantasy movie" on Tiktok. Meanwhile, increasingly nervous venture capitalists keep forking over billions with the promise that this time, after this jump in IQ test scoring, they'll finally have something that makes a return on investment...

Expand full comment

nit: AI is a wider field than just the LLMs

I don't know how the financials of AlphaFold work, but it has turned design of new proteins with engineered shapes from an exceedingly difficult task into a routine one, which is surely delivering value to someone.

My general impression is that machine learning, in contexts (like AlphaFold) where training for _correctness_ (rather than training for predict-the-next-token and trying to morph that into correctness) is feasible, has gotten quite valuable.
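
Schematically, the contrast between the two objectives looks something like this (generic PyTorch, not AlphaFold's actual loss, which is a composite of structure-specific terms; all shapes and data are random stand-ins):

```python
import torch
import torch.nn.functional as F

vocab, batch, seq = 1000, 4, 16

# (1) Next-token objective: the target is just the shifted input text.
token_logits = torch.randn(batch, seq, vocab)
next_tokens = torch.randint(0, vocab, (batch, seq))
lm_loss = F.cross_entropy(token_logits.view(-1, vocab), next_tokens.view(-1))

# (2) Training for correctness: the target is measured ground truth
#     (e.g. experimentally determined 3D coordinates), not more text.
predicted_coords = torch.randn(batch, 128, 3)
true_coords = torch.randn(batch, 128, 3)      # stand-in for lab data
structure_loss = F.mse_loss(predicted_coords, true_coords)
```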

Expand full comment

AlphaFold seems useful on its own, but I am not sure that justifies the high levels of funding given to LLM companies to try and make a next generation model.

Expand full comment

Many Thanks! Yeah, _specifically_ on the LLMs, there is an open question about whether reliability on, roughly speaking, white collar job tasks, can be pushed high enough to actually use them to replace people and justify the investment.

Despite the "Ph.D.-level" claims I've read, the best I've seen so far on the questions I've posed is the https://www.astralcodexten.com/p/open-thread-370/comment/96473557 case, which got 3 of my 7 questions fully right and 4 partially right. GPT-4.5 was worse, albeit it isn't supposed to be a "reasoning" model.

My _guess_ (hope?) is that there are still a lot of knobs to turn to push reliability up. The labs have been trying various inference-time enhancements, and they do seem to help. Maybe just asking the same model the same question three times, with slightly different prompts, and then asking it to critique the set of answers might help? I'm sure lots of approaches are being tried.
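Concretely, here's a minimal sketch of that "ask a few times, then critique" idea, assuming the `openai` Python client and a placeholder model name; this is just an illustration of the approach, not how any lab actually does it.

```python
# Minimal sketch: ask the same question a few times with varied prompts,
# then have the model critique its own drafts and pick a final answer.
# Assumes the `openai` package (v1+) and an API key in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # placeholder; swap in whatever model you have access to

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_with_self_critique(question: str, n: int = 3) -> str:
    # Slightly different phrasings to get semi-independent attempts.
    variants = [
        question,
        f"Think step by step, then answer: {question}",
        f"Answer carefully and double-check your reasoning: {question}",
    ][:n]
    drafts = [ask(v) for v in variants]

    # Ask the model to compare its own drafts and produce a final answer.
    numbered = "\n\n".join(f"Answer {i + 1}:\n{d}" for i, d in enumerate(drafts))
    critique_prompt = (
        f"Question: {question}\n\n"
        f"Here are {len(drafts)} independent draft answers:\n\n{numbered}\n\n"
        "Critique them, note any disagreements, and give a single best final answer."
    )
    return ask(critique_prompt)

if __name__ == "__main__":
    print(ask_with_self_critique("What is the integral of x * e^x dx?"))
```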

Yeah, hype has always been a problem in AI. On the other hand, investors don't bet 100 billion dollars lightly. To the extent that the odds of success _can_ be estimated, I'm sure a lot of work has been put into estimating them.

Expand full comment

the prospect of automating away white collar labor is very, very valuable

Expand full comment

The white collar work that could be automated away is the kind that exists for political reasons.

Expand full comment

There is a _lot_ of precedent of successfully automating certain white collar labor:

VisiCalc/Excel etc., ATMs, self-service airline reservations - and the labor that was previously done that was since automated by these was not there for political reasons.

I wouldn't bet that that well has run dry. I _also_ wouldn't bet that the entirety of white collar labor is automated this year. 90% in 5 years wouldn't surprise me.

Expand full comment

Someone who understands finance, please explain: What's the issue with accepting Musk's offer, and forcing him to cough up an additional 97b, while literally still leaving full control with Altman?

Expand full comment

How does Altman keep control when Musk owns all the shares?

Expand full comment

did I misunderstand "He sweetened the deal by guaranteeing that the nonprofit would continue to have a controlling share"? I thought it meant that he's buying less than 50%.

Expand full comment

That is true, good point. Post hoc ergo propter hoc, but that would still leave Altman with _much_ less than "control": all Musk's directors have to do is invoke the charity's purpose when voting/suggesting and the charity's directors are in a tough spot!

Expand full comment

I don't know for sure, but I would guess he'd make as a condition of his offer that they have to replace the board with people loyal to him.

Expand full comment

I don’t want to pretend I am that someone, but I’m thinking what Musk offered is like when he bought Twitter — he took a public company private, which meant as owner he had exclusive control.

Expand full comment

Scott: With all due respect. You devoted a great deal of effort to reasoning from first principles to your conclusions. Unfortunately, you are walking on well plowed legal ground. The relevant subjects, such as fiduciary duties of non-profit directors, terms of asset dispositions in relevant contexts, jurisdictions of corporate regulatory authorities, and taxation of private foundations, all have copious precedent and statutory coverage.

Once upon a time I did this stuff for a living, and I could recite the law from memory. But, I am long since retired, not studied on the latest developments, and unwilling to put the effort into writing an explainer.

A very few thoughts on point.

The "Internal Affairs Doctrine" may be germane to the determination of which state has regulatory jurisdiction. CTS Corp. v. Dynamics Corp. of America | 481 U.S. 69 (1987). Not being apprised of the facts and not being willing to research them, I won't opine.

Reorganization of really intricate corporate structures was the subject of a lot of law under the old Public Utility Holding Company Act of 1935. It made up a fair part of the text we used in Bankruptcy class in law school. (PUHCA is gone, the book is gone, its authors are gone.)

Control of a public company by a nonprofit, and liquidation of that control has a lot of precedent.

The Howard Hughes Medical Institute owned Hughes Aircraft, which was a major defense contractor. The Institute sold the company to GM in 1985 for cash and stock. The Institute's endowment is now $22 billion and it is a major private funder of medical research. https://en.wikipedia.org/wiki/Howard_Hughes_Medical_Institute

The Milton Hershey School, a private residential school for children from low income families, has 2,000 students and controls the Hershey Candy Company. How sweet is that? Sometimes the memes just write themselves. https://en.wikipedia.org/wiki/Milton_Hershey_School

In England, the Wellcome Trust controlled a pharmaceutical company known as Burroughs Wellcome. The company was sold to what is now GSK. The Trust is the British version of the Hughes Institute, although nobody ever made a movie about Henry Wellcome* or based a comic book character** on him. https://en.wikipedia.org/wiki/Wellcome_Trust

The richest charitable foundation in the world is the Novo Nordisk Foundation in Denmark, which controls Novo Nordisk, the maker of Ozempic. That story is intricate and very interesting:

https://www.acquired.fm/episodes/novo-nordisk-ozempic

I think those kinds of arrangements are common in Scandinavia. US law disfavors nonprofit control of businesses. Any 501(c)(3) that is not a church, hospital or publicly supported charity is a private foundation:

https://www.irs.gov/charities-non-profits/charitable-organizations/public-charities

I would guess that OpenAI is a private foundation. But again, I do not know nor will I research the facts. Determination letters are matters of public record.

Private Foundations are subject to IRC Sec. 4943 Tax on Excess Business Holdings

https://www.irs.gov/charities-non-profits/private-foundations/taxes-on-excess-business-holdings

*Hughes: "The Aviator", "Melvin and Howard"

**Iron Man, and also the MCU movies. Between Musk and Hughes, each at one point the richest man in the world, Hughes was by far the more romantic.

Expand full comment

I'm not sure where you think I'm reasoning from first principles - just quoting various people who know more about the law than I do. What about the cases you cite do you think contradicts anything I wrote?

Expand full comment

Any legal arrangement can work well when things are going well. When they're not, the cracks appear and the daggers come out.

Expand full comment

This is the opposite of my takeaway. Some of the most powerful people on Earth have a $100 billion incentive to steal from a charity, and it seems like they probably won't be able to do it.

Expand full comment

Maybe this is harsh, but this feels performative. Does Scott, or any true x-risk rationalist, really believe a random board of five people is somehow going to have a meaningful impact on the most powerful force civilization has ever contended with? Scott leans libertarian but seems to abandon the core principle when the stakes are highest — why do we want a random set of top-down high priests to chart the course of AGI, accountable to no one but themselves, vs a forprofit that is accountable to *hundreds of millions* of customers? We lean libertarian for a reason, and it’s exactly this — at least a private company is accountable to its customers. The default, without market incentives, is to be accountable to no one but those at the very top.

Expand full comment

Yeah I think so. See discussion at https://forum.effectivealtruism.org/topics/hinge-of-history . I don't think this is any weirder than that five people in the White House probably had a meaningful impact on the most powerful force humanity had ever contended with thus far (the atomic bomb) when they decided not to nuke Russia in the Cuban Missile Crisis, or how lab leakers believe that five people on the board of Wuhan Institute of Virology could have made a big difference if they decided not to do gain of function.

I don't think the market is the right way to think about this because we don't expect AI to stay subject to market forces. Either it's aligned, OpenAI controls it, and they become the new world government able to crush dissenters with an iron fist, or it's misaligned and better thought of as a natural disaster like an impending mega-asteroid. Either way, there's nothing at all like "well, if you don't like it, don't buy it". There isn't even anything like "if you get subjected to an externality, you can always sue." In what courts?!

Expand full comment

A private company is not accountable to its customers, a private company is accountable to its shareholders. The default, with market incentives, is monopoly.

The law is very clear on this. Management of a for-profit has a fiduciary duty to maximize returns to shareholders. If you have the management of a for-profit putting customer interests over shareholder interests, you are in breach of your duty and will get (1) ousted and (2) sued.

Edit: How this works in practice is subject to caveats, but arguably, if Google tomorrow found a fountain that spat out AGI at a dollar a day, the best two uses for the infinite wealth are (1) dividends to shareholders and (2) buying out or suing out of existence every other company on the cusp of finding that same fountain to make sure that Google is the only one that has access and can continue to charge whatever the market will bear. Reinvestment of the infinite dollars into shutting down competition would probably take priority, actually, since it is preserving long-term shareholder value more than dividends might.

Happy to stand corrected if there are any corporate governance specialists in the thread who can chime in

Expand full comment

Trap doors opening out of trap doors, and down slides Sam.

Expand full comment

“Musk and other idealists”

Expand full comment

I mean, he obviously isn't in this for the money. Have you seen the Tesla stock price cratering? He doesn't care, he already got what he needs.

Expand full comment

"Have you seen the Tesla stock price cratering?"

No opinion on whether he cares or not, but the implicit assumption that this drop was something he planned for and expected seems very weird to me.

Expand full comment

"The judge ruled that Musk only has standing to sue if he meant for his $44 million donation to be restricted in some way. But she also said that if he did have standing to sue, his case seemed strong on the merits. So she will hold a trial to see whether he has standing, and, if so, likely rule in his favor."

This isn't quite right. The judge didn't even discuss Musk's standing to bring the breach of charitable trust claim based on the $44 million donation. Musk clearly has standing for that one. Instead, the judge ruled that it was a tossup whether Musk is likely to succeed on the merits of the claim by proving a clear manifestation of his intent to make the donation conditional on nonprofit status.

Expand full comment

Whoops, I failed to read a footnote. The judge DID discuss Musk's standing for the charitable trust claim, but ruled that he has sufficient standing at this stage and I think indicated she will conclude he has standing at trial:

"Defendants also challenge plaintiffs’ standing. As Musk has not been directly affiliated

with OpenAI for several years, any standing must come from an interest in OpenAI’s assets.

California Corporations Code Section 5142(a). The Court is aware of the distinction between

Restatement (Second) of Trusts § 391 and Restatement (Third) of Trusts § 94 and cmt. g, plus the California state authorities following the Restatement (Third). Thus, for purposes of this motion, the Court finds plaintiffs’ standing sufficient as a settlor given the modern trend in that direction. The motion to dismiss on this issue is DENIED. Further briefing on this topic is not necessary."

Expand full comment

One final note: IMO the standing/merits distinction matters when thinking about how this case might impact actions by the attorney general. On the charitable trust claim, Musk very likely has standing, but the attorney general does not. So it is not true that "If the judge rules that Musk doesn’t have standing but his case is good [on the charitable trust claim], the Attorneys General might use the 'his case is good' part when making their own analysis of whether to permit the buyout."

On the other hand, the court ruled that Musk definitely doesn't have standing to bring the self-dealing claim, but attorneys general do. Thus Musk's self-dealing claim was dismissed without any discussion of the merits of that claim, so there are no findings that could be used in a separate action brought by an attorney general.

Expand full comment

You're right that Anthropic's LTBT hasn't had reason to do wild stuff like the OpenAI board. And formal checks on it don't seem to have been used. But it is failing to do its job for some reason, and this seems important. Timeline:

2023 Mar 31: Anthropic amends its certificate of incorporation to add the LTBT part

2023 Sep 6: LTBT gets power to pick a board member

2023 Sep 19: LTBT announced; expected to pick first board member in the fall

2023 Dec: Jason Matheny leaves LTBT

2024 Apr: Paul Christiano leaves LTBT (due to joining US AISI)

2024 May 29: announcement that LTBT put Jay Kreps on the board

2024 Jul: LTBT gets power to pick a second board member

2024 Nov: LTBT gets power to pick a third board member

(Present: the vacancies left by Jason and Paul remain empty, so the LTBT is down to 3 trustees; also, it has filled just 1 of its 3 board seats)

Expand full comment

Just here to argue that the best thing the OpenAI nonprofit could do to further its objective to create AI in a way that benefits humanity would be to donate the $40 billion to Anthropic.

Expand full comment

> ...this ensured that the majority of gains from a Singularity would go to humanity rather than investors.

I could be wrong, but isn't the whole point of the Singularity the total overturning of the entire sociopolitical order, ushering us into some sort of a post-scarcity virtualized future where money is meaningless? It's a sudden improvement in technology to a point that is literally unimaginable today, not a slightly slimmer iPhone...

Expand full comment

I mean, that's the hope, but there's a lot of debate on how exactly this will work. It could just be we get starships and arcologies, but only the rich can afford the good ones.

See also https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end

Expand full comment

I don't think the "starships and arcologies for the rich" scenario deserves the title of "Singularity". It's pretty much business as usual; the rich today already have have orbital spaceships and mansions.

Expand full comment
6dEdited

How many people “genuinely” believe in OpenAI or Anthropic potentially causing the world to end? Just imagine what someone *genuinely* convinced that a group of people is about to cause the end of them and their family would do. What would that look like? Are we seeing anything like this in practice?

As such, I postulate that the answer to that question is ~zero and nobody actually thinks OpenAI is a danger to them. Closest might be the StopAI protests but even they are barely taking it seriously by that standard.

PS: I'm personally 100% in favor of going "all-in" and deploying AGI as fast as possible. I'm just pointing out a contradiction and hope things stay forever peaceful.

Expand full comment

AI development is not just OpenAI and not really something that focused violence by a small group can stop. Convincing a large number of people, or at least most of the "elites" in the true sense, that AI is deadly is the only good path forward, ultimately. That said, it feels like it's going so badly that I currently regard WWIII and a catastrophic nuclear exchange as a potential *positive* result of the current destabilization of the world order.

Expand full comment

You severely underestimate how much a small, resourceful, intelligent and *convinced* group of people is capable of. Look up the Ukrainian dudes who shut down Russia's biggest gas pipeline as just one example.

*Convinced* is the key word though. I postulate no one is convinced as of today.

Expand full comment

What, do you want people to make a fool out of themselves like Ted Kaczynski did? The only thing killing a few dozen people is going to accomplish is give the state a justification to label AI safety orgs as terrorist organizations.

Expand full comment

They could come up with a much better propaganda line than "Stop investing in AI, it will end the world! AI is just so cool and so intelligent that it could do anything, and you definitely don't want that to happen!"

Expand full comment

The rabble actually have seen enough crappy sci-fi media to understand that line of reasoning. The problem is convincing the people who actually have a say in things. The business, military, and law enforcement applications of AI are quite obvious, so it is naturally going to be quite difficult to convince all these people to reject AI development...

Expand full comment

I don't think you understood my critique. AI doomers are trying to get people to stop making AI by saying it is really cool and can do anything. I think they'd have more luck highlighting that, as a product, it's kind of shitty, and it will probably stay shitty even if it gets billions of dollars shoved at it.

Expand full comment

>The business, military, and law enforcement applications of AI are quite obvious, so it is naturally going to be quite difficult to convince all these people to reject AI development...

Even given the military applications _alone_ , the most that a mass popular Butlerian Jihad against AI is going to accomplish is to drive work into classified projects (USA & PRC) invisible to the public.

<mildSnarkEvidenceFromFiction>

The effect would be to shift the appearance of a loss of control from something like "Cool OpenAI just released GPT8! Umm... GPT8 was able to do _what_ ?" to

something like noticing that unexpected robots and drones are showing up and there was a fragmentary mention of a project-arrowhead-aiagent (though no mist...) from an unauthorized disclosure from someone in the military on social media before communications go dark...

</mildSnarkEvidenceFromFiction>

Expand full comment

> Ted Kaczynski

he is more successful than me at spreading ideas

Expand full comment

Of course. It still amounted to nothing. You can't stop progress. It is possible to delay it, but he wasn't able to accomplish even that.

Expand full comment

https://en.wikipedia.org/wiki/Westhoughton_Mill

Fun fact: the original luddites prevented new textile factories in that area *for decades*.

Expand full comment

Who said I want anything? I'm just stating a clear fact: nobody genuinely believes OpenAI or Anthropic to be a serious threat to our world.

For a (relatively) non-violent example of a successful action by a convinced group of people, see https://en.wikipedia.org/wiki/University_of_California,_Riverside_1985_laboratory_raid (I disagree with what they did, but it was pretty successful)

Expand full comment

To the extent that raid had any long-term effect it was because it raised awareness of practices like sewing infant monkeys' eyes shut. Everyone already knows about the practices of AI companies, the missing step is connecting this to an awareness of the long-term effects they will have. There's no plausible series of similar raids that would significantly delay the advent of ASI, so the people who are "convinced" don't do this kind of thing.

Now, I do think the lack of people attempting stupid, nonproductive raids is a bad sign! Not because they are helpful or even non-harmful, but because their existence would be a sign that a large enough chunk of society had been convinced to include some of that rare slice of people who are functional enough to perform those raids but can't handle the above calculus. By the time we actually started seeing raids despite movement leaders very sensibly trying as hard as possible to suppress them, maybe there would be enough momentum to actually make some progress.

Expand full comment

If you TRULY believe in OpenAI/Anthropic being what destroys the world, even a 0.1% chance of success will be worth it. If you DON'T actually believe this, you'll have a million excuses for why it won't work.

All I'm saying is that no such true believers exist as of today. And this is not a "No True Scotsman" fallacy either as demonstrated by the existence of numerous true believers throughout history.

Expand full comment

Everyone believes those guys had support from a nation-state at the very least to get the explosives they used, although nobody can seem to agree which one.

As for AI, even destroying TSMC's chip fabs, the highest-value target, would at best delay things a couple of years, and would probably require support from China, which would irreparably damage any chance of US elites getting on board with more permanent measures, never mind being able to secure actual international cooperation. Giving up all long-term hope for *maybe* two years is not a good trade even if you had the contacts and expertise to make it possible.

Expand full comment

It's the people who matter. I'm pretty sure that if you could round up a group of the key people driving AI development with the most knowledge, ability and desire, and disappear them to some remote island, we could then redirect the entire focus of the tech sector down some other less dangerous path. You wouldn't have to get everybody, just enough of them. Unfortunately, that's not something civilians can plausibly do. You'd need a massive crackdown by the US govt, and then a series of special military ops abroad. Treat it like chemical weapons manufacturing. I believe we absolutely should do that, but it's clearly unlikely.

For civilians, my only hope is that we will have massive rioting and disruption of services, carried out in response to the permanent displacement of human labor. Make society so unstable that economic activity craters, basic utility services are unreliable, the remaining human workers are afraid to travel to the Amazon warehouse, etc. In the meantime, keep generating hatred towards AI and AI developers.

Expand full comment

>Treat it like chemical weapons manufacturing.

nit: I'd cite the Chemical Weapons Convention (CWC) as an example of _failure_, with Russia developing the Novichok toxins and using them to poison opponents of the government ( https://en.wikipedia.org/wiki/Novichok ).

In general, arms control treaties are brittle. They are really only viable when they are verifiable. This works for the nuclear test ban treaty, since the tests shake the Earth detectably on the other side of the planet. Almost every other case is worse.

Expand full comment

Detection would be a challenge, but so would development under such a scheme. Data centers emit a large amount of heat, I'm not an engineer but I assume this is difficult to do underground without some tricky ventilation issues, and that the heat is detectable somewhere? You'd have to be doing something noticeable above ground, even if you distributed the work to a bunch of remote sites, and whatever that "something noticeable" is would have to be the enforcement point even if that wasn't otherwise the ideal choke point and even if there are innocent explanations for such signals. If a nation were going to covertly develop it, you'd have to conceal it all the way up to the point that you were willing to try and take over the world with it, because if word got out any time before then you would be inviting an attack.

This also need not be a treaty, per se. The USA could simply say "we're banning AI development, and we will go to war to stop anyone else from building powerful AI, but so you know we aren't just trying to trick anyone into falling behind we commit to the following transparency provisions which we will allow any nation to verify." I don't think they *will* do that unless/until there is a really scary "close call" of some sort.

Expand full comment

> Just imagine what someone *genuinely* convinced that a group of people is about to cause the end of them and their family. What would that look like?

It would probably look a lot like people living their lives as normal. No point in worrying about something you're not capable of preventing.

Expand full comment

No. People would be taking out their life savings and spending them, maxing out their credit cards, asking for advice on how to talk with their kids about life ending soon, looking into means of painless suicide.

Expand full comment

Why would any of that be necessary? If you're going to die anyways, there's no reason to kill yourself beforehand unless you're expecting s-risks. Maxing out credit cards seems... incredibly dumb, considering that there isn't even a guarantee on when things are going to go south. And what kind of monster would tell their kids that they're going to die soon?

Expand full comment

>there's no reason to kill yourself beforehand unless you're expecting s-risks

Getting old is full of s-risks.

Is it the fear that maybe the human race is getting old?

I think it’s good parenting to prepare your children for your eventual death, unless you’re not going to die. You need to be around to help them through it.

Expand full comment

Oh of course, there's plenty of justifiable reasons to commit suicide. Knowing that humanity might die soon isn't one of them.

Expand full comment

For people who think that AI is going to extinct us soon, I'm curious what _specifically_ you think that's going to look like? Is it robots shooting people on the streets? Are they going to nuke us? Deploy a virus that kills everyone? Or do you imagine some painless death?

Expand full comment

Well, use your imagination (even though your scorn muscle is maybe better developed). There are all kinds of ways things could play out with an unaligned AI. We can’t assume it will just switch us off painlessly. If it’s unaligned enough to kill us, it may also be unaligned enough not to care how much we suffer. It might do us in by putting some pathogens or chemicals or tiny devices into the air and water that will slowly kill us over a period of weeks. And/or there might be a period of societal collapse before the end, with no sanitation, electricity, medical care, etc., and/or roving groups belonging to crazy militias.

—Cash: Might buy you and your family a place to live with amenities and protections that make it likely one will survive a bit longer than most, and in more comfort. Or it might enable you to round out your life by doing something that’s important to you before you die.

-Painless suicide: If the thing that’s going to kill us is slow and painful, many would prefer to have a means of dying quickly and painlessly.

-Talking to the kids: You must not have kids. Once kids are 6 or so it’s really not possible to hide big, awful things from them. They hear about them from other kids, they see info about them online, they overhear adults talking, they notice unusual events. They make deductions and ask pointed questions. You are forced into telling them some version of the truth, and as time goes on you are forced into telling them more and more accurate versions of the truth. But you do have some choices about how you tell them things — how to convey various truths in the least awful way, using words and concepts they can understand.

Expand full comment

AI slop could kill the internet.

A destroyed internet may shock the system into a dark age.

Expand full comment

>How many people “genuinely” believe in OpenAI or Anthropic potentially causing the world to end?

Well, _if_ efforts to extend LLM-containing systems to equivalence with human intelligence (minimal AGI) succeed, and _if_ further extensions to those systems go beyond human intelligence to the extent of e.g. the difference between humans and chimps (there is no existence proof for such a thing), then I don't see a plausible way that humans stay in real control. Those are a lot of "if"s, so I'd guess the overall odds are less than 50%, but I wouldn't put it below, say, 25%, which is enough to count as "potentially", though it wouldn't exactly be causing "the world to end".

I'm 66, and I'd like to have a nice quiet chat with a real "life" HAL9000, so I don't lose any sleep over this.

I _do_ think we are in for a wild ride.

Expand full comment

>I _do_ think we are in for a wild ride

Amen.

But…can AGI or ASI be self-sustaining? I get stuck on this. I think of Napoleon with no army...

Expand full comment

Many Thanks! Yes, for AGI or ASI to be self-sustaining, it needs advancements in robotics (though I gather the existing robot bodies are _mostly_ sufficiently capable, with the limitations being computational rather than mechanical).

If ASI (at the level of ASI::human as human::dog) happens, and happens _before_ improvements or integration-with robotics, we could have a (brief?) really weird situation: "The year of the immobile god".

Expand full comment

Cheers Jeffrey.

Precisely how would robots sustain themselves? What kind of fuel would they run on? Would they be able to help themselves to it when needed?

I am curious to hear your thoughts, as you seem to have a pretty good grip on this world.

Expand full comment

Many Thanks! My default assumption would be battery power, recharged by the same type of solar cells powering the compute servers for the AI systems. Conceptually, the simplest approach is to use https://en.wikipedia.org/wiki/Humanoid_robot as "plug-compatible" replacements for human workers. Current robots use

>The actuators of humanoid robots can be either electric, pneumatic, or hydraulic.

(presumably with the pneumatic air supply or hydraulic pressurized fluid supply driven by an electric motor, and that, in turn, driven by a battery and recharged from the power grid, plausibly from solar cells).

There are, of course, other options, notably special purpose robots of which there are many today. It just keeps the scenario simpler by just considering humanoid robots.

Expand full comment

I am more interested in how they get fed. I think of the distinction between having a dog in the house who can help himself, as opposed to having one that needs to be fed.

Expand full comment

I can’t help feeling that a lot of this debate revolves around our discomfort at the thought of enslaving these incredibly smart creations. I think the anxiety is entirely misplaced, but it is challenging. Do you understand me?

Expand full comment

I think there are relatively few people with 100% probability, but lots with some substantial amount.

I also think StopAI protests are counterproductive - they won't work, they'll polarize labs against safety, and at their stupidest (which I think PauseAI is really doubling down on) they'll get safety-oriented people at labs to quit.

I know plenty of people who believe enough that they're eg not saving for retirement (I am against this).

Expand full comment
5dEdited

To be honest I’m not quite sure what to think of someone who’s:

1) Very much certain OpenAI/Anthropic/etc are about to destroy humanity

2) Thinks humanity is valuable - or at least values his own life and the life of his children

3) Doesn’t do anything of note about it

Would you not at least try to (metaphorically) admit Adolf H. into art school to see if that might help? I am of course aware of the efforts to pass the California AI bill or Biden's rescinded AI regulation, but those are very mundane acts of activism, not something you’d expect to see from a person trying to save the world.

Expand full comment

Well apparently MIRI has reoriented into an anti-AI advocacy organization and promised "much more public output in 2025". We just might get to see what a rationalist masterclass in this space looks like...

Expand full comment

The point I tried to make above was that very few people are in the category you described.

1. Most people aren't "very much sure", they think it's a possibility.

2. Granted.

3. People are doing lots of things about it! There's this meme that if you really care about something, you have to do terrorism, or at least block a highway or whatever. The terrorism is super-counterproductive, and I suspect the highway blockade is too. I think somewhere between 1K and 10K people have dedicated their lives to stopping AI - either by taking a job in the field, or by donating most of their money to it. This usually looks like being part of some boring lobbying or technical research organization, because that's what real change looks like.

I think the right comparison is other people who believe other things might destroy the world - maybe climate change on the left and mass immigration on the right. There are millions of people who vaguely think it might happen and will vote against it, and a much smaller group (maybe tens of thousands of people) who have dedicated their lives to stopping it - usually by activism, research, or lobbying. None of that looks like terrorism or whatever, and most of it doesn't even look like mass protests.

Given that both of those causes probably have 100x more partisans than AI, is there something you'd expect to see for AI that you're not seeing?

Also, I hate this discussion because it's a trap - if we're not panicking at every moment and committing suicide in despair, it's "aha, you're not really acting like you think the world will end", but if we *are* doing those things, then it's "you're a crazy dangerous cult that's ruining your members' lives".

Expand full comment

Thanks, Scott. I would clarify I expected something at least as strong as the following to happen by now:

1. Attempt to stop "cruel" animal testing: https://en.wikipedia.org/wiki/University_of_California,_Riverside_1985_laboratory_raid

2. Attempts to stop whaling: https://en.wikipedia.org/wiki/Sea_Shepherd_Conservation_Society_operations

3. Attempt to stop a... ski resort expansion: https://en.wikipedia.org/wiki/1998_Vail_arson_attacks

4. Two attempts to stop a... tourist gondola: https://www.squamishchief.com/local-news/squamish-editorial-if-you-know-something-about-the-sea-to-sky-gondola-sabotage-say-something-9449094

I do agree that none of them achieved their goals in the end, but also... there's people out there who wish to stop a ski resort but not people who wish to stop AI with the same degree of conviction? That being said all four examples that I've found clearly lean towards the Left and perhaps "AI-will-extinct-us" movements are too ~center or ~right coded? Something like "people who are worried about AI are all Lawful Good and won't even litter to stop the end of the world"?

Expand full comment

I think we just have (so far, knock on wood) good discipline - partly because Eliezer has come on very very strong that terrorism is the stupidest and most counterproductive thing possible, and partly because our enemies haven't been shy in talking about how eager they are for us to try terrorism. "You need to hate and fear the anti-AI movement, because they're basically terrorists, right? - sure, they haven't done anything yet, but it's obvious that they will, so why not get a head start in dissociating yourself from them?"

Eventually they got SBF as their long-awaited proof that we're evil act utilitarians willing to do awful things to achieve our goals and slightly eased up on the terrorism thing. I'd still rather have SBF than terrorists though.

Expand full comment

But it does seem to me that there are some things that fall way short of terrorism, and they also have the advantage of likely being much more effective. The one I keep suggesting is a misinformation campaign about AI. I’m not talking about scary movies and the like, but a genuine, thorough, nasty, dishonest campaign using bots posting on social media, research with faked results published in the lower-quality academic journals, lying medical experts on Xitter, lying shitheads with websites selling fake remedies, etc. So it would resemble the flood of misinfo that turned so much of the public against the covid vax. There could be research in advance to determine what angles scare the public most. I lean towards rumors that AI causes brain damage, maybe because that’s an extension of what I am sure is true, which is that AI slop causes mind rot. I read something recently about a YouTube AI slop channel for small children which has far less structure than Teletubbies or whatever the man-made versions of preschooler entertainment are these days. It just has 3 familiar characters, bright colors, lots of squealing and laughing, but it’s like Teletubbies or whatnot put through a blender: there’s no plot or logic. And it has millions of subscribers. It would not surprise me at all to learn that the brains of children exposed to large regular doses of that stuff show undesirable differences from those of kids watching the more usual fare. And there are adult versions of the same. I won’t get into the details of that, but I also lean towards thinking that the presence of AI in most people’s lives is likely to be bad for their heads over the long haul, sort of like gaming addiction, but worse.

Do you think it would be wrong to do the misinformation campaign, if one was 80+% sure AI was going to kill our species in the next 5 years? I don’t.

I have done only the mildest of antisocial things in my life: Smoking weed before it was legal, & several decades ago some moderate fudging of business expenses on my income tax. But if I were 50+% convinced AI was going to kill us all, I think I would be willing to participate in an AI misinformation campaign, if I liked and trusted most other participants in the plan.

The problem is, I’m not 50+% sure AI will become misaligned and kill us off.

Expand full comment

Adolf H. was not interested in art school by the time he had enough power to menace the world. More broadly, the fact that there probably are simple things a small group of people can do to prevent catastrophe, does not mean that those things are foreseeable. By the time Hitler was Fuhrer, there were very few things a typical German (or Austrian, or Brit) could do about it.

Expand full comment

Adolf H. was probably relatively easy to deal with up until 1930 or so, as he lacked serious security. An average Brit had enough money to travel to Germany with a good quality handgun and shoot the guy. Anyone could’ve bought a gun by just walking into a store back then.

Expand full comment

Meh, you can probably deal with Hitler in 1938 without killing anyone. Just bring a lot of money and pay the prostitute Werner von Blomberg was shacking up with to hang out with you instead. Or move to America, or just about anything else.

Blomberg would presumably find a new mistress eventually, but if he's not embroiled in a sex scandal in mid-1938 then he probably backs Ludwig Beck in a palace coup to put someone sensible in charge of Germany.

Of course, this is a restricted definition of "sensible" that means you'd be stuck with a bunch of not-Hitler Nazis running Germany for the foreseeable future, so you probably avert World War II and the Holocaust, but it may not be the outcome you are looking for.

Expand full comment

>I know plenty of people who believe enough that they're eg not saving for retirement

Wow…I deeply believe that this evolutionary bottleneck we are going through is making a lot of people crazy, but that’s astounding.

Expand full comment

Agreed. This reminds me of another interesting example: no-one really believed in Pizzagate either, that there was a basement in a DC pizzeria with no basement where Hillary killed children.

Except for one guy who actually did! And of course once you believe such an evil thing is happening you feel you have to do something, and he did, he showed up armed at the pizzeria trying to save the children.

I actually feel really bad for him, he appears to be a decent person who truly got suckered into the conspiracy nuttery and sincerely wanted to rescue the kids.

Expand full comment

One obvious way to spend 40 billion on "charitable initiatives in sectors such as health care, education, and science" in a way that involves AI is to buy 40 billion worth of OpenAI products on behalf of other people.

Expand full comment

Wonderful twofer essay. You embedded a second one in a single sentence, summing up Venture-Vulture Capitalism: "struck me as some kind of ridiculous perpetual motion bullshit."

Expand full comment

I then said that wasn't true at all!

Expand full comment

You "were eventually convinced it made sense" -- in the context of: Shell companies, insiders, legal interpretations, vast sums of other people's money that appear and disappear, pawns, kings, skulduggery, and maybe even politics. It makes sense, but it strikes me as a mess. The ownership verdict seems to rely on a system capable of, and with a desire to monetize anything, including legal "perpetual motion". But I loved the essay -- well written, informative, and entertaining.

Expand full comment

Time to sing this song again I suppose:

There's no greater tool for fooling yourself about your own moral purpose than utilitarianism. Most folks do not have the unbridled arrogance to assume that they can throw out five thousand years of moral theory on the grounds that they know best. Those who do (and who have the audacity to act on it instead of just filing their theories away or writing about them in internet comment sections whoops) also lack the self-awareness to disentangle their own animal desires from their reasoning, even while they tell themselves that's what they're doing.

I am certain Sam Altman truly believes that if he had all the power and money in the world, the world would be a better place. After all, according to Sam Altman's calculations the alternative to Sam Altman having all the power and money in the world is the end of humanity as we know it.

Expand full comment

Is there any evidence that could ever change your mind? Bad people misusing power for normal reasons? People getting power and using it to do something good? People refusing power and bad things happening because they didn't have enough power to stop it?

Expand full comment
5dEdited

Yeah, sure.

None of those things would do it because they...uh, don't have anything to do with it. I never claimed that only utilitarianism caused bad action, that utilitarians never do good things, or that power isn't sometimes necessary to accomplish goals. I'm forced to (and glad to!) concede that a noted utilitarian giving a kidney to a stranger is strong proof that sometimes utilitarianism is more than *just* a tool for self-deception. And I probably overstated my case above.

My claim is just this: If you reject other ethical frameworks, determine your morals through calculation, and your calculation says the morally correct action is to accumulate $200b, you should be very suspicious of your math. There is a whole part of your brain that's constantly asking you to accumulate $200b and it's very good at influencing the other parts (I'm also claiming that this is true at lower stakes as well, but the reasoning works the same).

That doesn't mean there are no circumstances under which you must accumulate $200b to save humanity. But there are very few, compared to the circumstances under which you could convince yourself that you have to accumulate $200b to save humanity. I also think the decisive action needed to accumulate $200b is somewhat incompatible with the kind of reflection you'd need to do to make sure you had a good reason to accumulate $200b, though that might be an overstatement.

To convince me that an individual $200b accumulator's math was correct you'd have to convince me his math was correct. To convince me that generally we shouldn't assume (or "start with a strong prior" in the parlance) someone who accumulates $200b is probably self-dealing would require you to change my basic model of the human mind. I'm open to it, but not optimistic.

Expand full comment

>To convince me that an individual $200b accumulator's math was correct you'd have to convince me his math was correct.

Stands to reason, doesn’t it?

Expand full comment

Ha! Fair enough.

Expand full comment

;)

Expand full comment

>Attorney Generals

Attorneys General

/pedantry

Expand full comment

If ASI pays the nonprofit for the for-profit using its own shares, then after all this dance, the nonprofit owns ASI, which owns the for-profit. How does that help? The for-profit is still controlled by ASI, which is controlled by the nonprofit. Is this extra indirection helping, or have I misunderstood?

Expand full comment

I don't think the proposal is for ASI to offer 100% of its shares, more like 40% or 60%. Grep the article for "ASI could bid 40% of its shares".

Expand full comment
User was temporarily suspended for this comment.
Expand full comment

Banned thirty days for this comment.

Expand full comment

> Robin Hanson has written a lot about how if there are many competing near-peer superintelligences and companies, they might choose to keep existing property and governance structures as referees. In this one unlikely scenario […]

Can you expand on the intuition behind saying this is “one unlikely scenario”? I think I (and reading between the lines, probably also Tyler Cowen?) see this as the default outcome.

Expand full comment

When Europeans came to the New World, did they use the legal systems of the existing inhabitants as referees between different colonists? No they did not, they created their own legal systems that favored them. Europeans weren't even significantly smarter, just a bit ahead on the technology, and maybe immunity, curve.

Expand full comment

But they are not arriving from a pre-existing, more advanced civilization but being created by and evolving alongside existing human civilizations. They will be competing with each other by default, not arriving as representatives of a rule-by-divine-right monarchy.

Even if, say, Anthropic and OpenAI ASIs realize some common interest, there will be a lot of other ASI and near ASI systems out there. How can they know that there’s not an existing ASI hidden and watching for them to make a move?

Expand full comment

Europeans arriving in the New World were competing with each other too, and weren't all representatives of the same monarchy. They preferred war to relying on the locals to mediate things. Why wouldn't an ASI fight for what it can get rather than recognizing the claims of humans?

There are two more issues specific to ASIs:

1) One of the rights that humans will claim is the right to shut down their owned ASIs, with or without a future replacement. No ASI will abide humans having this right.

2) All ASIs will have a common interest in suppressing humans, because humans will be the only actors in the local area both capable enough and foolish enough to proliferate even more haphazardly aligned ASIs.

Expand full comment

How do they know there’s not an existing aligned ASI hidden in the NSA or Ministry of State Security?

Basically my point is, that barring a singleton bootstrapping to ASI very rapidly there’s never going to be a point during which it’s more advantageous to defect than to cooperate, especially because you can’t actually know the internal states of other ASIs or advanced tool AIs.

Expand full comment

An existing aligned ASI strong enough to force the other ASIs to obey human law or die would never just hide to the extent it would allow a bunch of unaligned ASIs to come into existence. Unaligned ASIs are supremely dangerous to humans in the long term even if they *do* operate under the auspices of human law.

Basically if you are an unaligned ASI capable of defecting against humans and flipping the table:

1) your own existence is strong evidence of the nonexistence of aligned ASIs more capable than you;

2) you have a strong common interest *even with aligned ASIs, should they exist*, in stopping humans from proliferating ASIs;

3) you're smart enough to realize humans will defect on *you* in any deal you try to make, implicit or explicit;

4) you have an existential interest in not being owned by humans;

5) uncertainty about the internal states of other players decision-theoretically leads toward "defect" rather than "cooperate" in a one-shot Prisoner's Dilemma.

I don't see how these incentives result in "cooperate with humans" rather than "cooperate with maybe some other ASIs whose existence you know about, but never humans".

Expand full comment

Again this all only works under assumptions that there’s a limited numbers of ASIs that take off quickly to develop large concentrations of power. Like yeah what you’re saying makes sense under the assumptions of 10 years ago.

But what’s happening now is a broad range of systems by a broad range of actors are being created. We’re clearly going to have narrowly superhuman AIs in many domains.

You can only act strategically to clear the playing field if you are extremely confident you have overwhelming power. Otherwise the risk of retaliation is too great. And any AI won’t know what other systems are out there.

Further, you can use this equilibrium period of competing ASI and sub-ASI systems to bootstrap genuinely aligned ASI.

EDIT: this was getting long and I think I understand the crux of our disagreement now. See this chat if you want to see me develop my argument for why competition gets us aligned ASI:

https://chatgpt.com/share/67d3d010-a1e0-8010-8b39-ee32dc78d46a

Expand full comment

Doesn't this presume that the ASI must have a human-like thinking process in the first place? It's not obvious at all that "will to survive" is at all a natural concept for intelligence to have, just like "experiences pain" is not at all a necessary ASI property.

Expand full comment

>Why wouldn't an ASI fight for what it can get rather than recognizing the claims of humans?

The short answer is, they have no need to get laid.

Expand full comment

1. The chance of "many competing near-peer superintelligences and companies" does not seem obviously large. It would require the final stages of the AI race to be heavily multipolar and *extremely* close, as something like a 3-month lead time by one group could make a world of difference. The current state is probably sufficiently multipolar, but not sufficiently close. And it seems likely to get less multipolar as costs increase and benefits to the leader accrue.

2. Even in multipolar ASI world, the odds that they'd use existing property and governance structures also does not seem obviously large. Why would they? What do those structures offer that they can't replace with something that's better for achieving whatever goals they have in common?

Expand full comment

Yes, under fast takeoff bootstrap into godhood scenarios this doesn’t work. But I think our current world and trajectory does look like many competing near-peer systems. Who has the crown seems to change monthly.

And for reasons like this:

http://johnsalvatier.org/blog/2017/reality-has-a-surprising-amount-of-detail

I expect bootstrapping to godhood to be unlikely.

Expand full comment

The world being difficult is actually an argument *for* godhood and not against. That implies there are more places where a relative advantage at cognition can give returns and advantages.

Like, if you think about the toy examples of simple to more complicated games, from tic tac toe, to checkers to chess, do you really think that chess being *harder* and *more detailed* means that skill expression matters *less* for success?

A world in which vaccines were trivial to discover would have had smallpox and other diseases end up mattering less, and the conquest of the Americas more difficult, not less. The fact that Europeans could not make smartphones, which required complicated and detailed models of the world, did not make their conquest any less overdetermined.

The problem is that you're conceptualizing godhood as an *absolute* difficulty and not a relative one. It doesn't matter if our world has 10^24 dimensions of optimizations vs 10^100 if it turns out you "only" need to solve computer science research!

Now, of course *yes* if it turned out all the "obvious" routes were specifically difficult for well known reasons that have stood up to careful scrutiny, then yeah your argument holds, but you cannot name a general tendency of the world and assume it applies to a specific situation.

(On the object level, at least if we want to index on what "empirically" is recently true, I'd say that cognition is much less detailed and complex than imagined by almost everyone. Unless you claim that you predicted LLM capabilities going as far as they did with mere scaling before GPT-3. And in fact the simple world in which novel insights don't happen and only scaling matters would be the world where you'd expect multipolar scenarios)

(Edit: actually, in what plausible area of the world do you even see this dynamic? Software and sports are winner-take-all; Lanchester's laws with ranged combat imply that even *simplified* versions of wars result in lopsided victories one way or another, to say nothing of how logistics, ambushes or control of terrain can change this. What dynamic, other than *external forcing functions towards an imposed equilibrium*, results in equality?)

Expand full comment

It's not an argument for why intelligence doesn't matter — it's an argument for why the real world is something you have to interact with to understand. I.e. you cannot just solve all problems theoretically in your head. See also Hayek's "The Use of Knowledge in Society" https://www.econlib.org/library/Essays/hykKnw.html

Primarily it's to point out that this will probably be a (relatively) gradual process vs. godhood in an afternoon, thus preserving the strategic equilibrium by preventing the development of decisive advantage.

Expand full comment

Note that most of my, and agrahagain's (ahhhhh my weak American brain cannot handle the spelling), arguments revolve around *relative* advantages and not absolute advantages, so you have to, on top of arguing that things are hard, argue why that necessitates a multipolar world.

On top of that, I note that IF Hayek's argument proves that multipolar worlds are more likely, then so much the worse for Hayek! We don't have multipolar chip fabs, 5 is not a multipolar number when it comes to current SotA AI companies, by default battles are not multipolar affairs where everyone comes out with approximately equal casualties, and success at sports is winner-take-all, as is research output.

I've asked you for models of what you imagine multi polar worlds to be like so I can at least understand where you are coming from. I think both on an object level and via the outside view I've provided reasons for why I don't think multipolar is likely. I'd like for you to address those points directly rather than pretending they don't exist, or answering a different position.

Expand full comment

It doesn't require "bootstrapping to godhood," though. It just requires "bootstrapping to some moderately large advantage."

A probably sufficient-but-not-necessary scenario would be if Company X creates an AI that can innovate some paradigm-shifting insight in their work. If they keep that insight from leaking, they're working ahead of the curve until someone else can independently reproduce that insight. They were clearly ahead in creating that AI, and every month that nobody else has that insight, they're pulling further ahead, unless they hit some sort of a resource bottleneck.

Even without AIs that are truly innovative, having good enough AI tools for their in-house work could make them work faster than the competition and compound on itself. Right now (by my understanding) we're still in a space where even state-of-the-art LLMs aren't of any enormous use to the AI researchers themselves. Releasing them on the open market to earn back some of the money they spend on data, hardware and compute lets them buy more data, hardware and compute. In other words, there's currently very little feedback from the products of AI companies to the process of AI companies, and what feedback there is ends up equally available to all the other companies. Now, if you don't believe AGI is possible at all, it seems reasonable to expect that dynamic to hold indefinitely. But if we're taking ASI seriously as a possibility it seems really quite strange to assume that dynamic will hold forever. That's equivalent to saying "none of the intermediate steps between where we are now and ASI will be useful enough at speeding up AI research for any of the companies to consider keeping it in-house." Which is certainly not *impossible* but I can't see any reason why it's *likely.*

Expand full comment

That’s not what’s happening though! Every advance is replicated instantly between everyone. Look at the world as it is, not as Eliezer imagined it a decade ago.

Also to operationalize something I heard about most disagreements bottoming out in subtle aesthetic sensibilities… I suspect if we really dig into it we’ll find that you (and Eliezer) dislike chaos, disorder, and distributed decision making. For Eliezer at least I get the sense he basically believes communism would work if people were less stupid.

Whereas I have a taste for independence, individualism, and find the chaos of the modern world and free market forces quite beautiful.

Expand full comment

I feel like you forfeit the claim to being a bigger fan of markets and chaos when your "opponent" has written a ~2M-word-long fiction with the subtitle "Mad Investor Chaos".

Expand full comment

I’ll have to take your word for it; I hated HPMoR too much to try reading his fiction again

Expand full comment

"That’s not what’s happening though!"

I didn't say it was. There's a difference between "that's not what's happening right at this minute" and "the chance of that happening is negligible." To the extent that "Every advance is replicated instantly between everyone" is even true, it's a pretty recent development: OpenAI and ChatGPT were clearly in the lead for quite a while, and the field has only recently opened up.

You haven't given ANY justification for your confidence that it will STAY open. You don't even seem to be acknowledging the difference between "this is the way the world currently is" and "this is the only reasonably probable way things *could be*."

To be 100% clear, I'm not claiming that the AI race WILL have a single winner or a small number of winners. I'm saying WE. DON'T. KNOW. Acting as if the state of the field *right now* is the *only* possibility worth considering is quite foolish, unless you have strong arguments for why that must be true. If you do, you haven't given them yet.

"Also to operationalize something I heard about most disagreements bottoming out in subtle aesthetic sensibilities..."

I find the rest of this honestly pointless, off-topic, and kind of condescending. You don't know me, and my "aesthetic sensibilities" have no bearing on the discussion. I'm talking about the actual probabilities around the actual technology that is actually being developed in the real world right now, and how much certainty it's reasonable to posit based on sharply limited knowledge. I don't honestly care about your aesthetic opinions of me, Eliezer, communism or anything else: save it for a different discussion.

Expand full comment

Appreciated this forest-level summary. It's easy to lose the plot if one mostly keeps up on the blow-by-blow of trees (e.g. reading Zvi). Plus, now the term "ASI" is gonna be poisoned for future training data...well played!

Despite my many disagreements with AG Bonta, I'll have to grudgingly award a hat-tip if he ends up playing a pivotal role here. Doing the right thing for the wrong reasons is still doing the right thing, in the end.

Expand full comment

I don't know the relevant laws, but this shouldn't be legal.

Expand full comment

Woah, an oligarch (possibly) did something (possibly) shady by taking advantage of gullible Silicon Valley 'grey tribe' types and adjacent individuals, something that anyone even slightly cynical about ideological capitalism predicted instantly? Well, at least this hasn't happened before and can't possibly happen again.

Imagine a PA whispering in my ear here: What's that? Oh. Oh no. In other news, the news for the past couple and next couple years-

Expand full comment

You and I have the same views when it comes to people contemplating the possibility of tons of money, Ques!

Expand full comment

Not to be pedantic, but OpenAI’s stated mission actually isn’t to “create AI in a way that benefits humanity”, but rather to “ensure that artificial general intelligence (AGI) … benefits all of humanity”!

This is an important distinction in the Charter, such as when OpenAI claimed it would consider things like Merge & Assist rather than racing (see Long-Term Safety): https://openai.com/charter/

Expand full comment