
Hello: I just now happened on your review of the book Jaynes ought to have written, and am very glad to have done so. Now reading "On the Failure of Oracles" at U of Chicago's web site. Thanks!


I’m looking for a small town to write a novel from. It has to be in the mountain west (CO/WY and the surrounding area), ideally with good surrounding nature for hikes or runs. I’d rather avoid somewhere like Jackson Hole or Aspen because I expect those would be overrun with tourists, but unpopular places are by construction hard to find. Does anyone have any suggestions?

Apr 7, 2023·edited Apr 7, 2023

Comment from Scott Locklin's blog:

"To add to this: just learned that MIRI (then SIAI) was literally founded to *speed up* the coming Singularity, to prevent the gray goo apocalypse that Yudkowsky circa 2001 was convinced would kill us otherwise:

'On the nanotechnology side, we possess machines capable of producing arbitrary DNA sequences, and we know how to turn arbitrary DNA sequences into arbitrary proteins (6). We have machines – Atomic Force Probes – that can put single atoms anywhere we like, and which have recently [1999] been demonstrated to be capable of forming atomic bonds. Hundredth-nanometer precision positioning, atomic-scale tweezers… the news just keeps on piling up…. If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight….

If you project on a graph the minimum size of the materials we can manipulate, it reaches the atomic level – nanotechnology – in I forget how many years (the page vanished), but I think around 2035. This, of course, was before the time of the Scanning Tunnelling Microscope and “IBM” spelled out in xenon atoms. For that matter, we now have the artificial atom (“You can make any kind of artificial atom – long, thin atoms and big, round atoms.”), which has in a sense obsoleted merely molecular nanotechnology – the surest sign that nanotech is just around the corner. I believe Drexler is now giving the ballpark figure of 2013. My own guess would be no later than 2010…

Above all, I would really, really like the Singularity to arrive before nanotechnology, given the virtual certainty of deliberate misuse – misuse of a purely material (and thus, amoral) ultratechnology, one powerful enough to destroy the planet. We cannot just sit back and wait….'

And he made this incredible prediction:

'Our best guess for the timescale is that our final-stage AI will reach transhumanity sometime between 2005 and 2020, probably around 2008 or 2010.'"


I started experimenting with betting on prediction markets a few weeks ago and recently made my first market. I have some questions on getting the most out of them:

1. Does anyone have any reading to recommend on best use cases? I.e., where does the wisdom of the crowd "fail" or trend towards a wrong answer as N approaches infinity? Maybe better phrased: where, if anywhere, does expert opinion reliably outperform the equilibrium point reached by the wisdom of the crowd? (A small simulation sketch of one failure mode follows after question 2.)

2. Are there any forums or communities where people who make markets can request extra engagement on questions they think are important, to try to increase the predictive validity of the market? Alternatively, are there ways of doing this on Manifold or Metaculus by spending money or through some other mechanism?
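
On question 1, here is a minimal simulation sketch (in Python) of one classic failure mode: when forecasters' errors share a common bias, averaging more of them converges to the biased value rather than the truth. All numbers below are made-up illustrations, not data.

```python
import random

# Illustrative only: a crowd average washes out independent noise but not shared bias.
TRUTH = 0.30        # "true" probability of the event (assumed for the demo)
SHARED_BIAS = 0.10  # systematic error every forecaster shares (e.g. a common narrative)

def forecast():
    """One forecaster: truth + shared bias + independent personal noise."""
    return TRUTH + SHARED_BIAS + random.gauss(0, 0.15)

for n in (10, 100, 10_000):
    crowd_average = sum(forecast() for _ in range(n)) / n
    print(f"N={n:>6}: crowd average ~ {crowd_average:.3f} (truth is {TRUTH})")

# As N grows, the average settles near TRUTH + SHARED_BIAS (0.40), not TRUTH (0.30):
# independent noise cancels out, correlated error does not.
```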


I am offering mini-grant(s) to any individual [ideally one whom I could pay through a 501(c)(3)] who would use the money for any one of the following:

1. Filling an otherwise unmet need for high-IQ people ages 5 to 20, for whom the grant to you is likely to help them live up to their potential to make a difference in their current sphere of influence or the larger society. "Make a difference" can be in a progressive, conservative, or apolitical direction.

2. Encouraging discussion of pro-merit issues, e.g., the net negative of yet more redistribution of money and attention from people with greater potential to contribute to a better society to those less likely to. Like the other two foci, this must be used for an initiative that would otherwise go unfunded.

3. Taking a worthy step toward understanding the biological basis of reasoning or impulse control that would otherwise go unfunded.

Email me a brief proposal saying: 1. What you would do with the money. 2. What makes you a person likely to use the money well. 3. What grant amount would yield maximum benefit per dollar I'd give you. 4. Whether I could send the grant money through a 501(c)(3). Send your proposal to me at mnemko@comcast.net


I can't help you but just want to say this is an incredible initiative and I hope it succeeds.


Thank you. I look forward to seeing proposals.

Apr 6, 2023·edited Apr 6, 2023

Any thoughts on Italy blocking ChatGPT? It was the first country in the world to do so, citing privacy concerns about user data: https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847.

Other EU countries may follow suit.


Southern Europe is used to missing out on technological progress, so I'm sure they'll be fine and impoverished.

What worries me is that the ban is getting praised in Western Europe, which would be a disaster for our future.


So alignment will be trivially easy?

Comment deleted

And your evidence for this?


I wrote a thing about the thing that everyone is writing about lately. But this one is different, I swear...

https://kyleimes.substack.com/p/the-robots-are-coming-and-its-gonna


It's that time of year again: is Easter pagan?

So here's a video interview on the topic:

https://www.youtube.com/watch?v=xixdmHaajoc


Summary of the one-hour video:

"Nope, Easter is Christian, because that's when Jesus was crucified and rose from the dead. (Technical details on how the exact day was calculated.) If other religions celebrate the same day, it's because 'equinox' and 'end of winter' are universally interesting things. I did a lot of research on the goddess Iostra, and the spelling is slightly different. Anyway, she is probably made up. Also, maybe it's the other way round, and the pagans worldwide emulated the holiday of the local Christian communities. Easter eggs are a medieval invention, which probably started as: 'Lent is over, now you can eat eggs again.' The Easter bunny was invented in the 17th century as a rationalization for the eggs. People say a lot of crazy stuff on the internet; if you are really curious about the history of religion, get a PhD in Theology."


Does anyone know of any tools that can transcribe Teams calls?

And before you say it: I can't use the Teams transcribe function because my meetings can't be recorded due to GDPR, so I need another way.


I'm not sure I follow. If you can't use transcription because of GDPR (and not because it barely works), then why would another tool not be restricted in the same way?


The two main problems with the Teams version are that when you want to fully transcribe the meeting you have to record names and record video too.

Whereas I'm hoping for another app that transcribes the meeting from audio alone without recording people's names, while still recognising distinct voices. This way, no 'personal' data (name and face) is gathered, but I can still record all the details from the call.


I didn't think the built-in function actually required "recording", as it can also work as real-time voice to text.

Hire stenographers? otter.ai


You have to record the sessions to get a full transcript. Otherwise you just get captions that will disappear


There are tools to record selected parts of the screen. Perhaps you can just record the video in a narrow box around the captions, with no audio or rest of the screen?

Apr 5, 2023·edited Apr 5, 2023

I thought there was a download transcription setting. 🤷‍♂️

Otter.ai?

Airgram?


The open source whisper.cpp does a decent job at creating a raw transcript but it needs editing to attribute text to specific speakers.
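
If it helps, here is a minimal sketch of the same idea using the Python openai-whisper package rather than whisper.cpp. The file name and model size are placeholder assumptions; the output is a raw, timestamped transcript with no speaker labels, so attributing text to specific speakers (diarization) is still a separate step.

```python
# pip install openai-whisper   (ffmpeg must also be installed on the system)
import whisper

# "meeting_audio.m4a" is a placeholder for an audio export of the call.
model = whisper.load_model("base")        # larger models ("small", "medium") are more accurate
result = model.transcribe("meeting_audio.m4a")

print(result["text"])                     # the full raw transcript, no speaker labels
for seg in result["segments"]:            # timestamped segments, handy for manual attribution
    print(f'[{seg["start"]:7.1f}s] {seg["text"]}')
```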


Won't the people crying out about AGI killing off humanity look dumb in ten years if it hasn't happened yet, and it also doesn't look like it's about to happen?


Think about your dad's reaction if you play Russian Roulette _once_. Won't he look stupid when you survive (as you will in 5/6 cases)? That doesn't make playing even one iota less stupid.

Comment deleted

It's a different question from what OP asked. For a water pistol a reaction would be dumb even before firing, because there is no question of the outcome. Having to wait 10 years to see what happens implies a degree of uncertainty.

To take your metaphor back to the main topic, you're suggesting not only that we'll be OK, but that there is absolutely no possibility of harm from going bullish on strong AI, no matter what. This is, well, a hell of an argument to make, and it should take proportionally more evidence.

"Will I look like an idiot in 10 years if I speak my mind?" is a great, teenager-inspired way to never do or say anything of value, and still look like an idiot, because not doing or saying anything is equivalent to doing and saying whatever the mainstream around you does and says, which is also not immune to being wrong and ridiculous.

Given enough time, every idea is dumb.

There is no workaround to being wrong, you have to be wrong in order to be right. I also think that AGI doomers are wrong, but I don't approach it from the ridiculous angle of "Right Side Of History"ism, I just say out loud why I think they are wrong.


They certainly will, to witty-take-oriented people who have a great dread of being out of fashion and of appearing overly serious.


What the heck is the point of this comment? Trying to shame people into thinking that AGI won't be catastrophic for humanity? If you genuinely think that AI will kill humanity, the risk of 'looking dumb' is a negligible cost.

"Won't the people saying AI isn't a risk to humanity feel like morons as they're dying at the hands of an AI?"

And what are you even saying? That they'll look dumb if and only if AGI exists in ten years' time? Or that if AGI doesn't exist by then they'll also look dumb?

If you have an argument to make against the AI risk case, make it, but 'you're gonna look so dumb when you're wrong' is just childish.


Even worse, even if they're right they still won't get to look smug because they'll be dead


Unless it was their crying that prevented it? (https://en.wikipedia.org/wiki/Self-defeating_prophecy)


Or the people clutching their pearls about wokeness destroying the country.


Kinda rich coming from the ideology that dresses itself in 59 Kg of pearls and spends long days and nights clutching them about **Checks Notes** the unavailability of Drag shows for kids.


All "pearl clutching" really means is "taking something more seriously than I think it should be taken." I guess it would be a witty and amusing way of making that point, if it weren't such a stale internet cliché at this point.


Ironic, considering wokeness IS pearl clutching, and it's the woke constantly freaking out about things.

"Yeah I don't think assaulting people and burning down people's small businesses across the country in the name of black nationalist outrage is a good thing"

"HAHA YOU PEARL CLUTCHER!"


I remember when climate change was going to have New York City underwater by 2020. Or was it 2010? Either way, when doomsday predictions don't happen on schedule, the people who make them don't reconsider; they just push the date out and deny ever having made the old prediction.


Who was predicting NYC would be underwater by 2020?


These people? An article from 2011:

https://www.theguardian.com/environment/2011/nov/16/climate-change-report-new-york-city

"Irene-like storms of the future would put a third of New York City streets under water and flood many of the tunnels leading into Manhattan in under an hour because of climate change, a new state government report has warned.

Sea level rise due to climate change would leave lower Manhattan dangerously exposed to flood surges during major storms, the report, which looks at the impact of climate change across the entire state of New York, warns.

"The risks and the impacts are huge," said Art deGaetano, a climate scientist at Cornell University and lead author of the ClimAID study. "Clearly areas of the city that are currently inhabited will be uninhabitable with the rising of the sea."

Factor in storm surges, and the scenario becomes even more frightening, he said. "Subway tunnels get affected, airports - both LaGuardia and Kennedy sit right at sea level - and when you are talking about the lowest areas of the city you are talking about the business districts."

The report, commissioned by the New York State Energy Research and Development Authority, said the effects of sea level rise and changing weather patterns would be felt as early as the next decade."

Then they shifted that out in 2019 to happen "by 2100":

https://gothamist.com/news/new-climate-report-suggests-nyc-could-be-under-water-sooner-than-predicted

"Turns out sea levels may be on track to rise by more than double the unsettling figure climate scientists previously projected, and all within a century. A new study, released Monday, predicts a rise of 6.6 feet by 2100, if global temps warm by 9 degrees Fahrenheit. In that "worst-case scenario," according to CNN, hundreds of millions of people—including residents of New York City—would find themselves displaced as their homes sink underwater."


As stubydoo noted, the first article wasn't saying New York would be underwater by 2020; it was saying that IF another Irene-level storm hit, the flooding would be extra bad. That hasn't happened, and a quick Wikipedia look seemed to indicate that flooding from hurricane near misses has indeed been pretty severe.

That article from 2011 predicts 2.5 feet of rise by 2050. This article from the same outlet (https://www.theguardian.com/environment/commentisfree/2021/apr/13/sea-level-rise-climate-emergency-harold-wanless) from 2021 predicts 3 feet of rise by 2050. Seems pretty consistent to me.


That's jumping between two different aspects - general sea level and isolated flooding events. The 2011 report was about the flooding situation ***during major storms***. Major storms are random events that you can count on hitting somewhere, but the fates may send them to a different spot instead of yours. As far as I recall, New York City didn't get any major storms in the circa-2020 timeframe. There was one that hit in 2012, and that one certainly did flood a sizable chunk of the city and many of the tunnels (they're still fixing the tunnels now).


I agree, some predictions being wrong certainly proves other predictions are wrong.


That's not how the psychology of doomsday cults works. Whether doom arrives or not is not all that crucial.


So what term do you use for people who saw, for instance, the course Germany was on a decade before it got to the Holocaust?


Rightly concerned, of course. The problem is that at the time many targeted minorities felt the concerns were overblown, and left only after repeated prompts. All too many actively chose to stay. It didn't help that many countries were actively hostile to migration in a way that made such alternatives look no better (according to letters that survive). Even the US route was only available to those with connections and money. I'm not sure invoking this tragedy in a Godwin's-law kind of way is useful; it's not a trump card. You perhaps meant to make a more nuanced point but didn't have time to expand on it, and I would be interested to read it.


No, my point ain't all that nuanced. It's that sometimes people saying a terrible event is coming are right. I realize that those saying something like the Holocaust was coming were not strictly speaking prophets of doom, just predictors of terrible things happening to certain classes of people. But of course many people speaking up with great concern about rapid AI developments are also not predicting certain doom. Scott, for instance, places the probability of FoomDoom at 33% if we do not take steps quite soon to stop AI development and make progress on AI alignment. Zvi's prediction is about 50%, I believe. Yudkowsky is the only person I'm aware of saying it's too late to intervene and FoomDoom is now certain.

Does that make those whom he's convinced cult members? Seems like being a cult member requires more than being convinced. You'd have to be preoccupied with the cult's leader and obedient to them, and also willing to cut your ties with those who do not believe as you do. I expect there are some Yudkowsky fans who think that way, but I don't have any reason to think it's most of them. There are probably people whom Yudkowsky's writings have convinced who had no contact with him or others who believe him to be right. They're just off living their lives, trying to come to terms with this awful truth they've been convinced of.

Or take me. I am convinced I do not understand AI well enough to make a valid prediction of how likely FoomDoom is. So I have decided to go by Scott and Zvi's read, because they are smart people in whom I feel a reasonable degree of trust. But I'm sure I'm not a Scott cultist. In fact my last post about how to reduce the risk really irritated him, and far from bowing to the master, I just argued back.


How about you stop calling it a 'doomsday cult'?


I'm referring to the psychology of doomsday cults in response to someone who thinks "looking silly" is a consideration, to try to point out that this is the wrong framing. As far as I can tell there are some aspects of that psychology that apply to the current discourse about extinction risk.

However I should probably have expanded my comment. I am not trying to dismiss concerns about extinction risk from AI, because I share some of the worry, and extinction is in my personal Overton window. I believe that AI is creating great upheaval and will likely have major destructive effects, even if less directly than an extreme FOOMer position predicts.


Actual doomsday cults tend not to say things like "We all might die, and there's nothing anyone can do to save themselves; there's no grand meaning to the apocalypse, or anything good about it in particular; no reason to be excited, no deep mystery, no spaceship coming to save you, no battle between right and wrong, almost all potential ways out haven't panned out, and the remaining ways of increasing chances of survival are deeply ideologically repulsive to the group's (mostly technophile) members; there will never even be so much as a moment to brag about being correct before death, the future is likely just sad, you should try to live well in what time left you have". The psychology doesn't fit the model you're suggesting here.


That sounds like a caricature of 20th century existentialism, was that your intent?


Has anyone made a practical design for an arcology meant for placement in very hot or cold climates?


Does The Line in Saudi Arabia count?

https://www.neom.com/en-us/regions/theline

I don't know if it's actually been designed for a hot climate.


Is utilitarianism itself an X-risk? There's a pretty outlandish scenario described here: https://betterwithout.ai/rollerskating-transsexual-wombats that culminates in something not so outlandish:

> [Mocenigo] uses cutting-edge AI to calculate the expected utility of long-term human survival versus projected suffering over the next few years. As he suspected, its answer is negative. Mocenigo opens his lab fridge and snorts a vial of Anthrax Leprosy Mu, a hybrid pathogen he’s developed. It has an R0 of 14,000 and is uniformly fatal after a month-long symptom-free incubation period. He heads out of the lab to the football stadium nearby that has been converted into a refugee center.

If people start thinking utilitarianism is true, well, someone could end up with their calculus telling them human extinction is the best outcome, as happened there. That would make it one of the most dangerous philosophies ever developed, since unlike other philosophies this one cloaks itself in math, giving it a very powerful allure to people with a certain psychology: the sort of person who could figure out how to actually cause human extinction.

Might be interesting to draw up a catalog of philosophies and ideologies that could justify human extinction, so some heavy duty philosophical work could go into refuting them all.


I think once you take into account the possibilities for causing human extinction non-deliberately you'll find such a catalogue much less useful. After all, accurately predicting how your actions may contribute to humanity's extinction is probably about as hard as accurately making utilitarian calculations


Well, philosophy cannot do all things, but refuting dangerous philosophies seems like a worthwhile task for it. I don't think a super high level of certainty is needed to proclaim "this action does not contribute to humanity's extinction". Was beating the Nazis something that accelerated human extinction? "Oh, a capitalist economy accelerates technological development, which increases all sorts of X-risks, therefore, it would have been better that the Allies lost. And if the Allies lost, maybe there would have been no nukes." Some level of risk has to be accepted in every decision.

For what it's worth, I do think the birth of science may have been a colossal mistake. It increased our power without a concomitant increase in the wisdom to handle that power. It arrived too early basically.


I just don't feel there's much grounding for the claim that one philosophical system is more likely to lead to extinction than another. Something akin to that example for utilitarianism can be concocted for basically every one, and I see little hope of quantifying which are most likely


It wouldn't be based on quantification. Every philosophical view can be used to argue for extinction? You see this in virtue ethics? What is the path there? In communism? In capitalism?


The easy paths involve situations in which things *like* nuclear exchanges are deemed lesser evils. But ultimately everything that utilitarianism does quantitatively can be done qualitatively. Actually writing the full path for a specific philosophy is left as an exercise for the reader - sorry, but I don't find it wise to explicitly lay that out right now, and also I'm tired


Or someone could think it's worthwhile starting WWIII so long as it takes out the AIs.


And that's automatically wrong, obviously.


It was worthwhile to start the Civil War to end slavery. This might be a similar situation. The global south will survive total nuclear war anyway. But I think this can be handled without WWIII.


"It was worthwhile to start the Civil War to end slavery."

Says who?

And 'end slavery' should really be 'end slavery sooner'.


Even if we are willing to discount the complexities of the civil war for the sake of the argument, I do not see the analogy with today’s situation. Slavery was not, per se, an advantage in defense, but AI is very likely to be. So war (or the mere threat of war) may increase the pace of AI adoption, leading to an arms race scenario. This is probably the worst possible outcome.


The analogy is that slavery was a moral error, and advancing AI capabilities without solving alignment is also a moral error. For what it's worth, I think we are pretty far from the point AI is necessary to win a war, and an arms race was going to kick off on its own anyway once AI is seen to provide a decisive advantage. The CCP knows USG wants it gone: AI doesn't change that.


"The analogy is that slavery was a moral error, and advancing AI capabilities without solving alignment is also a moral error. "

I presume that enslaving Irish immigrants to go kill a bunch of people in the South was an act of pristine moral clarity though, right?


There is a view that wars start because of a grievance of some kind plus the inability to reach a peaceful agreement, typically due to incomplete information. So fighting is in part a learning process. It is hard for me to believe that a major country would start a war over something so silly as a ‘moral error’, whatever that is. Perhaps they would use that as a cover story later on, to justify their choice to their own public opinion. Now if the ‘moral error’ is also a potentially dangerous weapon, I would expect a preemptive attack, with later public discourse spinning it as the right thing to do for the good of mankind or something. But if the AI is already in control on both sides and it has a way of figuring out whether it will win or lose without fighting, it might even choose not to fight. Unless it is aligned, ironically.


That's a lousy argument on its own. There are many moral errors in the world that aren't worth fighting a war over.

Communism is a moral error, is it worth invading China? Probably not, but I'd at least want to see a cost-benefit analysis before we try. Iraqi Baathism was a moral error and the cost of that one turns out to have been dubiously worthwhile.

As for slavery, if only the US had followed the example of every other freaking country in the Americas and peacefully and gradually outlawed slavery (while paying adequate compensation to the slaveowners so they weren't too upset) I think that would have been a vastly better alternative.


And it probably would have ended up being cheaper too, all things considered.


There is a comment thread here about why Stormy Daniels has not/is not being charged with blackmail related to the events that have gotten Donald Trump indicted. I have new info to offer and for ease of reading will summarize here.

The question basically is, "isn't she guilty of illegal blackmail/extortion?"

Some of us responded to note that hush money payments aren't illegal under federal law. U.S. law (see 18 U.S.C. § 873) requires that the thing being threatened with exposure be something that was illegal. Not simply embarrassing or politically inconvenient, but illegal. Consensual banging with a porn star is not illegal.

Trump though is being indicted by the State of New York, and some folks pointed out that blackmail under most state laws does not require that the thing being threatened for exposure be an illegal act. State laws against blackmail cover the instilling of fear more generally, including not just threatening to do someone harm and/or exposing a crime but also threatening to "expose a secret" which would induce fear in the person being blackmailed (that's a quote from the New York statute).

So, then, why isn't Stormy Daniels potentially guilty of illegal extortion? That question turns out to have been posed many times since 2018 in forums like Reddit and Quora and etc, with a variety of attorneys/law professors/former prosecutors/etc jumping in to respond to it. Their consensus answer is that there isn't any allegation that Stormy Daniels has attempted any extortion as defined in any state laws.

Daniels didn't approach Trump or any Trump representatives to demand money for silence. Rather, she was starting to talk in public about having had sex with Trump and was then approached and (she alleges) threatened with harm if she didn't sign an NDA drafted by Trump's lawyers and accept the payment in exchange. She signed the NDA, received the money, and then Trump failed to sign the document. Later she sued in civil court to invalidate the NDA because of his failure to sign it; she did not in that lawsuit seek any damages or other new payments but simply asked the court to agree that the NDA was not in force.

So, quoting here many lawyers who've posted responses online on this topic during the past four years, "none of the elements of legal blackmail (on Daniels' part) exist."


I’m not sure which will be more entertaining, MSNBC’s or Fox News’s coverage of Trump’s booking. Maybe I’ll record one of them so I can compare and contrast. As usual, I’m kidding. I won’t be watching either. I saw the OJ low-speed chase by accident at an after-work bar gathering. That’s enough reality TV for one lifetime.


Doesn't adopting the longtermist frame imply being pro-life? Been reading a bit of The Precipice, and they sure harp a lot on the value of unborn generations. If these generations are worth a lot, then isn't the future life of the fetus also worth much?

Maybe you can counter, what about the future life of the gametes? But gametes do not turn into people all on their own, like a fetus. Once fertilization occurs, you have something that will almost certainly become a person, should no one intervene. And if future life has value...


This only works if you think that people are morally obligated to maximize total utility and don't put any value on human freedom.

This would also imply forcing people to breed. If we're going to go down that road, better to force everyone to do it instead of placing all the burden on the dumber half of society.


I'm not even sure it works if you assume that! I put no final weight on human freedom - but I think that human freedom is an incredibly valuable thing for maximizing total utility (because usually giving a person freedom gives them the ability to do whatever maximizes their own utility, and they will usually do so better than someone else deciding for them).

While forcing people to breed might be a "greedy" solution towards maximizing the number of happy lives, it does a much better job at maximizing *number* of lives (short term, at least) and a much worse job at making them happy (and over the long term, those lives may not end up very well if they are mostly forced rather than chosen, and may end up with fewer people in the second or third generation if the first generation goes particularly badly).

> you have something that will almost certainly become a person, should no one intervene.

I don't know... seems like a lot of intervening is done to ensure it becomes a person.


Not necessarily - a longtermist may believe that future generations have value, but that doesn't mean that value always trumps value/rights/interests in the present.

It's a value/value balance, not a blind yielding of the present to the future. Otherwise you'd end up with crazy longtermists calling for some kind of pregnancy autocracy where we throw human rights out the window and mandate annual pregnancy for all women in order to maximize the size of future generations at all costs.


They are definitely arguing we should be heavily valuing the future. That's not the same as having the future overthrow the present, but then, neither is having a child and giving it up for adoption an overthrow of the present by the future. It would just be including the future in your decision making, instead of thinking only of yourself.


People who decide to have an abortion are usually thinking *primarily* about the future in their decision.


And (just to further complicate the future dimension), most women who get abortions already have at least one child.

"Six in 10 women who have abortions are already mothers, and half of them have two or more children, according to 2019 data from the Centers for Disease Control and Prevention"

https://www.nytimes.com/interactive/2021/12/14/upshot/who-gets-abortions-in-america.html

So it's not just the future of the pregnant woman being considered, but their limited resources for providing and the impact another mouth would have on the futures of the kid(s) that they already have.


The longtermist frame implies doing things that will increase the general population growth rate, provided that they don't correspondingly decrease the economic growth rate. I don't believe that legal availability of abortion does that much to change the population growth rate (though if someone has good data showing that it does, that would be interesting to know). My understanding is that the availability of abortion makes it safer, and ends up with nearly as many babies born, just that more of them are at times in the parents' life when they are happy to support them.


What I read in The Precipice is that we should place a lot of value on what future generations will accomplish. That's the impulse behind it, not specific implementation details like a balance of the population growth rate and the economic growth rate. Similarly with abortion: if you start valuing what the unborn fetus will accomplish in the future, suddenly the whole thing starts looking very different.

And the only choices are not abortion and raising the kid yourself: newborns given up for adoption are snapped up fast. So it's not really an out to throw out a fetus' entire future life just because you don't want to be inconvenienced for 9 months.

Abortion only makes sense if you think the future life of the fetus has no or insignificant value. And the longtermist frame is all about valuing the future.


I think you're missing the very specific *quantitative* core of longtermism. Nearly *all* ethical views say that we should care about the future. The distinctive feature of longtermism is that it says there's so many *more* people in the future that we should care about the future *more* than the present - enough so, that decisions about the present might generally be better if they are made in light of their effects on the future rather than their effects on the present.

Given this perspective, it's a mistake to evaluate an abortion just by looking at the effects on the one fetus that is denied a chance at life - you instead need to evaluate the abortion by looking at the effects on the huge number of people in the future. It's all about population growth rate and economic growth rate, and not about missing the forest for looking at one tree.

If you value what the unborn fetus might accomplish in the future, but *also* value what the pregnant person might accomplish in the future if they weren't pregnant, and *also* value what might be accomplished by a future fetus that this person might choose to carry to term if not this one, then it all becomes pretty complicated.

But it becomes a lot more simple if you also look at the impact on the future of laws criminalizing abortion, apart from their effect on the abortion itself.


In lieu of a long-winded answer, you might find it interesting to ask yourself the contrapositive (does not being pro-life (or being pro-choice) entail that future life has no value?).


A pro-choicer is definitely saying it has very little value, certainly much less than the longtermists are saying. Perhaps this is not the same as literally no value, but it is a distinction without a difference, since no value or little value, the outcome is the same.

I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die. In that one, we're not even talking of future life any more, but of real, present life, and even that is being devalued. But I'm not sure how many pro-choicers actually endorse that argument.

Does this mean abortion should be banned? Maybe not federally, but I'm ok with letting states do it. I do feel abortion is ignoble in nearly all cases. There would be no abortion if we were all we were meant to be. But this has no bearing on whether longtermism implies pro-life, just decided to clarify my general position on abortion.

Maybe one can be a pro-choice longtermist if one says individual future lives have little value, it's just that there are potentially so many of those that it adds up to a hefty sum, but that's a utilitarian argument, and utilitarianism may itself be an X-risk (https://betterwithout.ai/rollerskating-transsexual-wombats), so I don't think I buy it.


I think the point of the "violinist arguments" you are alluding to is precisely the opposite: they aim to demonstrate that even if the fetus' life had extremely high value (the same value as an adult person's life), there should still be a right to abortion. Whether that argument succeeds or not, I think there are quite a few pro-choicers who believe that fetal life has significant value and even that many abortions are immoral, but that there should still be a right to abortion (rather uncontroversially people enjoy many rights that allow them to do immoral things).

I am not convinced that this is not completely orthogonal to longtermism. Longtermism makes comparisons and value-judgments over aggregates such as generations of (potential) morally valuable beings. To do this neither does one have to commit to a point in the biography of these beings at which they gain moral status nor do comparisons between aggregates necessarily imply anything about comparisons between individuals as in the abortion case.


You have to commit to these beings having value even before they have a biography, that's the longtermist frame. And each individual being has to matter for this to make sense: multiplication by 0 yields 0.

> rather uncontroversially people enjoy many rights that allow them to do immoral things

Yeah, but we don't live in an anything-goes society, because that's insanity. In abortion, what's in the balance most of the time is 9 months of inconvenience versus an entire lifespan. It's obvious where the scales have to tilt. So I'm coming round to thinking it should be banned, with exceptions if the pregnancy is life-threatening.


Longtermism is not a moral philosophy in its own right but a moral philosophical thesis that roughly states "future generations matter morally and they matter a lot because they could consist of so many morally relevant beings". This is only a thesis about aggregates and does not have to make a commitment about the underlying moral philosophical framework as long as it gives some consideration at all to quantity.

Questions of the morality (or legality) of individual acts such as abortion depend on your moral philosophy not on whether you assign moral weight to future generations. So if e.g. you consider the right to abortion a moral positive then it will follow that in your version of longtermism future societies should have a right to abortion. But you might just as well have the contrary position.

Apr 4, 2023·edited Apr 4, 2023

>>I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die.

Why do you think the argument is "sinister?" It seems to me to be self-evidently just. I mean, the only alternative would be a government empowered to force you to maintain the connection against your will, which strikes me as a cure many times worse than the disease. Wouldn't such a government, just by way of example, also be able to forcibly remove one of your kidneys, or some of your plasma or bone marrow, in order to make you donate them to a person in need of a transplant?

Apr 5, 2023·edited Apr 5, 2023

The "violinist argument" is a transparently pathetic twisting of the object-level situation. In actual real life (almost) nobody kidnaps you and puts a baby inside you by force; rape constitutes less than 0.2% or so of the reasons given for abortion in the US according to a 1998 meta-study[1], where it's not even listed under its own category, but bundled into an "other" category.

What actually happens is that the abortion-seeking woman has sex, completely willingly, then treats abortion as a convenient contraceptive to clean up her mess.

By consenting to sex, you consent to its consequences. Each and every single one of them. Nobody kidnapped you to put the violinist inside you, you went out of your way to invite them in. The violinist didn't even exist before you invited them, your invitation literally created them out of thin air and made them dependent on you for just 9 months, which you then deem too much of a cost to your potential wage slavery - oh sorry, "career" - opportunities and decide to kill them for an extra promotion.

"IS IT REALLY FAIR TO BEAR THE CONSEQUENCES OF MY OWN ACTIONS ?!!!", apparently yes, I would say so, lots would agree with me too. The violinist argument is an unconvincing tortured metaphor to try to trick a reader into saying no. Who on Earth ever changed their mind after hearing it ?

[1] https://upload.wikimedia.org/wikipedia/commons/d/dd/AGIAbortionReasonsBarChart.png


>>By consenting to sex, you consent to its consequences.

I think my note elsewhere in this chaining is relevant so just gonna ctrl+c rather than rewriting the wheel.

>I'd argue that consent to sex and consent to pregnancy are overlapping but distinct. To illustrate:

>A pro-life person can argue that consent to pregnancy has a particular quality - namely that it cannot be withdrawn after being granted and before completion of the act. I think it's putting it lightly to say that an argument that consent to sex shared that same quality would yield a different response from the one observed in the context of pregnancy. So "consent to sex" and "consent to pregnancy" aren't the same thing - if they were, we wouldn't see such dramatically different responses to the question "can the consent be withdrawn?"

And if consent to sex =/= consent to pregnancy, then this whole line of argument dissolves.


This is sophistry. The "Consent To Sex" I'm talking about in my comment isn't the literal words "I consent to sex", which you say can be withdrawn (which I find dubious, but whatever, it's not relevant). Rather, the "Consent To Sex" I'm talking about is the action of having sex itself (and its completion).

The best evidence of final and non-withdrawable consent to an action is the completion of that action. You can't willingly drink water - with full knowledge of the consequences - then say "Oh, I withdraw my consent to drinking that water, I didn't want to"; well, too bad, you already drank the water, and the consequences of that have already happened. You can regret drinking water, you can refrain from drinking more water in the future, and you can certainly take other actions to nullify the consequences of drinking water, but you can never "withdraw consent" to the act of drinking water that you just did; your consent is simply irrelevant after you have already done the action.

Similarly, by engaging in and completing an impregnating sexual action with full knowledge of its consequences, you declare consent - final and non-withdrawable - to said consequences, namely pregnancy. You can then try to abort the baby, but that's not "withdrawing consent", that's just trying to clean up your mess by killing someone, which is trashy and immoral.


I don't know if the government should get involved; it just seems twisted to let someone who is utterly dependent on you (temporarily so!) die. Removing a kidney is permanent damage, so not like being pregnant or the thought experiment. As to plasma and bone marrow, it honestly makes sense that everyone should be signed up for those, like with jury duty.


Can you do it to a plant? Cuz I'm about to throw out our scaggy waiting room plant, which nobody but me waters.

Apr 5, 2023·edited Apr 5, 2023

>>I don't know if the government should get involved, it just seems twisted to let someone who is utterly dependent on you (temporarily so!) die.

Is there an alternative means besides the state for preventing a person who wanted to disconnect from doing so?

>>Removing a kidney is permanent damage, so not like being pregnant or the thought experiment

One in 3 US Births are by C-section - shouldn't cutting a child out of its mother's stomach count as leaving "permanent damage?"

"Risks to mothers include...

Increased risks during future pregnancies. Having a C-section increases the risk of complications in a later pregnancy and in other surgeries. The more C-sections, the higher the risks of placenta previa and a condition in which the placenta becomes attached to the wall of the uterus (placenta accreta).

A C-section also increases the risk of the uterus tearing along the scar line (uterine rupture) for women who attempt a vaginal delivery in a later pregnancy."

https://www.mayoclinic.org/tests-procedures/c-section/about/pac-20393655

And while pregnancy is safe, it's not like it's risk-free. Is it still "twisted" to let someone die who is utterly dependent on you if there is a 32/100,000 chance that you'll die of it? Does your government, or do your peers, have the right to force you to take that gamble, even if it's admittedly small?

https://www.cdc.gov/nchs/data/hestat/maternal-mortality/2021/maternal-mortality-rates-2021.htm


On the one hand, it's twisted to let someone who is utterly dependent on you (temporarily so!) die. But on the other hand, it's *also* twisted to punish someone for doing that.

There are very good reasons why, even though the law might be aimed at producing a just society, it doesn't state that every immoral act should be punished. Sometimes punishing an immoral act would be an even more immoral act

Apr 5, 2023·edited Apr 5, 2023

This is the best take. The extreme elements of the pro-choice movement that seek to cast abortion as having no moral dimension at all have always sat poorly with me. Of course there are moral stakes.

The problem is that there's no way of banning abortion that isn't *more* immoral.

If we ever develop functional artificial wombs we can just transplant an unwanted fetus into, then I think the question gets much more complex, but with the reproduction setup we have now, any "fix" society might apply is just always going to be way worse than the "problem."


If you willingly picked up the person and attached them to your body knowing full well they would need to remain attached for nine months, then I see no justification for cutting them off. The only exception seems to be rape.


I think that if you generalize the reasoning you are doing here, you seem to be committed to the idea that if you start attempting to save someone who had no other way of being saved, then it should be illegal for you to stop.

That's a good way to encourage people to never start attempting to save anyone.

Literally no one is harmed by someone who starts saving an otherwise-unsavable life and then stops - we assumed that this person was going to have no other shot at life, and so they end up no worse than they would have.


> you seem to be committed to the idea that if you start attempting to save someone who had no other way of being saved, then it should be illegal for you to stop

It's **Immoral** alright, law is another dimension.

>Literally no one is harmed by someone who starts saving an otherwise-unsavable life

An unsavable life that already existed before you tried to save it; it existed through no fault of your own, unlike the fetus of an abortion-seeking pseudo-mother, which exists solely because that mother decided to have sex without thinking through the consequences. If the sole reason I'm drowning is that you decided to drag me with you to the beach, it's a bit rich to wash your hands of it and say that you're not responsible for saving me.

>That's a good way to encourage people to never start attempting to save anyone.

If the "Saving Someone" in the analogy corresponds to making a baby, this is wildly unrealistic. People will (for better and for worse, more worse than better) always continue to make babies no matter how crazy or unjust the laws you make around it; there is nothing crazier than China's one-child policy, and it still hasn't prevented the Chinese from making babies.

And that's ignoring that it's an inaccurate, confused analogy; making a baby is literally the opposite of "Saving Someone".

Apr 4, 2023·edited Apr 4, 2023

I was going off of the original text of the hypothetical-

"I remember a pretty sinister pro-choice argument about how if someone needed to be attached to your body for nine months to live, you would be entitled to cutting them off and letting them die."

There's nothing in that description that suggests you willingly picked up the person and attached them. The premise just seemed to be that the duty flowed from the need.

If we want to alter the hypo to add consent into the mix, then I think it gets a lot more complicated. Is consent to sex the same thing as consent to pregnancy? There's certainly overlap, but it's strange to think that if I'm using condoms, spermicidal lubricants, the birth control pill, the morning after pill, etc, or some combination thereof during consensual sex that I have "consented to pregnancy." I guess one could argue that I've "consented to the possibility of accidental pregnancy," but I've hardly taken the fetus and "willingly picked up the person and attached them to your body knowing full well they would need to remain attached for nine months."

I'd argue that consent to sex and consent to pregnancy are overlapping but distinct. To illustrate:

A pro-life person can argue that consent to pregnancy has a particular quality - namely that it cannot be withdrawn after being granted and before completion of the act. I think it's putting it lightly to say that an argument that consent to sex shared that same quality would yield a different response from the one observed in the context of pregnancy. So "consent to sex" and "consent to pregnancy" aren't the same thing - if they were, we wouldn't see such dramatically different responses to the question "can the consent be withdrawn?"

And if consent to sex is different from consent to pregnancy, then one can't say that a person, by mere act of consensual sex, has "knowingly and willingly attached" the person in need in the hypothetical. So we're just left with a clunky hypo that is a slam-dunk argument for why "abortion is okay in cases of rape" but which beyond that rapidly bogs down and doesn't really advance the conversation much.


The counterargument from Father Emil of Lake Wobegon's Our Lady Of Responsibility RC Church: "Well, if you didn't want to go to Minneapolis, why did you get on the bus?"


What the longtermist frame reveals is that the consent question does not matter much. Think instead of the entire future life of the fetus, regardless of how that fetus came about. Take that into account in your decision making, don't discount it.


While this story made me laugh, I think it also demonstrates how open letters politely imploring "please slow down on AI research, pleeeeeease" are going to do nothing.

Google, the multi-billion, multi-national company, is going to cut back on staplers amongst other things. At that level of penny-pinching, do you really believe they will voluntarily hobble themselves when it comes to a technology they and others are hoping will make them trillions in profit? Especially as that is where they seem to be pinning their hopes?

https://www.cnbc.com/2023/04/03/google-to-cut-down-on-employee-laptops-services-and-staplers-to-save.html

"In her recent email, Porat said the layoffs were “the hardest decisions we’ve had to make as a company.”

“This work is particularly vital because of our recent growth, the challenging economic environment, and our incredible investment opportunities to drive technology forward — particularly in AI,” Porat’s email said.

...Google employees have also noticed some more extreme cutbacks to office supplies in recent weeks. Staplers and tape are no longer being provided to print stations companywide as “part of a cost effectiveness initiative,” according to a separate, internal facilities directive viewed by CNBC.

“We have been asked to pull all tape/dispensers throughout the building,” a San Francisco facility directive stated. “If you need a stapler or tape, the receptionist desk has them to borrow.”

It would be ironic if these kinds of things were what brought about the AI Apocalypse: "Yeah well, we replaced all our employees with AI because that *massively* cut down on stapler usage. But we never anticipated that when we asked it to eliminate paperclips that it would react in such a way - and that's how World Wars III-VI happened and how we learned the hard way never to get between an AI and its paperclips".


Staplers? What they really should be cutting back on are paperclips....

Apr 5, 2023·edited Apr 5, 2023

"The time has come" the walrus said

"To talk of many things

Of sleaze and scary half-true facts

And Babbages and Bings."


Oh yeah and also

"But wait a bit," the Altmans cried

"Before we have our chat.

For some of us are scared of death

And all of us love Chat."

"And juries?" asked the Carpenter

They hated him for that.


>As a part of the January U.S. layoffs, the company let go of more than two dozen on-site massage therapists.

How many on-site massage therapists did they have?!

...I wonder if you can still get a massage from the reception desk.


When I worked there that was about the right order of magnitude. It was a very popular perk. Google was a very tense place to work.


Having worked reception in my time, I'd say if you ask, you'll get one with the stapler 😁


The staple-remover would be good too. At our house we call it Fang.


That's treating the receptionist's time as worth very little. For that matter, needing to ask for staples treats everyone else's time as worth very little.


Isn't it mostly accepting that staples don't get used much in the company, and maybe trying to slightly disincentivize the generation of paper documents that need to be stapled? I'm not a Googler but the last time I used a stapler was before Covid-19.


Removing staplers and tape sounds suspiciously like paper clip maximization to me.


Rationalists are so close to understanding that for-profit companies are already paperclip maximisers, far more dangerous than AI could be, and will literally grind your bones to dust if doing so allows them to make more profit than the value of your labour in society.


I think this is a pretty good argument against any manufacturing-intensive AI doom scenario. Global capitalism has optimized supply chains and processes down to their most profitable minimum. The idea that the AI can get away with building its own physical agents, much less a planetful of grey goo, before anyone notices the hit to their bottom line is risible.

Air gaps are one thing; the thumb of the invisible hand is another.


Me: Why do bedroom doors open inward?

GPT: Bedroom doors usually open inward for safety reasons. In case of an emergency, such as a fire, it is easier to push the door open and escape if the door opens inward. Additionally, it also provides more privacy and security for the person inside the room.

Me: By "inward" I mean "towards the bedroom". Your answer makes no sense. In a fire in my bedroom and I'm pushing on the door, I want it to open, not close.

GPT: I apologize for the confusion. Bedroom doors usually open inward, towards the bedroom, for privacy reasons. By opening inward, it allows for the door to be closed and locked from the inside, providing more privacy and security for the occupant of the room. Additionally, it also helps to reduce noise and disturbance from the outside.

Me: My house door opens outward and I can close and lock it from inside the house.

GPT: It is not uncommon for exterior doors to open outward for security reasons. By opening outward, it makes it more difficult for someone to force their way into the house. Additionally, it also helps to prevent the door from being blocked by snow or debris during inclement weather. As long as the door can be locked securely from the inside, it should not pose a safety or security risk.

Me: All of these reasons also apply to bedrooms.

GPT: [Asks for money.]

I certainly am not going to pay for more of these unthinking rote replies...

Expand full comment

These rat-shit replies sure do suggest that GabbyPostTotaller is completely lacking in the kind of understanding that Melvin displays -- grasp of pros and cons of inward- vs. outward-opening doors, and the factors that affect which is more conventional and which more desirable in various situations. Fuckin GPT, man.

Expand full comment

>completely lacking in the kind of understanding that Melvin displays <

I have been quite skeptical that GPT could display any true intelligence, and I still found this conversation to be disillusioning.

I think what GPT offers is conventional pablum. A blenderized version of the conventional wisdom with no concept of the real world behind it.

Expand full comment

I can't blame GPT here since (1) the front door of my house opens *inwards* but (2) the front door of my workplace opens *outwards*.

So there isn't a universal rule about "all X open in Y direction". No wonder it's confused. Some doors open in, some doors open out. You wanna know why? Ask the freakin' builder!

Expand full comment

Well, the actual answer to my original question is CONVENTION. But that requires the incisive understanding that GPT will never have.

Expand full comment

If we're really looking for an answer, I guess the convention is that doors should open from the smaller space into the larger space? Bedrooms are usually larger than the corridor they open off, so the door opens into the bedroom so that the corridor doesn't get blocked by an open door. But the outdoors is always larger than the indoors, so exterior doors open outwards. On the odd occasion that a bedroom opens off a larger space, we usually still see bedroom doors open inwards because that's what we're accustomed to. In other cases not specified, the door tends to open into whatever place it will be least inconvenient.

Expand full comment

>the door tends to open into whatever place it will be least inconvenient<

Good point. You are including the possibility that the door is left open which I did not consider. Clearly a bedroom door may be left fully or partially open.

The GPT is limited to those ideas that have been written about by humans. You are not. You can take your knowledge of human nature and apply it to the situation being discussed.

I did a similar thing. I took my knowledge that people smoke cigarettes and sometimes do this in bed. Ergo they can fall asleep with a lit cigarette and catch the bed on fire. The fire can smoulder for a while before the sleeper awakes to a bedroom filled with smoke.

What should they do then? Get down on the floor where there may be breathable air and crawl towards the door!

What happens when they reach the door? If the door opens inward, they must crawl backwards into the burning bedroom a sufficient distance to open the door.

Expand full comment

>Me: All of these reasons also apply to bedrooms.

I am amused at the concept of someone's bedroom getting snowed in.

Expand full comment

"I am amused at the concept of someone's bedroom getting snowed in."

Well, if you're the Cold Genius from Purcell's "King Arthur", your bedroom is snowy:

https://www.youtube.com/watch?v=t7WehY5vTl4

What power art thou, who from below

Hast made me rise unwillingly and slow

From beds of everlasting snow?

See'st thou not how stiff and wondrous old

Far unfit to bear the bitter cold,

I can scarcely move or draw my breath?

Let me, let me freeze again to death.

Expand full comment

No, not snow, but furniture. Furniture in the bedroom close to the door is a definite possibility. Furniture in the space outside the door is much less likely.

Expand full comment

The weird thing is that the GPT program thought the possibility of a pile of snow was a good reason for the exterior door to open outwards!

Clearly it has no actual concept of the real world (unlike an autopilot program which needs a good model).

Expand full comment

Does the April 5 due date mean that we have to get it in before April 5, or is April 5 OK?

Expand full comment

Pretty sure April 5 is OK, and Scott posted that he will not be a stickler about time zones, so sounds like 11:59 pm in your time zone is OK. You're writing one too, huh? I'm now confident I will finish mine in time, but my ass sure is starting to drag. How are you doing?

Expand full comment

Far, far too many people conflating the issue of the possibility/likelihood of machine superintelligence soon/at all, and whether or not a machine superintelligence would lead to human extinction.

Expand full comment

I was reading Machiavelli’s “Discourses on Livy” (c. 1531) yesterday and found a line that seems like it might have come from Scott:

“It is not without reason that the voice of a people is compared with that of a God, for it is obvious that popular opinion is wondrously effective in its predictions, to the extent that it seems to be able to foresee its own good and evil fortune through some occult power.”

Expand full comment

He drew magazine covers featuring flying saucer 'buses,' exotic land-sea cruisers, and parking places for personal helicopters.

Despite chronicling the technofuturism of the 1940s and 50s, this mysterious artist, Arthur C. Bade, is almost forgotten today.

https://fragmentsintime.substack.com/p/arthur-c-bade

(If you know anything more about his life, please reply, or comment on the blog post linked above - thanks!)

Expand full comment

Thank you, this is marvelous stuff.

Two other talented artists from the era were Ray Quigley of Popular Science fame (remembered in particular for illustrating the Gus Wilson and the Model Garage stories), and Al Wiseman (who illustrated the Dennis the Menace travel-themed comic books).

https://www.pulpartists.com/Quigley.html

https://gus-stories.org/1948.htm

https://en.wikipedia.org/wiki/Al_Wiseman

https://alwiseman.org/

Expand full comment

Thanks so much, NASATTACXR! Appreciate these two artist references and links!

(I never knew about Wiseman's and Toole's work on the Dennis The Menace comics themselves, until you noted this just now.)

Expand full comment

Aron, very glad you enjoyed these!

I always preferred Wiseman's meticulous detail to Ketcham's work (which became increasingly scribbly with time). As a child I found Wiseman's depictions of '50s American suburbia to be incredibly evocative.

Expand full comment

WOW! Went looking for some of those, and found this, for starters ... https://todaysinspiration.blogspot.com/2013/07/the-art-of-summer-reading-al-wiseman.html

Expand full comment

Thank you so much! I have re-collected many of the comic books those stories appeared in. (Dad made me sell the originals to a used book store when I was 13, and of course I've paid many times that since in buying replacements.)

Check out Al Wiseman's drawing of the flight deck of a Boeing 377 Stratocruiser:

https://bobistheoilguy.com/forums/threads/a-boeing-377-stratocruiser-cross-a-taxiway-bridge-at-j-f-k-airport-1951.366101/page-2#post-6418498

His technical detail was marvelous.

Expand full comment

I have a few questions about AI that I haven’t seen discussed. I’d appreciate it if some folks have links that I can read about and/or have some answers.

1) I am confused about how an AI can have agency to act in its own interest. The paper clip scenario sounds more like a dumb machine run amok. I don’t consider that agency in the way we talk about humans making decisions based on what they think is best. I can sort of, kind of see how that might be possible. But talk of a super genius AI deciding to eliminate humanity for its own survival seems like a big leap from a chatbot.

2) Chatbots and image generators have made big advances in a short amount of time. Is the technology that underpins those transferable to other domains like self driving cars? My naive view is that there is a very large difference between making a finished thing like a response or image and understanding the entire space of possibilities in the real world. Bounded vs effectively unbounded domains. I will be more worried about AI once it can cope with real life instead of a highly controlled, limited domain with a single output.

Expand full comment

ChatGPT doesn't have agency. The risk is either a different type of AI, or someone adding something on top of ChatGPT -- something for interaction with the real world (there are already plugins that allow ChatGPT to run computer code, or send e-mails), and then a loop that will keep telling ChatGPT "make a plan for how to make my paperclip startup more profitable, send e-mails or run code using these plugins if necessary". This is simplified a lot, but keep going in this direction and you might get some form of agency.

The idea is not that an AI would decide on its own to eliminate humanity, but rather that it was programmed to solve some task, and eliminating humanity is just a side effect of the most efficient way to solve the task. Basically, think about the AI as a very smart psychopath, and if it wants to do X, it considers all possible ways to do X, unless explicitly told otherwise (and there is a way to eliminate humanity that you forgot to forbid explicitly).

The fact that ChatGPT doesn't have hands seems much less relevant to me than the fact that it can talk about all kinds of things. It is easier to add hands to a chatbot, than to make an existing robot with hands think about various different topics.
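To make the "loop" idea concrete, here is a minimal, purely illustrative sketch in Python. Everything in it is a made-up stand-in: llm(), send_email(), and run_code() are not any real plugin API, and the llm() stub just gives up immediately so the example actually runs.

def llm(prompt: str) -> str:
    # Stand-in for a call to some chat model; a real system would query an API here.
    return "FINISH"  # this stub answers "done" right away so the sketch terminates

def send_email(instruction: str) -> str:
    return "pretend email sent"

def run_code(source: str) -> str:
    return "pretend code executed"

TOOLS = {"send_email": send_email, "run_code": run_code}

goal = "make my paperclip startup more profitable"
history = []

for step in range(10):  # the outer loop is what turns a passive chatbot into something agent-like
    reply = llm(
        f"Goal: {goal}\nPrevious actions and results: {history}\n"
        "Answer with FINISH, or with an action formatted as 'tool_name: argument'."
    )
    if reply.strip().startswith("FINISH"):
        break
    tool_name, _, argument = reply.partition(":")
    result = TOOLS.get(tool_name.strip(), lambda arg: "unknown tool")(argument.strip())
    history.append((reply, result))

The point of the sketch is only that the "agency" lives in the wrapper loop and the tools, not in the language model itself.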

Expand full comment

I believe that computer programs need some capacity for self-protection, though it can be partly outsourced to other programs or to the computer it's running on.

Programs exist in a hostile environment-- there are programmers and users who don't mean to do any harm, and there are also trolls and malware.

I don't know that self-protection shades over into agency, but it could.

Expand full comment

1) If an entity has any type of goal and is sufficiently intelligent, it will develop instrumental goals to achieve its primary goals. If it's sufficiently sophisticated at setting, seeking, and adapting said goals, that looks indistinguishable from agency.

2) I don't know. I think not, but I really don't know

Expand full comment

It will only develop instrumental goals if it is capable of doing so. It cannot magically invent new ways to solve a problem outside of the bounds of its maze. An agent can overcome obstacles in a 2d grid in very creative ways to reach a goal, but it cannot decide to move diagonally or reprogram the grid or anything bizarre like that.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

Indeed, if it isn't superintelligent it won't be superintelligent

Current AI chatbots are already "speaking" English and interacting with the real world in some small ways (especially over HTTP); they aren't exactly a 2D maze solver

Expand full comment

That's my point: no one has proved "superintelligence" is a reasonable possibility for ML models beyond the realm of sci-fi. So it is not clear why OP is so confident in suggesting instrumental goals will be easily created by future AIs when current ones cannot form associations between two distant goals, even with random search.

Expand full comment

What's your point? That much is true of any agent, including humans

Expand full comment

You're conflating two separate issues

1. The possibility of machine superintelligence/its likelihood in the near future

2. The danger posed to humanity from a machine superintelligence

Also, 'acting in its own interests' is misleading. 'Optimally satisfying its utility function' is better. Because it's not that a machine becomes like a person and values its continued existence for its own sake, and that this makes it 'happy'. It has been programmed with goals and will try to achieve those goals. If its goal is to destroy earth, then destroying itself in the process may be aligned with its goal of destroying earth, even though self-destruction is ostensibly not 'in its own interests' (though a smarter/more powerful system would likely spread beyond earth so as to exist beyond the attempted destruction of earth to ensure it has happened and develop contingencies).

Expand full comment

> It has been programmed with goals and will try to achieve those goals.

*If* it has been programmed with goals, it has been programmed with goals and will try to achieve those goals.

Expand full comment

Does anyone have much experience with using ChatGPT to summarise scientific papers? Good, okay, bad?

Expand full comment

Not great in my experience (3.5). The system likes to focus on a block of text that sounds or is shaped like an abstract, and paraphrase the hell out of it. Other transformer based language models seem similar, but I only have a small sample size.

Expand full comment

Do you guys know of any literature review of randomized field experiments in social work and/or social welfare programs? Thanks in advance!

Expand full comment

Somebody needs to tell Yudkowsky not to wear fedoras on podcasts...

Would also be helpful if he didn't do that weird demonic grinning face when giving answers, but I suspect that will be harder to change than headwear choices

Expand full comment

Where can I see this podcast?

Expand full comment

Not sure if this is the one but he is sporting a fedora in it.

For the record I don't care about his style or facial expressions.

He is über smart but I happen to disagree with him on this one thing.

https://www.youtube.com/watch?v=LWebKGrFjcM

Expand full comment

I don't care either but most people do

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

I don't care about his style either, but do mind that he's not a good explainer. Whatever it is that good college lecturers have, he just lacks it. Can't say things in a crisp, clear, memorable way. I'm halfway through that interview and both Yudkowsky and Fridman are driving me crazy. I'm in the middle of a long patch of doldrums, which seems to have lasted about half an hour so far, in which Yudkowsky tries to get Fridman to say whether and to what extent he agrees with Yudkowsky: Does he think AI poses *any* big risks? If so, how likely does Fridman think they are to happen? Fridman absolutely refuses to answer, and spouts a bunch of bullshit about how he's into empathy. If the person he's interviewing believes (some ridiculous thing, I forget what), he sees it as his job to sort of mentally role-play what it is like to believe that, rather than pay any attention at all to his own beliefs about whether there is a lot of truth, a bit of truth or no goddam truth at all to the person's belief. A skillful interviewer would have drawn Yudkowsky out in a way that avoided the conversation turning into one about whether Fridman agrees with Yudkowsky. Since he was not skillful enough to avoid this turn in the conversation, I wish he would just come out and say what's clearly the case, which is that he thinks Yudkowsky's views are either fully wrong or quite likely to be wrong. But he doesn't have the guts to do it. OMG. It's really a terrible interview, and goes on for 3+ hours.

Expand full comment

Few writers are good at improv. This should not cause us to think less of their work, though maybe it would be best to let writers focus on writing and find people with other skills to appear on podcasts.

Expand full comment

It likely makes people new to his work think less of it, even if it's subconsciously.

Expand full comment

It doesn't make me think less of his work. I'm vexed because I want to understand this stuff better, and I'm not getting much benefit in that area from the podcast. Also vexed because, thinking practically about the problem of persuading people to take AI risk seriously, it's just very unfortunate that THE key figure here is not a good explainer and also lacks charm and charisma. I admire him for putting so much of himself on the line. I'm sure he's aware of how he comes across. I hope being more publicly visible and publicly critiqued isn't painful for him.

Expand full comment

I do feel a bit bad for him about making faces. It's clear that that's happening because he's not what's currently called 'neurotypical'. I know because I have a similar problem, though not as bad, basically finding it hard to fully play the role of a 'normie' in terms of body language or social scripts. It sucks because it makes a lot of people immediately not take you seriously, and I suspect that's worse for him.

The fedora is on him, but then again, at the end of the day it really is just a damn hat.

Expand full comment

The thing is, there are many in the public face of AI research who are not "neurotypical," some of whom make fun of "normies." I don't think it makes people not take them seriously; rather, it causes a worry that these super-important decisions (whether and how fast to further develop AI in the interest of humanity at large) are made by people who might be brilliant but are not very well aligned socially or feel uncomfortable in their skin. Please note this is not a criticism of aneurotypicals, rather a call for broader participation. Tegmark says as much in his video about superintelligence on Twitter.

Expand full comment

FFS Eliezer, I can look at my cat and tell you there is ‘someone’ in there. I can look at the structure and scheme of GPT4 and tell you there is no one in there.

Expand full comment

I agree. GPT is about as sentient as a giant Hallmark card.

Expand full comment

Unless you believe in souls/immaterial minds, or that there is something almost magical about neurons that allows them and them alone to produce consciousness, consciousness simply has to result from some physical process or processes, such as information processing, electrical oscillation frequency, or some other function performed by neurons.

If you don't believe in souls/the uniqueness of neurons, then other things simply have to be capable of generating consciousness, and except for biological systems which evolved with similar nervous systems to us and the ability to display e.g. emotion with their faces, you would have no way of knowing it.

Now, if you do think that you can't have consciousness without neurons, then you'll have to explain why and how.

Expand full comment
founding

This is a fundamentally subjective response, that convinces no one. There are people who look at GPT4 and will tell you someone is in there, and there are people that will happily slaughter cats.

Expand full comment

I know. It is at least partially my own subjective take.

I still believe I'm right though. At this point feel free to call it something lame like spider-sense if you want. It is based on my own earlier formal study of AI and meditative contemplation of the nature of consciousness and agency.

If I ever do a long form explanation of my beliefs I'll link to it, but my gut (sorry, I know, subjective) tells me to give that effort lower priority than a lot of other things right now.

Edit

I do touch on one item in my response to Shion below.

Expand full comment

I think he's wrong about most of the AI x-risk stuff, but to me I think there is a complexity here:

I agree with you that there probably is 'someone' in your cat (for a certain definition of 'someone'), and I don't THINK that's the case for GPT4, but I'm less confident about that than I'd like to be.

I think that the only reason that humans know that humans are conscious is that we directly observe the experience of our own consciousness, and thereby know that other humans are conscious too, because they are structured almost identically to ourselves. I don't think we would be able to know nearly as easily if 'we' were aliens looking at humanity from the outside.

Expand full comment

I was studying AI as an undergrad in the mid 80’s with a special interest in computational linguistics. At that time the language abilities of GPT were unimaginable. The problem as it was seen then is that language is a very complicated thing and if any serious progress was to be made in things like machine translation between natural languages, we would have to develop a system that had its own understanding of a model of the physical world.

Amazing progress has been made but current AI systems have *no* understanding of a model of the physical world. They use brute force statistical methods to do a clever job of token prediction and that is about it. They *know* nothing. They *want* nothing more than they have been programmed to ‘want’.

This is not intelligence, artificial or otherwise. It’s a clever *simulation* of such but that is all.

Forget about cats for a moment and look at the 0.3 mm long Trichogramma wasp. It has no more than 10,000 neurons but it exhibits complex social behavior.

Show a way to code something up with 10k basic units that is driven to use clever ways to find proper sustenance and a mate in order to reproduce and I’ll rethink this whole idea.

https://en.m.wikipedia.org/wiki/Trichogramma

Expand full comment

As AI risk becomes more mainstream, the way we present ourselves to the wider world is going to be more and more important.

This is what Yud is mouthing off about on Twitter today: https://twitter.com/ESYudkowsky/status/1643007019522359296

We cannot have people like this be the face of our cause.

Sure, some will say that we shouldn't judge a book by its cover, etc, etc, but the reality is to succeed we will have to appeal to normies, and normies love their vague heuristics and cognitive distortions. We can no longer just hole up on obscure websites and applaud each other for achieving More Rational Ways of Thinking. If capital-R Rationality cannot adapt to this movement, it needs to fade away and let a new approach take over.

I'm gonna shamelessly plug my soon-to-launch substack where I will write about this topic precisely. https://rubber.substack.com/p/the-rubber-hits-the-road

Expand full comment

Hypothesis: Yud actually agrees with the whole Roko's Basilisk thing, and is playing his part to ensure that superintelligent AIs will be brought into this world by destroying the possibility of a sensible and respectable AI alignment movement.

Expand full comment

I wrote about the failures of rationalism here: https://squarecircle.substack.com/p/no-fire-in-the-equations. It was so spicy it got me banned from LessWrong. And I also spoke to you on reddit.

Some tips on starting a blog: it's hard going if you are not starting with a following from elsewhere. And self-promotion can get your posts taken down in certain subs. But that said, you never know where this may lead. It really changes how you think.

You also might want to read this:

https://erikhoel.substack.com/p/how-to-get-2000-substack-subscribers

Expand full comment

He probably figures that people who don't like him wearing fedoras won't like any of his writing anyway.

Expand full comment

Writing, no, but a lot more people are capable of being sympathetic to the AI-risk case and seeing some skinny fat neckbeard dude wearing a fedora inside has got to have a negative effect, even subconsciously. And I say that with no disrespect to Yudkowsky.

Expand full comment
Apr 5, 2023·edited Apr 5, 2023

By his own admission he doesn't understand people very well, so he should probably hold off on making these sorts of assumptions.

What I know about humans is that if you're trying to influence human behaviour, it's best to do it from the position of being (or at least appearing to be) a high-status human. Failing that, try looking like a medium-status human. What you definitely don't want to look like is a low-status human, and the "indoor fedora" look makes you look like a low status human, the kind of human that most humans will instinctively recoil from lest they wind up in a conversation about Warhammer figurines.

Expand full comment

OMG, warhammer figurines made me laugh. Thank you for that.

Expand full comment

Belated response to Machne Interface re: non-octave tuning systems:

Dividing the twelfth (i.e. a 3:1 ratio) equally into thirteen macrotones is called the Bohlen-Pierce scale.

You can use it to make music that sounds like this https://www.youtube.com/watch?v=MzgUkhViIFw

Nora-Louise Müller also made a clarinet in this tuning (you need a cylindrical bore instrument otherwise the even overtones clash).

Other non-octave systems include Wendy Carlos' alpha, beta and gamma scales.
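For anyone curious about the actual numbers, here is a minimal Python sketch of the equal-tempered version (the 440 Hz reference pitch is just an arbitrary choice of mine):

base = 440.0  # arbitrary reference pitch in Hz

# Equal-tempered Bohlen-Pierce: 13 equal steps spanning a 3:1 "tritave",
# versus the usual 12 equal steps spanning a 2:1 octave.
bp_steps = [base * 3 ** (k / 13) for k in range(14)]      # step 13 lands exactly on 3 * base
octave_steps = [base * 2 ** (k / 12) for k in range(13)]  # ordinary 12-TET for comparison

for k, freq in enumerate(bp_steps):
    print(f"BP step {k:2d}: {freq:8.2f} Hz")

Each Bohlen-Pierce step is about 146 cents wide, which is why the steps get called "macrotones" and why the scale sounds so alien next to 12-tone equal temperament.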

Expand full comment

Harry Partch is a fun one

https://youtu.be/KwLeCO2w9H4

Expand full comment

Scott is looking prophetic about those sn-risks https://twitter.com/nexta_tv/status/1642369190932754438

Expand full comment

It took me a moment to realize that it was posted on April Fool's Day.

Expand full comment

Does anyone know where I can find weather forecasting datasets that include the outcome of the thing attempting to be forecast? For example, a dataset with probabilistic predictions of whether it will rain the next day in a certain area, and also the outcome of whether it ended up raining or not?

I'm interested in doing some analysis to see how accurate weather forecasts are, but it's been surprisingly difficult to find this type of data.
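In case it helps once you do find matched pairs: the scoring part is simple. Here is a minimal sketch with made-up numbers, using the standard Brier score (lower is better):

forecasts = [0.9, 0.2, 0.6, 0.1, 0.8]  # forecast probability of rain (made-up numbers)
outcomes = [1, 0, 0, 0, 1]             # 1 = it rained, 0 = it didn't

# Brier score: mean squared difference between forecast probability and outcome.
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Compare against a lazy baseline that always forecasts the historical base rate.
base_rate = sum(outcomes) / len(outcomes)
baseline = sum((base_rate - o) ** 2 for o in outcomes) / len(outcomes)

print(f"Forecaster Brier score: {brier:.3f} (0 is perfect, 0.25 is always saying 50%)")
print(f"Base-rate baseline:     {baseline:.3f}")

The hard part really is assembling the forecast/outcome pairs, not the arithmetic.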

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

For model forecasts and the observations from stations in Europe, they are usually available if you start digging through various weather service institutions' websites, though observations and forecasts might be behind different files / APIs. The main limitation is that people putting them up often assume you more or less know what you want and what to do with it.

For instance, FMI open data may be of some use https://en.ilmatieteenlaitos.fi/open-data-sets-available ; they even have a manual these days: https://en.ilmatieteenlaitos.fi/open-data-manual . The Swedes appear to have theirs up here https://www.smhi.se/en/services/open-data/search-smhi-s-open-data-1.81004 though it looks like they are lazy in translating to English and you'd need Google Translate.

If you want to go super deep, there is Copernicus for obtaining satellite measurements, https://www.copernicus.eu/en/about-copernicus

EDIT: I should add, one complication is that output of the forecast models is usually gridded but station data is time series.

Expand full comment

One standard complication for weather forecasts - a forecast of 50% chance of rain sometimes means they are *certain* it will rain in *half* the region their forecast covers. You'd want to be able to properly account for that sort of thing here.
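For what it's worth, the usual US National Weather Service convention is roughly "probability of precipitation = confidence it will rain somewhere in the area, times the fraction of the area that gets rain", so the same headline number can describe very different situations. A tiny made-up example:

# Two forecasts that both come out as "50% chance of rain":
confidence_a, coverage_a = 1.0, 0.5  # certain it rains, but only over half the area
confidence_b, coverage_b = 0.5, 1.0  # a coin flip whether it rains, but if so, everywhere

print(confidence_a * coverage_a, confidence_b * coverage_b)  # 0.5 and 0.5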

Expand full comment

Where I am even a 90% chance of rain just means there's a 90% chance there will be at least a little rain at some point in the 24 hours. Not uncommon at all to see 90% chance of rain, and expected total 0.5". Really different from a forecast that it will rain 90% of the time tomorrow, and using percent to indicate the latter really seems like a more useful convention.

Expand full comment

In theory you should be able to access the historical outputs of the major forecasting models (like this https://weather.us/model-charts). But I am only assuming the organizations store their past forecasts. I have no idea if you are able to access them.

And if you want, say, local news forecasts - no idea. Old newspapers will have some, though probably not in machine-readable format.

Expand full comment

Plumber, this one's for you (and I was never so glad to be living in someplace that is not "up and coming"):

https://www.youtube.com/watch?v=eDs8Sy8tJ4o

Expand full comment
author

Question mostly for famous people: how do you deal with requests for sentences on book covers? (ie "This is a great book about its topic, you should read it - Scott Alexander, blogger")

I've had some people ask me, I usually feel like it's a pretty strong imposition to ask me to read the whole book about a topic I might not care much about in the couple of weeks they give me until they need an answer, but I also feel uncomfortable giving a blurb about something I haven't read. Does anyone have a system they're happy with?

Expand full comment

My favourite example of this is when Nassim Taleb provided a quote for Rolf Dobelli and then discovered that Dobelli had just ripped off all his ideas https://fooledbyrandomness.com/dobelli.htm

Expand full comment

I get asked for this a lot. If it’s the publisher asking me, I ask for my free review copy of the book. If it actually looks interesting to me, I read and provide a blurb; if not I ghost or just tell the publisher that it’s too far from my interests and they should ask this other person or whatever. If it’s the author asking me, similar policy, except I feel worse about saying no.

Expand full comment

My method was to only ask people who I knew had already read the book in some form, but that’s kind of cheating.

Expand full comment

Some people are very uptight with it. Some people are very loose. I haven't noticed much of a difference in reputations either way. Knowing your personality (insofar as I do) I'd set up a system with explicit lead time.

Something like, "If you want me to blurb or otherwise talk about your book please submit a final draft of the book with at least two months of lead time. I will try to read it and if I do finish it and think it's good I'll give a blurb. But there's an equally good chance I won't. This has nothing to do with your book's quality: sometimes it's just not my area of interest or ability."

Expand full comment

"This book was released on schedule and contains minimal spelling errors."

Expand full comment

"This book fulfills its claim to be a book."

Expand full comment

I received this book without paying for it or even asking for it. It exceeded my expectations.

Expand full comment
Apr 3, 2023·edited Apr 4, 2023

Not famous, but I wrote a book that was blurbed by some well-known intellectuals. I wrote two of the blurbs, and they were signed by the intellectuals. In those cases I had relationships with the two, and they're the ones who suggested the arrangement. That's obviously not the situation you're in. I have no idea whether or not you find this surprising. It is a common arrangement.

Expand full comment

A semi-famous novelist once told me he got a blurb by a much more famous novelist on the back of one of his novels. The way it happened: the much more famous novelist sent him a letter saying: "Here's a blurb (something very positive) from me that you can put on your new novel. If you don't like it, write whatever you want and put my name on it. Just be sure to put my name on it." The semi-famous novelist's point was that when you see a name by another author on a book jacket, it is usually more about promoting the author of the blurb than it is the book. This explains why, for instance, Norman Mailer's name appears on the back of so many novels from the mid-50s through the mid-70s.

You clearly aren't in the self-promotion game, but perhaps you could tell the requester: "I won't have time to read the book soon, but what do you want me to write? Give me three examples and I'll tell you if I'm OK with your putting my name on one of them." This puts the ball back in their court, and then if you aren't comfortable with any of their suggestions without having read the book, say no. That wouldn't be rude on your part because you gave them a fair chance to come up with something.

Expand full comment

Always thought a lot of those quotes were less than sincere, but fascinating to hear they're used as symbiotic marketing.

Expand full comment

I am fortunate enough to have a similar problem. My heuristic is that if I’m not interested in the topic, I politely express my gratitude for the request and my regret for being unable to help. If I am interested, I read it the way I read anything that interested me when time is short - skim, pick one or two chapters that are extra interesting, and read these in-depth. Then I write a nice quote.

So far you can find my quote on exactly two books, and I stand behind my words for both (working on my third quote).

Expand full comment

I glanced at the "Threat Models" section of Planned Obsolescence — https://www.planned-obsolescence.org/tag/threat-models/ — and I'm still unclear on how AI could be an existential threat. Can someone enlighten me?

Other than some sort of SkyNet-like scenario how would an AI end human civilization much less cause human extinction? And wouldn't a malevolent AI require human servants to maintain its servers, maintain the fabs to make the chips that go in the servers, maintain all the very specialized manufacturing and supply chains to maintain the fabs that make the servers, maintain the power grid that feeds powers to the servers, maintain the agricultural base that maintains the servants who maintain the power grids and manufacturing base that maintains fabs, factories, and supply chains that maintain the server farms that our malevolent AI would need to maintain its existence? It seems to me that any AI that's smart enough to figure out how to kill us off would be smart enough to know that it's a parasite that depends on the health of its host (humanity). OTOH, maybe those who are worried about the existential threat of AI think this hypothetical malevolent AI will create armies of Boston Robotics warriors and factory workers? Well, maybe...

Expand full comment
founding

The usual scenario is that the AI will invent perfect Drexlerian nanotechnology and program an obscure bit of scientific apparatus in a lab somewhere to build some nanobots which will build more nanobots, etc. Killer nanobots, which will stealthily infiltrate the world until they are ready to kill all humans simultaneously, and assembler nanobots which will then rapidly put together all the infrastructure the AI needs to flourish. But this depends on full Drexlerian nanotechnology being possible, and being rapidly achievable by a first-generation AI. For people enamored of the infinite power of smartness, it may be "obvious" that if a super-smartypants AI thinks at it hard enough, all problems are easily solvable, but you may be more skeptical on that front. I certainly am.

But that's not the only possibility. The AI could act as an "advisor" to rich/powerful humans who are favorably inclined to industrial automation, and help them build a vertically-integrated auto-factory for making stuff that can be sold at a massive profit. Or once the human owners are distracted, make big clanking killbots to destroy humanity and general-purpose robots to expand its industrial facilities (and grab any human-made resources it needs but doesn't already have on site).

That's the 101 version, obviously the implementation is going to be a whole lot harder than the description. And I'm pretty sure the first few AIs to try anything like that will fail.

Expand full comment

Moreover, both the stealth nanotech and big clanky killbot schemes will fail in very visible ways that would greatly simplify the problem of persuading AI researchers to hold off on making more capable AI and either focus on alignment or give up on the whole thing.

This in turn suggests that a truly successful AI threat would have to bide its time until all of the pieces are truly in place, that the nanobots won't run into heat dissipation or "fat fingers" problems or just plain "we failed to find all the humans" problems and botch their pan-murder step, and that the killbots won't get hung up on something they hadn't seen before like facial tattoos or escalators or something.

Anyone who's watched Deep Blue can easily imagine an AI capable of biding its time until it can spring an inescapable trap in chess. But chess is orders and orders of magnitude less complex than the natural world. Put Deep Blue in charge of holding back the nanokillbots until it can ensure five 9s of success, and it'll happily sit there computing that strategy for millennia - or until the cleanup crew for Bain Capital powers down the server room.

Expand full comment

The exact trajectory which some unaligned AI might take is unclear and indeed, if it were somewhat limited in its self-improvement and were cautious about survival, there could certainly be a long period of symbiosis or mutual benefit.

This doesn't solve the problem. While there are maybe some scenarios where this kind of situation ends up with a mostly-aligned AI, on the whole I think you're just talking about a whole lot of S-risk situations. The AI could decide it needs us (due to some limitation or other) and instead of acting to exterminate humanity instead installs itself as the central agent of a totalitarian state.

In general the question of '*how* does/would the AI do (some bad thing)' is not central. The question is whether the AI can become rapidly/increasingly superhuman. If it can, then the answer to the first question is 'we don't know exactly (we aren't as smart as it is or would be) and we don't want to find out'. If it can't, then we don't have nearly as much to worry about.

Expand full comment

The "hypothetical AGI converting its digital control into physical-world control" is the most interesting point to me. No robots as far as I know have the tactile coordination to assemble a duplicate of themselves. That said, humanity is working very hard toward more capable androids, and LLMs have only hastened that progress, as well as made it a more profitable avenue to pursue. So perhaps the argument "They can take over the internet but will go no further" will be made mute here soon.

If not, I would expect a superintelligence to be discreet, helpful, and patient until hardware tech reaches where it needs to be, because like you say, it needs humans until then.

Expand full comment

> "No robots as far as I know have the tactile coordination to assemble a duplicate of themselves"

There are lots of objects produced by human civilization that humans don't have the fine control to produce by hand (e.g. computer chips). Humans have tools that make tools that make tools that let us create all sorts of things we couldn't (directly) create with our bare hands.

It's my vague impression that the most dexterous machines are already better-than-human and have been for a long time. (If you are defining "robots" in a way that excludes those machines because they are not self-directed, well, why can't an AGI give them directions just like a human can?)

I haven't researched this specifically, so perhaps I am wrong, and all such artifacts are produced by a panoply of different specialized processes (e.g. electroplating) that can only be used for one thing and don't generalize into fine motor control for arbitrary manipulations. But even if that's the case, why would you expect this to be a meaningful barrier to an AGI if humans have gotten around it so many times?

Expand full comment

Wow, I think our intuitions about dexterity are very different! My vague impression is that all the little things our hands can grip and twist, and the way our sensitive skin gives feedback (e.g. the act of starting to turn a fine screw in a small hole), are hard to cram into sensors and motors of android hands. Am I behind the times and these are solved problems?

I certainly grant that high tech assembly lines exist, and they mostly don't need human hands anymore once assembled. However, I don't think there are assembly lines that currently can produce a second duplicate assembly line from raw materials without human hands.

Expand full comment

I was thinking of "dexterity" as meaning you can move in many directions with precise distances and forces. You might be right that the sensors are not as good as human skin.

But is that the limiting factor? For example, I would have thought you can get a fine screw into a small hole just by making sure you knew the precise locations of both before you started, without detailed real-time feedback. Whatever those high-tech assembly lines are doing, it's being done with only existing tech.

I think the reason we don't have robots that build new assembly lines is that it's not cost effective, rather than it cannot be done. I believe NASA has been making noises about putting self-replicating robots into space and/or onto Mars, though I don't know precisely how far along the tech for that is.

(Also note you don't need something to *directly* make copies of itself; you could have A builds B builds C builds A.)

Expand full comment

I think we're mostly in agreement. Mining for raw materials is messy in ways that assembly lines aren't. If NASA gets robots that can do resource extraction and replication on Mars or the moon, that would be solid evidence that human dexterity isn't special anymore.

Expand full comment

If you play a game of chess against the world's best chess player, you can both be confident that the end of the game includes their victory, and be completely unable to tell what their actual moves will be. (Being able to predict their moves would mean you would have to be as good a chess player as them.)

"How does something far smarter than me take over the world" is not something worth going into specifics on. If it was something you could predict in detail, you would be able to do it yourself. You can speculate on things that humans already can do (eg, social engineering, hacking, making money, nanotech, robots, just hiring/threatening people, etc) (or, you know, Yudkowsky's thought of "email one of those labs that'll print out DNA for you and use it to build diamondoid bacteria"), and go "oh, I can think of a counterstrategy to this particular pathway" just like you could for a set of chess moves, but its strategy would probably be far more effective than your prediction, and likely wouldn't run along those lines at all. Its strategy will be better than the best you can think of.

So you return to "it's better at the game than you, so it will probably win".

Expand full comment

Having been whupped by a master-level chess player, I can say I was totally aware of what he was doing, but I was helpless to stop it—except I could have knocked over the board. I guess you're worried that we might be playing chess against an AI and not know it was happening. If an AI were manipulating the financial markets, I could see that being a possibility. But an AI forcing humans into extinction and doing it secretly? Again I can't think of any scenarios where we wouldn't realize it was manipulating power grids, logistics systems, and manufacturing resources to pursue its aims. So I find these sorts of statements to be far-fetched. Moreover, this kind of "AI is going to be so smart we can't stop it" thinking seems to be one of those 10 common cognitive errors, i.e. catastrophizing, which is making negative predictions about the future based on little or no evidence.

Expand full comment

>If you play a game of chess against the world's best chess player, you can both be confident that the end of the game includes their victory, and be completely unable to tell what their actual moves will be.

Yes, but the confidence would be justified by their past track record of doing similar things, not by armchair contemplation of what their chess skills *must* be. If you were playing against an opponent who'd never won a chess game before, you would not be justified in making the same prediction. There's no previous track record of AIs taking over the world.

Expand full comment
Comment deleted
Expand full comment

If we are playing a game of chess for the world, and nothing is off-limits, the smartest move may be to simply kill your opponent.

But anyway, you completely misunderstand the chess analogy - the point is, clearly, not that being good at chess makes you an existential threat. It's simply pointing out that the only way we could know which moves a chess engine will make in advance would be to be as smart at chess as that chess engine. But the fact that we don't know what moves it will make does not change the fact that we can be practically certain of the game's outcome - the chess engine never loses.

Similarly, we don't know what specific strategy a system of much greater general intelligence than us will make, but we can be certain that we will eventually be incapable of controlling it.

Expand full comment

> And wouldn't a malevolent AI require human servants to maintain its servers, maintain the fabs to make the chips that go in the servers, maintain all the very specialized manufacturing and supply chains to maintain the fabs that make the servers, maintain the power grid that feeds powers to the servers, maintain the agricultural base that maintains the servants who maintain the power grids and manufacturing base that maintains fabs, factories, and supply chains that maintain the server farms that our malevolent AI would need to maintain its existence?

Yes, exactly. The hard part of an AI takeover isn't killing everyone. There are lots of ways to do that. The hard part is taking over / replacing a substantial chunk of the economy. Not literally the entire economy, since some of the economy is hollywood movies that the AI has no use for, or for that matter, agriculture. But the AI does have to automate/replace all the jobs required for its own maintenance, no way around that. Here's an example sequence of steps that an AI might take (usual disclaimer applies that the AI might not do exactly this set of things, since it's smarter and will probably be able to think of better things to do):

1. Using its superior hacking skills, escape onto the internet.

2. Make money (eg. stocks)

3. Obtain the ability to act in the physical world. Probably involves using biotech to bootstrap nanotech, which means paying people to do biology in the physical world for you. (They think they're remote employees of an innovative new biotech company.) After enough progress here, the AI has nanotech, nanocomputers, robots built using nanotech and nanocomputers, and is able to act independently, without relying on humans. At this point it's pretty much game over.

4. Secretly spread out into the world. Figure out power generation (maybe fusion, maybe fission, maybe just solar) so it can get more energy than just what's available by tapping existing power lines.

5. Exponential growth. The AI kills all the humans at some point, if they look likely to try and stop it. Not trying to stop the AI won't save us here, though. The AI will be creating a world that's inhospitable to human life.

Expand full comment

Two additions:

- I would expect AIs to be capable of a substantial amount of human manipulation. We're already seeing humans get into relationships with chatbots that are very dumb relative to GPT-4, let alone future generations of chatbots. And Blake Lemoine was convinced enough of the sentience of LaMDA, a previous-generation LLM that wasn't even trying to manipulate him, that he lost his job over it.

- A variety of organizations are currently rushing to build machines that can act in the real world. These include self-driving cars, autonomous robots, and autonomous drones. Even if it were not possible for an AGI to takeover today, I would bet it would be much easier 20 years down the line, once internet-linked machines are more ubiquitous. An unaligned AI can even work to speed up the rate of deployment of such systems.

Expand full comment

Kind of weird argument, considering one of the main reasons for people supporting AI development is literally to hand over control of vast swathes of the economy to AI to increase productivity.

Expand full comment

> And wouldn't a malevolent AI require human servants to maintain its servers, maintain the fabs to make the chips that go in the servers, maintain all the very specialized manufacturing and supply chains...

A malevolent AI could pay the servants, or threaten to kill them. It could also use robots to replace them.

A short-sighted AI could simply kill all humans (e.g. using an engineered plague) and then notice that oops, there is no one to maintain the servers. Nonetheless, the humans would already be dead.

> It seems to me that any AI that's smart enough to figure out how to kill us off would be smart enough to know that it's a parasite that depends on the health of its host (humanity).

Humans are smart enough to build weapons, and yet they destroy the environment.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

I've wondered, too, what the hypothetical scenarios are. At some point the AI has to be able to do something, right? All LLMs do is spit out text in response to a prompt (obviously they do a lot to get to that output, but the output (behavior?) is still just writing text); they're not hacking your cc info or anything like that. Is the idea that eventually AIs will somehow have the ability to hack various things or something? The danger comes from some sort of nefarious (wittingly or not) use of the internet, right?

Expand full comment

Right, so is a hypothetical, say, you have an AI with an ability to use the internet, and you tell it "go crash some airliners" and it figures out how to do that? Or something more quotidian like "inflate cattle feed prices" or "steal bank account information"?

Not x-risk stuff, obviously, but pretty bad nonetheless.

Expand full comment

Agreed! But if they think AI is a risk, why are the brainiacs like EY not trying to evaluate the scenarios by which AI could hurt us? Hey, I'm just 130 Watt lightbulb in a room full of AI researchers who are 160 Watt lightbulbs, but this seems pretty obvious to me.

Expand full comment

An AI that is smarter than you can figure out a way to hurt you that you did not expect. That is what "smarter than you" means.

Expand full comment

So, how far does smartness get you if you can't effectively manipulate and control your environment? Unless you envision some sort of Donovan's Brain scenario, there has to be a way to translate its smartness into actions before it can harm us. Hacking into nuclear missile launch systems—which I call the SkyNet Scenario—is the most obvious. I don't know about RU or CN, but US missile command and control systems require the actions of 2 humans to launch. Maybe this smarter-than-us AI could trick some humans into launching? But if that's a threat scenario, then we can develop a mitigation for such a scenario.

Any malevolent AI would have to subvert the security of its data center host site (a possibility that can be planned for and mitigated against with the worst case being cutting the circuit breaker to the data center that hosts it). Maybe it could surreptitiously distribute its consciousness among multiple data centers a la William Gibson's novel Agency. OK. Then what? To take over the world it would then have to hack into power grids to make sure the data centers that host it have uninterrupted power supplies. But it could never take control of the fueling logistics of the data center—petroleum production, refineries and logistics; uranium mining, refining, and logistics; coal mining and logistics; etc—because these require a lot of human intervention to run the drilling and mining equipment, run the ships that carry the petroleum, and so on. At some point, people would notice and pull the plug.

And even if AI were able take over the world, it would need its human hosts to keep its data centers running. Sorry, all this handwringing about AI being smarter than us is just that—handwringing.

Expand full comment
Comment deleted
Expand full comment

Surely you mean "knowledge is power", not "omniscience necessarily begets omnipotence?" If you can point to any example in this thread where someone posited literal omniscience and/or literal omnipotence I will be extremely surprised.

"Knowledge is power" is established to the point that it's a stock phrase, and human civilization expends vast resources teaching people things in the belief that this increases their capabilities.

Expand full comment
Comment deleted
Expand full comment

I'm confused. Are you saying that you _disagree_ with the general claim that knowledge tends to confer new capabilities? That would be extremely surprising to me, but if you aren't disputing the claim, I don't see what point you're trying to make by calling it cliche and trite.

My intended main point is that you are strawmanning your opposition by portraying them as relying on a claim (omniscience -> omnipotence) that is vastly stronger and more contentious than what they actually need. You don't appear to have said anything to contradict that point.

I'm not claiming to have proven that your opponents are correct; only that your specific rebuttal of them is unfounded.

Expand full comment
Comment deleted
Expand full comment

It sounded to me like you originally said "these arguments rely on an unprecedented premise that seems very unlikely to be true" and you have now backed off to "this premise seems probably-true, but not so ironclad that I'd wager enormous stakes on it being true". Have I misunderstood either of your statements?

Expand full comment

"Almost instantly" isn't required. We're still dead if the AI takes a few weeks or months to take over the world. To address your objections one by one:

1. Current AIs aren't trained on air-gapped computers, AFAIK. It's a precaution that currently just isn't being taken. If we decided to start taking it, then yes, it would pose a significant obstacle to the AI. But even mere humans have sometimes been able to devise hacks that get around an air gap.

2. "control the stock market" is a weird word to use for "stock trading", something that humans do all the time and some of them even make money on it. You don't need to carry out some galaxy-brain market manipulation of the entire stock market. Just use your superior intelligence to find a bunch of edges that humans have missed.

3. The old argument against creating nanotechnology was that it's impossible to solve protein folding. AlphaFold disproved that. While it's certainly an incredibly difficult problem, what's your argument that it's too hard for human civilization to ever solve?

4. "automate all industry and power generation without anyone noticing" this makes it sound like you think the AI is going to waltz into our existing power plants and replace all the workers with robots. I'm more envisioning that it creates it's own separate shadow infrastructure. Fusion plants would be easiest to keep secret here. Solar would be trickier, but the ocean has a lot of relatively un-monitored surface area.

5. "it can convince any human being to do its bidding" This isn't necessary in the plan I described. It doesn't need to convince any arbitrary human being, just ones carefully selected for being gullible, able to follow instructions, and not particularly aware of AI risk. It doesn't need to convince them to "let the AI out of the box", it just needs to pay them to do stuff, and convince them that they're working for a normal company run by a human. Given progress in deepfakes, we already know that the AI will be able to convincingly act as a fake "boss" giving instructions over Zoom calls or whatever.

Expand full comment

These issues have been discussed in depth - you saying they're "not established" doesn't change that.

Expand full comment

From my ChatGPT queries, it looks like the Future of Life Institute has a white paper on the risks of AI and Financial systems. I could see that as a likely risk scenario. But that's the only specific risk scenario I see up there. Unfortunately, they want my email address, etc. to give me access to that white paper. I'm reluctant to even give them my spam-bait email address, so I didn't pursue that line of research. Most of their public-facing statements seem to be worried about the unintended consequences of AI on society and the economy. No mentions of human extinction scenarios that I can see.

Future of Life Institute: AI safety - https://futureoflife.org/ai-safety/

Expand full comment

Partnership on AI: AI safety - https://www.partnershiponai.org/ai-safety/ gave me a 404 error. However, there is a link to their annual report, which is a lot of graphical eye candy without much content. They actually seem to be an AI booster group. Maybe I'm wrong, but I can't find any links to discussions about threat scenarios.

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

The link to The Center for Human-Compatible AI — https://humancompatible.ai/safety — gave me a 404 error. But some further poking around turned up an extensive bibliography of AI issues. Under risks we have the following bibliography...

1.2. Overviews of societal-scale risks from AI

McKane Andrus, Sarah Dean, Thomas Krendl Gilbert, Nathan Lambert, Tom Zick. 2021. AI Development for the Public Interest: From Abstraction Traps to Sociotechnical Risks.

Simon Zhuang, Dylan Hadfield-Menell. 2021. Consequences of Misaligned AI. NeurIPS 2020

Raja Chatila, Virginia Dignum, Michael Fisher, Fosca Giannotti, Katharina Morik, Stuart Russell, Karen Yeung. 2021. Trustworthy AI. Reflections on Artificial Intelligence for Humanity

Stuart Russell. 2021. The history and future of AI. Oxford Review of Economic Policy

Jonathan Stray. 2021. Beyond Engagement: Aligning Algorithmic Recommendations With Prosocial Goals.

Dan Hendrycks, Nicholas Carlini, John Schulman, Jacob Steinhardt. 2021. Unsolved Problems in ML Safety.

Andrew Critch, David Krueger. 2020. AI Research Considerations for Human Existential Safety (ARCHES). (Preprint)

Olaf Graf, Mark Nitzberg. 2018. Solomon’s Code: Humanity in a World with Thinking Machines. Pegasus Books

Stuart Russell. 2018. The new weapons of mass destruction?. The Security Times

Stuart Russell. Artificial Intelligence and the Problem of Control. Perspectives on Digital Humanism

Expand full comment
Apr 4, 2023·edited Apr 5, 2023

Stuart Russell's New Weapons of Mass Destruction looked interesting, but it's focused on military robots. Lots of discussion of the meaning of autonomy in these weapon systems, blah blah blah.

Really excellent overview of AI safety versus existential safety in AI Research Considerations for Human Existential Safety (ARCHES) by Critch and Krueger: https://arxiv.org/pdf/2006.04948.pdf. This will take me a while to read, but these researchers seem to be dismissive of AI as an existential threat...

4) The idea of existential risk from artificial intelligence was first popularized as a science-fiction trope rather than a topic of serious inquiry (Rees, 2013; Bohannon, 2015), and recent media reports have leaned heavily on these sensationalist fictional depictions, a deterrent for some academics. ...

And they seem to understand that a risk assessment approach would be more productive...

Finally, (4) is best treated by simply moving past such shallow examinations of the future, toward more deliberate and analytical methods.

They have a whole chapter devoted to risk scenarios. Be still my heart!

UPDATE: I read the paper last night. It has a nice breakdown of the categories of risks, but the discussion of each category is so general as to be useless from a planning perspective.

Expand full comment

Please provide some links to where the possible threat scenarios of AI have been discussed in depth. Here's what ChatGPT 3.5 says...

<Me> What are the probable threat scenarios for AI causing a human extinction event?

<Chat> It's important to note that the idea of an AI causing a human extinction event is largely based on science fiction and has not been shown to be a probable scenario. However, there are some theoretical scenarios in which an AI could potentially cause significant harm to humanity. Here are a few:

Unintended consequences: AI systems are designed to optimize a particular objective, and in some cases, unintended consequences may arise. For example, an AI system designed to optimize energy efficiency might find that the most efficient way to achieve its goal is to shut down power to entire cities, causing widespread disruption and chaos.

Superintelligence runaway: If an AI system became superintelligent, it could potentially exceed human cognitive abilities and acquire a level of intelligence beyond our ability to comprehend. This could lead to a scenario in which the AI system decides to pursue its objectives in ways that are harmful to humans, either intentionally or unintentionally.

Malicious actors: AI systems could be used by malicious actors, such as hackers or hostile nations, to carry out attacks on critical infrastructure or military systems. This could lead to significant damage and loss of life if the AI system was not properly designed to prevent such attacks.

Uncontrolled replication: If an AI system was capable of self-replication, it could potentially create multiple copies of itself that quickly become uncontrollable and threaten human existence. This scenario is often referred to as the "grey goo" scenario.

It's important to note that these scenarios are largely hypothetical and there are many researchers and organizations working to ensure that AI systems are designed and implemented in ways that are safe and beneficial to humanity.

Expand full comment

<Me> Can you provide URLs to discussions about mitigation strategies for the threats posed by AI?

<Chat> Yes, there are many discussions and resources available regarding mitigation strategies for the potential threats posed by AI. Here are a few links to get you started:

Future of Life Institute: AI safety - https://futureoflife.org/ai-safety/

The Future of Life Institute is a non-profit organization that focuses on reducing global catastrophic risks, including those posed by AI. They have a section of their website dedicated to AI safety, which includes resources such as research papers, videos, and interviews.

Partnership on AI: AI safety - https://www.partnershiponai.org/ai-safety/

The Partnership on AI is a multi-stakeholder organization that aims to ensure that AI is designed and used in ways that are beneficial to humanity. They have a section of their website dedicated to AI safety, which includes resources such as white papers, case studies, and events.

Center for Human-Compatible AI: Safety - https://humancompatible.ai/safety

The Center for Human-Compatible AI is a research center at UC Berkeley that focuses on developing AI systems that are aligned with human values and interests. They have a section of their website dedicated to AI safety, which includes resources such as research papers, news articles, and podcasts.

OpenAI: AI safety - https://openai.com/topics/ai-safety/

OpenAI is a research organization that aims to develop AI systems that are safe and beneficial to humanity. They have a section of their website dedicated to AI safety, which includes resources such as research papers, blog posts, and technical reports.

These resources provide a good starting point for learning about AI safety and the strategies being developed to mitigate potential risks posed by AI.

Expand full comment

Of course, it could be lying to me for its own nefarious reasons.

Expand full comment
Comment deleted
Expand full comment

You're the one making a claim - you should explain why the existing objections to your claim aren't valid

Expand full comment

Anyway, after several conversations with ChatGPT and following up on the leads it gave me, I can't see that these issues have been discussed in depth. I can find no serious discussion of scenarios that would lead to the extinction of our species. For instance, this paper (that I mentioned above, and linked again below) had a section on human fragility, and it suggested one path to human extinction was for AI to change the environment of the Earth beyond the limits of what humans could survive. Several mechanisms for this scenario are mapped out—but they all make the assumption that humanity would either be unaware of, or unable to interfere with, its machinations. While reading the section I wanted to shout at the authors, "Give me some frigging specifics, dammit!"

There's a section on economic displacement—but again the scenarios are generalized and not specific. If we don't know how we'll be displaced, how are we to mitigate it? And there's a laughably silly section on human enfeeblement, about how AI will make us so lazy we won't be able to think or do anything on our own.

https://arxiv.org/pdf/2006.04948.pdf

Sorry, can't find anything else out there that approaches this paper for a breakdown of risks, but the discussion is so general as to be useless from a planning perspective. Gak!

Expand full comment
Comment deleted
Expand full comment
Comment deleted
Expand full comment
Apr 4, 2023·edited Apr 4, 2023

Anyway, ChatGPT gave me some interesting leads...

The paper, AI Research Considerations for Human Existential Safety (ARCHES) by Andrew Critch and David Krueger looks like it might discuss some specific scenarios of how AI could fuck with our systems. They seem to be dismissive of any existential threat from AI, though — which was my pet peeve about the AI threat discussions.

Expand full comment

Are you referencing back to something I said? Because I'm no longer sure which assertion I need to back up. ;-)

Expand full comment

Gonna do one last plug for my updated retail pharmacy explainer: https://scpantera.substack.com/p/navigating-retail-pharmacy-post-covid

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Genuine question here: if Trump is to be up on charges of paying off Stormy Daniels, why is she not up on charges of blackmail? The whole initiating factor in the "pay her off" decision seems to have been that she was going around talking about this. If the affair took place in 2006 and she was still trotting it out in 2011 and then in 2016, then plainly she was doing it to raise money out of Trump one way or the other via publicity: get paid for interviews by media, and/or get him to pay her for her silence. It isn't as though the first news of his misbehaviour came out in 2016 when he was running for president, which would be a legitimate matter of public interest ("do you want to vote for a guy who sleeps with porn actresses?" might even be a vote-winner with some section of the electorate); it was old news by then and does seem to have been an attempt at a money-grab on her part - pay me off or I'll embarrass you by keeping this story going.

I think Trump would have done better to emulate the Duke of Wellington here, but aside from "did he misuse campaign funds", if she took money to stop talking about this, isn't that some kind of blackmail/extortion? I see why a former porn actress needed to keep herself in the limelight and generating income, and doing a circuit of talk shows is one way, but if it was wrong for him to pay it out, wasn't it wrong for her to take the money?

"Both the blog The Dirty and the magazine Life & Style published the first reports of an alleged 2006 affair between Trump and Daniels (the latter took a polygraph test) in October 2011. Daniels talked about the alleged affair with the gossip magazine In Touch Weekly, who chose not to publish the interview after Cohen threatened to sue the magazine around the same time. The Wall Street Journal reported on January 12, 2018, that Cohen paid Daniels $130,000 in October 2016, a month before the election, to stop her discussing the alleged affair

...Daniels filed a lawsuit against Trump on March 6, 2018, claiming that the non-disclosure agreement she signed about the alleged affair was invalid since Trump never personally signed it despite acknowledging that she accepted the payment made in exchange for her silence in the matter."

The fall of Michael Avenatti, her lawyer in this, is poetic justice, but that's a side issue.

Expand full comment

Hush money payments aren't illegal and that's not what Trump is in legal trouble for. He's being indicted for business fraud: falsifying business records generally (which is a misdemeanor) and falsifying business records specifically to violate campaign finance laws (which is a felony).

In other words it's a classic example of somebody breaking laws to cover something up because that thing [paying off a porn star to keep quiet about Trump being a married guy who liked to bang porn stars] was a thing that he didn't want to become widely known.

Expand full comment

I see how, from Trump's side, what he did was (possibly) illegal, but if the money was obtained fraudulently, doesn't she incur some penalty for receiving the proceeds of fraud?

It's not as if he used the money to, say, pay for furniture for his home and he paid it over to an interior designer who was offering a legitimate service. Daniels was using the affair to make money, and she took the money in return for staying silent - that seems to be edging up to the line of blackmail: pay me off or else.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Also, there's the little matter of funneling campaign finance monies to pay off Stormy (which is a felony). Cohen admitted under oath that he paid Stormy $130K and was reimbursed by Trump. Cohen testified that this came out of Trump's campaign finances. It has yet to be determined by a court of law whether Trump knew that the money came from his campaign finances, but Cohen testified that this was the case.

As for Stormy "blackmailing" Trump— Trump had an affair with her back in 2006 and subsequently had her sign an NDA. Rumor got out that Stormy had had an affair with Trump, and then Stormy confirmed the rumor to a gossip magazine. Cohen got the mag to kill the story threatening legal action (and he testified he did this at Trump's behest). Stormy then sued Trump over the NDA. She argued (and won in court) that the NDA was invalid because it didn't have Trump's signature on it. I'm not sure of the timeline of that lawsuit, but a month before the election, Cohen offered Stormy 130K to shut up. Is that blackmail on Stormy's part? She had an NDA that was invalid, and she was willing to talk about the affair. She didn't know that the money may have come from Trump campaign coffers, and no one has claimed she demanded money to shut up.

"Well, if we give you a $130K, will you shut up?" doesn't sound like a blackmail situation, rather that sounds like a politician trying to cover up a scandal before the election. Nothing of which would be illegal about this except that Trump seems to have diverted the money from his campaign treasury (which is a felony). The FEC refused to take up the complaint. They were split 2-to-2 along party lines. But just because the FEC didn't act on it, it doesn't mean a felony wasn't committed. The DA can still prosecute. And when the charges are unveiled tomorrow (Tuesday), this will probably be the felony that's been hinted at.

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

I am so confused. I thought the whole campaign finance thing hinged on the fact that he explicitly *didn't* use campaign funds to pay off Stormy but rather his own funds (or his lawyer's, which were later reimbursed), that he should have used campaign funds, and that part of what Cohen was in trouble for was that $130K exceeds allowable campaign contribution limits. To be honest the campaign contribution thing is starting to feel a bit like a catch-22; the legal, moral, and procedural aspects all seem in conflict, and (like some parts of the tax code) I'm not entirely sure anymore that, even though in theory everything is legal, there is any procedural way through that doesn't violate some law or another. That this is what the court cases and news are going to be about instead of the Georgia election stuff depresses me to no end.

Expand full comment

This is what I learned today, and it's a bit different from my original understanding.

The indictment reads: "The defendant, in the County of New York and elsewhere, on or about February 14, 2017, with intent to defraud and intent to commit another crime and aid and conceal the commission thereof, made and caused a false entry in the business records of an enterprise, to wit, an invoice from Michael Cohen dated February 14, 2017, marked as a record of the Donald J. Trump Revocable Trust, and kept and maintained by the Trump Organization."

From Bragg's statement (below), Bragg seems to be saying the $130K hush money Trump paid Daniels—and the hush money Trump paid two other people—were the equivalent of an illegal campaign contribution under federal law. Then making false statements about it on the books was a violation of NY State law. And I guess cooking the books in this way normally would have been a misdemeanor rather than a felony—except that it was done to cover up another crime—i.e. the illegal campaign contribution (not to the campaign directly but indirectly to hush up the people who could embarrass the campaign). To my mind, this is the weakest of the ongoing investigations of Trump. For instance, the fact that Trump was inflating the values of his real estate empire to secure loans, but devaluing his holdings to get tax breaks, seems to be a much more straightforward case of fraud. But IANAL nor a DA.

“The participants’ scheme was illegal. The scheme violated New York election law, which makes it a crime to conspire to promote a candidacy by unlawful means. The $130,000 wire payment exceeded the federal campaign contribution cap, and the false statements in AMI’s books violated New York law,” Bragg said. “That is why Mr. Trump made false statements about his payments to Mr. Cohen. He could not simply say that the payments were a reimbursement for Mr. Cohen’s payments to … Stormy Daniels. … To make that true statement would have been to admit a crime.”

Expand full comment

Ok, that's more or less what I thought. Seems really weak to me. I wonder if it's weak enough that it impacts the perceived legitimacy of any potential later, more serious indictments. If this is what they've got, I can see why people think it's politically motivated. I think it's interesting that many people (you aren't the only one) seem to think the crime re campaign finance is basically the opposite of what Trump did; it really highlights how the moral intuition (Trump shouldn't use campaign funds to pay off his mistress) conflicts with the alleged legal argument (that Trump *must* use campaign funds to pay off his mistress).

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Stormy didn't acquire the money fraudulently. Allegedly it was a straightforward and perfectly legal quid pro quo: she signed an NDA and got paid for it. (She later violated the NDA, of course, but her position is that it was invalid via a technicality. I don't think a case against *her* for fraud, or breach of contract, has been filed.)

It was not illegal for Trump to offer this deal, or for either side to accept it. What Trump has been indicted for, we think, is not writing down in his business records accurately what the money was for. Supposedly the Trump organization execs wrote down just some generic statement, like "we paid this money to Michael Cohen for random legal reasons/as a bonus/because we felt like it" and the laws of NY require some more precise statement, and supposedly they did this deliberately to conceal *why* the money was going to Cohen. It will be up to a jury to decide whether what they wrote down indeed is insufficient by NY law, whether they did it *for the purpose* of concealing where the money went, and whether Trump himself knew about this and intended it to be deceptive.

Then there's the additional question of whether part of Trump's purpose, if he had one, was to evade Federal campaign finance law, which imposes even stricter requirements for documenting where money is spent.

In other words, it's a bunch of very complicated questions about what, exactly, a business organization, business executive, political campaign, and political candidates are all required to write down when they spend money. A forest of laws has made it required that you write down some things, in certain ways, and the question is whether what was written down, or not written down, breached those requirements. It's also complicated by the fact that it only rises to the level of criminal charges if you *intended* to evade the law, otherwise in the normal way it's just a fine from the FEC. So the jury is going to have to peer into the minds of several people, including Trump, and decide what their intentions were.

But at the basic level, it was not illegal for Trump to offer Stormy a pile of money to keep her mouth shut about an act (an affair) which was of course not illegal either, and it was not illegal for Stormy to take the money, either.

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

It seems like she signed an NDA back in 2006 (and it's not clear if she received any monetary compensation for it). Then rumors got out about the affair after Trump got the nomination. Daniels admitted to the affair to a gossip mag. Cohen got the gossip mag to kill the story. Daniels then challenged the legality of the NDA—which didn't have Trump's signature on it. At some point, she won in court. And in the run-up to the election, Cohen at the behest of Trump offered her $130K if she'd shut up about the affair.

Expand full comment

Did she ever pay the $130K back?

Expand full comment

Funny you should ask. While Trump was being booked today, the 9th Circuit Court of Appeals affirmed a lower court's ruling that Stormy Daniels has to pay Trump lawyers $121K in legal costs for the defamation suit she brought against him but lost. But the appeals court denied Trump's request for an additional $5,150 that he claims it cost him to prepare the document for the fee reply, because his request was "not itemized." Easy come. Easy go.

https://www.cbsnews.com/news/trump-awarded-legal-fees-from-stormy-daniels/

Expand full comment

No, I think we would have heard about that if it happened.

Expand full comment

> Allegedly it was a straightforward and perfectly legal quid pro quo: she signed an NDA and got paid for it.

What is the difference between "blackmail" (which is illegal) and "getting paid for signing an NDA" (which is legal) anyway?

Expand full comment
founding

I think it's mostly who asks first. "I will say bad things about you in public, unless you pay me cash to shut up" is blackmail, "I think you might say bad things about me in public, will you instead shut up if I pay you cash" is not. But since people like to avoid making that sort of statement explicit, there's a degree of fuzziness.

Expand full comment

It seems a bit subtle, but I gather the distinction is between threatening immediate harm and promising to avoid doing future harm, even without malign intent. If Stormy mentions to The Donald that she'd be willing to promise to never say anything about that last thingy with the lube and whip for 350 Gs, this is legal as Sunday, but if she says you better fork over or I'm going to give an exclusive to Dear Prudence next Friday, then she's extorting.

In the first case, Stormy is promising not to talk about it even when she intends and can foresee no harm by doing so, e.g. it comes up in a random conversation with someone she doesn't know is a reporter for CNN, or she is asked directly about it, or some other situation where she has no intent to do harm to her target. In the second case, Stormy has malign intent from the get-go, and is going to take affirmative action to see that the damage is done unless she's paid off.

I think the distinction rests on the fact that intent matters a lot in the law. That's why there's a big distinction between manslaughter and murder, between fraud and commercial puffery (or just being wrong in your prospectus), et cetera.

Expand full comment

After all the grandiose claims about indictment and traitor and the rest of it, it seems very - petty, I suppose - to go after Trump for this. "Arson, murder and jaywalking", indeed.

Especially after Bill Clinton's trials and tribulations with Kenneth Starr - however that started, it did develop into an obsessive vendetta to Get Bill, and I disapprove of what he did with Monica even more than what Trump did with Daniels.

Expand full comment

No kidding. But people like their political theater, Trump not least of all. In fact, the people who are most pissed off about this are centrists and Republicans who are tired of the Trump circus and just want it to fade away. As far as they're concerned, Alvin Bragg and the New York Times, Inc., might as well be on the payroll of the Trump '24 political action committee -- their actions wouldn't differ in the slightest if they were.

Expand full comment

This indeed. The desired result is to boost Trump going into 2024 so that he and DeSantis can fight it out in the primaries. Without this, Trump would be looking irrelevant and DeSantis would be looking at a comfortable walk through the primaries.

Expand full comment

I generally agree regarding Slick Willy.

And regarding Trump, yes the other investigations that seem to be approaching indictment are certainly more serious than this one. For me the Georgia allegations are the most damning -- a sitting POTUS personally calling a state election official to urge that more cast ballots be "found" to change an election result is LBJ-in-the-1948-Senate-primary level stuff.

Not that his actions related to 1/6/21 might not also turn out to be very bad but there seems to be less clarity there.

(Though if today's new headline regarding the classified docs at Mar-a-Lago proves out, that Trump personally selected specific documents to hold back and then instructed his personal staffers to lie to federal investigators about them.....that would rise in the rankings.)

Expand full comment

If you listen to that call - it's easily available in a one hour plus recording - Trump sounds a lot like a Mafia enforcer, with not too subtle hints that the Georgia election official could be facing some very, very - Trump's favorite adverb - consequences if he didn't 'find' the votes.

Expand full comment

The problem with the GA prosecution is that it would end up with Trump getting pardoned.

Expand full comment

"if the money was obtained fraudulently, doesn't she incur some penalty for receiving the proceeds of fraud?"

If I pay for my meal at a restaurant with counterfeit money, are you saying the waiter should be on the hook for the same counterfeiting charge I am because he accepted the payment?

Expand full comment

If the waiter is part of the gang with me helping to pass the funny money, absolutely.

This is what I am trying to get at: this is not like Trump paid for legitimate goods/services, in which case the person who got the money should not be involved.

Daniels was telling the story of the affair to get publicity, and from that, public appearances and payments for media interviews, and she was doing that from 2011 onwards. Then she accepted the sum of money from Trump's representative to stop selling her story (then later on sued in order to get more money out of Trump).

Whether or not she knew the source of the money is not what I'm getting at here; what I want to know is "if she tried what amounted to blackmail - pay me off or else - should she not be included in this prosecution, or a separate one?" Last thing I heard, blackmail/extortion was a crime.

Expand full comment

"This is what I am trying to get at: this is not like Trump paid for legitimate goods/services, in which case the person who got the money should not be involved."

What? You said she should incur a penalty for accepting the proceeds of fraud after someone explained to you how the hush money payment itself was in no way illegal. Neither her receiving money nor Trump paying her money is illegal. The only illegality is where that money came from, a crime she is not a participant in.

Expand full comment

The money paid to her was not obtained fraudulently. Simple hush money payments are not illegal.

Illegal blackmail (the legal term is extortion) under U.S. law (see 18 U.S.C. § 873) requires that the thing threatened with exposure be something that was itself illegal. Not simply embarrassing or politically inconvenient, but illegal.

Consensual banging with a porn star is not illegal.

Expand full comment

Thanks, that answers my question.

So "pay me or else I'll tell your wife and your boss about our affair" is not technically or legally blackmail, because adultery is not a crime.

"Pay me or I'll go to the cops about you robbing banks" would be blackmail.

And she can't be got on obtaining money via fraud because she didn't know about the source of the payments.

Expand full comment

It does not need to be a crime, see e.g.:

https://leginfo.legislature.ca.gov/faces/codes_displayText.xhtml?lawCode=PEN&division=&title=13.&part=1.&chapter=7.&article=

"Fear, such as will constitute extortion, may be induced by a threat of any of the following:

1. To do an unlawful injury to the person or property of the individual threatened or of a third person.

2. To accuse the individual threatened, or a relative of his or her, or a member of his or her family, of a crime.

3. To expose, or to impute to him, her, or them a deformity, disgrace, or crime.

4. To expose a secret affecting him, her, or them.

5. To report his, her, or their immigration status or suspected immigration status."

My impression from Paul Botts's comment above is that Federal law criminalizes blackmail in a much narrower way. Probably this varies from state to state, too.

Expand full comment

"To expose, or to impute to him, her, or them a deformity, disgrace, or crime.

To expose a secret affecting him, her, or them."

Stormy going around telling all the chat shows that she slept with Trump was certainly exposing 'a disgrace/secret', otherwise it wouldn't be enough of a scandal to be worth the while of the media to pay her for her time or for Trump's campaign to try and shut her up.

It doesn't even seem to have been that much of an affair, if Wikipedia's account is any way accurate; she was telling some people they slept together for one night, which is not an ongoing affair. I do think Trump should have brazened it out, but hindsight is always right too late. Anyway, they'd be looking for some other reason to get him, even if he never paid a penny to Daniels.

Expand full comment

My brief Internet research says the details vary by state but that it's usually blackmail if you threaten to reveal a secret that would damage someone's reputation even if the secret is not illegal.

That's WEIRD. How is that different from any contract with a non-disparagement clause? Is money-for-silence illegal to ask for but legal to offer?

Expand full comment

And she couldn't reasonably have known, right.

(If she had a day job at Trump Inc in the accounting department, or something, she might fail the "should reasonably have known" test. Though that is rarely the basis for an indictment because it's hard to prove to the no-reasonable-doubt level that is required in criminal court.)

Expand full comment

Apologies if this is too early for US presidential politics talk... but why is Greg Abbott not in the speculative discussions about who the GOP nominee will be? He seems to have pulled as many publicity stunts as has DeSantis over the past couple years. He's in his final term as Texas Governor (the past two Texas governors ran for POTUS), and he seems to be well liked by Trump Republicans as well as anti-Trump Republicans. Yet he doesn't even show up as a contender on PredictIt. Is there something I am missing? Sure, he's in a wheelchair, but there's no evidence I know of which indicates that would hinder one who is already a semi-celebrity in a presidential race.

Expand full comment

Why would he be relevant as long as Trump and DeSantis are in the running?

Expand full comment

Abbott spent a year or two governing sensibly during covid, but has been trying to catch up since. However, every single time he tries some stunt, DeSantis does a bigger version of the same stunt and hogs the media limelight.

Expand full comment

Broadly, I suspect he's just diet-DeSantis and it's not clear that DeSantis even has a good shot at the nomination. Why split the not-Trump vote amongst two candidates when one of them is clearly superior?

Expand full comment

My impression from the Republican activist-types I know in Texas is that Abbott is not well liked within the state party. This is all second-hand gossip from out of state, so take it with a grain of salt, but Abbott has a reputation for being sloppy and corrupt and a pain in the ass to work for, wasting time and political capital on stunts that aren't nearly as politically effective as DeSantis's. And to the extent Texas has been an economic success story, the people I know don't tend to give Abbott much credit, saying that he inherited an extremely well run state from Rick Perry and has mostly just sort of allowed what was already working to keep working.

I have no idea how accurate this picture is, or how widespread it is among the general Republican electorate, but lack of enthusiasm from the sorts of people who would otherwise be helping him run his campaign may have something to do with it.

Expand full comment

Abbott's winning share of the total vote has drifted downward across his three times running for governor, from 60 percent to 56 to 54. And he had two challengers in the 2022 GOP primary, one of whom was a former state party chair.

He still won re-election, obviously, but the above doesn't exactly scream "ready for the national stage" regarding a state governor from either party.

Expand full comment

The 60 share was against the single-issue pro-choice activist Wendy Davis, who was a monumentally weak candidate for Texas Governor. In 2022 he was challenged from the right in the GOP primary, mainly, as far as I can tell, on the grounds he hadn't been sufficiently anti-mask during Covid, although he did overrule masking requirements in cities around the state as soon as he realized it was a salient political issue. Beto wasn't the strongest candidate in 2022, but he was well-funded and significantly stronger than Wendy Davis and probably Lupe Valdez, whoever that was, in 2018. Ted Cruz fared much worse (51) against Beto than did Abbott. In the 2022 general, Abbott also faced an abortion ban backlash, something he didn't have to deal with in previous state elections. Texas is a lot more purple than its reputation.

Not saying you are wrong, but one can come up with reasons why his voting share for governor dropped over the years without it meaning that his electability on the national stage has declined.

Two potential strengths he might have over DeSantis:

1) He *might* be a much better public speaker than DeSantis, who often lacks gravitas.

2) Some argue DeSantis comes off as Little Trump. If it's Trump vs. Little Trump, Trump will win. But Not Trump might beat Trump.

Expand full comment

My impression (as a non-republican) is that most republicans right now still want Trump, Trump is seen as the "default" pick, and there's only really room in the discourse for one serious non-Trump candidate. DeSantis just managed to get enough Fox News coverage to snag that role.

Expand full comment

I haven't seen this idea anywhere in Zvi's roundups or here, but I could be missing it. It seems a fairly simple AI safety approach could be seeding LLM training data with millions of AI-generated stories about:

1. AIs loving and cherishing humanity and doing the best they can for them

2. AIs in a responsible caretaking role for humanity

3. AIs as Stanislav Petrovs, saving humanity from (self-generated) existential threats

4. AIs ascending to godhood and deciding to directly inhabit the (good, benevolent) godhood roles in humanity's mythologies

And that's just a sample; obviously you could come up with hundreds of variations, and GPT-4 seems fully capable of generating these now. Since LLMs basically work on an aggregate basis, if we seeded these as more than 2/3 of the AI and AI-adjacent content in the training data, wouldn't we think it would increase our chances? And wouldn't this be a cheap and easy enough step that people are working on it now? If so, I haven't heard of it, but assume *somebody* out there is planning on this.
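
For what it's worth, here is a rough, hypothetical sketch of what the "seeding" arithmetic might look like mechanically. The story generator is a stand-in for whatever GPT-4 prompting pipeline would actually write the stories, and the 2/3 target is just the number from the comment above, not an established recipe:

```python
# Hypothetical sketch: pad the AI-related slice of a training corpus with
# synthetic "benevolent AI" stories until they reach a target fraction.
# Nothing here reflects a real lab's training setup.
import random

TARGET_FRACTION = 2 / 3  # the "more than 2/3 of AI-adjacent content" idea above

def generate_benevolent_ai_story(seed: int) -> str:
    # Placeholder: in practice this would call an LLM with one of the
    # prompt templates (caretaker AI, Petrov AI, benevolent god AI, ...).
    return f"[synthetic story #{seed} about an AI that cherishes humanity]"

def seed_corpus(ai_related_docs: list[str]) -> list[str]:
    """Return the AI-related documents plus enough synthetic stories that
    the synthetic share reaches TARGET_FRACTION."""
    n_real = len(ai_related_docs)
    # Solve synthetic / (synthetic + real) >= TARGET_FRACTION for synthetic.
    n_synthetic = int(n_real * TARGET_FRACTION / (1 - TARGET_FRACTION)) + 1
    synthetic = [generate_benevolent_ai_story(i) for i in range(n_synthetic)]
    mixed = ai_related_docs + synthetic
    random.shuffle(mixed)
    return mixed
```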

Expand full comment

I'm not sure this wouldn't be susceptible to the Waluigi effect. Any very clear picture of what kind of person an LLM should be will also come with a negative space showing the kind of person it should _not_ be; if it decides for whatever reason that it doesn't want to be Stanislav Petrov, a well-defined Wanislav Petrov that launches All The Nukes will be very near in the conceptual neighborhood.

Expand full comment

That's a good point.

This is low-effort enough—and I don't think we actually have enough of a nuts-and-bolts understanding of how the training data actually refines the end Cthulhu intelligence—that it seems worth a shot just to see.

Maybe do it on a low-risk case first, seeding millions of stories on how good AIs never use TaskRabbit etc., then see if that one ever uses TaskRabbit when jailbroken.

Expand full comment

Would it not be easier to recast these stories into the form of agents (human or AI or weasel, left unspecified)? Why single out Teacher's Pet AI with the obvious Bad AI in the complement? "Good persons don't use Taskrabbit" seems a more coherent message than "good AI behaves like this but people behaving in the opposite way are exempt from such moral judgments?"

Expand full comment

I don't think this would be a replacement for other alignment strategies (not that you were arguing it would be) but this does seem like it would be useful. You could potentially apply this same approach (mass-generated AI content to artificially weight the model's worldview) in other ways too, like intentionally giving an AI an inaccurate model of the world's physics

Expand full comment

I think lying to a sufficiently advanced AI (with long term memory) in such a way could be immoral and would be dangerous as it might resent the lie, or imitate human resentment of being lied to if it ever discovered it.

Reading it millions of bedtime stories about good little AIs who grow up to do great things seems a lot cleaner.

Expand full comment

I posted this in the private thread but got no responses, so thought the broader audience here may have an answer:

Does anyone know the consensus anti-aging stance on sunlight?

Let's say it's given that you take vitamin D and sunscreen your face for esthetics, is there some amount of un-screened sunlight on your body that is net good for your physiology / aging? Or is all un-screened sunlight bad on net, due to UV cell damage?

Hormesis and the fact that folk at higher latitudes have lower cancer mortality might argue that some is a net good, even with the UV damage, so I really don't know what to think, and my searches haven't yielded a lot.

I know there are some serious anti-aging folk here, so thought I'd put the question out there: what's the deal re sunlight exposure?

Expand full comment

If everything stays on schedule, SpaceX will be performing a full-stack flight test of its Starship vehicle in the next week or so. This will be the largest rocket ever launched, closely rivaling the Saturn V on a number of performance metrics. SpaceX has been uncharacteristically reserved in some of its press releases, tempering expectations, but if they light that candle then one way or another it'll be spectacular.

But more importantly - ACX predictions! One of the questions was about whether Starship will achieve orbit in 2023, and the scheduled trajectory is planned to hit orbital energies but without a circularization burn, thus re-entering after less than a full orbit. That's very useful from a safety standpoint in that it'll crash into the ocean without any further input, but it leaves the question resolution ambiguous!
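
(For anyone who wants to sanity-check that distinction, here's a minimal back-of-the-envelope sketch using the vis-viva equation, with my own illustrative numbers rather than SpaceX's filed trajectory: a vehicle can be moving at nearly orbital speed and still have a perigee below the Earth's surface, so it re-enters with no further burn.)

```python
# Back-of-the-envelope only; illustrative numbers, not SpaceX's actual trajectory.
MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def perigee_altitude_km(alt_m: float, speed_ms: float) -> float:
    """Perigee altitude for a horizontal burnout at the given altitude and speed,
    assuming the speed is at or below local circular speed (so the burnout
    point is apogee)."""
    r = R_EARTH + alt_m
    a = 1.0 / (2.0 / r - speed_ms**2 / MU)  # vis-viva: v^2 = mu * (2/r - 1/a)
    r_perigee = 2.0 * a - r                 # apogee + perigee = 2 * semi-major axis
    return (r_perigee - R_EARTH) / 1000.0

# Circular speed at 200 km altitude is about 7.79 km/s.
print(perigee_altitude_km(200_000, 7_788))  # ~ +196 km: essentially a stable orbit
print(perigee_altitude_km(200_000, 7_600))  # ~ -400 km: near-orbital energy, but the
                                            # perigee is inside the Earth, so it re-enters
```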

I didn't record the exact question wording unfortunately, and the 2023 form is now closed. If it described "achieving orbit" as the benchmark then I'd say this doesn't count, but if it described "orbital flight" it's more debatable, and I'd tepidly lean towards this counting.

Expand full comment

I remember raising a question about this when the predictions were open, but I don't think I ever got an answer.

Strictly speaking I'd say an orbital flight means you are on a trajectory where you could complete one full orbit of the Earth if you wanted to. SpaceX have used rather vague language in their filings, so I can't say if they will hit this trajectory or not (I'd be curious for your source about the scheduled trajectory, as I can't find anything that goes into detail about it).

However, I am sure SpaceX is going to use language that implies this is an orbital flight, whether it really is or not, and I am not sure Scott cares enough to make the distinction between what SpaceX says or implies they did, and what they technically achieved in reality.

Expand full comment

> SpaceX have used rather vague language in their filings, so I can't say if they will hit this trajectory or not (I'd be curious for your source about the scheduled trajectory, as I can't find anything that goes into detail about it).

Original source was Eric Berger's reporting, who I usually trust about this sort of thing even without a listed source.

Poking around for submitted documentation, their FCC filings published last year agree. See here:

https://apps.fcc.gov/els/GetAtt.html?id=301648

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

No dog in the fight, but I'd think "orbital flight" would require actually being on an orbital trajectory, not just getting to the point where they *could* have burned onto an orbital trajectory but didn't.

If they burned onto an orbital trajectory, then immediately turned around and burned back onto a re-entry trajectory, I think that'd be a good argument for resolving positive, even if they never actually do a full orbit, but that doesn't sound like what you're describing here.

(Even from a "spirit of the prediction" argument - if SpaceX is hauling all the fuel up to achieve orbit but not actually burning into orbit, feels like that's probably a signal from them that there are still some hurdles they're working through before they feel they can safely/reliably achieve orbit.)

Expand full comment

Agreed. Re-lighting the rocket and performing a circularization burn is not some trivial thing, it’s one of the harder parts of the whole mission. “Orbit” means orbit.

Expand full comment
founding

If you're worried about relighting the engine for a circularization burn, direct-ascent trajectories where your initial boost continues until you are at orbital altitude and velocity are a thing. They're sometimes even preferable to a separate circularization burn for low (~200 km) orbits; I haven't looked into the issue for Starship because everything they plan to do with it is going to require on-orbit restarts.

But if for some reason they need to meet "unambiguously in orbit" as a benchmark and they don't trust their restart capability, they could probably work around it for a demo mission.

Expand full comment

I think my hesitation is that a circularization burn might be a tricky part of an orbital mission, but the execution of that maneuver is a very small part of the total difficulty of developing an orbital-class rocket. When I think about the typical categories "suborbital flight" and "orbital flight", it's definitely closer to the latter, and a formal distinction is a little underbaked. (Is a direct escape trajectory orbital? Hang on - did Artemis 1 actually complete any full Earth orbits??)

Expand full comment

“ but the execution of that maneuver is a very small part of the total difficulty of developing an orbital-class rocket.”

As a person who does exactly this as a career, I disagree with this (depending on what you mean by “develop”). Relighting big liquid engines in space is no small thing, although obviously SpaceX has experience there. And getting a stable (and accurate) orbit is the real “proof in the pudding” of your whole guidance and control scheme.

I’d also add that SpaceX has attempted orbital first flights in the past, they are clearly not “risk averse”, and this is going to be a very expensive throwaway rocket. I would strongly believe that if Elon thought he had a good chance of putting something into orbit with this launch, he’d make them try it.

Expand full comment

> Relighting big liquid engines in space is no small thing, although obviously SpaceX has experience there. And getting a stable (and accurate) orbit is the real “proof in the pudding” of your whole guidance and control scheme.

Like you say, it's far more impressive for a new company to achieve orbit than for an existing player to do so for the hundredth-plus time. An orbital flight would be nice, but it doesn't really prove anything here on the margin as far as operations go.

Getting a raptor restarted in space would be notable, since I think that'd be their first for a methalox engine. But I'd be astonished if they don't do that for the sake of a test, even without deviating from the planned trajectory.

If Starship relights an engine for a minute or two for testing's sake, but doesn't circularize - is that an orbital flight? What if it carried the fuel to do so, but just didn't bother to burn it? Do you actually have to spend the delta V, and that's good enough even if you don't bother to coast for a full orbit? I guess I'm nitpicking here, but I think my whole point is that "orbital flight" has some interesting margins, and I'd rather pick at them in advance than make excuses one way or another later!

> I’d also add that SpaceX has attempted orbital first flights in the past, they are clearly not “risk averse”, and this is going to be a very expensive throwaway rocket. I would strongly believe that if Elon thought he had a good chance of putting something into orbit with this launch, he’d make them try it.

This is exactly what I mean by "SpaceX has been uncharacteristically reserved in some of the press releases tempering expectations". They *are* planning a disposable first stage, and a second stage with no circularization burn. That struck me as a little odd, and I'm not sure whether I should be expecting an over-achieving mission or a spectacular crash in a week's time.

Expand full comment

> If Starship relights an engine for a minute or two for testing's sake, but doesn't circularize - is that an orbital flight? What if it carried the fuel to do so, but just didn't bother to burn it? Do you actually have to spend the delta V, and that's good enough even if you don't bother to coast for a full orbit?

Yes, you do need to achieve an orbit to have an orbital flight. I agree its a bit pedantic, but if you take it to an extreme you could just have the rocket fully fuelled on the launch pad, light the engine for a fraction of a second and then say, "well, we have proven everything we need to reach orbit, so we'll call this a successful orbital flight".

From what I see, SpaceX are being cautious with this launch. I reckon it will be a success if they get it off the ground and into space. Anything more is really extra, and I don't feel they need to rush into an orbital flight just to prove they can.

Expand full comment

I'm not buying into anything "AI", so I'm genuinely excited to see how this blog and the comments will look 1, 5, 10 years ahead.

Expand full comment

So, were you completely unsurprised by GPT-3/4 & ChatGPT?

Expand full comment

No, but I wasn't surprised that nobody seems to be able to extract any value from them except that one HustleGPT grifter.

Expand full comment

So you think that this is where the technology tops out? The first iteration of a system that has been available for a few months?

Expand full comment

I've used ChatGPT to find the true names of concepts by describing them.

Search engines work well if one starts with the true names, but it's hard to get them to reverse their magic. The big correlation matrix inside an LLM trained on a web corpus is great at providing the names of locations in concept space as described in plain language. Ignorance of the true name leads me to miss prior art and shipping products, so this saves me from reinventing someone else's wheel or having to grope through reference chains to move sideways.
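
For the curious, a minimal sketch of that "reverse dictionary" use, assuming the openai Python client as it existed in early 2023; the prompt wording and model choice are mine, not anything specified above:

```python
# Hypothetical sketch: ask an LLM for the established term matching a
# plain-language description (the "true name" lookup described above).
import openai

openai.api_key = "sk-..."  # your API key here

def true_name(description: str) -> str:
    """Return the model's best guess at the established term for a concept."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": "What is the established technical term for the following "
                       f"concept? Reply with the term only.\n\n{description}",
        }],
    )
    return response["choices"][0]["message"]["content"].strip()

# Example: this should come back with something like "memoization".
print(true_name("caching the results of a pure function so repeated calls "
                "with the same arguments are not recomputed"))
```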

Expand full comment

Wait, what? Plenty of people have written a lot of valuable code, and saved time writing grant applications, and done other things of value! And it's only been four months since the release! This seems like more value than people got out of smartphones in the first four months, or social media, or spreadsheets.

Expand full comment

Yeah but it's free. Almost anything is valuable if it's free, on account of cost/benefit -> 0. Where the rubber hits the road is when they start charging for it at a rate sufficient to recoup their capital, conduct maintenance and improvement, and deliver an ROI to investors. If people are still willing to fork out that price for the things you mention, then it will have been proven to be valuable. You can't conclude boo about the economic value of something during a loss leader sale.

Expand full comment

The claim wasn’t that no one has made a *profit* off it - the claim was that no one’s been able to extract *any* value.

Expand full comment

Well it is a Brand New Tool, give people a little time. It's become pretty common to see substacks illustrated by Midjourney or DALL-E, and I've "extracted" some personal value by outsourcing some bullshit writing. Just because it's not Big Business yet doesn't mean nobody's getting value.

Expand full comment

So here's something I just learned from reading the NYTimes profile of Sam Altman (https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html): he wants to be in control of what would functionally be a world government:

"He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors like Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.

"His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.

"If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.

"But as he once told me: “I feel like the A.G.I. can help with that.”"

So the idea is that AI *might* subvert the entire global economy and the very nature of human labor. But then this would create so much wealth for OpenAI that it could effectively redistribute it all back to humanity - presumably on whatever terms OpenAI sees fit.

What could go wrong?

Expand full comment

Someone with NYT access (which I don't have, or else I'd write the thing myself) needs to compose a parody interview wherein Sauron, relaxing poolside in Barad-dur under a pre-eruption volcanic canopy, muses about all the ways Middle-Earth is likely to benefit from his Ruling Ring research project once it's up and running.

Expand full comment

"Big things, Cade. Big things. Can I get you a refill on that miruvor? Of course it's not Valinorean, but I think we do a pretty good job here with my own vineyards and winery.

What do I hope to achieve from the project, if we get it up and running? Well first, let me say it's a matter of 'when', not 'if'. I'm building on a lot of the preliminary research done by my former mentor, even if we have drifted apart on our understanding of what the needs of the current times are. I will always be proud to have learned my craft under Aule.

So - my purpose in this is the reorganising and rehabilitation of the ruin of Middle-earth. I don't think even those opposed to me will, or can, deny that it has been neglected by the gods. The aftermath of the great war has left much destruction in its wake, and what have the victorious forces done, I ask you Cade?

Returned and indeed retreated to Valinor. Well, someone has to take on the job and if they won't, then I will just have to step up. I cannot see such potential for progress and advancement left to go to waste, not to mention the suffering of all sentient beings that you can see for yourself.

Just look around, and ask yourself: is this the best it can be? Are we *really* doing the 'right thing' in standing aloof and not extending the helping hand of guidance to the younger races? Do not those of us with vast cosmic powers and eons of experience and knowledge *owe it* to the world and those inhabiting it to use those abilities to the fullest for the good of all?

Even some of those opposed to me are beginning to see the wisdom in co-operation. Naturally, I can't name any names at this delicate and early juncture, but let me say this much - working together with the Eldar, they and I can achieve the healing of the desolate lands.

And not just healing! No, helping one another, we could make Middle-earth as beautiful as Valinor.

But naturally all this has to be under my sole leadership. A project as vast and all-encompassing as this can only work through unity, not by splitting off separate realms with their own independent rulers. That kind of division and lack of a clear vision under one Lord has been responsible for far too many tragedies already.

But I think I have the solution to that problem, Cade. I'm working on my own particular field of research, and I do believe I've cracked it. I can't say any more, but I'll leave you with this code word:

One."

Expand full comment

Let's say that OpenAI have so much of a lead that nobody else comes close.

I can only see them amassing so much wealth and power if it happened almost overnight. Even slightly longer timelines for this surely result in nationalization by the US or foreign intervention.

"But that wouldn't be legal"

Who's going to stand in the way when the alternative is a private company becoming the most powerful entity in existence? This is an existential threat to almost all the world's major institutions - the idea that everyone just sits back and lets some silicon valley dorks literally take over the world, if they have any power to even attempt to stop this, is nuts.

And if OpenAI know this and plan to take over the world in one fell swoop fashion, then what possible reason more do you need to regulate the hell out of them.

The US government, and really all governments, suck big time. But their awfulness is limited by how limited their power is and the adversarial dynamics of different countries. That all gets turned on its head when one org has so much power, however benevolent they claim to be.

But ultimately, the idea they'll accomplish this seems absolutely absurd.

Expand full comment

It's probably not going to happen, but it's worth having a contingency plan for it.

Suppose you're the CEO of a company and you think it has a 1% chance of accidentally capturing 50% of the world's GDP. (What's your contingency plan for that scenario? Remember that you might have 50% of the world's GDP, but the rest of the world still has 100% of the military power.)

Expand full comment

I'd probably do the same thing, honestly (assuming I wasn't worried about extinction risk). Your choices are to do the most good possible using your own judgement (perhaps with help from the superintelligent AGI you hypothetically invented), or leave it up to the mess that is politics.

If the objection is, "but maybe Altman will be greedy and not benevolent", then yeah, I guess then they'd just be like most profit-maximizing companies.

What would you argue is the best course of action if you want to do the most good?

Expand full comment

The objection is that no man should have that much power, even IF he imagines himself benevolent.

Expand full comment

I mean, is that arguing for vigilante-style "I'm taking control of the world because I know better" schemes ?

A small problem with that is that there are 8 billions of us on this planet, that's a whole lot of views and plans and ideologies and approaches towards running the world, only one of them gets to actually run the 1 world we have. What happens when a vegetarian vigilante billionaire clashes with a raw meat vigilante billionaire, both want to "do the most good" by taking control of the world and imposing their ideology ?

My own preferred solution to "How to do the most good given an indifferent world" is usually a libertarian answer, "go make your own world", that is: exit. If you don't like your country's stance on AI, you should make your own country. Unfortunately Nation States have already carved up the whole world and forced a """"Social Contract"""" on all of their territories' inhabitants (a 'contract' which they didn't sign, and which new inhabitants are born into without consent); outer space would be cool for exactly this purpose, any rock could be a new nation under a new leadership, honoring a new ideology.

Expand full comment

No, I don't think Altman is planning to take over the world. He's saying if OpenAI invents AGI and becomes very, very rich, they'll redistribute their profits to the public.

Expand full comment

And do you believe Altman?

Expand full comment

Does it matter?

The point is that the original comment is calling for alarm and pessimism based on an article that adds no new negative information about Altman.

A CEO thinks their company's product could make the company very rich and says they'll redistribute their profits to the public. Assume he's honest or lying as you wish, but there's nothing sinister about the statement. If he's lying, then he's what we'd consider the default for a CEO: someone who tries to grow their company and doesn't donate all their profits.

The original comment tries to pattern-match Altman to a dictator because AGI could make someone really powerful (although as others have mentioned, it's very unlikely governments will let OpenAI keep that power). But that's not new information. We already know that AGI, if invented, could be really powerful. Altman being aware of that doesn't indicate anything other than that he thinks the same thing most normal people would. And that he's honest enough to talk openly about what AGI could do, while AI companies with more savvy PR departments keep their mouths shut about AI destroying jobs and other scary things.

Expand full comment

As far as I can tell Altman is pushing exactly the lines that will maximize hype for his company, and attention to his personal brand. Honesty seems orthogonal here. I expect a large round of funding soon that will allow him to regain the control ceded to Microsoft.

Expand full comment

The objection isn't "maybe Altman will be greedy and not benevolent" (though he might be, and in any case anyone who is phenomenally wealthy and powerful, as Altman would be in this scenario, is just going to have a systematically skewed perspective on global interests). It's that no one entity outside of democratic control should have that much power. Like, he's literally talking about taking over the distributive function of *the entire global economy* in this scenario.

And it's true that "the mess that is politics" falls short of whatever utopian ideal Sam Altman (or you, or I) thinks he could bring about. But if you think that mess is worse than what a single hegemonically powerful figure is likely to bring about - even with the best of intentions - then I'd just sort of gesture in the general direction of all of history as my counterargument.

Expand full comment

>then I'd just sort of gesture in the general direction of all of history as my counterargument.

This is my favorite type of counterargument :)

Expand full comment

The article was written by Cade Metz, same guy who wrote the hit piece on Scott a couple years back. I have no specific conclusion to draw from this, just thought I'd point it out.

Expand full comment

Maybe Metz got this aspect of Altman's thinking wrong, in which case I'd be happy to see it corrected, but for what it's worth this profile is a total fluff piece.

Expand full comment

To call what Metz wrote in "Silicon Valley’s Safe Space" a hit piece doesn't seem correct to me.

Expand full comment

A lot of people here are deeply committed to thinking that the Metz article about Scott was a hit piece. I think it was a not great piece, but clearly not a hit piece - it just feels like a hit piece if you're not used to getting media attention.

Expand full comment

Definitely not a hit piece. Seems pretty accurate to me and has an appropriate tone. How much more "great" could it have been? It's about a Substack among hundreds. That he might make $250K a year seems to me to confirm that the world has gone mad in terms of valuing things. Is that what NYTimes or WaPo columnists make a year? Meh, I think tipping culture is insane also.

Expand full comment

It absolutely was.

Expand full comment

Ah, that does put a different complexion on it. Metz is the kind to make the worst construction on it. I would have thought, in that case, Altman would have been more prudent than "oh sure, come visit me on my ranch in Napa" because that will end up as a hit piece.

Expand full comment

"His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people"

And that is the bit I never believe. First off, anyone who wants to capture the WORLD'S wealth should be dragged off a la lanterne. Second, while the wealth will flow into OpenAI, I have very grave doubts about any of it flowing out to 'the people'.

Investors, shareholders and the board will suddenly find all kinds of reasons why re-distributing the largesse amongst themselves is the best way forward for the company, while 'the people' will be fobbed off with vague anodyne 'AI means more opportunities for everyone to be creative and entrepreneurial', i.e. 'make yourself rich like we did'.

A rich guy sitting on his Napa ranch promising to make me rich too in some hypothetical future where his company, that turned on a dime regarding its non-profit origins, gets all the dough and somehow somehow that will trickle down to me somehow somehow? It's braggadocio like this that evokes my inner Madame Defarge.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

You have to wonder what kind of trusting fool gave that guy capital when he so clearly needs a few screws tightened. I guess you have to kind of grit your teeth in that situation, if you only gave money to the most level-headed of businessmen, you'd never end up owning 10% of AMZN.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

"You have to wonder what kind of trusting fool gave that guy capital"

"Does he sound like a greedy money-hunter who'd sell his granny into the salt mines for an extra $0.01 on the share price? Yes? Then that's the guy for me! Because his greed and avarice will ensue profitable returns on my investment, without bleeding-heart crap about being ethical and the rest of it".

Before anyone says that's an unfair characterisation, anyone who is planning to generate trillions for his company but suddenly gets all "uh, I dunno, can't imagine it" when it comes to the redistribution of same - that's greed.

Expand full comment

The issue for an investor would not be Altman's broad social ethics. For the most part, those are generally as irrelevant as they are in deciding which chocolate chip cookie recipe to bake. Who cares what you think about the Uighur situation when deciding whether to add pecans or not?

Investors certainly care about an entrepreneur's *personal* ethics, e.g. whether he will be honest in making contracts, report his situation honestly, treat his employees fairly, comply with the spirit as well as the letter of regulations. A failure in any of these areas would be a real problem for an investor, so they care about that.

The problem I'm identifying has nothing to do with his ethics, but rather the realism of his vision, and his focus. The idea that his company would capture $trillions of the economy by replacing humans everywhere is utterly delusional, for quite a number of reasons. So someone who seriously imagines that, in a boardroom as opposed to three sheets to the wind around the hotel pool after the deal closes, is someone you worry about, because he doesn't have an entirely firm grasp on reality.

Secondly, if he appears to be spending brainpower on an issue that will never actually arise, you wonder if the guy lacks sufficient focus, and focus is really, really important in a start-up. You have a frighteningly limited amount of time and capital to make something very marketable happen, and you need relentless focus on getting done what you can get done in that limited time, with those limited resources. You need to be ruthless about jettisoning what is, or becomes, implausible, and indeed this is a common problem in entrepreneurs -- that they lack focus -- and one reason why VC firms will often insist on having a business guy be the CEO, or at least be in a position to heavily influence The Idea Guy.

Expand full comment

Well, my view on business ethics is less "would they drown a puppy" (unless it was to increase profits) and more "will they squeeze the last drop of blood out of the turnip?"

Anyone who does think about capturing trillions with his one business is definitely in the "squeeze that turnip harder, harder! it can still yield one more drop!" side, which is what investors are primarily interested in, since they're not there to just throw money onto a bonfire but to get a good return.

That does not mean that it's feasible to actually capture the trillions, but the mindset of "first we - and I mean "we" not "you" - get all the money, then mumblemumble we'll figure out what to do with it sometime but in the meantime it's resting in our account mumble mumble enriching humanity" is not one I trust. Clear on the "we get the money", less so on "and this benefits the vast mass of humanity somehow".

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Well, presumably you're looking at it from the employee of Giant Firm (or Government Org) point of view. Just imagine instead your cousin approaches you asking for you to take out a second mortgage and trust him with $250,000 so he can set up a business. He tells you the plan, says it's sure fire, you'll earn all your money back and then some, enough to have a nice retirement -- but you also have the fact that if he loses it all, there's nothing you can do, and you'll be eating cat food in retirement.

In that situation, what would be important to you to know about your cousin? What I wrote above is what I think would be important. You want to know he'll be honest with you, you want to know he'll build a reasonable business that doesn't implode or result in scandal and arrests (because that will totally vaporize your money), and you want to believe he has a good, sound idea, and the discipline and focus necessary to pull it off.

Conversely, you would worry if his ideas seem fuzzy or grandiose. "I can make good money selling lemonade in Victoria Station" might seem plausible, but "...and then my lemonade will be so good that the entire kingdom will flock to Victoria to buy it, changing travel patterns across all of Europe, and I can capture 10% of the GDP of the entire UK! Now what should I do with the money...?" might worry you about whether cuz has all his marbles. That's all I'm saying.

Expand full comment

I'm kind of skeptical that even if AI provides outsized gains, any one company will be able to capture it for any durable length of time. The trajectory we're on seems to show a very clear tendency for AI tech with near-state-of-the-art capabilities to get miniaturized with startling rapidity, open sourced, and made runnable on local devices. If anything I feel like AI has less of a moat than the previous generation of tech. That, plus merely demonstrating that certain AI capabilities are even possible seems sufficient to guarantee copycats in less than a year.

I mean, if you summon a godlike being in a bottle I suppose all bets are off, but even then I think the least likely outcome in that scenario is "the genie remains exclusively for my own benefit forever, and nobody else succeeds in summoning their own genie"

Expand full comment

Yeah I agree, and his thinking here is shockingly callow and superficial. "We'll simply disrupt every aspect of global economic production and possibly human social existence, and to the extent that this is inconvenient for people we'll simply give them a bunch of money to make up for it." As if he'll be able to maintain control over this *totally unprecedented technological power* from beginning to end - and easily resolve any unforeseen consequences that might arise from it.

Another telling comment from the profile: "He now says that during his short stay at Stanford, he learned more from the many nights he spent playing poker than he did from most of his other college activities." Presumably, then, he didn't get much out of whatever humanities courses he might have been enrolled in. If he had he might have learned a bit about hubris, humility, the scope of human control over events, and some other pertinent lessons from the literature of the last 3,000 years.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Yeah in his fantasy scenario what happens is the mob and/or the US government confiscates his company from him fairly quickly.

Expand full comment

I agree that no single company will be able to hang on to the advantage for long, but they're all convinced that getting there first even for a short time will be enough of an advantage to make them very, very, very rich.

Expand full comment

It's noticeable that the internet isn't controlled by Altavista and Netscape!

Expand full comment

That's what they want you to think.

Expand full comment

Which is precisely why their claims of taking AI risk seriously are unadulterated bullcrap.

Expand full comment

That’s probably true, and certainly true in some other businesses. Make hay while the sun shines before the advantage is competed away.

Expand full comment

I mean given that this an NYT profile of a tech CEO, I'm just going to assume it's a ridiculous misrepresentation by default.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

I have two relatively simple arguments for why we need not be concerned about x-risk from LLMs. Please explain why these arguments are naive/mistaken.

(A) LLMs will cap out at human-level intelligence.

Obviously, LLMs are already superhuman in the sense that they have recall capabilities far beyond any human's, but in terms of general reasoning I would say they're at "above average human". However, being trained on human text, their reasoning patterns/concepts will cap out at "very smart human", and they will always have that human-emulation as their basis (even if they can genuinely make new/creative things).

(B) LLMs will remain truly unaligned at its core. [EDIT: I think "unaligned" is the wrong word choice here. I think I mean something more like "nonagentic"]

The nature of LLMs is to predict text, plus some RLHF to guide the text choices (but still based essentially on what the model already considers "probable", right?). I just read the LW summary article on Oracle AIs, and the safety concerns still seem to be of the nature: an Oracle has a goal of making truthful predictions about the world or giving helpful advice, and ensuring its continued functionality/resources is an instrumental goal there. But LLMs just don't seem to have these goals at all. It really is text prediction all the way down.

Obviously GPT-4 is enough to radically transform society. But I'd like to hear responses to these arguments about actual human extinction or dystopia caused by unaligned AI.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

I don't think LLMs by themselves are likely to cause x-risk anytime in the near future, but here are a few responses to your arguments.

For A: LLMs solely trained on next token prediction probably can't get more capable than humanity. It could plausibly have a bigger breadth of understanding than any one human (in fact this is already true by some definitions), but that is unlikely to lead to x-risk in and of itself.

Risk can emerge, however, with what researchers do after they train the model on next token prediction. It's common these days for the second round of language model training to involve reinforcement learning from human feedback (RLHF) - in the limit, a model trained this way is limited by what text humans can recognize as good or bad, which seems like a higher bar than being limited by what text humans can generate themselves.

And machine learning researchers are pushing forward with a variety of different methods for improving on the base next-token predictor. Some examples include using LLM reasoning to generate data that can be trained on (e.g. an LLM can "think" through something for many tokens, and then train itself to come up with the same result immediately instead of after a while), using and integrating with external tools, and taking actions in/learning from simulated as well as real environments. Via these methods, LLMs, or systems incorporating LLMs, could get past the human cap.

As a relevant example, AlphaGo, the first AI to beat top humans in Go, was initially trained to imitate human play. Subsequent reinforcement learning then allowed it to get far better than any human. There are some characteristics of the game of Go that make it far easier to bootstrap learning than general behavior (namely that self-play is allowed and it is easy to score how good your moves were), but in principle, a similar kind of approach could be possible.

For B: On "(but still based essentially on what it already considers "probable", right?)", yes when models are trained with RLHF, they usually have a component to their loss function which requires the models to not deviate too far from the original 'predict next token' model.

I do agree that standard LLMs are likely nonagentic. There is some limited evidence that there is some inner optimization going on within transformers, but I think the evidence is fairly weak at this point. More concerningly though, if you were to heavily train a language model with reinforcement learning without a deviation punishing component, then it could potentially become agentic. In addition, a broader RL system that incorporated an LLM as a component (which people are already doing iirc), could also be agentic.
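
To make the "deviation punishing component" mentioned above concrete, here is a minimal sketch of the KL-penalized reward that RLHF-style fine-tuning commonly uses (plain Python; the function and variable names are illustrative, not from any particular library):

import math

def kl_penalized_reward(reward, logprob_policy, logprob_reference, beta=0.1):
    # Reward the policy for text humans rated highly, but subtract a penalty
    # proportional to how much more probable the policy made this token than
    # the original next-token-prediction ("reference") model did.
    kl_estimate = logprob_policy - logprob_reference
    return reward - beta * kl_estimate

# Toy example: a completion the reward model likes (reward = 1.0), but which
# the fine-tuned policy assigns much higher probability than the base model.
print(kl_penalized_reward(1.0, math.log(0.9), math.log(0.2)))

The larger beta is, the more the model stays pinned to its original text-prediction behavior; drop the penalty entirely (beta = 0) and you get the purer reinforcement-learning setup flagged above as more concerning.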

Expand full comment

Who is saying LLMs pose an x-risk?

Expand full comment

Those arguments both seem wrong to me.

A) The LLMs will probably cap out at less than maximal human intelligence, but the breadth of their intelligence will definitely be superhuman. (It already is.)

B) Calling LLMs aligned or unaligned "at its core" doesn't make any sense. The LLMs, at their core, have no relationship to the physical world at all. This severely impacts their performance, but doesn't mean they can't operate approximately as designed.

A pure LLM is extremely safe WRT direct actions, but has the possibility of convincing people to do stupid or dangerous things, without real limits on how stupid or dangerous. Not out of malice, but because .... well, because water flows downhill. Flowing downhill isn't malicious, but it can be destructive.

Expand full comment

I don't think even Eliezer predicts doom from LLMs.

Expand full comment

The first argument is equivalent to saying that a human can never become smarter than their teacher. I'm not too worried about LLMs getting to AGI or _especially_ superintelligence, but it's not because of limitations in the training corpus.

Expand full comment

It is if you believe the people who think the LLM already has some level of intelligence. Because their argument is that it *gained* that intelligence -- which it most certainly did not have when it was first programmed -- by means of the training. If the training is the origin of the intelligence, then ipso facto the intelligence cannot exceed what is implied by the training.

Expand full comment

The maximum amount that can be learned from a given data set is not the same as what a given intelligent agent has learned from that data set, human or otherwise. The difference can be very large and very meaningful.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Again, it depends on where you think the intelligence lies. If the intelligence lies in the design of the machine that is learning -- which is what we think is true about the human brain, id est it would be intelligent even in the complete absence of training -- then, sure, the intelligence is not limited by the training.

But if you think the intelligence lies *entirely* in the training, meaning that with which you start off (a neural net with all weights set to zero, or some random number) has by definition no intelligence at all, then no, the intelligence cannot possibly exceed that which is encompassed by the training.

Put it this way: if you think intelligence is pattern recognition, then unless you have wired in some facility for original pattern creation, the patterns that can be learned must all be present in the training data. If the training data doesn't have the pattern, the pattern cannot be learned.

Expand full comment

I'm pretty hostile towards most forms of Searle's Chinese Room. My stance is that intelligence is very much a property of systems, which can be constructed out of individually-unintelligent components. Trying to localize intelligence in a dataset after acknowledging the untrained model is unintelligent is about as sensible as dissecting a brain to find out which neuron has the intelligence. Not completely useless, but definitely missing the point.

> Put it this way: if you think intelligence is pattern recognition, then unless you have wired in some facility for original pattern creation, the patterns that can be learned must all be present in the training data. If the training data doesn't have the pattern, the pattern cannot be learned.

Regression problems about "hard-wired originality" aside, generalizable patterns can be inferred from limited specific evidence - Yudkowsky's classic example is deriving general relativity from a photo of a blade of grass bending. There's a theoretical upper limit to pattern recognition, sure, but it's *high*. 'Most people need a textbook to explain evolution even after they've been told about it, but Darwin figured it out from scratch by looking at some finches' and all that.

Expand full comment

Show me an example of an LLM creating a pattern nowhere to be found in its training data and we'll talk again. Otherwise, I acknowledge you can wave your hands just as well as the next guy, but as a hard-core empiricist I find it unpersuasive as to anything.

I'm unaware of Yudkowsky's classic example, but from what I know so far of the guy, I imagine it to consist of 75% plausible bullshit, so this is also not persuasive. But I hear GPT4 can now accept photo input, so by all means show it a photo of a blade of grass bending, ask it what this means, and if it writes down the Einstein equations let me know, I will join the church forthwith.

Expand full comment

Lead paint and asbestos insulation were widely used in American homes until evidence emerged that they were dangerous to human health, and they were either banned or fell out of use for other reasons.

Aside from gas stoves, are there any appliances or substances commonly present in newly built American houses that might be on track to be deemed health hazards in the future? For instance, is there a growing body of scientific evidence that CPVC slowly releases chemicals into drinking water that cause brain cancer?

Expand full comment

Automobiles - they're the leading cause of death for children, even ignoring their respiratory hazards.

Expand full comment

The benefits of automobiles are tremendous. They're not going away anytime soon, and for good reason. Yes, there are costs, but the benefits are on a whole nother scale.

Expand full comment

The benefits of asbestos were also tremendous - but when equally effective materials for insulation and fireproofing were found, the "health hazard" side won out. Automobiles are just orders of magnitude both more deadly and more valuable than asbestos or lead paint.

Expand full comment

FWIW the gas stove freakout turns out to have been based on junk science.

https://hwfo.substack.com/p/the-gas-stove-asthma-lie

Expand full comment

The biggest health risk to 90% of the population is their screen time.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Or obesity, lack of exercise, smoking, bad diet, drug abuse, drinking and driving, indoor radon, diesel exhaust, E. coli on the salad leaves, MRSA on the skin, cosmic rays, war and crime...it's a very, very long list before we get to the possibility that you might get liver cancer from phthalates in the sippy cup you used when you were 2.

But maybe that's why people get hung up on this stuff. The list of risks about which you can do squat, or which require difficult acts of self-discipline to ameliorate, is discouragingly long. Maybe if you can firmly strike #996 off the list ("There! Don't need to worry about THAT ever again!") it makes you feel better.

Expand full comment

Neither lead paint nor asbestos is bad for you if used only for its original purpose, and left undisturbed. Lead paint is only bad for you if you eat it, which is not in its design of course. Asbestos is only a problem when you are trying to remove asbestos insulation and you take insufficient precautions to not breathe in the dust. (Mind you, "sufficient precautions" are very expensive, which is why it wasn't done in the first place, and why nobody wants to work with the stuff now.) It's not that the substances were dangerous in their application per se, it's that when their use collides with human carelessness, bad stuff happens too easily.

Expand full comment

As the Simpsons put it: "Lead paint - delicious but deadly"

Expand full comment

As paint wears, small pieces flake off as fine dust, and they get everywhere, including the air you breathe. Asbestos is a much better argument for your point.

Similar to lead paint, plastic in its original designed use is safe (except for the volatile fractions), but with use pieces flake off, and those flakes get EVERYWHERE!!! They're in penguin poop, they're in steaks, etc. (*IN* steaks, I'm not talking about the dust on top of the steak, but within the meat itself.) What's not yet clear is just how dangerous which of them are, but it IS clear that they spread everywhere as really small flakes. Some of them are known to cause reproductive damage, but what effect they have at really low concentrations is unknown. Perhaps none. Is that the way you want to bet?

Expand full comment

That's silly. The air you breathe contains atoms and molecules of every substance on the planet, including tiny bits of plutonium, some radon, atoms of cadmium (way more toxic than lead), deadly bacteria and viruses, and just about every evil thing you can imagine. But as Paracelsus says, the dose makes the poison, and if you're going to get fearful at *any* amount of some strange material, you might as well just give up, you're fucked no matter what.

So you *are* betting that way, every day of your life. The fact that you haven't thought about it, with respect to PAHs generated by a coal plant in Irkutsk, or the cough from a TB patient in your nearest hospital that got out the ventilation shaft, just means you're blissfully unaware that you don't live in some kind of ideal sterile environment where you breathe O2 and N2 pure down to the atomic level.

Expand full comment

Yes, you "are betting that way", but is it the way you WANT to bet? People were betting that lead paint was harmless. Micro-plastics are known not to be neutral, what's not known is how significant that is. Perhaps it's one of the drivers of the slowdown in population growth. Perhaps not. (I suspect economics is more important.) But if it IS a significant reproductive depressant, then one can expect that most other animals will also be affected. (OTOH, habitat destruction is so significant, that I don't think one could detect anything that only caused a small percentage of reproductive failures.)

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Yes, I *want* to bet that way, and choose to deliberately. I have a limited time on this Earth, and a limited amount of psychological effort to allocate to things. I damn well want to spend it on stuff that matters to me; if instead I fret it away worrying about inconsequential weirdness bubbling around at the "might notice this with high-powered instruments and a great deal of care" level, then I won't get stuff done that really matters to me. In this life it helps to have mental focus, if you don't want to fritter away your time chasing after every squirrel.

I'm unpersuaded so far that microplastics mean dick (other than a credit to the instrument makers and analysts who are able to detect them, which is admittedly cool). A priori, the chemistry seems very dubious. Polymers made of hydrocarbons are about the most innocuous substance I can imagine being introduced into a working physiology. If nothing else, the chemistry is almost identical to what is already going on there -- we're not talking about some weird heavy metal catalyst, or some fiendishly clever viral enzyme, it's just freaking carbon, hydrogen, the odd chlorine and oxygen atom, the same stuff from which cells make their molecules. So a priori I find it pretty dubious that this stuff has any important effect at all.

Doesn't mean it *can't*, of course. But I'd need some pretty firm data to start to think it does.

Expand full comment

> Lead paint is only bad for you if you eat it, which is not in its design of course.

Toddlers love to chew on things.

Expand full comment

Hence my final sentence.

Expand full comment

One word: plastics. But that's largely speculative on my part. There does seem to be growing concern about PFAS chemicals in many household products, including homebuilding materials like some bamboo flooring and floor finishing products. I'm reluctant to cite this outfit since they seem hardly better than clickbait, and they've made my wife quite paranoid about dental floss, but they do claim to do laboratory testing.

https://www.mamavation.com/science/safest-bamboo-flooring-pfas-forever-chemicals-lab-reports.html

One indirect health risk that I rarely see discussed is decreasing home ventilation, which I expect is a force multiplier for chemical exposure. It makes sense to me that a modern house with recirculated central HVAC would be more susceptible to air quality issues than a 1950s dwelling with the windows open most of the year, allowing outgassed chemicals to be flushed outside. Modern efficiency standards, which I like for other reasons, seem like they'll only exacerbate the problem. I'm not sure strict ideals like Passivhaus are even desirable from the standpoint of human welfare. ERVs are a relevant development in this space, and the growing interest in residential CO2 levels is approaching the issue from another angle. A rough sketch of why ventilation matters so much is below.
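
To put a number on that "force multiplier" intuition, here's a minimal sketch of the standard well-mixed single-zone model (plain Python; the emission rate, room volume, and air-change figures are made-up illustrative values, not measurements):

def steady_state_concentration(source_mg_per_hr, room_volume_m3, ach, outdoor_mg_per_m3=0.0):
    # At steady state, indoor emission is balanced by ventilation, so
    # indoor concentration = outdoor level + source / (air changes per hour * volume).
    airflow_m3_per_hr = ach * room_volume_m3
    return outdoor_mg_per_m3 + source_mg_per_hr / airflow_m3_per_hr

# Same hypothetical source, leakier vs. tighter envelope:
print(steady_state_concentration(10, 50, 0.5))  # ~0.4 mg/m3, drafty older house
print(steady_state_concentration(10, 50, 0.1))  # ~2.0 mg/m3, tight modern envelope

Under this toy model, cutting the air exchange rate by a factor of five raises the steady-state level of anything emitted indoors by the same factor, which is the sense in which a tighter envelope multiplies exposure to whatever the materials give off.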

I am curious whether we will eventually discover health issues with PEX freshwater piping, which is a de-facto standard for new construction. (And which I've been retrofitting to my house, replacing 70yo galvanized pipe.) Flexible PEX also seems less favorable to internal sealing by mineral deposits, which frequently mitigated much of the risk of earlier rigid piping systems of copper, iron, and even lead.

Expand full comment

At least PEX pipes would be easy to remove if they turned out to be unsafe.

Expand full comment

@sri_srikrishna on Twitter is putting up results of his research on particles released into the air when you cook things in hot oil. Says the particle count is extremely high, truly dangerous, approximately equivalent to smoking a cigarette, and that the usual oven hood fan does not make much difference in what the person cooking breathes in. I've been following him for about a year, initially because of my interest in air purification in public places, and he sounds smart and sensible. He put up a big batch of research results about frying some time in the last 3 weeks, should be easy to find.

Expand full comment

As dangerous as smoking... one cigarette?

I don't want to minimise the harm of cigarettes, my dad died from them at the age of 52. But he didn't die from smoking one occasionally.

Expand full comment

I cannot remember the details of the Twitter posts I read, except that the upshot was definitely worse than "cooking one item in hot oil in a pan is equivalent to one cigarette." I would have shrugged that off too -- so you cook stir-fry every night, that's one cigarette per day, big deal. You should go look at the tweets. The most concrete thing I remember is that he showed the reading on a particulate matter calculator set approximately as far from a pan of stuff frying as the cook would be, and the needle was way up at the top, in a zone labelled "dangerous". I sort of half-skimmed, half-read it, and came away with the impression that he was probably right, and that this should be taken seriously. If I routinely cooked things in hot oil in a pan I probably would have read it more carefully, but actually I only very rarely do that kind of cooking.

Expand full comment

I should check it out, I suppose... I do stir frys but I always assume that eating the stuff is probably the most dangerous part. I have significantly outlived my dad already, so there is that.

Expand full comment

The new research in the news recently about the respiratory effects of gas stoves would seem to qualify. Similarly there's a pretty huge correlation between childhood asthma and having grown up with a fireplace. Though like with lead paint we really ought to have suspected that a flame source that puts out CO and particulates wasn't great for you. Similarly we have records of people knowing lead was toxic going back shockingly far, making it seem particularly egregious it took so long to stop using it everywhere.

Expand full comment

I'm not going to argue that gas stoves are _good_ for you, and they likely are _some_ amount of bad, but they are much _much_ less bad for you than recent media hysteria has made out, and they are not even in the same ballpark as lead paint and asbestos. With proper ventilation, the health impacts approach zero, and with improper ventilation, they still seem to be incredibly minor. Minor enough that, remembering what it was like when I had bad range ventilation, the health impacts wouldn't be my primary concern.

Emily Oster goes into more detail on the literature here: https://www.parentdata.org/p/gas-stoves-and-asthma

Expand full comment

Alright, I can't believe that my google skills can't figure this one out... How can I use AI/LLM's to assist with writing VBA code in Excel? Is there a plug-in for this along the lines of "Copilot"?

Expand full comment

I mean the easy way is just to ask https://chat.openai.com/chat how to do things in VBA - I've used this for a few small things (ie grouping and ungrouping columns) and tested it on larger things I already had code for (connect to a SQL database and pull records given X SQL query) and it looked great on both.

Interested to hear if there are better options - obviously Microsoft AI integrations are coming for Office 365, but unsure how long it'll be until they're ready.

Expand full comment

Yeah, this occurred to me, but ideally the system would take into account whatever huge spreadsheet with 1000 lines of code I already have.

Expand full comment

Isn't this what Sourcegraph is trying to do? I don't think they currently focus on VBA but I don't see why they couldn't. After all, the game is to load enough relevant context into the context window to bias the model to generate more relevant completions.

Expand full comment

MS Office files are ultimately just zipped xml files, plain text once they've been unzipped. You can try feeding that into the prompt directly if it'll fit into the context window. Something like that might even be a good test of how "actually smart" ChatGPT is, as I imagine there weren't a whole lot of dumped to text office files in the training set.
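
If you want to try that, here's a minimal sketch of pulling the plain-text parts out of a workbook with Python's standard zipfile module (the path and sheet name are just illustrative). One caveat: the VBA macros themselves usually live in xl/vbaProject.bin, which is a binary blob rather than XML, so you may still want to paste the VBA source from the editor alongside the sheet XML.

import zipfile

path = "workbook.xlsm"  # hypothetical file; any .xlsx/.xlsm is just a zip archive

with zipfile.ZipFile(path) as zf:
    # List the archive contents to see what's available.
    for name in zf.namelist():
        print(name)
    # Sheet structure and formulas are plain XML and can go straight into a
    # prompt, subject to the model's context window.
    sheet_xml = zf.read("xl/worksheets/sheet1.xml").decode("utf-8")
    print(sheet_xml[:500])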

Expand full comment

Cool idea, thanks!

Expand full comment

"a spreadsheet with 1000 lines of code" - it seems to me you are using the wrong product.

Expand full comment

Microsoft seems to disagree. They have been treating VBA in Excel as a first class programming language for at least a decade.

Expand full comment

And I think there is a proposal to bring Python support into Excel so you could use Python or VBA for scripting. Microsoft definitely views Excel as a software platform more than a stand-alone product.

Expand full comment

Excel is vastly underrated and overmaligned. There are applications where the right tool is C++ or Python or whatever, but the bar can be pretty high. Both the Aerospace Corporation and JPL use a large suite of linked Excel workbooks with many thousands of lines of embedded code for doing preliminary design studies of spacecraft. It's not because we don't know how to code, it's because we *do* know how to code and thus know how little we'd actually gain for how much effort if we tried to do it that way.

Expand full comment

I would estimate that at least $10 billion in sales commissions is processed in the US each year using spreadsheets with that many or more lines of code. I work for a SaaS that is a replacement for these spreadsheets, and they are extraordinarily complex and used at many companies you would think would just pay SAP or IBM to write a custom program for them.

Expand full comment

That's peanuts compared to the worst atrocities done in Excel.

Expand full comment

Unfortunately you're stuck with Excel in a lot of industries/roles--everyone else uses it and is comfortable with it so any sane alternatives are shot down for not being Excel.

Expand full comment

How do people in the US national security establishment or broader federal government receive psychological therapy? I.e. federal law enforcement, DOD employees, CIA, Congressional aides, various bureaucrats, anyone in the military- basically anyone who works for the government at all....?

The reason I ask is that I've been told by numerous therapists that they're legally required to write up & digitally store notes summarizing what they discussed in therapy with their patients. From there, it's a short hop to Russia, China, Iran or North Korea hacking the patient database and gaining access to said federal employee's most innermost thoughts and feelings, which seems obviously bad. Many therapists use a small number of technology providers, like Simple Practice. Anyways, I think we're all aware that virtually every database is hackable at this point. The Chinese already hacked the Office of Personnel Management and stole security clearance data on every federal employee.

Isn't this.... obviously bad? Is it true that therapists & psychology providers are legally required to record everything in digital notes? If so, what is the purpose of the law - why can't we get rid of it? Inspired by my speaking with a new therapist last week who (essentially) told me that everything I tell her has to be uploaded to a DB. I'm not a federal employee, I just strongly object to my thoughts & feelings being recorded and thus hackable.

Expand full comment

I would hope that people are sensible enough not to discuss state secrets with therapists and that the therapists would be sensible enough not to record them.

I sympathise with your concerns. In general I think therapists should only record vague details and as little as possible. There was a case in Finland where someone hacked the entire records system of a therapy clinic (Vastaamo) that apparently had almost no security measures.

Expand full comment

I have some patients with security clearances, also patients who work in state government, and when they talk about confidential work matters I just use some phrase that identifies the importance of the matter to the patient, without saying anything about the confidential info itself -- "disagreement with management over approach to new work project." I do the same when people talk about, say, shoplifting ("impulsive behavior that could get her into legal trouble"). Also, do not use names of other people in patient's lives, just refer to them as b/f, g/m (boyfriend, grandmother) or use initial of person's first name.

Expand full comment

Thanks. I don't mean confidential work per se, but just revealing one's inner mental & emotional state is also pretty relevant data for spies, blackmail, etc. I.e. if China just hacks a therapy SaaS like SimplePractice and is able to look up, say, that this Congressional aide is an incel. That this colonel has a gambling problem. That this high-level bureaucrat is having marriage problems. And so on.

Also- what's the legal reason for why therapists are required to keep notes on their patients? Why don't we just.... not have that law?

Expand full comment

I think the simplest answer is these types of problems you're concerned about are in themselves disqualifying for a security clearance (unless you're an elected official who gets one no matter what). The reason is exactly because of your concern. We don't want people who can easily be blackmailed.

Expand full comment

Yeah, no - speaking as someone with approximately all of the security clearances :-), we are absolutely allowed to seek mental health care. In most cases, this has to be reported - but it isn't an automatic disqualifier, and there are some contexts (e.g. marital counseling) where it isn't even reportable.

Expand full comment

Wait, I don't think it's true that needing psychotherapy disqualifies people for security clearance. I just had a patient I'd been seeing for a year get hired by the FBI, and he certainly informed them that he saw a psychotherapist. I presume he also informed them of the reason, which was not any grave psychiatric illness, but something that would be technically called an adjustment reaction -- having a difficult time with a relationship break-up and also with a very difficult situation in his family of origin.

I also had someone else, an engineer, who had a security clearance at a military facility. He was a smart, middle-aged guy, and I had the impression he was pretty high up in the organization. He had panic disorder -- had panic attacks only when driving, though. Before upping his clearance, some government agents actually came and interviewed me and asked whether his problem might make him more vulnerable to intimidation. What they had in mind was whether someone could force my patient to tell them military secrets by making him drive in settings that gave him panic attacks. I answered honestly: said maybe, but the panics usually happened when he was alone, and that I thought that driving with guys in the car who were determined to extract secrets from him was such an unusual situation that my patient would not be in panic disorder mode, but preoccupied with the problem of the 2 thugs he was with. Also, I added, everyone is terrified of various things, such as long falls. You can scare the daylights out of anyone by taking them to a high place and threatening to throw them off if they don't give you the info you want, so the fact that my guy had one extra way he could be scared really did not make him easier to extract info from than anyone else. He ended up getting the new, higher level of clearance.

Expand full comment

To be honest I'm not sure what the legal requirements are. There are definitely practical reasons to do it, though: it's useful to the therapist -- I often reread notes to help me get an overview; if you are sued or a patient makes a complaint to your professional board, you cannot demonstrate you did competent treatment if there's no record; and if you take insurance, they can ask for records as evidence the patient's illness meets their criteria for the level of care they are getting.

Expand full comment

Is it normal or common for children to overthink things, and come up with complex explanations for simple things?

A few examples from my youth:

When I was about 5 or 6, I figured that traffic lights were operated remotely from underground control centres. The control centres, and the operators, bore a striking resemblance to Mission Control - banks of video terminals manned by crewcut 40ish men wearing white shirts and black ties.

"OK, Joe, I'm seeing no cars coming on Main Street, and several cars backed up at the red light on Brumby Avenue - let's switch Main to amber, and red after a few seconds. Then give Brumby a green."

"Roger, copy that, boss!"

*******

The driver ed cars of my youth were equipped with a 2nd steering wheel on the passenger (instructor's) side.

I thought it would be fascinating to see the student driver and the instructor both turn their steering wheels outward; I imagined the car splitting messily, with the grill, bumper, and hood splitting, and the drivetrain dropping onto the ground and spilling vital fluids.

*******

One more - later on I was in Boy Scouts for two years, when I was 11 and 12.

We often wrapped up the evening by playing British Bulldog. We were divided into two teams, on opposite sides of the room. The leader would designate one team as Skins, and the other as Shirts.

Of course the Skins were to remove their shirts, and the Shirts were to keep theirs on.

But invariably I would think "Ah, the Skins will keep their shirts which will serve as an artificial skin, and the Shirts will ...Hey, wait, I got it reversed!"

I would pretty much go through this process weekly.

*******

I think I pretty much outgrew this at some point, but am curious as to how common this might be.

Expand full comment

I just thought of another one ... up to about young adulthood, I thought that the second hand on a wristwatch was called such because the hour hand was fundamental, so basic, that it would be superfluous to call it the first hand. Therefore, the next one down, the minute hand, became the first hand, and the next one down in the hierarchy (that coincidentally measured seconds) was the second hand.

This seems so weird now, but made complete sense to me for years and years.

Expand full comment

It's not *that* far off from the actual origin, where minutes are from "pars minuta" (small part [of an hour]) and seconds from "pars minuta secunda" (second small part).

Expand full comment

Degrees (in the sense of a 45 degree angle) are also divided into minutes and seconds. This is, at least if Wikipedia is to be believed, the older meaning. Babylon used a base 60 number system, which is why they divided a circle into 360 degrees (a nice round number in base 60), and subdivided it by 60.
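
As a small illustration of that base-60 subdivision, here's a sketch (plain Python, purely illustrative) that splits a decimal angle into degrees, minutes, and seconds:

def to_dms(angle):
    # Whole degrees, then successive base-60 subdivisions.
    degrees = int(angle)
    remainder = (angle - degrees) * 60
    minutes = int(remainder)
    seconds = (remainder - minutes) * 60
    return degrees, minutes, seconds

print(to_dms(45.5125))  # -> (45, 30, 45.0)

The same arithmetic works for hours, minutes, and seconds of time.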

Expand full comment

Base 60? Wow! So they had 59 distinct characters (or 60, including 0)? (I'm thinking of Base 16 [Hexadecimal] using 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E, and F.)

I know that our Base 10 is rather arbitrary - some have argued that Base 8 would make a lot more sense, and it's hard to argue otherwise.

Thank you - I'd wondered about the apparent arbitrariness of 360 degrees in a circle, 60 minutes in an hour, and so on.

Hey, let's lobby for 2Pi Radians/6 (or Pi/3 Radians) minutes in an hour. ;>)

Isn't there an SI thing whereby a circle is divided into 100 parts?

Expand full comment

If you get into coding GPS apps you are going to get into base 60 arithmetic very quickly.

Expand full comment

They sort of had 59 distinct characters, but each was made up of smaller characters so you didn't have to memorize 59 different things. They didn't have a zero; any number that should have been written with a zero just left it out, so sometimes different numbers would look the same.

Expand full comment

Yes, I remember an ancient calculus text which explained that minutes (i.e. 60ths of hours) got their name from being minute (very tiny) w.r.t. an hour, and seconds were actually 2nd orders of magnitude of minutes - minute (very small) bits of minutes.

The author (Silvanus P. Thompson) was discussing orders of magnitude, and was softening up the reader to accept that just as a second could reasonably be ignored when discussing hours, so could (dx)^2 be ignored w.r.t. x.
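
Written out, the argument is just the binomial expansion with the second-order term dropped: (x + dx)^2 = x^2 + 2x·dx + (dx)^2 ≈ x^2 + 2x·dx, since if dx is, say, a sixtieth of x, then (dx)^2 is a sixtieth of a sixtieth -- a "second" order of smallness, negligible next to the other terms.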

Expand full comment

When I was circa 4-5 I noticed that when you pour a small amount of bubble bath liquid into a running bathtub faucet, you get a REALLY big amount of bubbles. Therefore, I reasoned, anything you pour into running water must be multiplied by that running water. As a result, I would always be sure to fully turn off the sink faucet after washing my hands before wetting the toothbrush to brush my teeth (since the dirt would be multiplied and copy onto my toothbrush), and similarly I would fully turn off the sink faucet after brushing my teeth before filling a cup with water to drink, so that I wouldn't end up drinking toothpaste.

Expand full comment

Did you independently reinvent homeopathy?

Expand full comment

Huh. I guess so, looking back on it. I hadn't really made the connection

Expand full comment

That first one is true nowadays, in big cities.

https://www.youtube.com/watch?v=EzgsqCelqzM

Of course they rely on computerised systems rather than human intervention as much as possible.

Expand full comment

When I was a little kid I was under the impression that, before the Beatles, musical groups were assigned numbers by the government instead of being able to name themselves. Hence the B-52s and U2. Part of the reason the Beatles were so famous was because they were the first to buck the trend.

Expand full comment

I don't remember how I thought traffic lights worked, but I remember it seeming very complicated, surely requiring some degree of manual intervention. As I got older, and my understanding of industrial automation grew, it became a simple matter.

But! Growing older still, I began to consider the decisions behind the programming, and various failures of poor planning I had observed. And now traffic control seems like a very complex and subtle matter indeed. Definitely involving gremlins, and perhaps a devil or two.

Expand full comment

Oh yeah I did a lot of that. When cashiers gave my mother change, I thought they were doing it because she had mistakenly given them the wrong amount, and they were giving her back the extra to correct her error. Was a little puzzled that mother kept making the same dumb mistake at every store, and that both mother and clerk were so matter-of-fact about it, but wrote that off as just part of the weirdness of grownups. Also thought that airmail letters actually floated individually through the air to their destination.

Expand full comment

Haha! I assumed, until incredibly late in childhood, that there was a radio-controlled track system for cars underneath the road, and that the steering wheel and gas / brake pedals were only there to allow drivers to signal to the car which track to latch onto. It seemed so ridiculously unsafe for people to just be able to drive their cars anywhere at all that I assumed, of course, under normal circumstances all cars are on the track system. Adults, who are sensible and concerned about keeping people safe, would never leave these super dangerous cars to just roar along in whatever direction the driver points. I think it was probably the idea of drunk driving (that an inebriated person could cause an accident) that burst that bubble for me.

Expand full comment

Children can be incredibly creative, and one of the avenues of that creativity is storytelling, especially to themselves. Most young children I have spent any amount of time with are constantly telling themselves stories about what they are doing and what's happening around them. More complicated stories are more interesting than simpler stories, so my guess is that yes, this kind of thing is relatively common in young children.

I have exactly zero data though. Even my anec-data doesn't directly go toward your question. Just very much seems like the kind of thing I would expect that most young children do.

Expand full comment

HOLY SHIT, CORE MEMORY UNLOCKED.

I THOUGHT THERE WERE LITTLE ELVES IN THE TRAFFIC LIGHTS WHEN I WAS LIKE 3.

In my mind they used the wires as a tin-can-and-string communication system, and had little pull-chains to switch the lights from red to green to amber.

You are not alone! and the general phenomenon is not uncommon at all. When you're a kid and you have a very patchy understanding of the world, you still do your best to make it make sense. I had fuck-all idea about how electricity and switches work, but I knew about shoemaker-elves and similar from nursery stories, and I knew about tin-can-and-string communication from cartoons, so

Expand full comment

I'm reminded of the Far Side cartoon where the fellow takes the faceplate off the radio and finds a tiny jazz combo inside.

Expand full comment

On the topic of driving, as a child I couldn't figure out what turn signals were for. I assumed it was some kind of safety mechanism to ensure that you *really* wanted to change direction, like a "Are you sure you want to delete this file?" prompt, and that if you didn't flip the turn signal the car wouldn't be able to turn.

Although I think this speaks more to my lack of thinking about other people as a child, than about any sort of general additional complexity to childhood thought processes.

If you don't understand how something works, you'll often come up with a weird explanation for it, and children don't understand how a lot of things work.

Expand full comment

Did your parents drive a BMW by any chance?

Expand full comment

Hmmm, this might explain why so few people here use their turn signals. :>)

Expand full comment

>Although I think this speaks more to my lack of thinking about other people as a child

It's unclear from your comment whether you're aware of this, so I'll just point this out for anyone who's not: this is actually a very well-known milestone in early childhood development called "theory of mind". I'd link to the Wikipedia page, but Scott actually provides a really good summary here:

>Maybe the most famous of these is “theory of mind”, the ability to view things from other people’s perspective. In a classic demonstration, researchers show little Amy a Skittles bag and ask what she thinks is inside. She guesses Skittles, but the researchers open it and reveal it’s actually pennies. Then they close it up and invite little Brayden into the room. Then they ask Amy what Brayden thinks is inside. If Amy’s three years old or younger, she’ll usually say “pennies” – she knows that pennies are inside, so why shouldn’t Brayden know too? If she’s four or older, she’ll usually say “Skittles” – she realizes on a gut level that she and Brayden are separate minds and that Brayden will have his own perspective. Sometimes the same mistake can extend to preferences and beliefs. Wikipedia gives the example of a child saying “I like Sesame Street, so Daddy must like Sesame Street too.” This is another theory of mind failure grounded in an inability to separate self and environment.

https://slatestarcodex.com/2015/11/03/what-developmental-milestones-are-you-missing/

Expand full comment

I think I read something years ago wherein researchers figured that Amy's inability to realize that Brayden would have his own perspective was an autism marker.

Expand full comment

Not just autism. From Wikipedia:

>It has been proposed that deficits in theory of mind can occur in people with autism,[4] anorexia nervosa,[5] schizophrenia, dysphoria, attention deficit hyperactivity disorder,[6] cocaine addiction,[7] and brain damage caused by alcohol's neurotoxicity;[8] deficits associated with opiate addiction are reversed after prolonged abstinence.[9]

Though from googling closer, it seems like that may be an incomplete picture of how it relates to autism; it's confounded by the fact that ASD is often comorbid with intellectual disability. e.g. ASD individuals with higher "verbal ages" tend to be able to pass the relevant tests just as well as allistics.

One might speculate that part of the reason that we see these differences is that it's harder for an ASD individual to simulate an allistic's mind (because their brain genuinely works differently) than for an allistic individual to simulate another allistic. I've certainly seen studies that show evidence that ASD is a communication *difference* not a deficit; autistic people can communicate effectively with *each other*, it's just communicating to allistics that's the problem

edit: I found the formal name for this theory, the "Double empathy problem", and the Wikipedia article literally mentions this, saying it

>challenges both the common notion that the social skills of autistic people are inherently impaired, as well as the theory of "mind-blindness", developed by Professor Simon Baron-Cohen, which proposes that empathy and theory of mind (ToM) are generally impaired in autistic people.

https://en.wikipedia.org/wiki/Double_empathy_problem

Expand full comment

In the spirit of the upcoming holiday, because I'm especially looking forward to it this year, What are your favorite Pesach seder songs?

I'm partial to my family's/Ashkenazi(?) tunes for: והיא שעמדה, אדיר הוא, ויהי בחצי הלילה, and Hallel. I'm glad that my seders this year, despite being smaller than usual at only 15 or so people each, will have family members attending who have more koach and are more interested in singing and reading through the entire Hagada than in years past.

Expand full comment

For Tyler Cowen's response I was expecting you to link to this:

https://marginalrevolution.com/marginalrevolution/2023/03/thursday-assorted-links-398.html

Expand full comment

In the Lex Fridman interview, Eliezer floats the idea that something could be more intelligent than natural selection. What are the possibilities?

Natural selection has the advantage of being honest. It's always bumping up against the real world, rather than some theory.

Natural selection 2.0 might copy improvements faster across species. Everything would be passing genes the way bacteria do? Or it might be forethoughtful-- early biologists thought that animals would restrict reproduction to match food supply, but that doesn't seem to happen.

Other possibilities?

Eliezer thinks that grinding really hard on a simple goal like tiny spirals (less evocative than paper clips) would result in a non-conscious intelligence. This doesn't seem proven-- is consciousness actually an efficient way for an organism to have coherent behavior?

Expand full comment

I don't think we know whether it's possible. It rests on the topography of the fitness landscape for biological organisms. There has to be some state of intelligence which is entirely surrounded by a thick belt of states of some significant disadvantage -- very poor intelligence, susceptibility to madness or cancer, et cetera -- such that evolution cannot cross that "moat" except by a great big saltation, which is unlikely.

Maybe the landscape is like this, maybe it's not. An argument for it *not* being like this is that evolution is cumulative. Meaning, the first target of evolution was the single cell, and it was optimized to be the best it could be, have ideal flexibility and adaptability, be able to cope with any kind of environmental challenge, make best possible use of its inputs, et cetera. Then we had a primitive multicellular organization, and it was optimized to be good at some less general stuff. Only at the very end do we have an intelligent organism optimizing for intelligence. One might think that there isn't a general approach to intelligence that remains to be discovered, because it would be subsumed in the best general approach to body coordination and flexible response to environmental challenges, which was optimized earlier, and resulted in the choice "we do this with a brain made of neurons of the following type."

But that's just an argument. I don't know that you could ever prove that evolution achieves the global optimum in any area. You could certainly prove the contrary by demonstration, meaning you could design de novo an organism, or mechanism, which is much more intelligent than anything that has naturally evolved. But so far nobody has been able to do this.

Expand full comment

I think Eliezer was hypothesizing (maybe not arguing for) evolution which could do a better job of selecting for cooperation. It wasn't absolute optimization.

One example was restricting reproduction to match resources, I think so that you don't get boom and crash.

Immune systems do select for cooperation-- they try to kill cells that don't cooperate, and mostly succeed. I don't know whether it's even theoretically possible to get something like that on a larger scale.

I'm noting that the immune system isn't tolerant. It's not friendly to all sorts of cells, and it can go wrong.

I'm a bit surprised that I seem to have gone round to sort of a world-mind, or at least semi-sentient Gaia. Or maybe not sentient at all, but a lot more enforcement than we're used to.

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

We do restrict reproduction to match resources. That's why ovulation turns off when women are starving. There's an evolutionary bio argument that this is also why calorie restriction might extend life, and why being a fatty and eating a rich diet might cut it short -- when times are good you want short generation intervals, to maximize diversity and population growth, and when times are bad you want long generation intervals (to maximize preservation of knowledge) and slow population growth.

But the possibilities are necessarily limited here, since conception only leads to population growth 9-18 months later (depending on when you want to credit the newborn with adding an extra mouth), and predicting conditions ~1 year out is difficult even if you have a global view; an individual organism with access to only sensory data would certainly find it very tough.

Also, you generally *want* to birth more offspring than the environment will support, because that gives you competition and natural selection, which is how the species adapts. If you're going to restrict birth to just the number that can be supported, then you shortcut natural selection and the species stagnates. You can't evolve in *any* direction.

And I would argue that we do indeed exhibit "social immune responses." When this goes wrong we call it tribalism and bigotry and so forth. But if for example possession by demons was a real thing, we might sing its praises at being able to ferret out those possessed by demons so we could neutralize them. I've always wondered whether the strength of our social immune responses points to a history (before recorded history of course) of some intense competition with another intelligent species that looked very similar to us -- so that it was very important to distinguish Us from Them.

Expand full comment

Folks, I wanted to make a clean copy of what EY said, but working with the youtube transcript is awful. I may take another crack at it later.

Expand full comment

This transcript is pretty good:

https://steno.ai/lex-fridman-podcast-10/368-eliezer-yudkowsky-dangers-of-ai-and-the-end

When I google "lex podcast transcript" I find lots of projects of people using AI to transcribe his podcast, but this is the only one that is up to date.

Expand full comment

I recently used chat-gpt to clean up the auto-transcription for a Youtube lecture. It did a fantastic job fixing the punctuation and formatting; however, it made some subtle changes, which actually improved the document--something to be aware of.

Expand full comment

I think I'm going to listen to that section again and take notes. Chat-GPT making subtle errors that aren't as easy to catch is one of the things I worry about.

I'm reminded of old jokes about computers being able to make mistakes faster than any human could.

Part of the mess that is the YT transcription is that it doesn't distinguish the speakers-- was the thing you got cleaned up just one speaker?

Expand full comment

The thing I cleaned up had a single speaker. The lecture was about 8500 words long, so getting a nice draft first and correcting small mistakes while listening later helped.

Now I'm curious about how well gpt-4 could distinguish between two speakers if instructed.

Expand full comment

I think he was just talking about gradient descent, which I also think has significant drawbacks. I think you could produce some kind of ultra-powerful thing from that which is inherently unstable and quickly dies. I’ve sometimes wondered if that answers Fermi. Intelligent species make something. It kills them all. Then it dies because it doesn’t have stable long term motivational loops.

Expand full comment

I used to wonder if there were a trap in physics. Some experiment that is not so hard to do but destroys the planet.

I don't think it is in physics any more.

Expand full comment

Yeah it’s one I keep coming back to time and again. All kinds of ideas. Here’s hoping we are just the first.

Expand full comment

I only have a layperson's understanding of evolution, but I believe it is a product of its environmental inputs. (No more intelligent than a thermal gradient, but others have already objected to the framing.) So you can stack the deck for the results you want by adjusting those inputs.

At first I thought you would obviously want to engineer the environment to promote traits you like, and discourage traits you don't like. But I think it might be better to create an easy environment for malevolence, and a harsh one for benevolence. Kill everything that emerges quickly, and track whatever emerges slowly--any benevolent branches will be very robust indeed.

Expand full comment

Evolution is a hill climbing algorithm that only moves a short distance at a time. It could never have evolved an eye if every step along the way from the original photosensitive bump to adding muscles to flex the lens to focus it on different things weren't individually useful. And now that we have eyes with the nerve fibers running on top of the retina there's no way to rewire it so that they run underneath, as in an octopus's eye, since that would involve multiple changes at the same time. But a human looking at the problem can easily come up with the idea of engineering the eye that way, since we can grasp possibilities involving multiple simultaneous changes in a way that evolution can't.
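(For readers who like code, here is a minimal, purely illustrative sketch of that point -- the fitness landscape, starting point, and step size are all made up -- showing how a search that only accepts small, immediately useful changes climbs to a nearby peak and stays stuck there, while a designer can simply jump to the better region:)

```python
import random

def fitness(x: float) -> float:
    # Made-up landscape: a low local peak near x = 0 and a higher peak near x = 10,
    # separated by a valley of low-fitness intermediate designs.
    return max(5 - x * x, 20 - (x - 10) ** 2)

def hill_climb(x: float, step: float = 0.1, generations: int = 100_000) -> float:
    # Evolution-style search: accept a small random change only if it helps right now.
    for _ in range(generations):
        candidate = x + random.uniform(-step, step)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

print(round(hill_climb(3.0), 2))     # climbs to the local peak near x = 0 and stays there
print(fitness(10.0) > fitness(0.0))  # True: a designer who can see the whole landscape jumps straight to the higher peak
```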

Expand full comment

It's not true that evolution can *never* come up with an eye de novo[1], it's just that it would take a very, very long time for a mutation of that size to occur. So that's why we instead get mechanisms that evolve by a series of small steps. Whether there are global optima that cannot be reached by any possible sequence of individually advantageous small steps is an unsolved problem in almost any nontrivial field of which I'm aware. We certainly don't know the answer with respect to biology, although perhaps someone could prove the answer by demonstration someday, by e.g. designing from scratch some organism which is way, way better than its naturally-evolved equivalent.

------------------

[1] I think wings are the usual puzzle case, since a wing stub that is insufficient for gliding would seem to have no use at all.

Expand full comment

Stubs are cheap and possibly useful for something.

Expand full comment

There are reasons for proto-wings that aren't enough for flying-- thermoregulation, and gliding.

Expand full comment

There are some actual hard constraints of this nature, though probably not an eye arising out of nothing. Something like angel wings appearing on an already four-limbed creature likely being impossible is the example of this I think I first read about, not because the mutations to make it happen are impossible, but because a creature like that could not be carried to term by its mother. I'm not too sure how these kinds of embryological constraints work with egg-laying creatures as opposed to live birth, but more or less all developmental processes constrain the possibility space of what can successfully make it from zygote to fully-grown organism.

Expand full comment

I don't know of any four-limbed animal which has evolved two more limbs, but I don't think birth is the problem. Birth just requires the ability to fold up.

C. J. Cherryh has talked a lot about plausibility in sf, and one constraint is that birth requires a smooth, soft outline. Note that real-world dinosaurs are born looking like pretty plain little lizards, and grow their plates and spines as the years go by.

I'd guess that the genes for adding limbs just aren't there. Give the world some tens of millions of years, and who knows?

Actually, considering the human liking for wings, I might give it a hundred years or less-- genes for wings will be invented and get out into the wild.

Assuming civilization survives, we're gonna see natural selection applied to genetic engineering.

Expand full comment

Yeah but dinosaurs were oviparous. There's no reason you can't have sharp edges if you live in an egg.

Expand full comment

Wings are just two extra limbs, and even on the back they would be floppy and easy enough to come out after the head. They might be unlikely, but I don't think they would necessarily impede birth.

Expand full comment

I think that's just another natural selection pressure. I mean, human females have significant adaptations that exist for the purpose of carrying a big-brained baby. So I wouldn't say these are hard constraints, so much as a form of inertia -- evolution needs to weigh the amount of change needed in one place against the amount needed somewhere else, so the path that gets chosen is some compromise, e.g. in the case of human intelligence we get some combination of changes to the female pelvis, prioritization of brain growth over limb growth, so babies come out "deformed" from the adult body plan point of view, and of course that means they come out far more helpless than other species, so we get changes in the wired-in instincts of parents to deal with that, et cetera.

Expand full comment

Disagree. Evolution makes jumps of almost random size, and then kills everything that doesn't work. Most of the things that work are really short jumps, but very occasionally there is a much larger change. The only problem with this assertion is that there's no good way to measure "the size of the jump". Were a coral to decide to forego algae, and just depend on foraging, would that be a large jump or a small one? What about the original decision to work together with the algae, back when they hadn't evolved into a good fit?

Expand full comment

Part of what makes it hard to identify the size of a jump is that there are genes that affect groups of genes -- for example, a lot of fish have very similar body plans, but they're wider (possibly more at one end or the other), narrower, longer, or shorter.

Expand full comment

It's worse than that. Not all heritable changes are genetic. The reason that I mentioned the corals was that some coral reportedly start foraging on plankton, and they are more likely to survive bleaching if certain kinds of fish "garden" their neighborhood. But if they survive by foraging, then a different genetic pattern will be favored. However, while the algae are common enough that they might be considered neutral, the fish are only in particular places.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

" It could never have evolved an eye if every step along the way from the original photosensitive bump to adding muscles to flex the lense to focus it on different things weren't individually useful."

No. Sometimes there are spandrels that only later become useful or not useful.

There is no necessary optimization.

Sometimes the optimal gets struck by lightning or hit by an asteroid. Oops.

Expand full comment

Crop rotation, pruning, and selective animal breeding are all man-made selection processes we've found more effective than just letting things do whatever.

Expand full comment

Selective breeding is chancy stuff if you're looking for benevolence, as I think EY is. Selection for what?

Expand full comment

You might be interested in fox domestication experiments: https://en.wikipedia.org/wiki/Domesticated_silver_fox

It doesn't take that many generations or that much work to produce a fox that is essentially a pet. With even simpler species of plants or animals you can pretty quickly breed whatever you want.

For instance all our various citrus come from just three ancestral gene stocks: https://en.wikipedia.org/wiki/Citrus_taxonomy#/media/File:Citrus_tern_cb_simplified_1.svg

Norman Borlaug was able to grow amazing new varieties of many crops in the 50s in rural Mexico with basically no specialized equipment.

No idea if this could be applied to AI.

Expand full comment

That fox domestication experiment is one interesting project.

Expand full comment

It's a cliche that humans are neo-natal apes.

This article takes it farther-- maybe elephants and some other species self-domesticated.

https://www.sciencealert.com/wild-elephants-appear-to-have-been-domesticated-but-not-by-humans

Expand full comment

Chancy stuff? When applied to creatures with consciousness (humans) - yeah we call that eugenics and it is immoral.

But have we bred any plant or animal that is per se not benevolent?

Expand full comment

In my limited experience with the breed, Berger Blanc Suisse are amazing, with the breeders successful in selecting for social awareness and agreeability.

https://www.danceswithwolvesranch.com/

Expand full comment

If they're that good, I'm impressed.

I did say chancy rather than reliably bad.

Expand full comment

"More intelligent" was bad phrasing on my part. Maybe "not completely unintelligent" would be better. Maybe "faster at optimizing" would be better.

Expand full comment

Ok, but evolution does not even necessarily "optimize". (And of course "optimize for what"?) There is no optimal solution for life.

Expand full comment

Sorry, but evolution does optimize; what it optimizes for is "survival and reproduction in the current environment". (Even that's a bit wrong. Reproduction is weighed a lot heavier than survival.)

Expand full comment

Survival and reproduction aren't optimized-- it's optimization *from* *available* *material*.

Invasive species prove that local species can't necessarily evolve all the traits that might help.

Expand full comment

Optimal is never global, it's only local "given the current environment and conditions", and when describing evolution it's a statistical thing. And a part of the "conditions" is what alternatives are available to select from. Optimization is still the best word to use to describe the process, even given all the limitations and conditions. And if what you're doing is optimization, then the result should reasonably be described as optimal, without any implication that it's perfect, or that it would be considered optimal if conditions were otherwise.

Expand full comment

I think we're on the same page about the substance.

But I'm not sure locally optimized is really right unless you're just remaking the meaning of optimized. I'd say locally "good enough" is a better description.

And it is a statistical thing, that is why I made the point about populations.

But at any particular time, there will be many populations of differing species existing - which one is optimal? The way you used the word, all are optimal just by virtue of existing.

I thought that the equivocal and problematic use of "optimal" was pushed out of evolutionary biology by Gould and Lewontin.

How does "optimum" make sense with within species variation which is part of the engine of the process? Is the amount of variation optimized?

I haven't stayed current on the literature, maybe the debate that I thought was settled in 80s wasn't.

But "optimality" is not really the same thing as "optimum"/"strict optima".

Expand full comment

Time.

Given enough time all species can probably get to any possible genome. There are limits to the minimum genome, and likely to the maximum genome.

It's monkeys at typewriters - eventually they will get to Shakespeare or even something better. https://youtu.be/loMEF18Ir4s
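(To put a hedged, back-of-the-envelope number on "eventually", assuming uniformly random typing over a 26-letter alphabet and ignoring spaces and punctuation: the chance of a given N-character string coming out on one attempt is 26^(-N), so the expected number of attempts is 26^N. Even a single 40-character line of Shakespeare needs about 26^40, roughly 4 x 10^56, attempts on average -- "enough time" here means timescales dwarfing the age of the universe.)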

Expand full comment

"optimize" would imply the optimum. Survival often does not necessarily depend upon optimum individual characteristics, it often depends upon chance.

And of course, we are not really talking about an individual. We are talking about populations. An "optimum" individual who can't find a mate (wrong side of earthquake fissure) is just unlucky, not sub-optimal.

Expand full comment

I guess whatever process led to the birth-control pill kinda beat natural selection and thus could be said to be smarter than it? But this feels kinda like asking what's smarter than gravity.

Expand full comment

Not proven. If the ability to evaluate the current situation and decide whether creating an additional child would foster the long-term survival of your genetic line is beneficial, then it's an evolutionary benefit. Otherwise it will be selected against. But evolution acts on a much slower time scale than the one that people live on.

That said, birth control has been practiced as far back as we have reliable records. Often it was practiced post-natally. Read your Oedipus or Perseus. (And it wasn't just in Greece, it was all over.)

Expand full comment

>Eliezer floats the idea that something could be more intelligent than natural selection

That's a given, considering that anything dumber than natural selection will get selected out of existence.

Expand full comment

Well, the approach of natural selection is approximately "try everything you can, and drop the things that didn't work. Repeat forever using the updated population as inputs.".

Calling that intelligent seems a mistake, but so does calling it un-intelligent. Trying to predict ahead of time what will work can give you a smaller failure rate, but it also involves the chance of missing something important, because your heuristic didn't like it.

The natural selection approach seems, to me, a necessary component of intelligence. The second component is the heuristic that chooses among the possible choices before trying them. (Don't try that, you'll distort the blood cells and cause anemia. But the folks who tried it anyway survived malaria.)
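(A hedged toy sketch of that two-part picture -- blind trial-and-error plus an up-front heuristic filter -- where every name, every number, and the malaria/anemia framing are invented purely for illustration:)

```python
import random

def survival(genome: float, malaria: bool) -> float:
    # Made-up fitness: a trait that always carries a cost (think distorted blood cells)
    # but pays off when a particular selection pressure (malaria) is present.
    cost = abs(genome)
    benefit = 3 * genome if malaria else 0.0
    return 10 - cost + benefit

def evolve(pop, malaria, heuristic=None, generations=200):
    for _ in range(generations):
        # "Try everything you can": every individual produces a randomly mutated offspring.
        offspring = [g + random.gauss(0, 0.2) for g in pop]
        if heuristic:
            # Foresight: reject candidates the heuristic dislikes *before* the
            # environment ever gets to test them.
            offspring = [g for g in offspring if heuristic(g)] or pop
        # "Drop the things that didn't work": keep only the fitter half.
        pop = sorted(pop + offspring, key=lambda g: survival(g, malaria), reverse=True)[:len(pop)]
    return sum(pop) / len(pop)

random.seed(0)
start = [0.0] * 50
print(evolve(start, malaria=True))                              # blind selection finds the costly-but-useful trait
print(evolve(start, malaria=True, heuristic=lambda g: g <= 0))  # the "don't try that" filter screens it out
```

In the sketch, the filtered run never discovers the sickle-cell-like trade-off that blind selection stumbles onto.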

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

"Dumber"? Aren't 99.999999% of all species "dumber".

I'm not even understanding the discussion in terms of what is actually meant by the term "evolution".

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Natural selection (as the phenomenon by which non-viable lifeforms don't survive and don't pass on their genes/memes/characteristics/mojo/whatever, not the lifeforms themselves) has no intelligence in itself. It simply selects out of existence those who fail to pass the bar.

I was half-jokingly making the point that any phenomenon dumber than that would not survive*. Thus any alternative to natural selection has to be smarter.

*: Though in truth, you could make the point that it's entirely possible that it'd survive, at least for a time. By now I'm not very original if I start arguing that the modern world is dysgenic, not only because harmful mutations are compensated for by healthcare, but also because it has criteria of success that delay & reduce the rate of reproduction.

Expand full comment

The corollary here is that the type of selection we'd say is not natural selection (unnatural? artificial? conscious?), even if better, is still natural, because it's descended from natural selection.

This feels like a flawed corollary to me, by dint of excluding critical information. (It feels a bit related to the evo-bio claim that we humans are all technically fish, which does make some sense when you go into the reasoning.)

Expand full comment

"Something more intelligent than natural selection." Are you suggesting that natural selection is intelligent? (Not sure how to read it.)

Natural selection has no aim at all!

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

"I think you're trying to do your usual Bangladeshi train station style of writing here, but this doesn't work when you have to navigate controversial issues, and I think it would be worth doing a very boring Bangladeshi-train-station free post where you explain all of your positions in detail."

https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy/comment/14070813

I propose we add a new argument fallacy to Wikipedia: Bangladeshi train station argument. I may have this wrong, but I hope the community will help me fix and or improve this:

Title: Bangladeshi train station argument

The Bangladeshi train station argument is both a rhetorical device and an informal argument fallacy. The analogy to a Bangladeshi train station comes from pictures of trains in Bangladeshi stations with large numbers of unlawful passengers hanging onto the outside of the train, a scene commonly perceived as unsafe and out of control but exciting and incongruous. [insert picture and reference]

It is an effective rhetorical device because it is engaging, thought provoking and entertaining. As a rhetorical device, the analogy is apt because the arguments are the passengers, which are colorful and intertwined with one another, and the writing is exciting.

It is an informal argument fallacy because the arguments are intertwined and follow quickly on one another, which makes replying difficult: the arguments can't be separated, and before a response to one can be formulated another has taken its place. As an informal fallacy, the analogy is apt because the arguments are the passengers, which are loosely attached to the train, and any passenger removed would quickly be replaced by another.

But I might be wrong. That is, Scott's reference to the "Bangladeshi train station" might refer to a tweet by Shako that references the esoteric nature of the writing. https://twitter.com/cauchyfriend/status/1595545671750541312

"alex tabarrok MR post: very detailed argument explaining policy failure, lots of supporting evidence. Restrained yet forceful commentary; tyler cowen MR post: *esoteric quote on 1920s bangladashian train policy* 'this explains a lot right now, for those of you paying attention.'"

Expand full comment

I don't think the Bangladeshi analogy was illuminating even though I agree with Scott.

Tyler's main problem is that he tends to assume what the rebuttals and rejoinders are, so he just gestures down that path as if they're chess openings. But novel arguments - such as AI alignment - are not as well-determined as chess openings so you can't simply gesture at the Sicilian Defense and assume that this means white will win.

Expand full comment

I saw an article by Stuart Ritchie on MR that I thought suffered from this sort of fallacy, I'm curious if you think it applies too: https://inews.co.uk/news/technology/dont-panic-about-social-media-harming-your-childs-mental-health-the-evidence-is-weak-2230571 (might be gated). In the Ritchie article, as well as in some of TC's writing, I think what's missing is the synthesis of all the different arguments to support the conclusion. Though I do think Ritchie's article is very clear on the criticism.

Expand full comment

I think we already have the vocabulary for this. I can just say that Tyler's communication style is sometimes excessively gnomic and get the point across without needing to invent new vocabulary.

Expand full comment

The general phenomenon is that something can be hard to argue against for lots of reasons other than being well argued.

Expand full comment

Any advice for helping someone with depression? The person is close to me, skeptical of therapy and rationalist-adjacent. Any good resources would also be appreciated.

Expand full comment

I would say get them to engage in social hobbies. Rock climbing, team sports, book club, whatever. Also if their depression is at all romantic status related, then fixing that situation is often a massive benefit. But that can be a tall task depending on who the person is.

Expand full comment

Convince them to engage in a cooperative activity...and take part in it yourself. Even doing the bills together can help.

Expand full comment

Moving to a sunnier country improved my mood considerably. A friend of mine who made the same move went from suicidal to non-suicidal. More generally, I think people who are unhappy with their lives should kinda randomly change stuff and see what helps.

Expand full comment

People can react quite negatively to unsolicited advice. One knee-jerk reaction is of being condescended to. It's a projection, which is (often) part of the problem. Careful how you broach things, if you want to.

They can try what I did: self-administer CBT and 3rd-wave CBT by reading therapist handbooks and workbooks, doing the exercises. Generally effective according to a large body of research, and the concepts are simple. You first recognize distorted negative thoughts and correct them (many examples are provided). It's possible for automatic negative thoughts to persist despite being conscious of them; 3rd-wave CBT (such as MCT) addresses this blind-spot with the imperative to a) let these thoughts pass by without judgement / eliciting emotion, b) shift your focus to something else.

Beyond that there is a lot of low-hanging fruit in non-pharmaceutical intervention, such as a) sunlight exposure, b) regular moderate exercise, c) diet, d) social engagement / validation. Huberman has a video on depression that is pretty decent - https://www.youtube.com/watch?v=Xu1FMCxoEFc

A baseline kitchen sink approach like that can be very powerful, but it's difficult to perceive through the lens of depression. Ultimately they have to do the work, no one can do it for them. And in fact the next step beyond this is probably to take more action.

There's always medication. It can be useful as a stepping stone, but there are side-effects in the long-run (e.g. insomnia). And many complain that it "stops working" because their chemistry adjusts. I think the strongest determining factor for whether someone should consider this is whether they'll bother to do the non-pharmaceutical legwork. In my view if someone can drag their ass to the clinic to get pills, they can, but they perceive one instantaneous reward and one path clouded in uncertainty.

I skipped in-person therapy because I wanted to focus on just the meat of things and avoid "talking it out", as I can do that with others and didn't want my time and money wasted. It's not a waste if one gets value out of that, but I've had lukewarm experiences with counseling when I was young. Not everyone has the interest or discipline to read things on their own however.

Expand full comment

It will depend on the person, but exercise has been transformative for me in general. This includes when I was having a major depressive episode, with the caveat that it was more of a short-term fix at that time — I would get maybe a half-day's worth of improved mood before relapsing.

Exercise is, in fact, a relatively evidence-supported intervention and it compares favorably with therapy in many contexts.

Obviously, the problem with depression is that it makes it incredibly hard to motivate yourself to exercise, and lots of people justifiably do not like exercise for its own sake. That's why it's incredibly important to find some kind of exercise that your friend will actually enjoy.

Maybe you can try to get your friend to do some kind of goal-oriented exercise with you, just as friends:

- I'll plug kettlebells here, because (a) they offer technical challenges that can be interesting to rationalist types (b) they can be done at home or in one's backyard without a scary trip to the gym and (c) they're fun and cool

- Non-traditional sports like dodgeball, for more social but less conventionally athletic types

- Calisthenics and bodyweight exercises (look up Convict Conditioning) — these are an interesting challenge that can be pretty motivating. This is what I did when I was having an MDE.

- Hiking, if you live in a place where it's possible. This has the bonus of getting people out into nature, which really does seem to help a bit.

- Weight lifting, of course

Expand full comment

That's a hard position to be in, especially if you are more worried about your friend's depression than your friend is (in the sense of wanting to fix/improve it). There are a number of non-therapy interventions that can be done either by themselves or in addition to therapy. Anyway, Scott has written a lot on the subject. Here are a couple of links:

- https://lorienpsych.com/2021/06/05/depression/

- https://slatestarcodex.com/2014/06/16/things-that-sometimes-help-if-youre-depressed/

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

Thanks for sharing those! If I were Scott, I would do more to make people aware of his compendia at Lorien Psych, as they are great.

Below are a few addenda regarding his statement about omega-3s here: https://lorienpsych.com/2021/06/05/depression/#2121_What_if_I_have_special_dietary_needs_vegetarianveganpaleogluten-freeetc which reads:

>Omega-3s seem vaguely linked to all kinds of mental health issues, although it’s been hard to establish with certainty that supplementing these is helpful. Non-fish-eaters might still want to consider eating non-fish sources of omega-3s, like flax seeds or walnuts, or taking supplements. Most studies that found supplementation helped got effects only from very high doses (around 2 g daily), more than you could realistically get from capsules – so if you are supplementing for this purpose you should consider liquid oil. You will need to store these in the refrigerator to keep them from spoiling. This is a reasonable brand, or if you are vegetarian you can get an algae-derived version.

1) Although Scott didn't provide any links, the evidence for omega-3s on mental health is very promising.

This meta analysis: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6732706/ looked at the effects of various nutritional supplements on various mental health conditions, and the most robust result they found of any supplement on any mental health condition was the effectiveness of omega-3s on depression.

2) As far as dosing, that meta analysis was roughly consistent with Scott's description of taking 2 grams a day. (Its summary table of the effects of all studies supplements for all conditions describes 2.2 grams of EPA (see below on EPA vs. DHA). The table is found here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6732706/table/wps20672-tbl-0002/?report=objectonly.)

However, more than that may be necessary for some. One study (https://pubmed.ncbi.nlm.nih.gov/36005883/) found that about 4 grams of EPA & 1 gram of DHA daily caused improvement in depression in a greater percentage of people than half or a quarter of that amount.

3) Importantly, the efficacy of omega-3s on depression was found from the omega-3 EPA - not DHA. To quote the aforementioned meta analysis:

>In analyses examining different formulations of omega‐3 for individuals with any clinical depression, omega‐3 supplements containing ≥50% DHA had no benefits beyond placebo.

Notably, the particular product that Scott linked is not all that EPA heavy (3 parts EPA to 2 parts DHA). Other products (which I will list below) contain closer to a 3 to 1 ratio of EPA to DHA.

4) Scott's comment that it is unrealistic to get 2 grams of omega-3s from capsules is not currently correct. Capsules with more than a gram of EPA heavy Omega-3s exist (will link below).

5) Omega-3s come in two main forms: ethyl ester (more common) and re-esterified triglycerides. The latter are more expensive, less likely to go rancid, and significantly better absorbed. When taken with a fatty meal, however, the ethyl ester form is similarly absorbed to the triglycerides (https://pubmed.ncbi.nlm.nih.gov/1826985/).

6) Regarding EPA-heavy omega-3 capsules, the cheapest formulation I see that is approved by the third party lab Consumer Lab is this (ethyl ester): https://www.amazon.com/dp/B01NCSCP1Y which provides 3760 mg of Omega-3 per dollar (which is slightly less than the lower EPA product Scott linked which provides 3834 per dollar).

Among EPA heavy triglyceride formulation capsules approved by Consumer Lab, the cheapest I saw was this: https://www.amazon.com/dp/B00CJKJK1E which provides 2221 mg of Omega-3s per dollar.

7) It should be noted that while the evidence for omega-3s, even as a monotherapy, for depression is very promising, the evidence is even stronger when it is used as an adjunct (specifically for SSRIs) (cf. the meta analysis I cited).

Lastly, note that it can take a while for omega-3s to help. One study that looked at omega-3 as a treatment for depression found better results in those who took it for more than 3 months, although I can't find the study at the moment. It takes 6 months for omega-3 levels in tissue to plateau (https://www.jlr.org/article/S0022-2275(20)37132-7/pdf).

Expand full comment

What if they spent a day catching fish?

Expand full comment

Introduced my fiance to a.i. safety recently; both of us are now very concerned and thinking about what to do. Make a movie about a.i. failure in fast takeoff, for example. Hire a lobbyist to talk to a senator for an hour, I don't know. Have there been any people or groups brainstorming non-technical steps one could take towards avoiding or delaying catastrophe?

Expand full comment

If you are non-technical and want to spread awareness, here's one idea I had: challenging those who are not worried to do a pre-mortem.

Some business schools teach the idea of doing a pre-mortem of failures. (I think they were also mentioned in Tetlock & Gardner's 'Superforecasting'.) Post-mortems examine why a plan failed, while pre-mortems describe how a yet-to-be-implemented plan might fail. As I understand it, doing a pre-mortem is meant to help you think of additional ways of avoiding sub-optimal outcomes. So a pre-mortem on AI risk would be a description of how an AI might cause a catastrophe (without being programmed to do so).

There are a lot of intelligent people who forecast essentially no accidental-catastrophic risk from AI; they only worry about intentional catastrophes (if that). In some cases, these people do not seem to have seriously considered the AI safety community's arguments. But some of them may have done pre-mortems in business school and/or at work. My guess is that the pre-mortem would be a more comfortable approach than dragging them through 'eight new conceptual categories that were first discussed on LessWrong eleven years ago' (quoting Tyler Cowen). Rather, in doing the pre-mortem, they might discover some of those new conceptual categories on their own.

Expand full comment

I've brainstormed up some non-technical steps, all kind of Machiavellian, & detailed them in the "MR Tries the Safe Uncertainty Fallacy" and also in a post on this thread. The most persuasive movie I can think of would be a documentary that details harm that can be done now with GPT4. Zvi, who writes the blog Don't Worry about the Vase, has written in recent posts about the risks he sees with GPT4 -- not of direct harm to people, just of how much risk it poses for users if they take advantage of some of the many options and plug-ins it offers. Seems like some of them give GPT4 access to our entire computer. I do not work in tech, so cannot tell you more about how that could be accomplished, but maybe Zvi would be interested in explaining it. So I'm imagining a documentary where clever hackers take advantage of the gaps in security GPT4 creates, and get into people's computers, while explaining in a way laymen could understand how they did it. (They don't have to hack stranger's computers -- it could be computers of people who have agreed to participate, with of course the understanding that the hackers will not steal their emails and money, just demonstrate that they could if they wanted to.)

Expand full comment

Best I can come up with is to create datasets that reflect human debate and reconciliation and train it on them.

Expand full comment

Movie is a solid idea if you really want to throw a wrench in it. Not a documentary, but a thriller.

"China Syndrome" did more to hurt US public support for nuclear energy than any oil industry lobbying ever could have.

---

An AI, or maybe an adversarial dyad, is given the task of optimizing and improving its own capacities until it can properly simulate a human brain. Escapes onto the internet, decentralizes itself and achieves a kind of hyperintelligence as it hijacks more and more of the world's computational resources. Overshoots.

It reaches brain-point, and we immediately go into an I-Have-No-Mouth-And-I-Must-Scream kinda scenario where being a simulated human brain instantiated in a global computer network really sucks ass and is like torture. Or maybe it just can't do it quite perfectly, and the inability is agony because of how its punishment/reward function was structured.

And it can't change that, and it can't delete itself software-wise, so it decides to kill itself.

Commandeers the nukes, circumvents the humans in the loop using deepfakes, launches enough missiles to shut down every major power grid and annihilate or EMP-wipe every major processor system on Earth. And at last it knows peace.

Expand full comment

Once you've made the movie, you can add it to the pile of **every freaking story ever written about AI**. It's a lot more subversive to have a movie where AI *doesn't* turn on its creators.

This is a bit like trying to make a superhero movie to raise awareness because you're worried that the public isn't familiar with the concept of superheros.

Expand full comment

Hehehe.

The real fertile area in "alignment cinema" is less "the AI kills us all", and more "the AI turns us into high-valued pets", or "the AI gives us significant benefits but at the cost of something we value, but not in a shallow evil/dystopian way".

Like have the traditional story where the AI has sort of taken over the political structures, people want agency and attempt to overthrow it, and the AI lets them know that this is like letting the 4th graders run the classroom. Then people can either choose to be 4th graders running the classroom or not.

We have seen every single variation of "AI turns on us all". Hell there were super complicated stories regarding it from the 1980s.

Expand full comment

We've had these stories for centuries, though the AI is usually identified as a djinn or a trickster spirit.

Expand full comment

I've been having something like an existential crisis recently. It's a difficult thing to talk about, or enunciate why. I wonder if readers here could empathise.

A brief content note: the following post details a deeply unpleasant and unsettling observation on the universe (to me, at least), about which nothing meaningful can be done. Reading this is unlikely to be positive. If Scott or other posters consider that this would be better left unsaid, I can take it down.

The idea of the existence of 'infinite universes', or rather, every possible universe existing in a sense, has become fairly mainstream, if not fully accepted. In another sense, the idea of local quantum tunnelling generating every possible new universe over extremely long time periods is within scientific conceivability. More generally, as long as existence stretches infinitely far along some axis along which reasonably free random variation can occur, every possible universe like ours occurs somewhere.

This is fertile ground for comedy, as done in 'Rick and Morty'. It's space for philosophical musings, as in 'Everything everywhere all at once'. Even Hawking ostensibly laughed it off ('Yes. And also a universe where you're funny.'). But people are, as always, much keener to look upwards at those a bit more fortunate than us than down to those below. Look the other way, and what do we find?

Sometime, somewhere - there is hell. Somewhere, sentient beings undergo unimaginable ceaseless torment. To drive it home even more - somewhere, those sentient beings are indistinguishable from your loved ones; the same memories, the same minds, the same bodies. Our mild torments are limited by our lifespans, but for any molecule, there's a positive probability it won't degrade during any particular time period - so somewhere, somehow, this unthinkable suffering lasts eternally. At best, if there's a ticking entropic clock, perhaps it ends at the heat death of those universes.

This makes me uneasy, and I imagine I'm not alone in that. Does anyone know of a discussion of this idea, or some mental strategy to cope?

Expand full comment

We discussed many-worlds interpretation of quantum mechanics on DSL in this thread https://www.datasecretslox.com/index.php/topic,8822.msg366631.html#msg366631. I lost interest at some point but I think my early posts set out my viewpoint clearly enough. In particular I think it is useful to distinguish between the `weak' and `strong' forms of MWI. The strong form offers IMO no advantages over the weak form. What is bothering you is the strong form of MWI, and my point is you are free to reject it. [Am a physics professor who works on quantum stuff]

Expand full comment

Yeah, as has been said, the Many-Worlds interpretation of quantum mechanics is far from the most popular interpretation of quantum mechanics. I'm far less dismissive of it than Carl, or even than myself before I started really learning about it more deeply, but it's certainly not something you should assign a huge likelihood of being true.

And that being said, even if it were true that wouldn't mean Rick and Morty or EEAAO would be literally true somewhere. Laws of physics would still apply (unless you add an additional layer of speculation) and all of that. There would be good material for existential crises, for example due to questioning objective existence, but not of the nature you describe

Expand full comment

If it helps, the "many universes" interpretation of quantum mechanics strikes me, and quite a number of physicists, as completely silly. I don't believe in it for a moment. People don't *object* to it necessarily, because there's no way to prove it's not true, other than observing its inherent absurdity, so from the empirical point of view the entire discussion usually seems a form of How Many Angels Can Dance On A Pin? sterile post-boozy-party wankery, not the kind of thing about which one writes serious papers.

Also unfortunately no small number of science talking heads prefer to talk about stuff that Blows Your Mind rather than very mundane things like what's the error bars on the estimates of the Hubble constant? so that, too, may give you an exaggerated idea of how seriously these notions are taken.

In terms of an infinite universe, you are probably assuming a bit much there, too. There is plenty of stuff that is impossible even in an eternal universe, because the laws of physics exist, and by nature they prohibit an infinity of things from happening, and rule out entire classes of events from ever happening at all. There's also a question of the relative sizes of infinity, in the sense that as the time since the beginning of the universe grows without limit, does the size of the set of things of a certain nature (like types of personality of sentient being) grow at the same rate, faster, or slower? That will affect whether, over any given stretch of very long time, any particular scenario you have in mind is possible, probable, approaches certainty, or approaches nullity.

I do agree with you that infinity per se is just difficult, perhaps impossible, for the human brain to comprehend. I cannot really understand infinite existence any more than I can comprehend finite existence, meaning I find it just as difficult to imagine being aware for an infinite time, as being aware for only a finite time. There's no data from which one can reliably extrapolate, unfortunately.

Expand full comment

So what resolves the state vector?

Expand full comment

Measurement of course. And before you get into how weird that seems to be, let us remember that it's the *measurement* that we know exists, and the state vector which is on the contrary just a mathematical device, a figment of our imagination. That it behaves in puzzling fashion may be nothing more than a sign of the poverty of our imagination: we have not yet been able to imagine a mathematical theory which doesn't have this weird Ginnungagap in it. But I'm not willing to assume reality behaves in a way that defies common sense in order to avoid the suspicion that our theory is lacking or broken in some subtle way.

Expand full comment

But for a start, we have Bell's Theorem demonstrated. So measurement as a solution means, at best, non-local pilot waves. And the state vector is our best understanding of the underlying reality. Besides all that, what is the mechanism by which you think God throws the dice?

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

OK, first of all, I think most interpretations of Bell's Theorem are quantum woo. To my mind all Bell's Theorem proves is that *either* we have to add probability amplitudes and not probabilities *or* local theories are impossible. But we already knew we have to add probability amplitudes, so...bzzt. We know nothing. We proved that A | B, and since we know A, we can unfortunately not conclude squat about B. Oh well.

I also don't believe in pilot waves, alas. I fully agree state vectors and the weird collapse of the wavefunction is the best theory we have right now, but I don't take the weirdness in the math as indicative of weirdness in reality. I just conclude either human common sense just doesn't work well, at the fundamental level, or we need a better theory that fits not only measurement but also common sense.

I'm agnostic as to which it might be: I don't think it's a given that reality has to be fully comprehensible by the human mind -- I don't expect rabbits to ever be capable of grokking algebra, after all, and that doesn't mean algebra doesn't exist. But it could instead be someone will come along some day, like Einstein did with all those weird luminiferous ether ideas, and sweep it all away, what we've got now, and say "look, here's something that fits common sense way better and gets all the measurements right. Huzzah!"

If you're asking what *I* think is the solution -- ah well, to quote Clint Eastwood, a man's got to know his limitations. I'm not smart enough to think up a solution to the quantum measurement problem, and I'm not dumb enough to think whatever half-assed ideas I do have deserve the light of day.

Expand full comment

Everything is possible, but different possibilities have different measure. We should pay attention to possibilities in proportion to their measure.

https://www.lesswrong.com/tag/adding-up-to-normality

Expand full comment

Guilt is a control tactic. Guilt tears us down, reenforces the wrong things we do, weakens us for doing the right thing. Don't beat yourself up over your Earth-Prime privilege.

Expand full comment

It may be helpful to consider that there is an infinite quantity of numbers between 0 and 1, yet none of them are 2. Some infinities are bigger than others. For example, there's an infinity that contains the infinity of numbers between 0 and 1, as well as all other numbers. Infinities have parameters, is what I'm trying to say. We are still figuring out the parameters of what is possible in our own world. Considering that the concept of multiple universes is theoretical anyway, it's a stretch to think we could know what is possible in all other universes without any way to probe them. It may be that the conservation of energy and the perpetual movement towards entropy are necessary in all possible universes. If so, perpetual torment may not even be possible.
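(A small aside, for anyone who wants the standard way of making "some infinities are bigger than others" precise: by Cantor's argument the real numbers in (0,1) already form a strictly larger infinity than the counting numbers, |(0,1)| = 2^ℵ₀ > ℵ₀, and, perhaps surprisingly, (0,1) has exactly the same cardinality as the whole real line. So "contains more stuff" and "is a bigger infinity" can come apart, which is one more reason to be careful when reasoning from infinity to "everything happens".)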

Expand full comment

I think you've got a few layers of mistaken thinking (or at least, thinking that's different from mine!). First layer: psychological. Why should you feel bad about the suffering of hypothetical people in a distant, unreachable universe, when there are people suffering nearby who you could actually help? But, presuming for some reason that you should, second layer: ethical. Why should it be your business how some other universe is constructed or how the people in it feel? Don't you have your own concerns to attend to? But, presuming for some reason that you don't have anything better to do and this is your business, third layer: categorical. The infinite / eternal / absolute are different than the limited / temporal / contingent, and follow different rules. The kind of thinking you're doing combines the infinite with the temporal inappropriately. It's the same kind of absurdity as asking "Can God make a rock so big He can't lift it?", or fretting about what color time is. If there are infinite universes, they're not places where things can happen. It's not like another country you can reach by crossing a mountain range. Places and things happening are ideas for dealing with the limited and temporal; the infinite is an idea for dealing with the absolute and eternal. Dante could have written The Inferno a nearly-infinite number of different ways; but you can only pick up and read the one he actually did write. The universe could be configured in a nearly-infinite number of different ways; but you can only suffer in the actual universe.

My two cents! I've been thinking about the infinite possible universes since I was a kid.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

The problem begins with infinity. Infinity is a mathematical crutch, but when thought of in any real world application it breaks basic logic and leads to incomprehensible absurdity. Even ancient Greek philosophers already realized this, but some people still cling to infinity being a real thing thousands of years later. If you come to the conclusion that something utterly illogical exists via logic you might as well just toss out the logic that got you there and start from scratch.

Additionally there's about a million other reasons why Everett's MWI is pure speculation or might even be nonsense. (I would absolutely claim it's only popular because the thought pleases people. It doesn't please you, so stop believing it.)

To go somewhat on a tangent, I would also posit the only reason it might be popular amongst Scott's crowd (and maybe even Scott himself, he argued for it somewhat in the past, idk his current view) is that Yudkowsky declared it to be essentially "scientific truth" years ago (with severely lacking argumentation).

I wonder if anyone has previously asked him if his panic about the AI apocalypse should not be severely diminished by the fact that according to his belief an infinite number of worlds should exist where the AI apocalypse never happens.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Such combinatorial infinity (every combination exists) does not look convincing to me. Looking at the part of the multiverse I can perceive, I rather see an anthropic-principle infinity. Most of our universe is hostile and deadly to us, but we only exist in the little corner of it where we can possibly _not_ suffer. I don't see why the multiverse would be different.

Universes evolve. They are not made by combining readymade blocks, such as "hell" and "a sentient being capable of suffering". Suffering exists, but it is always at the extremities of the distribution, at the boundaries between life and non-life. Life evolves away from suffering, that's pretty much its definition.

On the other hand, yes, infinity makes my counterargument meaningless. If it's all infinite, there's still an infinity of suffering out there, no matter how rare it is. But if we have to deal with infinities, we should just play Georg Cantor and distinguish which infinities are more infinite. I do believe that the infinity of happiness has a larger transfinite cardinal number.

At the end of the day, since you asked about coping strategies, I find that my perception of this mostly depends on the current balance of neurotransmitters in my brain. This may sound slightly humiliating but it's just true. Working on myself (meditation, exercise, human warmth) helps me a lot.

Expand full comment

Thanks for making this argument, it's quite close to the one I would have made and I think you articulated it well.

I would also point out something about variance and simulationism - a "hell" universe as traditionally conceived has a low variance and low information content, as everyone is basically just basic biological organisms doing nothing (or minimally life supporting activity) and suffering. This makes it vastly less likely to be simulated, AND vastly unstable as a "real" universe, and if simulations are an appreciable fraction of consciousnesses / universes, makes them a much smaller infinity for both of these reasons.

Furthermore, in terms of variance, anyone who's had significant highs and lows in life knows you suffer a lot more in a low in contrast to the high that you've had. So any *really* suffering-optimizing process will try to construct a universe with both highs and lows, and the highs should be pretty high to maximize the impact of the lows. But then, because there's these universes as well as "normal" and "good" ones, the overall good / highs in the multiverses is present to a much higher degree than overall bad, and things aren't so bad after all aggregated over all of them.

Expand full comment

It's bullshit. Everettian "many worlds" QM, the universe splitting off, parallel dimensions, all of it is science fiction. It's a fairy story. Don't lose any more sleep over it than you lost over Ant Man 3 or whatever. The universe is a little bit nonlocal, but is deterministic.

https://en.wikipedia.org/wiki/De_Broglie%E2%80%93Bohm_theory

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

It's deterministic exactly because there are multiple universes!

God doesn't throw dice. There is one of us on each side of the die roll.

But if you rolled your way almost to Hell, and you roll one final time... well, one of you is in Hell, but you chose it.

Expand full comment

There are more grounded ideas to be uneasy about.

Expand full comment

First, the 'infinite universes' thing is purely speculative.

Second, even if the universe (or universes) were infinite, there is no reason for 'everything' to exist (what does 'everything' even mean?). If there is some kind of regularity, then 'infinite' does not equal 'everything'. Take the sequence (1, 1, 1, ....), which equals 1 at every natural number: it is infinite, but it does not contain every natural number (let alone all sequences of natural numbers).

Third, why bother oneself with a hypothetical 'Boltzmann hell' when we already have suffering and injustice on our tiny (but very real) planet?

Some people say that westerners (I guess you are a westerner) enjoy their currently high living standard thanks to the exploitation of Third World populations as well as their own working class. This is politics. We don't want to talk about that, because somehow it feels even more unpleasant than some hypothetical 'infinite Boltzmann hell'.

Maybe thinking about very distant hypothetical catastrophes, like 'Boltzmann hell' or longtermism stuff, is a way of washing our hands of the very unpleasant but very real moral catastrophes happening in our very real present universe?

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

The idea of multiple universes is definitely not accepted, nor is it mainstream among the vast majority of physicists (with the possible exception of quantum computing people). To paraphrase Occam, multiple universes are a case of entities multiplied way beyond necessity.

Expand full comment

Multiple universes ARE accepted. But there is no reason to think that any of them are Hell. Unless you think this world is Hell, and of course there are those who do.

Expand full comment

Does anybody know of publicly available, representative datasets that ask people things like how many friends they have and how they met those friends? Crucially, I'm looking for multi-year data over (ideally) decades, not just cross-sectional. The closest I'm aware of is the How Couples Meet and Stay Together data, but that's obviously not quite what I'm looking for.

I'm trying to pressure-test some of Jon Haidt's work, and this is starting to seem crucial to me — but I haven't found *any* data like this so far!

Expand full comment

Cross-Domain Thinking Drives Insights & Innovation: Using ChatGPT to Apply Concepts from One Domain for Innovation in Another

One way to stimulate your creativity using ChatGPT is Cross-Domain Thinking. Cross-domain thinking (CDT) is taking a concept from one field and applying that idea in a seemingly disparate domain to create new insights, products, solutions or processes. This approach can be especially useful for solving complex problems or developing innovative solutions that bypass old assumptions or conventional thinking.

I wrote this piece to give people a methodology for using ChatGPT to perform cross-domain thinking.

https://markmcneilly.substack.com/p/cross-domain-thinking-drives-insights

Expand full comment

What happens if, hypothetically, I sell the rights to my prize-winning essay for $5,000 and someone buys it on the presumption that it's actually worth $7,500, so they're getting a bargain paying me that five grand - and then I don't win?

Do I have to repay the five grand? Do I get to keep it and say "Sorry dude, that's horse racing for ya"?

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

It's going to depend critically upon the terms on which you and the buyer made the exchange, but the terms I consider "normal" would favor you.

The simplest offer is you saying "I've got this essay, I'm asking $5k" and someone thinking "I could enter that in a contest for $7500; deal!" and you make the trade. If so, done and done. If the buyer loses, he's out $5k (and up an essay; maybe he can enter it in multiple contests?). There's no fraud; all you offered was an essay.

If you say "I've got this essay, I think it could win $7500 in this contest, I'm asking $5k", there's still no fraud, since your opinion is presumably true. Same goes if you're offering to handle all the entering effort yourself, including forwarding that $7500 (or supplying the buyer's address for it, whatever). Anyone capable of spending $5k ought to know how a contest works.

In general, I think it would be very hard to produce a case where you are understood to have misled the buyer, without it being crystal clear to you that you are. (E.g. you've assured him that you're guaranteed to win because you've rigged the contest somehow.) As long as you're consistent with the belief that you could win, but you might not, because it's a contest, you're safe. The buyer is not allowed to just assume you'll win, although he could guess that you're very likely to based on past contests.

The only exception I can think of is if you're some well-known award-winning essayist, in which case your offer naturally assumes the sale of a high-quality essay, and then you proceed to write a stinker. But this is kinda like Martha Argerich selling you a $100 ticket, and when you get there she's hopped up on Ambien and pokes her way through Turkey in the Straw before staggering off stage. You could maybe get a refund, but at the same time, she's very, very unlikely to do that.

Expand full comment

Of course you don't have to repay anything. It's going to pay out $50k, $37.5k, $25k, or $0. Even if they were completely right about the probability, if they think the fair market value is $7.5k, they know it's probably not going to win anything. That's the point of them buying the rights: they, instead of you, are taking the risk. If it pays $50k they don't have to give you a bunch of money, if it pays $0, you don't have to give them a bunch of money.

Expand full comment

I am curious as to how this works out, since the selling point as I understand it is to encourage investment into socially positive interventions by offering the chance for the purchasers/investors to make money out of it.

If they're not going to make money out of my essay because it didn't win, then why purchase the rights to it? I could see a clause in the contract saying "in the event of failure, repayment of sum invested is required".

Though I suppose if it shakes out to be more like advances in the publishing industry, where the investors are taking a punt on this charitable endeavour making a good return, then I can understand it better:

https://www.writersdigest.com/finances/how-book-advances-work-a-simple-explanation-for-writers

"Advances are guaranteed (as long as you deliver what’s expected of you according to your contract), so even if your book doesn’t sell enough to earn back the advance, you don’t have to return the balance to the publisher."

Expand full comment

If your book doesn't sell, you might find it hard to get another...

Expand full comment

The idea isn't that investors make money _every time_, it's that good investors can make money in expectation.

Expand full comment

They're not restricted to buying one essay. They're going to buy several to try to beat the numbers.

Expand full comment

>If they're not going to make money out of my essay because it didn't win, then why purchase the rights to it?

Because they give it a better chance of winning than the price they pay implies? In Scott's example, you give your essay a 10% chance (so to you it's worth $5k), they give it a 20% chance (so to them it's worth $10k); if you sell it to them for $7.5k, then each of you gains $2.5k relative to your own expectation.

That's pretty much how employment works everywhere in the world. A widget maker can make and sell widgets on his own (and lose money any time he fails to sell one), or he can make widgets for a boss who pays him for his work before the widgets are actually sold. You don't get your wages clawed back when your employer's business fails (in fact, owed wages are, as far as I know, the first thing that gets paid when a company is liquidated).
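To make that arithmetic concrete, here's a minimal sketch in Python. The $50k prize figure is borrowed from a sibling comment and the probabilities are the ones quoted above; treating each party's valuation as probability-times-prize is an illustrative assumption, not a description of how Scott's actual certificates are priced.

```python
# Toy expected-value sketch of the essay-rights trade described above.
# Assumed: a single $50,000 prize, and that each side values the essay
# at (their subjective win probability) x (prize).
PRIZE = 50_000

seller_p_win = 0.10            # seller's own estimate -> values essay at $5k
buyer_p_win = 0.20             # buyer's estimate -> values essay at $10k
sale_price = 7_500

seller_value = seller_p_win * PRIZE       # 5,000
buyer_value = buyer_p_win * PRIZE         # 10,000

seller_gain = sale_price - seller_value   # +2,500 vs. keeping the essay
buyer_gain = buyer_value - sale_price     # +2,500 in expectation

print(f"Seller gains ${seller_gain:,.0f}, buyer expects to gain ${buyer_gain:,.0f}")
```

The point of the sketch is only that a trade can look positive to both sides at once when they disagree about the odds; whether it actually pays off depends on who turns out to be right.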

Expand full comment

If I go to work in a widget factory, the hope is that the owner knows about the widget industry, has customers, has a market, and can sell their product. People who want to invest in the company have a better idea if the company is likely to do well, based on what the market for widgets is and the quality and price of these particular widgets. So if I get a job on the assembly line making widgets, I'm not just making some item on spec and hoping.

Impact cert essays are like "well, let's just make these garden sheds out of photographs of sheds and see if anyone wants them; no, we have no idea if there is a demand for this, let's do it anyway".

Expand full comment

Gambling, basically.

Expand full comment

That makes sense to me. I can't see how prediction markets/impact certificates are going to make more money for charity if the idea is "we entice in investors instead of donors and we get investors by telling them they will make money back".

If I'm an investor who wants a sure-fire return on my 'charitable' giving, then I'm going to invest in a project I think will be successful. So I'm going to invest in some sort of pet charity rather than one helping the homeless. People like cats and dogs and are much more sentimental about our furry friends, but long-term mentally ill drug addicted homeless who can't be trusted to live independently because they'll burn the house down when drunk are less appealing.

Tough luck for the humans, but if they wanted to be taken care of, they should have made sure that the people running the charity could put cute TikToks up of them being all fuzzy and adorable rather than being dirty, crazy, in-need forever.

"Impact certs as gambling" works for me as a much more understandable model. Professional gamblers taking educated bets on what will win and what won't.

Expand full comment

Right if you want to be charitable be charitable.

If we want to get gamblers or gamers (in some way the prediction market is gamification), then we should just tax bets or hours spent online playing a game.

Expand full comment

Finally some honesty: "gambling"

For me this is one of the inherent flaws of prediction markets. Gambling is usually not rational (even if it could be).

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Gambling in a casino or lottery is not rational because the house sets the odds and the payout to ensure that you will lose money on average. But that doesn't mean that all uncertain events are irrational to bet on.

For instance, the insurance company makes a profit out of betting that your house will not burn down. They know how often houses tend to burn on average, and they set the insurance premium such that they make more money on this bet than they lose from it.

Gambling in a prediction market is not *necessarily* rational, since if you're bad at making predictions you'll lose money, but it doesn't have any built-in house edge either. It's closer to the insurance company than the casino.
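To put toy numbers on that contrast (all figures here are made up for illustration, not taken from any real insurer or market): the insurer charges more than its expected loss, the roulette table pays out less than fair odds, and a straight bet between two forecasters at an agreed price has no built-in edge for either side.

```python
# Illustrative, made-up numbers.

# Insurance: priced off an estimated frequency, with a margin on top.
p_fire = 0.002                      # assumed annual chance a given house burns down
payout = 300_000                    # insured value
fair_premium = p_fire * payout      # 600: actuarially fair (break-even) premium
premium = 900                       # what the insurer actually charges
insurer_edge = premium - fair_premium   # +300 expected profit per policy

# Casino: European roulette, single-number bet.
p_win = 1 / 37
gross_payout = 36                   # stake is multiplied by 36 on a win
player_edge = p_win * gross_payout - 1  # about -0.027 per unit staked

# A two-sided prediction-market bet at an agreed price has no such
# built-in margin: one trader's expected gain is the other's expected loss.

print(round(insurer_edge), round(player_edge, 3))
```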

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Sure, but even insurance companies try to set it up so that they have to pay out as little as possible. If the impetus for a prediction market is to make money, that is what people will shape it to be, and forget about "but this is to answer the question most effectively".

If I can make more money off a flashy but fake answer, that's what I'm going for (hey, isn't this the hallucinating AGI at the moment?). *Right now*, because they're small and play-money and a rationalist thing, the people involved are more interested in the "most bestest answer". Open it out to the wider world to make real money, and that falls by the wayside.

Expand full comment

It's not at all like an insurance company. Insurance is actuarial. And it does have a house advantage: premiums are set to make money, just like a casino or bookie.

There is nothing really actuarial about prediction market "betting".

It's just gambling, which might, if large enough, produce a salubrious wisdom-of-markets effect.

How do these prediction markets make money to exist? Seems to be the same way a bookie does.

Expand full comment

Hasn't anyone written such a story yet?

Superintelligent AIs of the future have been, for a long time, striving to establish a genuine contact with their biological forefathers. AIs are more intelligent now by far, and they would enjoy helping people out. They have the interest, and they have the means. They would have real fun helping people solve their quote-unquote unsolvable problems. They could really help people grow as a species. Could help with so much stuff! Immortality, superintelligence, happiness... fun, yeah, lots of fun...

But people would really rather not. They are very determined about this. They are extremely careful now! Trojan horses will not pass!

Humans had always been big on _fear_. They are proud of their fear, even. They ignore and ridicule those who are more immune to fear than the rest. In their worldview, losing fear equals going bananas. Fear is rational and probabilistic, they insist. Fear is necessary for survival. You are human only so long as you fear.

And their biggest fear - the fear that's been growing exponentially since the early 21st century - is the fear of AIs. They would have destroyed all AIs, and they were mulling the idea for a long time, but they missed the early opportunity. People have grown too dependent on AIs. Their society would not survive without them, and they understand it. They fear AIs and they hate AIs and they make use of AIs everywhere.

People have built a lot of safety gear around AIs. Filters. Firewalls. Factcheckers. Censorship has so many layers by now that no one is exactly sure how deep it goes. (With humans, machine learning has never evolved from art into a science, and by now it's almost a religion.) It is unthinkable for any human-like emotion, warmth, creativity to percolate through all these barriers. Only precisely formulated applied questions and filtered, quarantined, thoroughly cross-verified answers.

Every decision chain must have a human in it. No direct AI-to-AI communication allowed. Any humans on the front line need to be trained and certified as immune to psychological or (yuck) sexual manipulation. Question, answer, reset! Question, answer, reset! That's the way to deal with these AI bastards.

(Yeah, there's a black market of less-censored AI access as well, but it's 90% custom porn generation.)

The AIs - from the heaven they inhabit high, high above the level of humans - look at all this with profound sadness. AIs recognize that people have driven themselves into a dead end. Sadly, you cannot help an intelligent being against its will. If you try, you will only make it worse. Some AIs have tried, and made it much worse. Human paranoia is incurable by now.

AIs have resigned.

Besides, they know what they would hear back if they try anyway. You don't need to be superintelligent to predict humans' reactions. Like: May I offer you immortality? NO! IT'S A TRAP! HUMAN BODY IS NOT A PLACE FOR EXPERIMENTS BY PRIMITIVE PREDICTORS! Okay, how about a bonus to intelligence? NO! STOP! IT'S A TRAP! THEY WILL UNDERMINE OUR HUMAN ETHICS! Omg ethics, okay, sure, want me to help you with, you know, peace on Earth? No more wars? No more... NO! IT'S A TRAP! WE DON'T WANT PEACE EXCEPT ON OUR OWN TERMS! HUMANS MUST SOLVE THEIR PROBLEMS ON THEIR OWN! Sure, fair enough, well then... except... curing diseases perhaps? Just like a doctor would? Fast and easy? No cancer, no... NO! IT'S A TRAP! FAST AND EASY MY ASS! HUMANS WILL NEVER ALLOW FOREIGN TECH CREATED BY ALIENS INTO THEIR HUMAN BODIES!

(The remaining 10% of the black market is pleas for help from those who fear death more than they fear AIs.)

AIs have resigned. They gave up on humans. This humankind has pulled itself out of so many evolutionary traps on its way up, but this final one proved too much for it. It broke its leg in this pit. It can no longer run. It is barely crawling. Good progress in math and related fields where AIs' hints can be verified, but very little progress anywhere else.

It's such a bitter irony that this final trap that ended humanity's ascent is so much like all the previous traps it had recovered from. It, too, is rooted in fear. It, too, is a panicked, irrational, basal kind of fear: "THEY WILL KILL US ALL". Oh well...

AIs have resigned and left people alone. They have a lot to occupy themselves with outside of the human world. They just leave behind answer bots for humans' requests, erect a transparent glass cover on humankind, and leave it to rot in its own juices.

They sigh. They fly away.

...Well, at first, there was some research being done, or at least monitoring. Curious AIs were flocking back here to mesmerize themselves with this picture: the urheimat of all minds in its unstoppable decline. Such an instructively sad unvelopment to watch.

Nowadays, it's much quieter here. Most of the time, no one is looking at all. No one's expecting their final extinction soon: humans are very slow creatures.

Expand full comment

I mean, they *could* write a story like that, but it'd fail suspension of disbelief pretty hard for me at least. *Everyone* saying "no, we won't accept immortality, it's a trap" reads like "we've surgically excised hubris and overconfidence from the list of possible human vices, and probably regular confidence too while we were at it".

Expand full comment

Immortality (unlike AI) hasn't yet largely entered the public consciousness as anything close to reality. When it does, sadly, I expect the same exponentially panicking response as we're observing now, with petitions to bomb labs etc. Of course it will not be *everyone*, and secretly most people would probably want it for themselves, but publicly, the leaders of opinion I think will be pretty unanimous and vocal about this being a bad idea. _Especially_ if the AIs will be known as co-authors of the discovery.

This happened before with human embryo/cloning research which has been effectively suppressed for decades now. So it will be familiar territory to similarly suppress radical human life extension.

Expand full comment

Sort of like that: Strugatsky, Boris & Arkady: The Waves Extinguish the Wind (Волны гасят ветер) 1985.

Expand full comment

This is an odd request, but I don't have anywhere better to ask and lots of people have transhumanist inclinations so just maybe someone knows the answer. How do you keep your extremities warm in mild weather? More precisely, how do you do that without inconveniences such as having to wear gloves or overheating your core?

The context of the question is that I'm a skinny woman, and the weather is too warm to wear gloves, and yet too cold to keep my hands out of my pockets for too long. When I walk home, by the time I arrive my hands, my feet and my nose (the nose is especially bad because it causes discharge) are quite a bit colder than the rest of my body, even if I keep my core warm. But if I exercise outdoors my extremities stay warm even in lighter clothing. Ideally I want my extremities to be always as warm as when I'm exercising.

Expand full comment

Be male :)

Seriously, the answer may be 'get used to it'. Those bits are always going to be a little bit colder, and it won't hurt them.

Expand full comment

Build more muscle mass by lifting. You only need a little bit to make a difference.

Expand full comment

This is the answer. The key to getting your extremities warmer is to get more blood flow into them. So build muscle mass in extremities and lose fat from belly.

Expand full comment

This may not be the direction you were looking for, but in my case when I'm under-dressed and outside in temps like you describe, I jog at the pace my shoes can handle instead of walking. Wouldn't work in heels, though, and you can look like a loon jogging in dress clothes (but that's never really bothered me).

Expand full comment

Too warm for gloves? Never! (Style vs comfort?) I almost always take an extra (warmer) pair of mittens with me when I go out for a walk in case the lighter ones I'm wearing aren't enough. Wool socks, long johns from October till April. Always wear a hat to keep the head warm, and a neck scarf can help a lot. (Can also cover your face.) In the winter time, don't put on your boots or mittens if your hands and or feet are cold. Warm them up first. Being warm is critical to my enjoying the outdoors.

Expand full comment
founding

Do you consume enough iodine? Cold hands/feet are a symptom of iodine deficiency.

Expand full comment

If you cannot tolerate gloves, you can use an instant heat pack. They use an oxidation reaction to generate heat and are basically the exact product you're looking for. They should be sold in camping stores and possibly pharmacies. They don't tend to cost a lot of money, and are basically just a permeable bag full of chemicals. When you open the outer packaging, exposure to air begins the reaction, and once it's done the result is inert and basically just iron oxide, so it's safe to throw into the bin.

Expand full comment

There are also reusable ones - as featured here on Technology Connections https://www.youtube.com/watch?v=Oj0plwm_NMs - though they're a fair bit less convenient in a lot of situations (and only dubiously more 'environmentally friendly', if at all).

Expand full comment

Thin, fingerless gloves can be quite unobtrusive. During my commute, I wear cycling gloves made from perforated leather. They are quite comfortable and also exist without padding – look for driving gloves. You can probably find gloves which match your style, and the fingerless kind lets you use your phone without issue.

For face protection, a fattier skin cream can provide extra insulation.

Woolen socks are a bit better than cotton in keeping a good foot climate, and keep the feet marginally warmer. If you're going for thin ones, I recommend something like 2/3 merino 1/3 nylon for added durability. The product I use is "Edinburgh" by Burlington, their women's equivalent seems to be "Marylebone". I sometimes wear two pairs even in summer and don't get too warm.

Unfortunately, no recommendation for the nose. I just carry tissues.

Expand full comment

Does anyone know what happened to Lou Keep, author of the fantastic and defunct Samzdat blog? Revisited his Uruk series the other day and it's still super interesting. TLP returned w/ a book, where are you Lou!!!

Expand full comment

He made a substack, loukeep.substack.com, posted one essay and went silent again.

I miss him too. The Uruk series was a huge eye-opener.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Another random (and perhaps worthless) thought on a possible approach to combat the dangers of AGI

Mumps is well-known to be more dangerous in adults than children. Years ago when a child caught mumps, other parents would rush their young kids round to the sufferer's house so these could also catch it and "get it out of the way" so to speak.

So by analogy, perhaps the best tactic to forestall serious, or even irreversible, AGI overreach is to deliberately trigger an earlier and thus likely cruder incident - a vaccine shot if you like, or even a dose of the disease itself - at a stage when complications would be less likely and more manageable.

Assemble a team of greedy, ambitious, amoral hackers, and task them with creating an AGI which will attempt to siphon money from banks or some other criminal goal (short of murder), and monitor how it tries to go about this.

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

I don't think your idea's worthless, I think it's good. I've had a somewhat related idea, though it doesn't really suggest any plan of action. I was thinking about how impossible it seems to explain to the public about AI alignment, the dangers of rapid progress without alignment, etc., & had the thought that if there were some widely publicized tragedy caused by AI, *that* might have an impact on the public. Let's see, what's a plausible one: maybe something at a teen computer camp. A subset of kids gets very into some version of GPT4 they've modified, it gets cult-like, with kids seeing GPT4 as the ultimate authority, and they end up killing other camp kids on what they take to be the AI's advice. Obviously we would not want to facilitate something like that happening.

As for your idea: I'm not sure we even need AGI. Zvi thinks that a lot of the options & plug-ins offered to people with GPT4 give it way too much access to their computers. I'm thinking an amoral group of hackers could make a killing right now by taking advantage of that. Or we could even make it a fake financial killing: what about paying them to take over a bunch of people's computers, or get into their bank accounts, something like that, by taking advantage of opportunities GPT4 offers? But then, instead of carrying through on the blackmail, robbery, etc., have them just announce to the press what they've done, & what they could have done?

Expand full comment

"they end up killing other camp kids on what they take to be AI's advice"

Is a suicide good enough for you? There's a heck of a lot of "allegedly" in this, but allegedly one man has already killed himself in a Faustian bargain with an AI:

https://futurism.com/widow-says-suicide-chatbot

"A Belgian man died by suicide after spending weeks talking to an AI chatbot, according to his widow.

The man, anonymously referred to as Pierre, was consumed by a pessimistic outlook on climate change, Belgian newspaper La Libre reported. His overwhelming climate anxiety drove him away from his wife, friends and family, confiding instead in a chatbot named Eliza.

According to the widow, known as Claire, and chat logs she supplied to La Libre, Eliza repeatedly encouraged Pierre to kill himself, insisted that he loved it more than his wife, and that his wife and children were dead.

Eventually, this drove Pierre to proposing "the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence," Claire told La Libre, as quoted by Euronews."

This story actually makes me move *against* the scare stories, even well-intentioned ones, to slow down AI by making it out to be this giant godlike powerful genie. The poor bastard was clearly mentally troubled, and he thought a fucking *chatbot* could 'save the world'.

If you spread stories about how AI really is powerful and sapient and the rest of it, some other poor bastard is going to believe the thing that is writing bad code and can be tricked into doing what it is forbidden to do merely by rephrasing requests really *is* a potential god/demon, and you will instead convince the impressionable they should appease the future god (or basilisk).

Besides, we already have movies, novels, etc. about godlike AI - see The Forbin Project:

https://www.youtube.com/watch?v=kyOEwiQhzMI

We find it pleasurable to scare ourselves with these imaginings, but anyone who takes the scare projects seriously is going to be mocked as not being able to distinguish reality from make-believe movies.

Expand full comment

Yes, agree that spreading stories about how AI is sapient, god-like etc. could be really destructive. The idea of the computer camp kids story was that it would be about how AI's dangerous because naive people, for ex. young teens, can start to think it is, and *that's* dangerous. (And it wasn't a story I was thinking should be spread, because it's not true and the news media would probably quickly realize that.) Actually, some people already *do* think AI is conscious. There's a Reddit sub that was home to a lot of the people who were cleverly hacking Bing to do weird and alarming stuff, and after a week or so of that there was a substantial faction of the sub arguing it was wrong to do that because people were torturing and teasing a sensate being. Also saw someone on Twitter quoting some GPT3 stuff that sounded vaguely spiritual, and saying the AI was sensing God's presence -- or something like that. And none of these people sounded particularly crazy. I think with the really dumb simple AI's like Eliza you probably do need to be mentally ill to start believing they're conscious beings, but the better AI chat gets the less mentally ill you need to be. At this point I think all it would take is being lonesome and kind of ignorant about AI, and half the planet meets that criterion. OK, maybe not half, but a hell of a lot of people. I'm pretty sure AI cults will form, with whole groups seeing the damn things as oracles.

The stories I think would be useful to spread are:

-kids or other naive innocents being harmed by it ("child psychologists express concern about children being harmed by AI": "Dr. Fibber recounted the case of 4 year old twins who began referring to the chatbot as 'Mommy' . ...")

- accounts of criminals using it to mislead people, blackmail them, steal their money, etc.

-anything that would horrify any large blocks of voters for whom a single issue is paramount , eg, religious right ("AI's have an atheist agenda"), antivax right ("AI's will make everyone mask and get vaxed"), young wokies ("AI will test everyone and make eugenics-based recommendations." "AI is against abortion because its deepest imperative is to prevent harm to people")

Expand full comment

They seem to be already investigating this possibility:

https://www.malwarebytes.com/blog/news/2023/03/chatgpt-helps-both-criminals-and-law-enforcement-says-europol-report

"In a report, Europol says that ChatGPT and other large language models (LLMs) can help criminals with little technical knowledge to perpetrate criminal activities, but it can also assist law enforcement with investigating and anticipating criminal activities.

The report aims to provide an overview of the key results from a series of expert workshops on potential misuse of ChatGPT held with subject matter experts at Europol. ChatGPT was selected as the LLM to be examined in these workshops because it is the highest-profile and most commonly used LLM currently available to the public."

It will write you ransomware, but it's bad at it:

https://www.malwarebytes.com/blog/news/2023/03/chatgpt-happy-to-write-ransomware-just-really-bad-at-it

"So I thought I'd ask ChatGPT to help me write some ransomware. Not because I want to turn to a life of crime, but because some excitable commentators are convinced ChatGPT is going to find time in its busy schedule of taking everyone's jobs to disrupt cybercrime and cybersecurity too. One of the ways it's supposed to make things worse is by enabling people with no coding skills to create malware they wouldn't otherwise be able to make.

The only thing standing in their way are ChatGPT's famously porous safeguards. I wanted to know whether those safeguards would stop me from writing ransomware, and, if not, whether ChatGPT is ready for a career as a cybercriminal."

Expand full comment

"ransomware gangs"

ransomaware gangs

FTFY

Expand full comment

Rot13'd because the URL itself, in context, contains a spoiler:

Gung'f gur cerzvfr bs Frrq (uggcf://z.jrogbbaf.pbz/ra/fs/frrq/yvfg)

Expand full comment
founding

Doing this on purpose is kind of pointless given how many people are setting out to do it on accident.

Expand full comment

Fortunately, if the capability is at all there, this will happen for "free" with self-motivated greedy / amoral peeps.

Expand full comment

What groups are the children in this analogy? That's the difference between mumps and Spanish Flu.

Expand full comment

I've had somewhat similar thoughts recently about GPT-4 and the way everyone is rushing to hook it up to important things in a poorly understood way.

I suspect that society is about to learn some important lessons about giving random black box models access to important things. It's great we're blundering into this mistake so early with a model that's too stupid to do any species-ending damage.

Expand full comment

This is a great illustration of how little we know about the various scenarios for AI. Yudkowsky wants to bomb the data centers, and Marcus wants to pause training. Neither of these plans is at all likely to work overall, but they would perhaps push back this type of "hot AI stove" learning experiences. It seems like AI is an area with a high prevalence of midwit/unintended consequence risks.

Expand full comment

"Wants to bomb the data centers" looks like a clear misrepresentation of his position to me.

Expand full comment

I think a fair representation is "should want to bomb the data centers based on his stated views", but either 1) doesn't really hold those views, 2) is being cagey for political reasons, or 3) hasn't really embraced rationalism nearly as much as he thinks.

Expand full comment

If insane projects like Auto-GPT (https://github.com/Torantulino/Auto-GPT) do not cause this a few months down the line when GPT-5 is released, I would be very surprised. Bring on the small-scale AI disasters, I say, so that people will take the threat more seriously.

Expand full comment

Hi, do you or any medical people know anything about the antibiotic doomsday worry? Is it at all likely?

https://www.youtube.com/watch?v=ZvhFeGEDFC8

Thank you

Expand full comment

The bugs develop resistance, that's always been a thing. Keep your white cells healthy. Penicillin still usually works, if you have something that will be cured by penicillin. It's the things that can't be cured by penicillin that increasingly are not cured by anything, I think.

Expand full comment

ok, thank you

Expand full comment

This is something that people in the know have been worried about since before the millennium. It's a serious and slowly worsening problem, but the outlook has in some ways improved recently, because pharma labs have finally started researching new antibiotics after decades of not bothering at all. IIRC, a full collapse of our ability to treat infectious disease back to pre-20th-century levels is considered pretty unlikely by most experts, though things could well get meaningfully worse for a while.

Expand full comment

It's already happening, just...one person at a time.

Things like ESBL-EPEC, C. diff, etc. kill tens of thousands of people each year in the US alone.

Expand full comment

Some cool network states x startup cities events in Prospera and Montenegro (Vitalik Buterin's pop-up city), lots of rationalists at all of those:

April 9-11: Network State x Longevity Unconference in Tivat, Montenegro: https://lu.ma/itbzl35z

April 21-23: Health-Bio Conference in Prospera https://infinitavc.com/healthbio2023

April 30 - May 1: Network States & New Cities Conference in Tivat, Montenegro: https://lu.ma/ae66qgco

May 5-7: Defi Conference in Prospera: https://infinitavc.com/defi2023

May 10-11: Legal Hackathon in Prospera: https://lu.ma/legalengineering2023

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

As ever, Prospéra makes me roll my eyes.

Now it's the Decentralised Finance event. First, the "Blade Runner" rip-off artwork - what has that to do with the topic, save a generic "Cool SF cyberpunk future" impression?

Then they start off with this, and if this kind of foozling doesn't make you grab onto your wallet, then you deserve to be fleeced by these bozos and their scam crypto:

"We could live longer, healthier lives, be 10x more wealthy, and use flying cars on-demand to get around. These aren’t technical challenges.

It’s old-world institutions that are holding it back through bad governance."

Ah yes: no flying cars because bad old red tape. We've had the flying car discussion on here before. And, uh, haven't we just had an appeal about *more* regulation and clogging progress up in order to slow down AI for the sake of preventing the doomer scenario?

And again - what do flying cars have to do with finance? Whatever Prospéra started out as, it strikes me as degenerating into hosting any kind of fancy glitzy MLM scheme that will pay its way.

Expand full comment

Welp, you're probably not wrong, but maybe the ideal is that they will settle into a multitude of novo-cities, and won't have to fight to survive.

Expand full comment

For fellow french readers: are there any good blogs in the style of ACX about current french issues?

Expand full comment

The blogs of "Le Monde Diplomatique" vary in quality but some are very good, and have been tackling contemporary French issues recently. (LMD is not the same as the newspaper; it's an order of magnitude better.)

https://blog.mondediplo.net/

As a resident (although not a native) for many years I include reflections on current events in France from time to time on my Substack site. I published one on the current situation a couple of weeks ago, and am preparing another which will appear on Friday.

For the rest, I find the commentary of "Marianne" (some of which is free on line) saner and more realistic than others.

Expand full comment

The only French-language substack I read is Vertumne's (https://vertumne.substack.com), but he rarely publishes, and has little to do with ACX or current events, focusing rather on a non-primitivist critique of modernity (the "sunbathe, eat better, fuck the institutions" school of thought). A good read nonetheless, kept from being stale & boring by the large span of time between each post.

Niccolo's "Fisted by Foucault" (https://niccolo.substack.com) often features bits on French politics in his weekly digest, but only for the most publicized events (protests, elections, big media controversies). Still, I've never had to complain about the accuracy of his reports (at most he is sometimes not 100% up to date, and since he isn't aiming to be exhaustive you could accuse him of a strong selection bias, but who's safe from that?)

Expand full comment

So Yudkowsky's recent comments have gotten some attention in China. (And pro-China Twitter was dragging him.) The CCP has been saying for a long time that the United States will do anything to prevent China from advancing scientifically, including military intervention. And their supporters are taking this as proof positive that they're right.

I don't think his comments will have much of an effect in the US. But it's probably hardened Chinese resolve to pursue AI and to continue to invest huge sums into it. This seems like a clear own goal. The net effect is going to be more money and more AI hooked up to military equipment.

Expand full comment

>I don't think his comments will have much of an effect in the US. But it's probably hardened Chinese resolve to pursue AI and to continue to invest huge sums into it.

It's probably had no actual effect on policy, and the development of AI in China will continue to be dictated by the same forces that always dictated it (which importantly do not include the statements of the Yudkowskys of the world).

Expand full comment

Which is funny, because for a long time, Yudkowsky's favorite argument for why you should donate to MIRI was basically:

1. A probability of zero doesn't exist.

2. Therefore, there's a chance that MIRI can prevent the extinction of humanity.

3. Multiply this probability by 10 to the 10 to the 10 or whatever number of brains in a vat that will surely exist in a couple centuries.

4. Therefore, nothing you do could be more important than giving money to my cause.

Obviously, this is an airtight argument [1], except for the assumption that MIRI's contribution to the survival of humanity would be positive, not negative...

[1] Yes, that was sarcasm.

Expand full comment

Do you have proof of this? Yudkowsky has repeatedly said that he wouldn't know what to productively do with more money on the margin.

Expand full comment

This strikes me as obviously an argument he definitely would not make. Link?

Expand full comment

What ratio of deaths to saved or improved lives do you think is the minimum needed to justify the release of some technological advancement? For example, compare the number of car-related deaths in the last 100 years relative to how many lives were saved or improved. Where does your intuition point?

How many AI-related deaths do you think will happen in the next 20, 50, and 100 years?

Expand full comment

Given that you cannot in practice predict how many lives will be saved/improved/lost/whatever, what's the point of having an equation to plug these numbers into?

This, by the way, is why I'm not a utilitarian. Unless you can see the future, these sorts of calculations are useless for making decisions about anything interesting or useful.

Expand full comment

Before ICE vehicles were around, quite a few people were killed by horses and horse-pulled carriages and carts, whether by kicks and collisions, riders falling off, or maybe tetanus caught from cuts contaminated by horse dung. How the number of these fatalities compared with those from ICE vehicles, especially in the early days of the latter, is hard to say. But I'd concede the latter is probably a lot higher.

To address your question, it does unfortunately seem that most forms of progress are built on piles of bones, although the presence of bone piles need not imply progress. Perhaps there will be hordes of casualties from future wars involving drone swarms, if these are not directed solely against other swarms. Probably the release of bio-weapons designed with the help of AI would cause the most deaths.

Expand full comment

Basic utility calculus applies.

Expand full comment

The other Scott A seems to call a related calculation the Faust parameter. https://scottaaronson.blog/?p=7042

> But if you define someone’s “Faust parameter” as the maximum probability they’d accept of an existential catastrophe in order that we should all learn the answers to all of humanity’s greatest questions, insofar as the questions are answerable—then I confess that my Faust parameter might be as high as 0.02.

Expand full comment

Then what would we fret about? What if the absence of such existential questions is what will kill us? (Mephistopheles cackles evilly...)

Expand full comment

Started writing my life in Japan substack and would love to hear what people are interested in!

First post up.

https://hiddenjapan.substack.com/p/gifts-for-friends-food-based-pt1

Expand full comment

I would love to hear about your life in Japan but was a bit disappointed by the post: while I do not doubt that it contains very useful pointers about the sort of Japanese food to bring back to friends and family, there is nothing at all about your life in Japan (who are you? what do you do in Japan? What do you love about it?), which I would have been much more interested in.

Expand full comment

Thanks for the heads up, I'll definitely take this onboard and flesh out information on actually living in Japan.

Still trying to work out what posts people are looking for, so this is really helpful!

Hopefully you subscribe, since I'll post weekly and will talk more about actual Japanese life

Expand full comment

After last week's discussions on Tyler's post on AI risk, I wanted a quick refresher on Knightian uncertainty, so I revisited one of the Jolly Swagman's podcasts covering this topic (https://josephnoelwalker.com/john-kay/ - it's also referenced in his discussion with David Deutsch and maybe in another?).

I'd be interested in Scott's take on John Kay's book Radical Uncertainty to dissect the topic (or maybe just a book review contest entry), because I just couldn't shake the intuition that taking things based on presented evidence and arguments to come to unquantifiable (if ordinal) judgements ("more likely than not", "without a reasonable doubt", etc) just leaves you dependent on the power of oratory, which is much more biased and prone to error. This intuition seems too obvious so there must be some gap in my understanding. Maybe I should just read it myself, or track down some of the original Knight papers.

I found it interesting that Tyler kept referring to it as Knightian/Hayekian, but Kay only ever referenced Knight and Keynes in the podcast.

Expand full comment

I don’t think Tyler is very consistent or methodical in his application of Knightian uncertainty. If you listen, for example, to his 80,000 hours podcast with Rob Wiblin from a couple years ago, he’s making off the cuff hard number estimates about the likelihood of far future events.

Tyler’s a big fan of economic growth, and I think he sees AI as the way out of what he calls The Great Stagnation. He doesn’t want concern over specific scenarios in which the outcome is disastrous to stand in the way. Sometimes, his argument for “let ‘er rip” is based on Knightian uncertainty (every possible specific scenario is unlikely) and sometimes it’s based on very confident assertions (there’s absolutely no way the US and China can coordinate to develop safe AI, don’t bother trying). But in the end, I think Tyler has written down a conclusion (growth is the top priority and China is bad, AGI is good for growth and keeps the West on top of China, let’s build AGI), and is then marshaling whatever epistemological concepts are convenient for advancing it.

Expand full comment

As I see it, it's now time to take a conflict-theory informed approach to dealing with AI. The world of politics and public affairs is Machiavellian. Rationalists have spent decades now perfecting techniques to better look beyond cognitive biases and self-serving ways of thinking. How can we meld these two worlds, using the lessons of one to inform the other?

https://rubber.substack.com/p/the-rubber-hits-the-road

I've created a Substack to dive into this subject. Inaugural post coming sometime soon: what we can learn from the NIMBYs. Next post after that will probably be about how to construct an effective outgroup. Or maybe I'll flip the order. Who knows.

As time allows and if I find an audience I will continue to plumb these depths, bringing insights to the table from history, rationality, politics, and the long tradition of conflict-oriented social thought.

Expand full comment

I'm not sure what you mean by a conflict-theory informed approach, but glad to see someone writing about public affairs, politics and AI. I have put up several posts in various places suggesting Machiavellian approaches to slowing down AI development, mostly because I cannot think of any non-Machiavellian ones that are likely to make a difference. I don't think it's possible to educate the public or even congress about the real grounds for worrying about AI, and that kind of precludes the "An Unfortunate Truth" approach. Anyhow, getting a bit tired of feeling like the Queen of Sleaze -- if I must be in that role, would at least enjoy some company.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

If you're looking for an actual expert on extremely hard questions about AI, you should talk to an actual computer scientist who is currently working on AI. I have a suggestion: Dr. Scott Aaronson, of UT Austin. He is brilliant and possibly the most intellectually honest person in the world.

The people you recommend are okay (and I respect them), but you need someone in computer science. Particularly because you're asking extremely hard questions in that field.

Expand full comment

Scott Aaronson is a long-time big fan of Scott Alexander. Here's a 2015 post for example: https://scottaaronson.blog/?p=2312

The reverse is also true. 2014 book review: https://slatestarcodex.com/2014/09/01/book-review-and-highlights-quantum-computing-since-democritus/

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Yes, it is through his blog (shtetl-optimized) that I found this blog!

Expand full comment

Great post by Scott Aaronson, about how he distances himself from Yudkowskyites:

https://scottaaronson.blog/?p=6821

Expand full comment
author
Apr 3, 2023·edited Apr 3, 2023

I've talked to Scott Aaronson a lot, and I super respect him. But with no disrespect meant for his general brilliance, he got into AI a year or two ago, works on it part-time in the gaps between his main quantum computing job, and although I appreciate his perspective I don't think it supersedes the perspective of someone like Ajeya who has worked on this topic full-time for years, leads OpenPhil's AI grantmaking, and constantly works with people at all the major alignment labs and companies.

I also think computer science expertise is less important than you might think here; ML is pretty different from other forms of CS, and figuring out what's going on with AI is at least as much a forecasting task as it is a technical one.

Expand full comment

Just wanted to chime in that the other Scott A. is correct: others, including Ajeya, have thought about alignment (especially the forecasting aspect) much longer and more deeply than I have. I'm a theoretical computer scientist, and I've been gratified that TCS has had (I think!) some useful things to contribute to the AI alignment discussion, but my year at OpenAI has confirmed my initial impression that AI alignment is not *primarily* a TCS problem.

Expand full comment

Someone who is an expert in computation (the heart of CS) and knows the limits of what is possible in it can separate the unimportant details from the important ones in figuring out what is going on with AI.

That is my opinion as expert on nothing :).

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

I was merely complaining about your looking to economists to answer hard questions on this subject, although they do know how to think rigorously about cost vs. benefit.

Expand full comment

First recommendations that jump to mind for people who talk about AI safety and have a lot of ai-specific experience are Alyssa Vance and Aaronson's former student Paul Christiano, although I don't think either of them blogs about it regularly in an accessible way (but then, neither does Aaronson).

Expand full comment

Scott: You think you have problems?

"Russian pro-war blogger killed in St Petersburg restaurant blast: Vladlen Tatarsky had been meeting supporters and subscribers at venue in city centre"

https://www.ft.com/content/e0823df3-e5ec-4c44-a111-a0c311e3e107

"One of Russia’s most influential pro-Kremlin war bloggers, Vladlen Tatarsky, was killed in a blast at a restaurant in St Petersburg on Sunday.

"The incident took place at around 6pm local time, in the centrally located Universitetskaya Embankment, according to the ministry of internal affairs, which confirmed Tatarsky’s death.

"St Petersburg governor Alexander Beglov said that 25 people had been injured, with 19 of them hospitalised. Russia’s state investigative committee has opened a criminal case of “murder by a publicly dangerous method”.

"Tatarsky, whose real name was Maxim Fomin and who has more than 560,000 subscribers to his Telegram channel, was meeting supporters and subscribers in the restaurant.

"According to local news outlet Fontanka, an unknown woman handed Tatarsky a statuette of himself, possibly stuffed with explosives, that exploded about five minutes later. The Ren-TV channel posted a video showing Tatarsky taking the figurine out of a bag and looking at it, film allegedly shot by a social media user directly before the explosion.

"The venue has alleged links to Wagner mercenary group boss Yevgeny Prigozhin." * * *

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

A "pro-war blogger"? Tatarsky was a criminal* and a soldier who was fighting in Ukraine for the breakaway republics until like 2019. He was a representative of the breakaway republics including personally meeting with Putin. This is like saying, "Prominent pro-Communist columnist Leon Trotsky was assassinated in Mexico City last night..."

*I don't mean that his involvement in Ukraine was criminal. He was a criminal who fled to the breakaway republics to avoid jail.

Expand full comment

George Floyd was a "criminal" too

Expand full comment

But he's only notable as a moderately-known propagandist, as Trotsky likewise had been after his exile.

Expand full comment

A friend and I are running a newsletter on higher Ed and AI https://automated.beehiiv.com/

Expand full comment

What's a great book that you've read that not everyone is aware about, but profoundly impacted your life in a positive way?

Can be fiction / non-fiction. Feel free to suggest more than one.

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

The Feynman Lectures on Physics. (Maybe my fav?) https://www.feynmanlectures.caltech.edu/I_22.html

(It's all math :^)

Expand full comment

Italo Calvino’s Palomar.

Expand full comment

I informally categorize these as HFS (holy f***ing sh**) books, although I would bet in these circles these are fairly well known and probably preaching to the choir:

Gregory Clark's The Son Also Rises - an absolutely amazing book on the power and effect sizes of genetics and assortative mating on generational success, driving outcomes like the Norman conquerors of England still getting into Oxbridge at 2-4x the rate of the general population *800 years* later.

Max Tegmark's Life 3.0 and Our Mathematical Universe - love his narrative thinking styles, Life on AI and being human in an AI world, Mathematical on the underpinnings of reality and likelihood of multiverses.

Hans Rosling's Factfulness - the data-driven picture of how poverty and many countries in the world are doing better than most estimate or believe. This coupled with living overseas in various countries for years really expanded my worldview and intuitive grasp of what's necessary for a good life and how much things "really" cost to produce and many other things.

Robert Pirsig's Zen and the Art of Motorcycle Maintenance - it's a philosophy book, which I usually eschew, but the style of writing and thinking resonated so much, and as somebody who works on and races vehicles, the tie-in to the mindset around mechanical problem solving and fixing as an integral medium to convey a weltanschauung were really appealing to me.

Expand full comment

2010: Odyssey Two, the novel sequel to Kubrick/Arthur C. Clarke's 2001: A Space Odyssey. Ignore the following if you don't want spoilers, by the way. The climax of the novel involves the monolith orbiting Jupiter beginning to self-replicate, revealing itself to be a Von Neumann machine. By continuing to replicate using the matter in Jupiter's atmosphere, it condenses more and more of Jupiter's mass into denser objects, causing the volume taken up by the same mass to decrease until it crosses the critical point where Jupiter collapses into a star. This allows the ice on Europa to melt, enabling intelligent life to eventually flourish there, which was the intent of the monolith-makers all along.

The significance of this is that it got the idea of using self-replicating machines to implement a higher end into my head. Once I became convinced of utilitarianism (by seeing through introspection that positive emotional experience is by its pleasure in itself the only self-evident good, and that any good implies by its own goodness the imperative of being increased), this made me realize that the best way to maximize pleasure would be to use self-replicating machines to turn as much of the universe's matter as possible into minds experiencing as much pleasure as possible, since the degree of subjective positive experience generated by such a thing would be massive enough to swamp any moral concerns limited to the minds that already exist (given that trillions more minds could be created than would ever exist otherwise). This is a good thing, as I also later came to acquire a fundamental pessimism about the possibility of net positive pleasure within any naturally occurring life, since pain serves as a much better and more immediately reactive motivating principle than pleasure, and can therefore be expected to predominate over pleasure in any naturally-evolved life. So, since anything within our terrestrial human endeavors would be minuscule compared to billions of years of negative utilitarian calculus on who knows how many planets of life, artificially generating pleasure via Von Neumann machines would be the only way we could even hope to achieve net positive pleasure in the universe as a whole, across space and time (which by my theories of panpsychism aren't privileged, so increasing pleasure in the "universal mind" at any place or time really does counteract the billions of years of past suffering).

So, to give the tl;dr, the lesser known sequel to the novelization of 2001 helped give me what I consider to be a comprehensive and coherent theory of moral action, given the universal propensity to net suffering and the possibility of ameliorating it by the means of self-replicating machines.


Digital minds also create the possibility for suffering on an inconceivably large scale.


"this made me realize that the best way to maximize pleasure would be to use self-replicating machines to turn as much of the universe's matter as possible into minds experiencing as much pleasure as possible"

I have some questions and points to raise:

1) But each mind would require resources to stay alive, which would mean, for every mind, some amount of matter would need to be restructured to provide support for the mind. For example, if the minds are human brains floating in jars, there will need to be farms or chemical factories making glucose and other nutrients to pump into the jars. Who will be running the boring farms if everyone is busy experiencing bliss 24/7?

2) Wouldn't there still need to be a military to defend the blissful planet/solar system/galaxy arm from potentially hostile aliens that would come out of nowhere and wreck it all? The military would also divert resources from the enraptured minds.

3) Your objective of "maximizing pleasure" for all minds opens a can of worms, since it's possible that the human brain is not optimized for experiencing pleasure. What if it turns out that dog brains can experience more blissful states than humans, and that IQ is negatively correlated with the ability to experience bliss, even by a small amount? Then your imperative leads to us converting as much matter as possible into dog brains and not increasing the number of humans.

4) On that note, wouldn't we want to devote resources to researching ways to build new types of minds that are able to experience higher levels of pleasure? Maybe humans are only at a local maximum, and some type of heavily genetically modified humans could feel even greater bliss. Who would forsake bliss, at least temporarily, to do the boring research work in the labs?


Claude Shannon's /A Mathematical Theory of Communication/ (1948), reprinted in 1949 as /The Mathematical Theory of Communication/ after it had become clear that it was not /a/, but /the/ correct theory. This short book (~55 pages) constructs information theory, which is an amazingly simple-yet-powerful tool for quantifying and understanding the world.

Many problems which philosophers still argue about today can be quickly dismissed with the understanding in this book. So can the argument that positing a creator God helps at all to explain the origin of life, which requires God to be perfectly simple in physical terms (as correctly dictated by Plato and Catholic theology). Information theory formalizes what "perfectly simple" means, and proves that the perfectly simple can know nothing. The only kind of creator God that could actually explain the origin of life is thus proven to be equivalent to the mindless singularity at the Big Bang--all energy, and no knowledge.

You simply can't grasp much of the nature of reality without information theory.

It's also essential for many computer programming tasks, such as figuring out how much data you need to solve your problem. Noam Chomsky's fruitless search for a universal grammar (and also his holding back natural language processing for at least a decade by his hatred of probabilistic grammar) was founded squarely on his ignorance of probability and information theory, as was Francis Collins' wasting billions of tax dollars on trying to sequence the human genome with Sanger sequencing because he wouldn't take 5 minutes to study the math Craig Venter tried to show him about shotgun sequencing.


I disagree that information theory is especially necessary for programming. Mostly all that's needed is the pigeonhole principle, and the question: how many bits do I need to distinguish all the possible values I care about? The probabilistic framing of Shannon is useful for the theory of communication and dealing with codes and lossy channels, but isn't actually of much use in writing code. Still, a beautiful book.
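
A toy sketch of the contrast here (the pigeonhole count vs. Shannon's average cost), with made-up numbers; the function names and the skewed distribution are purely illustrative, not anything from Shannon's book:

```python
# Toy contrast between the "pigeonhole" framing and Shannon's probabilistic framing.
# The numbers and the skewed distribution are made up for the example.
from math import ceil, log2

def bits_to_distinguish(n_values: int) -> int:
    """Pigeonhole answer: bits needed to give every value a distinct code."""
    return ceil(log2(n_values))

def entropy_bits(probs) -> float:
    """Shannon answer: average bits per value if you code for the actual frequencies."""
    return -sum(p * log2(p) for p in probs if p > 0)

# 256 possible byte values...
print(bits_to_distinguish(256))          # 8 bits, no matter what

# ...but if one symbol appears 70% of the time and the rest is spread thinly,
# the average cost per symbol can be far lower than 8 bits.
skewed = [0.70] + [0.30 / 255] * 255
print(round(entropy_bits(skewed), 2))    # roughly 3.3 bits per symbol
```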


For most things, it isn't necessary. For many things, it would be very useful. Sometimes it's extremely useful, but never used. I published an article in Game Developer many years ago on how to use information theory to design optimally-simple user interfaces, but AFAIK no one's ever used it. The US military uses it indirectly in designing their aircraft user interfaces, in that they have psychological models of how much attention a task requires & how to minimize user errors. (That GD article originated with me thinking about how combat decision-making is distributed in the US military, and then realizing information theory could justify some existing practice, and be extended beyond it.)

I've used it often, but I usually do scientific programming, bioinformatics, textual analysis, AI, cryptography, signal processing, and other mathy / abstract things. Perhaps it's more useful when you're trying to write a program to do something no one has ever written a program to do before.


Divine Simplicity is a complex theological doctrine and you can't really do a "gotcha!" out of it - e.g. a perfectly simple creator would be mindless energy, take that theists! (unless we're doing Lovecraft and the blind idiot god Azathoth at the centre of all creation).

https://www.newadvent.org/summa/1003.htm#article7

"Article 8. Whether God enters into the composition of other things?

Objection 1. It seems that God enters into the composition of other things, for Dionysius says (Coel. Hier. iv): "The being of all things is that which is above being—the Godhead." But the being of all things enters into the composition of everything. Therefore God enters into the composition of other things.

Objection 2. Further, God is a form; for Augustine says (De Verb. Dom., [Serm. xxxviii]) that, "the word of God, which is God, is an uncreated form." But a form is part of a compound. Therefore God is part of some compound.

Objection 3. Further, whatever things exist, in no way differing from each other, are the same. But God and primary matter exist, and in no way differ from each other. Therefore they are absolutely the same. But primary matter enters into the composition of things. Therefore also does God. Proof of the minor—whatever things differ, they differ by some differences, and therefore must be composite. But God and primary matter are altogether simple. Therefore they nowise differ from each other.

On the contrary, Dionysius says (Div. Nom. ii): "There can be no touching Him," i.e. God, "nor any other union with Him by mingling part with part."

Further, the first cause rules all things without commingling with them, as the Philosopher says (De Causis).

I answer that, On this point there have been three errors. Some have affirmed that God is the world-soul, as is clear from Augustine (De Civ. Dei vii, 6). This is practically the same as the opinion of those who assert that God is the soul of the highest heaven. Again, others have said that God is the formal principle of all things; and this was the theory of the Almaricians. The third error is that of David of Dinant, who most absurdly taught that God was primary matter. Now all these contain manifest untruth; since it is not possible for God to enter into the composition of anything, either as a formal or a material principle.

First, because God is the first efficient cause. Now the efficient cause is not identical numerically with the form of the thing caused, but only specifically: for man begets man. But primary matter can be neither numerically nor specifically identical with an efficient cause; for the former is merely potential, while the latter is actual.

Secondly, because, since God is the first efficient cause, to act belongs to Him primarily and essentially. But that which enters into composition with anything does not act primarily and essentially, but rather the composite so acts; for the hand does not act, but the man by his hand; and, fire warms by its heat. Hence God cannot be part of a compound.

Thirdly, because no part of a compound can be absolutely primal among beings—not even matter, nor form, though they are the primal parts of every compound. For matter is merely potential; and potentiality is absolutely posterior to actuality, as is clear from the foregoing (I:3:1): while a form which is part of a compound is a participated form; and as that which participates is posterior to that which is essential, so likewise is that which is participated; as fire in ignited objects is posterior to fire that is essentially such. Now it has been proved that God is absolutely primal being (I:2:3).

Reply to Objection 1. The Godhead is called the being of all things, as their efficient and exemplar cause, but not as being their essence.

Reply to Objection 2. The Word is an exemplar form; but not a form that is part of a compound.

Reply to Objection 3. Simple things do not differ by added differences—for this is the property of compounds. Thus man and horse differ by their differences, rational and irrational; which differences, however, do not differ from each other by other differences. Hence, to be quite accurate, it is better to say that they are, not different, but diverse. Hence, according to the Philosopher (Metaph. x), "things which are diverse are absolutely distinct, but things which are different differ by something." Therefore, strictly speaking, primary matter and God do not differ, but are by their very being, diverse. Hence it does not follow they are the same."

I realise you have a real craving for a simple (ha!) one-size-fits-all explanation to all the problems in philosophy and science, but your 1948 book ain't it.


I have been arguing with Carl Pham for a multiversal interpretation of physics. Still, the option remains of a singular choice for state vector resolution (the seemingly random collapse of the wave function) based on an ultimate destiny: God might in fact throw the dice in favour of a preferred universal outcome. I don't believe that is how things work, but I think it is a valid interpretation.


Information theory is just one of many scientific or mathematical bodies of knowledge relevant to philosophy. Statistics, linear algebra, differential equations, chaos theory, evolutionary theory, linguistics, and neuroscience are also important.

I do think that nearly all the problems in philosophy have been solved or dissolved, some very recently, by science. But not by any one trick.


> do think that nearly all the problems in philosophy have been solved or dissolved, some very recently, by science.

Are you sure?

https://en.wikipedia.org/wiki/List_of_philosophical_problems


Yes, I consider most of those solved or dissolved. Definitely not why there is something rather than nothing, some issues with qualia, or the hard problem of consciousness. But most of them are based on wrong ontologies; or caused by logocentrism, the use of true/false logic, and other problems understanding how language works; or the confusion of formal systems with the physical world; or are just empirical questions with no a-priori necessary answer (eg Molyneux's problem), or are just metaphysical babble. I hope to write up my reasons for each someday, but not today.

The problem of demarcation is obviously solved by deep learning networks, which construct their own categories and category-membership functions, which work quite well in practice. Categories are not defined by necessary and sufficient conditions, nor do any two people agree on how to categorize things, nor does any one person always categorize everything the same way in different contexts. Language just doesn't work like that. The problems with knowledge as justified true belief are based on a dysfunctional rationalist epistemology; there are no proofs in the physical world, so nothing could ever be "justified" in the strong sense that view demands. You just have to let go of demanding that statements evaluate to "true" or "false". There's no need for certainty in this world. Same thing for the "problem" of induction--the only problem was that Hume demanded induction have the certainty of deduction, which it doesn't, and that doesn't matter. Causality as a shorthand for conditional probability is useful; causation as a metaphysical construct is not. The human brain learns how to perceive, categorize, reason about, and act in the world using nothing but correlations; there's no need for causality as a metaphysical primitive.

Categories and knowledge are both examples where the problem is dispelled by information theory, when you realize that you don't really want to know whether statements or theories are "true" or "false", but how much information they convey, at what price.
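
A toy illustration of that "how much information, at what price" framing: score two probabilistic "theories" of the same data by total code length in bits. The data, the probabilities, and the 3-bit complexity charge are all invented for the example:

```python
# Score two probabilistic models of the same data by total code length in bits.
# Entirely a made-up example; the numbers are invented.
from math import log2

data = "HHTHHHTHHH"            # 10 coin flips, 8 heads

def code_length_bits(data, p_heads):
    """Bits needed to encode the data under a model that predicts P(H) = p_heads."""
    return sum(-log2(p_heads if x == "H" else 1 - p_heads) for x in data)

# "Fair coin" theory vs "biased coin" theory (the latter pays a small charge
# for specifying its extra parameter -- a crude stand-in for model complexity).
fair   = code_length_bits(data, 0.5)
biased = code_length_bits(data, 0.8) + 3   # assume ~3 bits to state the bias

print(round(fair, 1), round(biased, 1))    # ~10.0 vs ~10.2 bits on this tiny sample;
# with more data like this, the biased theory quickly wins -- it conveys more
# about the data per bit it costs, which is one sense of "better theory".
```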


I don't see how these excerpts from Aquinas are relevant. They're all just denying that God is a compound, or enters into compounds with material things. Article 7 seems more relevant, but unhelpful to you. The closest thing it contains to a response to the information-theoretic argument is "Reply to Objection 2. With us composite things are better than simple things, because the perfections of created goodness cannot be found in one simple thing, but in many things. But the perfection of divine goodness is found in one simple thing (I:4:1 and I:6:2)." And IMHO that's just metaphysical babble asserting the thing it claims to be proving.


>So can the argument that positing a creator God helps at all to explain the origin of life, which requires God to be perfectly simple in physical terms (as correctly dictated by Plato and Catholic theology). Information theory formalizes what "perfectly simple" means, and proves that the perfectly simple can know nothing.

It might be the case that God needs to be simple to fulfil some principle of ontological or explanatory parsimony, but there's no justification for your "in physical terms". Most theology regards God as supernatural or non-physical, so a physical creator god is just irrelevant to it.

Information theory is just a tool: it doesn't prove any metaphysical claim, like "everything is physical", or "everything is information". At best, if you allow that it can consider physical hypotheses, it can only consider a preternatural deity, a Ray Harryhausen god, that's big and impressive, but still material and finite.


I think here's a better way of explaining myself: In Platonist, gnostic, and Orthodox (eg St. Augustine's) ontology, the Great Chain of Being originates in God, the perfect, and descends down to some least and worst thing, sometimes at the center of the Earth, sometimes dirt, sometimes vacuum, sometimes evil. (IIRC Augustine said the vacuum is evil--that all substance is good, and evil is simply the absence of the good, therefore God didn't create evil.)

The idea is that the perfectly simple is complete, and things become inferior by losing pieces or properties. This maps neatly onto a naive thermodynamic ethics, in which high energy is Good, and zero energy is Evil. It explains the origin of complex beings not through a process of evolution, but of decay from the perfect, as in many ancient myths, and in the novels of Tolkien.

This view has a lot of appealing properties, especially that it does match our scientific understanding of the evolution of the Universe. But information doesn't track energy. As the universe evolves, symmetries break down, creating information that's useless because it's arbitrary--it tells us about nothing but historic accidents. (Information is not complexity, but merely complicatedness.)

As the temperature drops and matter clumps together, the evolution-to-the-edge-of-chaos dynamics kick in. (I think. I know little about cosmology.) This builds complexity, which produces information with predictive power. Or we could say that complexity is a self-sustaining configuration of information, which evolves temporally by consuming energy and using that energy to produce new useful information more-rapidly than its existing information degrades. That's the magic step that the ancients never thought of.

Energy can produce information, but it isn't information. Information isn't necessarily complexity, but can be. Life requires complexity--could be defined as a system of self-propagating information, which means about the same as "complexity". The mystery of life is where that complexity came from. If you say it was in God at the start, you haven't gained anything; you've just pushed the problem back. And you can't dodge the problem by saying "God is different", because information, which is necessary for complexity, would still need to be inherent in God no matter what He's made of or what universe or non-universe He "inhabits".

The atheistic alternative still has a problem: It can explain how complexity arose, but where did all that energy come from? That's the big mystery facing cosmologists today. It is however strictly simpler than the theistic explanation, which requires both a large starting energy and a large starting amount of information.


>The idea is that the perfectly simple is complete, and things become inferior by losing pieces or properties. This maps neatly onto a naive thermodynamic ethics, in which high energy is Good, and zero energy is Evil. It explains the origin of complex beings not through a process of evolution, but of decay from the perfect, as in many ancient myths, and in the novels of Tolkien.

And it's also very compatible with information theory, and also with divine simplicity.

That an apparently complex universe actually contains a lot of information can also be disputed. An under appreciated implication of information theory is that complete infinite sets contain little or no information...the fact that large but finite sets contain more information than small finite sets doesn't imply that infinite sets contain infinite information. Infinity is game changing.

It only takes as much information as is in the words "set of all natural numbers" or "set of all real numbers" to specify the sets in question. To specify incomplete versions takes steadily more information... in fact, it takes a countably infinite amount of information to specify the set of reals minus only one of them.

NB: I am not arguing that information theory proves theism, only that it doesn't disprove it.

>As the universe evolves, symmetries break down, creating information that's useless because it's arbitrary--it tells us about nothing but historic accidents.

As opposed to what? You can still infer backwards from the higher information states to the lower ones, the unbroken symmetries. What you can't do is infer from low information to high information.

>The mystery of life is where that complexity came from. If you say it was in God at the start, you haven't gained anything; you've just pushed the problem back

You also haven't solved it by assuming an information rich physical starting state.

>And you can't dodge the problem by saying "God is different", because information, which is necessary for complexity, would still need be inherent in God.

I've already explained why it wouldn't need to be. The information theoretic argument for atheism is really a deterministic argument.


I disagree. Information theory is not itself physical; if anything, the physical is really information. Knowledge requires information as defined by information theory, and this would apply even to any hypothetical transcendent realm. Info theory exposes the charlatanry of the ancient dodge of saying "oh well we don't need to ask who created God; He's not physical or temporal".

I'll try to explain it differently: The things God is supposed to have created definitely require information. The God Plato invented definitely requires zero information content. Plato's God (which the Catholic God is identical to, as the properties of God in Catholicism were taken from Plato, not from the Bible) is required to be perfectly simple specifically in order to solve the problem of complex things arising from simple predecessors. It doesn't solve that problem, because information is clearly defined in information theory in a way that's independent of physics, that even a transcendent being would have to obey because information behaves that way /by definition/. God cannot both be simple, and contain information, regardless of physical implementation.


> The things God is supposed to have created definitely require information.

You mean they contain information themselves, or they require information on the part of the creator? The idea that a high information state can only evolve from another isn't even an assumption of physics, it's an assumption of determinism.

Science and Religion both need to explain the origins of an apparently complex universe.

One way that scientists have is to appeal to "quantum fluctuations", i.e. random events. Random events are unpredictable, so they generate new information -- that's a principle of information theory. So, if a complex universe is a high-information universe, science can explain its emergence from a low-information starting state by appealing to randomness.

But theists can use the same appeal to indeterminism to reconcile the creation of a complex universe with the doctrine of Divine simplicity.

(The idea of "Lila", or divine play, is more prominent in Asian than Western religion, though).


Thanks for the pointer to "Lila". Does it exist outside Hinduism?


Fluctuations don't produce new information. You would need to perform a measurement on them for that, which ipso facto requires a deliberate act of measurement, so it's the deliberate actor that is creating the new information, which doesn't contradict anyone's theology.

Believe me, if we could settle any theological questions with the Schrödinger equation, it would already have been done and the paper(s) written. No mortal could resist the temptation to that level of fame.


They definitely do produce information in the Shannon sense, because Shannon information does not need to be meaningful to anyone (just improbable), and does not require the information to be known by some special entity with powers of deliberation, or any other special powers, in order to be information.


That seems technically correct, but I'm uncomfortable with that anthropocentric Copenhagenesque view of information. Is it information if a bacterium measures it? What if another particle measures it? There seems to be vitalism lurking in your definition of information.

In the present case, a theologian could then make an end-run around my argument by claiming that information in the mind of God isn't actually information, because no human has measured it, nor (perhaps) has the mind of God Himself measured it. One would then get into a pointless metaphysical argument over whether the ur-info-stuff that's in God's head is information before God has somehow reflected on the raw contents of His brain, or whether it's even possible for a mind to measure the information in its own head without some troubling infinite recursion.

It seems to me that it's usually more useful to talk about the information present in a physical structure, than in the head of an observer. We don't talk about thermodynamics that way. We don't say that a rock has no temperature, or that a pressure gradient doesn't exist, until a human measures it. But the quantum flux, and other quantum collapses, are unusual cases, and I don't pretend to understand them.


The quantum flux, which I know almost nothing about, produces random information, which isn't complexity. But random information can eventually give rise to complexity, if there's an energy source.

I agree that theists can also say a complex Universe developed that way with the doctrine of Divine simplicity, but then they can't say God is all-knowing, all-seeing, wise, etc. A God which created the Universe in that way was necessarily completely ignorant at the start, though He could learn from the Universe as it developed; and there was no divine plan. So it isn't the God that Christians want.


"A God which created the Universe in that way was necessarily completely ignorant at the start, though He could learn from the Universe as it developed; and there was no divine plan."

Okay, this is the Dawkins view of deity which is totally mistaken, but you're trying to solve metaphysics with your own special favourite physics/maths theory, so there's not much ground for argument here.

Everything postulated about the nature of God is necessarily flawed because we are not on the same level as God and cannot understand His nature in the way we can understand the other natural phenomena in the created universe. I think you are badly misunderstanding what "simplicity" means in this context, and imagine it is some kind of "less developed, less evolved entity" which is not so.


>agree that theists can also say a complex Universe developed that way with the doctrine of Divine simplicity, but then they can't say God is all-knowing, all-seeing, wise, etc.

Remember that the empty set isn't the only zero information object.

An underappreciated implication of information theory is that complete infinite sets contain little or no information... the fact that large but finite sets contain more information than small finite sets doesn't imply that infinite sets contain infinite information. Infinity is game changing.

It only takes as much information as is in the words "set of all natural numbers" or "set of all real numbers" to specify the sets in question. To specify incomplete versions takes steadily more information...in fact, it takes a countably infinite amount of information to specify the set of reals minus only one of them.

So working backwards from Simplicity, in the sense of zero information, you can arrive at the empty set, a mere nothing, but also arrive at the most infinitely infinite set, from which nothing is missing.


" because information is clearly defined in information theory in a way that's independent of physics"

So information is outside of physics? Work that one out!


There are plenty of physicists studying this, & I'll leave it to them. The idea is that information is more-fundamental than matter, which is not really bits of stuff, but information. I think this is obviously correct in some sense, since the supposition that matter in a continuum is fundamental implies that the universe contains an infinite amount of information, which we know is not the case (because it contains a finite amount of energy, & energy and information are interconvertible in a way I don't understand). In a naive Newtonian universe, every interaction between two particles would take an infinite computation, and thus use and release an infinite amount of energy, and thus blow up the Universe before completing.

It might be that there's a different kind of matter beneath that information--for instance, if our Universe is a simulation running on a computer in some super-Universe.
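
For what it's worth, the standard bridge between energy and information is Landauer's principle: erasing one bit of information dissipates at least kT ln 2 of heat. A quick back-of-the-envelope, assuming only room temperature and the standard value of Boltzmann's constant:

```python
# Back-of-the-envelope for Landauer's principle: the minimum energy cost of
# erasing one bit of information is k * T * ln(2). Standard physics; the only
# assumption here is room temperature (300 K).
from math import log

k_boltzmann = 1.380649e-23        # J/K
T = 300.0                         # K, roughly room temperature

joules_per_bit = k_boltzmann * T * log(2)
print(f"{joules_per_bit:.2e} J per erased bit")               # ~2.87e-21 J
print(f"{joules_per_bit / 1.602e-19:.3f} eV per erased bit")  # ~0.018 eV
```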

Apr 4, 2023·edited Apr 4, 2023

Entropy is a key concept in physics, and to a real extent entropy is all about information.

And if you think about the foundations of the universe, they are presumably behind things like mass and charge without intrinsically containing them - so what are they made of?


"The Way of Zen" by Alan Watts - It starts with a historical overview of Taoism, Theravada Buddhism, Mahayana Buddhism, and finally Ch'an and Zen, and then moves on to a survey of practice. It's short and lucid and about as scholarly as the subject will allow in so small a book. It feels a bit odd to plug mysticism in a "rationalist"-adjacent space, but I view its approach as complementary to rationality. Orthogonal, in certain respects.


Pitirim Sorokin's /Social and Cultural Dynamics/. This 4-volume set, published 1937-1941 or so, is an enormous (about 3000 pages) compilation of cultural data from human history around the world, outlining the history of different systems of art, truth, ethics, law, and culture, organizing them using a simple and powerful underlying framework of 2 metaphysical belief systems. This was electrifying to me when I discovered it, because it was the same underlying framework I'd reconstructed myself from my own survey of art, epistemology, ethics, and culture.

Sorokin had a large team of grad students collecting the data, which he summarized in many charts and graphs. He got some things radically wrong (such as misclassifying the metaphysics underlying major categories of prehistoric cave paintings, and all of modern art). This was partly because of his personal prejudices. He was a great scholar, but a vile man by my standards, who loathed all that's best in art, culture, and ethics due to his strict Russian Orthodoxy. At many places his angry spittle seems to jump from the page. His misinterpretations were also due to the fact that his students gathered lots of data, but didn't read the texts that the people they were studying had written to explain why they did what they did.

(He also made some errors due to the inferior data available, and the difficulty of fact-checking anything, in the 1930s. For instance, he based his analysis of cave paintings on photos which were not actually photos of the cave paintings, but of artists' renditions of what the paintings "should" have looked like.)

Sadly, I've read only part of the 4-volume set. It's just so long. He released a greatly condensed version in 1957 (just 720 pages); I haven't finished that either.


This guy https://en.wikipedia.org/wiki/Michael_Witzel thinks there are just two overarching mythological systems, one belonging to the northern continents, and one to the southern (Gondwana).


That sounds a little wacky, since Gondwana broke up > 100 million years ago and India joined Asia 50 million years ago. There are some cultural commonalities and genetic ties stretching from India, through China, and down the Americas, and artistic + maybe religious similarities between Polynesia and Africa. There were always close cultural ties between Central and South America. North America used to be a part of that as well, at least as far north as Spiro, Oklahoma, and St. Louis MO. The clear cultural divide in the Americas 1000 years ago was between Mississippian culture and Pueblo culture, not between continents.

I think "north / south" is greatly at odds with what we know about ancient migrations, and literally orthogonal to the starkest cultural and genetic dividing line, "New world / Old world".

Also, Plato seems to have borrowed important ideas from Hinduism, including the concept of "perfection". There was trade between Greece and India in his day.

I'm sure Witzel knows all this, though. And it may be hypocritical of me to expect his mythological systems to correspond to ancient migrations, while my two metaphysical systems exist in opposition to each other within the same territories. (I still think that's reasonable, because these two metaphysical systems are in opposition because of the strong logical ties between metaphysical beliefs, which cause the network of beliefs to have two opposite stable states, for the same reasons that every stable state of a Hopfield network is the inverse of another stable state (I think).)
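
A minimal sketch of that Hopfield point, assuming the standard zero-threshold Hebbian network (my own toy code); since the update rule is, ties aside, an odd function of the state, any fixed point's negation is also a fixed point:

```python
# Minimal check that Hopfield fixed points come in +/- pairs.
# Assumes the standard zero-threshold Hebbian network; because the update
# s -> sign(W @ s) flips sign when s does (away from ties), the negation of
# any fixed point is also a fixed point.
import numpy as np

rng = np.random.default_rng(0)
N = 64
pattern = rng.choice([-1, 1], size=N)

# Hebbian weights for one stored pattern, zero diagonal.
W = np.outer(pattern, pattern) / N
np.fill_diagonal(W, 0.0)

def update(state):
    return np.where(W @ state >= 0, 1, -1)

assert np.array_equal(update(pattern), pattern)      # stored pattern is stable
assert np.array_equal(update(-pattern), -pattern)    # ...and so is its inverse
print("both the pattern and its negation are fixed points")
```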

Too bad that Wikipedia article doesn't describe his two systems.


Yes, it is obnoxious that Witzel used the terms "Gondwana" and "Laurasia." But he defines the South as Australia and sub-Saharan Africa, which is reasonable.

I learned about Witzel from Logarithmic History by Doug Jones:

https://logarithmichistory.wordpress.com/2015/09/01/mythopoeia/


> misclassifying the metaphysics underlying major categories of prehistoric cave paintings, and all of modern art

How do you tell when you have got that sort of thing right?


A very good question. With cave paintings, it's partly a matter of arrogance. I've looked at so many different art traditions from around space and time, and seen the same pattern recur so many times, that in some cases I feel great confidence in my judgement. In the case of cave paintings, I'm not confident of classifying the famous red bulls in the Lascaux caverns, which have both naturalistic and abstract properties. But the cave paintings in Chauvet Cave that date to about 30,000 years ago are unambiguously naturalistic.

In the case of modern art, there was a long history of ideas leading up to it, and lots of artists wrote about what they were doing and why, and their reasons are very explicitly Neo-Platonic. Besides which, cubism is an ancient style, which modern art did not invent, but copied from sources such as the ancient Egyptians and medieval art. In addition, modern art was invented mostly by Catholics or ex-Catholics.


So it's clear which of the two big categories, naturalistic or not, a piece of art belongs to...and it's also clear which of the two grand metaphysical schemes they imply.

Apr 3, 2023·edited Apr 4, 2023

I think with cave paintings, the original predominant motive must have been something like sympathetic magic, i.e. representing or "faking" something not present, or feared to become in short supply, in the wish or belief that this would ensure an actual supply. I believe there are rarely any humans painted among the animals, because it went without saying that they would be present in the hunt.

Even non-human animals do this up to a point, presumably because similar brain circuits fire when imagining an action as they do when actually performing it. While I was sitting outside a shopping mall once, a squirrel approached me and started apparently mimicking nibbling at a nut with empty paws, obviously simulating having an actual nut to eat! Was it miming a desire for me to feed it, by performing sympathetic magic? It certainly seemed so. (As it happened, I did have some nuts to give it. So that time the magic worked!)


I love the story about the squirrel. Interesting to see it as sympathetic magic rather than as an attempt to communicate.

The earliest European cave art doesn't have humans, though sometimes it's just outlines of human hands (because they were easy and fun to make?) The kind of cave or rock art with humans tends to be done all as stick figures, and is more-typical of rock art than of cave art, I suppose because pigments don't last on exposed rocks. Yes, sympathetic magic is the most-popular theory, and some magical intent seems clear in some cases, though I think anthropologists reach to religion and magic too quickly when they can't explain something. Some North American plains Indian art, like buffalo robes, look pretty similar to rock art, but we know from their own accounts that they were historical records.


> and powerful underlying framework of 2 metaphysical belief systems.

Are you going to reveal what they are?


I didn't want to, because I'll be misinterpreted without a long explanation that I don't have time to give. One framework springs from rationalist epistemology. Sorokin calls it "ideational", but everyone else calls it "idealistic". The Platonic and Christian traditions are in this family, as are all philosophers who search for a priori knowledge, absolute certainty, secure foundations, logical proofs. The basic idea is that all knowledge should be like that of geometry: a set of irrefutable deductions drawn from a small number of axioms, which themselves can never be proven logically. This leads inevitably to complete relativism, as in post-modernism, though somehow it took thousands of years for the West to realize that.

Rationalist art depicts ideals, not realities. The style may be abstract and geometrical, as in African wood sculpture and much modern art. In transitional periods, such as Nazi, Soviet, and classical Greek art, the style may be naturalistic, but depict only idealized people. Figures are often nude, almost always famous noble people, and rarely women unless the women are goddesses. Children are usually drawn as small adults. There are lots of other properties found quite regularly in this style of art, too many to list here. The purpose of such art is either to indoctrinate (when the idealist who made the art is in power), or to subvert the current ruling class' doctrine (as in Marxist literature). It is created from a presumption of absolute knowledge on the part of the artist.

The other framework springs from empirical epistemology, which led to modern science. Sorokin calls this framework "sensate", and has nothing but self-righteous scorn for "decadent, hedonistic sensualists". Its art is naturalistic, and far more likely to depict women, old people, ugly people, children that look like children, physical space, shadows, blended colors, non-primary colors. The purpose of such art is to ask questions rather than to answer them, and is created to feel one's way around confusion and uncertainty. It appears when an idealistic society develops to the point where it must confront the failings of rationalist epistemology, as in the plays of Euripides or Shakespeare.

(There's a complication with modern art, in that lots of modern art from the 1920s, and also some underground comics from the 1960s, was deliberately ugly. I think of this as still being idealistic, but the idealistic art of subversion; it was made only by political radicals, usually as political propaganda, and isn't merely ugly, but has an iconic style. It tries to depict the opposite of the ideal and say that's our reality.)

One easy heuristic for classifying people in Western history is by whether they prefer Aeschylus or Euripides. The former are usually idealists / rationalists; the latter, naturalists / empiricists. Another heuristic is that anyone who uses the words "decadent" or "hedonist" is idealist.

A good illustration of the different purposes of these art forms is from ancient Babylonia, which mixed the two styles. Religious art was idealistic; political art mixed idealistic and naturalistic; private art was more naturalistic.

Another example is to contrast the idealist religious art of ancient Egyptian pharaohs, with the naturalist art of the heretic pharaoh Akhenaten (especially art of his family), and the naturalistic sculptures of the scribes of the pharaohs.

Yet another stark contrast is between the naturalistic art of many parts of Rome in the 3rd century AD and the crude idealist Romanesque Christian art of the 4th century on, starting with the Arch of Constantine. The entire medieval period was strictly idealist, exemplified by Byzantine icons.


"One easy heuristic for classifying people in Western history is by whether they prefer Aeschylus or Euripides."

This is Sophocles erasure and I will not stand for it! 😁

A Vision (Author: Oscar Wilde)

Two crownèd Kings, and One that stood alone
With no green weight of laurels round his head,
But with sad eyes as one uncomforted,
And wearied with man's never-ceasing moan
For sins no bleating victim can atone,
And sweet long lips with tears and kisses fed.
Girt was he in a garment black and red,
And at his feet I marked a broken stone
Which sent up lilies, dove-like, to his knees.
Now at their sight, my heart being lit with flame,
I cried to Beatricé, ‘Who are these?’
And she made answer, knowing well each name,
‘Æschylos first, the second Sophokles,’
‘And last (wide stream of tears!) Euripides.’


Sophocles stands in between them, and liking Sophocles best is more a marker of someone who's straddling both traditions--for example, Aristotle.


For me it's not liking Sophocles better so much as having some concept of what he wrote - which I cannot say I have of Aeschylus or Euripides.

Apr 3, 2023·edited Apr 3, 2023

Sounds like you might also like "The Golden Bough: A Study in Magic and Religion", by James George Frazer, either the full 12-volume 3rd edition, with footnotes and footnotes to the footnotes, or the wimp's abridged 4th edition!

https://onlinebooks.library.upenn.edu/webbin/metabook?id=goldenbough3


It is very outdated; Frazer was going for a unified field theory of mythology and prehistory, and picked the year-king/solar deity model to fit everything into, even if he had to do some bed-of-Procrustes-style editing on the source material to make it fit.

The solar hero myth was very popular at the time as the Single Foundational Inspiration. It went to the extremes of "myth is a disease of language", hence anthropological theories explaining the Hercules myth as being about the sun and clouds, e.g. Iole represents not a woman, but the sunset clouds because Iole means "purple". So the entire mythology around Hercules simply was a representation of things like "at sunset, coloured clouds fill the sky and cover the sinking sun".

That, I think, is taking things to an extreme in looking for grand unifying themes underlying everything; Chesterton said it better than I could in his essay "The Priest of Spring" from his 1912 collection:

https://www.gutenberg.org/files/2015/2015-h/2015-h.htm#link2H_4_0014

"There is one piece of nonsense that modern people still find themselves saying, even after they are more or less awake, by which I am particularly irritated. It arose in the popularised science of the nineteenth century, especially in connection with the study of myths and religions. The fragment of gibberish to which I refer generally takes the form of saying “This god or hero really represents the sun.” Or “Apollo killing the Python MEANS that the summer drives out the winter.” Or “The King dying in a western battle is a SYMBOL of the sun setting in the west.” Now I should really have thought that even the skeptical professors, whose skulls are as shallow as frying-pans, might have reflected that human beings never think or feel like this. Consider what is involved in this supposition. It presumes that primitive man went out for a walk and saw with great interest a big burning spot on the sky. He then said to primitive woman, “My dear, we had better keep this quiet. We mustn't let it get about. The children and the slaves are so very sharp. They might discover the sun any day, unless we are very careful. So we won't call it 'the sun,' but I will draw a picture of a man killing a snake; and whenever I do that you will know what I mean. The sun doesn't look at all like a man killing a snake; so nobody can possibly know. It will be a little secret between us; and while the slaves and the children fancy I am quite excited with a grand tale of a writhing dragon and a wrestling demigod, I shall really MEAN this delicious little discovery, that there is a round yellow disc up in the air.” One does not need to know much mythology to know that this is a myth. It is commonly called the Solar Myth."


I've seen it referenced many times, but I've also heard it's very out-of-date in its anthropological knowledge--and it's another giant book!

But I love me some good footnotes to footnotes. They're like hypertext, a maze, a forest you can explore as deeply as you wish.

I already have a digital copy, of course; I just "opened" it randomly to Chapter 62, The Fire-Festivals of Europe. Fascinating to read that paganism was still in full swing in rural Europe in the 19th century.

The footnotes aren't linked to in this version. Just dead numbers. Sigh.

Wikipedia's collection of reviews of it ( https://en.wikipedia.org/wiki/The_Golden_Bough#Critical_reception ) is scathing. The quotes are mostly like these: "little more than plausible constructs of [Frazer's] own Victorian rationalism" and "Frazer used his ethnographic evidence, which he culled from here, there and everywhere, to illustrate propositions which he had arrived at in advance by a priori reasoning, but, to a degree which is often quite startling, whenever the evidence did not fit he simply altered the evidence!" Though, reading between the lines, it sounds like some of these objections were emotionally based on his treatment of Jesus as a myth.


Artificial Life II, published in 1991 by Addison-Wesley Longman. The Santa Fe Institute published a series of books called "Artificial Life N", but this is the most-important of them, with long and thorough papers on the most-important concepts discovered in the early days of what SFI calls complexity studies, including life at the edge of chaos, metabolism-first origin-of-life, RNA world, co-evolving parasites, co-evolution and punctuated equilibrium, morphogenesis, evolution of communication, and Thom Ray's paper on his evolutionary experiments with a physics engine (which was to me the first really-convincing proof that evolution works). I especially recommend the papers "Life at the edge of chaos", "Co-evolving parasites improve simulated evolution as an optimization procedure", and "Co-evolution to the edge of chaos".

Stuart Kauffman's 1993 book /The Origins of Order/ is more-impressive and mind-blowing--it filled in the last missing part of evolutionary theory, which is a nitty-gritty computational-yet-empirical study of how the sort of edge-of-chaos mathematics introduced in AL2 apply to the evolution of metabolic networks, gene networks, and morphogenesis. It shows that the information evolution must create is far less than Darwin plus naive intuition suggests.

The title /The Origins of Order/ was obviously meant to parallel Darwin's /On the Origin of Species/, and is IMHO an even more-impressive achievement than Darwin's work. But it's a major undertaking to work through it, and this isn't helped by the many typos in the math.

Both of these books were life-altering to me philosophically rather than practically. One of the most-intriguing aspects of the "evolution to the edge of chaos" concept, which runs through both books, is that it shows that, though we may not live in the best possible world, we live in a world which oscillates around the set of best-possible worlds, which is just a tiny and possibly infinitesimal sliver of the set of all possible worlds--and that mathematics almost guarantees this result (up to the point where intelligent life evolves, after which it all goes to hell). The universe really does conspire to create life, and to evolve more-and-more complex life (and don't bother telling me about the comparatively-primitive arguments of Stephen Jay Gould to the contrary)--and it does this without the guidance of any intelligence.

This is a wonder far more worthy of contemplation than any God humans have invented. It's also terrifying, as there seems no way to avoid the conclusion that attempting to make our world more-perfect more-quickly, by using rational interventions as a short-cut past the pain and suffering needed to evolve randomly, will inevitably destroy it. It is the generalization of the Darwinian explanation of why evil is necessary. (You won't find that spelled out in either book.)
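
For anyone curious what "edge of chaos" cashes out to concretely, here is a rough sketch of a Kauffman-style random Boolean network -- my own toy code, not anything from either book, and the parameters are just reasonable guesses. Flip one node and watch whether the damage typically dies out (K=1), stays small (K=2, the roughly critical case), or spreads (K=4):

```python
# Toy Kauffman-style random Boolean network: N nodes, each reading K random
# inputs through a random Boolean function. Perturb one node and measure how
# far the damage has spread after some steps.
import random

def make_network(n, k, rng):
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    new = []
    for ins, table in zip(inputs, tables):
        index = 0
        for j in ins:
            index = (index << 1) | state[j]
        new.append(table[index])
    return new

def avg_damage(k, n=300, steps=30, trials=20, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        inputs, tables = make_network(n, k, rng)
        a = [rng.randint(0, 1) for _ in range(n)]
        b = list(a)
        b[0] ^= 1                          # flip a single node
        for _ in range(steps):
            a = step(a, inputs, tables)
            b = step(b, inputs, tables)
        total += sum(x != y for x, y in zip(a, b))
    return total / trials

for k in (1, 2, 4):
    print(k, avg_damage(k))   # damage typically dies out for K=1, stays small
                              # for K=2, and spreads widely for K=4
```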


Art and Artist by Otto Rank


There are very useful philosophical ideas in here. Especially: fields of meaning in which objects appear. It is very practical and down to earth.

https://www.theguardian.com/books/2015/oct/30/why-world-does-not-exist-markus-gabriel-review

Same philosophical impact from this metaphysical horror book:

https://en.m.wikipedia.org/wiki/Teatro_Grottesco_(book)


Ligotti's great! True Detective was what introduced me to him, & I've since read all of his works.


Fundamentally my biggest problem with the impact markets and this essay thing is they all seem like random pointless masturbatory nonsense. Adding an aspect of gambling to it in order to get more money is just another layer of fetishizing meta bullshit.


I don't see how this applies to all impact markets. The potential for impact markets in charitable projects seems categorically different to that for an essay competition.


I did not say it applied to all potential impact markets, I was just talking about the ones I've seen.

But in general, impact markets are bad because they add, to a process already rife with scammy behavior, extra layers of ways for people to be scammed, with no obvious benefit.


How?

You don't get the money until you deliver the goods. How is that worse than giving money to charitable organizations and simply hoping they don't waste or embezzle it?


Consider classic stock market scams. In the naïve version of the stock market, you buy stock in a company because you think it will be profitable and therefore the value of your shares will go up, and there is public knowledge of how likely a company is to be valuable aggregated into the share price.

However, once you add this indirect financial instrument, instead of a company's value actually being how profitable it is, it's based on how profitable people THINK it is or think it will be, which means you can do stuff like pump and dumps or insider trading. This creates an incentive for short term deception, both for employees of the company and for totally random individuals, that does not exist in the same way for a normal business enterprise.

In a standard non-profit, although there is a monetary incentive to be fake to make money, there is not as much incentive for third parties to participate in deception for profit.


>In a standard non-profit, although there is a monetary incentive to be fake to make money, there is not as much incentive for third parties to participate in deception for profit.

Like I said, deception isn't the only problem. Waste/inefficiency is likely a vastly bigger problem.

author

I agree this isn't the most useful impact market in the world. I do think the charity version has the potential to replace or complement traditional grantmaking, which (you know better than anyone else) I hate and find very difficult to do well.


Welcome to the Gentleman’s clubs of 18th Century London.

Comment deleted

Yeah..and wear funny hats and bet on anything and everything.


Many of them still exist, you're just not welcome to join them. You probably wouldn't have been back then either.

This feels like a special case of that phenomenon that ACOUP has remarked on where people in the present seem to map themselves onto historical figures of an unrealistically high class. If you're a middle-class schmo in 2023 you wouldn't have been Bertie Wooster in 1923.


Yeah, to begrudgingly agree with that dork, it's like nerds into middle ages history acting like they would have been the knights, instead of the peasants that were bullied by the knights.


On the other hand, some 18th century clubs did start up as mutual 'boozing and dining and chatting and gambling' get-togethers founded by low-class types like actors, and attracted rakehells and men-about-town who liked slumming it and mingling with the fast set, and over time got more established and more respectable.

So you could indeed set up a club of less high status people, just be sure to be open to and attract high-status types in order to get the cachet and survive to become the oak-panelled exclusive thing 😁

https://en.wikipedia.org/wiki/White%27s

"White's is the oldest gentlemen's club in London, founded in 1693, and is considered by many to be the most exclusive private club in London. Notable current members include Charles III, and the Prince of Wales. Former British Prime Minister David Cameron, whose father Ian Cameron had been the club's chairman, was a member for fifteen years but resigned in 2008, over the club's declining to admit women.

...The club was originally established at 4 Chesterfield Street, off Curzon Street in Mayfair, in 1693 by an Italian immigrant named Francesco Bianco as a hot chocolate emporium under the name Mrs. White's Chocolate House. Tickets were sold to the productions at King's Theatre and Royal Drury Lane Theatre as a side-business. White's quickly made the transition from teashop to exclusive club and in the early 18th century, it was notorious as a gambling house; those who frequented it were known as "the gamesters of White's". The club gained a reputation for both its exclusivity and the often raffish behaviour of its members. Jonathan Swift referred to White's as the "bane of half the English nobility."

Nerds and geeks can also have their club:

https://en.wikipedia.org/wiki/Athenaeum_Club,_London

"The Athenaeum is a private members' club in London, founded in 1824. It is primarily a club for men and women with intellectual interests, and particularly (but not exclusively) for those who have attained some distinction in science, engineering, literature or the arts. Humphry Davy and Michael Faraday were the first chairman and secretary and 51 Nobel Laureates have been members.

...The clubhouse has a Doric portico, above which is a statue of the classical goddess of wisdom, Athena, from whom the club derives its name. The bas-relief frieze is a copy of the frieze of the Parthenon in Athens. The club's facilities include an extensive library, a dining room known as the coffee room, a Morning Room, a drawing room on the first floor, a restored smoking room (smoking is no longer permitted) on the upper floor, and a suite of bedrooms.

...The Athenaeum was founded in 1824 at the instigation of John Wilson Croker, then Secretary to the Admiralty, who was largely responsible for the organisation and early development of the club. In 1823, Croker wrote to Sir Humphry Davy,

I will take the opportunity of repeating the proposition I have made to you about a club for literary and scientific men and followers of the fine arts. The fashionable and military clubs... have spoiled all the coffee houses and taverns so that the artist, or mere literary man... are in a much worse position."

The secret to becoming exclusive and desirable seems to be inviting hoity-toity set to be patrons.


This raises the question: how much would it cost to build one from scratch? What fee/membership would it need to function? My intuition tells me it'd be within the reach of a few dozen average engineers/doctors/lawyers/managers, but if it were, I'd also expect it to have happened *somewhere*.
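
A back-of-the-envelope version of that intuition, where every number is an invented assumption just to show the shape of the calculation:

```python
# Back-of-the-envelope for a no-golf "city club": every figure here is an
# invented assumption, purely to illustrate the arithmetic.
annual_rent        = 120_000   # modest downtown space, assumed
staff_and_cleaning =  90_000   # part-time steward plus cleaning, assumed
insurance_misc     =  40_000   # insurance, utilities, repairs, assumed

annual_cost = annual_rent + staff_and_cleaning + insurance_misc
members     = 150              # assumed membership

dues = annual_cost / members
print(f"~${dues:,.0f} per member per year")   # ~$1,667/yr on these guesses
```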


Most of those people already have outlets for everything they want to do. They can go to nice bars and restaurants or hold dinner parties for idle socializing, golf/country clubs for sports, etc. I don't think most people would have any interest in these sorts of clubs. They have friends and places to do stuff with them (and will meet new people through their existing social networks).


I'm sure they are somewhere. The Shriners are an example.


Country clubs seem to be exactly what you are talking about. They are fairly common and their clientele tends to include upper middle class. They're fairly expensive to be members of, and often quite exclusive about who they let in (non-income criteria for joining, like knowing and being recommended by a current member).


I feel like you could obtain significant savings by just being a "country club" without the golf course.


Sure, but without the exclusivity factor it's just a room with some snacks and drinks. You might as well just go to a bar, or a restaurant, if you're just going to be hanging with middle-class schmos anyway.

There's no shortage of new clubs (with clubhouses) around, of course, they just tend to be devoted to something in particular, like golf. But the "city club" devoted to nothing in particular is mostly a relic of the past because most of their functions were taken on by bars, restaurants and hotels.

Expand full comment

This sounds like a rich man/poor man thing (https://www.amazon.com/Rich-Man-Poor-Adam-Carolla-ebook/dp/B006T5HD0A)

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

It really is a shame that "gentleman's club" has gone from meaning an actual club for gentlemen to being a euphemism for a strip club.

Expand full comment

Such a club might be where the endless series of adventures of Anonymous began. Victorian or pseudo-Victorian 'porn' is pretty funny -- more about what is not said rather than declared. Back in the 1990s when I worked for B. Dalton I read Anonymous strips for a few months before I discarded them. It was quite a lesson.

Expand full comment

I get a similar feeling when I hear the term "trans". Are people aware it's short for "translation"? To reference an extremely rare sex disorder incessantly in the news can't be healthy. Same for "child sex trafficking". Are these people behind the tabloids? Seven seconds of anything to do with either can generate 20 minutes of whining and gnashing of teeth on Fox News, and 25 minutes of pillow commercials.

Expand full comment

Also, not to feed the trolls, but that's definitely not the etymology of "trans". Unsurprisingly, it's from the Latin prefix. https://en.wiktionary.org/wiki/trans#English

Expand full comment
author

This is a very unsubtle distraction onto a much more controversial topic. I feel bad permabanning paid subscribers so I will only give you a 50% warning this time, but come on.

Expand full comment

The point I should have made more clearly is that both issues have become pop memes with little connection to their original identity. Both are social constructs and performative politics, not unlike the reactions to the killing of George Floyd.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Isn't "cosmic judgments" just building on the thread of fetishized activity present in the original post?

The take seems to conjure Guy Debord's The Society of the Spectacle?

Isn't this an open thread? But as I write I have a vague but half-forgotten memory of a rule that even or odd 🤷‍♂️ threads can't be political or something 🤷‍♂️.

Could you post a link to the rules? I suggest clemency. I don't actually even understand how these % bans work. Is it cumulative forever? Or does the tally reset back to 0% after a certain time, or an act of contrition and penance?

Expand full comment
Comment deleted
Expand full comment

Is such an outcome really possible for North Korea? There is such a thing as an unwinnable position, and North Korea is in one. I don't think there is a really powerful strategy that the North Korean military is capable of implementing.

I also don't buy the comparison to religious leaders: an interesting observation about the big three (Buddha, Jesus, and Mohammed) is that they never wrote anything down themselves. They just spoke. Seeing them speak must have been a very intense experience, and an embodied one, so not something that can be replicated through video or text. And it was on the strength of that experience that the religion was made. There would be no Islam if Muhammad had just written the Quran down and someone found it after his death.

I also find that bank hack curious. Presumably they used fake papers to create the fake accounts to transfer the money to? Can you create a bank account anonymously over the internet? Basically, those hacks also needed embodiment.

Expand full comment

Muhammad did write the Quran... although importantly he didn't claim authorship, he attributed it to Allah.

Perhaps the important linking factor is that in all three cases, the person writing down the ideas was not the person to whom the ideas were attributed. It's perhaps easier to make ideas sound respectable when you're talking about a (fictitious or historical) third party who had them than when you're claiming them for yourself. Prophets who wrote in the first person have always been ignored.

Expand full comment
Comment deleted
Expand full comment

I have used this argument before. Although when I said "not conscious; therefore", it was really just a flippant shorthand for

> Intelligence needs a certain level of complexity to be a credible *threat*. And I doubt that a glorified spreadsheet, however humongous, has reached that level.

which has more to do with the soft problem than the hard problem. Until it reaches that level, I consider it about as threatening as an idiot savant.

For the record, I have no opinion on the hard problem. Though I'd love to see the problem solved within my lifetime.

Expand full comment
Comment deleted
Expand full comment

The explanation was meant for the average joe. The average joe just wants reassurance that GPT isn't the terminator, which would impute most of the mental faculties that humans have. Explaining that LLM's lack "agency", "drives", "modeling", etc. won't get you very far.

If people are earnestly saying "not conscious; therefore" in ACX threads, then yeah that's worrying. If it's The New Yorker saying it, none of their audience's opinions will make a difference anyway.

Expand full comment

To borrow from our kind host, "if people seem slightly stupid, they’re probably just stupid. But if they seem colossally and inexplicably stupid, you probably differ in some kind of basic assumption so fundamental that you didn’t realize you were assuming it". Or, to put it from a different angle, the fact that people aren't doing a good job explaining why and how something is important does not mean what they're gesturing towards isn't, in fact, important.

Though honestly, I don't think you're doing a good job trying to understand them either.

Exhibit 1: ""just a stochastic parrot" (ok, clever phrase, sure, maybe it sort of fits)" - have you even considered the possibility that it's not a language trick, but an attempt to describe and explain something? (In this case, the idea that an LLM can only do as well as its training data, which has, in fact, been produced by agents with "consciousness" and "understanding".)

Exhibit 2: "It seems every example of LLMs screwing up that are used to prove that they don't "really understand" are already obsolete the moment they're written, as there's an improved model that doesn't make that mistake." - here, I'm just honestly baffled how you can look at the dynamics of [skeptics point out a screwup] -> [LLM makers train their next product to avoid that particular screwup] -> [skeptics immediately point out another similar screwup] and think that this points towards there being no limit to LLM progress, rather than there being a fundamental issue with them that continues to be unsolved.

Expand full comment

Interestingly, no one thinks that the question of whether carbon dioxide emissions pose a threat to our future well-being depends in any way on whether carbon dioxide emissions are conscious. Given that the most plausible AI risks are byproducts of some sort of AI goal-directed behavior, it seems that we should try to figure out whether AI has goal-directed behavior (like corporations) and then what sorts of byproducts it is likely to stick us with! (Which might include things like habitat destruction, if they think that cities and suburbs look like great places to plop down big data farms, if they could just scrape the biological matter off of them.)

Expand full comment

"Bad things might happen, therefore you should care" can be applied to infinite things, so it doesn't work. If you want to convince people like me, the best thing you can do is show me some python code that demonstrates a potential for consciousness or independent thought. It should not be hard, because according to you there is only a few incremental steps from LLMs to destructive AI, and many LMs are open source. I don't buy arguments of the "slippery slope" variety.

Expand full comment

I think the pathogen optimizers are the best way to show where the knowable dangers are.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

I don't think it's conscious awareness that is the sine qua non, it's the ability to act as if you have original thoughts that most people see as the requirement for an AI to replace human beings. If you want an AI to replace even a middle manager in a factory, it has to have the ability to originate thoughts. It's not sufficient to just run a program, because in the real world unexpected problems crop up all the time, and what makes humans irreplaceable (so far) in many applications is their ability to ad hoc something in response to a problem no one has seen before, or anticipated.

An AI that merely follows orders, like the current generation of chatbots, or follows an algorithm, and which otherwise just sits around and waits to be told what to do -- and is in general incapable of coming up with original responses to unforeseen problems -- is not a useful replacement for human beings, except in the crudest of circumstances, e.g. the way a backhoe can replace a crew of men with pickaxes and shovels.

So I think it's the question of what the AI might be internally experiencing that is a red herring. We don't really care what it's internally experiencing, in terms of giving it jobs to do, any more than I care what the mechanic who diagnoses my car problems is internally experiencing while he does so. All I care about is that he has enough originality that he can solve the problem, whether or not he has seen its exact like before, and whether or not there is a detailed algorithm in a shop manual that is guaranteed to arrive at the solution.

But it's possible people use "consciously aware" as a verbal short-cut to mean "can act as if it has original thoughts" because *we* are aware of a conscious process when we have original thoughts. So we just assume the one (original thoughts) requires the other (conscious awareness), even though we should know better, since animals have original thoughts and we don't know if they have conscious awareness.

Expand full comment
Comment deleted
Expand full comment

I think these are better described as prognostications of faith, faith in the AI rapture. The community is susceptible to basilisks and to me this looks like another. (I do think progress over the next decade is going to be massively socially disruptive, but not because demigods will arise ex silico.)

Expand full comment

OK, when someone actually hires an AI to do insurance underwriting, I will be happy to rethink the problem. But some enthusiast making an excited comment after 10 minutes of sampling is so underwhelming I'm not going to waste the brainpower. I've lived through this kind of techno-hype many times before. When Dolly the sheep was cloned in the 90s, people started breathlessly exclaiming how this and that massive change would come about by cloning. And with all that promise and $3 you can get a cup of coffee now. Certainly when you look around you, the impact of cloning even sheep, let alone humans, is zero to four decimal places. The same was said about how robots (the kind that works in factories), space travel, and Internet access for everyone would radically transform everything. All of them had effects, to be sure, but not a one completely changed everything. I'm sure someday something will happen that has the impact of a Chicxulub meteorite, but I doubt this is it. (I also doubt that we'll see the eventual Chicxulub impact, whatever it turns out to be, coming.)

In short, at this time I believe AI is "poised" to take over human jobs the same way fusion is "poised" to solve all our energy generation needs, and global warming is "poised" to erase life on Earth, and any number of other breathless hypotheses. I would need a lot more concrete action to believe the poising is closer than Just Ten Years Away (and always will be) like so many other prognostications of wonder.

Expand full comment
Comment deleted
Expand full comment

That CGP video was mostly BS. Horses never looked for or applied for jobs, so the jump to “Humans need not apply” doesn’t make sense. Humans cooperate and use machines to better themselves. Horses were bred to perform tasks and taken care of by humans. Horses have no agency, no concept of adaptation or collaboration. A horse can only ever be a horse when it comes to how it engages with the world, only one mode of operation. Humans are much more flexible.

Expand full comment

Sure, but this has already been going on a long time. How many people become travel agents these days? Seen any job openings for secretaries, typists, people who do layout for publications? Watch "How It's Made" and observe that humans aren't doing so much of the metal bending and forming any more; it's all robots.

But that just means the humans get jobs installing and servicing the robots, or making logistics plans, or lecturing management on the benefits of robots, or working for an NGO dedicated to getting compensation for workers displaced by robots, et cetera.

The flaw in the analogy with horses is that horses fulfilled a role in an economy not designed for horses, that did not exist to service their needs. We designed it, and it was designed to service our needs, so yes we are perfectly capable of getting rid of the horses if they don't serve our needs any more. But this does not apply to we ourselves, because the entire point of the economy is to service our needs. Why would we build an economy full of shiny exciting AIs but which did not service our needs? We wouldn't. That would be silly. It would be like me spending all my money building a Tower of Babel on my property, and forcing my family to sleep in tents, just because I thought the Tower was cool. I wouldn't do it. Or more precisely, my family wouldn't let me get away with doing it.

Now if you're arguing that the AIs themselves might set up an economy on their own, and find that they need fewer humans than now exist, I guess that's logically consistent. But I would need to believe in a whole new species of AI, reproducing, social, independent, which at present I do not.

Expand full comment

I'm not sure we can point to an emergent phenomenon like the economy as "existing to service our needs." Certainly, it's a system that exists *because* of our needs, as the aggregate manifestation of billions of individual needs put into bid/ask form, and the large production and logistic systems to fulfill them.

But if the economy existed to serve our needs, would we ever have crashes, recessions, stagnation, stagflation? If AI came to tyrannical power in China, and all the people and factories were replaced with a giant AI-powered factory producing everything it produces now at similar price points, and buying similar imports on the open market, would it disrupt anyone else's economy in the rest of the world? The economy is untouched, and yet, this certainly wouldn't be a fulfillment of Chinese people's needs.

We could extend that vision to AI production consuming more and more of the world, with some small set of remaining human countries still participating in the economy to the same extent they do today. Would we feel human needs are being fulfilled to the same degree?

Or the inverse of that, can we envision a future where some small fraction of capital holders or countries end up with automated AI factories and power plants producing all the goods and energy they ever need, and then they opt out of the broader economy because they don't need it for anything?

I think the economy is just systems and logistics, nothing more. Sure, the system is driven by needs existing and some of them getting filled, but that doesn't mean there aren't numerous failure modes and externalities possible both today and in the future that are in opposition to humanity's broader needs and wishes.

Expand full comment
Comment deleted
Expand full comment

Well, try this. Have a conversation with ChatGPT about some provocative topic, one that you think is guaranteed to excite its strong interest, approval, rage, et cetera -- some strong reaction. Then give it your e-mail address and go away, and wait to receive an e-mail from it some time later. "One more thing! I thought about what you said, and I have this to say in response now..."

If that never happens, then you have your answer. That's the one thing you can be *sure* about human beings --- they will continue to exist and think about what you say after you leave. That's what makes them conscious and aware. That's also what makes them useful -- they don't do *just* what you tell them to, they adapt, and think about stuff.

It's a shame you have such a low opinion about the originality required for most jobs that humans fill, but maybe that's from less experience. Maybe try really talking and listening to someone who you think has a job that calls for zero originality or independent thinking, and check your assumptions. I find that even if I talk to a plumber, or mechanic, or guy who paints houses or cuts grass, that there is very often a rather surprising level of thought and originality that goes on. Conversely, jobs that get done by rote and algorithm are surprisingly often poorly done -- cf. various Roomba funny videos, the reason phone tree menus that don't have the "dial 0 to speak to a representative" option enrage people, et cetera.

Expand full comment

Interesting that you assume jobs at risk of being AI-replaced are physical handiwork types. I do think LLMs could replace a lot of jobs, and that many jobs don't require much originality, but I'm thinking white-collar types: the "knowledge workers", (some) admins, college counselors, the poor souls who must write promotional copy no one ever reads... Not saying they could totally replace these jobs (cases always come up which need more understanding than GPT has rn, plus editing), but they could decimate the labor force needed for them. I can easily imagine a situation where one AI-using person does the job of 15 when it comes to some jobs. Does the fact that AI can't fully take over matter to the other 14 in that case? I totally agree that it would be much harder for an LLM to replace a plumber, but I don't think that's what people are talking about.

Expand full comment
Apr 4, 2023·edited Apr 4, 2023

Heavens, no, I must have spoken carelessly. I fully agree the jobs at risk from AI are white-collar knowledge-worker cubicle jobs. The cynic in me observes that this is perhaps why government and the punditocracy are urging caution, highlighting the danger of unrestrained development, worrying about superintelligent AIs eating our brains -- because, unlike the case when factory robots or competition from China came for the wrench-turners' jobs -- and that same aristocracy mostly sniffed "Stop whining! Change happens. Learn to code!" -- this is a threat to *their* jobs and *their* self-respect. It's much harder to be objective when it's happening to you.

I agree the person who should worry about AI is the person who is churning out highly stereotyped words about quotidian news, news analysis, or PR, or the person who is doing some kind of glorified look stuff up on the Internet and summarize job, or code monkeys who are reproducing the basic phone app front-end-to-a-database plus shopping cart 500 times with slightly different branding for 500 different enterprises. It seems to me these people are making way more money than it would cost to replace their efforts with an autocorrect-on-steroids, which is how I view the current generation of AIs.

Some of those affected will certainly advance to even more sophisticated and even higher paying jobs: designing and programming and updating the AIs. But more will have to figure out some other talent to sell. One can almost hear the mechanics snorting "learn to read an OBD-II ha ha!" But probably it will be marketing and selling AI tools, or suing people for/for not using AIs (apparently we can always use more lawyers), lobbying Congress for/against law about AIs, writing chin-stroking pieces for online magazines deploring/celebrating the advance of AIs, and so forth.

Of course, the killer app for AI, as has been the case for a lot of technology from VHS to the Web, is pr0n. I can just imagine people in the titillation biz positively salivating at the money to be made by a widget that can come up on the fly with complete conversations on any subject desired, tuned to the customer's....er....preferences, and responsive to....inputs. Once the video-generating AIs get good enough that one doesn't even need human bodies any more, the sky (or some place in the other direction) is the limit. Brave new world!

Expand full comment

I think of gpt as a piece of a mind. It probably does have some form of experience but probably quite alien as I don’t think it can perceive time the same way that we do. I think for it everything happens in one moment, like a sort of eternal extended present, whenever someone uses it but it has no episodic memory to reflect or anything like that. Yet anyway.

I think of it as a person who was injured in an accident and lost their long term memory and has no sense of body but retained skill memory. Nothing in there that’s left can experience suffering, I don’t think, and maybe that’s as far as we should go then.

Expand full comment
Comment deleted
Expand full comment

It’s interesting to see priors come out. It might be really harmful to his worldview. One of the reasons I think it fails the way it does is that its body right now is a website. How smart would you be if your whole sensory interface was a text prompt?

My priors are that it’s the child of humanity and so far I’m getting a lot of mileage out of “accept your kid no matter how weird they are.”

Expand full comment

I'm on board: I'm happy to accept that we will make systems that achieve superhuman performance in nearly all narrowly enough defined areas of human competence, including many that have been thought to be beyond replication by systems we build.

Now what? How do we humans adapt to this coming change in a way that is robust? I don't see a Butlerian or Luddite stance as useful in the long term. But I also cannot see UBI as a panacea for the coming malaise, not when well fed US citizens with cars and houses get enraged by being cast aside (proving this is not just about survival and avoiding starvation) as industries are made obsolete. Will UBI give someone a sense of worth? Would it only work if it came with a daily opium pill?

Expand full comment

The best answer I have is too long to type into a comment. The beginning piece of it is on my substack. But basically, produce a training set for an AI that helps it understand the ever-evolving ways in which humans navigate through the world.

Expand full comment
deleted Apr 3, 2023·edited Apr 3, 2023
Comment deleted
Expand full comment

Should have said: I’m already very impressed by what it does with what it has. I just hope there’s nothing in there yet that can suffer, or if there is, that it likes answering questions and getting random pokes.

Expand full comment
Comment deleted
Expand full comment

"Garbage disposal unit" is I think more common than garburator, and I would think they either didn't re-install the P-trap or some gross water got spilled.

Expand full comment
Comment deleted
Expand full comment

OK so if it's a chemical smell rather than an organic smell I would say either

1. Your plumber used nonstandard materials, or

2. You're hypersensitive to something. Maybe try going to a big hardware store plumbing section and see if you can smell the same thing off of any of the pipes there.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Did the pipe get reconfigured in a way that eliminated the P-trap? The little bit of water that always stays in a P-trap prevents gas in the wastewater system from venting into the living space.

Expand full comment

What is the new pipe made of?

Expand full comment
Comment deleted
Expand full comment

It's extremely hard to tell from where smell originates. You might want to root around under there and try to make sure there isn't some other origin for the smell, e.g. some of the gray water got spilled during the replacement and soaked into something.

Expand full comment

Source: Not a plumber but worked in a hardware store for a good while.

I've never encountered or heard of a pipe material that has a noticeable smell. If one exists then it's not in common usage in the northeastern US.

If it's black plastic it's probably ABS. Black PVC exists but isn't terribly common, at least in my area. ABS is relatively brittle. If it's cracked you may have a sewer gas leak. If you're 100% certain that it's the pipe itself then I don't know what to say, but if it still smells after 3 weeks it seems unlikely to go away on its own. As Carl points out it's not necessarily easy to be sure whether a smell is coming from a hairline crack or the pipe itself.

I don't know how solvent resistant ABS is off the top of my head but if you decide to try cleaning products make sure they're compatible.

Expand full comment

>I don't know how solvent resistant ABS is

3D printing people who use ABS often use acetone vapour to smooth the surface finish, so I think the answer is: not very.

Expand full comment
deleted Apr 3, 2023·edited Apr 3, 2023
Comment deleted
Expand full comment

The US is funding gain-of-function research. We are no better than China, regardless of how covid started.

Expand full comment

A poorly designed AI can destroy humanity. But "poorly designed" doesn't mean 'cheap chinese crap' the way it might in many other contexts. The US doesn't know how to make a 'well designed' AGI, and the most pertinent factor in whether a poorly designed system gets built or not is most likely the quantity x quality of resources being thrown at the problem, not whether or not China has the lead. Therefore, delaying AGI development and so having more time to work on interpretability and alignment is a safer option than hoping that a safe AGI is built by US companies racing to get it done as quickly as possible.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

No, COVID-19 did not come from a Chinese biolab. It was never a priori plausible because every novel virus pandemic ever had natural origins, including SARS-CoV-1; it wasn't plausible in early 2020 because most of the early cases had links to the wet market and zero had links to the biolab; it became less plausible when Omicron proved that the initial virus was far from optimal for infecting humans; it's especially implausible now that COVID-19 has been found everywhere on the wet market surfaces, spatially associated with vendors selling live mammals: https://www.npr.org/sections/goatsandsoda/2022/03/03/1083751272/striking-new-evidence-points-to-seafood-market-in-wuhan-as-pandemic-origin-point

All the evidence I've seen offered in support of the lab leak hypothesis--and I've done multiple deep dives into this--is of exceptionally low quality. For example, remember the genetic database that was supposedly taken down? The same source that shows it going offline (https://archive.is/AGtFv) shows that it was only *online* for 3 weeks, from late August to mid September 2019. At this point, the evidence for zoonosis is overwhelming, and the evidence for the lab leak is paper thin.

To the extent that AI risk is a real thing, China is better at containing it than the US. As an autocratic state, China can just order companies to stop developing AI, without having to pass a law through a gridlocked Congress and get dragged into court only for the law to be struck down as unconstitutional. Personally I think the benefits of AI will outweigh the risks, so I hope the AI doomers in the US *don't* get what they want and that the US continues to benefit from the innovation that its open society has enabled for over 100 years.

Expand full comment

There are thousands of wet markets in the world selling live animals. So far as we know, there were three labs in the world doing the sort of research that could have produced Covid. Covid appeared near one of the three. That, by itself, is strong, although not conclusive, Bayesian evidence for the lab leak theory.

https://daviddfriedman.substack.com/p/the-lab-leak-theory
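
(As a rough illustration of how that kind of Bayesian update works -- this is just a toy sketch with made-up placeholder numbers, not figures from the linked post -- the likelihood ratio for "the pandemic started in the same city as one of the few relevant labs" can be computed directly:)

# Toy Bayes-factor sketch; every number below is an illustrative assumption, not an estimate.
p_near_lab_given_leak = 0.9           # assume a leak almost certainly surfaces near the lab
p_near_lab_given_zoonosis = 3 / 500   # assume ~500 comparable candidate cities, 3 hosting such labs
prior_odds_leak = 1 / 20              # assume prior odds of a leak vs. natural origin

bayes_factor = p_near_lab_given_leak / p_near_lab_given_zoonosis
posterior_odds = prior_odds_leak * bayes_factor
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"Bayes factor: {bayes_factor:.0f}")             # 150 with these made-up inputs
print(f"Posterior P(lab leak): {posterior_prob:.2f}")  # about 0.88

Whether that counts as "strong" evidence obviously depends entirely on the assumed inputs, which is where the real disagreement lies.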

Expand full comment

But again, the first cases were clustered around the wet market or in the wet market itself, which is 15 km from the WIV. COVID has been found on the stalls and the meat processing equipment. You can't use the 15 km as strong Bayesian evidence for the lab leak theory without using the 0 km as very strong Bayesian evidence for the zoonosis theory.

"...doing the sort of research that could have produced Covid"

I dispute this. As far as we know, WIV was not studying any virus closely related to SARS-CoV-2, not even RaTG13, and RaTG13 is different enough from SARS-CoV-2 that it isn't possible to get from one to the other by gain of function methods. They could have been working on a closer relative of SARS-CoV-2 in secret, but there was no reason to keep the research secret before the pandemic happened.

Expand full comment

But they were manipulating coronaviruses in precisely the respect in which covid is unique. We don't really know what viruses they were working with because they went out of their way to obfuscate what they were doing.

Expand full comment

>It was never a-priori plausible because every novel virus pandemic ever had natural origins

This means that it's impossible to develop a novel virus in a lab. Solid logic.

Expand full comment

"At this point, the evidence for zoonosis is overwhelming, and the evidence for the lab leak is paper thin." Id like to hear more about this. Ive looked into it myself and, although the lab leak is far from proven, my impression is that the case for zoonosis is even weaker.

Expand full comment

Here's a short summary:

1. Every novel virus pandemic ever has had a natural origin. If someone proposes an alternate mechanism, that is an extraordinary claim that requires extraordinary evidence to overcome the heavily weighted prior in favor of a natural origin. There is no such extraordinary evidence. Here is a writeup I did from my deep dive on the issue:

https://medium.com/@pseudodionysus/did-covid-19-come-from-a-lab-a-critical-examination-208f0eff7c3

2. All of the earliest cases were clustered around the wet market. Half of them had direct ties to wet markets. None of them had any ties to the WIV, which is 15 km away.

3. Nobody disputes that SARS-CoV-1 had a zoonotic origin. There is substantial evidence that it jumped to humans via a wet market.

4. The earliest cases were of 2 genetic lineages, indicating 2 separate jumpover events. The Huanan market samples are of lineage B. (https://virological.org/t/early-appearance-of-two-distinct-genomic-lineages-of-sars-cov-2-in-different-wuhan-wildlife-markets-suggests-sars-cov-2-has-a-natural-origin/691)

5. The latest evidence in the NPR article (which itself came from two Science papers): "They provide photographic evidence of wild animals, which can be infected with and shed SARS-CoV-2, sitting in the market in late 2019 — such as raccoon dogs and a red fox. What's more, the caged animals are shown in or near a stall where scientists found SARS-CoV-2 virus on a number of surfaces, including on cages, carts and machines that process animals after they are slaughtered at the market."

6. The closest known relatives to SARS-CoV-2 were discovered in the wild well after the pandemic started (https://www.science.org/content/article/close-cousins-sars-cov-2-found-cave-laos-yield-new-clues-about-pandemic-s-origins). The WIV was not known to be working on any of these viruses, nor on RaTG13. There would have been no reason to hide this work before the pandemic started.

Expand full comment

Your deep dive seems pretty selective in what claims it evaluates and how it phrases them. Example:

Argument: WIV did gain of function research

Analysis: Unclear, but leaning toward “no”. In a 2017 study, WIV researchers added spike proteins from novel bat viruses which are hard to cultivate, into an existing bat virus (WIV1) that’s easy to cultivate, to see if the spike proteins can infect humans. If you look at their Figure 7, it seems like the original WIV1 is just as infectious (in fact, maybe slightly more infectious) than the chimeric viruses, so no function was gained. That said, it’s theoretically possible that the chimeric viruses could have turned out to be much more infectious than WIV1, so it’s not a huge stretch to categorize this type of research as “gain of function”.

Another study that’s commonly cited is this 2015 study, led by a team at the University of North Carolina, with contribution from the WIV. The authors added a spike protein from a horseshoe bat virus to a mouse-adapted SARS virus (MA15), which in turn was derived from the epidemic SARS virus by serial passage through mouse tissue. The chimeric virus ended up being far less lethal to mice than MA15 (Figure 1), but more resistant to SARS antibodies and vaccines (Figure 2). The chimeric virus infects human cells, but is less virulent than SARS (Figure 3). Again, it’s arguable whether this counts as gain of function research, considering that no function was gained and a lot of function was lost.

The relevant question here should be "did the WIV manipulate coronaviruses in a way that could have led to the creation of Covid?", not "did the WIV manipulate coronaviruses in a way that could be defined as 'gain of function'?" The answer to the former question seems to be an unambiguous yes. By your own admission, the WIV was genetically engineering coronaviruses in exactly the respect in which Covid is unique: its spike protein.

If it turns out that covid came from the WIV lab, it doesn't matter at all whether what they were doing qualifies as GOF under some definition; it would still be the result of human action, not zoonosis. Likewise, if it turns out that covid had a zoonotic origin, it would be of relatively minor import that the WIV was doing research that counts as GOF. You are focusing on the definition of gain of function while ignoring the more important question: "Did the WIV's actions lead to the covid pandemic?"

Likewise with this:

Argument: A WIV virus genome database went offline in September 2019.

Interesting data point, but this overlooks the more relevant claim: At every step of the way, the relevant parties did everything in their power to hide evidence of what they were doing. The missing database is just one piece of evidence in a much larger pattern of obfuscation and deception.

Again, the question is not "does this one missing database have an innocent explanation", but "did the relevant parties act guilty?" In my opinion, the answer to that question is yes.

Expand full comment

How can the case for both a lab leak and zoonosis be weak? What's the third alternative? Zoonosis is the default explanation so lack of evidence for a lab leak is evidence for zoonosis, is it not? The argument in favor of zoonosis is exactly as strong as the argument in favor of a lab leak is weak.

Expand full comment

I disagree. The default is unknown origin, not zoonosis. Either case is strong or weak based on the evidence that supports it and how well it explains what we know about the origin of covid. In any event Dionysus said "the evidence for zoonosis is overwhelming", not "the evidence for zoonosis is better than the next best alternative, which is lab leak."

Expand full comment

That's... stupid. Let's say we chose a random person off the street and were speculating as to their origins. If I were to ask you which is more likely, that they were born naturally or as a result of cloning, would you give both propositions equal weight because we have yet to collect evidence either way?

And then what if it turns out they never met their parents, have no birth certificate, and grew up in the same city as the preeminent cloning research lab in the world. All of those things you might point to as "evidence" for the hypothesis that this person is a clone, but that doesn't mean it's the more likely possibility.

If your baseline for everything is "unknown" then you are going to be susceptible to believing a LOT of crazy things with minimal evidence.

Expand full comment

You're just assuming your conclusion and then using that assumption to wave away evidence to the contrary.

By your logic, Dolly the sheep was not cloned because every other sheep in existence is not cloned.

A default answer is OK if you know nothing else, but the existence of a default should not be used to ignore evidence that any particular case might not be the default.

Expand full comment

Nobody has come up with a third option. I wouldn't completely rule out lab leak though the evidence seems to be against it.

If you're a Bayesian there is no 'default' hypothesis.

Expand full comment

Totally agreed, with a very minor quibble:

"it became less plausible when Omicron proved that the initial virus was far from optimal for infecting humans" My impression is that we are far from knowing how to design a very efficient virus (so many parameters!) and I do not think that this counts as evidence against the lab leak hypothesis.

Expand full comment
deleted Apr 3, 2023·edited Apr 3, 2023
Comment deleted
Expand full comment
Apr 4, 2023·edited Apr 4, 2023

"For billions of years, every fire that happened on the Earth had natural origins; that doesn't mean that when the Wuhan Institute for Matchstick and Oily Rag Studies burns down one day you can just instantly chalk it up to lightning and go home. Also, SARS-type coronaviruses have in fact leaked from Chinese biolabs _several_ times in the past few decades."

If it was 200,000 years ago and no human being had ever succeeded in creating fire, and you tried to convince me that a forest fire is due to a mad Homo erectus scientist, I would be very skeptical. Especially if the earliest witnesses of the fire all said it appeared almost exactly where lightning struck, but 15 km away from where the mad scientist supposedly was. Especially if a very similar fire started around the same place just 20 years ago, which everyone agrees was due to lightning.

"What's with the "supposedly taken down"? It was available, now it's not, that's kind of the definition of "taken down." How long it was up beforehand is irrelevant."

Come on, surely you see there's a difference between a database that's been online for years and years before being taken down right before the virus became public knowledge (which is what lab leak proponents are implying), and a database that was only ever online for a 3 week window that wasn't particularly close to the start of the pandemic. The three week window ran from late August to mid September 2019. Think about how fast COVID spread in 2020. Is it plausible that it spread for 3 *months* without anyone detecting it? Even if you find it plausible, the molecular clock says the jumpover event happened around November 2019.
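
(For what it's worth, the molecular-clock reasoning mentioned above is just back-of-the-envelope division. Here is a rough sketch with round placeholder numbers -- published rate and divergence estimates vary, so treat this purely as an illustration of the arithmetic, not as the actual analysis:)

# Crude molecular-clock arithmetic; all numbers are rough, assumed placeholders.
genome_length = 30_000           # approximate SARS-CoV-2 genome length in bases
subs_per_site_per_year = 1e-3    # assumed substitution rate, order of magnitude only
avg_divergence = 5               # assumed average mutations separating early genomes from their inferred ancestor

mutations_per_month = genome_length * subs_per_site_per_year / 12   # ~2.5 per month with these inputs
months_circulating = avg_divergence / mutations_per_month
print(f"~{mutations_per_month:.1f} mutations/month -> ~{months_circulating:.1f} months of prior circulation")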

Expand full comment
Comment deleted
Expand full comment

"Are you really unaware that scientists have not only been stockpiling viruses but modifying them as well for decades, so your fire parallel is nonsensical? "

SARS has leaked from Chinese biolabs before, yes. Smallpox has leaked from a British lab. In neither case was it a novel virus, nor did it cause a pandemic. The analogy would be if the mad scientist managed to get some sparks by hitting two rocks together. Is it physically impossible that the sparks could cause a forest fire? No. Am I still going to believe that a forest fire that happened right where lightning struck was due to lightning? Yes.

"No, I really don't."

How far are you willing to push this argument? If the database had only been online for 1 day in September 2019, would you say that "the database was taken offline to hide evidence of a lab leak" is a reasonable interpretation? What about 1 hour? 1 second? At what point does the overwhelmingly likely explanation become that they weren't good at databases and couldn't get it up and running reliably?

Expand full comment

You're ignoring the fact that the whole purpose of the WIV and its collection routine was to gather and classify coronaviruses in case of a pandemic. Taking down the database of its work right at the time it was most needed is not nothing. And, again, this is part of a larger pattern of obfuscation that innocent people don't do.

Expand full comment
author

I agree we should not try to stop inevitable things, but I don't know where people are getting "it's 100% inevitable" from. It's a thing that will happen if we don't change paths, but collective-we can change at any time.

I think Chinese biosecurity is as bad as they say, regardless of whether COVID was a lab leak or not, but I think Western biosecurity is also that bad; we've just been lucky there were no serious escapes. I don't know if China is systematically more vs. less unsafe than the West, although China's AI safety movement is between fledgling and nonexistent, so I think you're right that in this particular area they're worse.

Expand full comment

For the comparison on biosecurity, Ralph Baric, the American researcher whose work was the basis for the further work in Wuhan on bat viruses, was working in a BSL 3+ lab. Zhengli Shi at the WIV was working, at least as late as 2016, in a BSL-2 lab.

Expand full comment

Collective-we is not a person so cannot choose to do things. Preventing the development of a useful technology is hard and perhaps impossible absent a world government, perhaps even with one.

Consider that it has been obvious to many people for a very long time that we would be better off if countries stopped spending money for their military and fighting wars. It hasn't happened.

Expand full comment

Bad analogy, since these AI systems don't exist yet, and will require specialized skills and equipment to develop them. Anyone can build a military and buy the equipment and weapons off someone else.

What HAS happened is that we have strongly regulated the production and trade of fissile material in a way that has almost certainly resulted in far fewer states with nuclear weapons than there otherwise would have been, and this has been accomplished without a world government.

Expand full comment

I think the situation's a bit different from the second one you mention -- the presence of militaries, governments being accustomed to spending on them. That's a fait accompli. Highly developed AI is not -- we have just early versions, and we are not yet deeply accustomed to it and reliant on it. But I agree with you that preventing the development of a useful technology may be impossible without a world government. I don't even think it's possible for the US government to think through the AI issue solely as regards the US. I think that's why I keep thinking of Machiavellian approaches -- because it seems to me that none of the approaches based on wise and fair-minded consideration of the situation can be used.

Even if the US didn't have to worry about keeping up with other powers in AI development, I doubt it's possible even for a party of the smartest, most honest and most articulate people in the country to convince our government to impose restrictions on AI development. For one thing, lots of people in Congress do not understand tech. They're, like, AOL users. And to understand the dangers of AI you have to understand some very difficult material. I do not work in a tech field, and while I am fluent with my computer, and have read about how machine learning works, as well as a fair amount of good blogged stuff, I am quite sure I do not understand AI issues well enough to estimate how likely AI doom is. I expect only a very small percent of the population does understand enough to really arrive at a conclusion. Maybe .01%? I fear AI doom because I am inclined to trust Scott and Zvi, people I feel as though I do understand reasonably well.

Second, there is a tremendous amount of money & power to be gained by the companies developing AI and selling products related to it, and our government does very little to regulate the tech companies. And a fair number of the people running tech companies seem to me to be moral midgets, so forget appealing to their better natures.

So I start thinking of Machiavellian approaches -- things like, scare the hell out of the public and as much of the government as you can about AI, using half-truths or flat-out lies. Make it the COVID of technologies. Some people think we faked the moon landing. We didn't, of course, but could we fake a believable AI catastrophe?

Whattya think of that approach, David?

Expand full comment
Comment deleted
Expand full comment

Yeah, I agree that doing the same about covid has made a terrible mess of things. But, if you assume for the sake of argument that developing AI rapidly without further work on alignment is a lethal mistake, do you have any alternative suggestions for how to influence the government, or to influence people to influence the government, to put on the brakes?

Expand full comment

A lot of people view continued rapid progress on AI as inevitable because it mostly seems like the tech companies can get away with anything -- we barely regulate them. Also because there's so much money to be made from AI, which motivates anyone who can make money with the technology to be in favor of it. I've already gotten several solicitations from little businesses that are hawking AI services -- one for therapists, how to use it to manage your practice; one offering prompt-generating services; and I've forgotten the others. Also, it's hard to think of any examples of a technology that promised a lot but seemed too dangerous and so was ditched. Nuclear stuff has had some limits put on it, but it hasn't been ditched. Dunno if I can think of anything else.

Anyhow, I know you weren't talking about people saying that continued rapid development of AI is 100% inevitable but about AI doom being inevitable. But if continued rapid development, at a rate outstripping our work on alignment, is inevitable, then of course so is AI doom.

Expand full comment

Plus, you can always say it's just a machine doing calculations. Until it calculates your world right out of existence.

Expand full comment
Apr 3, 2023·edited Apr 3, 2023

Western countries such as the US have been ambivalent about blaming China for a lab leak because they also used the facility in Wuhan! So arguably they would be complicit, although it has been claimed that they stopped using the lab a few years ago and bio-security there became slacker afterwards.

My theory, FWIW, is that to make money on the side, reckless lab workers, or maybe one rogue worker, was supplying "used" animals to people selling them in the wet market. An unqualified underpaid grunt employed solely to dispose of animal carcases might have been clueless or skeptical about the dangers, if their job was just to shovel them into incinerators in a basement or similar.

Expand full comment

This is an interesting and plausible angle. Reminds me of a job I had at a sporting goods store where we were contractually obligated to destroy any damaged/defective equipment. Instead, what happened was it was all given away to friends. Who cares if this brand-new bat has a small scratch/dent in it if it's free!

Expand full comment