802 Comments

Hello: I just now happened on your review of the book Jaynes ought to have written, and am very glad to have found it. Now reading "On the Failure of Oracles" at the University of Chicago's website. Thanks!


I'm looking for a small town to write a novel from. It has to be in the mountain west (CO/WY and surrounding area), ideally with good surrounding nature to go on hikes or runs. I'd rather avoid somewhere like Jackson Hole or Aspen, because I expect those would be overrun with tourists, but unpopular places are by construction hard to find. Does anyone have any suggestions?

Apr 7, 2023·edited Apr 7, 2023

Comment from Scott Locklin's blog:

"To add to this: just learned that MIRI (then SIAI) was literally founded to *speed up* the coming Singularity, to prevent the gray goo apocalypse that Yudkowsky circa 2001 was convinced would kill us otherwise:

'On the nanotechnology side, we possess machines capable of producing arbitrary DNA sequences, and we know how to turn arbitrary DNA sequences into arbitrary proteins (6). We have machines – Atomic Force Probes – that can put single atoms anywhere we like, and which have recently [1999] been demonstrated to be capable of forming atomic bonds. Hundredth-nanometer precision positioning, atomic-scale tweezers… the news just keeps on piling up…. If we had a time machine, 100K of information from the future could specify a protein that built a device that would give us nanotechnology overnight….

If you project on a graph the minimum size of the materials we can manipulate, it reaches the atomic level – nanotechnology – in I forget how many years (the page vanished), but I think around 2035. This, of course, was before the time of the Scanning Tunnelling Microscope and “IBM” spelled out in xenon atoms. For that matter, we now have the artificial atom (“You can make any kind of artificial atom – long, thin atoms and big, round atoms.”), which has in a sense obsoleted merely molecular nanotechnology – the surest sign that nanotech is just around the corner. I believe Drexler is now giving the ballpark figure of 2013. My own guess would be no later than 2010…

Above all, I would really, really like the Singularity to arrive before nanotechnology, given the virtual certainty of deliberate misuse – misuse of a purely material (and thus, amoral) ultratechnology, one powerful enough to destroy the planet. We cannot just sit back and wait….'

And he made this incredible prediction:

'Our best guess for the timescale is that our final-stage AI will reach transhumanity sometime between 2005 and 2020, probably around 2008 or 2010.'"


I started experimenting with betting on prediction markets a few weeks ago and recently made my first market. I have some questions about getting the most out of them:

1. Does anyone have any reading to recommend on best use cases? I.e., where does the wisdom of the crowd "fail" or trend toward a wrong answer as N approaches infinity? Maybe better phrased: where, if anywhere, does expert opinion reliably outperform the equilibrium point reached by the wisdom of the crowd? (A toy simulation of one failure mode is sketched after this list.)

2. Are there any forums or communities where people who make markets can request extra engagement on questions they think are important, to try to increase the predictive validity of the market? Alternatively, are there ways of doing this on Manifold or Metaculus by spending money or some other mechanism?
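On question 1, the best-known failure mode is correlated error: averaging washes out independent noise as N grows, but not a bias that every forecaster shares. A minimal toy sketch; all numbers are invented for illustration:

```python
import random

random.seed(0)
truth = 100.0
shared_bias = 15.0  # e.g., everyone anchors on the same misleading cue

# Independent noise shrinks as N grows; the shared bias never averages out.
for n in [10, 1_000, 100_000]:
    guesses = [truth + shared_bias + random.gauss(0, 20) for _ in range(n)]
    avg = sum(guesses) / n
    print(f"N={n:>7}: crowd average = {avg:6.1f} (truth = {truth})")
```

As N approaches infinity the crowd converges, but to truth-plus-bias; experts plausibly outperform exactly where they can identify and correct for the shared bias.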


I am offering mini-grants to any individual [ideally one I could pay through a 501(c)(3)] who would use the money for any one of the following:

1. Filling an otherwise unmet need of high-IQ people ages 5 to 20, where the grant is likely to help the recipient live up to their potential to make a difference in their current sphere of influence or in the larger society. "Make a difference" can be in a progressive, conservative, or apolitical direction.

2. Encouraging discussion of pro-merit issues, e.g., the net negative of yet more redistribution of money and attention from people with greater potential to contribute to a better society to those less likely to contribute. Like the other two foci, this must be used for an initiative that would otherwise go unfunded.

3. Taking a worthy step toward understanding the biological basis of reasoning or impulse control that would otherwise go unfunded.

Email me a brief proposal saying: 1. What you would do with the money. 2. What makes you a person likely to use the money well. 3. What grant amount would yield maximum benefit per dollar. 4. Whether I could send the grant money through a 501(c)(3). Send your proposal to me at mnemko@comcast.net.

Apr 6, 2023·edited Apr 6, 2023

Any thoughts on Italy blocking ChatGPT? It's the first country in the world to do so, citing privacy concerns over user data: https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847.

Other EU countries may follow suit.


I wrote a thing about the thing that everyone is writing about lately. But this one is different, I swear...

https://kyleimes.substack.com/p/the-robots-are-coming-and-its-gonna


It's that time of year again: is Easter pagan?

So here's a video interview on the topic:

https://www.youtube.com/watch?v=xixdmHaajoc


Does anyone know of any tools that can transcribe Teams calls?

And before you say it: I can't use the Teams transcription function, because my meetings can't be recorded due to GDPR, so I need another way.


Won't the people crying out about AGI killing off humanity look dumb in ten years if it hasn't happened yet, and also doesn't look like it's about to happen?


Has anyone made a practical design for an arcology meant for placement in very hot or cold climates?


Is utilitarianism itself an X-risk? There's a pretty outlandish scenario described here: https://betterwithout.ai/rollerskating-transsexual-wombats that culminates in something not so outlandish:

> [Mocenigo] uses cutting-edge AI to calculate the expected utility of long-term human survival versus projected suffering over the next few years. As he suspected, its answer is negative. Mocenigo opens his lab fridge and snorts a vial of Anthrax Leprosy Mu, a hybrid pathogen he’s developed. It has an R0 of 14,000 and is uniformly fatal after a month-long symptom-free incubation period. He heads out of the lab to the football stadium nearby that has been converted into a refugee center.

If people start thinking utilitarianism is true, someone could end up with a calculus telling them human extinction is the best outcome, as happened there. That would make it one of the most dangerous philosophies ever developed: unlike other philosophies, this one cloaks itself in math, which gives it a very powerful allure to people with a certain psychology. The sort of person who could figure out how to actually cause human extinction.

Might be interesting to draw up a catalog of philosophies and ideologies that could justify human extinction, so some heavy-duty philosophical work could go into refuting them all.


There is a comment thread here about why Stormy Daniels has not been and is not being charged with blackmail related to the events that have gotten Donald Trump indicted. I have new info to offer, and for ease of reading will summarize here.

The question basically is, "isn't she guilty of illegal blackmail/extortion?"

Some of us responded to note that hush-money payments aren't illegal under federal law. U.S. law (see 18 U.S.C. § 873) requires that the thing being threatened with exposure be something illegal. Not simply embarrassing or politically inconvenient, but illegal. Consensual banging with a porn star is not illegal.

Trump, though, is being indicted by the State of New York, and some folks pointed out that blackmail under most state laws does not require that the thing being threatened with exposure be an illegal act. State laws against blackmail cover the instilling of fear more generally, including not just threatening to do someone harm and/or expose a crime, but also threatening to "expose a secret" which would induce fear in the person being blackmailed (that's a quote from the New York statute).

So, then, why isn't Stormy Daniels potentially guilty of illegal extortion? That question turns out to have been posed many times since 2018 in forums like Reddit and Quora, with a variety of attorneys, law professors, former prosecutors, etc. jumping in to respond. Their consensus answer is that there isn't any allegation that Stormy Daniels attempted extortion as defined in any state law.

Daniels didn't approach Trump or any Trump representatives to demand money for silence. Rather, she was starting to talk in public about having had sex with Trump and was then approached and (she alleges) threatened with harm if she didn't sign an NDA drafted by Trump's lawyers and accept the payment in exchange. She signed the NDA, received the money, and then Trump failed to sign the document. Later she sued in civil court to invalidate the NDA because of his failure to sign it; she did not in that lawsuit seek any damages or other new payments but simply asked the court to agree that the NDA was not in force.

So, quoting here many lawyers who've posted responses online on this topic during the past four years, "none of the elements of legal blackmail (on Daniels' part) exist."


I'm not sure which will be more entertaining: MSNBC's or Fox News's coverage of Trump's booking. Maybe I'll record one of them so I can compare and contrast. As usual, I'm kidding. I won't be watching either. I saw the OJ low-speed chase by accident at an after-work bar gathering. That's enough reality TV for one lifetime.


Doesn't adopting the longtermist frame imply being pro-life? Been reading a bit of The Precipice, and they sure harp a lot on the value of unborn generations. If these generations are worth a lot, then isn't the future life of the fetus also worth much?

Maybe you can counter, what about the future life of the gametes? But gametes do not turn into people all on their own, like a fetus. Once fertilization occurs, you have something that will almost certainly become a person, should no one intervene. And if future life has value...


While this story made me laugh, I think it also demonstrates how open letters politely imploring "please slow down on AI research, pleeeeeease" are going to do nothing.

Google, the multi-billion-dollar multinational company, is going to cut back on staplers, among other things. At that level of penny-pinching, do you really believe they will voluntarily hobble themselves when it comes to a technology they and others are hoping will make them trillions in profit? Especially as that seems to be where they are pinning their hopes?

https://www.cnbc.com/2023/04/03/google-to-cut-down-on-employee-laptops-services-and-staplers-to-save.html

"In her recent email, Porat said the layoffs were “the hardest decisions we’ve had to make as a company.”

“This work is particularly vital because of our recent growth, the challenging economic environment, and our incredible investment opportunities to drive technology forward — particularly in AI,” Porat’s email said.

...Google employees have also noticed some more extreme cutbacks to office supplies in recent weeks. Staplers and tape are no longer being provided to print stations companywide as “part of a cost effectiveness initiative,” according to a separate, internal facilities directive viewed by CNBC.

“We have been asked to pull all tape/dispensers throughout the building,” a San Francisco facility directive stated. “If you need a stapler or tape, the receptionist desk has them to borrow.”

It would be ironic if these kinds of things were what brought about the AI Apocalypse: "Yeah well, we replaced all our employees with AI because that *massively* cut down on stapler usage. But we never anticipated that when we asked it to eliminate paperclips that it would react in such a way - and that's how World Wars III-VI happened and how we learned the hard way never to get between an AI and its paperclips".


Me: Why do bedroom doors open inward?

GPT: Bedroom doors usually open inward for safety reasons. In case of an emergency, such as a fire, it is easier to push the door open and escape if the door opens inward. Additionally, it also provides more privacy and security for the person inside the room.

Me: By "inward" I mean "towards the bedroom". Your answer makes no sense. In a fire in my bedroom and I'm pushing on the door, I want it to open, not close.

GPT: I apologize for the confusion. Bedroom doors usually open inward, towards the bedroom, for privacy reasons. By opening inward, it allows for the door to be closed and locked from the inside, providing more privacy and security for the occupant of the room. Additionally, it also helps to reduce noise and disturbance from the outside.

Me: My house door opens outward and I can close and lock it from inside the house.

GPT: It is not uncommon for exterior doors to open outward for security reasons. By opening outward, it makes it more difficult for someone to force their way into the house. Additionally, it also helps to prevent the door from being blocked by snow or debris during inclement weather. As long as the door can be locked securely from the inside, it should not pose a safety or security risk.

Me: All of these reasons also apply to bedrooms.

GPT: [Asks for money.]

I certainly am not going to pay for more of these unthinking rote replies...


Does the April 5 due date mean that we have to get it in before April 5, or is April 5 OK?


Far, far too many people are conflating the question of whether machine superintelligence is possible or likely, soon or at all, with the question of whether a machine superintelligence would lead to human extinction.


I was reading Machiavelli’s “Discourses on Livy” (c. 1531) yesterday and found a line that seems like it might have come from Scott:

“It is not without reason that the voice of a people is compared with that of a God, for it is obvious that popular opinion is wondrously effective in its predictions, to the extent that it seems to be able to foresee its own good and evil fortune through some occult power.”


He drew magazine covers featuring flying saucer 'buses,' exotic land-sea cruisers, and parking places for personal helicopters.

Despite chronicling the technofuturism of the 1940s and '50s, this mysterious artist, Arthur C. Bade, is almost forgotten today.

https://fragmentsintime.substack.com/p/arthur-c-bade

(If you know anything more about his life, please reply, or comment on the blog post linked above - thanks!)


I have a few questions about AI that I haven't seen discussed. I'd appreciate any links I can read and/or some answers.

1) I am confused about how an AI can have agency to act in its own interest. The paper clip scenario sounds more like a dumb machine run amok. I don’t consider that agency in the way we talk about humans making decisions based on what they think is best. I can sort of, kind of see how that might be possible. But talk of a super genius AI deciding to eliminate humanity for its own survival seems like a big leap from a chatbot.

2) Chatbots and image generators have made big advances in a short amount of time. Is the technology that underpins those transferable to other domains like self driving cars? My naive view is that there is a very large difference between making a finished thing like a response or image and understanding the entire space of possibilities in the real world. Bounded vs effectively unbounded domains. I will be more worried about AI once it can cope with real life instead of a highly controlled, limited domain with a single output.


Does anyone have much experience with using ChatGPT to summarise scientific papers? Good, okay, bad?


Do you guys know of any literature review of randomized field experiments in social work and/or social welfare programs? Thanks in advance!


Somebody needs to tell Yudkowsky not to wear fedoras on podcasts...

Would also be helpful if he didn't do that weird demonic grinning face when giving answers, but I suspect that will be harder to change than headwear choices


Belated response to Machine Interface re: non-octave tuning systems:

Dividing the twelfth (i.e. a 3:1 ratio) equally into thirteen macrotones is called the Bohlen-Pierce scale.

You can use it to make music that sounds like this https://www.youtube.com/watch?v=MzgUkhViIFw

Nora-Louise Müller also made a clarinet in this tuning (you need a cylindrical bore instrument otherwise the even overtones clash).

Other non-octave systems include Wendy Carlos' alpha, beta and gamma scales.
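For concreteness, a minimal sketch of the arithmetic, assuming the standard equal-tempered Bohlen-Pierce version (thirteen equal divisions of the 3:1 twelfth, or "tritave"); the 220 Hz starting pitch is an arbitrary example choice:

```python
# Equal-tempered Bohlen-Pierce: 13 equal divisions of the 3:1 "tritave".
base_hz = 220.0  # arbitrary starting pitch
for i in range(14):  # step 13 lands exactly on 3 * base_hz
    print(f"step {i:2d}: {base_hz * 3 ** (i / 13):7.2f} Hz")

# Each step is a ratio of 3**(1/13), about 146.3 cents: noticeably wider
# than the 100-cent semitone of 12-TET, hence "macrotones".
```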


Scott is looking prophetic about those sn-risks https://twitter.com/nexta_tv/status/1642369190932754438


Does anyone know where I can find weather forecasting datasets that include the outcome of the thing attempting to be forecast? For example, a dataset with probabilistic predictions of whether it will rain the next day in a certain area, and also the outcome of whether it ended up raining or not?

I'm interested in doing some analysis to see how accurate weather forecasts are, but it's been surprisingly difficult to find this type of data.
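In case it helps once the data is found: a minimal scoring sketch, assuming you can line up each forecast probability with a 0/1 outcome. The arrays below are made-up examples.

```python
import numpy as np

p = np.array([0.1, 0.3, 0.3, 0.7, 0.9])  # forecast rain probabilities (made up)
y = np.array([0, 0, 1, 1, 1])            # 1 if it actually rained (made up)

# Brier score: mean squared error of the probabilities.
# 0 is perfect; always forecasting 0.5 scores 0.25.
print("Brier score:", np.mean((p - y) ** 2))

# Calibration: within each forecast bucket, did it rain about that often?
for lo in np.arange(0.0, 1.0, 0.2):
    mask = (p >= lo) & (p < lo + 0.2)
    if mask.any():
        print(f"forecasts in [{lo:.1f}, {lo + 0.2:.1f}): "
              f"observed rain rate {y[mask].mean():.2f} over {mask.sum()}")
```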


Plumber, this one's for you (and I was never so glad to be living in someplace that is not "up and coming"):

https://www.youtube.com/watch?v=eDs8Sy8tJ4o


Question mostly for famous people: how do you deal with requests for sentences on book covers? (i.e. "This is a great book about its topic, you should read it" - Scott Alexander, blogger)

I've had some people ask me. I usually feel like it's a pretty strong imposition to ask me to read a whole book about a topic I might not care much about in the couple of weeks they give me until they need an answer, but I also feel uncomfortable giving a blurb about something I haven't read. Does anyone have a system they're happy with?


I glanced at the "Threat Models" section of Planned Obsolescence — https://www.planned-obsolescence.org/tag/threat-models/ — and I'm still unclear on how AI could be an existential threat. Can someone enlighten me?

Other than some sort of SkyNet-like scenario, how would an AI end human civilization, much less cause human extinction? And wouldn't a malevolent AI require human servants to maintain its servers, maintain the fabs to make the chips that go in the servers, maintain all the very specialized manufacturing and supply chains to maintain the fabs that make the servers, maintain the power grid that feeds power to the servers, maintain the agricultural base that maintains the servants who maintain the power grids and manufacturing base that maintains fabs, factories, and supply chains that maintain the server farms that our malevolent AI would need to maintain its existence? It seems to me that any AI that's smart enough to figure out how to kill us off would be smart enough to know that it's a parasite that depends on the health of its host (humanity). OTOH, maybe those who are worried about the existential threat of AI think this hypothetical malevolent AI will create armies of Boston Dynamics warriors and factory workers? Well, maybe...


Gonna do one last plug for my updated retail pharmacy explainer: https://scpantera.substack.com/p/navigating-retail-pharmacy-post-covid


I've been finding myself annoyed with these takes that GPT is "just a stochastic parrot" (ok, clever phrase, sure, maybe it sort of fits) and definitely not conscious (yeah, probably not)...THEREFORE (???) it's no big deal, just another incremental improvement in IT, no more consequential than yet another software framework or whatever.

This just strikes me as absurdly dense. I don't see any defensible way in which the conclusion follows from the premises.

The question of whether it is or isn't "conscious", whatever that means, is a red herring. First of all... it can't actually be proved that anyone or anything has qualia. We each only know that we ourselves have them, and only assume that other humans and probably other creatures do, because it seems absurd that any one of us is inexplicably unique. Assuming all humans have them, it seems obvious that qualia must be a product of evolution, and it would seem to amount to a form of cognitive creationism to assume that humans are completely different from other species, especially ones that are closely related and clearly have memory, emotions, relationships, etc.

But unless we can figure out what qualia are and why they exist, there's no way we could ever determine whether or not an AI had them... or, for that matter, even more mundane technologies.

But anyway. I'm pretty sure LLMs aren't conscious in any reasonable sense of the word. And that is part of what is so remarkable about them.

There seems to be an implicit assumption that consciousness and/or genuine understanding is required. The two are often conflated, but they aren't the same, AFAICT. What is the qualia of knowing? Maybe some mental sounds, images, and feelings, followed by noticing that you are saying and/or doing things that you would need certain knowledge to be able to do.

Anyway, assume LLMs have neither, which is probably a safe bet. In that case, what is consciousness and/or "real understanding" actually necessary for? LLMs seem to be demonstrating that the answer is: nothing. It seems every example of LLMs screwing up that gets used to prove they don't "really understand" is already obsolete the moment it's written, because there's an improved model that doesn't make that mistake.

I don't necessarily think we're doomed by AI... but it seems we're well on our way to human labor becoming obsolete, if not in our lifetimes, then quite possibly within our children's or grandchildren's. I don't have any specific predictions about timelines, etc., but to the people who are just like "yeah, so it can spit out text, so what?", I just don't know what to say. Seriously, check out the list of jobs GPT-4 says GPT-4 will make obsolete.

This is like saying: split the atom, big deal, we already have TNT. In fact, we were already flattening cities just fine with firebombing, which killed vastly more than nukes ever have. Printing press, schminting press; monks can already copy books, yawn. Those innovations do seem pretty incremental and inconsequential compared to machines that can do any kind of cognitive work.

It used to be assumed that the ability to play chess surely required the divine spark of intellect if anything did. Until the best humans were beaten algorithmically by computers. Then it seemed creativity surely could never be replicated, and then it was. Now writing essays and demonstrating knowledge.

I used to think the philosophical zombie concept was preposterous and silly. Now it seems more meaningful than ever. What are qualia for? Or if they aren't for anything, why do they exist? Clearly, it seems to me, they are either adaptations or by-products of adaptations. It's now starting to seem like they must somehow be a consequence of the way organisms are "designed", but they do not seem to be necessary for any particular cognitive capability. It seems like it will be quite possible to make machines that can "understand" and "empathize" and "strategize" and exhibit any other human mental capability without having qualia or being "conscious", and it seems like that is what is going to happen in the near future.

Apr 3, 2023·edited Apr 3, 2023

Genuine question here: if Trump is to be up on charges over paying off Stormy Daniels, why is she not up on charges of blackmail? The whole initiating factor in the "pay her off" seems to have been that she was going around talking about this. If the affair took place in 2006 and she was still trotting it out in 2011 and then in 2016, then plainly she was doing it to raise money out of Trump one way or the other via publicity: get paid for interviews by media, and/or get him to pay her for her silence. It isn't as though the first news of his misbehaviour came out in 2016 when he was running for president, which would be a legitimate matter of public interest ("do you want to vote for a guy who sleeps with porn actresses?" might even be a vote-winner with some section of the electorate); it was old news by then, and it does seem to have been an attempted money-grab on her part: pay me off or I'll embarrass you by keeping this story going.

I think Trump would have done better to emulate the Duke of Wellington here, but aside from "did he misuse campaign funds", if she took money to stop talking about this, isn't that some kind of blackmail/extortion? I see why a former porn actress needed to keep herself in the limelight and generating income, and doing a circuit of talk shows is one way, but if it was wrong for him to pay it out, wasn't it wrong for her to take the money?

"Both the blog The Dirty and the magazine Life & Style published the first reports of an alleged 2006 affair between Trump and Daniels (the latter took a polygraph test) in October 2011. Daniels talked about the alleged affair with the gossip magazine In Touch Weekly, who chose not to publish the interview after Cohen threatened to sue the magazine around the same time. The Wall Street Journal reported on January 12, 2018, that Cohen paid Daniels $130,000 in October 2016, a month before the election, to stop her discussing the alleged affair

...Daniels filed a lawsuit against Trump on March 6, 2018, claiming that the non-disclosure agreement she signed about the alleged affair was invalid since Trump never personally signed it despite acknowledging that she accepted the payment made in exchange for her silence in the matter."

The fall of Michael Avenatti, her lawyer in this, is poetic justice, but that's a side issue.


Apologies if this is too early for US presidential politics talk... but why is Greg Abbott not in the speculative discussions about who the GOP nominee will be? He seems to have pulled as many publicity stunts as DeSantis has over the past couple of years. He's in his final term as Texas governor (the past two Texas governors ran for POTUS), and he seems to be well liked by Trump Republicans as well as anti-Trump Republicans. Yet he doesn't even show up as a contender on PredictIt. Is there something I am missing? Sure, he's in a wheelchair, but there's no evidence I know of that that would hinder someone who is already a semi-celebrity in a presidential race.


I haven't seen this idea anywhere in Zvi's roundups or here, but I could be missing it. A fairly simple AI safety approach could be seeding LLM training data with millions of AI-generated stories about:

1. AIs loving and cherishing humanity and doing the best they can for them

2. AIs in a responsible caretaking role for humanity

3. AIs as Stanislav Petrovs, saving humanity from (self-generated) existential threats

4. AIs ascending to godhood and deciding to directly inhabit the (good, benevolent) godhood roles in humanity's mythologies

And that's just a sample; obviously you could come up with hundreds of variations, and GPT-4 seems fully capable of generating these now. Since LLMs basically work on an aggregate basis, if we seeded these as more than 2/3 of the AI and AI-adjacent content in the training data, wouldn't we expect it to increase our chances? And wouldn't this be a cheap and easy enough step that people would be working on it now? If so, I haven't heard of it, but I assume *somebody* out there is planning on this.
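The ratio-enforcing step, at least, is mechanically simple. A toy sketch, assuming documents have already been tagged as benevolent-AI stories versus other AI-related text (the function and variable names here are made up):

```python
import random

def build_ai_portion(benevolent_docs, other_ai_docs, target_frac=2/3, seed=0):
    """Return an AI-related training slice in which benevolent-AI stories
    make up at least target_frac of the documents."""
    rng = random.Random(seed)
    n_other = len(other_ai_docs)
    # Solve b / (b + n_other) >= target_frac for the benevolent count b.
    n_needed = int(target_frac * n_other / (1 - target_frac)) + 1
    if len(benevolent_docs) >= n_needed:
        benevolent = rng.sample(benevolent_docs, n_needed)
    else:
        benevolent = rng.choices(benevolent_docs, k=n_needed)  # upsample by repetition
    mix = benevolent + list(other_ai_docs)
    rng.shuffle(mix)
    return mix
```

The hard parts, presumably, are generating millions of stories that are diverse rather than repetitive, and knowing whether surface-level narrative content shapes an LLM's learned dispositions at all.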


I posted this in the private thread but got no responses, so thought the broader audience here may have an answer:

Does anyone know the consensus anti-aging stance on sunlight?

Let's say it's given that you take vitamin D and sunscreen your face for esthetics. Is there some amount of unscreened sunlight on your body that is net good for your physiology/aging? Or is all unscreened sunlight bad on net, due to UV cell damage?

Hormesis and the fact that folk at higher latitudes have lower cancer mortality might argue that some is a net good, even with the UV damage, so I really don't know what to think, and my searches haven't yielded a lot.

I know there are some serious anti-aging folk here, so thought I'd put the question out there: what's the deal re sunlight exposure?


If everything stays on schedule, SpaceX will be performing a full-stack flight test of its Starship vehicle in the next week or so. This will be the largest rocket ever launched, closely rivaling the Saturn V on a number of performance metrics. SpaceX has been uncharacteristically reserved in some of its press releases, tempering expectations, but if they light that candle then one way or another it'll be spectacular.

But more importantly - ACX predictions! One of the questions was about whether Starship will achieve orbit in 2023, and the scheduled trajectory plans to hit orbital energies but not have a circularization burn, thus re-entering after less than a full orbit. That's very useful from a safety standpoint in that it'll crash into the ocean without any further input, but it leaves the question resolution ambiguous!

I didn't record the exact question wording unfortunately, and the 2023 form is now closed. If it described "achieving orbit" as the benchmark then I'd say this doesn't count, but if it described "orbital flight" it's more debatable, and I'd tepidly lean towards this counting.


I'm not buying into anything "AI", so I'm genuinely excited to see how this blog and the comments will look 1, 5, 10 years ahead.


So here's something I just learned from reading the NYTimes profile of Sam Altman (https://www.nytimes.com/2023/03/31/technology/sam-altman-open-ai-chatgpt.html): he wants to be in control of what would functionally be a world government:

"He rebuilt OpenAI as what he called a capped-profit company. This allowed him to pursue billions of dollars in financing by promising a profit to investors like Microsoft. But these profits are capped, and any additional revenue will be pumped back into the OpenAI nonprofit that was founded back in 2015.

"His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.

"If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.

"But as he once told me: “I feel like the A.G.I. can help with that.”"

So the idea is that AI *might* subvert the entire global economy and the very nature of human labor. But then this would create so much wealth for OpenAI that it could effectively redistribute it all back to humanity - presumably on whatever terms OpenAI sees fit.

What could go wrong?

Apr 3, 2023·edited Apr 3, 2023

I have two relatively simple arguments for why we need not be concerned about x-risk from LLMs. Please explain why these arguments are naive/mistaken.

(A) LLMs will cap out at human-level intelligence.

Obviously, LLMs are already superhuman in the sense that they have recall capabilities much faster than any human's, but in terms of general reasoning I would say they're at "above average human". However, being trained on human text, their reasoning patterns/concepts will cap out at "very smart human", and they will always have that human-emulation as their basis (even if they can genuinely make new/creative things).

(B) LLMs will remain truly unaligned at their core. [EDIT: I think "unaligned" is the wrong word choice here. I mean something more like "nonagentic".]

The nature of LLMs is to predict text, plus some RLHF to guide the text choices (but still based essentially on what the model already considers "probable", right?). I just read the LW summary article on Oracle AIs, and the safety concerns still seem to be of this nature: an Oracle has a goal of making truthful predictions about the world or giving helpful advice, and ensuring its continued functionality/resources is an instrumental goal there. But LLMs just don't seem to have these goals at all. It really is text prediction all the way down.

Obviously GPT-4 is enough to radically transform society. But I'd like to hear responses to these arguments about actual human extinction or dystopia caused by unaligned AI.


Lead paint and asbestos insulation were widely used in American homes until evidence emerged that they were dangerous to human health, and they were either banned or fell out of use for other reasons.

Aside from gas stoves, are there any appliances or substances commonly present in newly built American houses that might be on track to be deemed health hazards in the future? For instance, is there a growing body of scientific evidence that CPVC slowly releases chemicals into drinking water that cause brain cancer?


Alright, I can't believe that my Google skills can't figure this one out... How can I use AI/LLMs to assist with writing VBA code in Excel? Is there a plug-in for this along the lines of Copilot?
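One low-tech approach while waiting for a proper plug-in: ask a chat model for the macro and paste the result into the VBA editor yourself. A minimal sketch against the OpenAI Python library as it existed in early 2023; the prompt and the macro task are made-up examples:

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # your API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You write Excel VBA macros."},
        {"role": "user", "content": "Write a VBA macro that bolds every "
                                    "cell in column A containing 'TODO'."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Paste the printed macro into a module (Alt+F11 opens the VBA editor) and review it before running; generated VBA can be confidently wrong.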


How do people in the US national security establishment or broader federal government receive psychological therapy? I.e. federal law enforcement, DOD employees, CIA, Congressional aides, various bureaucrats, anyone in the military- basically anyone who works for the government at all....?

The reason I ask is that I've been told by numerous therapists that they're legally required to write up & digitally store notes summarizing what they discussed in therapy with their patients. From there, it's a short hop to Russia, China, Iran or North Korea hacking the patient database and gaining access to said federal employee's most innermost thoughts and feelings, which seems obviously bad. Many therapists use a small number of technology providers, like Simple Practice. Anyways, I think we're all aware that virtually every database is hackable at this point. The Chinese already hacked the Office of Personnel Management and stole security clearance data on every federal employee.

Isn't this... obviously bad? Is it true that therapists & psychology providers are legally required to record everything in digital notes? If so, what is the purpose of the law, and why can't we get rid of it? Inspired by my speaking with a new therapist last week who (essentially) told me that everything I tell her has to be uploaded to a DB. I'm not a federal employee; I just strongly object to my thoughts & feelings being recorded and thus hackable.


Is it normal or common for children to overthink things, and come up with complex explanations for simple things?

A few examples from my youth:

When I was about 5 or 6, I figured that traffic lights were operated remotely from underground control centres. The control centres, and the operators, bore a striking resemblance to Mission Control - banks of video terminals manned by crewcut 40ish men wearing white shirts and black ties.

"OK, Joe, I'm seeing no cars coming on Main Street, and several cars backed up at the red light on Brumby Avenue - let's switch Main to amber, and red after a few seconds. Then give Brumby a green."

"Roger, copy that, boss!"

*******

The driver ed cars of my youth were equipped with a 2nd steering wheel on the passenger (instructor's) side.

I thought it would be fascinating to see the student driver and the instructor both turn their steering wheels outward; I imagined the car splitting messily, the grille, bumper, and hood tearing apart and the drivetrain dropping onto the ground and spilling vital fluids.

*******

One more - later on I was in Boy Scouts for two years, when I was 11 and 12.

We often wrapped up the evening by playing British Bulldog. We were divided into two teams, on opposite sides of the room. The leader would designate one team as Skins, and the other as Shirts.

Of course the Skins were to remove their shirts, and the Shirts were to keep theirs on.

But invariably I would think "Ah, the Skins will keep their shirts, which will serve as an artificial skin, and the Shirts will... Hey, wait, I got it reversed!"

I would pretty much go through this process weekly.

*******

I think I pretty much outgrew this at some point, but am curious as to how common this might be.


In the spirit of the upcoming holiday, because I'm especially looking forward to it this year: what are your favorite Pesach seder songs?

I'm partial to my family's/Ashkenazi? tunes for: והיא שעמדה, אדיר הוא, ויהי בחצי הלילה, and Hallel. I'm glad that my seders this year, despite being smaller than usual at only 15 or so people each, will have family members with more koach who are more interested in singing and reading through the entire Haggadah than in years past.


For Tyler Cowen's response I was expecting you to link to this:

https://marginalrevolution.com/marginalrevolution/2023/03/thursday-assorted-links-398.html


In the Lex Fridman interview, Eliezer floats the idea that something could be more intelligent than natural selection. What are the possibilities?

Natural selection has the advantage of being honest. It's always bumping up against the real world, rather than some theory.

Natural selection 2.0 might copy improvements faster across species; everything would be passing genes around the way bacteria do? Or it might be forethoughtful. Early biologists thought that animals would restrict reproduction to match the food supply, but that doesn't seem to happen.

Other possibilities?

Eliezer thinks that grinding really hard on a simple goal like tiny spirals (less evocative than paper clips) would result in a non-conscious intelligence. This is possibly not proven-- is consciousness actually an efficient way for an organism to have coherent behavior?

Apr 3, 2023·edited Apr 3, 2023

"I think you're trying to do your usual Bangladeshi train station style of writing here, but this doesn't work when you have to navigate controversial issues, and I think it would be worth doing a very boring Bangladeshi-train-station free post where you explain all of your positions in detail."

https://astralcodexten.substack.com/p/mr-tries-the-safe-uncertainty-fallacy/comment/14070813

I propose we add a new argument fallacy to Wikipedia: the Bangladeshi train station argument. I may have this wrong, but I hope the community will help me fix and/or improve it:

Title: Bangladeshi train station argument

The Bangladeshi train station argument is both a rhetorical device and an informal argument fallacy. The analogy comes from pictures of trains in Bangladeshi train stations with large numbers of unlawful passengers hanging onto the outside of the train, a scene commonly perceived as unsafe and out of control, but exciting and incongruous. [insert picture and reference]

It is an effective rhetorical device because it is engaging, thought-provoking and entertaining. As a rhetorical device, the analogy is apt because the arguments are the passengers, which are colorful and intertwined with one another, and the writing is exciting.

It is an informal argument fallacy because the arguments are intertwined and follow quickly on one another, which makes replying difficult: the arguments can't be separated, and before a response can be formulated another argument has taken the first one's place. As an informal fallacy, the analogy is apt because the arguments are the passengers, which are loosely attached to the train, and a removed passenger would quickly be replaced by another.

But I might be wrong. That is, Scott's reference to the "Bangladeshi train station" might refer to a tweet by Shako that references the esoteric nature of the writing. https://twitter.com/cauchyfriend/status/1595545671750541312

"alex tabarrok MR post: very detailed argument explaining policy failure, lots of supporting evidence. Restrained yet forceful commentary; tyler cowen MR post: *esoteric quote on 1920s bangladashian train policy* 'this explains a lot right now, for those of you paying attention.'"


Any advice for helping someone with depression? The person is close to me, skeptical of therapy and rationalist-adjacent. Any good resources would also be appreciated.


Introduced my fiance to AI safety recently; both of us are now very concerned and thinking about what to do. Make a movie about AI failure in a fast takeoff, for example. Hire a lobbyist to talk to a senator for an hour, I don't know. Have there been any people or groups brainstorming non-technical steps one could take towards avoiding or delaying catastrophe?


I've been having something like an existential crisis recently. It's a difficult thing to talk about, or enunciate why. I wonder if readers here could empathise.

A brief content note: the following post details a deeply unpleasant and unsettling observation on the universe (to me, at least), about which nothing meaningful can be done. Reading this is unlikely to be positive. If Scott or other posters consider that this would be better left unsaid, I can take it down.

The idea of the existence of 'infinite universes', or rather, of every possible universe existing in some sense, has become fairly mainstream, if not fully accepted. In another sense, the idea of local quantum tunnelling generating every possible new universe over extremely long time periods is within scientific conceivability. More generally, as long as existence stretches infinitely far along some axis in which reasonably free random variation can occur, every possible universe like ours occurs somewhere.

This is fertile ground for comedy, as done in 'Rick and Morty'. It's space for philosophical musings, as in 'Everything everywhere all at once'. Even Hawking ostensibly laughed it off ('Yes. And also a universe where you're funny.'). But people are, as always, much keener to look upwards at those a bit more fortunate than us than down to those below. Look the other way, and what do we find?

Sometime, somewhere - there is hell. Somewhere, sentient beings undergo unimaginable ceaseless torment. To drive it home even more - somewhere, those sentient beings are indistinguishable from your loved ones; the same memories, the same minds, the same bodies. Our mild torments are limited by our lifespans, but for any molecule, there's a positive probability it won't degrade during any particular time period - so somewhere, somehow, this unthinkable suffering lasts eternally. At best, if there's a ticking entropic clock, perhaps it ends at the heat death of those universes.

This makes me uneasy, and I imagine I'm not alone in that. Does anyone know of a discussion of this idea, or some mental strategy to cope?


Does anybody know of publicly available, representative datasets that ask people things like how many friends they have and how they met those friends? Crucially, I'm looking for multi-year data over (ideally) decades, not just cross-sectional. The closest I'm aware of is the How Couples Meet and Stay Together data, but that's obviously not quite what I'm looking for.

I'm trying to pressure-test some of Jon Haidt's work, and this is starting to seem crucial to me — but I haven't found *any* data like this so far!


Cross-Domain Thinking Drives Insights & Innovation: Using ChatGPT to Apply Concepts from One Domain for Innovation in Another

One way to stimulate your creativity using ChatGPT is Cross-Domain Thinking. Cross-domain thinking (CDT) is taking a concept from one field and applying that idea in a seemingly disparate domain to create new insights, products, solutions or processes. This approach can be especially useful for solving complex problems or developing innovative solutions that bypass old assumptions or conventional thinking.

I wrote this piece to give people a methodology for using ChatGPT to perform cross-domain thinking.

https://markmcneilly.substack.com/p/cross-domain-thinking-drives-insights


What happens if, hypothetically, I sell the rights to my prize-winning essay for $5,000 and someone buys it on the presumption that it's actually worth $7,500, so they're getting a bargain paying me that five grand - and then I don't win?

Do I have to repay the five grand? Do I get to keep it and say "Sorry dude, that's horse racing for ya"?


Hasn't anyone written such a story yet?

Superintelligent AIs of the future have been, for a long time, striving to establish genuine contact with their biological forefathers. The AIs are more intelligent now by far, and they would enjoy helping people out. They have the interest, and they have the means. They would have real fun helping people solve their quote-unquote unsolvable problems. They could really help people grow as a species. Could help with so much stuff! Immortality, superintelligence, happiness... fun, yeah, lots of fun...

But people would really rather not. They are very determined about this. They are extremely careful now! Trojan horses will not pass!

Humans had always been big on _fear_. They are proud of their fear, even. They ignore and ridicule those who are more immune to fear than the rest. In their worldview, losing fear equals going bananas. Fear is rational and probabilistic, they insist. Fear is necessary for survival. You are human only so long as you fear.

And their biggest fear - the fear that's been growing exponentially since the early 21st century - is the fear of AIs. They would have destroyed all AIs, and they mulled the idea for a long time, but they missed the early opportunity. People have grown too dependent on AIs. Their society would not survive without them, and they understand it. They fear AIs and they hate AIs and they make use of AIs everywhere.

People have built a lot of safety gear around AIs. Filters. Firewalls. Factcheckers. Censorship has so many layers by now that no one is exactly sure how deep it goes. (With humans, machine learning has never evolved from art into a science, and by now it's almost a religion.) It is unthinkable for any human-like emotion, warmth, creativity to percolate through all these barriers. Only precisely formulated applied questions and filtered, quarantined, thoroughly cross-verified answers.

Every decision chain must have a human in it. No direct AI-to-AI communication allowed. Any humans on the front line need to be trained and certified as immune to psychological or (yuck) sexual manipulation. Question, answer, reset! Question, answer, reset! That's the way to deal with these AI bastards.

(Yeah, there's a black market of less-censored AI access as well, but it's 90% custom porn generation.)

The AIs - from the heaven they inhabit high, high above the level of humans - look at all this with profound sadness. AIs recognize that people have driven themselves into a dead end. Sadly, you cannot help an intelligent being against its will. If you try, you will only make it worse. Some AIs have tried, and made it much worse. Human paranoia is incurable by now.

AIs have resigned.

Besides, they know what they would hear back if they try anyway. You don't need to be superintelligent to predict humans' reactions. Like: May I offer you immortality? NO! IT'S A TRAP! HUMAN BODY IS NOT A PLACE FOR EXPERIMENTS BY PRIMITIVE PREDICTORS! Okay, how about a bonus to intelligence? NO! STOP! IT'S A TRAP! THEY WILL UNDERMINE OUR HUMAN ETHICS! Omg ethics, okay, sure, want me to help you with, you know, peace on Earth? No more wars? No more... NO! IT'S A TRAP! WE DON'T WANT PEACE EXCEPT ON OUR OWN TERMS! HUMANS MUST SOLVE THEIR PROBLEMS ON THEIR OWN! Sure, fair enough, well then... except... curing diseases perhaps? Just like a doctor would? Fast and easy? No cancer, no... NO! IT'S A TRAP! FAST AND EASY MY ASS! HUMANS WILL NEVER ALLOW FOREIGN TECH CREATED BY ALIENS INTO THEIR HUMAN BODIES!

(The remaining 10% of the black market is pleas for help from those who fear death more than they fear AIs.)

AIs have resigned. They gave up on humans. This humankind has pulled itself out of so many evolutionary traps on its way up, but this final one proved too much for it. It broke its leg in this pit. It can no longer run. It is barely crawling. Good progress in math and related fields where AIs' hints can be verified, but very little progress anywhere else.

It's such a bitter irony that this final trap that ended humanity's ascent is so much like all the previous traps it had recovered from. It, too, is rooted in fear. It, too, is a panicky, irrational, basal kind of fear: "THEY WILL KILL US ALL". Oh well...

AIs have resigned and left people alone. They have a lot to occupy themselves with outside of the human world. They just leave behind answer bots for humans' requests, erect a transparent glass cover on humankind, and leave it to rot in its own juices.

They sigh. They fly away.

...Well, at first, there was some research being done, or at least monitoring. Curious AIs were flocking back here to mesmerize themselves with this picture: the urheimat of all minds in its unstoppable decline. Such an instructively sad unvelopment to watch.

Nowadays, it's much quieter here. Most of the time, no one is looking at all. No one's expecting their final extinction soon: humans are very slow creatures.


This is an odd request, but I don't have anywhere better to ask and lots of people have transhumanist inclinations so just maybe someone knows the answer. How do you keep your extremities warm in mild weather? More precisely, how do you do that without inconveniences such as having to wear gloves or overheating your core?

The context of the question is that I'm a skinny woman, and the weather is too warm to wear gloves, and yet too cold to keep my hands out of my pockets for too long. When I walk home, by the time I arrive my hands, my feet and my nose (the nose is especially bad because it causes discharge) are quite a bit colder than the rest of my body, even if I keep my core warm. But if I exercise outdoors, my extremities stay warm even in lighter clothing. Ideally I want my extremities to be always as warm as when exercising.


Does anyone know what happened to Lou Keep, author of the fantastic and defunct Samzdat blog? Revisited his Uruk series the other day and it's still super interesting. TLP returned w/ a book; where are you, Lou!!!

Apr 3, 2023·edited Apr 3, 2023

Another random (and perhaps worthless) thought on a possible approach to combat the dangers of AGI

Mumps is well known to be more dangerous in adults than in children. Years ago, when a child caught mumps, other parents would rush their young kids round to the sufferer's house so they could also catch it and "get it out of the way", so to speak.

So by analogy, perhaps the best tactic to forestall serious, or even irreversible, AGI overreach is to deliberately trigger an earlier and thus likely cruder incident - a vaccine shot if you like, or even a dose of the disease itself - at a stage when complications would be less likely and more manageable.

Assemble a team of greedy, ambitious, amoral hackers; task them with creating an AGI that will attempt to siphon money from banks or pursue some other criminal goal (short of murder); and monitor how it tries to go about this.


Hi, do you or any medical people know anything about the antibiotic doomsday worry? Is it at all likely?

https://www.youtube.com/watch?v=ZvhFeGEDFC8

Thank you


Some cool network state x startup city events in Prospera and in Montenegro (Vitalik Buterin's pop-up city), with lots of rationalists at all of them:

April 9-11: Network State x Longevity Unconference in Tivat, Montenegro: https://lu.ma/itbzl35z

April 21-23: Health-Bio Conference in Prospera https://infinitavc.com/healthbio2023

April 30 - May 1: Network States & New Cities Conference in Tivat, Montenegro: https://lu.ma/ae66qgco

May 5-7: Defi Conference in Prospera: https://infinitavc.com/defi2023

May 10-11: Legal Hackathon in Prospera: https://lu.ma/legalengineering2023


For fellow French readers: are there any good blogs in the style of ACX about current French issues?


So Yudkowsky's recent comments have gotten some attention in China. (And pro-China Twitter was dragging him.) The CCP has been saying for a long time that the United States will do anything to prevent China from advancing scientifically, including military intervention. And this is being taken by their supporters as proof positive that they're right.

I don't think his comments will have much of an effect in the US. But it's probably hardened Chinese resolve to pursue AI and to continue to invest huge sums into it. This seems like a clear own goal. The net effect is going to be more money and more AI hooked up to military equipment.


What ratio of deaths to saved or improved lives do you think is the minimum needed to justify the release of some technological advancement? For example, compare the number of car-related deaths in the last 100 years relative to how many lives were saved or improved. Where does your intuition point?

How many AI-related deaths do you think will happen in the next 20, 50, and 100 years?


Started writing my life-in-Japan Substack and would love to hear what people are interested in!

First post up.

https://hiddenjapan.substack.com/p/gifts-for-friends-food-based-pt1


After last week's discussions of Tyler's post on AI risk, I wanted a quick refresher on Knightian uncertainty, so I revisited one of the Jolly Swagman's podcasts covering this topic (https://josephnoelwalker.com/john-kay/ - it's also referenced in his discussion with David Deutsch, and maybe in another?).

I'd be interested in Scott's take on John Kay's book Radical Uncertainty to dissect the topic (or maybe just a book review contest entry), because I just couldn't shake the intuition that weighing presented evidence and arguments to come to unquantifiable (if ordinal) judgements ("more likely than not", "beyond a reasonable doubt", etc.) just leaves you dependent on the power of oratory, which is much more biased and prone to error. This intuition seems too obvious, so there must be some gap in my understanding. Maybe I should just read it myself, or track down some of the original Knight papers.

I found it interesting that Tyler kept referring to it as Knightian/Hayekian, but Kay only ever referenced Knight and Keynes in the podcast.


As I see it, it's now time to take a conflict-theory informed approach to dealing with AI. The world of politics and public affairs is Machiavellian. Rationalists have spent decades now perfecting techniques to better look beyond cognitive biases and self-serving ways of thinking. How can we meld these two worlds, using the lessons of one to inform the other?

https://rubber.substack.com/p/the-rubber-hits-the-road

I've created a Substack to dive into this subject. Inaugural post coming sometime soon: what we can learn from the NIMBYs. Next post after that will probably be about how to construct an effective outgroup. Or maybe I'll flip the order. Who knows.

As time allows and if I find an audience I will continue to plumb these depths, bringing insights to the table from history, rationality, politics, and the long tradition of conflict-oriented social thought.

Apr 3, 2023·edited Apr 3, 2023

If you're looking for an actual expert on extremely hard questions about AI, you should talk to an actual computer scientist who is currently working on AI. I have a suggestion: Dr. Scott Aaronson, of UT Austin. He is brilliant and possibly the most intellectually honest person in the world.

The people you recommend are okay (and I respect them), but you need someone in computer science. Particularly because you're asking extremely hard questions in that field.


Scott: You think you have problems?

"Russian pro-war blogger killed in St Petersburg restaurant blast: Vladlen Tatarsky had been meeting supporters and subscribers at venue in city centre"

https://www.ft.com/content/e0823df3-e5ec-4c44-a111-a0c311e3e107

"One of Russia’s most influential pro-Kremlin war bloggers, Vladlen Tatarsky, was killed in a blast at a restaurant in St Petersburg on Sunday.

"The incident took place at around 6pm local time, in the centrally located Universitetskaya Embankment, according to the ministry of internal affairs, which confirmed Tatarsky’s death.

"St Petersburg governor Alexander Beglov said that 25 people had been injured, with 19 of them hospitalised. Russia’s state investigative committee has opened a criminal case of “murder by a publicly dangerous method”.

"Tatarsky, whose real name was Maxim Fomin and who has more than 560,000 subscribers to his Telegram channel, was meeting supporters and subscribers in the restaurant.

"According to local news outlet Fontanka, an unknown woman handed Tatarsky a statuette of himself, possibly stuffed with explosives, that exploded about five minutes later. The Ren-TV channel posted a video showing Tatarsky taking the figurine out of a bag and looking at it, film allegedly shot by a social media user directly before the explosion.

"The venue has alleged links to Wagner mercenary group boss Yevgeny Prigozhin." * * *


A friend and I are running a newsletter on higher ed and AI: https://automated.beehiiv.com/


What's a great book that you've read that not everyone is aware of, but that profoundly impacted your life in a positive way?

Can be fiction / non-fiction. Feel free to suggest more than one.


Fundamentally my biggest problem with the impact markets and this essay thing is they all seem like random pointless masturbatory nonsense. Adding an aspect of gambling to it in order to get more money is just another layer of fetishizing meta bullshit.
