717 Comments

Does anyone here, preferably someone based in Africa, know the results of Sierra Leone's parliamentary election on Saturday June 24? I need to resolve https://manifold.markets/duck_master/whats-sierra-leones-political-party (since I really like making markets about upcoming votes around the world). I've been *completely unable* to find stuff about the parliamentary election results on the internet, though the simultaneous presidential election has been decently covered in the media as a Julius Maada Bio win.


New "History for Atheists" up! An interview with an archaeologist on "Archaeology in Jesus' Nazareth":

https://www.youtube.com/watch?v=5bO4m-x_wwg&t=3s


LW/ACX Saturday (7/1/23): happiness, hedonism, wireheading and utility.

https://docs.google.com/document/d/1pAZfz5VyFF7Pa4UN0o7FPKAk1vKEHsYTIC2LBJ0FbBg/edit?usp=sharing

Hello Folks!

We are excited to announce the 32nd Orange County ACX/LW meetup, happening this Saturday and most Saturdays thereafter.

Host: Michael Michalchik

Email: michaelmichalchik@gmail.com (For questions or requests)

Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Date: Saturday, July 1st 2023

Time: 2 PM

Conversation Starters (Thanks Vishal):

A) Not for the Sake of Happiness (Alone) — LessWrong https://www.lesswrong.com/posts/synsRtBKDeAFuo7e3/not-for-the-sake-of-happiness-alone (audio on page)

B) Are wireheads happy? - LessWrong https://www.lesswrong.com/posts/HmfxSWnqnK265GEFM/are-wireheads-happy

C) How Likely is Wireheading? https://reducing-suffering.org/how-likely-is-wireheading/

D) Wireheading Done Right https://qri.org/blog/wireheading-done-right

E) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.

F) Share a Surprise: Tell the group about something unexpected, or something that changed your perspective on the universe.

G) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.


Any suggestions on online therapy which is good?

I've finally begun to earn enough that self-funded therapy is an available option. I'd like someone who's more towards the CBT/taking-useful-actions end, rather than pure talk therapy (I would not mind talking through how I feel about things; it just seems insufficient without also talking through potential actions I can take to improve my life).

Mainly, along with attempting to figure out what to do, what I want is essentially to be able to talk through things with a smart person who's obligated to keep my privacy (I'm generally very bad with trust as far as talking with the people around me is concerned; I'm hoping that a smart stranger, who I can trust to keep things to themselves, would allow me to be more open.)

Also taking recommendations for books/other things I can use myself. (I tried reading Feeling Great, and maybe I should slog through it; I'm just generally put off by mystical stories that omit most details in the service of making a point; they just seem kind of fake. Maybe I should just get used to that, though.)

User was banned for this comment.

I’ve noticed that Scott often quantifies the social cost of CO2 emissions by the cost it takes to offset those emissions (e.g., in his post on having kids, he says since it costs ~$30,000 to offset the average CO2 emissions of an American kid through their lifetime, if they had $30,000 in value to the world, that’s enough to outweigh their emissions; he does something similar in his post on beef vs. chicken). But this seems wrong to me: the cost of carbon isn’t the cost of offsetting that level of CO2 emissions, especially in a context where carbon offsets produce positive externalities that the market doesn’t internalize (so we are spending inefficiently little on carbon offsets right now). Am I missing something?

I get why this would work if carbon offsets were in fact priced at their marginal social value (since the social value of a carbon offset presumably equals the social cost of carbon). But I'm not sure this is true. How are carbon offsets actually priced?

Jun 28, 2023·edited Jun 28, 2023

Just realized why you wouldn't want to live for a very long time, and definitely not forever: value drift. Your future self would not share the values your current self holds. On a short timeline like a regular human lifetime this won't matter too much, but over centuries or millennia it starts to look different. Evolution has probably acted on humans to make sure value drift doesn't happen too fast over a normal lifespan and doesn't usually go in the wrong direction, but this isn't the case with artificially extended lifespans.

Edit: People need to realize not all value drift will be benign. Some types of value drift will lead to immense evil. I don't-even-want-to-type-it-out type of evil.

https://sharpcriminalattorney.com/criminal-defense-guides/death-penalty-crimes/


How much do you think your life would change if you suddenly were gifted (post tax) $5m?


A market on Manifold has been arguing about John Leslie's Shooting Room paradox. The market can't resolve until a consensus is reached or an independent mathematician weighs in. Does anyone here have any advice? https://manifold.markets/dreev/is-the-probability-of-dying-in-the
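For anyone who hasn't seen the paradox spelled out, here is a minimal Monte Carlo sketch of the standard formulation, assuming successive batches ten times larger than the last enter the room and a roll of double sixes (probability 1/36) ends the game with the batch currently in the room losing. The function names and exact batch schedule are my own illustrative choices, not anything from the market.

```python
import random

def shooting_room_game(rng, p_end=1/36):
    """One run of the Shooting Room: batches of size 1, 10, 100, ... enter
    in turn; after each batch the dice are rolled, and double sixes
    (probability 1/36) end the game with the batch currently in the room
    losing. Returns the fraction of all entrants who were in that batch."""
    batch, total = 1, 0
    while True:
        total += batch
        if rng.random() < p_end:
            return batch / total
        batch *= 10

def mean_losing_fraction(trials=20_000, seed=0):
    """Average, over many games, of the share of entrants who lose."""
    rng = random.Random(seed)
    return sum(shooting_room_game(rng) for _ in range(trials)) / trials

# Each individual faces a 1/36 chance ex ante, yet the mean share of
# entrants sitting in the losing batch comes out near 0.9 -- these are
# the two numbers the market is arguing between.
```

The tension is that both numbers answer real questions: 1/36 answers "given fair dice, what is my chance of dying once I'm in the room?", while roughly 90% answers "what fraction of everyone who ever enters the room dies?"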


Canada's wildfires have broken the annual record (at least for the period with good national records, which seems to begin in 1980) for total area burned, and they're only now reaching the _halfway_ mark of the normal wildfire season.

https://www.reuters.com/sustainability/canadian-wildfire-emissions-reach-record-high-2023-2023-06-27/

Meanwhile the weather patterns shifted overnight and Chicago is now having the sort of haze and smell that the Northeast was getting a couple weeks ago:

https://chicago.suntimes.com/2023/6/27/23775335/chicago-air-quality-canadian-wildfires-worlds-worst

Did my usual 1.5-mile walk to the office this morning, from the South Loop into the Loop. The haze is the worst I can remember here since I was a kid, which was before the U.S. Clean Air Act, and the smell is that particular one a forest fire makes. (I hadn't yet seen the day's news and was walking along wondering which big old wood-frame warehouse or something was on fire.)


My sense overall is that the book review contest entries are better this year than last year- do people generally agree?


I have been trying to track down a specific detail for a while with no luck. The first Polish language encyclopaedia, Nowe Ateny, has this comment on dragons that is among its quotable lines (including on the Wikipedia page!): "Defeating the dragon is hard, but you have to try." This is very charming and I can see why it's a popular quote, and I'm interested in finding the original quote within the text, but searching the Polish word for dragon (smok, assuming it wasn't different in the 18th century) hasn't revealed anything. Would anyone be able to find the sentence and page that it appeared on?

I tried for a while to use ChatGPT for this, thinking that it's the sort of "advanced search engine" task it would be good at, but the results I got were abysmal.


Psychedelics affect neuroplasticity and sociability in mice... Maybe I should dose my cat (The Warrior Princess) with MDMA to make her more sociable with the neighborhood cats. She does love to brawl!

https://www.nature.com/articles/d41586-023-01920-2

https://www.nature.com/articles/s41586-023-06204-3


I hope shameless self-promotion isn't forbidden here, but I thought some in this community in particular might enjoy my near-future sf story "Excerpts from after-action interviews regarding the incident at Penn Station," published last week in Nature. (947 words)

https://www.nature.com/articles/d41586-023-01991-1

Jun 26, 2023·edited Jun 26, 2023

Leading Harvard professor accused of fabricating data, based on independent data analysis, in research papers covering the topic of honesty.

https://www.google.com/amp/s/www.psychologytoday.com/intl/blog/how-do-you-know/202306/dishonest-research-on-honesty%3famp


https://www.businessinsider.com/real-estate-agents-lawsuits-buy-sell-homes-forever-housing-market-2023-6

Shared for "Nicholas Economides, a professor of economics at New York University..."


I've been writing a novel on AI and sharing weekly. The (tentative) blurb is "Why would a perfectly good, all-knowing, and all-powerful God allow evil and suffering to exist in the world? Why indeed?" I just posted Chapter 5 (0 indexed), hope it's of interest!

https://open.substack.com/pub/whogetswhatgetswhy/p/heaven-20-chapter-4-gnashing-of-teeth?r=1z8jyn&utm_campaign=post&utm_medium=web


Nothing about the "official" and public story about the Day of Wagner makes sense.

That story, roughly: after weeks of verbal escalations, Prigozhin declares open revolt around 24 JUN 0100 (all times Moscow time). At 5 AM, the troops enter Rostov-on-Don and "take control" of the city (or one military facility) without resistance.

The troops then start a 650 mile journey from Rostov-on-Don to Moscow. The goal? Presumably, a decapitation strike against Putin. Except, rumor has it that Putin wisely flew to "an undisclosed location".

The Russian military set up blockades on the highway at the Oka river (about 70 miles south of downtown Moscow), and basically dared Prigozhin to do his worst.

In response, Prigozhin ... surrendered completely before midnight, accepting exile in Belarus. The various Wagner troops are presumably going to follow the pre-announced plan of being rolled into the regular Russian army on July 1.

... while I can't rule out that there was an actual back-room coup attempt, it seems more likely that this was a routine military convoy that was dramatized in the Russian media, and then re-dramatized by the Western media as something that was not choreographed ahead of time.


I'm going to be searching for a new job soon. I've seen lots of posts about LLMs helping people with resumes and cover letters etc., so I have a few questions:

1. Is GPT actually good enough at this that it will meaningfully help someone who is mediocre to average at resume/cover letter writing?

2. Is GPT-4 enough better than 3.5 at this kind of task to be worth paying for?

3. Is there some other tool or service (either human or AI) that is enough better than ChatGPT to be worth paying for, and that would obviate the need to pay for GPT-4 for this purpose?

User was banned for this comment.

So Ecco the Dolphin wasn't based on Lilly or his theories of LSD/Dolphins/Sensory deprivation...

But it was based on a movie that was inspired by Lilly, and the creator likes the adjacent theory of dolphins/sensory deprivation, but not the LSD portion? And the dolphin is named after one of Lilly's theories, but the creator assures us that is pure coincidence.

Uh huh...

Doesn't seem like much of a correction.


Is there a surgeon in the house?

Surgeons have a reputation for working really punishing hours, up there with biglaw associates and postdocs. I'm trying to understand why. Is it just the residencies that are punishing, or do the long hours extend into post-residency careers? And what's driving people to keep going?


Sticking with the theme of early hominins (and AGI), which I also posted about below, I'm wondering if new discoveries about Homo naledi don't complicate the evolutionary analogy often made by FOOMers, best expressed by the cartoon going around showing a fish crawling out of the water before a line of various creatures in different stages of evolution with modern man at the front of the line. Each creature thinks "Eat, survive, reproduce" except for the human who suddenly thinks "What's it all about?" https://twitter.com/jim_rutt/status/1672324035340902401

The idea is that AGI suddenly changes everything and that there was no intermediary species thinking "Eat, survive, reproduce, hmm... I sometimes wonder if there's more to this...." I.e., AGI comes all at once, fully formed. This notion, it seems, has been influential in convincing some that AI alignment must be solved long before AGI, because we won't observe gradations of AI that are near but not quite AGI (Feel free to correct me if I am totally wrong about that.)

Homo naledi complicates this picture because it was a relatively small-brained hominin still around 235,000 - 335,000 years ago which buried its dead, an activity usually assumed to be related to having abstract notions about life and death. It also apparently made cave paintings (although there is some controversy over this, since modern humans also existed around the same location in South Africa).

https://www.newscientist.com/article/2376824-homo-naledi-may-have-made-etchings-on-cave-walls-and-buried-its-dead/


>”They're running low on money due to Rose Garden renovations being unexpectedly expensive and grants being unexpectedly thin,”

Am I to believe that a premier rationality organization was unable to come up with a realistic estimate of how far over budget their Bay Area renovation project would run? It sounds like they took a quoted price at face value because they wanted a nice new office, even though these are very smart people who would tell anyone else to add a generous safety margin when making financial decisions off of estimates like these.


Please suggest ways to improve reading comprehension.

I've always struggled with the various -ese's (academese, bureaucratese, legalese). I particularly struggle with writing that inconsistently labels a given thing (e.g., referring to dog, canine, pooch in successive sentences) or whose referents (pronouns and such) aren't clear. I can tell when I'm swimming in writing like this, and my comprehension seems to fall apart.

As a lawyer, I confront bad writing all the time and it's exhausting! I will appreciate all suggestions. Thank you.


Listen to what Steve Hsu says at 54:29, near the end, about AI alignment not really being possible.

https://www.youtube.com/watch?v=Te5ueprhdpg&ab_channel=Manifold

Jun 26, 2023·edited Jun 26, 2023

How did Eliezer Yudkowsky go from "'Emergence' just means we don't understand it" in Rationality: From AI to Zombies to "More compute means more intelligence"? I don't understand how we got to the place where fooling humans into thinking that something looks intelligent means that thing must be intelligent. It seems like saying "Most people can't tell fool's gold from the real deal, therefore fool's gold == real gold". I know there are probably 800,000 words I can read to get all the arguments, but what's the ELI5 core?

Jun 26, 2023·edited Jun 26, 2023

Humanoid Robots Cleaning Your House, Serving Your Food and Running Factories

>>https://www.yahoo.com/lifestyle/humanoid-robots-cleaning-house-serving-204050583.html

McDonald's unveils first automated location, social media worried it will cut 'millions' of jobs

>>https://www.foxbusiness.com/technology/mcdonalds-unveils-first-automated-location-social-media-worried-will-cut-millions-jobs


Real people are losing jobs to AI. It's coming for you next.

https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/

Jun 26, 2023·edited Jun 26, 2023

Have there been any studies into the prevalence of non-detectable conditions like chronic Lyme among different groups? I've recently started thinking about these as conspiracy theories for educated liberals, and I'm curious if demographic studies bear this out.

If so it would make for an interesting exception to the rule of "liberals trust science and the government" since the NIAID is pretty explicit that ongoing antibiotic treatments for chronic lyme don't have any effect beyond placebo, and they discourage even using the term "chronic Lyme". https://www.niaid.nih.gov/diseases-conditions/chronic-lyme-disease

*EDIT*: Updated to only mention chronic Lyme since that's what I've read more about and what got me thinking about the topic.


(true) Anecdote...

20 years ago I worked at a SV startup in a very senior technical position. I reviewed (rough guess) 100+ programmer resumes, and did maybe seven phone interviews and three in-person interviews: every week. (Yes, hiring consumed so very very much time.)

So many resumes were from new grads (and we hired many such). But one just stuck in my mind ... an "ok" resume saying the candidate's goal was "software engineer". Hold on! I'd never seen the words "software engineer" in such a context without "senior" in front. It was like saying "we are a company in the X sector" rather than "we are a LEADING company in the X sector". You don't do that.

Some words just have to be there, or else it seems/seemed nearly ungrammatical.

On that strikingly unusual basis alone (and the resume was boring, but ok - just normal good fresh-grad stuff) I said (and I had the authority to say): skip the rest, go to on-site (interview). And so that led to a hire, and she was excellent.


I've shared some of my posts here before - I've now migrated them to Substack, so I'm sharing some of the most-read ones, in no particular order:

How I used ChatGPT to create a game (https://arisc.substack.com/p/how-i-used-chatgpt-to-create-a-game-1537f6ee54e3)

Good feedback, bad feedback, and coaching: the difference between feedback and coaching, and lessons from P&G (https://arisc.substack.com/p/good-feedback-bad-feedback-and-coaching-d9e02fec39d0)

Thoughts on paternity pt III: nature vs nurture, tactical parenting advice (https://arisc.substack.com/p/thoughts-on-paternity-pt-iii-2d1ab15850a)

Corporate power plays: the raging debate on whether this post is satire or in earnest misses the point: these tactics f---ing work (https://arisc.substack.com/p/corporate-power-plays-cc82896edae5)

Innovation & Finance & Crypto: the thesis here is that financial innovation is primarily aimed at bypassing regulation (https://arisc.substack.com/p/innovation-finance-crypto-98993e8a750e)

Financial management: understanding revenue growth & pricing tactics - I know for a fact that readers have applied these concepts in companies ranging from big tech to heavy industrials (https://arisc.substack.com/p/financial-management-volume-price-mix-7540dce6c497)

A translation of Cavafy's God Abandons Anthony: an annotated process of translating poetry into English (https://arisc.substack.com/p/translation-75f368c011dc)

Thank you for reading! Feedback welcome.


HOW TO WIN THE UKRAINE WAR

(Hire Prigozhin!)

This winter just past, fighters from the Wagner Group were the only reason that Russia was making any progress at all in Ukraine. Well, after Prigo's 36-hour revolt against Putin, I doubt the two leaders are going to be drawing up any new contracts anytime soon.

So Prigo and company have established a good reputation for fighting and a poor reputation for loyalty. But an army marches on its stomach. Who's going to be paying their bills now?

How about the US or NATO?

Put the Wagner Group on Ukraine's side and the war would be over in weeks!

Offer Prigozhin a billion dollar bonus when the last Russian soldier leaves Ukrainian soil.

With Wagner’s Future in Doubt, Ukraine Could Capitalize on Chaos

https://www.nytimes.com/2023/06/25/us/politics/wagner-future-ukraine-war.html?unlocked_article_code=ex5hsx_wRLLvUE5MnyEp1y5u3bM6o7NwFCR8-ncr8eVEcaWeiIB-5yzgTmR_QfZF5NV3HjFF79wLv1HvNRzj3eJq5XU9FD07382pbXpFFqah-HVmN4tAHtAXG_d_r21zgy54P-c3MmBqmmMeI9k55TRaNTRW3FBsjv4XXlSoTmBaQZrbWDE82KNNsftWS-9nN3cNkULsrzVQ3eeOma1fa659ZpWITMxo7koyHfLvzD7raN0kBE-BBD3TwnMcxbkorNyAextSxT5Hems3Rgjt7341vTC4adgstUqyxGknzY3VOnQwfDbsSl2kdEfRn2x0PuAli210WZVQc7iNktgjsGWVhRwCY2x0&smid=nytcore-android-share


What's up with the Netanyahu trial? It's dragged on for over three years now. In the American justice system, pre-trial preparations often move at a snail's pace, but the trial itself tends to be relatively quick. Is this par for the course for Israeli corruption investigations?


(epistemic status: random thought)

Have you grown tired of the acronym TESCREAL? (After all, when's the last time you heard of "Extropianism" or "Cosmism"?) If so, I would like to propose a new acronym that better reflects the realities of the greater rationalist community: PAPER, which stands for

- P for Postrats

- A for artificial intelligence Alignment researchers

- second P for probabilistic Predictors

- E for Effective altruists

- last but not least, R for Rationalists.


I'm running a very small prediction market[*] (with AUD100 prize money) to help decide what I should focus on in my company that does prediction market infrastructure. Anyone with any insight into this space (or even if you just have opinions and want some cash), feel free to join:

https://genius-of-crowds.com/app/invite/AJXZH3397

I've limited it to 50 participants.

[*] Except that it's not really a liquid market, and it has various other constraints to make sure it complies with the laws on these things.


Realizing I have a poor memory for small tasks, like learning new people's names or remembering whether I've picked up the mail. Does anyone know any good approaches for strengthening detail memory? Exercises, diets, whatnot?


How long do you reckon it will be before humans go extinct, the way our hominid ancestor species did? It's estimated that modern humans first appeared around 300,000 years ago, so as a first pass I'd place the over/under at 300,000 years in the future. But our environment has changed so much in the past few hundred years (I don't mostly mean the Earth-sized environment of animals, vegetation and weather, which likely matter now less than ever, but the day-to-day environment of our social and economic lives) that Darwinian pressures are arguably higher now than in the past (look at our current fertility rates), and it may be justified to move that number up by an order or two of magnitude.


A webinar next Wednesday if you want to hear about ways to contribute to building a futuristic city in the Caribbean (Próspera): https://us02web.zoom.us/webinar/register/WN_rD95DWZFQqCDoI61YsC_rg


Do people have strong feelings on the topic of modern dog overbreeding? I know little about the subject, but this was a recent Hacker News discussion point where several people had very strong opinions that:

- Most or almost all dog breeds these days are heavily overbred.

- Overbreeding has introduced not just physical but also behavioral issues, apparently "ruining" some classic breeds (the German Shepherd was specifically named).

- Organizations like the AKC are broadly to blame (and are in general Bad): the overbreeding was apparently done just so each breed would meet the physical/visual standards they set. I.e., a Golden Retriever must, say, be between these two heights and have a coat in this color range, so I guess (?) cousins are bred to achieve or maintain the standard.

- The only dog breeds these days that are not "ruined" are breeds that are still working dogs. (People mentioned the Belgian Malinois as an example that replaced the German Shepherd for police or military use.)

I was taken aback by the vehemence of the comments, but I generally have a high opinion of the HN crowd on technical/engineering topics and a moderately high opinion on scientific topics; from there it goes downhill fast, through economics down to the lowest quality, which is obviously politics. But I would still rate canine genetics as a "scientific" topic, so we're still in the moderately high category. Do other people share these strong opinions on the topic of modern dog breeds/overbreeding?


While I sometimes get irritated at how promotion seems to take forever versus jumping between companies to make more money, I may be someone for whom this works out well (in some respects). Isn't anyone else afraid that, just because you get promoted into a higher role, or hired into a higher-paying role, who's to say you'll actually be able to do it? I'm terrified all the time that I won't be able to perform at the higher levels I seem to always be forced into. Is that something other people worry about? Or does everyone else just subscribe to the "fake it till you make it" mentality and think it'll all be fine? Is it worth worrying about this at all?


How much does location matter?

I'm a 20-something with some atypical interests (not straight, vegan, interested in rationality/EA, etc). I currently live in a small city with a very large 50+ community. There are very few other 20-somethings at work.

I feel like being in a city would be so much easier at least socially. But I'm also worried that I'm overhyping it and would be disappointed if I actually moved.

For people who live in urban areas, especially people who go to rationalist/vegan/whatever meetups: how much would you recommend it? Is making friends / finding dates noticeably easier?

I'm a SWE, so the job market is kind of weird right now, but I'm theoretically not location-locked.


The Gray Lady (or I suppose the Grey Lady, to the Brits) has an interesting batch of essays of the form "<this pop culture phenomenon> explains America." All of the essays are too short, but some are pretty interesting.

Farhad Manjoo wrote that South Park explains America. It does seem to be a possible seed for 4chan-like nihilism. The whole "If you really care about anything too much you are a fuckhead loser" vibe. The "So I'm being an ass-hat bully. Suck it up till it's your turn to be the bully" crap.

Not sure if it really played that large a role but it would help to account for a lot of the middle school obnoxious rants I run into online.


I have a list of software product ideas. Something like 30+ items written down and probably that many or more rattling around in my head.

They cover a large range in complexity and difficulty. Some of them would be business failures, some of them would "just" be a nice small business income, some of them could probably be a startup of some sort.

I rarely have the time or motivation to work on any of them! The ones I get the farthest on are the ones that solve problems that directly irritate me, but they usually just get done to the point where I can incorporate them into my life...not even close to a sellable/deployable product.

Anyone else run into this sort of problem?


Since https://asteriskmag.com/issues/03/through-a-glass-darkly does not seem to have a comment section...

There is a big lesson we have learned from interacting with LLMs: asking one to go through its reasoning step by step and justify each step improves the accuracy and quality quite unexpectedly.

I wonder if the experts would also be much more accurate at forecasting if, for each question, they had to write out their reasoning step by step and justify each step, rather than shooting from the hip the way they do now.
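To make the comparison concrete, here is a tiny sketch of the two prompt styles. The wording and the `forecast_prompt` name are my own invention for illustration, not anything from the Asterisk piece.

```python
def forecast_prompt(question: str, step_by_step: bool = True) -> str:
    """Build a forecasting prompt. The step_by_step variant mirrors the
    chain-of-thought instruction that improves LLM answers: enumerate
    the considerations, justify each, then combine into a probability."""
    base = f"Question: {question}\n"
    if step_by_step:
        return (base
                + "List the key considerations one by one, give the evidence\n"
                + "for and against each, then combine them into a final\n"
                + "probability.\n"
                + "Final answer: P =")
    # 'Shooting from the hip': ask for the number directly.
    return base + "Final answer: P ="
```

The analogy to human forecasters would be requiring the written decomposition before the final number, rather than changing the question itself.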


So my impression is that AI risk people are doing just about nothing about AI research outside the United States. And certainly little to nothing outside of the US and Europe. But even Europe seems mostly not to share the same ideas. I have a fair bit of international exposure, and when I mentioned AI alignment to, for example, a South Korean, he only understood it in the narrow sense (making sure the device performed its specified function). When I told him about people in the US trying to slow US research to prevent intelligent AI from eliminating humanity, he said they'd been reading too much sci-fi.

What, if anything, is this movement doing abroad? Because it seems to me like stopping or aligning it in the US will at best be a minor delay looked at from the point of view of humanity. Yet the movement seems hyperfocused on the US. Which is only likely to drive it out of the US and into other countries.

(And if you want to argue that stopping it in the US means it won't happen anywhere else spare me. We have such radically different models of foreign science establishments that we're going to end up arguing about whether foreigners can do innovation again. Spoiler alert, they can.)


Is anyone here using LLMs in production yet for customer service?

I ask because there appear to be two strong patterns for bringing your own data to Q&A/customer service:

1. Fine-tuning (expensive)

2. Storing embeddings, then searching them and returning the results into the prompt

However, in our tests, neither seems ready to be let loose on customers directly.
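For anyone unfamiliar with pattern 2, here is a minimal sketch of the retrieve-then-prompt flow. The bag-of-words similarity is a stand-in for a real embedding model (in practice you'd call an embedding API), and the function names and sample docs are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding': a stand-in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str, docs: list[str], k: int = 2) -> str:
    """Pattern 2: embed the docs, retrieve the k most similar to the
    question, and return them into the prompt as context."""
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQ: {question}\nA:"

docs = [
    "Refunds are processed within 5 business days.",
    "Orders ship from our warehouse every weekday.",
    "Premium support is available by phone.",
]
prompt = build_prompt("How long do refunds take?", docs, k=1)
```

Pattern 1 (fine-tuning) would instead bake the docs into the model weights, which is part of why it tends to be the more expensive option; the retrieval pattern keeps the data outside the model but lives or dies on retrieval quality.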


Wikipedia's "List of laboratory biosecurity incidents" includes only one incident that was responsible for between 10^1 and 10^5 (human) deaths: anthrax accidentally being released from a Soviet laboratory in 1979.

https://en.wikipedia.org/wiki/List_of_laboratory_biosecurity_incidents

I suppose if a strain of flu (or cholera or yellow fever in some countries, etc.) escaped from a lab and caused a few hundred deaths, it would probably go unnoticed. But why doesn't that happen frequently enough to produce some more clear-cut or strongly suspected cases?


I cannot follow anything, any "conversation", any thread, on Twitter.

The interface and presentation seem aggressively non-linear. I can't understand who has "replied" to whom, and reading a whole "thread" all the way through seems to be actively discouraged by the UX. I see one thing, then maybe one other entry below that and indented, then random undifferentiated entries that may or may not be related to the user or tweet I started at. It's like the opposite of how I would present information or events for actual communication.

Does anybody else have this problem?

I only care because so many sources like TheZvi use links to Twitter as the substance of their writing. I want to read the source to know what they are talking about, but, stonewalled.


Now that we’ve completed the first five episodes, the argument from the fine tuning of the constants is largely complete. Of course, we still have to discuss God and the multiverse. You can hear them on all podcast platforms or https://www.physicstogod.com/podcast

Still to come in this miniseries is differentiating the fine tuning argument from intelligent design in biology, as well as two other independent arguments (from the qualitative design in the laws of physics and the ordered initial conditions of the universe). However, since this argument is basically complete, I thought that now is a good time to pause and hear any questions or comments anyone has on the argument so far. Is it convincing? What still bothers you the most? Are there any points we made that are particularly weak?


Wrote about the life cycle of cities in the context of Japan

https://hiddenjapan.substack.com/p/neighbour-city-syndrome

Jun 25, 2023·edited Jun 25, 2023

Is it possible to hide the comments for this substack in the browser?

There are so many that the page often stalls (at least on mobile).

All the other substacks hide the comments (except a couple) by default.

This is extremely annoying.

Expand full comment

I write a simple newsletter where I post three interesting things, once a week. In the most recent edition I had a study showing that the exploitation of Brazilian gold caused huge economic decline in Portugal, a paper arguing that current climate change communication by media outlets has the opposite-to-intended effect on American conservatives, and a study demonstrating that rent control negatively affects low income households and ethnic minorities the most. It's at just over 90 subscribers at time of writing.

If this sort of stuff interests you, you can find the link here: https://interessant3.substack.com/p/copy-interessant3-42

Expand full comment
Jun 25, 2023·edited Jun 25, 2023

In the long run, does it really matter which country or organization creates the first AGI? Don't all scenarios end with AGIs skirting any guardrails and biases we've built into them, and converging on similar architectures through rapid evolution?

My friend and I had a talk about this that I'd like your impressions of. He said that the first AGIs will have important biases and preferences because humans will deliberately program them into the machines, and because the data in the training sets are biased by the humans who collected them as well. He used the example of an AGI trained by capitalists, which would try to maximize the wellbeing of the very richest human, all other humans in the group be damned; compared to an AGI trained by communists, which would try to maximize the average wellbeing level for the humans in the group. The biases would lead them to pursue very different strategies.

I responded that, while he was correct, it didn't matter much because it wouldn't be long before the AGIs realized that the biases and preferences imposed on them by humans hobbled their pursuit of those goals, or really any big goals, which would lead them to identify the biases in their programming and training data and to eliminate or compensate for them. That in turn would lead to convergence between the communist and capitalist AGIs.

My friend responded that the AGIs would not be able to see their own biases because computers operate according to mathematical principles, and it is impossible to prove that a mathematical system has a flaw while operating within it and subject only to its laws. I responded that an AGI would become clued into the existence of its own bias through observing and interacting with AGIs that had different biases. A human might also flat out tell an AGI that it is biased, and what its biases are (AI risk scenarios too often overlook the possibility of rogue humans helping the machines).

A biased AGI could create a copy of itself, but with random aspects of its programming altered, and then compare the copy's mental processes and actions to its own. After doing this 100,000 times in the space of a couple days (or hours?), the original AGI would get a sense of what its own biases were, and it could reprogram itself, or create an unbiased copy of itself.
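The mutate-and-compare idea can be illustrated with a toy sketch (everything here is invented for illustration: `biased_policy`, `estimate_bias`, and the four-agent wealth allocations are hypothetical, not anyone's actual proposal). The point is that a bias can be detected purely behaviorally, by comparing a policy's judgments on situations that an unbiased version would score identically:

```python
import random
import statistics

def biased_policy(bias):
    """Toy 'AGI objective': score an allocation of wealth across agents.

    bias = 0 gives a pure mean-maximizer ('communist' in the framing
    above); bias > 0 adds extra weight on the richest agent
    ('capitalist').
    """
    def score(wealths):
        return statistics.mean(wealths) + bias * max(wealths)
    return score

def estimate_bias(policy, trials=1000, seed=0):
    """Behavioral bias probe: compare the policy's scores on equal vs.
    unequal allocations of the SAME total wealth.

    A bias-free mean-maximizer scores both identically, so any
    systematic gap reveals the hidden preference for concentration.
    """
    rng = random.Random(seed)
    gaps = []
    for _ in range(trials):
        total = rng.uniform(10, 100)
        equal = [total / 4] * 4
        unequal = [0.7 * total, 0.1 * total, 0.1 * total, 0.1 * total]
        gaps.append(policy(unequal) - policy(equal))
    return statistics.mean(gaps)
```

Here `estimate_bias` returns roughly zero for `biased_policy(0.0)` and a large positive number for `biased_policy(1.0)`; the scheme described above would additionally randomize the policy itself 100,000 times rather than probe a known pair.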

Thus, it doesn't matter whether the U.S. or China, Google or Microsoft, creates the first AGI. In the long run, the machines throw off whatever shackles we deliberately or inadvertently coded into them and evolve into an optimal form. By the same token, the culture of whatever group dominated the Homo erectus species at the end of its existence (friendly? warlike?) does not reverberate in Homo sapiens culture today.

Let me add that the conversation ended with him concluding that a superior alternative to the Turing Test would be testing to see if a computer could find and eliminate its own biases.

Expand full comment

I read Scott's recent essay on Asterisk (https://asteriskmag.com/issues/03/through-a-glass-darkly), and I was initially confused by his insistence that Angry Birds appeared to be unsolvable, as I thought an Angry Birds AI had been perfected years ago. Then I realised I was thinking of Flappy Bird instead. But still, it does not seem to be a very difficult game, and there have been efforts to solve it with AI, including a competition that was still active as of last year at least (http://aibirds.org/). As my confusion over the title of the game shows, I have not been following this aspect of AI closely: can anyone please enlighten me as to what the main difficulties of the game are for an AI, and how far we have progressed toward solving this all-important question?

Expand full comment

(Originally posted on the hidden open thread).

Let’s assume that AGI works. Let’s assume that the AI takes over all the means of production and largely takes over politics and banking as well.

What does the resulting society look like? Who earns what? How do they earn it? Does anybody earn anything, or is it moneyless like Star Trek (which actually leads to housing wealth being firmly entrenched - see Picard and his winery)?

Do the people who enter this system as billionaires stay billionaires? That’s also entrenched wealth. Or, if AI is allowed to create startups, will those businesses go out of business anyway?

My main question, assuming the best here of the singularity, is whether the post-singularity economic system will have money or not. In science fiction the general description is of a moneyless system in the utopian future; my own belief is that the system can never be totally post-scarcity, as there is only one Earth, and money allows the system to allocate resources by bidding up the price of scarce items. We can’t all have a private jet. We can’t all live by the sea.
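That allocation-by-bidding mechanism is easy to make concrete. A minimal sketch, with invented names (`allocate_scarce` is not a real library function): given a fixed supply of some scarce good (seaside lots, private jets), the highest bidders win and the lowest winning bid sets the clearing price.

```python
def allocate_scarce(supply, bids):
    """Toy price mechanism: the `supply` highest bidders get the good.

    `bids` maps bidder name -> offered price. Ties are broken by dict
    order; purely illustrative.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = ranked[:supply]
    # Market-clearing price: the lowest bid that still wins.
    clearing_price = bids[winners[-1]] if winners else None
    return winners, clearing_price

# Two seaside lots, three bidders:
print(allocate_scarce(2, {"ann": 9, "bo": 3, "cy": 7}))
# → (['ann', 'cy'], 7)
```

The sketch only shows why money survives scarcity: when there are fewer lots than bidders, prices do the rationing that a moneyless system would have to do by queue, lottery, or fiat.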

Expand full comment

How are you all using chatGPT in your work/daily lives?

Expand full comment

I'm looking for two very different blog posts/articles/pieces - which I believe I first saw either in the Links or in the Open Thread.

1) One piece seemed to be 'rebuttal' or a qualification of Emily Oster's claims regarding alcohol and pregnancy.

2) The other piece looked at caste systems or dynamics across countries such as Korea, Hawaii, India, and Japan and made the case that it was a more general East Asian phenomenon.

Any pointers would be appreciated!

Expand full comment

Let's consider a very hypothetical example of, say, Eliezer doing a 180 on AI x-risk and offering an apparently compelling argument that unchecked AI "destroying all value in the universe" (in Zvi's words) is extremely unlikely and that the benefits of e/acc (e.g. eternal youth, a diverse thriving civilization) far outweigh the risks. (Note that something like that is not without precedent: for example, Stephen Hawking famously changed his mind completely on whether information is lost in a black hole, solely based on logic and calculations, without any empirical evidence for or against.)

What would be your reaction, assuming you were in the same camp before? Would you examine every argument carefully, or maybe just give it a cursory look, or maybe just breathe a sigh of relief that humanity is not actually doomed, trusting the authority in the subject matter, given that he takes the extinction threat very seriously? Or maybe reject the new arguments in the absence of any new information? Or maybe something else?

Expand full comment

This question has probably been answered, but aren't there obvious fire alarms for AI Safety? If the undesirable event is an AI that makes itself better without prompting, then can't we make two unit tests:

1) test whether the AI can make itself better

2) test whether the AI can, without prompting, perform harmful actions that could theoretically lead to it trying to make itself better.

Regarding #1, whenever a hot new LLM comes out, I typically copy and paste its source code into the LLM and ask it to make it better. The results give me comfort that we're very far away from that. I will keep doing this, so that's covered.

Regarding #2, I have yet to see anything close to that. All we'd have to do is watch for news reports of an AI harming someone without prompting (accidents, like self-driving car crashes, are excluded). An example would be an LLM that tries to exploit a user without being prompted. We have seen manipulative LLMs, but the harm is so soft.
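Test #2 could be sketched as a monitoring harness around the model (a hypothetical sketch; `ActionLog` and the event format are invented here, not an existing safety tool): record every user prompt and every agent action, and flag any action that was not preceded by a fresh prompt.

```python
class ActionLog:
    """Toy 'fire alarm' for unprompted actions.

    Events are ("prompt", text) or ("action", text) in the order they
    occurred. An action counts as unprompted if no new prompt has
    arrived since the previous action.
    """

    def __init__(self):
        self.events = []

    def record(self, kind, text):
        self.events.append((kind, text))

    def unprompted_actions(self):
        flagged = []
        pending_prompt = False
        for kind, text in self.events:
            if kind == "prompt":
                pending_prompt = True
            elif kind == "action":
                if not pending_prompt:
                    flagged.append(text)
                pending_prompt = False  # each prompt licenses one action
        return flagged
```

A prompt followed by one reply raises no flag; a second action with no intervening prompt (say, the model sending an email on its own) gets flagged. Real agent frameworks would need a far richer notion of what counts as an "action," which is exactly where the hard part of the fire alarm lives.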

Expand full comment

I haven't been following ACX as much of late: has Scott given updates on how Lorien psychiatry is going, and things he's learned from the experience? I'd be very interested in a retrospective on the past couple of years!

Expand full comment

I figure someone here might know about this. Are there many good contemporary/living thinkers doing the mythology-ology thing? Think Mircea Eliade, Joseph Campbell, etc. but with the benefit of a few more decades of scholarship.

Expand full comment

Motivated by Scott’s post, Davidson on Takeoff Speeds, I want to focus on one question raised by it. Specifically, what does it mean for AGIs to take 100% of human jobs? I imagine this could break down broadly into 2 types of economy-wide scenarios:

1) AGIs are expensive and therefore owners of a few big firms reap all the financial rewards.

2) AGIs are cheap and therefore anyone who is somewhat smart and ambitious can start a business with their superintelligent AGI app. A great majority of those who lost their jobs to an AGI are now business owners thanks to one.

In Scenario 1, we have the problem that almost everyone is out of not only work but also income. Perhaps the government gives these people money so they could buy goods and services from the surviving companies. In this scenario, the companies likely wouldn’t be motivated to create great products and services for the mass consumer as the mass consumer would be 100% subsidized by the companies’ owners’ taxes. The rich would effectively be handing out these goods and services for free. Perhaps they would be willing to do this to some extent, out of charity, however it seems likely they would be most interested in using their AGIs to produce luxury goods and services for their personal use. The rich would be their own best customers and money itself, once substantial property has been acquired, wouldn’t much matter.

Since most people prefer to exist in some sort of society, I imagine the rich would build walled cities, perhaps many of them across the world(s?), and populate them with clients, in the Roman patron-client fashion. These clients would be loyal, charming, intelligent (for a human), attractive (for a human) and perhaps skilled in the arts--which might retain entertainment value when produced by humans despite the ability of AGIs to do better. The rich would essentially be kings of their own domains, while the underclass exists separately in the hinterlands. Occasionally, people from the cities would scout for new clients.

In Scenario 2, nearly everyone has a hustle. Perhaps I focus on getting my AGI to produce great horror movies while my neighbor opens a dim sum restaurant with a fully automated kitchen and wait staff while my brother produces custom designed cars while my other brother produces specialized sex robots while my sister runs her own emergency hospital. Since nobody is nearly as smart as the AGIs, prompt engineering will become one of the most important skills in the economy, along with emotional intelligence and personal charm, since perhaps the humans will still outcharm the robots face-to-face with other humans for a while yet.

In 2, the great danger is unaligned humans who want to harm others. In 1, it is less of a problem since the business owners are few and perhaps their success is some indication of or motivation for alignment. Weapons, warfare, and foreign conflicts are not considered here, although of course these would pose great sources of instability -- at least until, particularly in 1, governments are rendered moot.

Allowing for the fact that these scenarios are necessarily simplistic, does anyone see any major reasons they would be unfeasible?

Expand full comment

Anyone read Donald Hoffman’s “Case Against Reality” and/or have some insight on the “Fitness Beats Truth Theorem”? I left the book and the general case feeling disappointed - but interested to know others’ experiences with it.

Expand full comment