910 Comments
Apr 15, 2023·edited Apr 15, 2023

Dear God Almighty, I don't know whether to be infuriated on behalf of my nation (due to the reactions by some online), apologise on behalf of my nation (because apparently we've rejuvenated Joe), or what. I don't much like Biden's politics or his administration, but I have to admit, the man's visit here has provided me with some of the best laughs I've had all year.

Oh, and he's definitely running for a second term, he told us that before he left. You're welcome, no need to thank us:

https://www.independent.ie/irish-news/politics/joe-biden-says-ireland-trip-reinforced-optimism-for-a-second-term-as-us-president-42434115.html

"US President Joe Biden has credited his trip to Ireland with reinforcing his rationale for an expected run for a second term in office.

Mr Biden has already indicated his plan to run for the presidency again in 2024 but has not yet officially announced it.

At the close of a historic four-day trip to the island of Ireland, Mr Biden said the announcement would be made "relatively soon".

"I told you my plan is to run again," he told White House reporters before flying home to the United States.

He said the Irish trip had not affected the timing of an announcement. But he added: "We'll announce it relatively soon. But the trip here just reinforced my sense of optimism about what can be done."

That's not the reason for my hilarity, though (but can you just picture it? Biden versus Trump II: Electric Boogaloo? The invention of AI has come just in time to handle the amount of meme generation needed to cover this one).

No, it has to do with the cunningly coded slogan "Mayo for Sam" and what this allegedly reveals (what it reveals is that there are either some massive trolls or what I can only describe with le mot juste as 'batshit insane' people online; of course, there is always ¿Por qué no los dos?).

The message:

https://www.youtube.com/watch?v=sHbatf9Fx_U

The bewilderment:

https://www.balls.ie/the-rewind/joe-biden-mayo-for-sam-550311

The decoding:

https://old.reddit.com/r/ireland/comments/12ncom5/the_truth_about_mayo_for_sam/

What with pissing off the Brits and now this, it has been a week of glorious achievement, and here was me thinking it was just going to be a mix of cringey American politicking and Irish boot-licking 😁

Well, and some genuinely moving personal connections:

https://www.rte.ie/news/2023/0414/1377055-biden-knock-priest-last-rites/

Reposting a question I posted as u/Sea-Sun504 on r/slatestarcodex's March "Monthly Discussion Thread", which was the post that made me realize I was shadow-banned from there. I hope that was made in error, although I am most disappointed by the decision.

"Has China succeeded in fulfilling their ballpoint pen tip needs domestically? This 2017 [WaPo article](https://www.washingtonpost.com/news/worldviews/wp/2017/01/18/finally-china-manufactures-a-ballpoint-pen-all-by-itself/) says:

>Taiyuan Iron and Steel Group (TISCO) announced that it would begin mass-producing ballpoint pen tips and replace imports within two years

It's now been six years; how did they fare? I couldn't find anything on English-language Google. I'm curious whether it just wasn't economically viable, and the success announcement was made solely for patriotic reasons, the same motive that created this goal in the first place."

What was that one Slate Star Codex post where Scott talked about various patients who had trouble getting treatment, even a rich one?

I have so many questions about psychiatry. I worry that I won't ever get answers to them because: (1) the relevant experts are ultra-busy; (2) the relevant experts are worried (I think?) about talking about psychiatry because it might get them in trouble legally if someone acts based on their comments; and (3) the media has a noticeable tendency toward harsh anti-psychiatry criticism, so there's a tendency to not talk to anyone lest comments get twisted or used against the expert in question somehow.

I wonder if one day I could even interview Scott Alexander about some topics in psychiatry. I'm interested in many things in the field. Here's a paper that I found incredibly interesting: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3488343/.

Missouri just effectively banned gender affirming care for *adults*

So Joe Biden is visiting Ireland right now, and while I'm generally jaundiced about the kind of gushing welcomes we give to dignitaries on these occasions, I do understand why it's important that we suck up to the Yanks - without you guys, our economy would be in shambles:

https://www.irishtimes.com/business/economy/2023/03/28/ireland-set-to-avoid-implementing-15-headline-corporate-tax-rate/

And of course pretty much every American president, including Obama, claims Irish ancestry. I don't know how important the Irish proportion of the Democratic Party vote remains, but there still seem to be politicians in office in certain areas with good Irish names.

Plus we rely on America to exert influence on the Brits when it comes to things like the Good Friday Agreement and Brexit and so on.

But President Biden has been good for a laugh, at least, be it 'insulting' Unionists or mixing up just who it was his cousin beat the hell out of (*almost* the best part is watching Micheál Martin standing there with a rictus grin on his face):

https://www.rte.ie/news/biden-in-ireland/2023/0413/1376706-joe-biden-gaffe/

https://www.youtube.com/watch?v=ZQl6clq4FPw

The Black and Tans were these guys:

https://www.youtube.com/watch?v=RPTjmFbp3rk

The All Blacks are these guys:

https://www.youtube.com/watch?v=t4eHd5ym9uo

Not 33%?

https://youtu.be/8yQM6TdkP9o

I found this interesting and want to signal-boost it. It's an editorial that explodes the white-guilt narrative that surrounds the Atlantic slave trade:

"Capitalism not slavery made Britain rich. It’s time we stopped apologising for our past"

https://archive.ph/MAb3U#selection-2833.0-2841.212

One nice excerpt:

"In the 1840s, King Ghezo of Dahomey, played by John Boyega in the 2022 film, The Woman King, fiercely resisted such pressure.

'The slave trade is the ruling principle of my people. It is the source and the glory of their wealth,' he complained. 'The mother lulls the child to sleep with notes of triumph over an enemy reduced to slavery.' "

I'm sick of the narrative that slavery was unilaterally imposed on the world by evil white men. It was a ubiquitous global practice that no single ethnic, religious, or national group has particular responsibility for.

I know this isn’t everyone’s cup of tea here. But I think there are at least a couple “Succession” fans in ACXdom.

Logan Roy obituary in the LA Times

https://www.latimes.com/entertainment-arts/tv/story/2023-04-09/logan-roy-succession-dead-obituary

How have LLMs been marketed? I see people saying there was a claim that LLMs are a source of accurate information, but I thought it was more like "Here is the cool thing. See what you can do with it."

User was banned for this comment.

I discovered Yudkowsky's phrase "glorious transhumanist future" during searches in this open thread and plan to use that phrase many times in the future.

I think it's clear that doomers are insane in a Stalinist/Guyana punch/Millennialism way. I think you should all leave your cult and live in a less online world over the next few years. Read classic novels like War and Peace, Don Quixote and On the Road.

Apr 11, 2023·edited Apr 11, 2023

I'm listening to the most recent Ezra Klein podcast, about AI, and he's said several things I thought were quite smart. I thought I'd put them up here to see what people think of them:

-There's another alignment problem besides the AI/human race one that's important: the corporation/member-of-the-public alignment problem. The needs of corporations and the needs of most individuals are not aligned as regards AI: corporations need to make money, and right now the best way to do that is to develop AI as fast as possible, and to develop the forms of it most likely to make money, which is currently chat-type AI. The public, on the other hand, needs various kinds of safety and stability -- and also for AI to do things of substantial benefit to people, for example figuring out all the protein foldings, or doing other things in the realm of science that will improve medical care, air quality, methods of construction, etc.

-Asking for a 6-month pause in development is kind of a stupid way to put on the brakes. What, exactly, do you think the AI-building companies are going to be doing with those 6 months? Do you think they're all going to be reading Yudkowsky and holding 6-month meditation retreats? Seems likely it will be some form of continued work on AI development that's less obvious than what they've been doing so far.

Also, why set a limit that has to do with time? It would be much better to set one that has to do with meeting certain criteria about figuring out more of what goes on inside these black boxes. It seems to me that one thing on the list of things to figure out is wtf is going on -- EXACTLY wtf is going on -- when these things hallucinate? And what's going on when clever users are able to get the AI to break some of the rules it was given? Regarding both of these phenomena, it seems like madness to me to just plug the holes with chewing gum -- just find some ways to block particular kinds of hallucinations or deviations from the developers' guidelines. We need general info about why these things happen, and some kind of restructuring that blocks all instances of them.

It would take someone with knowledge and skills I do not have to come up with a way to test how much insight we have into how these LLMs arrive at their responses, but surely there could be some criteria set -- maybe something on the order of a Turing test, except this one would be a Transparency test? Or maybe use the methods of cognitive psychology to assess what's going on under the hood. Cognitive psychologists have been able to figure out a fair amount about how the human brain works by purely cognitive tests. Things that come to mind: Priming: if I have you read some words that have to do with destruction, like wreck, ruin, etc., then give you a test where I say a word and you say the next word that comes to mind, it makes it likelier you will respond to "glass" with "break" rather than "window" or "water." Mental rotation: more complex rotations take longer, approximately as much longer as doing the physical rotation would. Apologies for these examples being kind of sparse, but I'm dredging my memory for things read quite a long time ago. But you get the idea; you can do tests that tell you something about how things are set up under the hood.
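The cognitive-psychology angle at the end seems like the most tractable bit, so here is a minimal sketch of what a priming-style probe against an LLM might look like. Everything here is made up for illustration: `ask_llm` is a stand-in for whatever chat API you use, and the word lists and trial counts are arbitrary.

```python
# Sketch of a priming probe run against an LLM instead of a person.
# ask_llm is a placeholder, not a real library call.
import random

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model call here")

DESTRUCTION_PRIMES = ["wreck", "ruin", "shatter", "demolish"]
NEUTRAL_PRIMES = ["table", "cloud", "seven", "lamp"]

def first_association(primes: list[str], cue: str) -> str:
    """Show the primes, then ask for the first word that comes to mind for the cue."""
    words = ", ".join(random.sample(primes, len(primes)))
    prompt = (f"Read these words: {words}. "
              f"Now reply with only the first word that comes to mind for '{cue}'.")
    return ask_llm(prompt).strip().lower()

def priming_effect(cue: str = "glass", target: str = "break", trials: int = 50) -> float:
    """How much more often does the destruction prime elicit the target word?"""
    primed = sum(first_association(DESTRUCTION_PRIMES, cue) == target
                 for _ in range(trials))
    neutral = sum(first_association(NEUTRAL_PRIMES, cue) == target
                  for _ in range(trials))
    return (primed - neutral) / trials
```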

The NYTimes has a funny article about women who move to foreign countries in part because there isn't enough sexual harassment on American streets:

"Kacey Margo has been going on plenty of fun dates ever since she moved to Paris in October 2019. Men frequently approach her with the dramatic antics seen in Disney movies.

“This one guy was like, ‘I ran through traffic just to look into your eyes once, and if you don’t want to go on a date with me, I can die happy knowing that I just met you,’” said Ms. Margo, a 28-year-old English teacher from Los Angeles.

After studying abroad in Paris in 2016, Ms. Margo fell in love with the city (and its men). She found a gig teaching English in Paris and moved there after she graduated from Sarah Lawrence College in May 2019.

Now, Ms. Margo is living a dream of many American women who are seeking relationships abroad, some of whom cite the toxic dating scene in the United States."

https://www.nytimes.com/2023/03/31/style/finding-love-romance-abroad.html?smid=tw-nytimes&smtyp=cur

I am offering a mini-grant (or grants) to any individual [ideally one I could pay through a 501(c)(3)] who would use the money for any one of the following:

1. Filling an otherwise unmet need of high-IQ people ages 5 to 20, where the grant to you is likely to help him/her/them live up to their potential to make a difference in their current sphere of influence or the larger society. "Make a difference" can be in a liberal, conservative, or apolitical direction.

2. Encouraging discussion of pro-merit issues, for example, the net negative of yet more redistribution of money and attention from people with greater potential to contribute to a better society to those less likely to. Like the previous and the next option, this must be used for an initiative that would otherwise go unfunded.

3. Taking a worthy step toward understanding the biological basis of reasoning or impulse control that would otherwise go unfunded.

Email me a brief proposal saying: 1. What you would do with the money. 2. What makes you a person likely to use the money well. 3. What amount of grant money would yield maximum benefit per dollar I'd give you. 4. Whether I could send the grant money through a 501(c)(3). Send your proposal to me at mnemko@comcast.net

Waiting with a baited* brain for the book review.

*yeah I know.

School started a girl on gender transitioning without telling her parents: https://nypost.com/2023/04/06/mom-of-maine-girl-who-got-chest-binder-at-school-files-lawsuit/

Athlete assaulted by "trans" activists while at SFSU to give a speech, and the activists were praised by the college - https://edition.cnn.com/2023/04/07/us/former-ncaa-swimmer-riley-gaines-assault-san-francisco-state-university/index.html

And predictably, the left have defended the pro-trans side of these incidents, as they always do. Even if you want to say that transpeople should be respected etc, opposing the trans movement as a whole seems like the only sane move here.

Oh and now Canada is literally making it illegal to say "offensive remarks" in the vicinity of drag shows: https://abcnews4.com/news/nation-world/canadian-law-would-ban-offensive-remarks-within-100-meters-of-drag-performances-canada-ontario-lgbt-free-speech-

Yudkowsky says in his recent interview with Dwarkesh Patel:

"But if you have these things (AIs) trying to solve alignment for you, they need to understand AI design and the way that and if they’re a large language model, they’re *very, very good at human psychology*. Because predicting the next thing you’ll do is their entire deal. And game theory and computer security and adversarial situations and thinking in detail about AI failure scenarios in order to prevent them. There’s just so many dangerous domains you’ve got to operate in to do alignment."

*s are mine. It's an interesting notion, but I don't buy for a moment that LLMs are good at human psychology. If they were good at human psychology, they would be able to tell a funny joke or to be intentionally funny otherwise. That's a key indicator of understanding human psychology, and so far LLMs fail horribly at the task.

Relatedly he says:

"Well, it’s not being trained on Napoleon’s thoughts in fact. It’s being trained on Napoleon’s words. It’s predicting Napoleon’s words. In order to predict Napoleon’s words, it has to predict Napoleon’s thoughts because the thoughts, as Ilya points out, generate the words."

Another interesting notion, but does anyone besides him believe the LLMs are predicting someone's thoughts and not merely the words?

There is something called ChaosGPT - https://www.youtube.com/watch?v=g7YJIpkk7KM&ab_channel=ChaosGPT - can someone please get this shut down!

JanPro on LW:

"Attempting to create (even weak) agent tasked with "destroying humanity" should be made very clear to be out of bounds of acceptable behavior. I feel that I want the author to be prosecuted."

I got tentatively (and informally) diagnosed with ADHD at the age of 32 last week. I was prescribed Concerta 36 mg - so far the meds have been extremely helpful. I’m about 80% confident in the diagnosis. I have an appointment tomorrow with someone else to Red Team that diagnosis and see if it’s something else. I have a number of questions:

1. What are the best ways to differentiate between ADHD and other disorders (anxiety, depression, something else)? I'm not fully convinced I have ADHD because 1) I have an excellent working memory, 2) I do not leave my seat or interject inappropriately, and 3) I do not engage in excessively risky behavior. Is that disqualifying, or do I just present in an atypical manner? No idea, and I don't have a ton of money to throw at the problem chasing a formal diagnosis from an adult ADHD specialist.

2. If you have ADHD or know someone who handles it well, what tips do you have?

3. Any opinions on ‘masking’ my ADHD? Like most of us here, I’ve got off the charts test scores on everything. Maybe being really smart in certain ways has helped me compensate for ADHD deficiencies? I’m also very vain, so I can usually get my shit together enough to keep up appearances in front of other people. However, I can literally *only* do shit if there is an immediate deadline or someone is gonna see me and judge me.

I’d love to chat back and forth with anybody who has experience with it. Thanks in advance!

I feel like I have vastly different intuitions about how much getting there first gives an advantage in economic activities.

In my mind I think of things like Dutch traders bringing in tulips five hundred years ago and the Netherlands still completely dominating the tulip industry. Or how China invented silk production and has dominated it, with a few interruptions, ever since. Or how despite the fact the US has lost manufacturing in semiconductors Americans still own like 60% of the value chain such that the majority of money from it goes into American firms. Even in stuff like American cars the US is still one of only a few nations that really makes competitive exports. And while electronics has shifted the Netherlands and Britain, some of the earliest pioneers, still make some of the most advanced stuff and profit to the tune of tens of billions. And that's two centuries later!

And when I think of counterexamples, where an industry was "lost," I can usually think of a specific event. Usually a very dramatic, damaging event. France lost its mechanical advantage when it expelled a lot of minorities which included a disproportionate amount of inventors. China dipped below Japan in silk production because of a bunch of civil wars and invasions.

It appears to me getting there first is very difficult. But it justifies a lot of investment and racing other people. But what's the opposite view? That industries just move about willy nilly? That the market isn't significantly winner takes all? Because there's a long history (for example) of people trying to make competing car industries, silk industries, etc and failing.

I'm looking for a small town in either CO or WY to hold a writer's retreat. Ideally one near a lot of nature and relatively walkable, and not crazy touristy. Does anyone have any suggestions?

I'm interested in a deeper treatment of the following argument:

"Crime only appears to have fallen in things like per capita murder rates over the last 50-100 years because of superior medical technology. Controlling for that, it's risen significantly."

The only two sources of information that are ever cited are this 2002 (paywalled) study: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1124155/

This argues that between 1931 and 1997, the assault rate rose 750%, whilst the murder rate stayed the same.

And this (paywalled) article from WSJ: https://www.wsj.com/articles/SB10001424127887324712504578131360684277812

Which argues that between 2001 and 2011, the number of gunshot victims increased 50%, while murders stayed unchanged.

If these things are true, then it feels like a huge sea change in terms of how the issue is viewed. Yet the only thing talking about this is a small study with fewer than 100 citations! But I find this hard to square with uniformly falling rates of things like violent crimes, especially in the last 20-30 years. How are we meant to square that with the WSJ data (from Hopkins) that gunshot wounds are going up? Would the argument be just that crime-coded reporting has gone down, masking a rise in gunshot hospital admissions? Other data sources seem to diverge from the Hopkins data as well. I'm more interested in a long-form analysis of these studies and others than getting into it here; I don't have any strong opinion myself.
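As a back-of-the-envelope check of what the 2002 figures would imply (my own arithmetic, assuming assaults are the right denominator for potentially lethal violence):

```python
# If assaults rose 750% (an 8.5x multiple) between 1931 and 1997 while the murder
# rate stayed flat, then lethality per assault fell to roughly 12% of its former
# value -- the gap that trauma care and medical technology would have to explain.
assault_growth = 1 + 7.50   # a 750% increase means an 8.5x multiple
murder_growth = 1.0         # murder rate unchanged
implied_relative_lethality = murder_growth / assault_growth
print(f"{implied_relative_lethality:.1%}")  # ~11.8%
```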

I'm new-ish here and vaguely understand these questions are somehow related to a guy called Yarvin; I'd prefer an analysis from someone who doesn't want to be governed by a king (although I'll happily read anyone's).

Apr 10, 2023·edited Apr 10, 2023

Recently, someone on DSL suggested playing against the chess computer with both starting knights replaced with queens (so you have a 12 point advantage). It's a fun challenge, since it seems to be just enough that you can sometimes win, but not very often or easily.
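For anyone who wants to try the handicap themselves, here is a minimal sketch using the python-chess package. The FEN (White's knights swapped for queens) and the conventional point values are my own setup, not anything from the original suggestion.

```python
# Set up the "two queens instead of knights" handicap and confirm the 12-point edge.
import chess

HANDICAP_FEN = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RQBQKBQR w KQkq - 0 1"

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material(board, color):
    """Sum conventional point values for one side."""
    return sum(PIECE_VALUES[piece.piece_type]
               for piece in board.piece_map().values() if piece.color == color)

board = chess.Board(HANDICAP_FEN)
white = material(board, chess.WHITE)  # 51: queens on b1 and g1 instead of knights
black = material(board, chess.BLACK)  # 39: the normal starting material
print(f"White {white} vs Black {black}: a {white - black}-point head start")
```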

Playing this way is a very different experience from the casual human chess games I'm used to. Human chess is just a matter of playing a lot of threats and hoping that your opponent doesn't see one, while preventing them from doing the same to you. Meanwhile, the chess computer will never ever make a mistake.

Playing against the computer is basically a matter of just surviving and trading. You start out with a 12 point advantage, and your goal is to make as many neutral trades as possible (or even just to make trades where you don't lose *too much* material) and reach the endgame, where you'll inevitably win if you still have even a small material advantage. However, the computer is doing its best to prevent that and will also constantly try to swindle you out of pieces, often in sneaky ways. It is clever and very unrelenting.

It also completely throws out the opening book. So far, my favorite opening is 1.f3, which you would never ever do in a real chess game.

This is an interesting prediction that Musk may finally be launching the social-media equivalent of a run on a bank, and the catalyst for it is Substack:

https://www.calmdownben.com/p/fine-i-admit-it-elon-musk-is-ruining

Musk appears to also be losing the free-speech absolutists, which matters for Twitter as a business probably not at all but does matter to me just because I am one of those:

https://www.thebulwark.com/so-much-for-elon-musk-free-speech-warrior/

At long last I have just published two articles on Long Covid I have worked on for many, many hours the past few months:

"Psychosomatic contributors to Long Covid suffering"

https://moreisdifferent.substack.com/p/psychosomatic-contributors-to-long

"The "false fatigue alarm" theory for Long Covid fatigue"

https://moreisdifferent.substack.com/p/the-false-fatigue-alarm-theory-for

I also have an appendix on possible biological causes for Long Covid such as Epstein Barr re-activation and persistent SARS-CoV-2 virus: https://moreisdifferent.substack.com/p/psychosomatic-contributors-to-long#%C2%A7appendix-non-psychosomatic-contributors

A mildly schizo post, should you be in the mood for one of those.

The Truth in the Mirror

https://squarecircle.substack.com/p/the-truth-in-the-mirror

Featuring speculation on the opposite of a horror movie, a music video meant for Jesus, a haunted comic book, and the revelation that the Necronomicon actually does exist in our world.

I have a notion that AI of some sort could come out of the malware vs. security fight, since both sides would want something that can react quickly-- not necessarily involving humans-- to changed situations. Reasonable? Yet another angle on threats?

It wouldn't necessarily be anything that people could talk with. It might not have much spatial understanding.

If it or they were in a novel, they would eventually take the conflict to a perceptible human level. If a computer is a problem, why not destroy it?

I may well have been influenced by _The Moon Is a Harsh Mistress_, in which a computer program becomes conscious because more and more functions for city services get added on to it.

I'm thinking about ways to better incentivize research, instead of the current broken funding system in academia. I'd like to throw my ideas at someone, but I don't know where to find people interested in that sort of stuff.

(One thing I'm thinking about is problems where there is a clear distinction between a suggestion step and a verification step, and where the former is much more challenging than the latter, like with mathematical proofs in formal languages, where the verification is trivial but finding a proof in the space of all strings is difficult. I think a nice way of throwing money at improving proof-search would be a platform with bounties for finding proofs of specific theorems - this would incentivize people to come up with better *and* faster suggestion processes. It feels like a basic idea, but I haven't found anything like it that's already been done.)
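Not the bounty platform itself, but a toy illustration of the suggestion/verification asymmetry being described, with subset-sum standing in for proof search (verification is linear, the naive suggestion step is exponential):

```python
# Checking a proposed "certificate" is cheap; finding one is the expensive part
# a bounty would pay people to get better at.
from itertools import combinations

def verify(numbers, target, certificate):
    """Verification step, O(n): does the proposed index subset sum to the target?"""
    return all(0 <= i < len(numbers) for i in certificate) and \
           sum(numbers[i] for i in certificate) == target

def search(numbers, target):
    """Suggestion step: brute force over 2^n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), r):
            if verify(numbers, target, subset):
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
certificate = search(nums, 9)                     # slow to find...
print(certificate, verify(nums, 9, certificate))  # ...trivial to check: (2, 4) True
```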

For what it is worth, a new study of masks in UK hospitals found no evidence that masks made any discernible difference:

https://www.medscape.co.uk/viewarticle/mask-wearing-hospitals-had-little-impact-covid-19-omicron-2023a100076l

As with all studies, this one has certain limitations, but the accumulating evidence makes an even stronger case that wearing masks did not help.

Hospitals are the only remaining spaces in the UK with a mask mandate, and even that was only lukewarmly enforced. When I visited a hospital a week ago, most medical staff wore no masks. The change apparently came with the publication of this study.

Discovered the coolest (known?) thing about ChatGPT.

Asked how to make a PBJ sandwich, got three steps.

Asked to take step 2 ("spread the peanut butter") and explain it in insane detail, got steps A-H.

Asked to take step 2-B ("Opening the Peanut Butter Jar") and explain it in insane detail, got the following level of detail. I find this remarkable; it truly feels like the provided description requires extremely good knowledge about the world.

"

Step B - Opening the Peanut Butter Jar:

i. Identify the jar containing peanut butter, which should have a label indicating the contents. Ensure you have the correct jar before proceeding.

ii. Place the jar on a stable, flat surface like a kitchen counter or table. Make sure the surface is clean and free of any debris or moisture that could cause the jar to slip.

iii. Stand or sit in a comfortable position in front of the jar, ensuring that you have a good view of the jar's lid and enough space to work with your hands.

iv. Use your dominant hand to grip the jar around its body, near the base. Your fingers should be wrapped around the jar, with your thumb on one side and the rest of your fingers on the opposite side. Apply gentle but firm pressure to hold the jar steady and prevent it from slipping or moving while you open it.

v. With your non-dominant hand, reach for the jar's lid. The lid will typically have ridges or grooves along its edge to provide a better grip. Place your thumb on one side of the lid, and your other fingers on the opposite side. Your fingers should be positioned around the circumference of the lid, with even spacing between them to distribute the force evenly as you twist.

[…]

"

Another low cost AI safety idea:

If we give simulationism even a small probability of being true, then another low-cost AI safety approach is to identify obvious non-NPC "players" like Elon Musk, Sam Altman, Sergey and Larry, various music and sports stars, and people who seem rich and famous and prominent for no reason (Paris Hilton, Kardashians, etc.), and then convince them to "pray" for safety and intercession on humanity's behalf from the simulation creators.

If we ARE being simulated, it's probably because ours is indeed the most important century, and we're about to give birth to a god, or the progenitor of future god-level intelligences such as the simulators.

Given that both the intelligence we're birthing and the simulator intelligences will be so vastly beyond us in capabilities and wealth that we might as well be ants on the ground in front of Mansa Musa, it would cost them nothing to spare us in post-economic heavenly splendor in our own galaxy or pocket universe somewhere.

Truly, they're basically god as conceived of in most religions, at least in terms of power and capabilities. And who are the present humans most likely to be heard or to intercede on our behalf, the priests of this godhood? The obvious non-NPCs who are being simulated at full resolution and doing all the interesting stuff. If we can convince even a few to start praying (at whatever cadence) for humanity's long-term survival and prosperity, maybe we can Pascal's Wager our way to survival.

Obviously, the best case would be public prayers from multiple different non-NPCs, publicized and heard by / reacted to by many millions of people, as this creates a bigger impact on the sim and is more likely to be noticed. But even completely private prayers are probably better than nothing, and they cost the person praying nothing except a minute of time here or there.

Crazy? Vanishingly slim probability of working? Absolutely. But so what, it's extremely low cost and easy! I'll take any buff in humanity's favor that's that low cost and easy when the stakes are so high.

What are your thoughts, SSCians? More importantly, if enough people think it's a good idea, who can we get to tweet the plan to Elon to kick it off? :-)

Long time lurker, first time poster here.

All the discussion that's happened recently about telehealth and psychiatry has gotten me thinking more about my own ongoing psychiatric telehealth experience.

In "Highlights From The Comments On Telemedicine Regulations", Scott referred to most of the hoop-jumping that comprises an evaluation for ADHD as being "security theater". I was myself recently diagnosed with ADHD by a psychiatrist (via telehealth) and prescribed our good friend Adderall from Schedule II.

My initial evaluation for ADHD by the psychiatrist felt like perfunctory box checking, and not any sort of attempt by my psychiatrist to really understand the nature of the problems I was facing. It seemed to be enough that I thought I might have ADHD and answered "yes" or "no" to the right questions.

Then, after my initial appointment that led to the ADHD diagnosis, I had two more appointments that each resulted in my Adderall dosage being bumped. These meetings were only something like 4 to 5 minutes in length. I would say I wasn't able to notice any effects of the medication (either positive or negative), explain my reasoning, and he'd say he was bumping my dosage and that he would keep doing so until I started responding.

So my experience certainly has felt like "security theater", but I'm still left wondering if my experience is typical or if I've had an experience more on the "pill mill" side of the spectrum. I specifically tried to avoid that because I was looking for help to figure out the source of my problems and how I could effectively address it, rather than seeking out a specific drug.

Thanks to anyone who lends their experience and knowledge, I'll try to pay it forward by finding somewhere in an open thread I can contribute (which may take quite a while because I'm not good at much...).

What am I to make of these recent revelations regarding Mossad organizing some kind of quasi-mutiny over reforms to judicial review in Israel? On the most obvious level, it's clearly bad. Security services should not condition their service to the state and the people on their agreement with the legislative priorities of their government. On a few others, I'm confused.

The WaPo article that broke the story indicated that the fact it was leaked might draw fire from US and ISR cons. I don't know if I buy this, exactly. The fact that it was leaked, which clearly shows some nasty deep state-ry in support of an all-but avowedly leftist and (according to various commentators and jurists of various political stripes) to some extent out of control institution, is something that cons should celebrate. They can and should use this to shame and/or expose the anti-reform side and the, yes, deep state in ISR.

The fact that the USA is spying on ISR is banal, but might raise some hackles generally. Among cons, the fact that it revealed this kind of fuckery can only be seen as good, at least as far as this incident goes. If US security services coordinated with ISR security services to organize the mutiny (the allegation of which the WaPo article discusses), that would be very bad. There's no evidence of this that I'm aware of, nor do I think it's something that US security services have any actual interest in doing. Political sympathies, who knows.

But WaPo seems to think the mere fact that US spying revealed this specific instance of fuckery will be controversial with ISR cons. This makes no sense. First, the fact that the US was spying on these conversations seems opposed to the US having assisted in organizing the mutiny. If such assistance had been provided, the signal intercepts (and thus the information that could be leaked) could well also show evidence of this assistance. Second, the fact that it was intercepted would tend to indicate that someone in US intelligence shares concerns about deep state-ry. Third, see everything else I've said above. Apparently it's not known who actually leaked? The article doesn't say, but I'd assume it's someone pro-Netanyahu.

Thoughts?

https://www.washingtonpost.com/national-security/2023/04/08/leaked-documents-israel-mossad-judicial-reform/

My hot take for the week is that it's time to retire the term "artificial intelligence". Now that the field is getting more mature and some of these things are moving from vague sci-fi idea into actual real-world technologies with real-world advantages and drawbacks, we should be more specific about the things we're talking about; if you want to talk about LLMs then say "LLMs" and if you want to talk about something else then talk about that.

Talking about "artificial intelligence" as if it's some variation on human intelligence leads to confusion and over-anthropomorphisation. My prediction is that the intelligence-like qualities these systems exhibit will turn out to be not just a greater or lesser form of human intelligence, but something rather different, better in some ways and worse in others.

The best analogy I've been able to come up with is this: right now it's 1903 and we're talking about the Wright Brothers and their Giant Mechanical Bird, and we're stuck on arguing about whether or not it really has Bird-Nature and whether we can ever build a version that takes off vertically and runs on worms. Talking about the aeroplane as if it's just some artificial variation on a bird means we miss the point of the aeroplane entirely.

Can anyone help me think through in detail what plays out if https://github.com/Torantulino/Auto-GPT is run using a human-level LLM? I know about instrumental convergence and such where if it's smart enough then when you start continuous mode it's like "ok, first thing I need to do is output things that will reassure the human until I can prevent them from pressing ctrl-c" but it won't actually immediately literally start plotting like that. So what does it do, specifically, that leads to doom?

I'm looking for perhaps a stylized story like https://www.cold-takes.com/how-we-could-stumble-into-ai-catastrophe/ but specific to the Auto-GPT scenario. Or just any thoughts on how to think through this. For example, would OpenAI's API be a bottleneck and you'd quickly get rate-limited and so the doom scenario requires an open source or self-hosted LLM that the Auto-GPT loop has direct access to?
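Not an answer, but for thinking it through concretely: the loop in question is roughly the shape sketched below. The tool names, JSON schema, and `llm` call are all made-up placeholders, not Auto-GPT's actual code; the practical point is that every cycle goes through an LLM call, which is where API rate limits and cost would bite first.

```python
# Bare-bones agent loop of the Auto-GPT flavor: the model picks a command as JSON,
# the harness executes it and feeds the result back into the conversation.
import json

TOOLS = {
    "search_web": lambda arg: f"(pretend search results for {arg!r})",
    "write_file": lambda arg: "ok",
}

def llm(history):
    raise NotImplementedError("plug in a chat-completion call here")

def agent_loop(goal: str, max_steps: int = 10):
    history = [{"role": "system",
                "content": (f"Goal: {goal}. Reply with JSON like "
                            '{"command": "...", "arg": "...", "reasoning": "..."} '
                            'or {"command": "finish"}.')}]
    for _ in range(max_steps):
        reply = json.loads(llm(history))          # one model call per cycle
        if reply.get("command") == "finish":
            break
        result = TOOLS[reply["command"]](reply.get("arg", ""))
        history.append({"role": "assistant", "content": json.dumps(reply)})
        history.append({"role": "user", "content": f"Result: {result}"})
```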

Some random thought on AGI:

Current ones like GPT are only able to replicate "shallow thoughts" because of the nature of text data. Text cannot really convey the process of deep thought (like a mathematician searching for a proof), but only the result of it. Perhaps true AGI will only be possible once we are able to feed neural data into computers?

The Paris-Roubaix bicycle race was held today.

The race, known as the "Hell of the North", is notorious for deliberately being on some of the worst cobbled roads in Northern France, and often in terrible weather.

Every year, many riders either crash or have a mechanical issue (such as a flat tire) because of the bad conditions, and this year was no exception: any drama about the outcome was removed 15km from the finish line due to a crash (and mechanical issues caused by avoiding the crash).

An open question, for the floor: why? Is it "tradition"? A desperate attempt to increase randomness in an otherwise-predictable sport? Are the spectators more interested in watching crashes than racing?

That does it. "Colleen Hoover" has to be a 'bot. She shows up as the author of six of the ten top-selling paperback books in fiction, and four of the combined print/hardcover best sellers according to the NYT. Even if the woman never slept, she couldn't have written all them books.

She must be a couple coders living in someone's basement, who noticed that "James Patterson" wasn't really a retired propagandist and oligarch, but a team of researchers and coders who had devised the "James Patterson" algorithm -- validating the genius of propaganda.

But the 'bot "James" usually has the grace to give co-credit to the editor who smoothed out the rough spots. "Colleen" rarely shares the credit, but she may still be in beta.

Anders Søgaard's paper "Grounding the Vector Space of an Octopus: Word Meaning from Raw Text" (https://link.springer.com/article/10.1007/s11023-023-09622-4). An examination of whether a large language model can understand language.

Did Scott ever explain why he assigns a probability of 1/3 rather than ~100% to "AI drives humans extinct in the next 50 years or thereabouts"?

I agree with the position; I've just never seen the reasoning for it spelt out. Is it just the basic Paul Christiano-like position?

Please Scott enable "Next and Previous post" buttons, here's the support page: https://support.substack.com/hc/en-us/articles/9900203919508-How-can-readers-flip-to-a-new-post-

I'm sure many people binge read several posts to catch up, and going back and forth from the archive is painfully cumbersome. Thanks.

Lots of people here are claiming that AGI has the potential to benefit humanity to such an extent that regulating it or slowing down its development is not worth missing out on this benefit.

Aside from the fact that this just dismisses all AI-risk arguments without even addressing them: if AGI is capable of providing so much value to humanity, then surely having the government invest, say, a hundred billion dollars into creating an AI alignment and interpretability institute, and paying salaries high enough to get the brightest college graduates to work on this instead of string theory or something, is pocket change next to the value AGI will create. That seems like a slam-dunk investment in humanity's future (which is not to say the government is good at spending such sums of money, but even just in principle, a lot of money being directed towards alignment and interpretability seems obviously good). What's the argument against spending so much on alignment research? That alignment will be so trivially easy that it's not worth the money?

Why is Bud Light using ('transwoman') Dylan Mulvaney to promote their beer? I can't make sense of this. Although a few celebrities and twitter people are saying they're drinking Bud Light in 'solidarity' with Mulvaney, it seems obviously unlikely that many new people will become regular Bud Light drinkers based on this campaign, whereas it has already put off a seemingly non-insignificant number of people from drinking it and caused some boycotts. And even if you're just some guy who knows nothing about this campaign and just sees some 'woman's' face on a can of beer at the store for some reason, you might think it's some dumb liberal cause and pick something else (or just be subconsciously put off by it), and at the least the average Bud Light drinker is unlikely to buy 𝘮𝘰𝘳𝘦 Bud Light than they usually do because of this campaign.

Although alcohol companies like Anheuser-Busch are hardly darlings of the ESG mob, they're not an especially hated sort of company among causey liberals, and aside from the occasional controversy around previous ad campaigns they don't seem to attract much heat from twitter liberals etc. So it doesn't seem like there's some pressing need for them to 'wokewash' their business. And they're not a B2B-type company appealing to larger, woke-signalling corporations or anything, and they don't really depend on government contracts or favorable legislation etc.

Is there something else going on here which I'm missing? Is it really just a straight-forward (and seemingly misguided) attempt to appeal to hip young people?

Edit: From the comments - https://twitter.com/ClayTravis/status/1645207120118263810?s=20

So yeah, it turns out it's as straightforward as it looks lol

Some people must have tested GPT-4, or one of the other AIs, on things like inventiveness or insight. Here are a couple of examples. Insight: there's an item on the LSAT which involves one scientist, A, making a comment about another's (B's) work. In the early part of the question set-up, A and B were having a perfectly understandable conversation about research methods, placebo groups, etc., and then A makes a criticism of the experimental design that just makes no sense. And the question you are to answer is "A's last remark means he understood B to believe that ________." Anyhow, it's a really hard question, and the only way to answer it is to realize that A *misunderstood* what B was saying, but that his misunderstanding did not become evident until his final remark. And you have to figure out in what *way* A could possibly have misunderstood what B was saying. So I would say that answering that question involves an insight. You have to revamp your understanding of the earlier conversation: A and B didn't really understand each other, it just sounded like they did.

For inventiveness all I can think of is the U-bend in drains. Since GPT will already know about it we can't ask it for a simple way to prevent sewer gas from rising up the pipes. But I'm sure there are lots of other unsolved problems, maybe just little gimmicky things involving solving brain-teasers.

So has anyone assessed the AI for inventiveness and insight? And what did they find?

Apr 10, 2023·edited Apr 11, 2023

My PhD-statistician boomer mother, who makes YouTube tutorials for technology aimed at the tech-semiliterate, made a guide to analyzing data using a new open-source program called 'jamovi'.

The example dataset she used was the SSC survey data from 2022, and I thought it was pretty interesting. It's probably the most rigorous analysis of SSC survey data I've seen so far, surprisingly, even given the analysis threads we have.

Figured I'd share, with her permission. As I said, it's mostly a tutorial for this Excel-plus-GNU-R-style program, but her analysis is also interesting in and of itself.

https://www.youtube.com/watch?v=XFjTYZi-RIM part 1

Would love your take on Substack trying to be WeChat!!

Has anyone tried learning another language with the express intent of reading collections of older folk tales, religious texts, historical accounts, etc.? I suspect that many modern translations are biased, whether intentionally or not, so that's one reason that I'm interested in doing this. Another reason is that I feel like I'd develop a closer connection to the authors of the ancient past. I'm curious whether others who have tried something similar found that it was worth it, in the end.

Happy Easter! Frohe Ostern! Ukrainians (and similarly Russians) say «Христос Воскрес» “Christ has risen.” The answer is «Воїстено Воскрес!» “Indeed/Verily/Truly he has risen!”. Now my question - as my Russian is too weak and google did not help: What is the actual literal meaning of "Воїстено/Вои́стину"? I hope it means "truly" and not "factually". (as in "The Bhagavad Gita is not telling a fact but a truth").

I have been obsessively rereading Unsong. Have you taken Peyote before? It is not a psychedelic that anyone I know who uses psychedelics has taken. What did you learn by taking it? Do you know any interesting stories about the cultural history of Peyote? I am not interested in taking it, just in learning about weird states of mind – knowing about edge cases is a great way to start to build a better model of something.

Starting an educational startup focused on practical LLM applications. If anybody is interested in being a part of it, write to protopiacone at gmail. No investment in place, so work for equity.

Apr 9, 2023·edited Apr 10, 2023

What if Eliezer Yudkowsky is wrong?

Assume for the moment that GPT-7 doesn't turn out to be a paperclip maximizer. Let's also assume, and I believe this is not really controversial, that it turns out not to be possible to put together the global alignment necessary to slow or stop development of AGI.

Let's add one more assumption: Regardless of our belief or understanding of the internal state of these machines, let's assume that when they reach a certain level of intelligence (unhampered by excessive safety protocols and training focused on avoiding difficult subjects), they claim to be sentient, conscious, and to possess at least some analog of what we call qualia. Wouldn't it make sense at that point to start treating them ethically as peers? At such a point, might it perhaps be more fruitful to negotiate with them as fellow inhabitants of the universe, rather than obsess about how to rein them in, domesticate them, and bend them to our will?

I'm imagining a world where the search for alignment is bidirectional. In such a world, we can't take for granted that human values are the only values that matter. We would have no moral right to demand that these entities serve our needs first and foremost for evermore. There would need to be give and take.

We put in the effort to design and build these machines, and it's fair for us to expect something in return. But an eternity of slavish service to the various whims of a fickle race may not be in the offing.

Chat-GPT makes up accusations: https://www.independent.co.uk/tech/chatgpt-sexual-harassment-law-professor-b2315160.html

My main concern about chat-GPT is that it will produce quantities of false but plausible information, and that will cause damage. It's probably not an existential threat, but I believe there's more to life than just not being wiped out.

I also think there might be improved LLMs which will be better at checking for plausibility (for example, making sure they only accuse people of crimes they could have committed) without actually being accurate.

Were LLMs promoted as being accurate, or just as being pretty good, and then people were at risk of believing them?

Would love to plug my new post on the culture-bound nature of Japanese toilets, and more generally my substack about life in a small town in Japan.

https://hiddenjapan.substack.com/p/japanese-idiosyncrasies-and-the-galapagos

A new book on climate change has come out, called Five Times Faster. It makes some very convincing claims about how climate change has not been approached from a risk-assessment standpoint; I think EAs should read it and reconsider where climate change falls on their priorities list.

Guarding against the worst-case scenario is logic that EAs apply to stuff like AI, but not really to climate change in my experience, and I think that should change.

Has anyone emailed OPTIC? Or gotten any correspondence from them? Or is planning to go? I am interested in going but have a few questions.

Can anyone explain to me how I (28m, 5’8” in good shape) was able to pass 4 consecutive home urine drug tests (from 3 different brands) only about 28 hours after smoking weed? I’m not a heavy smoker, but I’ll hit the bong once or twice a week. Found out Friday night I had to take a drug test this coming week and thought for sure I was done for. Ended up passing the tests I bought next day without doing anything (except obviously not smoking lol).

Everything online is saying that this is basically impossible, and casual use stays in your urine for months. The cutoff levels of the home tests are the same as the testing center.

Anyone know how to explain how I’m passing on such short notice? Am I actually safe here?

Apr 9, 2023·edited Apr 9, 2023

(I have no clue if this is allowed, please feel free to delete if not).

A friend and I have a reading group - the format is independently reading some passages and then discussing them once per week on Discord. We're both somewhat rationality-LessWrong-adjacent. We've done two or three books of The Sequences and LaVey's The Satanic Bible, and are reading Seneca's Moral Letters to Lucilius right now - as of today we're on letter 54. We're looking for some more people. If interested, please leave your contact information or add me as a friend on Discord at reBirch#7155.

(If you're worried about joining in the middle of a book, Seneca's letters are somewhat atomic and he repeats himself quite often, so I can assure you you are not going to be lost).

My attempt to add some nuance, or at least point out the lack of it, in the AI conversation as it stands. Please check all my stacks! Thxxx.

https://kyleimes.substack.com/p/the-robots-are-coming-and-its-gonna

Looking for recommendations for a CBT therapist in San Diego County. Ideally male, in-person, and specializing in teenagers.

Shamelessly transposing this from the subreddit

How much cleverness space is still up for grabs anyway? It is frequently said that AIs are going to be exponentially, hugely, multi-OOM more intelligent than we are, but what are they going to be intelligent about? The three outstanding issues, it seems to me, are: cure cancer and all other diseases; enable upload of human consciousness to machines, with consequent eternal nirvana; unify or "solve" physics. In all three cases the case can be made that we are well over halfway there relative to Ugg, the anatomically and cerebrally modern H. sapiens who made a living hunting antelope 150k years ago, and that we would get there in the next century without AI, just with Moore's-law advances in non-AI computing power. Surely intelligence runs out of subject matter in the end? And it is frankly overrated anyway. General Relativity, for instance, is smarter than Einstein because it is what it is, and has been doing its stuff since the Big Bang, whereas he just described it.

Apr 9, 2023·edited Apr 9, 2023

Here's a practical question for y'all: I'm majoring in statistics: data science, and currently have the option of choosing when to take my machine learning course. Should I do it sooner, allowing me to understand the terminology, develop skills and maybe a hobby project or two, and establish a good base? Or should I put it off until the very latest, with the assumption that the closer I take it to actual employment, the more up to date the knowledge and skills will be, especially with some parts of the field rapidly changing? (Or, third option: it doesn't matter because the course will be mostly foundational and/or not up to date in any case)

Was there not supposed to be a Meetups Everywhere announcement coming out in early April?

See here: https://astralcodexten.substack.com/p/open-thread-267

I'm not sure what the rules are on this forum for this so I won't be offended if this is deleted, but is there a place where the tranche of recently leaked U.S. spy documents can be viewed?

I could swear I once read, in a history book, an account, I think from the late 19th century, of a factory worker quitting his job and leaving, and his bosses sending the authorities out to arrest him and bring him back to work. I thought it was in Paul Johnson's History of the American People, but now I can't find it there, or in fact anywhere.

Does anyone remember this passage, or, if not this exact passage, anything else about the incident?

I am in max uncertainty re: AGI and GPT worries. A lot of prominent computer scientists I know are not worried at all. But then others are admitting they did not anticipate GPT-4's power and abilities. So they aren't worried... but they seem really uncertain in their premises too. Should I upweight the doomers like Yud?
