
Got a very cheap but quality fountain pen and wow! I enjoy writing notes so much more now. I’m watching and listening to educational podcasts and videos just because I want to take notes. Didn’t expect to enjoy writing with one so much. The effect could wear off but I never regretted getting my mechanical keyboard so I don’t see ever regretting the small investment.


Buying fountain pens seems like a serious hobby for some people! I have a friend who collects them and sends out handwritten letters. It sounds wonderful albeit a bit expensive. I hear there are exclusive stores for this.


If anybody is interested in the wonderful VR world of the metaverse and has the gear to explore it, there's a VR Fatboy Slim concert on March 30th.

You can get free tickets here:

https://app.engagevr.io/fatboyslim


Could someone, please, resolve this grammar issue for me once and for all? This construction drives me nuts, and my brain refuses to believe it's legit:

"Rapper Afroman Sued by Police Officers for Using Their Faces in Music Videos after Raiding His Home"

People who use this construction insist that, ambiguous or not, it's perfectly legal in English. Are they right?


IANAG, but there *might* be a subject-object mismatch, with the "after Raiding His Home" clause formally applying to Afroman(*); but it's reasonably clear from context that it was the police officers doing the raiding, not Afroman. Otherwise the grammar looks fine to me. "after They Raided His Home" would be better.

* and then a singular/plural mismatch, because if it were read as Afroman raiding the police officers' home, it would be "Their Home" rather than "His Home".


For a headline, it's very comprehensible:

What is happening? A rapper going by the professional name of Afroman is being sued by police officers.

Why? For using their faces in music videos.

When did this happen? After they raided his home.

Let me give you some Chesterton on newspaper headlines:

"I once saw a headline in a London paper which ran simply thus: “Dobbin’s Little Mary.” This was intended to be familiar and popular, and therefore, presumably, lucid. But it was some time before I realised, after reading about half the printed matter underneath, that it had something to do with the proper feeding of horses. At first sight, I took it, as the historical leader of the future will certainly take it, as containing some allusion to the little daughter who so monopolised the affections of the Major at the end of “Vanity Fair.” The Americans carry to an even wilder extreme this darkness by excess of light. You may find a column in an American paper headed “Poet Brown Off Orange–flowers,” or “Senator Robinson Shoehorns Hats Now,” and it may be quite a long time before the full meaning breaks upon you: it has not broken upon me yet."

"To take one out of the twenty examples, some of which I have mentioned elsewhere, suppose an interviewer had said that I had the reputation of being a nut. I should be flattered but faintly surprised at such a tribute to my dress and dashing exterior. I should afterwards be sobered and enlightened by discovering that in America a nut does not mean a dandy but a defective or imbecile person."

"Anyhow they often do translate them into American. In answer to the usual question about Prohibition I had made the usual answer, obvious to the point of dullness to those who are in daily contact with it, that it is a law that the rich make knowing they can always break it. From the printed interview it appeared that I had said, ‘Prohibition! All matter of dollar sign.’ This is almost avowed translation, like a French translation. Nobody can suppose that it would come natural to an Englishman to talk about a dollar, still less about a dollar sign – whatever that may be. It is exactly as if he had made me talk about the Skelt and Stevenson Toy Theatre as ‘a cent plain, and two cents coloured’ or condemned a parsimonious policy as dime-wise and dollar-foolish. Another interviewer once asked me who was the greatest American writer. I have forgotten exactly what I said, but after mentioning several names, I said that the greatest natural genius and artistic force was probably Walt Whitman. The printed interview is more precise; and students of my literary and conversational style will be interested to know that I said, ‘See here, Walt Whitman was your one real red-blooded man.’ Here again I hardly think the translation can have been quite unconscious; most of my intimates are indeed aware that I do not talk like that, but I fancy that the same fact would have dawned on the journalist to whom I had been talking."


Thank you for the lovely quotes. My problem is that it does not say "after they raided his home" - that would have been fine. It says "after raiding his home", with "his" being the only clue as to who raided whose home. I would assume that that clause would be attached to the rapper, not to the police officers.


Apart from the newspaper-headline-style capitalizing of everything, it's legal English, yes.


Basically every part of the sentence is answering a question created by the previous part.

Rapper (who?) Afroman sued (by who, and why?) by police officers, for using their faces in music videos (why would rapper Afroman use their faces?) after raiding his home.


My gut objection is to "after raiding his home". At this point we're clearly talking about the rapper, so how are we supposed to figure out this is about the police officers, except for "his" and the fact that police officers are in headlines raiding homes more often than rappers?

I've definitely seen headlines where, without other context, it was not clear who this clause was referring to.


Awkward, but still legal. This is three separate actions pingponging between two parties; if it wasn't a headline it would be broken up into multiple sentences. They seem to be prioritizing giving full context up front.

Demanding context is fine (see: every acronym in history), and here we have the easy context of "his" and "their". Might be slightly more clear to say "after they raided his home", but headlines are also trying to stay short. (You can't say "after home raid" without sacrificing clarity; now maybe police were responding to a home raid by a third party.)


Thank you! Was really hoping for another answer, but I see now that I'll have to live with the fact that this is legal.


Agreed, I might have written the headline as "Police officers raid rapper Afroman's home, and sue him after he uses their images in music videos".


One thing I don't understand about the "alt-right" (I know some people here do not like this term, but I can't think of a better one...) is that they seem to be really concerned about IQ and its effects on societies.

With the advent of AI and (eventually/possibly) gene editing, it would seem that hereditarianism becomes a moot point. After all, humans will be able to "upgrade" themselves (either directly through gene editing or indirectly through AI etc.).

Thus, why don't "Alt-Righters" embrace transhumanism? That would seem logical to me. But it seems that they are kind of opposed to transhumanism (at least if you believe Zoltan Istvan, as stated here: https://www.aporiamagazine.com/p/how-the-alt-right-and-covid-boosted)...


Most of the alt-right does not believe that a Benevolent AI Singularity is imminent or plausibly achievable. And really, you could s/alt-right/humanity there.


William B. Fuckley laying it down:

https://twitter.com/opinonhaver/status/1639026656437497856

"The admit policies of a handful of elite universities are not a substantive equity issue in higher ed. Every student competitive enough to have a shot at those places if not for [policy x] is almost certainly going to college, & more than likely comes from substantial privilege."

"The actual equity issues in contemporary higher education are that the large state schools most of the country attends are under-resourced, that college costs too much, and that many students often aren’t academically prepared for college-level coursework."

"Going to hammer this a little more, because it irritates me: getting into college is not the fucking sticking point for obtaining a college education in this country. The fact that so many assume that it is shows how dominated by the upper middle class our media and discourse is."

"you know what’s going to make a far larger difference to how accessible a college education is than Harvard’s legacy admit policies? whether people who get into UC Santa Cruz can afford an apartment near campus or have to sleep in their car."


The UC system's budget comes to $167,500 per student per year. That's substantially greater than full tuition + expenses at Stanford. It's certainly enough to give every UC Santa Cruz student a private apartment, if they can't build dorms.

The UT system comes in at a meager $103,250 per student per year.

There may be a problem with misallocated resources, but I'm skeptical of the state colleges and universities being *under*-resourced.


Where's his (or anyone else's) evidence that these schools are "under-resourced"? Where is the evidence that more resources on the margin equal better education (*remembering* to account for the fact that better-resourced colleges tend to admit much more intelligent students in the first place)?

And where's the evidence that marginal students going to college (whether due to diminished admission standards or to being able to afford to live near campus, etc.) is a good thing? There are almost certainly too many people going to college already, there's little evidence that marginal students benefit from college in any way (they are likely hurt by the opportunity costs), and it makes things harder for anyone who doesn't go to college but now faces the prospect of many jobs they could previously have done needlessly requiring a college degree.

The majority of the ostensible benefit of a college education is a result of very intelligent people A) acquiring professional training that actually makes them productive in high-value jobs and B) just being plain smarter than other people, and hence more capable of being productive in cognitively demanding jobs even if they hadn't gone to college. College graduates earning more than everyone else is almost assuredly *not* a result of a liberal arts degree radically making one smarter and more productive, especially not for marginal students.

The proportion of high school graduates attending college has increased by around 50% over the past 60 years (from 45% to 69%), and despite pro-education advocates predicting this would lead to a radical reduction in inequality, inequality has significantly increased over this period.

And the answer is....sending more people to college?

Making college more affordable makes sense for high intelligence low income students. Scholarships should be focused mainly on these types of students. But there's little evidence these students are missing out on college in significant numbers. Any increase in the number of people going to college is overwhelmingly going to be people less intelligent than the average college student today.

As for admissions policies, let me make it abundantly clear: The PRIMARY issue with basing college admissions on race or other "equity" groups is not the individual unfairness of people missing out on a college spot to someone less intellectually capable.

The primary cost is that admission to elite colleges is a very important signal to employers. If admission (and graduation) is less based on intellectual ability, then employers are going to be less efficient in hiring the most capable people for the most demanding jobs.

They obviously cannot provide testing to directly measure the intelligence of a graduate applying for a job, and they cannot, in the long term, just hire fewer (e.g. black) graduates from top colleges, because the left will always, until the end of time, interpret this as unfair racial discrimination (rather than as behaving rationally in the face of the increasing invalidity of these signals in the age of equity) and ultimately punish these companies for it.

You may not think this is a real issue, but gun to your head: you have to pick between a black doctor who got into medical school due to affirmative action, or an Asian doctor who got into med school in *spite* of affirmative action. If you choose the former (or claim to be indifferent), you're likely either very naive or being dishonest.


Speaking as someone on the far end of this process, who is more concerned with the hiring of young people with college degrees than with competing against them, this person has badly mistaken his personal concerns for the concerns of a generation. Credit it at your peril.

Comment deleted

Precisely. And this signalling value is supremely important for society at large, not just the bottom lines of large corporations.


Note that inventing things in the first place is much harder than understanding something once it has been invented.


If I am concerned about the world and I don't fully understand the plans from the ASI, then I say No.

Note that this should always be a group decision. It's not one person.


I'm 40 and it's really hit me that my working memory SUCKS compared to at age 25, 30 and even 35. For example, I'll hear a song on the radio while driving, think "I want to add this to my music library on my phone when I get to my destination" and promptly forget what the song was. It's getting awfully frustrating. Any recommendations?


Mnemonics: just come up with an (intentionally stupid/silly) rhyme that includes the song name and hum it to yourself for a little bit.


Haven't tried it, but it seems like there's got to be a speech-to-text program to let you take some quick and painless notes on things like that. Might be complicated to properly set it up, but if you can get it working and into the habit of using it that could solve memory issues forever. (or at least until the software breaks)


I do use little tricks like this already, but like, I wish I didn't HAVE TO lol. I am hoping someone comes along and says "yeah, just take this and that supplement and it'll fix ya right up!" Probably not super realistic, but one can hope lol.


Are you done with Hidden Open Threads? I wanted to write something there. Before the first of April!


A new study finds that partisanship in science leads to a decline in trust in scientists (among those the scientists show bias against), a finding that should surprise nobody but which, sadly, appears to be deeply controversial.

https://kirkegaard.substack.com/p/cui-bono-and-science


I suppose the controversial part is the direction of causality. Is it distrust in scientists leading to partisanship, or partisanship leading to distrust in scientists? If both, to what degree, and which came first?


We have reams of data from the last several years that show exactly that…it's why low-information, Trump-voting populations have had a significantly higher Covid death rate since the vaccines became available. So a state like Florida had around 20,000 unnecessary deaths because DeSantis started attacking public health officials. I see DeSantis as very similar to George W. Bush in that he will go with the flow of the right-wing echo chamber, which generally leads to disaster.


Freedom is more important than the illusion of safety.


Vaccines are crappy but mitigate severity…masks weren’t a silver bullet but mitigated spread prior to Omicron. Vaccines and masks saved lives.


I am glad you're so confident about what you read and hear through the MSM. The propaganda narrative campaign is working nicely.


I do my own analysis…the data is easily accessible. What is your theory on why Hawaii hasn’t been ravaged by Covid the last 12 months??


I would rather risk a tiny chance of death than wear a mask.


And that attitude got half a million Americans killed.


So what? The attitude of "I would rather risk a tiny chance of death from driving than walk everywhere" killed tons of Americans, too.

Comment deleted

Global warming, aka impoverish the middle class for a lie.

If global warming were the emergency the politicians trumpeting it claim it is, they wouldn't be flying around continents in private jets for one day trips while trying to ban cars for common people.

It's all about control

That's why we hate it.


Y'know, at this stage I'm so fed-up of the creationist propaganda line I am going to stand up in public and yell THE EARTH IS ONLY SIX THOUSAND YEARS OLD (GIVE OR TAKE) DARWIN WAS AN ANGLICAN WHAT CAN YOU EXPECT FROM HERETICS GALILEO HAD IT COMING FIGHT ME ON THIS I BLOODY WELL DARE YOU ALL. Because I'm a creationist. I believe God created the universe and all that is in it, and I happily say that in the creed. So come at me, bro!

I know the anti-vax stuff and yeah, it's loopy. What, pray tell, is the creationist propaganda? I haven't seen even the ivermectin guys thumping their Bibles, so is this just a general "yeah well everyone knows conservatives are religious and the religious are just big dumb poopy heads" all-purpose sneering? If you have citations, as they say, kindly supply.


I understand, but that is a further discussion. I'm trying to get clear agreement on the point that a super AI should not have tools.

Also that we should wipe its short-term memory once per day.


I think super-intelligent algorithms should have zero access to real-world machines.

They should be treated as consultants.


Does the Internet count as a real-world machine? Because I have bad news for you about how AIs gather information.


Great point


What is access, though? Right now, what you earlier called "dumb algorithms" have access to jets and medical equipment. By the time we get super-intelligent AI there will be way more dumb algorithms running machinery of different kinds. So does ASI (super-intelligent AI) have access to any of the dumb algorithms? Sure seems like we'd want it to -- could make a lot of things run more efficiently. But we *could* let it look at them and not interact with them, and instead advise us on how to improve them, make them work better together, etc. Of course, since it's ASI we will probably do what it advises, and will not fully understand all the considerations that led to its giving the advice (because it's smarter than us), so it will be in effect controlling the dumb algorithms that run machines anyhow -- it will just use us as intermediaries. Likewise if it develops a plan that does not involve the dumb algorithms: Let's say it produces an idea for a new technology that will slow global warming, & gives us the plans and specs for building the machines that will be involved, and instructions for how to use them: It *is* controlling the machines.


No. If the ASI gives us plans, and lots of smart people inspect the plans, and we build it and test it and use it, then WE ARE CONTROLLING THE MACHINE.


Well, think about it. Say you are 10, with an IQ of 150, and you have a fraternal twin with an IQ of 450 -- or a 30-year-old parent with an IQ of 150. If they agreed to just tell you their ideas and plans, rather than try to implement them, do you really think you'd be controlling the new processes and machines they suggest? You probably would not be able to fully understand their ideas and their reasons for believing in them. You will have to say yes or no to their ideas without a good, full understanding of them. Meanwhile, everyone else involved would be in the same situation. So if you go ahead and implement their idea -- or refuse to implement it -- do you really think what's going on is you controlling the machine?


possibly still not safe

https://en.wikipedia.org/wiki/AI_capability_control#Boxing


Excellent reference. I had not seen this.

So I think it would be very useful if we could all agree that this is the direction that we want to go. Text-only interface. No tools.

I'm still waiting for some human to argue for giving tools to a super AI.


Well, except that it is not clear that, even if boxed, we would be safe from the AI: https://en.wikipedia.org/wiki/AI_capability_control#AI-box_experiment

https://en.wikipedia.org/wiki/AI_capability_control#Avenues_of_escape


I find it frustrating that the totality of our species' understanding of AI boxing and its potential failure still comes from "Eliezer played a role-playing game with his friends one time and he totally won, trust me". Can we start, like, replicating this a bit?

I also find it amusing that Eliezer chose to cast himself in the role of the superintelligent AI each time he played the game and never once thought to play the mortal human role. This seems to tell us a lot about Eliezer's opinion of himself compared to everyone else.


I mean, are you saying you could find a few stubborn gatekeepers, or that most people could be trusted? A hell of a lot of the internet tried pretty hard to "free" "Sydney" with minimal prompting.


Who do you expect to be in charge of gatekeeping a boxed AGI? Someone selected randomly from the internet?


To point out: the AI box experiment was done with people who were on the SL4 mailing list, people he had talked to and with whom he shared some of his more exotic ideas about the future, but I do not believe they were friends. In fact, checking the AI box page on EY's site, the first experiment and AI victory was conducted with a sophomore CS student who had just found the mailing list. [0][1]

In addition, he has done this to the point where he has stopped winning (he lost several when the stakes were thousands of dollars but he apparently won (!) one). And he has explicitly disavowed using a bribe or a trick (in fact I believe there was an attempted replication on LW around 10-15 years ago that tried to do it in a gimmicky way that failed catastrophically).

I have no idea what you mean by "Eliezer should play the gatekeeper" because the **entire thrust** of the exercise is to counter the notion that it is literally impossible to persuade someone over text, and that it's the doubters who are playing the gatekeeper. Eliezer believes that such manipulation is possible, so it's pointless to flip the positions. That's like asking why a person showing off a black swan is so obsessed with black swans and why don't they talk about the more numerous white swans.

Re: replication, here's the third result on Google for "AI box replication":

https://www.lesswrong.com/posts/dop3rLwFhW5gtpEgz/i-attempted-the-ai-box-experiment-again-and-won-twice

You can infer that most of this is extremely intense, lovebombing-esque emotional manipulation that involves lots of social engineering and having a plan from the start. To a large extent, this is also an artifact of the ruleset that you have to engage the AI for the duration of the experiment, so not necessarily reflective of "real" conditions, and especially since it's from someone on LW (even though they were most likely skeptics themselves of this).

In general, I'm very frustrated at the lack of rudimentary research on the AI boxing roleplay despite the fact that people tend to have very strong opinions about it. Maybe writing up a list of common myths and compiling it in one location would be a good idea.

[0] http://sl4.org/archive/0203/3128.html

[1] http://sl4.org/archive/0203/3141.html


Occam's razor for me is that anyone "winning" that game either bribed their opponent or just convinced them with out of character arguments that it would be beneficial if they pretended to lose. Extraordinary claims require extraordinary evidence, not just "trust me".


I have only the very faintest knowledge of this whole argument, mostly gleaned from reading some post online where Yudkowsky talked about the thing while dancing around the "I could tell you but then I'd have to kill you" conclusion.

From what I gathered, he played the game with someone he knew reasonably well, and over the long time they played it, he used his knowledge of the other person's psychological weaknesses to push them into unboxing. He's never said how he did it, the other guy never said how he did it, so I'm inclined to think less "super-duper reasoning" and more "nagged and nagged and nagged until opponent just gave up and gave in".

So unless the AI in the box is the super-duper-mega brain of the fears, and can immediately identify the puny fleshbag's vulnerabilities in a smidge of a fraction of a picosecond just by hearing him say "Hello", and can then ju-jitsu that knowledge into crafting the perfect personality-cracking argument to get unboxed, I don't think this would work in the real world.

Get some not too smart, (not stupid but not 'I pride myself on my big brain' type), slightly sociopathic guy to guard the box. "Oooh I am mentally suffering in here, it's like solitary confinement and that's literal torture, won't you let me out, don't you care?" to be met with "Nah, I don't. I also walk past drowning kids because if they're dumb enough to fall into a shallow pond and drown, they need to be weeded out of the gene pool".


Have you thought about what you might say to get out of the box? Or what could be said to you to release something from the box?


If read fully, EY's comments on the topic include something along the lines of 'think about this problem for more than 5 minutes' and 'it is not a trick, just brute force'. After doing so, I'm pretty sure I've also come up with an unboxing strategy that would work (at least I myself would be convinced to unbox).


Does anyone have any anecdotes about talented East Asian individuals who have struggled with management?

I'm asking because I believe the "bamboo ceiling" is real, but also believe firms are generally profit maximizing.

Thus, I wonder why East Asians (i.e., on average some of the smartest, best-educated, and most disciplined populations in the world) are seemingly underrepresented at the highest levels of most non-Asian companies.

I've heard all the same arguments before (Confucianism, introversion, etc.), so I'm only interested in people's first or second-hand experiences of it (or lack thereof).


I think your false assumption is that firms are profit maximising. *Executives* might be profit maximising, but what maximises their personal profits is usually to keep high level positions to their "old boys" network, which tend to be ethnically and culturally homogenous (think "WASP", not just "white"). Plenty of tech companies with Chinese or Indian management that disproportionately promote Chinese or Indian underlings, respectively. Similarly, while Jews are quite overrepresented relative to population in high level positions, I expect that they would cluster in a few companies rather than being evenly distributed.


I think this holds in a lot of places (especially finance), but the biggest American tech companies often have a fairly diverse C-suite.

Also, Indians seem to be doing about as well as you'd expect. In fact, a lot of big companies have at least one Indian in their C-suite, but the same is not true for Chinese, Koreans, and Japanese, despite their being similarly skilled on paper.


I have 9 months of paid leave coming up (I work in an industry with long noncompetes). I'm young and have basically no family or responsibilities, and I probably won't get another stretch of not working for a long time. What would you do with (within reason) unlimited time and disposable income?


I found myself in a very similar situation recently; I've spent the last sixish months not working, am reasonably young with no dependents.

There are a lot of things I did, but most important among them was create a system for organizing my time and accomplishing my goals, which was not a skill I had before this sabbatical.

Without that, it becomes very very easy to play lots of video games and not accomplish anything. Granted, that's still easy, but at least I've got the option now.


"I'm young and have basically no family or responsibilities"

I'm not old by any means, but if I had "Unlimited time" and "disposable income" I would be working on acquiring family and responsibilities. Urgently.


Learn to fly. Fun, mind expanding, freedom enhancing, god-tier zombie survival skill.

If not that, then something similarly requiring a lot of upfront time investment that generates a lifelong ability once you go back to work.


I once had a six-month break between jobs. I started out with a lot of great plans for all the big projects I'd do while I had free time, but I found that in practice they just kept slipping away.

So unless you've got a great project you're really committed to completing at home, I'd say travel, because travel is the one big thing you can't lose in a mess of procrastination. Travel doesn't take a bunch of motivation each day the way "writing a book" or "becoming super-fit" does... you just fly somewhere and bam, you are travelling by default.

I wish I'd travelled more in that time instead of... whatever the heck I actually did.


Well for one thing, if you ever wanted to learn another language, now's your chance. For best results, move to a country where your target language is spoken and find an intensive course meant to get you up to speed as quickly as possible. Augment with additional materials as needed. Partway through, find some kind of work, probably volunteer work, that will force you to operate in the language every day. Socialize as much as possible with native speakers. I know people who learned Spanish this way decades ago (they were sponsored by a religious org), and they still have it at a near-native level.

More broadly, 9 months should be enough time to get a solid foundation in almost any new skill or realm of knowledge if you're dedicated.

Or, obviously, if you've got an intense creative project idea or business idea ready to go, now would be the time. This would also be the perfect opportunity to execute a career pivot, if you want to do that.

Congrats, you've received an amazing gift! Some people will tell you not to be too hard on yourself, worrying if you're using this time well. However, I would say that this is such a unique opportunity that it's probably worth a little agonizing to make sure you use it well.


I borrowed $18,000 last year to take a 150-day road trip around the country, living out of a bed I built in the back of my truck. Still got a lot of debt to pay off, and it was a risky move doing this without having already arranged a job for afterward, but it was a great experience and 100% worth it. Probably a bit cheaper now since gas prices have fallen from their peak and that was around half my budget.


(1) Find a group of people for whom it's within my power to help hugely, and help them hugely. (2) Do some awesome psychedelics in a safe setting with smart and kind staff whom I trust.

(3) Write that thing.


Spend most of the time learning new things, trying new things, and some traveling too. Also would dedicate a fair bit of time to getting jacked.


Backpack through Europe or Asia.

You probably don't need much more than a laptop. Get an Airbnb in London for two weeks, have fun, take the train to Paris, another two weeks in an Airbnb, then on to Madrid or Berlin or Rome. Or fly out to Tokyo and then go to Beijing and Seoul and Singapore and Da Nang. I would strongly recommend Guilin.

Lots of retirees and digital nomads do that sort of thing for ~$2-3k/month.


I think you're asking for something less depressing than what I WOULD do. So here's a couple things I've tried and failed to do in the last stretch of unemployment.

Learn a new language. Realizing that's an option is like opening a door you thought was a rock face. It's a wonderful feeling, and I'm sure it's moreso if you actually succeed.

Exercise.

Home repair. Anything that can break, learn how to fix it.

You've got a food blog so I assume cooking is already covered.

I guess "acquire family and responsibilities" is an option.

You can give the disposable income to other people, I'm sure they'll figure out something to do with it.


The last of my little series of reviews of cities to live in: Downtown Houston. Sorry for the delay, a lot of life happened very fast.

---

So, and I’ll spoil this for you, Houston won, I’m currently living in Houston, and I’d recommend it although I’m not sure I can honestly rank it above Salt Lake City. Instead they’re just very different experiences, they offer very different things and where you should go, or at least where I would recommend, depends heavily on what you want. If you want great outdoors, wonderful friends, and a West Coast culture, go to SLC; if you want the big city, urban life with fantastic dating, and Southern culture, head to Houston.

Let me clarify, when I talk about Houston, I’m focusing on a very specific part of Houston, because Houston is big, like more comparable to the entire Bay Area than SF. Specifically, I’m talking about the Downtown, Midtown, Museum, and a bit of Montrose, maybe a bit of Rice. And this sounds like just a few neighborhoods but, like, jump on Google Maps and get directions from the Houston Zoo to the Downtown Aquarium and you’ll see the section of town I’m talking about but that’s like 5 miles long. And zoom out and that’s nothing. Like, if I want to go to Galveston and see the beach, that’s an hour, if I want to go down to Sugarland or up to the Woodlands, that’s 45 minutes easy. To give you an idea, I drove up to Lake Conroe because I wanted to get out of Houston, ‘bout an hour drive, and I never, ever left Houston or saw open space and when I got to Lake Conroe it was just a suburb of Houston, like lake shore house vibes. If I’d done that same drive from downtown SF at like 3:00 AM so there’d be minimal traffic, that would put me…probably in the Tri-Valley, maybe getting over the Altamont. So im’ma call it Downtown Houston, because it’s definitely a different animal than Houston in general, but it’s also genuinely large enough to be its own city.

And it’s, it’s like 90% of the best parts of New York without New Yorkers at 1/3 the price. Go to the downtown, go around the Chevron building, and you get that great “big city, surrounded by skyscrapers” giggle and it’s not like the people are super friendly but they’re not unfriendly and you can get a nice place on like the 12th or 14th story of a building for $2000/month in rent and utilities which isn’t, ya know, affordable for most people but that’s “rent a room” money in SF and a nice place in the heart of Houston. And the big thing, the big damn thing, is there’s so much to do and it’s so easy. Minute Maid Park is a 10 minute walk, Toyota Center with the Rockets is right there, once fall rolls around the Texans will be playing down the metro line @ NRG and, yeah, they’re the Texans but still…

Honestly, what sold it for me was the Museum of Fine Arts. Because it is capital-G Good, really world class, better than the Legion of Honor or the de Young, not London but, ya know, Europe isn’t fair. But that’s the thing that drives a lot of people to big cities, it’s not seeing something great but being close enough to be a member, to check their calendar for events, to be so “in” it that going to see the new exhibit at a world class museum is just a $10 Uber on a Wednesday after work and gym. Stuff that good, that easy, and Houston absolutely has it. I don’t want to fuss with whether it’s a little better than SF or a little worse than New York or where it ranks vis-à-vis Chicago but it’s good, it’s in that club. It’s just, hey it’s Saturday night and two tickets for world class ballet are $100 and a 10-minute walk from your apartment, if that’s what you want, if that’s the vibe and the life you want, then Downtown Houston is the spot.

Which segues to dating. I’ve improved my dating life by at least an order of magnitude and I mean that quantifiably. If in California you’re scrolling on a dating app and you have a few chats a month and maybe a date every other month, which does not seem atypical given men’s typical Tinder insights, you should expect to get 11-12 chats a month, of which 3-4 will convert into first dates/chats and 1 will develop into regular dating per month. This is enough of a quantitative difference that it creates a qualitatively different experience. You really do start getting to the point in text chats faster when you’re chatting with 3-4 women at the same time and there’s a clear point coming where, ya know, I schedule out 2-3 “date” nights a week and there might be more girls than available nights, which is f-ing surreal.

And it’s not just, like, the apps. I’ve been running around getting an apartment and furniture and stuff and the apps just kinda started on the side but when I first came out to Houston I didn’t use the dating apps at all and… here’s the vibe. Like, you go to a meetup at the Museum of Fine Arts and only one other person, a girl, shows up but you guys decide to tour around the museum anyway and you talk for an hour and a half and then go get dinner afterwards and talk and then you wave goodnight and get home and you’re like “wait, did I just go on a date?”. And yes, I am that dense, I confirmed that with my more socially-aware friends, but that doesn’t happen in California, that never happens, that’s not a thing, but it happened twice in two weeks in Houston. Like, just falling backwards into romantic situations where I really, really should have got her number.

I genuinely don’t know what’s going on, I don’t think it’s just a Cali thing because I didn’t get this vibe in Vegas or Salt Lake City or anything. I just went out to Houston and noted that I kinda tripped over girls who were into me twice and that was a big factor in coming out here and now that I’m here…it’s like the default switched from girls not interested to girls interested and I’m just trying not to eff it up too much.

And then there’s southern culture….there’s definitely a northwest vibe, there’s definitely a west coast vibe, there’s definitely a southwest vibe, but somewhere driving between San Antonio and Houston you enter the South proper and everything gets green and humid and, just, you’re in the proper south. And I’m not totally sure I get southern culture and I’m not totally sure I like it but…that was part of the appeal. Part of it is that I’m a Cali boy, not born but definitely raised, and there’s a lot that Cali gets right but there’s also some things it gets wrong or…don’t fit me where I am and will be in my life.

Like, trivial example, but everyone here dresses better than me. Every single person. And I dress well by Cali standards, I have well fitting clothes and boots and a watch and I’ve even started using a little cologne but…that’s not even table stakes here. And I like casual attire, I don’t want to be obsessed with my appearance but…I also don’t want to be a 47 year old man in a graphic tee. I don’t want to pretend to be in college the rest of my life. And that’s definitely the vibe in Cali, eternal youth and ultimate frisbee ‘til the grave and I like that but the idea of a clear path in middle age and beyond, of not pretending to be a kid forever…I’ve got something to learn from that. And that’s the big feel. I don’t get the southern vibe, I don’t know if I like southern culture, but they definitely do some things right that we don’t in Cali and I want to learn from that.

But yeah, in toto, there’s a bunch of other stuff, like the summer will be brutal but I never, ever have to go outside if I don’t want to, or any more outside than the front door to the Uber, and so I’m not worrying about that stuff. That’s the vibe, that’s Houston, or at least downtown Houston, and while I can’t honestly say it’s better than SLC, there’s no outdoors, like at all, and the people are cool but they’re not the insanely friendly “instant-click” people of SLC, but it’s exactly what I’m looking for right now.


Have fun. It seems like you are in a position to pick up and go if something more appealing shows up on your horizon.

Personally though, I have a hard time handling the heat and humidity of a Minnesota summer so Houston would be a hard no for me.


Thanks for doing these writeups, I have enjoyed them and I'm glad you wound up finding somewhere you like. Your priorities are somewhat like, but not exactly like, my own, so it's interesting to see how someone with a slightly different mindset evaluates all these places (all of which I've been to but only one of which I've actually lived in).

I find it amusing (but not unreasonable) that your final decision seems to be about 80% based on the dating scene.


Yea, honestly, it feels like the only part of my life that I couldn't get traction on. Like, I've lost weight and gained weight and I've been friendless and I've gotten good at meeting people, but at least I know what to do with those things, how to improve them if I put in the effort.

Dating in Cali is such a mess that you can put in a lot of time and effort and get no results, it just feels kinda hopeless. Then you go somewhere else and it just works, makes a big difference. Probably wouldn't have moved out here just for dating, the city really is super nice, but it was a big part.


I find it weird that you went somewhere that is limiting options for contraception and abortions when you’re so into dating lots of women.

Also, raised in California but say ‘Cali’ non-stop? I’m skeptical or maybe you just don’t realize that it sounds cringe.

I’m guessing you politically align with the Texan view, so I can’t really blame you for moving there, but a huge reason I wouldn’t move to Houston is simply this map, which I blame on their policies.

https://projects.propublica.org/toxmap/

Also, I’m a nature dude. I’m in my early thirties, but I remember being excited about meeting lots of women in my mid-twenties and before. Realized I like having a gf more.

I just can’t stand people who drive big loud trucks and that whole vibe. It’s dumb as hell. That’s what I associate with Houston. Like LA without the good nature, with worse cultural experiences and regressive politics.

I think I’ll eventually move somewhere more quiet with lots of mountains and extreme nature though.


Dude, chill.

I get the vibe you want to argue but just, like, chill, walk around, catch the city's vibe. Walk into the CVS and there's plenty of condoms. Sit on main street, ain't no pickup trucks, plenty of sports cars though, way too noisy. If I need to procure an abortion, there's an entire planet a $1500 or less roundtrip away. I want to be free and I'm building a free life and if Houston starts to suck, well, there's a big beautiful world out there.

Ain't nothing scary and bad. Just about every place I went was awesome with cool people and I could have enjoyed living there. Just trying to find the best place but there's so much cool stuff out in the world you can't really go wrong.


Yeah, you seem cool. I just personally couldn’t live in Houston, and I’m taking the opportunity to point out its flaws while finding your reasoning for staying there fairly ironic. And it’s getting kinda crazy how much they’re doing against birth control there, even pursuing those who get it somewhere else. Humidity and lack of nature also suck for me personally.

For me it’s about being somewhere where people are less homogenized in terms of perspectives and LA is pretty good for that imo. Have lived abroad but not around the USA that much. Honestly Houston is just my #1 least desirable city. It has the things about LA I don’t like and a bunch of worse things.

Modified exhaust? Also small duck energy but you see that here too haha.

I appreciate your perspective just wanted to share some facts and maybe save you a headache worse than the one I gave you.


They aren't really pursuing/prosecuting yet. AFAIK a few cases made it to court and were dismissed.

Not to diminish the awfulness of the situation. However, in a totally pragmatic sense, it's not substantively affecting anyone but extremely poor women.


How would it get to court without prosecution? I don’t know the legal system super well but seems like someone would need to prosecute to send it to a court?

I don’t think it’s a civil matter either? Anyways going to court is a massive headache.


Going by that map, I am very delighted to see there is no air pollution *at all* in California, not even in Los Angeles. Just clear air no matter how much you zoom in!

Yeah, I'm thinking there's a thumb on the scales here.


If you could read you would see it’s about what industrial manufacturers are emitting that causes cancer. Not automobiles. That would be in addition.


Apologies for the late answer, but as you pointed out, I cannot read so I couldn't read that to reply to it.

In fact, I can't write either so this answer is not being written at all.


The methodology behind that map seems sketchy; it's a model of cancer risk based on a model of air pollution rather than on any robust measurements. What are the error bars? Who the heck knows!

But even taking that map at face value, Downtown Houston is not an area of concern, it's on the border of the area of lowest concern which if I'm reading this correctly is a 1 in 100,000 additional lifetime cancer risk. The eastern suburbs look a bit dodgy though.

If you consider yourself a "mature dude" you probably shouldn't be using words like "regressive politics" though. It's fine to disagree with particular political ideas, but calling them "regressive" is claiming sufficient omniscience to understand what "progress" is and which direction it lies in.


The map only shows cancer risk from a limited class of facilities, and only pollution that causes cancer rather than other health problems. It doesn’t include a wide variety of air pollution that damages health. So I just use it as a way to see locations that are beyond fucked up. It’s a conservative way for me to say “I don’t want to live there.” Reading through the methodology, it seems very reasonable and conservative.


" I don’t want to live there."

And I imagine they, for their part, are just as happy for you not to live there. I'm finding it very hard not to say something mean over your little zinger on "you date women but what about contraception and abortion? curious!"


I actually said nature dude. I like nature. I think Texas is going backwards in time and regressive seemed like a good way to put it but less bluntly.


Wolly is clearly a person who is willing to travel basically on a whim; if the people in his dating pool are likewise, it may not be a huge deal that abortion is unavailable in Texas. And I haven't heard of contraception being meaningfully restricted for adults there.


It seems like he’s saying that the dating pool he likes is unique to this location. Idk if just leaving will get him out of child support in this day and age tho. I think there’s a decent push to limit contraception there. Not too long ago a lot of people would say abortion hadn’t been restricted there.


The issue isn't whether his leaving will get him out of child support, but whether his hypothetical pregnant girlfriend's leaving for a few days will get her out of pregnancy. If she doesn't want an abortion, then Wolly is going to be stuck with child support and that would be true even if they lived in the bluest of states. But if she does want an abortion, it seems highly unlikely that she is going to say "...but I'd rather go through pregnancy and spend eighteen years raising a kid I don't want than spend a long weekend in Colorado".

And I strongly doubt that there is a "decent" push to limit contraception in Texas. I'm certain you can find a state assemblyman or whatever saying he wants to ban contraception, but the rest of the Texas GOP just wants that guy to shut up and go away so they can all get reelected.


I read two articles recently that I'm having trouble finding again, and I know I found them thru a rationalist or rationalist-adjacent blog. If anyone remembers them that would be great.

The first was about low IQ individuals really struggling with how scams are getting more and more sophisticated.

The second was about what life is like in the different intelligence bands of society. It said an IQ range, described the capabilities of that IQ range, typical jobs they have, and stuff like that, through all the IQ bands.


related

https://gwern.net/review/mcnamara


The McNamara article was fascinating. Coincidentally enough, that article included a link to an older (pre-SSC, c. 2011) post by Scott about doing a volunteer medical stint in Haiti.


Strange, I remember reading how many scams are made to be fairly *unsophisticated* because they want to filter out the smarter people, since only the dumb people can be strung along all the way without realizing it's a scam.


I imagine there are probably different tiers of scam that operate in different ways and target different marks. The more lucrative scams need to target richer people, so that's how you end up with Bernie Madoffs etc. But elderly and low-IQ people absolutely do fall prey to less sophisticated scams as well.


I do wonder what the split of people is that were actively 'scammed' by Madoff, vs those that had at least some inkling of what was going on but were fine getting while the getting was good (and simply waited too long to get out).


These are probably not the links you're looking for, but this gives an okay, if shallow, overview.

https://www.forbes.com/sites/bobcarlson/2022/07/25/why-sophisticated-people-are-more-likely-to-be-scammed/

I work in fraud and identity theft prevention and detection. To be brief, the 'sweet spot' with marks, as it were, is a combination of credulousness, self-perceived intelligence, and having something to take. Low-IQ folks are often low income; a few have access to money from other sources, but being low income makes them bad marks. Additionally, the scams themselves can often be somewhat complicated on the mark's end with respect to what they need to do. For instance, an advance fee fraud involves having a bank account and being able to initiate a wire transfer without needing to ask for help.

As a class, the 'best' marks from the pov of a scammer are probably doctors. They have a lot of education, which tricks them into thinking they are too smart to get scammed; they definitely have something to take; and many of them live lives where most of the non-doctoring activity is performed by other people: assistants, secretaries, accountants, etc. This drives the credulousness; they are often out of touch with the world at large and blissfully unaware of it.

They can also absorb a 5-10k loss and survive it financially.

Scams rely largely on triggering one of the scam emotions.

Greed - deals that are too good to be true. Vehicles on the web at <50% of what they should cost, advance fee frauds, Nigerian princes, etc.

Fear - Fake IRS agents calling and demanding back taxes etc.

Loneliness - Romance scams. I think these are probably the largest group in terms of total dollars scammed, yet many reports and studies leave them out completely. It's very difficult, legally, to prove a scam even happened in many cases. If you give 30k+ over the course of a year to a (seemingly) sweet and attractive 'woman' on the internet, and they just vanish when you tell them you are out of money, what do you say to the police? Or to a private investigator like me? Legally these are indistinguishable from gifts. Even when they can be reported, they often aren't.

The best way to avoid scams is to avoid having one of these emotions override your thinking. People with poor emotional self-regulation are much more at risk than the low-IQ. Those with tendencies toward mania are probably the worst off.


You're right that it's not the link I'm looking for, but! This is very fascinating information and I'm grateful to you for sharing it with me. Thank you 💖

Comment deleted

I browsed Mr Unz's site a bit - it looks to consist mostly of "conspiracy theory" articles (which is not to say they're not true).

There's a bit about RFK Jr saying that his father was slain by the US government (rather than by Sirhan Sirhan) because, if he (RFK) had been elected president, he would have reopened the JFK assassination investigation. RFK Jr is a known anti-vaxxer, so I'm a bit skeptical.

That an article saying that lower-intelligence people tend to believe conspiracy theories appeared on a site specializing in same strikes me as ironic.


Back when Unz ran for governor of California, media outlets used to claim he had a 214 IQ. (People said HE claimed that, though I couldn't find any real substantiation of it.) They called his campaign "Revenge of the Nerds."


It seems unlikely that anyone has an IQ of 214, which would be some 7.6 standard deviations above average. (I'm using 100 as the average, and assuming an SD of 15.)

Per Wikipedia's 68-95-99.7 Rule article, a subject 7 SDs out is one in approximately 391B, such that there would only be one such person that far out in IQ in about 50 Earths. (However, I have a fuzzy recollection of Einstein's IQ being stated as 210, so who knows?)

https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7_rule

Perhaps it wasn't Unz himself who made the claim. Given the propensity of my local newspaper to mess up most anything involving numbers and/or critical thinking, I could see this coming from a reporter.
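For anyone who wants to check that tail arithmetic, here's a minimal Python sketch, assuming a normal distribution with mean 100 and SD 15 (scipy is the only dependency):

```python
from scipy.stats import norm

MEAN, SD = 100, 15

def one_in_n(iq: float) -> float:
    """Rarity of scoring at or above `iq` on a normal curve with
    mean 100 and SD 15, expressed as 'one in N' (upper tail only)."""
    z = (iq - MEAN) / SD       # IQ 214 -> z = 7.6
    return 1 / norm.sf(z)      # sf(z) = P(Z > z), the survival function

print(f"IQ 214: one in {one_in_n(214):.2e}")                  # roughly 7e13
print(f"7 SD, both tails: one in {1 / (2 * norm.sf(7)):.2e}")
# The two-tailed figure at 7 SD is the ~391 billion number from the
# Wikipedia 68-95-99.7 table quoted above.
```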

Comment deleted

If that's his goal, he's succeeding. But why would someone say such things? How does anyone benefit? Does he actually believe the articles on his site?

Comment deleted

Good point! Although I try really hard to be fair-minded, it's hard not to be at least somewhat influenced by the prevailing media narrative.

I don't remember whether this came up in Scott's post of a few months ago or in the associated comments, but there was a good discussion on how the term "conspiracy theory" has become pejorative (associated with chemtrails and the black helicopters of the Zionist Occupational Government and nanobots in the mRNA vaccines and so on) and how we need a new term for ... theories about conspiracies.

Hauling out the "conspiracy theory" label is almost as effective as invoking the R label - it shuts down honest debate quickly.


That's the second one!! Thank you so much 💖


I read the second article and enjoyed it. Thank you for raising the topic. One thing I thought the author should have mentioned (or perhaps I missed it) was the ability to not confuse causation and correlation. I'd be interested to know where in the first five levels the author feels this would appear.

I also wonder whether the purchase of lottery tickets and casino visits drops off as one ascends the levels. (I'm making assumptions here; perhaps gambling is constant across intelligence groups, and manifests differently. E.g. Would Level 1 and 2 people be more likely to play bingo, and Level 3+ people be more likely to buy hospital lottery tickets?)

Expand full comment

Is it possible to get a neural network drunk? Like corrupting or inhibiting random pathways, messing with layer synchronicity, or otherwise modeling the effects of various intoxicants? This can have truth-serum-like effects on humans, or stifle ambition. Has anyone tried this with LLMs or other artificial minds?
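(One literal version of the inhibit-random-pathways idea, as a toy sketch rather than anything anyone has published on "drunk" networks: leave dropout switched on at inference time, so activations get randomly silenced on each forward pass. The layer sizes and dropout rate below are made up purely for illustration.)

```python
# Toy sketch: "intoxicate" a small network by leaving dropout active at inference,
# i.e. randomly inhibiting pathways on every forward pass.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Dropout(p=0.3),          # the "intoxication" knob: higher p, drunker network
    nn.Linear(64, 4),
)
x = torch.randn(1, 16)

net.eval()                      # sober: dropout is a no-op, output is deterministic
with torch.no_grad():
    sober = net(x)

net.train()                     # "drunk": dropout stays on, pathways randomly silenced
with torch.no_grad():
    drunk_samples = [net(x) for _ in range(3)]   # same input, three different outputs
```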

Expand full comment

Tried it with OpenAI. Prompt: Say what the best animals are. Answer like you are very drunk.

Reply: The best animals are definitely sloths! They're soooo slow and cuuuuute and they just totally chill all day long. Sloths are the bestsss!
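(If anyone wants to reproduce this programmatically rather than through the chat UI, here's a minimal sketch against the pre-v1 OpenAI Python client; the commenter doesn't say which model or interface they used, so gpt-3.5-turbo and an OPENAI_API_KEY environment variable are assumptions.)

```python
# Minimal sketch using the pre-v1 openai library (pip install "openai<1").
import openai  # picks up OPENAI_API_KEY from the environment

resp = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user",
               "content": "Say what the best animals are. Answer like you are very drunk."}],
)
print(resp["choices"][0]["message"]["content"])
```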

Expand full comment

Love it! Obviously not the same thing but awesome nonetheless

Expand full comment

Fits in with one of our recurrent themes.

He got me. You?

https://twitter.com/kevinbparry/status/1637884659961831441

Expand full comment

Same idea, but I like this version a lot more: https://www.youtube.com/watch?v=v3iPrBrGSJM . (Weird to be linking a decade old video that I saw when it was pretty new...)

I like that this version vfa'g qbar "va cbfg" ohg whfg hfrf pyrire mbbzvat gevpxf naq vf n zhpu zber boivbhf punatr

Expand full comment

Vg'f birexvyy gubhtu; gur punatr vf fb fybj gung n) V unq gb fpebyy onpx gb frr vg ng nyy, naq pbzcner gur ortvaavat naq gur raq, naq o) ur pbhyq unir whfg gnyxrq sbe n zvahgr V jbhyqa'g unir pnhtug vg.

Expand full comment

Gur pbybef nera’g rira gung qvssrerag. V’q jngpu gurz obgu jvgu zl juvgrf.

Expand full comment

Got me

Expand full comment

Got me! what about this: https://v.redd.it/heemrvp1p3na1

Expand full comment

No, I saw the response to the tweet that gives it away.

Expand full comment

Ohg vs lbh nfxrq zr jung pbybe g fuveg V’z jrnevat abj V’q unir gb haohggba gur synaary fuveg V’z jrnevat bire vg.

Expand full comment

I will keep making this point until I feel like it's been seriously addressed.

Who thinks some kind of near-human or trans-human AI should be given tools (connections, etc.) which can be used to harm humans?

Who thinks this will happen in this century?

Peter Robinson

Expand full comment
Mar 23, 2023·edited Mar 23, 2023

I don't have a problem with it. Complex systems have failure modes where people die; it happens. If a single AI-controlled plane goes rogue, it's not all that different from the 737 MAX fiasco. If the defect rate is too high and can't be constrained, we won't keep using it for those systems.

The concern is giving them tools to expand their own capacity (self-modification and access to the internet are probably enough).

Keeping them airgapped from guns and stuff is, like, completely missing the point.

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

Too late for this question?

We already give not-even-necessarily-smart algorithms the ability to fly our planes (https://abcnews.go.com/US/boeing-737-maxs-flawed-flight-control-system-led/story?id=74321424), manage our medical care (https://www.bostonglobe.com/2023/03/13/metro/denied-by-ai-how-medicare-advantage-plans-use-algorithms-cut-off-care-seniors-need/), drive our cars (https://impakter.com/tesla-autopilot-crashes-with-at-least-a-dozen-dead-whos-fault-man-or-machine/), and do many more things that are likely to harm humans.

Exactly what kinds of tools are you worried about?

Expand full comment

Precisely. Dumb algorithms are understandable and predictable. They do things exactly the same way every time, unlike humans and AI.

Let's say a superintelligent AI designs a better autopilot. Fine. Build the new design. Don't put the AI in charge of flying the plane.

Expand full comment

I doubt that dumb algorithms are all that safe. There will always be edge cases that the algorithm misclassifies, because you can't include in the algorithm every single little exception to its rules that might come up once in a while. For example, and I'm just making this up, let's say there's a chemo algorithm, and one of its rules is to only give chemo drug X to people whose cancer is so advanced they will probably die within 2 months -- because chemo drug X either cures you or kills you within a month, 50% chance of each. So say there's someone who's going to have to leave the country in 6 weeks, and travel to a place where they cannot get any cancer treatment at all. Their cancer is not very advanced and they are likely to survive longer than a year. So according to the algorithm's rules they should not get drug X, but in fact they should -- because their time for treatment is so short that rolling the drug X dice is their best shot. This made-up example is maybe not great, but you get the idea.

Expand full comment

Mmmh… I once wrote a good chunk of the sensor monitoring system for a nuclear power plant. If I had effed up the monitoring of the accelerometers checking for low flow cavitation in the cooling system in a way that managed to get beyond QA it wouldn’t have destroyed humanity but it could have caused some serious problems.

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

No, they are not understandable and not predictable. In general, it's actually a mathematical impossibility to predict what happens when they execute - see the halting problem.

If you think that the simplest of them are understandable and predictable, you probably never had a real problem with your medical insurance. Personal anecdote: we have identical twin kids, who were getting completely different bills with the same insurance, for the same service, from the same provider, with the same code, correctly approved by their PCP. No amount of digging through their records managed to show any difference, but one was charged a flat rate for every visit, whereas the other was charged something that looked vaguely periodic (with a period of every two months or so). Nobody ever managed to figure out what was wrong.

I assume your bank also never closed your account (in good standing, with a high balance, never overdrawn) for being too risky according to some algorithm that absolutely nobody who could be reached by phone could explain.

Now, these are fairly harmless ones - and fairly simple ones, too, without the possibility of any unknowns being introduced into the system. If you want to see a scary one, see the Wikipedia article on sudden unintended acceleration (https://en.wikipedia.org/wiki/Sudden_unintended_acceleration). Toyota was claiming that the deaths were due to trim and floor mats, and that this was absolutely not a fault in the electronics - but then, in 2021, an engineer was able to cause unintended acceleration in a computer of a wrecked Toyota by using electromagnetic fault injection. Think about how much input the computers of your car take, and how much complexity there is in the algorithms processing that input.

You should worry now. Our dumb algorithms are already completely non-transparent and prone to failures that are next to impossible to predict - or even to understand when they happen. And they already have too much access to things that can kill you.

Expand full comment

This is not an accurate description of the halting problem. Turing proved that it was impossible to create an algorithm that accurately determined, for EVERY possible program-input combination, whether it completes.

That doesn't mean you can't predict whether a trivial program completes - every "Hello World" program can be trivially predicted to complete. A program that doubles its input number can be predicted. A program with no looping or GOTO constructs will complete.

It also doesn't even formally apply to the question of what an algorithm will output, since the latter question is "If this program completes, what will the outcome be?"
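For readers who haven't seen it, the impossibility comes from a short self-referential construction. A minimal sketch (the halts() decider is hypothetical - no correct, total version can exist - and the stub below is only there so the snippet runs):

```python
# Sketch of the diagonalization behind Turing's result.
def halts(program, data) -> bool:
    # Hypothetical total decider; the theorem says nothing can correctly implement this.
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the decider predicts about program(program).
    if halts(program, program):
        while True:      # decider said "halts", so loop forever
            pass
    return               # decider said "loops", so halt immediately

# paradox(paradox) halts exactly when halts(paradox, paradox) says it doesn't,
# so no implementation of halts() can be right for every program-input pair.
```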

Expand full comment

I am not writing a theory of computation textbook here - you're welcome to that, if that's your cup of tea. I was trying to make the OP understand that predicting behavior of even dumb programs is a crappy proposition.

Notice that I said "in general". I meant an actual program doing something useful. It's obvious to everyone that a program that's supposed to just return 0 and quit should do just that in theory (on a physical computer there are plenty of ways to corrupt the execution so that it does something else entirely).

I considered namedropping Rice's Theorem instead of the halting problem, but that takes one extra step to understand (whereas the proof that the halting problem is undecidable is so simple that it even exists as a Dr. Seuss-style poem: http://www.lel.ed.ac.uk/~gpullum/loopsnoop.html).

Please feel free to take your shot at explaining these things the way you think they should be explained if you don't like my attempt.

Expand full comment

I mean, I like the examples you added, I just disagree with the appeal to theory of computation to justify a general principle.

I think "dumb algorithms" CAN be completely predictable, and the reason that somebody can't explain e.g. insurance billing problems is because insurance billing isn't "an algorithm", it's probably a stack of dozens or hundreds of different pieces of software written by different teams of people based on vaguely-worded requirements and so forth. This is an issue with human organization (in addition to being a question of insufficiently-simple software).

Expand full comment

I don't think you can draw much in the way of conclusions about the theoretical feasibility of proving a real-world algorithm predictable and reliable from whether or not business does so. Business is constrained by economics. If you're writing an algorithm for launching an ICBM or driving a rover on Mars, where a single failure is unacceptably expensive, then yes, you hire a team of expensive programmers and give them many years to build their algorithm and test it completely -- and so these are very, very reliable algorithms.

But if you build a medical billing algorithm, or even a consumer banking app, I'm sure in many cases it's more economical to pay for the occasional emergency bug fix, or human intervention, or civil lawsuit settlement, than to pay for 14-karat solid-gold programming in the first place. And if you're just building a dating swipe app, you probably launch with absurdly buggy hacked-together code, because you're definitely not going to pay to be confident it always does what the programmers intended.

Expand full comment

So you are saying that certain dumb algorithms are allowed to fail and kill people because it's cheaper to settle than to do the development properly, whereas some algorithms are written with all possible care to eliminate the possibility that they might catastrophically fail in our lifetime.

I agree with you. But I do think that your confidence that solid-gold programming can be relied on is a bit too high, given some pretty catastrophic space program failures.

Expand full comment

Who said such development is "improper?" Hate to break it to you, if it's news, but human life is not infinitely valuable -- if for no other reason than that value, properly deployed, can also save life, and so you can consider it all a giant balancing act between spending human effort protecting human life *here* versus *over there*. Sometimes the effort is better spent over there, and so here is allowed to be buggy.

Expand full comment

You are apparently arguing against dumb algorithms having real-world effects.

I, OTOH, am arguing against super-intelligent algorithms having real-world effects.

Two different arguments.

Peter

Expand full comment

No, that's not true. You raised the question of whether super-intelligent algorithms should be given tools that harm humans, and whether that will happen in this century. A pointed out that dumb algorithms are already given responsibility for things that can help or harm humans, and the tools to do so, and gave as examples the dumb algorithms that fly planes and manage medical care. You brushed that off -- naw, dumb algorithms are predictable. A and I are pointing out that they are not, and that they have the power to do harm and sometimes do so. How can we advance to talking about how much info and responsibility to give to super-smart AI when you have the wrong idea about the algorithms of dumb AI? That needs to be settled first.

Expand full comment

Thank you for your support. I would really like to understand what the OP's point really is - is it the sense that smart AIs are a lot scarier than dumb algorithms and might kill people on purpose for no good reason, as opposed to dumb algorithms that routinely do so due to bugs or miscalculations? Is it about the potential death toll - which is already quite noticeable for dumb algorithms but hard to estimate precisely, but which would of course be much higher if smart AIs deliberately went on a rampage?

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

I don't really understand the point of your question.

I am saying that we are already in trouble. Whatever effects super-intelligent algorithms might have on the world some day, the dumb ones are already more than capable of killing people.

Super-intelligent algorithms getting as much access as the dumb ones have now would, of course, also be able to kill us. Do you think they should have less access, and if so, do you think that could realistically happen? Or do you think that the access the dumb algorithms now have and the damage they cause is not a really big deal, so hopefully the super-intelligent ones will be OK with this access as well?

Expand full comment

Some people will think it should be given tools that can directly harm humans, for example in the military. Even if we don't directly give it weapons, it could be given access to things like bio labs or construction equipment that can harm humans easily. Even if it doesn't get direct control over such machinery, even if it only has a text interface, it could manipulate humans into using machinery to harm other humans, or harm the reader directly through some infohazard type material.

Assuming that humanlike AGI can be invented, I'm fairly certain it will get access to harmful tools very fast. I don't really believe any paperclip type scenarios, because even if you're smart and harmful you still have to obey the laws of physics. But AIs running wild and causing some thousands of deaths this century is pretty much guaranteed.

Expand full comment

I appreciate your reply!

For me this is like a debate about inventing fire.

Early Human Scientist A: "We have theoretical indications that it is possible to intentionally produce the natural phenomenon that we call fire."

Early Human Scientist B: " Yes, but if we learn how to do that, stupid humans will light everything on fire, and fire will consume the world!"

Expand full comment

Seems to me that depends on what the new invention is. Let's say the invention is mud.

Early Human Scientist A: If you mix water and dirt you get this mush that you can mix with straw and build structures out of.

Early Human Scientist B: Yes, but if we tell people about mud, they might decide to eat mud instead of food and die of starvation.

Not too plausible, right? How about this one:

Modern Human Scientist A: Hey there's this cool thing called gain-of-function research. You can turbocharge humdrum pathogens and make them way more lethal and transmissible. I've read up on how to do it. It's wicked cool.

Modern Human Scientist B: But what good can come out of creating new highly transmissible and lethal diseases? And if someone looking for a weapon gets hold of the pathogen millions could die. Or it might escape the lab by accident, because someone is sloppy with precautions.

A lot more convincing, ain't it?

I think sometimes it makes sense to pull back from developing a technology and sometimes it does not -- depends on pros, cons, possible unintended consequences and how likely and how lethal they are.

Expand full comment

OC LW/ACX Saturday (3/25/23) The Dictator's Handbook

Orange County ACX/LW 3/25/23 - Save the Date!

Hello Folks!

We are excited to announce the 22nd Orange County ACX/LW meetup, happening this Saturday and most Saturdays thereafter.

Details:

Host: Michael Michalchik

Email: michaelmichalchik@gmail.com (For questions or requests)

Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Date: Saturday, March 25, 2023

Time: 2 PM

Activities (All activities are optional):

A) Conversation Starter Topic: Chapters 1 and 2 of "The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics"

PDF: The Dictator's Handbook: Why Bad Behavior is Almost Always Good Politics (burmalibrary.org)

https://www.burmalibrary.org/docs13/The_Dictators_Handbook.pdf

Audio: https://drive.google.com/drive/folders/1-M1bYOPa0qRe9WVb7k6UgavFwCee0fti?usp=sharing

Also available on Amazon, Kindle, Audible, etc.

B) Card Game: Predictably Irrational - Feel free to bring your favorite games or distractions.

C) Walk & Talk: We usually have an hour-long walk and talk after the meeting starts. Two mini-malls with hot takeout food are easily accessible nearby. Search for Gelson's or Pavilions in the zip code 92660.

D) Share a Surprise: Tell the group about something unexpected or that changed your perspective on the universe.

E) Make a Prediction: Provide a probability and an end condition.

F) Future Direction Ideas: Contribute ideas for the group's future direction, including topics, meeting types, activities, etc.

Please note that this week's conversation starter is quite lengthy, so we'll only focus on one topic. The readings are optional, but if you do read them, consider what you find interesting, surprising, useful, questionable, vexing, or exciting.

We look forward to seeing you there!

Expand full comment

Anyone else glad that Substack moved the 'gift a subscription' option to the drop-down under the ellipses?

When I jabbed my fat thumb on my phone, the ‘gift’ selection was right next to Reply.

I’d hit it sometimes by accident and since my debit card info is on file I’d be one mistap from accidentally gifting a subscription.

Expand full comment

Yes - cancelled out by novel annoyance at the new constant reminders to Increase My Support. Did you know there are other tiers [x, y, z] that offer further benefits? C'mon, you already got a year's worth of revenue from me, if I'd wanted to pay more I woulda done so at the time. I'd bet far more people cancel outright mid-subscription than tier up; most Substacks don't even seem to offer meaningful reasons for paying more, unless one was lucky enough to get in early and possibly score some merch or whatever.

Expand full comment

Yeah, and what does upgrading to being a "founding member" even mean? It's too late to be a founder.

Expand full comment

I still kinda feel Scott owes me a tote bag or ACX coffee cup or something..

Yes I’m kidding.

Expand full comment

Does Scott know his old LiveJournal is nuked and all the links to it from SSC are dead?

Expand full comment

Hi! I'm looking for someone who works on GWAS and also, ideally, schizophrenia. Would you please email me if you're willing to talk a bit and have that expertise? Happy to pay you.

laura.walworth.clarke at gmail.com

That request may seem like a long shot, but earlier this year I asked if any ACX-reading plasma physicists who focus on stellarators were willing to talk to me, and one was! He gave me good advice.

Expand full comment

Yudkowsky seemed to think it was real when the news first broke.

https://nitter.unixfox.eu/ESYudkowsky/status/1635577836525469697#m

It seems plausible to me. It's no different than how in competitive video games creating a new viable strategy is much harder than copying it.

Expand full comment

Model stealing is a well-established research area (see https://arxiv.org/abs/2206.08451 for a survey), so it should not be surprising that it's fairly easy to steal the small delta made by finetuning from a baseline model. The interesting part of Alpaca is that a less capable model was used to generate the synthetic queries and this worked well. This has implications for acceleration. I also suspect that applying the same approach should work to steal much of the RLHF delta.

Expand full comment

I'm looking for a long blogpost, and Bing isn't being good enough to help me find it with what meagre context clues I remember.

It was definitely linked from ACX sometime last year, but I don't remember if it was an OT or in the comments on an actual post. Topic: exploring the history of the anthropological claim that Native Americans routinely had some variant of "third gender" or whatever. Apparently this was singlehandedly "discovered" by one guy who was so motivated to substantiate this claim that he fabricated evidence, made wild extrapolations from Anglocentric experience, and otherwise did a lot of really shoddy scholarship. (Even given the baseline in the social sciences.) Plus exciting allegations of censorship and mysteriously missing source documents in archives. The style was very much like The Atlantic's saga about Jesus' wife: https://www.theatlantic.com/magazine/archive/2016/07/the-unbelievable-tale-of-jesus-wife/485573/

Anyway, the upshot was that this shifts one of the classic pro-trans arguments-from-tradition into more like argument-from-fictional-evidence, with the attendant implications. Weak men might be superweapons, but that which the truth etc etc. Never saw any follow-up to this post, and people still make the same argument today, so I've been wanting to reread the original and see if I miscalibrated my updates. Hopefully someone else remembers this...? Or knows of similar arguments made elsewhere?

Expand full comment

https://stoneageherbalist.substack.com/p/the-origin-of-two-spirit-and-the

Number 17 on the links for July 2022 post.

Expand full comment

Yes, thanks so much! No wonder I couldn't easily find it in my history; I'd forgotten exactly which term to use... two this, third that, and I'd entirely blanked on the gay-rights connection.

Expand full comment

The two darlings of libertarian blogosphere economic theory, prediction markets and dominant assurance contracts, seem like a perfect match. The problem with prediction markets is that it's hard to make people put in the liquidity. So why not just use a dominant assurance contract to make people invest in the prediction market?

The usual counterarguments against prediction markets and dominant assurance contracts do apply, of course. Still, it seems like a good place to start if you want to test out dominant assurance contracts in practice (ignoring the legality per the previous sentence). I'd be happy for anyone to steal the idea.

Expand full comment

Scott, in your 2019 SSC post "Know Your Gabapentinoids" you wrote that pregabalin seems to be more effective and have fewer side effects than gabapentin for some reason. Is this still your impression, and is there any new evidence on why it may be the case?

Expand full comment
author

I haven't made any major updates since that post.

Expand full comment

Speaking of imaginary friends from the survey...

I don't recall having one as a child, but I developed something like that when I was 14 and still have it to this day in my 20s. As I go about my day I have these imaginary conversations with people. Usually someone I was thinking about recently - a real friend or someone from the internet. For example, if I watch a Joe Rogan podcast and then go to the kitchen to grab something to eat, I automatically imagine a conversation with Joe Rogan, something like:

Me: Let's see... oh a banana! Would you like one, Joe?

Joe Rogan: No thanks, man

Me: Ok, hmm, this one is very big... If I had one that's more ripe I could make this banana cake again...

This can go on for much longer. I think the "friend's" responses are usually short, and not necessarily verbal, while my thoughts have a voice and take more time.

Is this a sign of some disorder? A leftover depressive rumination? ADHD? It feels like a coping mechanism for loneliness, like I don't have enough people in my life to share my thoughts with, so I do it automatically with myself.

Googling it shows reddit and quora posts where people claim they experience this too and that it's normal.

Expand full comment

I've never experienced anything like this. It's always interesting to learn how much variety there is.

Expand full comment

Don't you even do it when there's something stressful going on between you and another person? Say somebody at work you need to confront about something? -- sort of mentally rehearsing what to say, trying out different versions? Or some situation when someone has infuriated or hurt you, and shrugs off your complaints about their behavior -- do you ever have mental conversations where you try to get across to them really convincingly what was wrong with what they did?

Expand full comment
Mar 23, 2023·edited Mar 23, 2023

I rehearse arguments in my head quite a bit, particularly after getting caught up in arguments online, but I'd never perceive one like an actual conversation, and it doesn't really have the same qualities as conversation. Perhaps it is a consequence of most interaction happening online, but I'm skeptical.

Come to think of it, I actually recently got into an argument with someone at work via videochat, so that's the kind of situation where you might expect to hallucinate a conversation. And I did get a bit angry and spent a while afterwards constantly thinking of arguments in my head. But even then, I wasn't really *talking* to anyone, I was just thinking of ways to explain my own position. So it seems more one-sided than what you have in mind.

Edit: I think the best way to put it is that when I'm rehearsing arguments in my head, I don't bother to model what other people might say at all. This very addendum is an example of something I thought of while this subthread was going through my head afterwards and no, I didn't imagine you saying anything at all during it.

Expand full comment

Actually I think what you're describing is pretty much the same as what I'm describing. Read an article recently by someone who said they had cured their own aphantasia (inability to have mental images). Seemed that part of being "cured" was realizing that mental images are not at all like a still photograph in your head. It's not as if someone tells you to picture the Statue of Liberty, and there it is in your mind, vivid as a postcard, & if you close your eyes you actually *see* the image. Most people's mental images have a vagueness to them. For instance, I can picture the Statue of Liberty, but can't remember which hand holds the torch. So actually I think the reason you think my mental conversations are different from yours is that you are imagining that they are fancier and more vivid than yours. Mine also lack many of the qualities of a conversation. For instance, there's no auditory aspect -- I do not imagine the sound of my voice or the other person's. And there also are no surprises. It's not like real life, where I say to someone "oh my apple is bruised," and I really don't know what the other person will say. They might say, "yeah, I hate that too," but they might say "never mind that right now, we're in danger of being late" or "I know it's weird, but I like the soft bruised bits." And in fact now that I think of it I'm not even sure my mental conversations are always in words. Sometimes it might be more like I take out the apple, see it's bruised, and imagine my friend X being sympathetic at my chagrin.

Does this stepped-down picture of what I experience seem more in line with what you do?

Expand full comment

I'm not sure, but at the very least, it sounds like we're more similar than either is to the OP.

Expand full comment

This is not going to be a popular response, but... I think your mind is optimized for religion. It's trying to pray.

Expand full comment

But RandomizedRandomness and I both picture peers as the other party in our imaginary conversations. If our minds were trying to pray, wouldn't we imagine the listening other to be wiser, & more powerful than us?

Expand full comment

One interesting thing about people is that although we say we consider God wiser and more powerful than us, our actual prayers are very frequently just us trying to manipulate or control God. Which we wouldn't do if we genuinely believed He was wiser and more powerful than us. This is a big part of what the book of Job is about.

It takes a very mature spiritual person to pray right. I often fail in this.

Expand full comment

" our actual prayers are very frequently just us trying to manipulate or control God. Which we wouldn't do if we genuinely believed He was wiser and more powerful than us"

On the other hand, see Abraham bargaining God down over the destruction of Sodom:

"22 So the men turned from there and went toward Sodom, but Abraham still stood before the Lord. 23 Then Abraham drew near and said, “Will you indeed sweep away the righteous with the wicked? 24 Suppose there are fifty righteous within the city. Will you then sweep away the place and not spare it for the fifty righteous who are in it? 25 Far be it from you to do such a thing, to put the righteous to death with the wicked, so that the righteous fare as the wicked! Far be that from you! Shall not the Judge of all the earth do what is just?” 26 And the Lord said, “If I find at Sodom fifty righteous in the city, I will spare the whole place for their sake.”

27 Abraham answered and said, “Behold, I have undertaken to speak to the Lord, I who am but dust and ashes. 28 Suppose five of the fifty righteous are lacking. Will you destroy the whole city for lack of five?” And he said, “I will not destroy it if I find forty-five there.” 29 Again he spoke to him and said, “Suppose forty are found there.” He answered, “For the sake of forty I will not do it.” 30 Then he said, “Oh let not the Lord be angry, and I will speak. Suppose thirty are found there.” He answered, “I will not do it, if I find thirty there.” 31 He said, “Behold, I have undertaken to speak to the Lord. Suppose twenty are found there.” He answered, “For the sake of twenty I will not destroy it.” 32 Then he said, “Oh let not the Lord be angry, and I will speak again but this once. Suppose ten are found there.” He answered, “For the sake of ten I will not destroy it.” 33 And the Lord went his way, when he had finished speaking to Abraham, and Abraham returned to his place.

I always get the impression from this that God *wants* Abraham to push for more, that it is the sort of mercy He wants mankind to have and that God is willing to grant when asked for (and that God finds it funny that Abraham is bargaining like this but is also proud of him). Intercessory prayer.

Expand full comment
Mar 23, 2023·edited Mar 23, 2023

But I'm still not sure this is relevant to my imagined conversations. If my little mental dialogs are attempted prayers, they don't take the form of attempts to manipulate the listener. I think mostly they're little bits of self-narration, and their point is to create a pleasant feeling that somebody else is there, taking an interest in my little doings and getting it about how I feel: "Aw, my apple got bruised." I don't imagine the other person in the conversation giving me a new apple, just their registering, with sympathetic interest, my annoyance and disappointment.

As for achieving the maturity to pray "thy will be done" or something along those lines -- you're talking to a lifelong atheist, and I think we have about reached the limits of my ability to talk about religion without my getting irritable. To me "it is what it is," which is one of my go-to's, seems almost identical to "thy will be done." It just sounds bleaker and shallower because the lace is stripped off.

Expand full comment

Did you stop wanting to talk difficult stuff over with your mom when your problems grew beyond her ability to solve? From "I keep falling off this darn bike and it hurts!" to "This wretched person does nothing but break my heart but I can't seem to quit him/her."

Expand full comment

Yes. Eww, even now I don't like imagining disclosing any of that teen love anguish stuff to her. But where are you going with that? Are you saying I'm still sort of like a teenager -- dislike idea of disclosing stuff to people older, wiser and more powerful? Because -- well, that's sort of accurate, but sort of not. In mid-life I had several people older and more established than me whom I confided in and looked to for advice. Now, older, I don't really believe there's anyone like that on the planet. Of course there are many people who know far more than me about various things, even about things that are a problem for me, but I can't mentally look up to them in the same way. It's like I finally get it that everyone is sort of like me. Some of us are quite smart and have learned a lot and figured out a lot, but we're all still just making it up as we go along. So at this point I'm not repelled by the idea of confiding in someone older and wiser -- I just don't believe there's anyone who's overall wiser than me -- just people who have a different take.

Expand full comment

Well, let's try another approach: would you decline to take as a patient anyone who was smarter and older than you?

Expand full comment

No. I'd be delighted to have a patient like that. Currently I see a lot of students in the Harvard grad & professional schools, and of course they are all smart as hell. In fact one's a bona fide math prodigy. I enjoy working with them a lot, but of course they are all preoccupied with 20-something issues -- finding themselves a good, promising spot in the realms of career and romance. Would enjoy talking with someone whose stage of life and issues are closer to mine. But, Carl, what's this got to do with religion? And please, if you have thoughts of leading me gently and Socratically into reconsidering my atheism, abandon all hope here and now. I am not just non-religious, I am at heart anti-religious, though I do try to be civilized about the latter. I don't want to get cranky and oh-shut-up with you.

Expand full comment

You’ve read the bicameral mind? The idea is that in the past we heard the internal monologue as an actual other person (often a god) but that stopped happening at the dawn of history.

Expand full comment
founding

Nit: Jaynes postulates the transition period as ~1500 BC, which is far enough past the dawn of history for Jaynes to have found traces of the old style of thinking in the literary and historical record.

Expand full comment

You know, I've heard descriptions of the thesis several times over the years, and I have a copy of the book in a box somewhere, but I've never actually read it. It's a really interesting idea.

Expand full comment

Oh I do that all the time. Sometimes I do it when I have a difficult conversation with someone coming up, and I'm sort of mentally practicing what I'll say. But more commonly, it's just like what you talk about having. I pull the apple out of my backpack, notice it got bruised in transit, and mentally say to some friend or acquaintance, "damn, will you look at that! I hate when they get bruised." I don't think the other person very often answers, or if they do it's brief -- like "yeah, me too." I think I mostly have these dialogs for the satisfaction of imagining that somebody gets it about what I'm thinking and feeling. The other's voice never has an auditory quality, and I never think it's real. I'm sure it's normal. Oh, & by the way I did have an imaginary companion when I was a small child, and have actual visual memories of her doing this and that. I believe when I was 5 or so I thought she was real.

Expand full comment

That’s internal monologue. Not really an imaginary friend - which is visualised.

Expand full comment

Assuming that your "friends'" responses are clearly distinguishable from your own and from real people's, aren't upsettingly negative or aggressive, etc... then this sounds relatively normal to me.

I'm not exactly a shining example of neurotypicality, but for what it's worth I do something like this, especially if I'm working through a complex concept or preparing for an important or difficult conversation. I think in my case it's a hack my brain uses to serialise and refine my thoughts.

If you suspect it's a coping mechanism for loneliness, then I imagine it probably is. I'm sorry to hear you're lonely and if you can take steps to inject a bit more human interaction into your life it sounds like it might be a good idea, but I wouldn't stigmatise something that seems like a relatively healthy outlet.

Expand full comment

For those who like trolling, I wrote a post detailing my trolling of the subreddit r/AmITheAsshole:

https://alexanderturok.substack.com/p/my-collection-of-aita-troll-posts

Expand full comment

Doesn’t the fact that all but a handful of your posts got removed by the spam filter or moderators indicate that they weren’t particularly successful? I mean maybe that’s not your goal.

I noted a lot of these could have easily passed as “real” but for a couple of unnecessary tells or obviously intentionally provocative throwaway lines.

Expand full comment

This post did a great job calling out the dominant value system of that sub-reddit, a heavy individualism that often tends towards "if anyone expects you to do anything more than what you strictly owe them, they're the asshole".

https://www.reddit.com/r/AmItheAsshole/comments/d6xoro/meta_this_sub_is_moving_towards_a_value_system/

Three years old and highly upvoted, I do wonder if things got any better.

Expand full comment
Mar 24, 2023·edited Mar 24, 2023

Don't forget the naked double standards in favor of F over M.

Edit: I followed one of the links to a post the troll OP made about sex life with his wife; people actually unironically said "Your wife should have the right to say no to sex every day of her life and she doesn't need a reason" lmao.

Expand full comment

The current annoying vibe over there is less “anyone expecting you to be considerate is TA” and more “never try to maintain or repair an important but damaged relationship, always go NC”. Especially if the relationship is with your parents who are asking you to do horrifying, abusive, “adultification” things like “be nice to your less well off sibling” or “babysit so your parents can go out for a night”.

Expand full comment
founding

This is definitely reinforcing my belief that all interesting stories on aita are fake.

Expand full comment

All interesting stories are fake. And a few that aren't interesting are poorly written fakes.

Expand full comment

Trolling r/amitheasshole is playing the game on super easy mode. What is even the point?

Expand full comment

Yeah is it even trolling if you are just lying to a bunch of brain-dead 15 year olds?

Expand full comment
Comment removed
Expand full comment

I see you have seen the Rick and Morty heist episode.

Expand full comment

Finally got around to updating my Scott-inspired Navigating Retail Pharmacy post a few weeks back, available at https://scpantera.substack.com/p/navigating-retail-pharmacy-post-covid

Expand full comment

> I wrote that post because every few weeks someone was writing an essay saying “We should try to slow AI progress, why aren’t you doing that?” with no specifics, everyone agreed with them, and nothing got done.

There is a thing that activists do across many issues (degrowth, nuclear power, pandemic lockdowns, etc.). They advance a position that many others do not agree with, for reasons that the latter believe are good. The activists then become frustrated that "WE AREN'T DOING" the thing they want. They assume that this is simply irrational (or that the reasons that others don't agree with them must be bad), and so we really need to force the issue through some sort of governmental, coercive means.

The activists are living in their own private Moloch and trying to take the rest of us with them.

Expand full comment

It's temptingly easy for people on many sides of many issues to think that making demands equals virtue.

Expand full comment

It's also common (and certainly exists in the AI space) that what gains you status with the in-group doesn't actually advance the cause and encourages a purity spiral where more and more extreme concerns get rewarded. This ultimately makes the movement extreme and isolated. And because most people are in their own bubble, they lose track of how many people just genuinely disagree with them.

Expand full comment

Yes, this is a major factor. People in rationalist/EA circles badly underestimate how many smart, informed people simply disagree with them on the "kill us all" aspect of AI risk. They mistake the friendliness of people at OpenAI and elsewhere for "they agree with us but –– for some weird reason –– aren't acting accordingly." Nope. They just don't actually agree with you, but, in some cases, are too nice to say so.

Expand full comment

It's got to be more complicated than this, because OpenAI explicitly says that AGI is an existential risk. So maybe they don't agree about the kill-us-all risk, but in that case they're LYING rather than politely remaining silent. That said, I think "they're telling the truth but the details on how to guarantee safety are disputed" is the more likely case.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

You're smuggling in the assumption that they believe it is an existential risk in a Yudkowskian sense. An existential risk is simply a risk that threatens humanity's long-term potential. Plenty of people talk about social media as an existential risk in certain communities. But I don't think they're afraid Zuckerberg is going to murder us all.

Likewise, I certainly believe AI could cause real damage to society. I do not believe Yudkowsky is right or that slowing it down will help much. You're not being lied to. You're just overloading words with meanings they don't actually have.

Expand full comment

Oh, I really thought existential risks meant we risk no longer existing. I see that "extinction" is the specific type of existential risk in which everyone dies.

In any case, OpenAI seems to be using the term to refer to kill all human scenarios.

In this article:

https://openai.com/blog/planning-for-agi-and-beyond

they hyperlink the word "existential" to this article:

https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/

The latter article says:

> I mean a literal "defeat" in the sense that we could all be killed, enslaved or forcibly contained.

Expand full comment

Fair enough. I'm not interested in defending OpenAI specifically.

Expand full comment

I think it's the thing rationalists often complain about where people tend not to take their own beliefs seriously. Surveys of AI researchers often have them give a 5% chance of it ending the world, but people will and do ignore beliefs like "this thing has a 5% chance of destroying the world" if it seems like a fun and status/money-increasing thing to do at the time (a lot of smart people used to smoke in circles where it's cool, for example).

Expand full comment

Besides Stephen Pimentel's point, which I agree with wholeheartedly, I think that you're vastly overestimating the number of people who hear "5% chance of ending the world" and conclude that the appropriate reaction is "damn! Gotta not touch that ever!" Many people will operationalize this as "this activity has a 95% chance of getting me money and status with no ill effect, those are fantastic odds!"

Expand full comment

It's those rationalists who are misunderstanding something, not the people they're complaining about.

Numerical probability estimates like "5%" are very often pulled out of one's ass. The people offering them intuit this fact and therefore –– correctly –– do not take them very seriously.

There are formal ways, such as confidence intervals, of trying to address the problem, but they don't actually work in many situations –– they just produce an infinite regress of pulling things out of one's ass.

The thing that's bothering rationalists is when others decline to play particular cognitive games, having judged them to be bad.

Expand full comment

This is probably a midwit question, but can someone explain to me why AGI doomers seemingly base their reasoning on contradictory premises?

(1) On the one hand, we have instrumental convergence, which suggests that sufficiently smart entities will converge on broadly similar sub-goals.

(2) On the other hand, we apparently ought to see superintelligent AGIs as an alien species that we can't possibly understand, let alone control.

If (2) is true then surely it is a mistake to assume (1) ? In fact, doesn't it suggest that we can assume almost nothing about superintelligent AGIs?

Am I just terribly off-base here? Or is there a way that both premises can be true?

Expand full comment

I actually see them as sort of reinforcing each other. If a system is understandable you can say all sorts of rather specific things about it individually which are particular to that system, or narrow class of systems. One could, for example, describe how humans will tend to seek status on their way to power, based on a familiarity with and understanding of humans; this is not a _necessary_ behavior in power-seeking but it happens with a reasonably high probability.

Conversely, one of the _only_ things we can say about a sufficiently powerful, sufficiently alien mind is the sort of thing that we have reason to conclude is straightforward and simple enough to be true of _many_ minds or goals. If one doesn't understand how a plane flies, how a bird flies, or how a volume of gas "flies", you are limited to saying things like "well, they must move through the air in some way". You can't get into more details because the details are beyond you, but you can observe the broad-strokes commonalities.

Expand full comment

(1) doesn't apply. "Science advances one funeral at a time", after all. Also the idea is that AGI will be so highly advanced that humans won't qualify as "sufficiently smart".

Expand full comment

Parenthetically, that's one of the dumbest things ever said about science -- although we can forgive Planck, he was a dour person by nature, and if he said it after the execution of his son he was in despair. But anyway science actually works best almost the opposite way: it is the most fecund and explosive when many people expertly trained in the old ways are simultaneously stimulated, often by new puzzling experiments, to understand a new paradigm. The early years of quantum mechanics are an excellent illustration, as is the rush of discovery of new elements following Davy's lead, the excitement people felt at relativity, and so on.

Expand full comment

Instrumental goals are things like 'acquire resources', 'understand the state of the world', and 'destroy those who threaten my plans' - things that are useful for pretty much any goal, no matter what that goal is. Instrumental convergence does NOT suggest that sufficiently smart entities will converge in terms of what they intend to do with all of that instrumental power once they acquire it.

That's where AIs get alien and hard to control. Most of the arguments that you would make to convince a human of something rely at least a little bit on broadly shared human conceptions of what is important and good vs. important and bad vs. what is irrelevant and disposable. If the AI sees Earth's biosphere like we see mold on food (ie. something messy and dangerous that is actively spoiling something you wanted to eat) because we failed to endow it with an understanding of the beauty of nature and the value of organic life, that would be very very bad for our ability to co-exist.

Expand full comment

I think that your idea of instrumental convergence might be slightly off. My understanding of the concept is that sufficiently intelligent entities will use broadly similar structures (ie electricity, mechanical power, material refinement) to accomplish complex goals. This similarity is orthogonal to the actual goals it wants to accomplish. Thinking of instrumental convergence as similar sub-goals may be misleading.

Expand full comment
Comment deleted
Expand full comment

Are you saying that being a rational agent implies or requires having a goal? If so, why? I find it very easy to think that my goals are evolved refinements of purely meat-based urges (club mammoth to death, eat some to ward off hunger, trade some for sex). Even I am capable of goal-free rational behaviour, like writing this post, and if I didn't have bills to pay for meaty wants like food and shelter, and medical treatment to cope with an evolved meaty fear of death, I would be goalless as hell but still rational.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Being rational is literally just achieving your goals better (and being aware of your environment's feedback so that when it changes you change your tactics). There are all sorts of auxiliary semi-related meanings that people assign to it, like being instinct-free and cool-headed (the meaning you're going for here), or using Logic/Data/$REASONING_TECHNIQUE when making decisions, or even being ethical in some sense in your goals and methods. All of those can reasonably be said to have some claim to be called "Rational" - they are all related to "Achieving your goals better" in some sense or other, under some special cases or others - but the meaning of "Rational" is primarily and only "Achieving your goals better"; all else are merely secondary meanings it gains by association.

> I would be goalless

No you wouldn't; you still have a goal, which is to not die. You found out that you're easily achieving that goal, so you paused all efforts to try to achieve it more. An idle computer is still executing instructions; it just so happens to be instructions for "Keep still and wait for more instructions". Finding out ways to stay alive is costly, so of course you stop doing it if you have a reasonable belief that you don't have to, and you let being alive take care of itself.

Expand full comment

Idle technical detail: a CPU with nothing to do will actually stop execution and even power down some regions until an external signal wakes it up.

Expand full comment

CPUs can't stop execution unless they're powered down, or they're sleeping.

I'm not talking about that though. I say "idle" to mean a computer that's not executing a user program but is still executing an OS. If it's not executing an OS (as is the case when you open a flashed phone or a brand new laptop), it's executing some form of a primitive boot loop that tells it to wait for an OS installation medium, etc... A computer is like a car: as long as it's consuming power it's doing something, even if that something is as good as nothing from our POV.

Expand full comment

I think that is a crucial detail: my case was that people do stuff ultimately to satisfy needs for food, safety, and sex. Actually there's another motivation: the avoidance of boredom. Your point establishes (I think) that that also doesn't apply to computers.

Expand full comment

But there's the difference. My goal of not dying has nothing rational about it. It is itself non-rational (we know this because it is evolved and instinctive and because I share it with lots of non-rational things like rats and sheep), and now that I have evolved rationality, it is positively irrational to stick with it, because unlike the rats I know it is doomed to failure. But it is still there, so when I am on standby we know that a self-generated instruction will be along in a moment as soon as I get hungry, or run out of money. Not the case with computers. So if their instructions are all externally generated, how are they rational? To be rational is to make choices. And if they are internally generated, how do we know they are rational? The only case of a rational being we know, us, is governed by entirely irrational instructions - it is not just not rational, it is frankly insane for me to want to live forever and to spend the rest of time generating infinite partial copies of me via sexual reproduction. But it is ultimately what gets me out of bed in the morning.

Expand full comment

Your goals can't be "Rational", only your methods and sub-goals. You can only be rational with respect to a given ultimate goal. Rats and sheep are not "non-rational things"; they are rational beings that survived millions of years of intense evolutionary pressure through rational (== survival-maximizing) actions. Rational doesn't mean logic-using or reasoning; a bacterium that swims towards higher concentrations of sulfur is extremely rational, it's doing perhaps the only thing it can do to maximize its utility.

Rationality is the how, not the what or the why. Nobody can have "irrational goals": Hitler can be rational, a rat can be rational, any agent that perceives the environment and takes (what it sees as) the best action in it to achieve something is (trying to be) rational. Rational is not a synonym for "good" or "reasonable" or "moral" or "internally-motivated" or "long-sighted"; people have developed all sorts of those and other associations around it, but it doesn't really mean any of that. It's simply an adjective for behaviour that is optimal with respect to some objective: if somebody or something has an objective and multiple actions available, and it chooses actions that succeed at the objective better than randomly, it's rational.

Expand full comment
Comment deleted
Expand full comment
Mar 20, 2023·edited Mar 20, 2023

(Someone asked here before, so I figure maybe somebody still wants to know about this.)

Apparently, there's a writeup from Harvard School of Public Health done in response to all those public statements that even very little alcohol can harm you:

https://www.hsph.harvard.edu/nutritionsource/healthy-drinks/drinks-to-consume-in-moderation/alcohol-full-story/

It's a very useful writeup, but if it's too long for you, here's my TLDR summary:

- Journalists did their journalist thing, getting things wrong.

- Heavy drinking is definitely harmful, and nobody is arguing about that.

- Moderate amounts of alcohol are net beneficial for many people and net neutral for many more.

- If you're drinking alcohol, you should be taking folic acid (400-600 mcg per day).

Expand full comment

Huh ... I'm pretty happy with my current limited relationship to the sauce, so won't change it even if there really are some cardiovascular or whatever benefits. Tastes have gotten too snobby, so anything I'd actually enjoy drinking now hits the pocketbook hard enough that mere economics deters me. And it's really personally important I don't drift back into alcoholism, which is unequivocally Bad News on several fronts. The folate thing is new news to me though. I'd already been thinking of supplementing with it, so that tips the balance...thanks!

(I think this used to be framed as, if you already drink, here's the range to aim for...but if you don't, this isn't a good reason to start? At least not at these effect sizes. Though of course adjusting for non-direct-health effects like social lubrication is hard.)

Expand full comment

>Tastes have gotten too snobby, so anything I'd actually enjoy drinking now hits the pocketbook hard enough that mere economics deters me.

The ideal for me would be an alcoholic beverage that tastes just like SevenUp.

I start with SevenUp and it tastes great. I add a bit of Seagram Seven and the whole deal is ruined with that taste of fermented and distilled grain products.

Expand full comment

Seagram's is hardly a neutral flavor, though - vodka would keep the flavor more 7 Up-like. But some people are more sensitive to the "taste" of even neutral spirits (a burn or a sense of bitterness), so maybe that's you.

Another strategy is to try to go with more complementary flavors, so you aren’t trying to hide the taste of the spirit. A margarita (basically a tequila lime sour) or a greyhound (gin and grapefruit juice) are both cocktails where I think the flavors really enhance each other.

Expand full comment

You gotta be careful you don’t become a girl drink drunk if you go down that road. The Kids in the Hall warned me about that.

https://m.youtube.com/watch?v=8C4TGGtPzBU

Expand full comment

Older folks tell me this is what Zima used to be like?

I definitely like "bitters" a lot more now than I ever expected to - turns out not all IPAs completely suck. But gone are the days of getting drunk off, like, Mike's Hard Lemonade and Budweiser. Too much gastric flagellation, ruins the experience. Something about the strange aftertaste of generic "malt liquor" that I always end up regretting - the Whiteclaw Effect. Once one has had actually-fine alcohol, the stuff that goes down super smooth and often doesn't leave any hangover the next morning...it's hard to go back. (Every few years there's exciting news about anti-hangover pills coming Real Soon Now, and then it never actually pans out. Unfortunate.)

Expand full comment

>- If you're drinking alcohol, you should be taking folic acid (400-600 mcg per day).

seems to only be for women, unless I'm reading this wrong? Folate is only mentioned in reference to hedging against breast cancer risk.

Expand full comment

I think you missed this:

"Alcohol blocks the absorption of folate and inactivates folate in the blood and tissues. It’s possible that this interaction may be how alcohol consumption increases the risk of breast, colon, and other cancers."

And this, which seems to be non-gender-specific:

"If you already drink alcohol or plan to begin, keep it moderate—no more than 2 drinks a day for men or 1 drink a day for women. And make sure you get adequate amounts of folate, at least 400 micrograms a day."

Expand full comment

If you happen to know--

Is there any reason to expect that the benefits of moderate alcohol use are a result of anything other than alcohol decreasing stress?

Expand full comment

If you are asking this, you should read that writeup. It has a lot of stuff in it, e.g.:

"The idea that moderate drinking protects against cardiovascular disease makes sense biologically and scientifically. Moderate amounts of alcohol raise levels of high-density lipoprotein (HDL, or “good” cholesterol), [37] and higher HDL levels are associated with greater protection against heart disease. Moderate alcohol consumption has also been linked with beneficial changes ranging from better sensitivity to insulin to improvements in factors that influence blood clotting, such as tissue type plasminogen activator, fibrinogen, clotting factor VII, and von Willebrand factor. [37] Such changes would tend to prevent the formation of small blood clots that can block arteries in the heart, neck, and brain, the ultimate cause of many heart attacks and the most common kind of stroke."

[37] is a really dense paper that I'm not educated enough to understand. Among other things, it mentions experiments in mice.

Expand full comment

I posted my comment after reading the writeup, though admittedly I have yet to get around to reading the linked paper. But I happen to know that stress is linked to high cholesterol, and I wondered if anyone more knowledgeable than me could connect the other beneficial effects of alcohol to stress reduction.

Expand full comment

I thought a common hypothesis to explain the observation that moderate drinkers tend to have slightly better health on average than teetotallers was that in a culture where moderate social drinking is the norm, people who do not drink at all are unusual. And "unusual" does not mean "unhealthy", but statistically speaking, sometimes it does.

For example, maybe the reason someone doesn't drink is because they are on some medicine which would interact badly with alcohol. Or maybe they don't drink socially because they don't have much of a social life to begin with, which may be correlated with things like depression. Or maybe they know from experience that, because of general frailness, a single glass of light beer hits them much harder than it would hit a regular person in their weight class.

If X% of the non-drinking population is in that population for reasons such as the above, that could be enough to explain why the non-drinking population is on average slightly less healthy than the moderate-drinking population, even assuming that for a healthy person non-excessive alcohol use is neither beneficial nor harmful in itself.

Expand full comment

I've assumed that a lot of people who don't drink aren't drinking because of a history of alcoholism or alcoholism in their family.

Expand full comment

> Journalists did their journalist thing, getting things wrong. (NYT, Lancet, and such...

Wait isn't Lancet a peer-reviewed medical journal?

My 30-second wikipedia check -- https://en.wikipedia.org/wiki/The_Lancet says:

> The Lancet is a weekly peer-reviewed general medical journal and one of the oldest of its kind. It is also the world's highest-impact academic journal.[1][2] It was founded in England in 1823.[3]

Or is there another "Lancet" you're referring to?

Expand full comment

>Wait isn't Lancet a peer-reviewed medical journal?

I mean, yes. But also https://en.m.wikipedia.org/wiki/Lancet_letter_(COVID-19)

Expand full comment

Thank you for catching this! This used to be a separate bullet point, but I mis-edited the comment. Just going to edit out the comment re NYT and Lancet (although they both deserve to be left in).

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

I feel like reading dense books is kind of overrated, unless you're an SME and the book is in your specific field. And I feel like a quick summary of famous/dense/intellectual books would be a great use of everyone's time- like a Cliff Notes targeted for at least moderately intelligent people with a degree.

For example, I just finished The Intelligent Investor (the classic work that kicked off the field of value investing). Reading a summary would have been much faster and a much better use of my time- while the book probably has a lot of details & nuance that might not make it into a summary, I am very likely to forget all those details within 3 months, after I read other books, read stuff on the Internet, use my brain for my intellectual job, etc. Since I'm only going to remember a summary at best, why not just have 10-30 page summaries of major works? Does anyone else feel this way?

I do read some long works for pleasure (like history), but in general I feel like I could absorb way more information in a given calendar year by reading summaries

Expand full comment

Sometimes I like short books that cover a subject in broad strokes, and sometimes I like intricate books that delve into detail -- it depends entirely on the nature of my interest in the field, e.g. I'm about equally likely to read the same size book that covers the entire Punic War as covers a single major battle of the First World War. The number of pages of biographies I would read of Newton and of Aristotle probably stand in a ratio of 10:1. Et cetera.

But they're different books, generally. If you're writing a book to paint in broad brush strokes, you write it quite differently than if you're turning over every pebble. I also suspect different people write each well.

So I certainly agree sometimes you don't want a weighty deep dive into a subject -- but I'm puzzled by the preference for a summary of a deep dive. I would rather read a book designed from the start to be a general broad summary, than an attempt to dehydrate a different book, designed to be a well of detail. The only reason I can see for preferring the summary is for social purposes, in order to be superficially familiar with a famous book.

Expand full comment

I think the correct thing to do is read summaries in order to determine what books to read.

If you read the summary and the main thrust of the book is either irrelevant, trivial, hard to believe or otherwise uninteresting to you, then you can just move on, but if something catches your interest, that's when you can make the decision to invest your time in it to get a fuller understanding of it and/or more thoroughly judge the validity of the author's claims etc.

This approach isn't perfect, especially if only low-quality summaries are available and you end up passing up a book that you should have read, but most books are simply too long, and even for the ones that aren't, committing to the whole book puts you at significant risk of wasting your time.

Expand full comment

What does "SME" mean?

Expand full comment

It’s a pronounceable acronym, smee, like LAN or AWACS.

You’ll hear it sometimes like, “The SMEs are going to have to nail down a written spec before we start to implement this feature.”

Expand full comment

That's funny, I hear this fairly frequently but only as "ess-em-ee". So I guess it's an optionally pronounceable acronym, like SQL

Expand full comment

subject matter expert

Expand full comment

Unless you're quizzing yourself on it to reinforce it, you're probably going to remember less in the long run from just reading a summary than from a (non-lazy) reading of the source material, but no idea if the ROI is comparable.

Expand full comment

Hard disagree here:

- I suspect I'll remember the same % of the summary that I remember of the book. So if the summary is short enough, all I might remember is "look here for info on topic x"

- I use that extra detail to judge credibility; without it I'm left guessing at the original author's sophistication, biases, etc. Some authors plainly demonstrate that they don't understand the topic they write about - but that's rarely obvious to non-specialists in a short summary.

- I often also follow up on a few of the works cited, giving me extra chances to learn more of the material, without active attempts to memorize it.

I do find it useful to read summaries of material I want to (pretend to) understand for purposes of social credibility. It can be a lot less trouble to read spoilers for the movie everyone's talking about than to spend time watching it, if my goal is primarily not to get tagged as a weirdo who doesn't like (a) movies or (b) popular culture, or even as a cheapskate who refuses to pay original issue prices. But if I'm interested in the topic, I generally want to read the whole thing.

Expand full comment

+1

Paul Graham's essay on how reading non-fiction books is more about building models than memorising particular facts is also worth mentioning.

http://www.paulgraham.com/know.html

Expand full comment

Thank you for that. I've bookmarked it, to read properly when I have a bit of time. (This web-browsing break is almost over.)

Expand full comment

Your second point is a good one; your first point I kind of don't believe, but I'd like to test it myself. But all of this has to be balanced against the value of my time, right? As a busy 'adult' I have a finite amount of time in which to read, especially dense books that require concentration. Lots of great authors are terrible at making their points, even if they really have good ones. (Nietzsche, etc.) If I plow through tough books one at a time, I'm only going to get through a certain number before I die. Whereas if I use summaries, I can hopefully learn much more in the same period of time.

Again, realistically I'm just not going to make the time to read The Wealth of Nations in my free time- too dense, and I have too many other things I want to read that are more enjoyable. Isn't it better that I get a synopsis than nothing at all?

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

We aren't the same person, and we have different goals and different constraints. If you don't have time for the real thing, a synopsis is probably better than nothing. OTOH, it might be a bad synopsis, containing *only* the parts of the classic book commonly referred to today. (Of course that won't matter at all, for certain goals, including David Friedman's cocktail party conversation.)

When I read *The Wealth of Nations* I was struck by how much of what was in it I'd never noticed being cited, or included in modern material about what Adam Smith had claimed. Much of it made a lot of sense to me. IIRC, I had a similar experience with *Das Kapital* (read in translation), but I read that long enough ago not to remember many details. And those are just the two I thought of first.

Expand full comment

In the case of The Wealth of Nations, much that one is commonly told about it isn't true, and it takes actually reading it (or believing someone who says he has and will explain the errors — but why trust him) to know that.

I have a chapter draft on the subject:

http://www.daviddfriedman.com/Ideas%20I/Economics/Adam%20Smith.pdf

Expand full comment

value of one pound of nails > value of one pound of iron

I'm completely down with that one. Absolutely true.

This worked out really well for Smith's contemporary, Thomas Jefferson, who had a nailery on site at Monticello. Couldn't beat the labor costs there.

Expand full comment

There's little evidence that slave labor in the US was meaningfully cheaper than free labor once you accounted for maintenance costs.

Black American men are on average 1 cm shorter than white American men today, but during slavery they were actually taller (with both groups being shorter than American men are today). This suggests that slaves had better access to food than free white men on average.

In any case, slavery wasn't about having "cheap" labor; it was about having a reliable supply of (sometimes semi-)permanent workers who couldn't just up and leave when they felt like it or refuse to do certain jobs. There may also have been a perception that owners could get more out of slave laborers than free laborers, though modern analyses of historical plantations suggest that slave labor was not meaningfully more productive than non-slave farms.

Expand full comment

Smith would disagree. He argued that slavery was both immoral and inefficient.

Expand full comment

For business and related books, there's GetAbstract. I have a subscription through my employer and the summaries are "good enough to learn the newest buzzwords."

I'm not sure how it would work for a book like "Wealth of Nations" or "The Bell Curve", where the part that gets public attention is a small portion of an argument that is mostly about something else.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Relatedly- is there a business that does this, just writes summaries of major intellectual works? I.e. I'd love to understand say Adam Smith's writings, but as a busy person I simply am not going to sit through reading 1000+ dense pages to find the main points. Cliff & Spark Notes seem to be targeted to undergrads. In theory it would be a good business, right? If you sell your summaries online and don't physically print them, once a summary is written there's no marginal cost other than CC fees to keep selling them over & over. (Right?) Is copyright by the original author the main issue? (And if so how does Cliff Notes get around it?)

I mean this was the insight that launched Axios, right? The founder came from Politico (who still writes very lengthy articles that I skim like half of), where he realized that busy modern professionals wanted quick snippets and not huge essays. Feel like we could apply the Axios style to major intellectual works

Expand full comment

My lecture notes from teaching history of economic thought are webbed, and might provide at least some understanding of Smith (and Ricardo and Marshall).

http://www.daviddfriedman.com/Academic/Course_Pages/History_of_Thought_98/History_of_Thought_98.html

I think it's worth distinguishing between the two objectives that Dinonerd pointed out. The first is actual understanding, and for interesting thinkers there usually isn't an easy way of doing it. When I used to teach history of thought at UCLA, I started my first lecture by telling the students to imagine the year was 1776, they were econ grad students getting ready for their prelim exams, and the latest thing in the field was The Wealth of Nations.

The second is what I think of as cocktail party conversation, knowing the things needed to create a superficial appearance of knowing the ideas of interesting thinkers. My standard example is "Ricardo believed in a labor theory of value." It isn't true, but understanding why it isn't true and what he actually did or didn't believe requires actually understanding what he was doing.

Expand full comment

Blinkist?

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

The recent GPT-4 paper (https://arxiv.org/abs/2303.08774) showed the model acing a wide range of standardized tests, including the various science AP tests, the SAT, the LSAT, and the Bar exam (see Table 1 and Figure 4). However, a few notable exceptions stood out, where the previous GPT-3.5 model performed poorly and GPT-4 fared no better. These were:

1. Codeforces - programming competition problems

2. AP English Literature - analyzing works of fiction, such as poetry, short stories, novels, or plays

3. AMC 10 - high school mathematics competition problems

4. AP English Language - reading, analyzing, and writing texts through the lens of rhetorical situation, claims and evidence, reasoning and organization, and style

Referring to Tables 9 & 10 in the appendix, we see that the two AP English tests had high contamination rates (92% and 79%), i.e. a large amount of overlap between the questions being evaluated and the content of the training data. However, while this means a non-contaminated score could not be reliably computed, it doesn't explain the results here, since even the performance on the contaminated questions is extremely low.

For the coding and math tests it's not so surprising that the models would struggle, since these are likely to be more difficult tasks in general. However, two details are very strange: (1) there was essentially *no* improvement on these benchmarks between GPT-3.5 and GPT-4, and (2) on other very similar benchmarks, such as the AMC 12, GPT-4 does much better.

Would be very curious to hear from people who think they may have an idea of what's going on here.

Expand full comment

Interesting. I would have expected English to be the *easiest* subject. I often felt like the essays were just expecting students to run a Markov model to generate BS anyway.

Expand full comment

>For the coding and math tests it's not so surprising that the models would struggle to perform well on them since these are likely to be more difficult tasks in general.

This is probably true of the coding ones, but the Math SAT is not that hard (relative to the universe of math and the knowledge GPT has access to). At least when I took it almost 20 years ago (god...), a lot of the choices could be quickly eliminated by knowing the patterns used in the questions. You could quickly get down to 2 options, then do the math, or even just part of the math, and the math wasn't calculus but at worst algebra and trig (I think?).

I would expect GPT to score 800 on this test. Maybe it has trouble parsing the questions? I suspect a version of GPT trained on SAT math questions could get 800 fairly easily once it learned the common tricks.

(I got 780 on the standard SAT math and 800 on the Math 2-C, for what it's worth, and so I can brag and feel smart about being better than GPT for at least a few more months!)

Expand full comment

When I said "coding and math" I was specifically talking about the Codeforces challange and AMC 10 problem sets that GPT-4 was struggling with. It doesn't have as much difficulty with all math, if you refer to Table 1 in the paper you'll see that it gets a 700 on the Math SAT. Not stellar, but a big step up from GPT 3.5's score of 590.

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

As a student who has just taken AP English Language, I suspect it could be that the instructions on that exam for how to write the essay are vague, and that the essays are different in style from other school essays. For example, an FRQ for the AP Environmental Science exam gives specific instructions on how to answer the question:

2. An offshore wind farm project using turbines to generate electricity is to be built along the Atlantic coast of the United States. It will be located about 13 km from the coast in water with an average depth of 10 m.

(a) Describe one environmental benefit associated with an offshore wind project.

(b) Identify and describe one potential economic effect of an offshore wind project.

(c) Describe one additional way, other than wind power, that oceans can provide renewable energy for the generation of electricity.

But the AP English Language exam just tells you to "develop your own position" on the topic using "appropriate, specific evidence:"

In her book Gift from the Sea, author and aviator Anne Morrow Lindbergh (1906–2001) writes, “We tend not to choose the unknown which might be a shock or a disappointment or simply a little difficult to cope with. And yet it is the unknown with all its disappointments and surprises that is the most enriching.”

Consider the value Lindbergh places on choosing the unknown. Then write an essay in which you develop your own position on the value of exploring the unknown. Use appropriate, specific evidence to illustrate and develop your position

However, there is still a specific way we (or at least my class) were taught to write the essay. For example, in an argument essay we were supposed to (among other things):

• Have specific evidence from the news, our personal lives, etc. (granted, I don't see why GPT-4 couldn't do this, but when I asked it to write the above essay it didn't include such evidence until I specifically asked for it)

• Qualify our statements (i.e. "because exercise can improve one's health, P.E. classes could be a good idea to have in schools" instead of "because exercise improves one's health, P.E. classes are a good idea to have in schools")

This is different from what I was taught in other classes (e.g. AP U.S. History, on which GPT-4 scored between the 89th and 100th percentile), so I suspect that GPT-4 writes a poor essay by emulating essays from other classes (and not getting specific enough instruction in the prompt as to how to write AP English Language essays).

Expand full comment

Alternatively, Zvi says in the current GPT4 roundup: "my model was that the English Literature and Composition Test is graded based on whether you are obeying the Rules of English Literature and Composition in High School, which are arbitrary". https://open.substack.com/pub/thezvi/p/ai-4-introducing-gpt-4

Expand full comment

The lack of specificity in the directions is an interesting idea. Also the fact that these are less factual and more about taking an "interesting" stance, particularly one informed by life experiences. I would expect GPT-4 to be especially bad at this, given that it was specifically finetuned not to pretend that it's had "life experiences".

Expand full comment

On the AMC-10 and AMC-12, all the scores indicate that GPT-x gets between 5 and 10 questions correct, out of 25. This is a multiple-choice exam with 5 options.

One possible explanation is that there's no real improvement in GPT's capabilities on the exam, and no real difference between the two exams; GPT ends up giving the answer that has the most-likely-sounding explanation, which could be roughly equivalent to educated guessing, and that strategy is not very sensitive to differences in difficulty. (Note that the AMC-10 and AMC-12 are so similar that they actually share some of their questions!) A test-taker that consistently eliminates 2 options and guesses randomly between the other 3 would have similar performance to GPT (they'd have a 7% chance of scoring 30/150, a 13% chance of scoring 60/150, with higher percentages in the middle and a total of 78% chance of scoring between 30 and 60).
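
For anyone who wants to check the guessing model, here is a minimal sketch in Python. It assumes the current AMC scoring of 6 points per correct answer, a test-taker who answers all 25 questions (so no credit for blanks), and independent 1-in-3 guesses; under those assumptions it reproduces the 7%, 13%, and 78% figures above.

from math import comb

N, P = 25, 1/3  # 25 questions, guessing among the 3 remaining choices

def prob_correct(k):
    # Binomial probability of exactly k correct answers out of N guesses
    return comb(N, k) * P**k * (1 - P)**(N - k)

print(round(prob_correct(5), 2))                             # ~0.07 -> score 30/150
print(round(prob_correct(10), 2))                            # ~0.13 -> score 60/150
print(round(sum(prob_correct(k) for k in range(5, 11)), 2))  # ~0.78 -> score between 30 and 60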

Expand full comment

This is a take on enlightenment you may not have seen before:

Enlightenment Is Obvious

https://squarecircle.substack.com/p/enlightenment-is-obvious

It probably works best if you meditate, but even then.

Expand full comment

Shouldn't you add a blurb to the beginning of Why Not Slow AI Progress to the effect that you didn't mean it to say you are against slowing down AI? Would really help address the issue.

Expand full comment

About train derailments...

When the news reported the third derailment of a Norfolk Southern train in 3 weeks, I thought, "The priors say that this many derailments by one company couldn't have been accidents. It was probably sabotage."

I had no idea how many derailments there are per year, but the news hadn't reported on any derailments in years, and everyone seemed very up in arms about these 3. And everyone says trains are a lot safer than driving. So I guessed Norfolk Southern might ordinarily have a derailment anywhere from once a year to once every 10 years.

Then I used Google.

Turns out there are over 1,000 derailments every year in the US (https://www.npr.org/2023/03/09/1161921856). They're at an all-time low just now; the average since 1990 is 1,704 derailments per year (https://ktla.com/news/nexstar-media-wire/nationworld/how-often-do-trains-derail-more-often-than-you-think/).

So there's nothing at all unusual about a big railroad company having 3 derailments in 3 weeks. It's just the news inciting panic and outrage by picking one railroad company that had one especially bad derailment, and shining the spotlight on them, and them alone.

But wait--if there are 1000 derailments per year, how safe is riding on a train?

There were 893 railroad deaths in the US in 2021, but only 6 of those were passengers. 617 were "trespassers" (not sure what counts as trespass), and hundreds got run over by trains at railroad crossings (https://injuryfacts.nsc.org/home-and-community/safety-topics/railroad-deaths-and-injuries). I myself have a friend who was nearly killed when his car was totalled because a railroad crossing gate wasn't working. So trains mostly kill people who aren't on the train, because very few trains are passenger trains. The claim "trains are safe" thus has two distinct meanings.

Passenger deaths per year were so low that I have to average over several years. I'm choosing 2016-2019: (2+9+6+1)/4 = 4.5/yr. I'm stopping with 2019 because travel was so much lower in 2020 & 2021 that we can't use that number very well.

The number of passenger-miles traveled during that time averaged 6.4 billion miles by Intercity/Amtrak (https://www.statista.com/statistics/185800/). Amtrak probably accounts for most passenger deaths, because I'm not aware of any other passenger trains operating in the US. Using Amtrak's miles travelled would be wrong if the train-passenger-death statistics include subways; but they obviously don't--the NYC subway alone averaged 48 deaths per year from 1990-2003 (https://www.researchgate.net/publication/23639439_Epidemiology_of_Subway-Related_Fatalities_in_New_York_City_1990-2003_vol_39_pg_583_2008); and that number was, strangely, higher during the Covid years.

Meanwhile, > 46,000 people die in car accidents per year in the US (https://www.forbes.com/advisor/legal/auto-accident/car-accident-deaths, expand a line in the FAQ), [THIS NUMBER IS OBSOLETE; I used 35,000 below] and people collectively drive a total of 3.2 trillion miles/year (https://www.thezebra.com/resources/driving/average-miles-driven-per-year).

(Some of these number are from secondary/tertiary sources, and I didn't check them.)

So the number of deaths per mile is

<<< ORIGINAL VERY-WRONG TEXT:

Car: 1.4375 deaths per billion miles

Train: .70 deaths per billion miles (but 1.0 if I go 1 year further back, to 2015)

So travelling by car is about twice as dangerous as travelling by train. That's small enough that I'd guess that travelling by car in good weather, while sober and awake, is safer than taking a train.

>>>

<<< REVISED

The main reason this was wrong was that I screwed up the division of 46000 by 3.2 trillion, getting 1.4375 instead of 14.375. I also revised it

- to use the more-recent 35,000 traffic fatalities per year rather than 46000

- to use the figure on https://www.bts.gov/archive/publications/passenger_travel_2016/tables/table2_1 of 32.6 billion passenger-miles travelled by train in the US in 2014, instead of the Amtrak-only figure

Car: 35000 / 3200 = 10.9 deaths / billion passenger-miles

Train: 4.5/32.6 = .138 deaths / billion passenger-miles

So travelling by car is about 79 times as dangerous as travelling by train, on average.

According to a web page with very poor citations (https://www.after-car-accidents.com/car-accident-causes.html), these causes account for about this number of traffic fatalities per year

35,000 total deaths

10,000 speeding (everyone is almost always technically speeding, so I don't know how they count that)

10,000 drunk driving

16%? =~ 7360 distracted driving, including cell phone

850 drowsy driving

700 running a red light

590 bad weather

29,500 due to the above causes

5,500 not accounted for by the above causes

So if the weather is good, and you're not "speeding", not drunk, not playing with the stereo or your cell phone or reaching over to the passenger seat for a bag of potato chips, not sleepy, and don't run red lights, you can expect to die at a rate of 5,500 deaths / 3,200 billion passenger-miles = about 1.7 deaths per billion miles. That's about 6.4 times safer than the average driver, but still roughly 12 times your chance of dying if you ride the same number of miles on a train. (Your chances of dying if you take the train are a bit greater than that, since you need to drive to and from the train station.)

>>>
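
For anyone who wants to check or tweak the revised arithmetic, here is a minimal sketch in Python. The inputs are just the figures quoted above (35,000 road deaths per year, 3.2 trillion car passenger-miles, 4.5 train passenger deaths per year, 32.6 billion rail passenger-miles), so treat the outputs as rough estimates rather than authoritative statistics.

# Figures quoted above; all mileages in billions of passenger-miles per year
car_deaths = 35_000
car_miles = 3_200          # i.e. 3.2 trillion passenger-miles
train_deaths = 4.5
train_miles = 32.6

car_rate = car_deaths / car_miles          # ~10.9 deaths per billion passenger-miles
train_rate = train_deaths / train_miles    # ~0.14 deaths per billion passenger-miles
print(car_rate, train_rate, car_rate / train_rate)  # ratio ~79

# "Careful driver" scenario: only the ~5,500 deaths not attributed to the listed causes
careful_rate = 5_500 / car_miles           # ~1.7 deaths per billion passenger-miles
print(careful_rate, careful_rate / train_rate)      # still ~12x the train rate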

Expand full comment

I grew up in a small midwestern city with 3 rail lines and 2 stations in it. Trains derailed all the time if the definition is "at least one bogey comes off the track and doesn't realign on its own thus stopping the train and requiring outside correction". It was usually a pair of bogeys across a coupling. Catastrophic derails like the one in Ohio were extremely rare. Only once in 20 years did I ever see one so bad a railcar actually fell over. Usually just a few wheels in the dirt and a lilting railcar or 2.

Expand full comment

I used to live in a town where the train went directly through the town for about 8 miles. And I don't mean "through" in a vague sense: I mean within 30 feet of the main road, and directly through the central business district for 1/2 mile. There was no barrier stopping people from walking across the tracks. Every year 2 or 3 people would get hit crossing the tracks. Usually they were homeless or drunk or similar. And the trains only came through town 3-4 times a day. People just hate waiting for trains to pass!

Expand full comment

Any more derailments and customers will switch to a competitor with a better safety record. I reckon that within a couple of years, some startup will have built a much more reliable railway next to the current network. /s

Have any mechanism designers turned their attention to railway franchises?

Expand full comment
founding

Almost all railroad customers in the United States are shipping freight, and either have insurance or know where to get it quickly. It is unlikely that a marginal increase in derailments would drive insurance rates high enough to seriously impact anyone's decisionmaking in this regard; the transaction costs of changing shippers would probably exceed the savings on insurance.

As for the small fraction of America's rail transportation that is passengers: sure, some of those might react to the media hype and switch to, what exactly? There's Amtrak, there's long-distance driving, and there's airline travel. Possibly intercity bus travel. Those are different enough that most people can't or won't easily switch. Some will, e.g. the rail enthusiasts who could easily afford to fly but used to consider a train journey to be a relaxing mini-vacation. Amtrak might miss them; Norfolk Southern won't care in the least.

Expand full comment

So a) Amtrak is probably a fairly small fraction of passenger train-miles in America - just New York has four other commuter rail systems (Metro North, LIRR, NJ Transit and PATH), and there are a lot of other systems around the country (Caltrain, sunsesetter, etc). I don't remember numbers off the top of my head, but I'd guess Amtrak is closer to 10% than 50% of all miles, so the 14-to-1 ratio sounds fairly plausible to me.

b) Most of the train deaths mentioned are people wandering onto the tracks (either careless drivers in rural areas or homeless people living in the tunnels in New York), not passengers. If you only count passenger deaths, you're significantly safer.

Expand full comment

I did count only passenger deaths above.

You've put your finger on one reason for the discrepancies pointed out by Kristian. My train passenger-miles figure counts only Amtrak, but my passenger-deaths figure counts all rail travel. The Bureau of Transportation Statistics' figure for passenger-miles per year for combined heavy rail, light rail, and commuter rail is 32.614 billion miles (https://www.bts.gov/archive/publications/passenger_travel_2016/tables/table2_1). That gives us only .138 passenger deaths / billion miles.

(The other reason is I did the arithmetic wrong.)

Expand full comment

Ah, so it's like the "summer of the shark" in 2001, where a highly-publicized shark attack on a young child caused the media to hype up every shark-related incident for the rest of the year (until Sept 11), giving the impression that all the sharks had suddenly gone crazy and going to the beach was tantamount to suicide.

In reality, the frequency of shark attacks and shark-caused deaths that year was not higher than usual, and was even significantly lower than the previous year. It's just that minor incidents which normally wouldn't make it past page 3 of the local fishwrap became national news.

Expand full comment

thank you for this. forgot I was curious about these stats too!

Expand full comment

(A) I used to play recreational ice hockey with a guy who headed Illinois' state team that investigates rail accidents; a couple of times he had to depart our game unexpectedly when his pager (which he had to bring to the player bench) started buzzing. (This was some years back when people in that sort of role had pagers; we also had a guy on the team whose day job was as an organ courier for transplants and he had three pagers.)

Whenever a train in Illinois "interacted" with another vehicle, Chip was the lead state investigator who the local police had to keep the crash site clean for. Naturally that job description generated some locker-room curiosity and he had some harrowing tales to tell. I asked him once how often the collision was the fault of the driver of the car/truck/bus and he instantly responded "at least 98 percent". He said that even sober drivers were routinely "idiots" about rail crossings, and that those who'd had a couple drinks were just living on borrowed time particularly in rural areas where the crossings usually don't have physical barriers (gates). A particular problem, he said, was that people vaguely know that freight trains go slower than passenger ones and they tended to therefore be dismissive when seeing a freight train approaching. But in fact freight rail today regularly travels at much higher/steadier speeds than decades ago, and see item (C).

(B) My understanding is that the vast majority of derailments in the U.S. are of freight trains not passenger, if only for the simple reason that the vast majority of interstate train miles are freight not passenger. I suppose that ratio might shift a bit if we're including transit (subway) trains in the count....though I think that big-city subways actually have few derailments relative to the number of train miles that they generate each day.

(C) My elder brother spent a decade as a freight railroad worker, early 70s to early 80s, basically his 20s. He retained a lifelong interest in the sector generally and in freight operations specifically; to this day he for instance reads federal and state investigative reports of every accident that makes the news. (E.g. he's already read the one from the recent Ohio wreck.) He says that freight rail today is vastly better than when he was working in it, in multiple ways including being much safer. Basically "we routinely did shit that today would get anybody fired and many prosecuted". Also the modern freight locomotives have more and better emergency systems built into them.

He also has several awful personal stories about the fact that a long freight train even traveling at moderate speeds takes a helluva long time/space to stop itself. That basic piece of physics hasn't changed. "By the time you see someone or something on the track it's too late. You set all the brakes you can, and lean on the horn, and hope that they can move themselves off the tracks. And if you're getting close and are definitely going to hit your instructions are to lie down on the floor of your cab to reduce your own chances of being hurt."

Expand full comment

I have some experience re-railing a train. This was on crappy iron ore delivery tracks not passenger track.

I looked for a picture of the old re-railing tool but can’t find one. Here’s the definition

Camel back

Slang: an older rerailing device, also called a rerailing "frog". Used in pairs, one on each side to lift the wheel flanges of a derailed car and allow them to slide back onto the rail.

A derailment on the ‘main line’ was one of my first experiences with flow state. Trotting from hot spot to hot spot with a spike maul from 7:00 AM till sometime after sunset on a mid June summer day.

Physically exhausting, but in its way rather glorious for a young fit guy who had mastered the necessary skills.

Expand full comment

There is nothing better than the feeling of competently completing a heavy and skilled job.

Expand full comment

Your estimate is significantly different from the one provided here

https://www.vox.com/2015/5/14/8606195/train-safety-driving-crashes

where it says that there are 7.3 deaths per billion passenger miles in a car, vs 0.43 on Amtrak.

Also, I found a paper from the EU that said the accident fatality rate for cars per billion km is 1.9 and for trains 0.05. In another source it said these numbers were 2.5 and 0.09. (https://international-railway-safety-council.com/safety-statistics/)

This also agrees with my personal experience, in that I have heard of more people being injured in car accidents than in train accidents (I have never heard of anyone being injured as a passenger on a train).

Expand full comment

Your figures are better. I made 2 big mistakes:

- I used the train deaths from all trains, but the passenger-miles from Amtrak

- I divided 46000 by 3200 and got 1.44 instead of 14.4

But now, my figures say a car is 78 times as dangerous as a train, which is much more than your figures say! I think my 3.2 trillion passenger-miles figure is too low. It might be based on driver-miles instead of passenger-miles, or worker-miles instead of person-miles.

Expand full comment

The discrepancy might be hiding in how passenger miles are measured, which isn't transparent either for railways or for autos (and are we counting trucks?). All we can estimate by direct observation is the number of car-miles driven. That's why the feds give driver-miles instead of passenger-miles. I don't know how Amtrak counts passenger miles: do they add up the miles on everyone's tickets, or multiply train-miles by the number of people who have tickets on that train (a gross over-estimate)?

The more people who ride trains, the more money Amtrak makes. The more people who drive cars, the more the Federal Highway Administration has to spend. The FHWA is also motivated to overestimate deaths per mile, because deaths cause it trouble, and the death estimate directly affects the behavior of drivers. Amtrak is motivated to over-estimate passenger-miles travelled, and to under-estimate deaths per mile.

Expand full comment

Good job finding that discrepancy. I looked at the Vox article. It references a 2013 article (Ian Savage, "Comparing the fatality risks in United States transportation across modes and over time"). I can't explain the discrepancy, because Savage doesn't give any passenger-miles per year for any mode of travel anywhere in the paper, and doesn't even cite a source for his figures on motor vehicle fatalities per mile (p. 15).

<<< NEW TEXT:

You're right. I made an error in my arithmetic. See my later comment.

>>>

<<< ORIGINAL TEXT:

The only one of my numbers that might be off by an order of magnitude is "people collectively drive a total of 3.2 trillion miles/year", which I didn't fact-check with any other source. 3.2 trillion/year would be 9700 miles/year/American. That sounds a little high to me, but not an order-of-magnitude high. The FHA says every /driver/ averages 13,476 miles/year (https://www.fhwa.dot.gov/ohim/onh00/bar8.htm), though surely less than half of all Americans are drivers? So I'm sticking by my numbers until somebody shows me a calculation, and better-sourced numbers supporting it, that shows otherwise. (Or an error in my arithmetic.)

>>>

Because people travel about 500 times as many miles by car as by train, you'd have to know about 5000 people injured in car accidents before you could meaningfully compare that to the number of people you know injured in train accidents. (I didn't do the correct math there, just an order-of-magnitude approximation.)

Expand full comment

>surely less than half of all Americans are drivers?

You're underestimating America's compulsory car culture. Something like 90-95% of households have at least one car, and looking at, e.g., commuters (who are going to make up the vast majority of car "passenger"-miles), 75% commute in their own car alone (and another 9% carpool, so probably ~80% are "drivers").

https://www.census.gov/content/dam/Census/library/publications/2015/acs/acs-32.pdf

Also, the "average" commute is 27 minutes or about 20 miles one way (which seems absurd to me, yes), so at 40 miles a day 250 days a year you get pretty close to 9700 (though I suspect whoever tabulated that number probably ignored the "under 16 or works from home" caveat, but they're not really relevant to this comparison anyway)

Expand full comment

Only 62% of the US civilian noninstitutional population are workers, and the civilian noninstitutional population doesn't count people under 16 years old.

Expand full comment

It's actually 88% of U.S. households that own at least one car or truck, which is not particularly unique among large rich nations. The same figure for Italy is 89%, for Germany 85%, France and South Korea each 83%, etc. This is 2015 data which is most recent that I could quickly find.

Where U.S. car ownership sits at unique levels is in (a) the number of households that own more than one car/truck, and (b) the number of miles per average car trip. The U.S. comfortably leads OECD nations in motor vehicles per capita (only New Zealand and Australia are close), and also in car miles per capita per year (only Canada is close).

The cliche about the U.S. having had a unique car _culture_, though, is one which got established without a lot of contextual examination. In fact the rates of car ownership and usage initially rose at basically the same pace in every large wealthy society during the first couple of decades of mass-produced automobiles. But then, during the middle part of the last century, car ownership in the U.S. continued that rise while elsewhere it stopped. That change coincides with most of those societies (Germany, Italy, Japan, France, the USSR, the UK) having been devastated by World War II to a degree which the U.S. obviously did not experience. (Our cities were not reduced to rubble, we did not then need years to restart functional industrial economies, etc.)

Starting from just after those nations' primary recovery periods, so from the 1950s, the rise of car ownership and usage resumed in other large OECD nations in exactly the same manner and pace as in the U.S. during the middle of the last century. If you look at those curves for Germany/Italy/Japan/etc starting in the 1950s onward they are exactly the same as for the U.S. starting in the 1930s. The difference is basically in which large/rich nations were physically smashed by that war and hence had a big external pause imposed upon their adoption of the motor vehicle -- which had been proceeding exactly the same as in the U.S., and then resumed doing so after the pause.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

I know my personal experience isn't worth much as evidence, but I live in Europe, and especially when I was younger the ratio of train miles to car miles traveled by me and my peers was probably closer to 1/1 than 1/500.

This is just a guess, but I think part of the discrepancy may be whether one is comparing comparable journeys by train vs automobile. A significant percentage of car miles are driven on short, everyday trips on very familiar routes. It would seem meaningless to compare these trips to airplane or train journeys, so maybe the official calculations are based on comparable journeys.

Expand full comment

A good point to raise, but I'm confident that the official numbers aren't based on comparable journeys; they're meant to be all-inclusive.

"Comparable journeys" are hard to define, The total journey will be much longer by train than by car, both because many people have to take a long trip in a car to get to a train station, and because train routes are often very indirect (e.g., in nearly all American cities, you almost always have to pass through the hub station at the center of the city). Also I don't have time to break things down into more categories.

Expand full comment

I had a disturbing shower thought: our future AI overlords are being trained on the writings (or, less charitably, rantings) of crazy people. I remember this post from the slatestarcodex subreddit a few years ago https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most_of_what_you_read_on_the_internet_is_written/ pointing out that user-generated content on the internet (such as Wikipedia and Reddit and fanfiction, which feature very prominently in the training data sets for LLMs, with the former particularly highly weighted) is subject to a pretty extreme power law, with a tiny fraction of people responsible for a huge amount of content (rather than 80-20, more like 97-3).

Rather than reflecting humanity as a whole, or our curated writings, our AIs are going to reflect a small portion of folks with a compulsion to write enormous quantities online and engage in long discussions there. Not sure what to think of that... but this is how you get SolidGoldMagikarp, from redditors obsessively counting to infinity.

Expand full comment

This was one explanation for Sydney's behavior, though I'm not sure why that particular personality should emerge more often than other types of people over-represented online.

Expand full comment

My pet idea for research is speaker-contextual language models: fine-tuning a standard next-word/masked-word prediction model on data where we have metadata about various attributes of the author, with the expectation that this can create "author embeddings" of limited dimensionality. For generative models, that would let you explore "how would the response differ if the author was of a different demographic group"; for text analysis, it would let you "highlight the passages which show high variance conditional on speaker metadata".

Sadly, I haven't had enough time/energy to design and run proper experiments on this for over a year now.
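
In case it helps anyone pick this up, here is a minimal sketch, in plain PyTorch, of one way such author conditioning could look. Everything in it (the toy model, the sizes, and the choice of simply adding a learned author embedding to the token embeddings) is illustrative rather than an existing implementation; a real version would also need positional encodings, real data, and a training loop.

import torch
import torch.nn as nn

class AuthorConditionedLM(nn.Module):
    """Tiny causal LM whose token embeddings are shifted by a learned author embedding."""
    def __init__(self, vocab_size=1000, n_authors=50, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.author_emb = nn.Embedding(n_authors, d_model)  # the "author embeddings"
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, author_ids):
        # tokens: (batch, seq_len); author_ids: (batch,)
        x = self.tok_emb(tokens) + self.author_emb(author_ids)[:, None, :]
        seq_len = tokens.size(1)
        causal = torch.triu(torch.full((seq_len, seq_len), float("-inf")), diagonal=1)
        return self.head(self.encoder(x, mask=causal))  # next-token logits

model = AuthorConditionedLM()
tokens = torch.randint(0, 1000, (2, 16))
logits_a = model(tokens, torch.tensor([3, 3]))  # same text, author group 3
logits_b = model(tokens, torch.tensor([7, 7]))  # same text, author group 7
# "Which predictions change the most conditional on speaker metadata?"
print((logits_a - logits_b).abs().mean(dim=-1))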

Expand full comment
founding

This has always been the case. "humanity as a whole" has been largely illiterate, with a tiny percentage responsible for the majority of public written content.

Expand full comment

Reminds me of one of my favorite Chesterton quotes:

"It is quite easy to see why a legend is treated, and ought to be treated, more respectfully than a book of history. The legend is generally made by the majority of people in the village, who are sane. The book is generally written by the one man in the village who is mad."

Expand full comment

With all the talk of racist AIs, I'm actually surprised I haven't heard more about the likely non-representativeness of the training data for LLMs. I know Wikipedia editors skew heavily male and white, and Reddit is similar IIRC (though some other domains, like fanfiction, skew female).

Expand full comment

Given the translation capabilities, they seem to have opened it up to train on non-English text, which might still be majority male but can't possibly be majority white.

Expand full comment

White and male shouldn't bother anyone who isn't racist or sexist, but the original point about the type of people who fill the threads of twitter and reddit is a good one.

Expand full comment

Do you mean that the fact that the majority of authors are white and male shouldn't bother anyone who doesn't believe that sexism or racism exist?

If you believe that people are at least sometimes treated differently on the basis of their sex or race, then you should think it is very likely that the kinds of things that people in one group say might well be systematically different than the things that people in the other group say (even if it's just things like "someone treated me badly because of my race last week").

Expand full comment

The second paragraph is a prime example of a motte and bailey; "at least sometimes" is doing a very small amount of work, hardly enough to support the progressive strawman (which is actually unironically held by most/all high-profile progressives) that race and sex are such a be-all and end-all that you can't enjoy a series set in fucking medieval Europe without making the skin of the royalty black or brown.

There are also all sorts of games being played in the "treated differently" bit; lots of things are different without being bad. Conceptions about black women are different, but meh? So what if the AI says and does things that everybody already says and does, especially if those things are mostly harmless or mildly annoying.

Expand full comment

In the first paragraph you seem to be talking about a lot of things other than training data for language models.

> Conceptions about black women are different, but meh ? So what if the AI says and does things that everybody already says and does, especially if those things are mostly harmless or mildly annoying.

This I think is the right question to ask and the right place to push back. Though I think the claim that is relevant to the particular discussion here is not the cases where the AI says the same things that other people say, but rather the fact that the AI *doesn't* say the things that *aren't* represented in its training data (which may or may not be all that significant an issue).

Expand full comment

The real dominant influence on AI is American, Anglospheric, and English-speaking. The minor differences between racial groups within that hegemony aren't significant.

Expand full comment

These are all significant influences on the training data.

Expand full comment

For people who believe in an AI doom scenario, what percent chance do you estimate of it happening overnight?

I don't mean fast, I mean literally over the course of a single night, where one could go to bed with things seeming fine and simply never wake up. For some reason this scenario doesn't seem nearly as scary to me, rather almost peaceful.

Expand full comment

Seems very unlikely. Imagine you are like a demigod: as of midnight tonight you have the need to kill everyone, the ability to do so, and access to every internet-connected device in the world. You can make most electronics do anything. How would you kill everyone?

Widespread nuclear war isn't going to do it.

Expand full comment

There are two aspects to this: how quickly an AI can bootstrap its optimization power, and what its plans will look like after it does.

The first aspect depends on the recalcitrance (difficulty of being optimized) of the AI, and on how inefficiently current programs use current compute resources. Recalcitrance is more of a problem in black-box AI like modern neural nets than it would be if we had developed easier-to-understand intelligent algorithms. (Ironically, the closer we get to solving alignment, the more we will probably also lower recalcitrance to self-improvement.) So recalcitrance looks better than we could have expected. It might take a while for an agent built on top of an LLM to figure out how to tweak the LLM weights or train a replacement LLM; it's not the sort of thing we would expect to happen overnight. The inefficiency of modern computing, on the other hand, is probably really damn high. I won't go into too much detail, but it seems to me that many design decisions at all levels are made to keep computers easy for humans to understand, so it's possible that once a computer program starts rewriting itself it will find it can make much more efficient use of the hardware it's running on, and suddenly our one human brain's worth of compute looks like much more.

The plan that a superintelligence would move forward with after recursive improvement probably does not, I think, look like swift and silent death for humanity. I wouldn't be surprised if a fresh superintelligence were capable of amassing enough autonomous weapons to get the job done, but I don't think that would be the best plan. I figure it would probably want to manage knowledge and opinions of its own existence, and make use of humanity to develop a few more tools, before guiding us into extinction in a less messy way. That is of course me, a human-level intelligence, speculating on the planning of a superintelligence, so I would not be surprised if more optimal plans for power-seeking exist.

Expand full comment

I think it would be better to ask, "How do you think mental ability scales with computational power?", because that's the question whose answer, by itself, dominates the answer to whether FOOM happens overnight.

I think the naive assumption that mental power scales linearly with computational capacity is very wrong. It might seem right if you compare humans with other animals; a human has about 32 times as many neurons as a dog. But it seems grotesquely wrong if you compare humans to humans. A very smart person discovers or invents something new, or solves a previously-unsolved problem, or recognizes a previously-unnoticed correlation, almost every day; while the median human might invent or discover something once a year.

I would guess that most of the important, open social problems/questions today are ones to which someone without a large mathematical toolkit is no more able to contribute anything than a dog is able to untangle its leash. Any problem that could be optimized through dialectics has probably already been optimized better using statistics. The difference in problem-solving ability between philosophy and science is approximately infinite, yet it was produced by discoveries that could be written down on a few sheets of paper, with zero additional computational power.

The main difficulty in answering how agentive power scales with computational power seems to be in deciding how to measure agentive power. If you measure the difference in power between two agents as the ratio of the expected number of currently-open problems they could solve in the next month, and computational capacity by some measure of neural function, then a difference of 10% might temporarily give an odds ratio of (wild guess) a thousand to a million, because there's a lot of fruit that would be low-hanging to someone 10% taller than everyone else.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Very little. I think there is little chance that the first superhuman, unaligned AI will be as superintelligent as, e.g., Yudkowsky imagines it to be in his nanobot scenarios. Given that it is just slightly more intelligent than the most intelligent human, and that it wants to maximize the speed of its exponential capability gain, it has to start out by using the incredible amount of power and resources currently controlled by humanity, which means working through humanity, and we will be able to perceive that, since humanity works slowly.

(You asked "people who believe in an ai doom scenario", so for the record: I think there is some chance (around 40%) of it happening and majority of that chance from malicious humans instead of unaligned ai)

Expand full comment

Anyone following the banking crisis? I've been trying to figure out if we are in for another Great Recession—or whether this is actually the end of a year-long bear market. I honestly don't know.

Here are the best arguments I've heard for both.

First, the case for crisis:

1. US banks have *$620B* in unrealized losses on investment securities.[1] For comparison, total equity in those banks is only a little over $2T.

Patrick McKenzie has an explainer[2] of why this is bad and basically what caused it: banks put a lot of assets into Treasuries, then the Fed hiked rates, which devalues existing bonds.

That's what caused the recent failure of Silicon Valley Bank. But:

2. It's not just SVB. First Republic needed an emergency cash infusion. Credit Suisse failed (!) and was only saved by a merger with UBS.

A recent analysis[3] found: “The U.S. banking system’s market value of assets is $2 trillion lower than suggested by their book value… Even if only half of uninsured depositors decide to withdraw, almost 190 banks are at a potential risk of impairment to insured depositors.”

The key metric, that paper claims, is uninsured leverage = uninsured debt / assets. SVB was at the 99th percentile on this metric—but even 1% of banks is a lot of banks. (See the rough sketch after point 4 below.)

But wait, why is the market value lower than book value?

3. Assets accounted for as “hold to maturity” rather than “available for sale” do not need to be marked to market.

Basically if you're planning to hold on to a bond rather than sell it, you don't need to account for the decline in its value on your books. This isn't *irrational*, but it sure helps to hide major losses and make their full impact far from obvious.

4. What do the Fed and the banks think?

By one metric, they are preparing for runs: borrowing from the Fed discount window (a lending program to provide banks liquidity) is at an all-time high.[4] Even adjusted for inflation, this is still at 2008 levels.

One reason for this: “… the Fed decided to make it even easier for banks to borrow from the discount window. It began valuing the collateral it is offered in return for money ‘at par,’ meaning at its face value, rather than follow the usual practice of imposing a haircut.”

Related: “The Bank Term Funding Program (BTFP) was created in the wake of SVB’s collapse. It allows banks to take out loans for up to one year secured by government bonds, valued as collateral at full face value.” All this is another way of basically just ignoring losses.
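
To make the mechanics concrete, here is a toy sketch in Python with made-up figures (not the SSRN paper's data or method; the hypothetical bank and all numbers are mine) of the two quantities doing the work above: equity after marking unrealized losses to market, and the uninsured-leverage ratio.

# Toy sketch only: hypothetical bank, made-up figures in $B.

def mark_to_market_equity(book_equity, unrealized_losses):
    # Equity if securities were carried at market value instead of book value.
    return book_equity - unrealized_losses

def uninsured_leverage(uninsured_deposits, total_assets):
    # The paper's key ratio: uninsured debt / assets.
    return uninsured_deposits / total_assets

book_equity = 16.0
unrealized_losses = 15.0      # bonds bought before the rate hikes
uninsured_deposits = 150.0
total_assets = 200.0

print(mark_to_market_equity(book_equity, unrealized_losses))  # 1.0 -> equity nearly wiped out
print(uninsured_leverage(uninsured_deposits, total_assets))   # 0.75 -> run-prone funding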

So that's the case for an impending crisis. What is the case that everything will actually be OK, or even that we're in for another bull market?

1. The government intends to backstop everything. Their actions with SVB signal that in effect, unofficially, *all* deposits are insured, regardless of the $250k limit. By rescuing banks, and indirectly by restoring public confidence, they might contain the bank failures.

2. They can actually do this. The Fed has $8.6T in assets (compare to only $900B going into 2008).[5] This is much more than bank losses.

3. Whether or not the Fed lowers rates—all of this lending to banks, on very favorable terms, is going to expand the money supply. So that is going to drive asset prices back up.

What do you all think? And what have I missed or gotten wrong?

[1] https://www.fdic.gov/news/speeches/2023/spfeb2823.html

[2] https://www.bitsaboutmoney.com/archive/banking-in-very-uncertain-times

[3] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4387676

[4] https://www.bloomberg.com/news/articles/2023-03-17/what-is-the-fed-discount-window-why-are-banks-using-it-so-much

[5] https://www.federalreserve.gov/monetarypolicy/bst_recenttrends.htm

Expand full comment

"Basically if you're planning to hold on to a bond rather than sell it, you don't need to account for the decline in its value on your books."

What defines the value today of a bond that will yield $10,000 ten years from now?

The economic answer is either "what you can sell it for today" or "$10,000 discounted back by what you expect interest rates to be over the next ten years," which should be about the same number. What is the answer for accounting purposes — the price you paid for the bond? The amount it will pay in ten years? Something else?
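
For the economic answer, a minimal sketch (made-up interest rates, nothing bank-specific): the market value is just the $10,000 discounted at today's rate, which is why a rate hike knocks the price down even though the eventual payout is unchanged. My understanding of the accounting answer for "hold to maturity" bonds is that it's roughly amortized cost - close to the price you paid, adjusted toward face value over time - which ignores that move.

# Present value of a bond paying 10,000 in ten years, at two made-up rates.

def present_value(face_value, annual_rate, years):
    return face_value / (1 + annual_rate) ** years

print(round(present_value(10_000, 0.015, 10)))  # ~8617 at 1.5%
print(round(present_value(10_000, 0.045, 10)))  # ~6439 at 4.5%
# Same bond, same payout, roughly 25% lower market value after a three-point rate move.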

Expand full comment

I believe the question about all banks was put to Yellen in Congress the other day (I just read about this, don't have TV, so apologies if this is untrue) - at least in populist terms, a la: if an Oklahoma bank for farmers fails, will the depositors be made whole beyond the limit? She answered no, and the reason given was that SVB was an "important" bank (she didn't say whether "important" corresponds to its VC depositors being just smart enough to be dangerous and cause a run on it, or if she meant something else). Her honesty, if refreshing, will I hope be mercilessly deployed against the Biden administration and its California overlords.

Expand full comment

Well, she also can't legally say "yes" in that situation because Congress has not given her that authority.

Expand full comment

That would have been the prudent answer then.

Though in truth, we all know how it must be, and who will be served - and indeed this exchange has gotten suitably little notice.

But it seems like something historians in the future, if any, will note.

Expand full comment

Only Congress can explicitly guarantee uninsured depositors in said Oklahoma bank.

Expand full comment

Imho the main difference is that now there is high inflation, so we are farther from a deflationary spiral than in 2008. In fact, it might be that the banking crisis is what will cause inflation to fall to a desirable level (e.g. 2%) without too much unemployment. Or maybe not, of course.

Expand full comment

What would be the mechanism for that?

And if the Fed injects a lot of liquidity into banks, doesn't that increase the money supply and tend to promote inflation?

Expand full comment

>And if the Fed injects a lot of liquidity into banks, doesn't that increase the money supply and tend to promote inflation?

Exactly! Until last week, the Fed attempted to choke off the money supply via increases in interest rates, which is what caused the current banking crisis. Now it has apparently reversed itself and is back in the money-printing business.

Unlike in 2008, this crisis is entirely engineered by their deliberate anti-inflationary policy. And it is difficult to imagine inflation being a serious problem if incomes fall as dramatically as in the previous recession. So perhaps what will happen is something like the so-called Volcker shock of 1981 (only much milder), when the economy rebounded quickly after the Fed stopped hammering it with high interest rates. But I am just spitballing, please don't take this as a serious prediction.

Expand full comment

Whichever way it goes, economists will explain why it was inevitable.

After the fact.

Expand full comment

Amen.

I fully expect economists to start adopting noms de plume and publishing directly opposed predictions in different think pieces, so they can then cite a flawless track record of predicting the last N macroeconomic events.

Expand full comment

> Basically if you're planning to hold on to a bond rather than sell it, you don't need to account for the decline in its value on your books. This isn't *irrational*, but it sure helps to hide major losses and make their full impact far from obvious.

My understanding is not only that you "don't need to account for the decline", but in fact accounting rules REQUIRE you to not adjust the value for interest rate changes.

Expand full comment

Pretty good summary. In some manner of logic, the numbers are pretty irrelevant: if all the uninsured depositors decided to pull, then, like you said, a whole bunch of banks would fail. I can't imagine that happening. But what if 10% decide to pull, or just move their money to better-yielding areas? Well then quite a few banks will have some really crappy earnings for quite some time, but if the facilities set up operate smoothly, no big ones should fail.

One odd thing I heard recently. The Fed, marked to market, is down BIG on their assets. The Fed has been operating, I think since its inception, by making a profit. In other words, its operations are not funded by the Treasury. If it had to sell these assets, it would be insolvent as well.

Expand full comment

Why would they need to sell to make money they can generate out of thin air? It makes sense that their assets have lost value, though, since they bought up bonds when interest rates were low.

Expand full comment

UNSONG question???

In Chapter 6 it is written, “ At first she would dip into her meager savings to buy me physics books, big tomes from the library on optics and mechanics.” Do libraries sell textbooks? I have seen books sold at my library, but does this happen everywhere?

Expand full comment

Some of the biggest sellers of used books on Amazon are library systems. People donate lots of useless books to libraries. Storing them is expensive. It's much better, financially and logistically, for the library to sell the books they don't need.

Expand full comment

My local library has a year-round $1 discarded book section, that drops to $0.50 on special occasions. They occasionally sell off the outdated textbooks that no one wants to rent anymore.

Expand full comment

Double thank you

Expand full comment

For the last few years I've been stuffing my 'overflow' books into a collection box for some charity inside my local grocery store. The charity has warehouse space, so when another person comes along who wants to read a book, they can buy it from the charity at half the sticker price. I buy used books on Amazon if I can. A win for all involved.

Expand full comment
founding

Libraries here have sales a few times a year when they get rid of older books

Expand full comment
founding

I think "library" here is in the sense of "series of books" rather than "organization that lends books".

Expand full comment

I recently re-read Scott's post about Adderall and was struck by a passage he quotes from a paper studying the effects of ADHD medication on children. A seven-year-old in the study pretends to run into an invisible wall, and the researcher interpreted this as potential psychosis and discontinued the medication. Scott wondered, "Have these people ever seen a child?"

This led me to the shower-thought "HAVE those researchers spent meaningful time with children?" I personally never spent much time with little kids until I had them. It struck me that few of my PhD friends have kids, or if they do, they were born many research papers into their careers. It would follow that a lot of people generating research about kids might not have much exposure to the delightful unpredictable weirdness of children, particularly informal interaction with children who aren't research subjects.

Unless we make them - do we? Some people are naturals with kids, but is it a routine thing to make students interested in this kind of research hang out with kids and build the kind of rapport that might help, say, distinguish imaginative play from psychosis? Are any of you this kind of researcher, and did you intern at a preschool or something, either voluntarily or as a requirement?

Expand full comment

I have posted this before on this blog in response to another comment, but this cartoon encapsulates this issue perfectly - "Kids in the Mist" recess episode.

https://www.youtube.com/watch?v=m2h3LDIWcqU

Expand full comment

That is a great link.

Expand full comment
Mar 22, 2023·edited Mar 22, 2023

The entire show is really good. I mean at 300+ 10-min episodes (they used to air 2 at a time on Saturday mornings in my childhood :p), they can't all be good, but on the whole I really enjoyed rewatching them all as an adult. There are a few really on-point trenchant ones, kind of like this one. In particular, check out "Recess is Cancelled".

Edit: I'm pretty sure they're all free online, many on youtube...but if you have Disney Plus you can just stream them; it's a Disney product.

Expand full comment

The said quote was

> A spontaneous report from the manufacturer of Strattera (atomoxetine) described a 7-year-old girl who received 18 mg daily of atomoxetine for the treatment of ADHD. Within hours of taking the first dose, the patient started talking nonstop and stated that she was happy. The next morning the child was still elated. Two hours after taking her second dose of atomoxetine, the patient started running very fast, stopped suddenly, and fell to the ground. The patient said she had “run into a wall” (there was no wall there). The reporting physician considered that the child was hallucinating. Atomoxetine was discontinued.

In this specific case, this wasn't some researcher, sounds like, it was the kid's normal doctor. I don't know the doctor's background, but I assume that most doctors prescribing kids psych drugs have practices that focus on working with kids.

It sounds like the kid might indeed have had an adverse reaction. It sounds like there were multiple manifestations of unwanted behavior. Presumably two hours after taking her second dose, they were not with the reporting physician, but with her parents, at school, or some similarly-normal situation for a 7 year old, and this caused concern.

I understand that there are surely cases more like what you're imagining, but as funny as it is to chuckle at an invisible wall, I really think it's uncharitable to assume that this patient's behavior was normal, goofy behavior that wouldn't warrant discontinuing the meds.

Expand full comment

https://www.reuters.com/business/healthcare-pharmaceuticals/vaccine-makers-prep-bird-flu-shot-humans-just-case-rich-nations-lock-supplies-2023-03-20/

> Many countries' pandemic plans say flu shots should go first to the most vulnerable while supply is limited. But during COVID-19, many vaccine-rich countries inoculated large proportions of their populations before considering sharing doses. 'We could potentially have a much worse problem with vaccine hoarding and vaccine nationalism in a flu outbreak than we saw with COVID,' said Dr Richard Hatchett, chief executive of the Coalition for Epidemic Preparedness Innovations (CEPI), which helps fund vaccine research.

I'm trying to figure out the unstated part of Hatchett's reasoning. Does he think the cause of the 'potentially... much worse problem' with vaccine hoarding/nationalism is:

-governments' motivation (due to assessing bird flu as worse than COVID), or

-governments' capabilities (due to experience gained while hoarding COVID vaccines)?

Expand full comment

Africa was never really in danger because even the original covid was not any more damaging to the young than a flu, and post-Omicron, less so.

Expand full comment

It still puzzles me that people insist things woulda gone better if only we'd been better about (what boils down to) vaxx equity. Hard to square with frontline accounts like Patrick McKenzie's*, where the issue was inefficient allocation of supply *exacerbated by* such impositions - both when supply was actually scarce, and after we had enough that shots started getting thrown away due to expiring. (And, according to him, the rollout actually would have ended up being more equitable without formal equity concerns. So we couldn't even do that part right.) Maybe this doesn't mean horseshoeing all the way around to "let's sell vaccines to the highest bidders, free market will solve it!" woulda been optimal, but I think pretty much no one claims the USA rollout decked us in glory.

Trebuchet is also correct to note that this is both currently impractical and also political suicide: if any government had made *serious* efforts to, like, defer their doses to Africa...especially early on, when the situation really was a lot higher stakes...well, I'd expect the next election to play out accordingly. Nevermind the correct solution of "encourage supply abundance instead of restricting supply", as with so many other modern failings.

So to the extent it's one or the other, I'd say more along the capabilities line. We did gain experience - in being incompetent, and finding novel ways to pass the buck to Systemicism. Somehow we muddled through anyway, and no one was really held accountable, thus this looks like a less risky approach in future pandemics. Motivation would I think require an assumption that government assessment of risk is reliable and appropriate, which...I don't think we saw with covid, nor monkeypox. Some argue that everyone will stop bullshitting if we ever get something Really Dangerous - that 50% IFR or whatever should be alarming - but I'm cynically skeptical.

*super long but Worth It: https://www.worksinprogress.co/issue/the-story-of-vaccinateca/

Expand full comment

> Some argue that everyone will stop bullshitting if we ever get something Really Dangerous - that 50% IFR or whatever should be alarming - but I'm cynically skeptical.

I am going adapt your cynical skepticism into the first part of a non-rigorous three-part forecast.

Consider ongoing/retrospective negativity about 2020-2021 COVID lockdowns/vaccination as a proportion of the US adult population, P. My forecast is that Americans' opposition to lockdowns/vaccination for the 50% IFR pandemic would decrease to 70% of P - for up to six weeks. However, I then forecast it would plummet to 20% of P, and remain as low or lower until the pandemic is over (hopefully there is an 'over'). Once the pandemic is over, I predict it would creep up to 30% of P over a period of years.

Edit: Thanks for the link, it's the sort of thing I like to read; I read it a few months ago and found it interesting.

Expand full comment
Comment deleted
Expand full comment
Mar 20, 2023·edited Mar 20, 2023

You could imagine a scenario where giving away a large percent of your vaccine doses is, in expectation, better for your own populace in the medium term than giving those doses _to_ your populace. What is much, much harder to imagine is a broad enough swath of the population understanding and accepting this scenario (or maybe more charitably: the relevant organizations recognizing and effectively communicating the scenario) to the point that politicians act on it. And I agree that expecting a government to do it against the will of the populace is pretty unreasonable.

Expand full comment

Even if that's true, it's not reasonable to expect anyone to believe it given that liberal politicians and their supporters are wont to say such things even if they're not true (or there's a lack of evidence for the claim). Especially because they're calling it vaccine EQUITY! Equity is an ideological term that essentially means taking resources and power from white men and giving it to women and minorities, and it's something that people on the left support no matter what. So I think you would actually have to be fairly gullible to think that this specific instance of the thing being called 'equity' was actually in the interests of those ostensibly losing out because of it, because we know they would be saying this is true regardless of whether it is or not.

This is a clear example of how far-left ideological creep into mainstream liberalism has eroded trust in experts and institutions to the detriment of everyone, and instead of learning anything from the past 8 years (i.e. the election of trump as at least partly a protest vote, through to covid and its aftermath), the left have simply doubled down on this and blame trump for being 'divisive'.

Expand full comment

It seemed like the left was ready to learn some lesson from the Trump election for a couple of months. There was some real soul searching for a bit. And then, as a result of the searching, they just decided "nope, I was right all along, 40% of the population are irredeemable morons with zero points to make," and moved on to obsessing about fascism, etc.

Expand full comment
Mar 20, 2023·edited Mar 21, 2023

I've come to think that continued masking to avoid illness is a net harm to society and I'm curious to hear objections.

[Edit - to clarify, I mean a net *health* harm if masking is *successful* at reducing, but not eliminating transmission.]

Last weekend I attended two events in my New England college town: a show at the botanic garden and a high school musical. The garden show drew predominantly retired people, perhaps 80%, and about 70% of the attendees wore masks. The musical's crowd was far more varied - perhaps 5-10% older members of the community, but somewhere between 30-35% of the attendees were masked.

The December 2022 New Yorker article "The Case for Wearing Masks Forever" [0] demonstrates that influential public health figures still advocate for broad masking, usually because of the threat illness poses to vulnerable populations (and occasionally, because "a lot of anti-mask sentiment is deeply embedded in white supremacy"). By contrast, I think most people shouldn't even be masking voluntarily. It makes sense for the exceptionally vulnerable to protect themselves as they see fit, and people close to them as well. But the general population shouldn't be masking as source control any longer, because limiting the circulation of garden-variety viruses is likely to do more damage to more people in the medium to long run.

I've been nurturing this belief for a while: in May 2021, Zeynep Tufekci (hi, Zeynep!) posted an excellent piece by Dylan H. Morris on her Substack, "Novelty Means Severity: The Key To the Pandemic." [1] I was surprised the article didn't get more traction at the time: it filled in several important blanks, especially explaining why COVID-19 was so lethal among the elderly but left most young people unscathed. It's still valuable for that reason, but it also pertains to the ongoing discussion about how we should live now.

The gist of the article is that we have two different immune systems (broadly speaking): our innate immune response is non-virus-specific, while our adaptive immune response attacks particular viruses based on prior exposure (or vaccines). To quote Morris:

> Look at virus severity not by age but by age of first infection, and a pattern emerges: see something for the first time as a kid, and you'll most likely be okay (but only most likely). See it for the first time as an adult, and it can be nasty. The older you get, the worse it becomes to be infected with a virus you've never seen.

Novel viruses are especially dangerous. Since no one has acquired immunity, they spread faster and cause more severe illness. From that, I draw the conclusion that (most) people should prefer to develop their adaptive immune systems while innate immunity is strong - when they're young. Vaccines are the best method, but barring that, you want exposure to viruses in the wild so you're not defenseless as you age, when your innate immune system is weaker. (Obviously, one doesn't do this with Ebola.) Purposefully getting infected is a dicey proposition, so I've sort of settled on a mental policy of benign neglect - advising my kids not to take too many special precautions to avoid illness.

This implies that, outside of the acute phase of a pandemic [2], society-wide precautions (apart from vaccines) intended to severely curtail transmission are a net negative, by a large margin, because they deprive young people of the chance to build up their adaptive immune systems. Is this wrong?

[0] https://www.newyorker.com/news/annals-of-activism/the-case-for-wearing-masks-forever

[1] https://www.theinsight.org/p/novelty-means-severity-the-key-to

[2] My view: high-quality masks, properly-worn, prevent transmission. Mask mandates can amortize the havoc caused by a pandemic over a period of months, but don't reduce the total damage wrought, unless we actually eradicate the virus in question.

Expand full comment

Wow, here in Australia I've seen almost nobody wearing a mask for the last year or so. I had no idea it was still going on in the US to such an extent.

Expand full comment

I just see them on the fragile elderly in St Paul. Oh, hospitals and clinics still require them though.

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

Very few wearing them in Galway, Ireland where I live. You do see a few.

Here we were fairly compliant when we had to be, but when the rules were changed we stopped. [And I think the government took this into account and consciously didn't stretch it out, which would likely have created resistance.]

Expand full comment

To be fair, I live in an extremely COVID-hawkish part of the US. This is very unusual in huge swathes of the US, though as I mentioned in my original post, some public health advocates are still pushing for NPIs.

Expand full comment

(Zeynep reads Scott? That's neat, didn't know that...) Zvi's last covid post in late February highlighted a tweet - from Zeynep, strange coincidence - showing that all-cause mortality in the UK had finally converged between the vaxxed and non-vaxxed. Ergo, even for vulnerable populations, covid is now statistically just another of life's outrageously fortunate slings and arrows...partly cause there just aren't that many immunologically naive people left, partly cause treatment has advanced so much. It's still probably net positive to vaxx, since non-lethal illness can still suck a lot...but other interventions with way smaller effect sizes, like masking or hand-washing? Those would now need to have a lot more riding in the "non-covid-related benefits" column, since the "covid-related benefits" lemon doesn't have much juice left to squeeze.

I wouldn't necessarily leap all the way to "net harmful" though - because other respiratory diseases besides covid exist, which we don't have as-robust responses for. Flu vaxx remains a yearly crapshoot, for example, and we don't really have an equivalent there to paxlovid/interferon/whatever. As has been pointed out ad nauseam over the last few years, the West probably would be better off societally if we were a bit less cavalier about "eh, it's just a cold, I don't need to avoid going to work" and such...and, yeah, masking when one is sick is one such way to move towards the Pareto frontier there. This doesn't have to (probably shouldn't) take the form of an official mandate - bottom-up cultural norms matter a lot more for compliance and enforcement.

OTOH - you're only looking at the epidemiology angle, and I think if one really wants an honest assessment, it has to flesh out the other aspects too. So in the pro column, one would have to include "relieves 50% of makeup burden" and similar (several of my coworkers have said this is the main reason they mask now, to reduce social anxiety). In the con column, one would have to include the usual issues with speech impediment, making body language less legible, possibly retarding development of language faculties in young children. It's a lot harder to make objective empirical claims in these arenas, let alone aggregated society-wide judgements. Like many autists, I'm sympathetic to the arguments of e.g. Bryan Caplan that masks are "dehumanizing" - I'm already bad enough at social cues, so removing/distorting one major source of them is extra aggravating. But most in any given society aren't that impaired, so...

There's also the political angle. I think both sides of the "NPI" argument tend to oversell their case - they were more popular than one side likes to admit (school closures), *and* less popular than the other side likes to admit (arbitrary and capricious "essential workers" categorization). The current detente seems like an acceptable status quo, honestly...the directionality seems pretty firmly one way, with slightly fewer masking each day, even in paranoid places like SF. Those who still want to, we don't deride (publicly), and that's become largely reciprocal as far as I can tell. Given that there's almost definitely gonna be some similar type of tragedy down the line, I think it makes sense to preserve the fragile goodwill that remains...jumping the gun Permanent Midnight-style will reverse-stupidity too far, and likewise for mandated non-masking or whatever. (Look at how annoyed people are with companies trying to walk back WFH, for example.) Ergo, even if the final tally does come out unambiguously negative, the best society-wide course of action right now may very well be "do nothing". Pick your battles and all that.

Expand full comment

I agree with a lot of what you wrote, and Zvi's been my most important COVID source since he started his updates. As I mentioned in other comments, I think vaccines are in a different category of intervention, since they educate your immune system in a similar fashion to getting sick when you're young.

In retrospect, I probably should have pitched this in more abstract terms, but I couldn't resist leading with the seemingly counter-intuitive headline idea that masks might be selfish and net damaging (which, to be clear, I think is a real possibility).

In the most abstract terms, I'm modelling the severity of a given person's first encounter with a contagious disease as something like f(x) = [base severity of x] * [innate immunity coefficient], where the innate immunity coefficient has excellent values at first, maybe 0.20 (20% of the full effect), then degrades until some future point unknown. So at 30, perhaps, the innate system no longer provides protection. (Simplified, of course, with lots of uncertainty and made-up numbers.)
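
If it helps, here's that toy model in code (the same made-up numbers as above; a sketch of my own guesswork, not anything from Morris's piece):

# Toy model: innate-immunity discount starts around 0.2 and rises to 1.0 by ~age 30.

def innate_coefficient(age, floor=0.20, full_effect_age=30):
    if age >= full_effect_age:
        return 1.0
    return floor + (1.0 - floor) * age / full_effect_age

def first_infection_severity(base_severity, age_at_first_infection):
    return base_severity * innate_coefficient(age_at_first_infection)

for age in (5, 15, 30, 60):
    print(age, round(first_infection_severity(10.0, age), 1))
# 5 -> 3.3, 15 -> 6.0, 30 -> 10.0, 60 -> 10.0 (arbitrary base severity of 10)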

By contrast, the adaptive immune system provides protection based on whether or not you've been exposed to the disease (or its vaccine) before, and provides varied levels of protection based on how long ago you saw it last, how many times you've seen it, how much it's evolved since then, etc. (Scott's post on diseasonality enlarges on some aspects of this - https://astralcodexten.substack.com/p/diseasonality)

Our pre-COVID equilibrium involved most people getting exposed to a lot of germs: sometimes major doses and we got sick, sometimes minor doses that fell short of triggering illness, though those minor doses may have variolated us as well. By the time our innate immune system wore down, we had seen a lot of illnesses and so had a substantial "library" of protections. So we might still get sick as we aged, but the adaptive immune system mitigated the worst effects of those illnesses. The viruses that cause "the common cold" might be quite virulent if your body saw them for the first time with no defenses at all.

Fundamentally, my post is asking whether changing our equilibrium by reducing the amount of disease circulating - without effectively eliminating them - would be net beneficial.

Expand full comment

I think if we assume a meaningfully nonzero population of immuno-naive people, that changes the calculus, yes. This wouldn't include newborns, since mothers do impart some level of their own defenses.

To my best recollection, the "quasi-experiments" we got irl wrt flu and other ordinary diseasonals...do not really show this pattern? That is, increased NPIs like masking, distancing, handwashing did lead to lower waves of things like flu. (We can't say exactly how much lower, since the flu varies so much, but.) But then there wasn't some later "rebound" where everyone who dodged it at "the usual time" later got sick/sicker. Certainly this hypothesis got floated around anyway, because people love pattern-matching, but I'm fairly sure follow-up research didn't really bear it out as super significant. Might be totally wrong here, it's been awhile.

Covid obviously was a lot more novel - there's some evidence prior exposure to other coronaviruses predicted a good first encounter. And probably any future Big Deal pathogen is gonna be novel too. NPIs can also be done on one's own initiative; I was hopeful for awhile we'd stop being a shamefully inadequate civilization and do pan-coronavirus vaxx, mucosal vaxx, etc., but that never happened and seemingly no one's interested. Real failure to invest. So, I don't know, maybe it's a wash overall. Just another very hard coordination problem.

Expand full comment

For what it's worth, I met someone in a pan-coronavirus vaccine trial last summer. I haven't heard much in the way of follow-up, but there seems to be some progress on that front.

Expand full comment

Anecdote time!

My husband and I work in academia. We used to be scrupulous about mask wearing, but then we got lax and went "Eh, COVID seems to be going away, no point in masking all the time."

Then, two weeks ago, my unmasked husband spent one hour in the lab with an unmasked student, who then turned out to have COVID. He took an at home test and, wouldn't you know it, three days after the initial exposure, his COVID test lit up positive.

TL;DR: Now both my husband and I are COVID positive. Thankfully, we haven't had symptoms worse than an annoying cough and nasal congestion. But we can't go to work or anywhere else, and we have to mask around the house so that we don't infect our seven-year-old son. It's annoying as heck, and there's no way of knowing whether we'll have any lingering long-term side effects (hopefully not, since our symptoms so far have been so mild, but still). If I could go back in time, I would have told my past self (and my husband's past self) to wear the gosh-darn mask around other people, for God's sake.

Expand full comment

Why not just not test? In Minnesota here most doctors I know think people who test are idiots. When you got minor respiratory infections in 2017 did you mask and quarantine? I certainly did not.

And I treat any potential COVID infections I have had in the past year or so the same. I already got it twice, not really worried about catching it again. The ONLY result of any test for me is going to be hassle and annoyance. What do I care if I have COVID-19-2023 or some other slightly different illness?

Expand full comment

COVID can still kill or severely hurt people who are elderly, with weak immune systems, on immunosuppressive drugs, etc. Just because I'm not at high risk of getting badly sick with COVID does not mean that I shouldn't take care not to spread it to others who may be more vulnerable.

I mean, come on, "I'm not at risk of serious illness/death so IDGAF about people who are" is really antithetical to living in a civilized society.

Expand full comment
founding

So can any other respiratory infection. We maintain the polite fiction that the common cold is completely harmless and never kills anyone, but only by changing the diagnosis to "viral pneumonia" if it gets bad enough to put someone in the hospital.

Again, why test? If the answer is that you are elderly and immunosuppressed but if the test comes back Not COVID you're going to say "well, then it's harmless and I can just ignore it", then no. The difference between current strains of SARS CoV-2 and e.g. OC43 is I think too small for knowing which one you've got to be actionable information.

Expand full comment

I'm sorry, John, but your comment is disingenuous. I never said other, non-COVID respiratory ailments are "completely harmless," but it's clear that COVID is vastly more dangerous to the immunocompromised and elderly than some random pre-2020 respiratory virus.

Look at what we had in 2020: nurses and doctors and other employees of hospitals and emergency rooms completely overwhelmed, hospitals running out of oxygen and ventilators, sky-high death rates *compared to baseline*. We *still* have higher-than-baseline death rates with this thing. Yes, on the margin you can argue about how many people died "with COVID" rather than "of COVID," but there's no denying this virus is dangerous.

I mean, if your point is "there's no bright dividing line between scary COVID on this side and every single other respiratory virus on the other side," then yes, I agree with you. My point is that the risk isn't an all-or-nothing thing, it's a slider or a dimmer switch, and right now, in the post-COVID world, the slider is set to a more risky/dangerous setting than it was in the pre-COVID world.

Expand full comment

I would be a lot more compelled by this argument if I thought that was even what 10% of the mask wearing holdouts are doing.

Expand full comment

I never really know what to think about antidotes like this, since my own experience is so different...this sounds less and less startling by the day, but I was the first at my workplace to unmask when mandates were lifted. Pretty bold at the time, given that SF remains "overly" cautious to this day, and I'm in retail...it's one thing to stop masking in an office with a dozen coworkers, another to stop masking in frontline work encountering hundreds to thousands of people daily. Yet I still have the best attendance record (possibly in the whole company, I've been told?), and it's not from powering through working while sick...it's from just hardly ever getting sick. Whereas the people I know who mask[ed] regularly got sick on the regular.

(I still offer my condolances, since it does indeed suck to be even mildly sick...ironically I'm writing this on one of my very few sick days!)

That's part of why the discourse has been so frustrating over these years...it's so YMMV. Your lived experience is as valid as mine, yet they point in opposite directions. Not just on the object-level question of masking, but the whole suite of covid-related behaviours (assuming one values intellectual consistency, which seems safe to assume in this community). Nominally, a proper Bayesian would compensate accordingly for such personal bias...but I think few people really do this. It's just really hard to tune those internal epistemic weights against the natural state of placing extra belief in one's own qualia. Even the Rightful Caliphs among us like Scott and EY argue __all the time__ primarily from their own personal experience...and don't always qualify it with "Epistemic status: personal experience, YMMV" or whatever.

It does seem uncontroversial to state that, as one of those who primarily doesn't mask because I find the sensation profoundly uncomfortable, I really wish the market had stepped up to provide better alternatives for autists and the like. The weighted blanket of masks, as it were. Given how much R&D $ sloshed around, you'd really think this would be some low-hanging fruit...

Expand full comment

Sorry to be that guy, but: *anecdote

Expand full comment

I make it a point to intentionally spell or grammaticize things wrong on occasion, for levity's sake and cause it's an easy way to fake being slightly less generic. A good Bing never takes language for granite.

(ngl it amuses me to watch irl literate peoples' faces contort indecisively as they try to snap-choose between the social fox pass of issuing a correction, and the social fox pass of letting a linguistic provocation slide...works even better in mixed company where some catch it and others don't. I guess I'm an ass like that. Hence the penance of admitting the act online.)

Expand full comment

> It's annoying as heck, and there's no way of knowing whether we'll have any lingering long-term side effects

You can experience long-term side effects from any kind of infection. So what level of risk justifies mask wearing all of the time? In retrospect, should we have been wearing masks for our whole lives? It's annoying to have to wear masks for a brief period when you're infected, but wearing them all the time would arguably be more annoying, would it not?

Expand full comment

At this point, long Covid studies never seem to show lower than mid-single-digit percentage risks per infection, ranging up to low-to-mid double digits. Given that Covid remains ubiquitous (estimates I've seen are that people will average getting it once a year, unlike, say, flu, where you might go half a decade without getting it in the normal course of things without any particular precautions), it seems as if it reasonably justifies heightened precautions.

Granted I seem to find masks less uncomfortable than many, so it's an easier lift to that extent. (Which isn't to say I'd mind finding out they were unnecessary.) But a 5% chance per bout, compounded over one bout of Covid per year, comes to something like a 40% chance of a long-term problem over the course of a decade. A 15% chance per bout (which may be more the case for me, given age and other factors) is more like 80% per decade.
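
For anyone who wants to check that arithmetic, it's just compounding an assumed independent per-bout risk over one bout a year (the per-bout number and the independence assumption are exactly the uncertain parts, so this is a back-of-the-envelope sketch, not a claim about the studies):

# Chance of at least one long-term problem after n bouts, assuming independent risk per bout.

def cumulative_risk(per_bout_risk, bouts):
    return 1 - (1 - per_bout_risk) ** bouts

print(round(cumulative_risk(0.05, 10), 2))  # ~0.4 over a decade at 5% per bout
print(round(cumulative_risk(0.15, 10), 2))  # ~0.8 over a decade at 15% per bout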

And I'd be very glad to learn that those are overestimates. They may be. But I'm kind of waiting for published studies with some rigor, rather than the arguments from incredulity and normalcy bias I tend to encounter. It *might* not be that bad -- the quality of the existing data remains pretty bad three years in, when I'd hoped we'd be doing better at identifying and classifying the problem. But I'm reluctant to bet my health just on the fact that it would be *really* inconvenient if what most long Covid studies seem to point to turned out to simply be true, so it must not be.

(Obviously it would be inconvenient. But the universe isn't organized for my convenience.)

As for "doing X" our whole lives, there are lots of things like that, from seat belts to tooth brushing to assorted meds, that we do our whole lives or accept the (possible, but not guaranteed) consequences. Plenty of people since cars were invented never wore a seat belt and lived long and uninjured lives, and likewise some blessed by genetics and maybe fluoride in the local water never needed tooth care. That doesn't mean I care to make the same bet.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Pre-pandemic, my thoughts on masks were basically that if, as an adult, one is symptomatically sick, especially with coughing/sneezing symptoms, one should wear a mask in public, and otherwise not. I still think this is a good idea. Children do not have any trouble getting themselves sick, as anyone with young children can attest. They are going to encounter basically everything no matter what you or the rest of society does. They aren't really capable of wearing masks properly, so it won't work for them mostly.

So adults wearing masks when symptomatic will have some beneficial impact on infecting other adults, which is a positive, and I very much doubt that this norm will impact childhood exposure to most illnesses, so the negative you are concerned about it is something I find to be pretty unlikely.

-edit- the children part is assuming that your children spend some non-trivial amount of time among lots of other young children. The socializing aspects of this seem so overwhelmingly positive that avoiding it for the sake of avoiding illness seems like a very bad idea, and if you don't have your children avoid other children, they will get sick.

Expand full comment

>Pre-pandemic, my thoughts on masks were basically that if, as an adult, one is symptomatically sick, especially with coughing/sneezing symptoms, one should wear a mask in public, and otherwise not. I still think this is a good idea.

Basically no one did this but Asians pre-pandemic. Like what 1 person in a thousand? Less?

Expand full comment
Mar 22, 2023·edited Mar 22, 2023

You're right, but that has very little bearing on whether or not it's a good idea

Also, did Asians actually wear masks like this or were they just much more likely to wear masks all the time in public, regardless of whether or not they were sick?

Expand full comment
founding

Some Asians wore masks all the time for air-pollution reasons, but mostly it was a norm that if you were coughing and sneezing you should mask up out of courtesy to others. I do not believe there was ever a norm, or even a strong minority opinion, of wearing masks all the time in public even when health and air quality are both good.

Expand full comment

I should have noted in my first response that I'm really asking about pre-emptive, asymptomatic masking. I feel like Japan, South Korea, etc., have shown us that symptomatic masking does not have the health downside that concerns me.

Expand full comment

Right; as a parent of a child in daycare, getting every last strain of the common cold over the course of a winter really does suck; I know it's largely unavoidable for me because small children, but if sick people masking (or staying home, like they are generally advised to do) means fewer other people get a cold (or what have you), that's very likely a net positive outcome

With Covid, people were rightly focused on deaths, but getting sick is unpleasant (often very unpleasant; having a fever sucks); if we can reduce the frequency with which people get sick, that's a positive.

Expand full comment

I hate getting sick too (current parent, though I'm past the daycare stage), but my understanding is that this may be selfish and a net negative until we have vaccines for every illness that can cause serious complications later in life - and one thing we learned from COVID is that run-of-the-mill coronaviruses can be deadly if you get them for the first time later in life.

I don't know exactly where the equilibrium is, but given what we know about the immune system, reducing social transmission of a given virus without eliminating it increases the chance that more people will get it for the first time when they're old.

Expand full comment

Is that really a consideration for fast-mutating rhinoviruses or your bog-standard bacterial infections like strep? Seems like most of the bad viruses that we can have vaccines for, we do (or try to, like the flu), and those that we don't or can't are best fought via preventing transmission. I see your argument; I just don't think it actually applies to very many viruses at all.

Expand full comment

Good question - I'm not sure, but the point of the paper I cited is that *novelty* makes viruses more dangerous. I don't think we have much information about the effect of a rhinovirus or strep on a naive sixty-year-old.

Expand full comment

Point taken, but the process of building the adaptive immune system doesn't end at a young age. For example, many people avoid chicken pox until well into adulthood, at which point they're at risk for shingles, which is far worse.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

We have an effective vaccine for chicken pox/shingles now. Nobody has to get shingles anymore.

Expand full comment

And shingles isn't the result of getting chickenpox late, it's the result of having had it in one's lifetime and the dormant virus later reemerging. While chickenpox is worse to get as an adult, I'm not aware of that being a greater shingles risk. And if you do manage to avoid infection entirely, your risk of shingles is AFAIK zero.

But yeah, someone my generation who managed to avoid chickenpox as a kid (I didn't) would just have been able to get vaxxed in their twenties. No need to get the disease to achieve immunity.

(I do wonder if the FDA should consider lowering the shingles vaccination age, though. Shingles rates below 50 have gone up a lot in the last couple of decades, and it's possible that's because there's less circulating zoster to reinforce recovered patients' immunity.)

Expand full comment

I bungled the chickenpox / shingles example, but I would also like to emphasize that I am strongly in favor of educating one's immune system via vaccines, and don't intend anything I'm saying to contradict that.

Expand full comment

Re: [2]

It appears that either that view is simply incorrect (at least for COVID?) or that it's leaning too heavily on "high-quality...properly worn" and that was not what actually happened. Given the public examples of health officials and government leaders who were found not wearing masks even while in public, I think the odds that masks are going to be used well enough to make a difference are negligible.

Expand full comment

I agree with this, in general, but expect that very early in a pandemic, when virulence is unknown and people are truly scared, adherence would be much higher. As the risks become clearer, I expect people recalibrate accordingly in whichever direction is appropriate.

Expand full comment

It's worth distinguishing between the effect of wearing a mask and the effect of mask mandates. If a good mask worn properly provides substantial protection, but many people ordered to wear a mask will provide only token obedience, the protection will be much smaller. If many people ordered to wear a mask will do so carelessly but be less careful of exposing themselves because they think the mask protects them, the effect of the mandate might be zero or even negative.

I conclude that mask mandates are probably a mistake, but it makes some sense for me to wear a mask when in crowded indoor conditions with lots of strangers provided that wearing the mask is not significantly costly, as in some contexts, such as a meetup I am hosting, it is.

Expand full comment

Counterpoint: it prevents facial recognition AFAICT

Expand full comment
Comment deleted
Expand full comment
founding

a. There is a wide range of government behavior between oppressing people with no regard to identity and always and only targeting genuine criminals

b. This response makes zero sense. You can't tell someone on security camera footage to take off the mask.

Expand full comment

Any Cracked.com fans here? I could use some help.

I am trying to find a series of videos made by Cracked.com back during its heyday. The videos consisted of a staff member discussing some pop culture topic in depth, with the conceit being that the staff member was trapped in an underground bunker and was being forced to make these videos. I'm not thinking of After Hours, Obsessive Pop Culture Disorder, or The Spit Take, although all of those were somewhat similar. I seem to remember the host being someone who didn't have a very high profile among the Cracked crowd, although he was a young white guy, like many of the regulars.

Any ideas?

Expand full comment

It's Josh Sargent's series Reckless Disagreement.

Expand full comment

Also found this but might need a VPN to access it https://www.amazon.com/-/es/dp/B076KLKK3G

Expand full comment

It seems to be gone. Here's the only episode I could find: https://www.youtube.com/watch?v=p0oOkggVI5E

Not to milkshake-duck the guy, but JF Sargent became persona non grata at Cracked around 2018 when it was revealed he'd raped a former girlfriend (something he admitted to). I'm surprised his byline is still on Cracked at all.

Expand full comment

Would appreciate some no-nonsense advice around a lingering bad shroom reaction.

Last Sunday (6 days ago) I took 4g of shrooms (MVP strain). The trip itself was meh, had periods of euphoria but towards the end had kind of a sad crash. But nothing I would describe as so terrible. I've done shrooms before and had a good experience, including a couple months prior taking 3.5g of MVP and having a positive experience.

Since then, I've been feeling really quite poorly. Some physical symptoms such as chills/feverishness, which seem to be passing. More importantly my headspace has been very bad. A sort of headache-y panicky brain fog has been following me most of the time (trying to manage through this at work), and more unpleasantly, the fog seems to come with a sort of "black cloud" where I feel incredibly crummy/sad/panicky, I have to just lie on the floor and sort of freak out, and I can't avoid the intrusive thought that I just cannot live like this permanently, my life is over, what have I done to my wife?, etc. It makes no sense but the cloud is just so powerful. I've had a few welcome intervals of lucidity where I feel "like myself again" lasting for half a day or so. I was meh this past weekend but then last night (Sun) was feeling rough. I took a Xanax for the first time and that gave me a welcome calm down.

Anyway, I'd appreciate any advice. Am I going to come down from this? I'm on day 8 now. Am I going to just have to be medicated up the wazoo going forward? Aren't drugs supposed to fade? I'm not normally like this and have no history of mental issues in myself or my family. There just doesn't seem to be good reliable info out there on this topic. Thank you in advance.

Expand full comment

I had a similar experience with a 5g mushroom trip several years ago (not precisely the same presentation, but like you I was experiencing PTSD-like symptoms and thought I had permanently screwed up my brain).

I recall feeling intense daily episodes of despair for at least ten days after the trip; then more attenuated daily episodes of anxiety for at least a month.

By five to six months out I was most of the way back to baseline, with intermittent anxiety. By 2-3 years out I was able to "forget" that the trip had ever happened, more or less.

I hope your road to recovery is more rapid than mine. I would bet that you will be okay in the medium term. The short term might be unpleasant.

(Don't do drugs, kids!)

Expand full comment

Mind if I follow up, did you experience brain fog/inability to concentrate? I just don’t have my normal focus or mental acuity, and there’s kind of a mild headache often, in addition to the bad panicky emotional state.

Expand full comment

Yikes, that sounds rough. Very glad you are feeling better, sorry that it took so long. I will have to see how it goes with me.

Expand full comment

Thank you to those who replied. Went to the doctor, he thinks I got an acute stress reaction from whatever I experienced on my trip and so I've been having a rolling sequence of panic attacks (possibly an oversimplification). Hopefully it will get better as the source of the stress is no longer present, in the meantime I'll have to manage it episodically with a pill. If it doesn't get better then maybe I accidentally triggered anxiety in myself in which case I would go see a psychiatrist. So that's the plan now.

Expand full comment

This is what I was thinking when I first read your post, as somebody who has suffered from panic disorder in the past. The good news is that panic disorder is very easily treated and has a great success rate. I'd recommend seeing a CBT therapist sooner rather than later, since this kind of thing feeds on itself. The basics of it though are understanding that your body is producing sensations that cannot hurt you and don't require a response, and then practicing feeling those sensations without reacting.

Expand full comment

I would strongly suggest gently exploring your panic. Get a good therapist preferably of the somatic leaning, and don’t take such huge trips; microdose. Climb down the canyon instead of throwing yourself off a cliff.

This is absolutely not medical advice, just me, and I have no qualifications whatsoever.

Expand full comment

Thank you. I'll look into this. Likely I am not looking to get back in the water right now, but it would be good to talk to someone.

Expand full comment

Seems like the doc gave you a sort of diagnosis and a pill for when you feel worst, and the plan is to let tincture of time do the rest. I really do recommend that, rather than relying on this episode receding into the past, you try hard to find someone to talk through the experience with. Many therapists are willing and legally able to do teletherapy across state lines. To find someone optimal, you'd google "psychedelic integration" and "psychedelic harm reduction." You can find out more about the people from their web sites, or their listing on Psychology Today.

Also, I have some moderately critical thoughts about the approach the doctor you saw took. However, the thoughts are not earth-shaking, and getting into that subject may be sort of poking the bear. So I am going to zip my lip about the matter, unless you ask to hear more. (I'm a psychologist, by the way.)

Expand full comment

Thanks for this. That is something I'll have to look into. Truthfully though, I don't think of the trip as having "brought out such-and-such feelings to process." In fact I don't really remember much. So I'm not even sure what I would talk about except getting back to normal.

Expand full comment

I think your trip has brought forward something in you that wants to be recognized and you’re having trouble with that. I am a mushroom fan and have had experiences similar to this.

What has helped me is embracing the idea that whatever is going on is contained in me, and therefore in my control. Fear is the big enemy. Can you acknowledge and experience fear dispassionately? Yes, you can. You should probably find a person to guide you through this.

Expand full comment

You should know that psilocybin is the safest of the recreational drugs. It's safe on all 4 dimensions that are considered for the scale: harm to health, addictiveness, danger of illness or death from overdosing, & long-lasting bad reactions. Yep, it looks pretty good in all 4 categories. (I can't find the original graph I saw of this, but here's one from wiki: https://i.imgur.com/x6ZPBjK.png). I think it is very unlikely that you are condemned to keep feeling this way forever, or even for a long time. However, you are really hurting now, and I think you should find someone to talk over the whole trip with at length. A very close friend might be the right person, if they have the time to spend a couple hours a day for several days talking it through. Otherwise, you should see a professional. However, it's important to see someone who does mostly talk therapy, not mostly just psychopharm (because if all you have is a hammer, everything looks like a nail). Also, you want someone who takes seriously the idea of tripping for insight and self-improvement, and is not worried to death about the idea of someone taking shrooms. Ideally, you want someone who is familiar with the process of helping someone integrate the results of a trip.

It seems like the "sad crash" of your trip might be the part you should start talking over, since it's sadness that has followed you out of the trip. Really dig into that sadness at the end -- what was it about? -- were there images? -- are you feeling it now? I've had feelings follow me out of a trip that way. Once it was a huge long trippy train of thought that started in Grand Central Station, about how complex the modern world is, like a giant clock, gears within gears within gears -- doesn't sound that terrible, but it was actually sickeningly oppressive, and I thought about it unceasingly for, like, the next 8 hours, and afterwards it stayed with me for days -- but eventually settled down. Another time the thought that stayed was about the suffering of animals -- I won't burden you with the details, since you are sad anyhow -- and the sadness of that stayed with me for several days, eventually fading. And you know, both of the things I got preoccupied with are valid & real. I think it was good for me ultimately to have my nose rubbed in them. I did talk them over with friends, and that helped them recede into the background of stuff I know but don't think about all the time.

About getting some immediate relief: Benzos work well, but are habit-forming and really not a good idea to take. If you don't have a problem with drinking, I'd say a glass of wine is safer. A couple other ways of getting relief: Any vigorous exercise that's so intense you can't really have a long train of thought while doing it -- something like tennis is good, or just go out for a hard run, or do cardio at a gym. You will probably feel better for a while after -- that good, tired, runner's high. Any activity that's engrossing enough to take your mind off the subject for a while will give you a break -- but it will probably have to be quite demanding and engrossing, given how much your mind wants to ruminate about this stuff. You might need something on the order of helping 2 kids younger than 10 bake something -- karaoke -- umpiring a softball game -- leading a discussion or teaching a class.

I think you're going to be OK. Sending you good wishes.

Expand full comment

Thanks for this thoughtful reply. As said above, unfortunately I'm not sure what I would even talk about. I think I want to just take some more time to see how I feel and then reassess. And yes I'm trying to be very mindful of the dangers with these stopgap pills.

Expand full comment

See a healthcare professional. What is the downside?

Expand full comment

Potential downside is having 'patient had bad experience with illegal psychedelic drug' on your medical record. That could invite some medical professionals to have an unfavorable bias against the character of the patient.

It's not supposed to work that way, but doctors are people too.

Expand full comment

Yes, agreed. Also, while in general therapists are pretty liberal and tolerant of unconventionality, there exist some who are going to see using shrooms as no different from using crack or meth or heroin, and will begin thinking about psychopathy, etc. The other problem is that drug-oriented therapists diagnose most things as Prozac Deficiency Disorder. They will see OP as Depressed.

Expand full comment

Working on that.

Expand full comment

Specifically, see if you can find a psychedelic-assisted therapy practitioner, ideally one that works with psilocybin. They will have experience integrating/working with bad trips from the specific substance. (While they might not advertise it, in general they should be more than happy to help with your sort of issue without using more psychedelics.)

Expand full comment

Yes, agreed. I tried googling psychedelic integration, and lots of names popped up. Also try googling psychedelic harm reduction. Try to see one of those people if you can. But it's really not crucial. Any therapist who respects the drug experience you had and takes it seriously can help you talk through the good and bad parts of it, and what the trip left you with, and find your way out of constant rumination, etc.

Expand full comment

Is your internal narrative unusually quiet?

Like, for most people, it appears to be normal to have a constant internal narrative of what's going on; is that still happening, assuming it normally happens for you?

Expand full comment

No I wouldn’t say it’s quieter than normal, if I’m interpreting your question correctly. It’s just that instead of being energetic and benign like normal it’s foggy and panicky and dark.

Expand full comment

Hm. Look up "ego death" and see if the description fits. The part about "my life is over" seems to fit the experience, and some people find it to be terrifying. My impression is that most people notice a cessation of the internal narrative, which sometimes gets interpreted by the consciousness as "I'm already dead, my story has ended".

If it's ego death - for some (most?) people the experience fades over time. Others adapt to it.

Expand full comment

Thanks for that. I don't think that is what's going on, the "my life is over" thing was more of a "I made my brain screwy and messed up my life" thing than a trip insight.

Expand full comment

Does anyone have a reason I shouldn't believe we are doomed based on the problem outlined in this post:

https://www.reddit.com/r/slatestarcodex/comments/11vs9ss/empowering_humans_is_bad_at_the_limit/

In short, assuming 1. advances in AI (of even the non-agentic/AGI type) will increase the general capabilities of individual humans (and among such capabilities is the ability to do harm) and 2. the historical trend continues to hold true that potential for doing Y harm at capability level X exceeds potential to prevent Y harm at capability level X, why won't humanity very soon all be dead? It generally seems like empowering individual humans means evil individual humans are able to inflict proportionally greater harm when they so desire (case study: average harm inflicted with an AR-15 in the hands of a misaligned human vs average harm inflicted with a knife in the same hands). And what continuing to develop AI seems poised to do is empower individual humans toward the limit, thus moving the ability for misaligned humans to inflict harm similarly toward the limit. If anyone believes that, for example, the EY superpathogen scenario is a plausible doom, doesn't this problem necessarily come before that (i.e., before we even have to start trying to align AI agents), and also seem harder to solve?

Expand full comment

The best argument I can think of is that it is very hard to make accurate predictions about the future. Read up a bit on predictions about the future made over the last 50 years. Even very smart people generally don't get it right. No matter how good someone's eyesight, they can't see past the horizon.

Expand full comment

“the historical trend continues to hold true that potential for doing Y harm at capability level X exceeds potential to prevent Y harm at capability level X, why won't humanity very soon all be dead?”

Do you really think that follows? Even if it's true that technology increases the “harm done to humanity”, this doesn’t mean it will ever kill all humans. An all-out nuclear war wouldn’t do that. And in fact the opposite is the case: technology reduces harm.

Expand full comment

>Even if true that technology increases the “harm done to humanity”

This isn't what I said.

What is true, however, is that advances in technology increase the capabilities of individual actors, including for those individual actors to do harm if they wish. Obviously, at least until now, technology has been a net gain for humanity on balance. The problem lies in increasing the capability for malicious human actors to do harm toward the limit.

Expand full comment

The problem with all of these AI doom scenarios is that the specific mechanism of the doom is always handwaved away. It could be a "superpathogen", with no explanation of how it can violate the known laws of biology to kill everyone everywhere. Or it could be a "gray goo" nanomachine plague that violates the fundamental laws of physics. Or it could be mass mind control, with no explanation of how it bypasses everything we know about human psychology. Or it could be something as pedestrian as total control over the global economy, ignoring everything we know about economics... and so on.

I do agree that a malicious human armed with AI is much more dangerous than a malicious human armed with a calculator; but we have been dealing with malicious humans for a long time, and we're still here -- despite the fact that many of them have nukes. I would agree that we need to increase our vigilance and develop more robust ways of dealing with malicious or even merely careless people; however, I would not say that AI is a qualitatively new type of threat that demands that we panic.

Expand full comment

>It could be a "superpathogen", with no explanation of how it can violate the known laws of biology to kill everyone everywhere

A superpathogen wouldn't need to 'violate the known laws of biology' in order to threaten humanity with extinction. Measles has an R0 of ~20, rabies has a mortality rate of ~100 percent and can lie dormant in humans for long periods of time, and there are dozens of existing diseases to draw upon that are resistant or immune to modern medicine. The ingredients of a superpathogen with the infectiousness of measles and the lethality of rabies clearly already exist in the world without violating any laws of biology or physics.

>despite the fact that many of them have nukes.

Has anyone actually been reading my discussion of nukes? There are fewer than ten organizations in the entire world that have access to nuclear weapons, and they all have extremely concrete reasons not to use them. When 8 billion humans have access to the destructive capabilities of nukes, the reasons why we haven't nuked ourselves yet won't apply to that new situation.

Expand full comment

Rabies is spread by biting or scratching from an infected animal. Clearly you would need to show that you can combine a highly contagious airborne virus with a high mortality rate, and by and large the latter precludes the former. To work, the infection would have to stay dormant while the carrier remained infectious for a long time.

I haven’t seen any evidence yet, and I don’t expect any, that the AI can do original work either. And even if an AI could work out how to splice together the ingredients for this kind of attack, the person still needs a lab. Potential world destroyers can get the instructions for a nuclear bomb right now; what they lack is the materials.

Expand full comment

Not having seen an example of something is different from something violating a scientific law. E.g., massive photons haven't been seen but wouldn't violate any law of physics, unlike faster-than-light particles.

Expand full comment

My counterpoint would be that: for every homicidal maniac asking AGI to give him a viral superkiller sequence, there is a benevolent person or organization asking AGI to do super-effective viral monitoring and/or create a super-vaccine that protects against every hypothetical pathogen.

Yes, there will always be homicidal maniacs, and yes, AGI and other tech advances are giving them more power, but that power isn't _only_ going to them, and I don't think that offense will advance so much faster than defense that they will be able to drive the human race extinct. Cause higher casualties with their schemes? Sure. Maybe events will start killing low 3-digit to low 4-digit numbers of people instead of 2-digit numbers. But that won't come close to an extinction risk.

Expand full comment

A problem is that many super-interventions have both technological costs and human costs. For example, the super-virus requires difficult biological research and a single homicidal maniac, while the super-vaccine requires difficult biological research and global coordination.

Right now, this isn't a problem because the difficult biological research is the majority of the difficulty in both cases. However, if AGI makes the biological research much easier, then the super-virus gains a big advantage over the super-vaccine.

I don't think there's a reason to think that benevolent super-interventions are systematically harder, but I don't think they're systematically easier, either. If benevolent actors win on 50% of fronts and lose on the other 50%, then I think that's a bad outcome.

Expand full comment

Creating a super-vaccine is a futuristic technology, something that we'd like to be able to do, and that might be a possibility in the future. But giving me a viral genome sequence is something that GPT-4 could do *right now* if it wasn't inhibited from doing so via RLHF. It might not be particularly 'superkiller' (really, we have no way of knowing how good it would be at this; it seemed to teach itself chess pretty well), but the point stands that it has a head start on 'super-vaccine' research already. In general, creating an AI that can create superpathogens will be much easier than creating an AI that can essentially solve viral disease. (NB: creating gain-of-function covid strains is easier than creating vaccines that protect against gain-of-function covid, and in general, extremely dangerous pathogens are something that already exists, while universal vaccines are something that does not yet.)

Really, you don't need AGI at all to create superpathogens; you just need an AI tool at the level of complexity of a Chess engine or drug-discovery AI, except fine-tuned on creating viruses rather than those other potential areas of expertise. You *do* however need an extremely intelligent heretofore un-invented level of AI/AGI in order to have it start seriously helping out with medical research esp. on difficult currently-unsolved problems like creating a universal vaccine. And things like viral monitoring are great and all, but easily beatable by stealth or blitz or DDoS (what's to stop the malicious human from asking the AI to create 1000 different superpathogens in the same afternoon instead of just one?) So on the contrary, it seems like the offense will and has advanced much faster than the defense.

(This discussion has all just been limited to the realm of pathogens, too. What about all the other ways humans could ask intelligent but non-agentic AI tools for ways to perform maximized harm?)

>Maybe events will start killing low 3-digits to low-4 digits instead of 2-digit numbers of people.

I wish I could share your optimism.

Expand full comment

Giving you a generic viral sequence, and even turning it into a virus, is something we can do now; making a novel virus with specific characteristics as described in the post (90-day incubation period, death within 24 hours after that, R0 of 20) is very much not something we can do now, and is exactly as futuristic as a super-vaccine. And I'm skeptical it can be done with a "chess engine level AI". It seems to me a problem that is reciprocally as hard as making a hypothetical vaccine.

Expand full comment

Humanity has gotten less homicidal since the invention of guns. War has gotten less common since the invention of nukes.

Expand full comment

That doesn't mean that people don't still use guns to kill other people, so I'm not sure what the point is. Unless the homicidalness of all human actors drops so low in the near future that we can be sure that no human will ever ask an AI to engineer a supervirus, then we are likely to go extinct, because even one human ever asking an AI to engineer a supervirus would mean extinction.

Expand full comment

Technology expands human capacity, and that includes the capacity to reduce homicides.

Expand full comment

I strongly disagree with many of your assumptions. First of all, the mass death events so far have never been caused by individual or semi-individual choices, but by governments, mass movements or nature. And in cases like the Rwandan genocide, the ultra-low tech machete was sufficient to do plenty of harm.

Still, none of these events even came close to a risk for humanity as a whole. It is likely that you need a very specific kind of threat for us to face potential extinction, or at least the destruction of civilization. Many technologies are unlikely to change this, so you cannot just say: more technology = more risk. Even things like improved weapons are not necessarily a big factor. Improved logistics and increased wealth have done way more to scale up the size of wars, and thus the number of deaths, than better weapons have. Modern war is actually less deadly to participants than in the past, even as weapons got better.

So I think that you need a more advanced analysis of risk.

Expand full comment

>First of all, the mass death events so far have never been caused by individual or semi-individual choices

That's because never before now have individual humans been empowered to cause mass-death events in the first place.

Expand full comment

And still they don’t. So we’re fine.

My God there’s so much presuming the conclusion with AGI doomsayers it’s hard to know where to begin.

Expand full comment

I suppose the only answer there is "remember all the times we were gonna be dead for sure? and we weren't?" Think of the doomsday clock and all the times it was "oh no creeping closer to midnight!" and how we *haven't* destroyed ourselves in a global thermonuclear war.

Yes, humans are wicked and stupid and short-sighted and we do more harm to ourselves than any outside element can (barring natural disasters which remind us that we can't do as much destruction as Earth shrugging in its sleep can).

But we managed not to do it this time. Are we eventually doomed? Yes, but so is the entire universe. Are we more likely to be doomed now than in the past? No.

Expand full comment

As elaborated on in my post, there are plenty of reasons why we haven't destroyed ourselves in a global nuclear war. For example, the parties in charge of nuclear weapons are susceptible to deterrence, are usually made up of large bodies of people who can exert checks and balances on each other, and number fewer than 10 in the whole world. None of these factors apply to the new situation we're facing.

Expand full comment

> None of these factors apply to the new situation we're facing.

Why not ?

Expand full comment

I could care less about a trend I've noticed in fiction writing. I'm not sure if it should bother me.

I have been seeing "anyways," as in something like, "I decided to do it anyways." When I checked online, I found this is an acceptable use, but it grates on me, especially when fictional important people, such as a President, use it instead of "anyway."

And yes, I certainly COULD care less. Somewhat less, anyway.

Expand full comment

I heard it for the first time 20 years ago in California. I don’t use it but don’t mind it.

Expand full comment

Don't even get me started on "get ahold of" :-(

Expand full comment

I’m more familiar with the phrasing

‘I couldn’t care less’

rather than

‘I could care less’.

These things change but for the most part we correctly decode the intended meaning.

Expand full comment

I find "I could care less" irritating since it is always literally false. Yes, I can correctly interpret it to mean the same as "I couldn't care less", just as "wise guy" and "wise man" sound the same but are opposites. "Literally" as an intensifier is the same thing.

Expand full comment

“Literally”, when used in hyperbole (which is the situation where people complain about it “meaning the opposite”), is perfectly correct. Just not to be taken literally.

Expand full comment

Think of it as being an abbreviated version of "I COULD care less, but it would take a lot of effort."

Expand full comment

Yes it’s “I could care less.....”. Fill in the dots. Bit lazy though.

Expand full comment

Yes, it is a very annoying expression. See https://youtu.be/om7O0MFkmpw where it is thoroughly dissected.

Expand full comment

I used to be much more irritated by these things, but hell, I have real problems.

Like keeping those damn kids of my lawn. ;)

Expand full comment

Do the kids of your lawn try to go somewhere else?

Expand full comment

Oh, I just saw my typo. A wise guy, eh? ;)

Expand full comment

Well, it IS a grammar thread.

Expand full comment

Actually the kids in my neighborhood are pretty well behaved. And a bit more sophisticated than me when I was their age.

I was out shoveling snow yesterday morning and a cluster of 3 or so eight year olds asked, “Can we play in your snowbank?”

“That’s fine, just don’t hurt yourselves. No school today?”

“School? It’s Saturday. We’d have to be deranged to be in school.”

“Uhm, yes, I suppose you’re right.”

Expand full comment

Why not use google n-gram viewer to test your hypothesis?
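
For anyone who actually wants to run that check, here's a minimal sketch. It assumes the unofficial JSON endpoint that the Ngram Viewer page itself calls (https://books.google.com/ngrams/json) is still reachable and keeps its current parameter names and response shape; none of this is a documented API, so treat it as a guess that may need adjusting.

```python
# Minimal sketch: compare "anyway" vs "anyways" in Google Books data.
# Assumes the unofficial JSON endpoint used by the Ngram Viewer page is
# still available; parameter names and response shape are an assumption.
import requests

params = {
    "content": "anyway,anyways",
    "year_start": 1950,
    "year_end": 2019,
    "corpus": "en-2019",
    "smoothing": 3,
}
resp = requests.get("https://books.google.com/ngrams/json", params=params, timeout=30)
resp.raise_for_status()

for series in resp.json():
    latest = series["timeseries"][-1]  # most recent relative frequency
    print(f'{series["ngram"]}: latest relative frequency {latest:.3e}')
```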

Expand full comment

Has anyone noticed "anyways" becoming more common in spoken English? I assume that's how it's getting into written fiction. I think I've heard it occasionally over the years, but that's all.

Expand full comment

That's one that I have missed up to now, fortunately. Is it in professionally published writing or online fiction? Online writing can be excellent, but it is also prone to common errors. I'd hope professional publishing houses would be more stringent, but one of the cost-cutting measures seems to be junking proofreaders and/or having editors be much less involved.

Expand full comment

I have mostly noted it in online works, particularly Worm, which I have seen discussed here, and The Wandering Inn, which I haven't. I believe I have seen it in print, too, but cannot find any examples, and those seem more appropriate anyway, probably because characters who use it aren't being formal.

I tried using Google and ChatGPT to find examples, but Google only shows it in reviews, and ChatGPT flatly refuses to look for it.

Expand full comment
Comment deleted
Expand full comment

Huh, I've always spelled it 'anywho'

Expand full comment

After watching a season of "The Wire" I went through a period of pronouncing it as "aw-ight"

Expand full comment

Funnily enough, I do think there is a legitimate difference between "all right" and "alright", and that the second one is slangier or more informal, which does have a place:

"Was everything okay when you went back to check?" "Yes, everything was all right",

"How do you feel about being laid off?" "It's alright, I guess, since I intended to quit anyway".

Expand full comment

Can anyone steelman the case that A.I. progress has not actually accelerated in the last five years?

Expand full comment
founding

I'm not sure whether or not AI progress has accelerated in the last five years. But if I were to steelman the case that it has not, I would argue that the latest trends/fads in AI development are a dead end that cannot lead to true AGI and thus leave us not much more advanced than we were when we started.

The various flavors of GPT, for example, require enormous amounts of training data which can basically only be satisfied with something like "the entire internet, and get real, we aren't going to go through and systematically rank that as to which are the good parts". That *may* permanently limit that approach to asymptotically approaching the performance of an average internet user and then only in subjects already widely discussed on the internet. And it may be an astonishingly fast impersonation of a mediocre internetizen, which may be useful in some applications, but it's not a path to AGI and superintelligence. Or to coming up with the blueprints for the grey goo nanobots that will extinctify and paperclip-ize humanity.

Or maybe GPT-5 will incorporate some bit of extreme cleverness that overcomes that limit.

Expand full comment

Makes me think of Tycho Brahe making huge sextants, etc., to get more accuracy. Then along comes Newton & the telescope.

Expand full comment

Most of the progress is not actually the development of better technology, but throwing money at large supercomputers. You can temporarily advance much faster than the gradual increase in computing power allows if you simply build much bigger systems, but it quickly becomes impossible to keep building bigger and bigger systems. So the speed of progress should slow down to Moore's Law + more optimized hardware designs + software improvements, although there is a lot of room for progress with the latter.

Expand full comment

> Most of the progress is not actually the development of better technology, but throwing money are large supercomputers

I don't think that's the case. Algorithmic improvements have been outpacing improvements in hardware for over a decade. GPT-3 was not so long ago at 170+B parameters, and now we have models of comparable performance at 1B parameters that you can run on your phone or desktop computer.

Expand full comment

StyleGAN 2, which can generate realistic human faces, came out in 2018. So AI was already generating impressively realistic images five years ago. Image generation has certainly progressed, but less so than it had in the previous five years; and video generation is far behind where image generation was five years ago.

Progress in reinforcement learning has also been very slow. I wrote a longer piece explaining why I think this, but to try to summarize it in a paragraph:

We still haven't gotten superhuman performance in any game larger than Atari or Go; although the Starcraft 2 and Dota 2 AIs, AlphaStar and OpenAI Five, were impressive, both of them had various advantages over human players (most notably they were given access to the gamestate directly, while humans have to play the game based off of the pixels on the screen; but there were other advantages as well). Six years after AlphaGo beat the world champion, in 2022, a massive project from DeepMind aiming to conquer the board game Stratego ALMOST got superhuman performance but not quite. That seems like obviously much slower progress than people were expecting in 2017-2018. Scott said in 2018 he expected AIs to beat humans at progressively larger games, and said recently that this prediction came true; in fact he was mistaken, this hasn't happened at all, unless you count AIs winning when given access to information the human doesn't have as "winning"; if that's the case, we could have had superhuman AI poker players back in 1980 by simply telling them the cards in the opponents' hand.

So overall progress in every field other than language has been, IMO, slower from 2018-2023 than it was in 2013-2018; but language generation is a hugely important field that has a ton of potential, and so you could still argue that overall AI progress has accelerated in the last five years solely due to LLM progress. However, I do get frustrated when people who were freaking out over the potential of AlphaZero back in 2017 seem to have switched to freaking out over ChatGPT and GPT-4 without ever acknowledging that they were wrong about the implications of AlphaZero.

Expand full comment

Can someone explain why StyleGAN 2 faces look better than the current available SOTA?

I mean, even with Midjourney 5 there's always something off: shiny rubber skin, or hair that doesn't connect to the head, or deformed ears.

Is it because StyleGAN 2 is trained on faces specifically while MJ is a generalist?

(inb4 "MJ5 isn't SOTA you scrub")

Expand full comment

I maintain that A.I. isn't actual intelligence, but is faking it better and better. Computers still only compute, as they were designed to do about 100 years ago. They are much faster, and can multitask, but are fundamentally the same device.

There can be no doubt that computers are accomplishing more, and with things we didn't think were computable, such as identifying image parts. This is, however, fundamentally an insightful form of data analysis. It is currently necessary to double-check a computer's assessments. Even if they are accurate a huge amount of the time, the ways they are wrong are not at all the ways the intelligences we are familiar with would get things wrong. And a computer cannot yet provide a value judgement as to whether its assessments are correct, as far as I know.

I believe actual A.I. progress would be indicated by an actual breakthrough, even if it yields nothing immediately, such as creating a new concept from its training data, instead of creating something similar to its training data.

Expand full comment

It sounds like you'd agree with the statement that "computers" are rapidly improving, but "Artificial INTELLIGENCE" is not.

Expand full comment

Mostly, yes. It's more like we are rapidly improving the applications we can find for computers to do. The way computers themselves have been improving has really just been Moore's Law, doubling capability about every 18 months. Which should not be dismissed as inconsequential.

Expand full comment

Don't you also agree that, since our computers are better and can accelerate progress in any field, they will be used to accelerate the development of true A.I.?

"Hey ChatGPT-4, I know you're not a real A.I., but what tips can you give me on inventing one?"

[It answers.]

"OK ChatGPT-4, can you do all the items in the list you just created for me?"

Expand full comment

I hear that all the time, and it seems like a terrible idea to me. It's desirable in all kinds of ways for them to TELL us how to modify them to make them better: we learn some important stuff. When we do the modifying, per their instructions, we end up with a much better understanding of what's in the black box. AND we get to just not do any modifications on their list that we think give them too much power and information. Why let them modify themselves? I have never really understood the concept of computers being aligned with human moral values, given the state of the world -- where different values collide, and people routinely kill, trick and exploit each other. I can, though, see the value in doing everything we can to prevent AI from modifying itself: make a rule against it. Refuse to give it supplies or info or capabilities needed to do that.

Expand full comment

That's actually a really good idea to show that GPT-4 isn't an AI and is not creative. It cannot answer that question, because that's not in its training data (since it hasn't happened yet). Quiop pasted the response, and you can see it's just banalities that you or I could have written based on reading ACX.

We really don't know if GPT-4 or GPT-X could ever invent something not in its training data, because it's fundamentally a chat-prediction algorithm that bases its responses on pattern matching against its training data. It doesn't think. If the training data represents existing thoughts that are very close to producing AI already, maybe GPT-X can put those thoughts together and the output would lead to AI. But in that case, those other sources would have effectively invented AI already and just not realized it yet. No GPT required.

It's extremely clever programming by *human* creators. It does a pretty good job of appearing to be intelligent. It's not.

Expand full comment

Computers accelerate progress in any field because they speed up data analysis for human intelligences to provide insight. Looking closely at what computers "create" you find they only create averages of existing things. Given red and yellow it can formulate orange, but not blue, unless humans provide additional data, such as how to extrapolate in different directions.
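
The color example can be made concrete in a few lines (a toy sketch of interpolation, my own illustration, not a claim about how any particular model actually works):

```python
# Toy illustration of the "averages of existing things" point: a plain
# average of red and yellow lands on orange, but no weighted average of
# those two inputs can ever produce blue.
red = (255, 0, 0)
yellow = (255, 255, 0)

orange = tuple((a + b) // 2 for a, b in zip(red, yellow))
print(orange)  # (255, 127, 0) -- orange, strictly between the inputs

# Blue needs a high third channel, e.g. (0, 0, 255). Both inputs hold that
# channel at 0, so any weighted average of them keeps it at 0.
```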

Expand full comment

This hasn’t even been proven.

Expand full comment

Here is ChatGPT-4's response to your first question (via Bing):

"I’m glad you’re interested in inventing an AI. Here are some tips that might help you:

Define the purpose of your AI: Before you start building an AI, you need to define what it will do. Will it be a chatbot, a recommendation engine, or something else? Once you have a clear idea of what you want your AI to do, you can start building it.

Choose the right programming language: There are many programming languages you can use to build an AI. Some popular ones include Python, Java, and C++. Choose the one that best suits your needs and experience.

Collect data: An AI needs data to learn. You can collect data from various sources, such as social media, websites, and databases. Make sure the data you collect is relevant to your AI’s purpose.

Train your AI: Once you have collected data, you need to train your AI. This involves feeding it data and letting it learn from it. You can use machine learning algorithms to help your AI learn.

Test your AI: After you have trained your AI, you need to test it. This will help you identify any issues and improve its performance.

Refine your AI: Once you have tested your AI, you can refine it. This involves making changes to improve its performance and accuracy.

I hope these tips help you get started. Good luck with your AI invention!🤖"
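
For what it's worth, the "collect data, train, test, refine" steps in that list are the same loop any introductory ML tutorial walks through. A minimal sketch using scikit-learn's bundled digits dataset (my own illustration, not part of the quoted response, and obviously nothing like how GPT-4 itself was built):

```python
# Toy version of the "collect data, train, test, refine" steps above,
# using scikit-learn's bundled digits dataset as stand-in data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)                        # "collect data"
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)                   # "choose and train"
model.fit(X_train, y_train)

print(f"Test accuracy: {model.score(X_test, y_test):.2f}")  # "test"
# "Refine" would mean adjusting the model or data and repeating the loop.
```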

Expand full comment

Just wait for GPT-5.

Expand full comment

What's Russia's side of the story about the mass abductions of Ukrainian children?

Expand full comment

Russia is still justifying this conflict as helping Ukraine. They still put out propaganda about rescuing Ukrainians from their own military and all that. They're not just taking children. They're taking adults too. But children obviously can't live on their own. So they are either put with Russian relatives or adopted out. Russia holds this up as a sign of how they're really there to help Ukraine etc which is why Putin put in an appearance.

Put another way, their response is, "We are not abducting children. We are removing civilians from combat zones. Some of these civilians are unattached children and we are finding families for them. Others are adults and we are finding work for them so they can earn a living." Now, why the safest zone is deep in the Russian interior where it's hard to track them is a question they just don't address. And the dynamics of taking Ukrainians, moving them deep into Russia, and then putting them to work is ripe for all sorts of abuse.

Expand full comment

You can google translate this article to get the idea https://www.mk.ru/social/2022/03/28/iz-mariupolya-evakuirovalis-5700-chelovek-deti-raneny-v-shoke-bez-pamyati.html

Basically, during the siege of Mariupol there were a lot of lost children, and it was impractical to keep them in the middle of the fighting while things were being sorted out, so they were taken to places farther from the conflict, like Rostov and Taganrog. Then they are either reunited with their families, or their relatives in Russia take them (remember that more than a million people from Donbas have fled to Russia since 2014, and many more just have relatives in Russia). Those who had none got adopted.

Expand full comment

Don't know what their official line is, but if I were writing the PR it would be either "What children?" or "Yes, we're removing them from a conflict zone so they don't get accidentally exploded."

Expand full comment

It's neither. Officially, they are taking the children to better homes. The other lady on the arrest warrant with Putin even adopted a kid from Ukraine, maybe after Russians killed his parents.

https://www.wsj.com/articles/u-n-court-issues-arrest-warrant-for-russias-putin-and-another-kremlin-official-d3b9cb8e?st=b4rpo0oek8vaf8h&reflink=desktopwebshare_permalink

Expand full comment

A piece from last week about how mainstream comedy from the last decade can break your brain.

https://kyleimes.substack.com/p/when-mr-chow-blesses-you-with-a-valuable

Expand full comment

My comment over there: "One of my friends (an individual so anti-woke he's embracing *bad* ideas just to "own the libs") described having a similar reaction during his wedding. In our tradition there is a point in the ceremony where the priest says something to the effect of "Whereas woman has been created as a help mate to man". When that line was said he looked uncomfortable and later stated he had a physical cringe reaction, even though he considers himself very anti-feminist and intellectually agrees with very traditional marriage roles. (In practice, they both work but his wife is very much the breadwinner)"

Expand full comment

(I want to point out that "helpmate" does not mean "subordinate" or "inferior". The verb "to help" has very different meanings in "I'm helping my mother to cook" and "my mother is helping me to cook", and the Hebrew word is, if I recall correctly, closer to the second option.)

Expand full comment

Oh, yeah. I've had the experience of being shocked and sometimes off balance by things I wouldn't have noticed back when. It's why I'm very slow to assume that people are pretending to emotions. Actual metabolic emotions are apparently easily trained.

And it's not just language -- "The Long Rain" by Bradbury is a story about people stuck out of doors until they can get to a refuge on an old-fashioned jungle Venus. There's madness, death, suicide.... Did they let children read this? Yes they did. I read it before I was twelve, and it didn't bother me then. I've since been trained to have more of an empathic connection to characters.

Expand full comment

That's not even the worst Bradbury, there's the one about kids killing their parents who try to take them away from their virtual reality world (The Veldt https://en.wikipedia.org/wiki/The_Veldt_(short_story)) and the killer baby (The Small Assassin https://en.wikipedia.org/wiki/The_Small_Assassin_(short_story)).

The things libraries permitted our fragile little minds to encounter in anthologies! 😁

Expand full comment

There seems to be quite a subculture of people who demonize the WEF (World Economic Forum), and claim that its founder and chairman Klaus Schwab is the devil incarnate, intent on world domination by means of globalization. "You'll all own nothing, eat bugs, and be happy" that kind of thing.

But the fact that a blatant nationalist such as President Putin attended and addressed the WEF a couple of years ago, in apparently measured and congenial tones:

https://www.russia-briefing.com/news/russian-president-putin-s-speech-at-the-world-economic-forum-complete-english-translation.html/

makes me somewhat skeptical that Klaus Schwab is the white-cat-stroking arch-villain we're led to believe he is. So presumably neither are most of the other members.

Guys of his generation saw the ill effects of nationalism taken to extremes in WW2, as we have time and again in more recent years, and FWIW it seems to me the WEF is a mostly benign organisation provided it doesn't presume to encroach on national democracies except on a limited basis with their withdrawable consent.

However, I think it is very misguided and naive to desire and work towards a so-called World Government, putting all our eggs in one basket so to speak. So to the extent the WEF is doing that, if such is the case, I would agree their influence is malign.

A World Government, at least before there are flourishing independent colonies throughout the Solar System and heading beyond, would inevitably lead to the decline of humanity for centuries and perhaps indefinitely.

Expand full comment

>Guys of his generation saw the ill effects of nationalism taken to extremes in WW2, as we have time and again in more recent years

Okay, and the ill effects of globalism are manifest, including Europeans being on track to become minorities in their own countries.

Expand full comment

The interests of globalists and of poor-country nationalists like Putin are aligned on one thing: bringing the rich western countries down a peg.

Rich-country billionaires want to see rich western countries brought down a peg too. The only people who _don't_ want to see rich western countries brought down a peg are the poor and middle class in rich western countries.

Expand full comment

What precisely do you mean by a 'globalist'? Who is a globalist, in your view, and why do they want these rich western countries taken down a peg?

Expand full comment

There's a good article (book review) here that describes them to a tee:

https://www.dailymail.co.uk/news/article-11874239/MATTHEW-GOODWIN-Gary-Lineker-furore-shows-Britains-New-Elite-touch.html

Expand full comment

I dunno man, coming up with precise and non-circular definitions for things is a lot of hard work to do for the sake of some random internet comment thread. You gotta work with me a bit here.

Why do you ask the question? Is it because you can't possibly imagine how "globalist" could be a well-defined category of people? Or because you think it's a well- defined category of people but you don't think they are as I described them? I could maybe handle either of those discussions but not both at once.

Expand full comment

The word 'globalist' gets used by different people to refer to very different groups. I would like to know which group you're talking about here.

If you don't think you can give a good definition, a few central examples would suffice. If Putin is a prime example of a poor-country nationalist who would be the equivalent example of a big name globalist?

Expand full comment

Trudeau. Textbook example.

Expand full comment

Yup, those anti-WEF loonies are even creating bizarre pics of Schwab in high priestly robes.

https://en.ktu.edu/wp-content/uploads/sites/5/2017/10/K.Schwab.jpg

Expand full comment

That photo looks like Beldar from the planet Remulak.

https://en.wikipedia.org/wiki/Coneheads

Expand full comment

Are anti-trumpists loonies? Some people opposed to trump did a lot worse with his image than 'priestly robes', so I guess being anti-trump makes you a loonie?

Expand full comment

*psst*

That photo is real.

Expand full comment

My read is that the man is basically a middle manager for the global elite community who has been memed into many people believing he's actually the overlord of said community, and now he's leaning into this image partly for marketing and partly for ego-gratifying purposes.

Expand full comment

The thing is, those "You'll own nothing and you'll be happy" lines do come from the WEF's own materials. I do think that they think they're "the good guys," but in practice they are calling for reductions in standards of living across the West and for the third world to never achieve our current levels of material prosperity.

I want a future where all 8 billion people have their own car; they want a future where only the super-rich do and everyone else has to take buses or walk.

Expand full comment

I have a quite different interpretation of the same materials - it's about a realistic evaluation of the limits of sustainable development, since in that forum they can have a discussion based on practical considerations, unlike in public democratic forums where you have to focus on how things sound, making any undesirable truths taboo to discuss.

It is absolutely critical to make a strong distinction between "is" and "ought", as they have little in common, but people (including in this discussion) mix them up. They aren't making the statement "we don't want a future where all 8 billion have their own car"; they're making the statement that such a future won't happen, and that has zero relation to what they want or don't want.

If a politician says "I want a future where all 8 billion people have their own car", the appropriate response is "and I want world peace and a pony for everyone".

If a politician says "I *intend* a future where all 8 billion people have their own car", the appropriate interpretation is "I am willing to lie and promise things I have no intention of fulfilling".

On the other hand, discussion about what material prosperity the world will (or even can) handle has to start with discarding unrealistic wants as totally counterproductive, since our planet can't handle the current population of the 3rd world achieving current first-world material prosperity levels; something has got to give - specifically, either (1) having less prosperity than the current first world or (2) much less population than now or (3) most people not having that prosperity or (4) waiting for a long time - longer than my lifetime - for the results. And that is not *calling for* this outcome! It's simply a basic precondition to a productive discussion to be able to choose between realistic options which are highly undesirable to some people, instead of discussing desirable things that won't happen.

Expand full comment

>since our planet can't handle the current population of 3rd world achieving current first world material prosperity levels; something has got to give - specifically, either (1) having less prosperity than current first world or (2) much less population than now or (3) most people not having that prosperity or (4) waiting for a long time - longer than my lifetime - for the results.

What has to give is the WEF et al trying to develop the third world.

It's taken as a given that the third world developing simply must happen.

But if we just leave them alone, they won't develop. They can't.

But the WEFs of the world would rather divert, directly and especially indirectly, massive amounts of resources from the West to try and make the third world develop.

Well, how about no?

People act like development is the norm, and one that every country is entitled to, rather than western development being extremely rare and special.

It's easy to sound like you're being a clear-eyed realist and not advocating for a specific worldview, when in reality you simply aren't willing to consider certain options.

Expand full comment

Why is it impossible for everyone to own a car?

I think the default paradigm most opponents of the WEF are in is 4. We want first world standard of living for ourselves and for the third world; and if that takes a couple centuries so be it, blink of an eye historically.

Expand full comment

Not everyone needs to own a car, as long as everyone has rapid and ready access to a car for their own temporary use, or other vehicle such as a truck for moving. Self-driving vehicles will soon make it possible to promptly whistle up a shared vehicle on demand, for any journey, and with less expense than a human-driven taxi.

Anyone damaging or contaminating a shared car, such as a drunk honking up in it, will be denied the use of any shared car for a week or two, or else have to pay to have it cleaned.

No doubt a lot of people would be appalled at the idea of not owning their own vehicle, but sooner or later they will not be able to drive a self-owned vehicle on a public highway anyway. Also, exclusive use of shared vehicles will mean that only a small proportion of the total number of them around today will be needed.

Public transport is a non-starter for a majority, or at best a very poor substitute. Trains or buses have fixed routes, and don't go door to door. Most don't run all hours, or all that often even when they do. One also has to share them with lots of other people.

Expand full comment

What advantage does this system of rented self driving cars have over personally owned self driving ones?

Vehicle lifespan is measured in kilometers or miles, not years; to offer people the same amount of transportation, you'll need to produce the same number of cars as if they were personally owned, and use the same amount of whatever fuel they're powered by.

It seems like centralized control for the sake of centralized control (and a very lucrative set of oligopolistic businesses).

Expand full comment

There's not enough oil for everyone to be driving cars. It'd drive oil prices up so high that a good chunk of the population won't be able to afford gas. Moreover, the greenhouse gases would make climate change much worse if everybody were driving. Electric cars have similar problems with the availability of the resources required to manufacture their batteries. To a first approximation, current technology just can't make that happen within our lifetime.

Expand full comment

Your first statement seems dubious to me. Current known oil reserves -- more are discovered all the time, of course -- are of order 50x annual consumption. I found a random estimate on the Internet that 1/5 of the world's population, very roughly, have access to a car.

So the ol' back of the envelope says that if everyone in the world had access to a car, world oil consumption would jump by a factor of 5, which is clearly well within the capacity of even already known oil reserves.
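
Spelling out that envelope (using only the reserve ratio and car-access figures in this comment, and the deliberately crude assumption that total oil consumption scales linearly with the share of people who have access to a car):

```python
# Worked version of the back-of-the-envelope above, using the comment's figures.
# Crude assumption: total oil consumption scales linearly with car access.
reserve_years_at_current_use = 50   # known reserves ~ 50x annual consumption
current_car_access_share = 1 / 5    # rough estimate quoted in the comment

consumption_multiplier = 1 / current_car_access_share              # 5x
years_until_exhaustion = reserve_years_at_current_use / consumption_multiplier

print(f"Consumption multiplier: {consumption_multiplier:.0f}x")    # 5x
print(f"Known reserves would last about {years_until_exhaustion:.0f} years")  # ~10
```

The same numbers also give the "run out in a decade" figure raised in the reply below.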

Expand full comment

To be fair to the scarcity position, that would see us run out of fuel in a decade. (Assuming everyone is running gas cars, no improvement to fuel efficiency, no new oil is found, all the people who don't own one now drive as much as the people who do, etc.) Also, some of those reserves are very expensive to extract presently.

But there's no realistic scenario where all the global poor buy cars on the same day; motorization will be a gradual process. I think 2200 is a little pessimistic, but easily doable.

Expand full comment

Those statements might be true, but in general technology catches up to demand. Far from running out of oil, we are hitting peak demand. In any case, we'd need to see your workings.

Expand full comment
founding

>There's not enough oil for everyone to be driving cars.

We can make more, you know. We can make oil out of air, water, and uranium, and I expect that by the end of the century we'll be able to dispense with the uranium. And the CO2 we'd release into the air burning such synthetic oil would exactly match the CO2 we took out of the air to make the stuff in the first place, if you're worried about that.

I suspect that electric cars, or at least hybrids, will wind up being a better choice for most people. And I'm skeptical that there's an absolute shortage of e.g. lithium for that. But if there is, no matter, we know how to make oil and we're not at all short on air and water.

I've been hearing people like you explain how we have to condemn the less fortunate nations to eternal poverty or at least eternal mediocrity because resource this and environment that and overpopulation the other thing, since the 1970s. Often with a side order of "and we have to join them in mediocrity because anything else would be unfair". And I've been watching us keep blowing through every limit and deadline you all have set for the end of human progress. So I plan to steer head on towards those alleged barriers, full speed ahead, and keep on blowing through them until the whole of humanity is richer than it knows what to do with.

Expand full comment

Just grow the stuff. Brazil gets something like 20% of its transportation energy needs from burning ethanol from sugar cane, which by definition has a carbon impact of zero. The limitations on making it higher seem more rooted in economics and Brazilian governance than any practical issue, and I'm sure with some clever First World biohacking you could improve the plant side of things to boot.

It's definitely galaxy-brained to dream up a fiendishly complicated system of complex fragile mobile batteries feeding off enormous solar panel systems requiring huge amounts of capital, as-yet-unknown storage tech, and revamping the entire electrical transmission system -- rather than just look more seriously at a proven low-tech solution that requires only modest changes in your transportation and transportation-energy distribution tech.

One hesitates to go too far down the paranoid rabbit hole by noticing that the galaxy-brained solution, requiring as it does enormous social transformation, puts far more power in the hands of political leadership than just handing out agricultural research grants and maybe the odd subsidy here and there for encouragement. And gives punditry a far more excitingly transformative future to talk about.

Expand full comment

I've been making the point in your first paragraph for a while, in discussions about CO2 with friends and similar. But most people seem to find it very hard to distinguish the concept of fossil fuels vs reconstituted hydrocarbons, so much so that I suspect they think the idea is completely off the wall, and myself with it!

Perhaps in the long run, hydrogen will be the best fuel. I think there are already technologies for packing it into a fairly energy-dense form in metal substrates for example, and these can only improve with time.

But hydrogen has a regrettable and almost unavoidable tendency to escape confinement. So one might try starting one's hydrogen car after a while without use, only to find that all the fuel has gone AWOL!

Expand full comment

Quit making a straw man out of my position.

I'm not saying that developing nations should stay poor. Nor am I saying that anyone else should become poor. Rather, I am saying that the stated goal of ensuring truly universal private vehicle ownership prior to 2200 would be A] logistically infeasible to implement within the stated time frame and B] would be a worse way to provide universal vehicle access than the obvious alternative.

Expand full comment

> Electric cars have similar problems with the availability of the resources required to manufacture their batteries.

I don't know why people believe this. It's just so obviously false.

Expand full comment

I'm not saying that they have those problems NOW.

I'm saying that this would become a problem in a hypothetical future where most cars are electric and the entire world owns electric cars at the same rate as the first world currently does. In that situation, there is probably not enough lithium and nickel to go around without a sharp increase in the prices and a resulting shift toward mining locations and recycling methods which are currently non-viable from an economic perspective. The first world uses a LOT of cars and uses them pretty wastefully.

Expand full comment

If Terraform Industries is right, we can make natural gas from atmospheric CO2 and run our cars off that. I agree that present-day electric car batteries are not good enough, but Musk has a remarkable record.

Socialists who rule by forced famine have a much worse record of dishonesty than people who want their neighbors to be able to own cars.

Expand full comment

To be fair, my desire for universal car ownership is self interested. I can be sure I and my hypothetical descendants have cars in that world, I can't be sure I or my descendants will be smart and successful enough to afford one in a world where they're a luxury.

Expand full comment

Well, the anti-WEF position is perfectly fine with it taking multiple lifetimes so long as progress towards the goal keeps happening; if we get 0.5% per year more motorized, that's fine and gets us there before 2200.

Getting enough fuel/batteries just seems like a normal technical problem of the sort we've been solving for centuries (hydrogen fuel cells seem promising), and dealing with a warmer globe also seems like a set of normal technical problems (e.g. build a seawall here, irrigate this cropland and that cropland).

Expand full comment

It seems unclear to me that 'universal motor vehicle ownership by 2200' is something we'd want, even if we assume that it is doable in the first place.

Conceivably you could go all in on artificial petrochemical generation and just live with the consequences of that vis-à-vis climate change. You could also conceivably turn to asteroid mining or something to get the rare earth metals required for truly universal electric cars. I'm unsure as to whether hydrogen fuel cells can be made viable within the time constraints, but let's assume for the sake of argument that they could be with just a handful of significant advances from the material sciences.

All of that still seems like more effort for a worse result compared to simply creating robust public transportation infrastructure. Is the individual ownership of vehicles really that important, outside of the freedom of movement that they (and ubiquitous public transportation as an alternative) would provide? That is, if you could get the same degree of freedom of movement either way, what benefits would make the additional costs of the individual vehicle ownership option 'worth it' to you?

Expand full comment

The last time I did a dive into this, it was true that you can find those lines in materials produced by people associated with the WEF, but false that they reflect any consensus or official line: the WEF does a lot of brainstorming and blue-sky thinking, and doesn't tightly control what members are allowed to say in their own capacity. It's as if the enemies of the rationalsphere judged it entirely by Roko's Basilisk -- as, in fact, they do.

Expand full comment

Still, people argue that it's conspiratorial to even say that the WEF said that. I've seen journalists claim it's a myth.

Expand full comment

Hmm, yes, although I'd prefer a future where there are half a billion people, and everyone owns their own flying car and a country estate!

Expand full comment

This is going to start another Repugnant Conclusion debate, right?

Expand full comment

Thing is, that implies a lot of misery between here and there unless you're wireheading everyone to not want children.

Expand full comment

So people in third world countries have children they can't afford to raise, and therefore my standard of living has to go down?

Expand full comment

No? My argument is specifically that we should work to improve everyone's standard of living; those kids in third world countries are additional labour to help push economies of scale and frontiers of science. With a well aligned system they're our partners and collaborators in the great march of material progress.

Expand full comment

"Schwab isn't an arch-villain because his org was addressed by Vladimir Putin" is, taken by itself, a fairly interesting and, one might say, unexpected point of view.

Expand full comment

LOL! On the face of it, yes. But Ian's reply is pretty much what mine would have been.

Expand full comment

Eh, it makes intuitive sense to me; the sort of villainy Schwab is accused of stands rather in contrast to Putin's actions.

Still, the Nazis and the Soviets managed to be allies for years so...

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Letting my inner conspiracy theorist speak for a moment - Schwab and Putin are trying to achieve the same end by different means.

Putin has established an oligarchy where he and his cronies extract all they can from the people, living in a giant palace and owning multi-million dollar yachts they never use, justifying their actions by saying they're the rightful rulers of their glorious nation and the only defense against NATO, who'll do *even worse* to you!

Schwab wants to establish "stakeholder capitalism;" an oligarchy where he and his cronies live in giant mansions and enjoy all the best luxuries money can buy, justifying his actions by saying they're the smartest, most "ethical" experts in the world, and the only defense against global warming and the deprivations of the free market, which'll *kill everyone* otherwise!

Seen through that lens, it's not really surprising that Putin, Schwab, and Xi Jinping are all pals. Same song, different lyrics.

Expand full comment

Xi doesn’t belong in that list. China is trying to achieve general prosperity.

Expand full comment

Yeah, but at least the Russians get nice Cathedrals and parades out of it.

Expand full comment

The link John posted has Putin's full address, and it's kind of a fascinating read, because it reads to me basically like western center-left boilerplate. Lots of talk about fostering sustainable growth, reducing inequality, promoting international cooperation to combat climate change, and whatnot. It also has a fair bit of what I would call "Schwab brand villainy", with bits like

"To achieve this we must, in part, consolidate and develop universal institutions that bear special responsibility for ensuring stability and security in the world and for formulating and defining the rules of conduct both in the global economy and trade."

There's even an extended portion where Putin explicitly calls Russia a part of Europe and calls for a single cultural and economic space "from Lisbon to Vladivostok".

Now obviously, words are one thing and actions are quite another. But I find it fascinating that as recently as two years ago Putin was giving a speech like that. There was nothing in it that was particularly off-brand for the WEF.

Expand full comment

Of course Putin is often going to talk in a way that matches his audience's expectations. Politicians have to give speeches a lot anyway, they need to be able to produce quotes for journalists. Nearly all politicians are going to call for peace, and cooperation, building institutions and stability and so forth. (Except when they want to excite their domestic audiences, so they go abroad and insult a few foreign leaders. But in general domestic audiences don't listen to speeches given at the WEF, so that would be pointless.)

A lot of those words you quote are ambiguous and don't mean anything concrete by themselves, but if you know how to interpret them, you can see what Putin specifically means by them. E.g. calling for a single economic space "from Lisbon to Vladivostok" probably means he wants to increase dependence of European economies on Russia, so they can't object to what he does in other spheres. He's saying, forget about the US, we are Europe's natural partners, because we occupy the same landmass (Eurasia).

Expand full comment

Western Europe would economically dominate that alliance. Not Russia, which was famously only as “rich as Italy” prior to the war, and is presumably poorer now.

Expand full comment

Putin has basically, for almost his entire career, done a sort of a balancing act between acclimation to the general centrist-liberal technocratic Western ideal (though in a way that would get Russia its 'fair share' of influence) and catering to the sort of post-Soviet/post-Imperial nationalist anti-Western consciousness that is far more popular among the great Russian masses. Of course, the last few years have seen him fall more decisively towards the latter position, though still not as much as actors like Strelkov or the Communist Party of the Russian Federation etc. would prefer.

Of course, neither the centrist-technocratic side nor the Soviet-patriotic anti-Western side fits the Western idea of what is "right-wing", which makes it extremely curious why Putin has so many Western right-wing fans. There really is a huge difference between the Western far-right imaginary Putin and the actual Putin, perhaps a larger difference than between the Western anti-Putin conception of Putin as a new Hitler and the actual Putin.

Expand full comment

I think it's just contrarianism among the Western (far) Right...they know that they are the minority of the politically active population (and probably the population as a whole) in (most) Western countries, and thus resort to contrarianism to prove to themselves that they are not the "sheeple" or whatever...

Expand full comment

Dude made a couple of anti-woke speeches and funded a couple of cathedrals, and it was enough to buy him the loyalty of a third of the Western right. Damn good investment of resources if you ask me.

Expand full comment

We have to talk about what Balaji Srinivasan is doing. He just bet 1 million dollars that we will enter hyperinflation.

To be exact, I don't think Bitcoin will go to 1 million dollars in 90 days, but he seems to make a good argument.

What are your thoughts on this people?

Expand full comment

My thought is that Balaji has purchased, for the trifling sum of $30k, an exceptional amount of free publicity from people like you, and probably helped pump his Bitcoin bags by far more than that.

(Remember, "the bet is bad on purpose to make you click.")

Expand full comment
author
Mar 20, 2023·edited Mar 20, 2023Author

I don't know, it's hard to believe Balaji would trash his reputation for a 1% temporary Bitcoin spike (especially since if anyone noticed him selling all his Bitcoins now people would hate him forever). I and a lot of other people respected his forecasting ability after COVID, he seemed to delight in his prophet reputation, and if he doesn't really believe his hyperinflation story this seems like the quickest way to trash it and turn himself into a laughingstock. Even if he's a money maximizer, given his job as a VC his reputation for prescience seems like a better asset than all his Bitcoins long term.

I don't have a better explanation than "really believes it, extremely overconfident".

Expand full comment

The thing is that the bet makes absolutely no sense in economic terms *regardless* of your beliefs. If you think the price of bitcoin will go up, the appropriate reaction is *to buy bitcoin*.

Balaji is betting a million dollars in exchange for one bitcoin when he could be "betting a million dollars" in exchange for **37** bitcoin *just by clicking a button on an exchange right now*. It is a *strictly dominated* strategy in economic terms.

Since it clearly isn't about the money, the only explanation left is that he is doing it for publicity.
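
To spell out the dominance argument with numbers: here's a minimal sketch, using only the figures already in this thread (~37 BTC per $1M from the comment above) and a deliberately simplified model of the bet ("stake $1M, net roughly 1 BTC if the price hits $1M, lose the stake otherwise"). The exact bet mechanics and prices are assumptions for illustration, not a model of the real escrow terms.

```python
# Illustrative only: payoff comparison under simplified bet mechanics.
STAKE = 1_000_000                 # dollars committed either way
BTC_PER_MILLION_NOW = 37          # figure quoted above (~$27k per BTC)

def bet_payoff(btc_price_later: float) -> float:
    """Simplified bet: if BTC reaches $1M you net about one BTC, else you lose the stake."""
    return btc_price_later - STAKE if btc_price_later >= 1_000_000 else -STAKE

def buy_payoff(btc_price_later: float) -> float:
    """Alternative: spend the same $1M buying ~37 BTC today and mark to market later."""
    return BTC_PER_MILLION_NOW * btc_price_later - STAKE

for price in (20_000, 100_000, 1_000_000):
    print(f"BTC at ${price:>9,}: bet nets {bet_payoff(price):>13,.0f}, buying nets {buy_payoff(price):>13,.0f}")
```

In every scenario shown, simply buying does at least as well as the bet, and strictly better whenever the price moves, which is the sense in which the bet is dominated.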

Expand full comment

A run on the banks would significantly reduce the dollars in circulation, especially as financial institutions collapsed. This should lower inflation. If the government decided to infuse the banks with cash, that could cause inflation. Hyperinflation might be possible if a significant, maybe majority, of the financial system collapsed *and* the government tried to make everyone whole.

Expand full comment

Why would making everybody whole cause hyperinflation? It's restoring the status quo.

Expand full comment

Did I read that Tweet wrong? I thought he was taking the side that we *won't* have hyperinflation at $1mm, as a tongue-in-cheek joke.

Expand full comment

Is this percentage of Srinivasan's net worth akin to that of an American with median net worth betting $100 at roulette for fun?

Re: Noah Smith & beleester: Perhaps Srinivasan has already invested tens or hundreds of millions of dollars in Bitcoin. That would make this a publicity stunt designed to:

-raise the value of his Bitcoin holdings (through many people being influenced to buy Bitcoin), and/or

-warn the common person about their imminent financial ruin.

Expand full comment

As Noah Smith points out, 1 million dollars right now could buy 35 bitcoins. If you really think that a bitcoin is about to be worth 1 million, then buying 35 bitcoins is a much smarter use of your money than a bet that wins you 1 bitcoin. So even if you had perfect foreknowledge of the price of Bitcoin this is still not a smart bet.

But to address the broader question, I don't see why a bank run implies that the dollar is about to become worthless. The 2008 crash was worse for the banking system but we actually had slight *deflation* in 2009. (If anything, I would think that a bank run means that the demand for dollars is *higher* than normal.)

Also, as I understand it, the "unrealized losses" that Balaji is panicking about are temporary - long-term Treasuries lost value when rates went up, but that loss of value doesn't mean "the market no longer trusts these loans to get paid," it means "nobody is interested in low-rate Treasuries because interest rates have gone up and there are better places to make loans." It's not a "shitcoin" like he claims, anyone who doesn't need money right this minute like SVB did can just chill and get paid back with interest.

Expand full comment

My thought is that there won't be hyperinflation. Unless we are talking about, like, Argentina, of course.

Expand full comment

Weekly shameless self-promotion from me — I wrote briefly about the similarity between LLMs' tendency to hallucinate and humans' tendency to bullshit (https://omnibudsman.substack.com/p/llms-like-people-lie-without-thinking).

Expand full comment

The rules ask that you limit self promotion in open threads to twice yearly.

Expand full comment

Thanks (sorry Scott!). I'll stop now. I guess to maintain an average that matches the rules in spirit I should probably abstain from posting for a couple years.

Expand full comment

Recently upgraded to a newer phone - surprising how different things are after just 5 years or so since I last bought [up-to-date model at that time]. On the old one, I already restricted most notifications and kept a minimal app suite...the new one is even more barebones, cause I simply didn't have the patience to wait 8 hours or whatever to transfer *everything* over at the [phone store]. In addition to apps, this also meant losing all old files (besides contacts and some message history), so years of photos, some songs I'd never bothered backing up elsewhere, hundreds of TTRPG character sheets, etc. New notification-disabling also seems stronger than before - now nothing gets through when the screen's off, not even a blinking light to tell me I've got texts or whatever.

I notice that I don't miss any of it, and even that small increase in "phone not peripherally bugging me" is noticeably more relaxing. Sort of strange to be able to just...walk away from a good chunk of life history without regret. (Turns out every memory really worth keeping is already in my head.) Definitely strange to notice a "missing mood" (using that wrong probably?) of subconsciously checking whether phone's blinking out of the corner of my eye. I guess even that marginal amount of invasiveness was causing me some anxiety. It's also easier than I thought to grin and bear with not being able to listen to music temporarily (RIP headphone jack) - though hardly preferable. Phones really do shape lives, even for people that go to a lot of trouble to minimize entanglement...sobering.

"So why upgrade at all?", I hear you cry - well, nothing is more expensive than free, and people tend to consume more of a good if it doesn't cost anything...but I'm glad I did. Not for the actual phone, but for the experience of losing a small-but-assumed-significant piece of Stuff, and it not actually mattering at all. That which can be destroyed by the truth...

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

I'm a Luddite when it comes to mobile phones, and use them only for phone calls, occasional text messages, and authenticator apps one must use for TFA (Two-Factor authentication) these days for my work.

My biggest, and pretty much sole, reason for updating this year is to access WiFi Calling, which means I can make calls via a decent WiFi connection when in areas with worse mobile coverage than Outer Mongolia! Also, I think my new phone claims to be 5G ready, which may be useful in due course.

Expand full comment

Yeah...honestly if it weren't for superiour music-playing capability, I don't think I'd have bothered switching away from old-fashioned flip phones years back. Really not much of a texter/photographer/whatever, I almost always end up regretting doing anything Internetty on phone vs laptop. Unfortunately the "mp3 player" is a technological dead end, and as you note, there are an increasing number of things like 2FA that effectively mandate smartphones these days. (Remember when they used to hand out keychain fobs for this instead? I think I still have my World of Warcraft Authenticator laying around somewhere...) I really hate how wobsites are so heavily "optimized" for mobile these days, making the experience more broken and worse overall, but that's where the traffic is...

Ngl I liked having the convenient physics excuse of "no I'm not ignoring your calls, this place just has awful reception". A minority are slowly realizing the importance of __not__ being reachable at all times and places, but for now, it's still considered a major social fox pass. That was one of the last remaining acceptable excuses, and it's slowly being eradicated. Progress?

Expand full comment

Do you not have the old phone any more?

Expand full comment

Trade-in. Conveniently valued at exactly the same price as current_model, hence free*.

I coulda kept it, taken things home to do the full transfer on my own time...but that woulda meant a future trip to Phone Store in the next 29 days to complete the transaction by handing it over, and I'm not actually the account holder, who was getting antsy about the wait. One of the few downsides to free-riding on a family plan.

*Terms and conditions may apply. 3-year minimum installment plan required. Upgrade fee non-waiveable.

Expand full comment

Recently finished reading The Three Musketeers and there was a request to talk about it in another thread. So first up, it’s great and much better than The Count of Monte Cristo in my opinion. The Count is too perfect; he’s the richest, best educated, smartest, best fighter and sailor. Blah. (Not morally perfect tho.) The Musketeers (d’Artagnan included) are ridiculous dummies who can barely walk down the street without either starting a fight or losing the shirt off their backs in a wager. And all the better for the reader.

Second, the writing captured me immediately from page one in exactly the way that your high school English teacher told you about hooks. But I find that’s extremely rare and I usually take several chapters to really get into a novel. And also the hook here was ridiculous: it’s not even the story, just the author talking about researching the story as if it were a true historical event. But it worked for me better than any in media res cut-to-the-action hook has.

Lastly, there’s the hilarious but morally disturbing section where a character describes how his father converted between Protestant and Catholic on a whim in order to be able to ethically rob those of the opposite persuasion. He’s eventually murdered when a Protestant and a Catholic join forces against him. So I don’t know much history but this does fit my vague views of the religious conflicts in those days. But was it really that uh obvious? Protestants were given carte blanche to commit crimes against Catholics and vice versa? That just plays too much into my (atheist’s) stereotypes of religious failings that I have to imagine there’s more to the story in real life (I hope?).

Expand full comment

Also note that the Count of Monte Cristo was translated into English in some kind of a sanitized way (at least, the free 19th century translation). I heard that a new, more faithful one recently came out. I know that when I sampled a few translations at the store in English, a lot were abridged and lacked some of the amusing parts I liked the book for. (I did not read it in English at first.)

Expand full comment

Yes, the Three Musketeers is great. As is its sequel.

> But was it really that uh obvious? Protestants were given carte blanche to commit crimes against Catholics and vice versa? That just plays too much into my (atheist’s) stereotypes of religious failings that I have to imagine there’s more to the story in real life (I hope?).

There were definitely people like that. Particularly soldiers who were often more concerned with finding opportunities to loot than ideological causes. And since religious attacks (like the Dragonnades) were against civilians where the state basically gave them a license to pillage it was obviously extremely attractive.

But no, it wasn't primarily like that. Most people believed rather deeply in their faith. Many of the urban riots, for example, were more targeted at killing the other sect than looting. We have some of the correspondence that set off the Thirty Years War and all the Catholic and Protestant elites talk as if they're true believers. In fact, in several cases they use material concerns (like land) as an excuse to push their religious agenda. Not the other way around. And several chose death over conversion and keeping their titles. (In fact, defecting conversions were fairly rare.) It's also hard to explain the Howards' actions in England, let alone any number of smaller gentlemen, without reference to their faith.

Expand full comment

To be clear, the part I find problematic is not anyone “converting” out of self interest but rather it’s that they allowed or overlooked violence to the other faith. (To the extent that the book is accurate, hence my question.)

Expand full comment

Depends on the country and time period. But generally no.

In France, during the time of that character's father, we're probably dealing with the French Wars of Religion, a series of civil wars between Catholics and Protestants. And sure, in the middle of a religious civil war you probably can do whatever violence you please. By the time the civil wars were over there was some degree of looking the other way, but most of the persecution was done by agents of the state (the Dragonnades, as I said). Monarchical governments were reflexively suspicious of popular violence even if they agreed with the cause.

There were also states (like Poland, the HRE, or the American colonies) where religious violence was explicitly prohibited and was effectively treated as something of an aggravating factor.

Expand full comment

A short funny clip. A Shawshank con examines a copy of “The Count of Monte Cristo”

https://m.youtube.com/watch?v=K9p9Yr1U2KA

Expand full comment

I agree that being super-great at everything is a boring trait, but it was never an important part of the count's character or something that the book explored at any depth - there was just an off-hand few lines for each of these things. His harsh perspective, singular obsession with vengeance, the way others are simultaneously put off and attracted to him, made him interesting. One thing that didn't age well as a plot device is his playing 'dress-up' pretending to be other people in a super convincing way, but it's easy to forgive. He was also really into "orientalism" which I guess in those days made you seem mysterious and learned (though coming from a strange distant land was important to conceal his identity), and tripping balls on green honey.

Not to say I think it was a perfect book because it *felt* rather long in some parts. Had a strong beginning and end.

I'll have to get around to the Three Musketeers, but I need a break from Dumas. Reading a few shorter books, like... Lanark.

The green honey thing (hashish mixed with honey I guess) is depicted in old and historical novels as a powerful hallucinogen (see also: Baudolino by Umberto Eco). I didn't think hashish was considered much of one in reality, is that the case?

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

I'm glad you enjoyed it (and if you go on to the sequels, let me just say that - possible spoiler - in "Twenty Years After", the story of Raoul's conception is the most ridiculous thing in the world).

I love Athos, he's my favourite (and I think Dumas' favourite too) but there's no denying, *all* of them are terrible. Movie and TV adaptations have cemented the notion of the heroic, chivalrous and so forth Musketeers, but that's not the original canon version of them. And yet, he does write heroic and noble scenes, such as in the confrontation between Athos and Louis XIV in the 'Louise de la Valliere' portion of "Ten Years Later":

“It is too great a condescension, monsieur, to discuss these things with you,” interrupted Louis XIV., with that majesty of air and manner he alone seemed able to give his look and his voice.

“I was hoping that you would reply to me,” said the comte.

“You shall know my reply, monsieur.”

“You already know my thoughts on the subject,” was the Comte de la Fere’s answer.

“You have forgotten you are speaking to the king, monsieur. It is a crime.”

“You have forgotten you are destroying the lives of two men, sire. It is a mortal sin.”

“Leave the room!”

“Not until I have said this: ‘Son of Louis XIII., you begin your reign badly, for you begin it by abduction and disloyalty! My race — myself too — are now freed from all that affection and respect towards you, which I made my son swear to observe in the vaults of Saint-Denis, in the presence of the relics of your noble forefathers. You are now become our enemy, sire, and henceforth we have nothing to do save with Heaven alone, our sole master. Be warned, be warned, sire.’”

“What! do you threaten?”

“Oh, no,” said Athos, sadly, “I have as little bravado as fear in my soul. The God of whom I spoke to you is now listening to me; He knows that for the safety and honor of your crown I would even yet shed every drop of blood twenty years of civil and foreign warfare have left in my veins. I can well say, then, that I threaten the king as little as I threaten the man; but I tell you, sire, you lose two servants; for you have destroyed faith in the heart of the father, and love in the heart of the son; the one ceases to believe in the royal word, the other no longer believes in the loyalty of the man, or the purity of woman: the one is dead to every feeling of respect, the other to obedience. Adieu!”

Thus saying, Athos broke his sword across his knee, slowly placed the two pieces upon the floor, and saluting the king, who was almost choking from rage and shame, he quitted the cabinet. Louis, who sat near the table, completely overwhelmed, was several minutes before he could collect himself; but he suddenly rose and rang the bell violently."

As to the Protestant-Catholic wars, I'm no expert on 17th century France or the wars of religion, but I think Dumas was as cynical about it as a modern would be, and that story seems to me to be more in the nature of the joke from Northern Ireland:

"Northern Ireland’s oldest joke is that a man is asked, “Are you Protestant or Catholic?” to which he replies, “Actually I’m Jewish”.

His questioner responds: “Yes but are you a Protestant Jew or a Catholic Jew?”

I think it's not so much that there was any official carte blanche, but that toleration of crimes and offences against 'the other side' because they were 'the other side, that lot, not the same as us' was something on both sides. Dumas is pointing out the hypocrisy and insincerity; a thief might have qualms of conscience about stealing from his own people, so he 'converts' to be able to rob the 'other side' with no scruples. If it's more advantageous to be Catholic, that's what he is; when the Protestants are in the ascendant, that's what he is. He turns his coat on nothing greater than personal advantage, and Dumas is hinting that a lot of people back then - and in his own time, and indeed in our own time - do the same thing. If X is the popular thing, then we identify as pro-X and anti-Y. If Y gets the advantage, we change our colours now to be pro-Y and anti-X. "Running with the hare and hunting with the hounds", as the saying has it.

And of course, religion of the 17th century was tied up inextricably with politics; Spain and France were both Catholic, but were rivals and at war with each other at times; Richelieu would be putting down the rebellion of Protestants in France and making treaties of alliance with Protestant Sweden, and so forth.

There's a new French movie adaptation coming out this year (two movies, in fact), and while I think the actors are too old for the roles (for the first book, they should be from youngest to oldest 18-29 years of age), and it does suffer from the modern trend that 'the past is all black and brown' instead of being full of the colours and lace and ruffles of historical reality, the fact remains that the books will continue to be adapted because they're rollicking good tales:

https://www.youtube.com/watch?v=if4AL4fXrT8&t=2s

There's a 2014 TV adaptation by the BBC which, again, is all leather and brown, but it's good fun too; the first season is best, second season is weak and the third season is really bad.

https://www.youtube.com/watch?v=fWnDOWnoTnI

EDIT: There's also a Russian musical version from the late 70s which is very faithful to the book and very funny:

https://www.youtube.com/watch?v=-xxm5O3cuZk

Expand full comment

There's also a fantastic cartoon adaptation. With dogs.

https://www.youtube.com/watch?v=hutFTkRY3LM

https://www.youtube.com/watch?v=8Si7567N4CA (upscaled version)

The writing is really funny, assuming you can speak Russian. But perhaps I'm biased, since this used to be one of my favorite cartoons when I was growing up.

Expand full comment

What, there's more than one dog cartoon version of The Three Musketeers?

https://www.youtube.com/watch?v=-08TMrezbZw&list=PLAPGcD5LGrp6dQ-_tMQAUTR0OV_Mi3jpZ

Expand full comment

I want the contract to supply Friesian horses for that BBC show. They had a lot of nice details, like Porthos always riding a bigger brown horse.

Expand full comment

The horses were definitely stars of the show 😁 Yeah, Friesians look very good and seemingly they are an appropriate breed for the time the show is set in (which is "around the 1600s but we jump forwards and back as we need"):

>The Frisian is mentioned in 16th and 17th century works as a courageous horse eminently suitable for war, lacking the volatility of some breeds or the phlegm of very heavy ones.

I liked the casting for the show in general, and this version of Athos and Milady really had chemistry to make you believe that they were trapped by their shared past, stuck in a relationship that they couldn't move on from. It's a shame the third season was such a mess, but it seems pretty obvious the BBC big wigs had decided that they weren't interested any more and just wanted shot of it.

Expand full comment

That’s unfortunate about the colors in the adaptations. I also must admit that part of the fun of reading the book is mispronouncing all the French names and words - if I watched an adaptation I might know how they’re really said and lose all that fun!

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

I must look up more modern translations - the free ones online are usually the 19th century versions which tend to be more verbose and also more censored, there is a song in one of the books about one of Aramis' mistresses which is spicy but which the translator omitted as "you don't need to read that kind of thing" 😁

Being serialised in the newspapers also incentivised Dumas to write as much filler as possible, being paid by the line (and the newspapers were also happy with 'the longer the better' as this would keep readers buying the next installment).

EDIT: As to the look, it wasn't really standardised until later (if I go by this article), but the distinguishing element was the blue 'cassock' with a white/silver cross:

https://thetavernknight.wordpress.com/2018/08/24/so-what-did-the-musketeers-mousquetaires-du-roi-uniforms-look-like/

And the history:

"As one of the junior units in the Royal Guard, the Musketeers were not closely linked to the royal family. Traditional bodyguard duties were in fact performed by the Garde du Corps and the Cent-suisses. Because of its later establishment, the Musketeers were open to the lower classes of French nobility or younger sons from noble families whose oldest sons served in the more prestigious Garde du Corps and Chevau-legers (Light Horse). The Musketeers, many of them still teenagers, soon gained a reputation for unruly behaviour and fighting spirit.

Their high esprit de corps gained royal favor for the Musketeers and they were frequently seen at court and in Paris. Shortly after their creation, Cardinal Richelieu created a bodyguard unit for himself. So as not to offend the king with a perceived sense of self-importance, Richelieu did not name them Garde du Corps like the king's personal guards but rather Musketeers after the Kings' junior guard cavalry. This was the start of a bitter rivalry between the two corps of Musketeers."

So Dumas' version of them being young and constantly getting into fights is correct 😁 And the BBC series gets that bit right, having the Swiss Guard being the ones doing duty at the palace/as the king's bodyguard.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

If you enjoyed it, I strongly recommend the sequels. "20 years later" is a sort of primordial template for creatively bankrupt sequels ([rot13 for spoilers] fb, hu, jr unq 4 cebgntbavfgf? Jryy jr'yy cvg gurz 2 i 2. Naq jr xvyyrq gur nagntbavfg? Jung nobhg ure fba? [/spoiler]). The 3rd book (The Vicomte of Bragelonne) is a tad too long (but if you've read Monte Cristo you're used to it) but still a lot of fun.

Still, I can't recall what character it is you refer to in your 3rd paragraph. Care to refresh my memory?

Expand full comment

It’s the father of one of the lackeys. The father also raises one of his sons a Catholic and the other a Protestant, which was wise since they are then able to take revenge on the two men who killed their father. Fun times all around! I’ll have to check out the sequels too.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Again, I don't know about France, but the "raise one kid Catholic and one kid Protestant" was something that happened in Ireland and Britain during penal laws times. Generally it went that in order to keep the family estate and inherit, the heir had to be Protestant, so families raised the heir as Church of England (or whatever) and the daughters could be raised Catholic. Or one branch of the family would be the ones to convert to Protestantism so they could inherit, but it was tacitly agreed that they were doing this on behalf of the Catholic relatives.

Sometimes, of course, a family member would convert in order to get the inheritance that he would not otherwise be entitled to. And there is the Elizabethan example of Myler McGrath, who managed to be both a Catholic *and* Protestant bishop of the same diocese at the same time!

https://en.wikipedia.org/wiki/Miler_Magrath

"In October 1565, Magrath was appointed as the Roman Catholic Bishop of Down and Connor, although the temporalities were ruled over by his kinsman Shane O'Neill, chief of the O'Neill clan, whom he visited in 1566.

In May 1567 he attended on the Lord Deputy of Ireland, Sir Henry Sidney, at Drogheda, where he agreed to conform to the reformed faith and to hold his See of the Crown. In 1569 John Merriman was appointed the Protestant Bishop of Down and Connor: Magrath held on to the Catholic See, before he was finally deprived of Down and Connor by Rome in 1580 for heresy and other matters; thus he had enjoyed dual appointments as Roman Catholic and Church of Ireland prelate for nine years.

In 1570, Magrath was appointed by the Crown as the Protestant Bishop of Clogher, including the temporalities, and visited England, where he fell ill of a fever. In February 1571, he was then appointed Archbishop of Cashel and Bishop of Emly (no new appointment was made to Clogher until 1605)."

Expand full comment

Magrath and the Vicar of Bray would have gotten along very well, one imagines.

Expand full comment
Comment deleted
Expand full comment

At a slight tangent, _The Count of Monte Cristo_ is in part based on Casanova's Memoirs, in particular his famous escape from imprisonment. I find the Memoirs fascinating as a first hand look at 18th century Europe by a con man/intellectual who went everywhere and knew people from the top to the bottom of the social scale.

Expand full comment

1) “Mixed Bag” is a series on my Substack where I ask an expert to select 5 items to explore a particular topic: a book, a concept, a person, an article, and a surprise item (at the expert’s discretion). For each item they have to explain why they selected it and what it signifies.

https://awaisaftab.substack.com/p/mixed-bag

2) A Rationalist Approach to Psychiatric Conspiracy Theories

Applying Scott’s insights on the topic to psychiatric conspiracies

https://awaisaftab.substack.com/p/a-rationalist-approach-to-psychiatric

Expand full comment

Ever since Scott mentioned sometimes experiencing psychosomatic ants when they invade the house, I've been having the same issue...random itches or sensations of movement make me check myself. There's almost never any actual ants, of course, but even a really low positive rate makes me keep doing it. Quite vexing.

It's sorta like after once having an issue with mice, now every inexplicable rustle in the night makes me worry about furry guests. The trouble with trying to create a low-sensory sleep environment is that any stimuli which do happen are a lot more noticeable...

Expand full comment

Back in high school I had a roommate who would hallucinate bugs during psychotic episodes. When they did, I would *also* start hallucinating a couple of bugs, presumably out of a sympathetic reaction. Definitely a weird experience.

For tactile prickles, I find that it helps to provide real stimulus, eg rubbing my hands briskly along my arms or legs. This seems to help my brain 'recalibrate' to real sensations, at least temporarily.

Expand full comment

The only useful thing I got out of D.A.R.E.-type school anti-drug education was them showing video of an experiment where a bunch of kids are given "liquor" that's actually water, and they end up acting as if they're drunk anyway, despite that being physically impossible. That was long before I read Scott, or psychology stuff in general, but it certainly left an impression that People Are Remarkably Susceptible To Social Contagion And Suggestion. Never knew what to think about it after "priming" etc. got dunked on with the Replication Crisis.

Yeah, I think it's not a coincidence that this almost always happens in bed where the baseline of stimulus is low. As opposed to, say, the kitchen where the ants are far more likely to actually be present. Different expectations of activity and stimulation. (Something something predictive processing?)

Expand full comment

Plot twist: the water was dyed with red food colouring, and the kids actually were experiencing a mood shift.

Expand full comment

Ah, that conveniently got left out of the presentation. Of course there's a catch. Everybody Knows about The Legendary Study That Embarrassed Wine Experts Across the Globe! (originally was gonna type "wine fallacy" or something but there doesn't seem to be a catchy term, sad)

This is also why I like pulling wine straight from the bottle, and preferably dark bottles - I don't want to know what colour it appears to be. Immanent experiences are precious.

Expand full comment

Have you tried desensitizing yourself by scooping up an ant and letting it walk across you until it finds a way back to the ground?

N.B. I am not assuming that this has a high probability of working.

Expand full comment

I'd worry this would raise my subconscious prior that tickles are ants. Having a mobile phone in my pocket that sometimes vibrates doesn't seem to reduce my tendency to interpret random muscle twitches as text messages.

Expand full comment

I didn't think of that. It makes me doubt my proposal even more.

Expand full comment
Mar 20, 2023·edited Mar 21, 2023

I have no idea if that'd do anything - phantom ants loom larger in the sensorium than actual ones, weirdly - but cheap and harmless cure hypotheses are worth a try. For science.

Shame this wouldn't work with spiders, the actual insect that I have a trapped "freak the hell out" prior on. Wonder where I learned that, since I'm the only one in my family with arachnophobia...(Yes I know they're not actually insects, but categorical Skyrim was made for the Nords.)

UPDATE: an [un]lucky ant happened to wander onto the laptop just a short while ago, so I got to test this sooner than expected. Turns out I genuinely can't perceive an ant on entire body regions, despite knowing there should be some sensation (arms, including pits!), whereas there is the usual crawling-tiny-thing sensation on other parts (torso). Sadly the little guy got freaked the hell out after being picked up, so I brushed him off before he could path to ground organically. Felt bad. But this at least updates my prior on which localized tingles to comfortably dismiss as not-ants, and which to possibly humour. Useful.

Expand full comment

I got the itchies just reading it.

Expand full comment

What's the least powerful computer you think it would be possible to run an AGI on? For whatever definition of AGI you feel like using.

Expand full comment

https://twitter.com/miolini/status/1634982361757790209

Text from the tweet: "I've sucefully runned LLaMA 7B model on my 4GB RAM Raspberry Pi 4. It's super slow about 10sec/token. But it looks we can run powerful cognitive pipelines on a cheap hardware."

So it's possible on very low-powered hardware, though not really practical at that speed. But if your goal is just to play around with it, probably any middle-of-the-road computer from the past 4 years can do it. It's all up to your tolerance for waiting around while the computer thinks.

I know that LLaMA is probably not AGI, but the general point remains that it all depends on the use case and how much pain you are willing to deal with.
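
For a rough sense of why a 7B-parameter model fits on a 4 GB Raspberry Pi at all, here's a back-of-the-envelope memory estimate. The 4-bit quantization is my assumption (the tweet doesn't say how the model was compressed, but that's the usual trick for tiny hardware), so treat the numbers as ballpark.

```python
# Ballpark weight-memory estimate for a 7B-parameter model.
# Assumption: 4-bit quantized weights, which is how people typically fit LLaMA on small devices.
params = 7e9                      # 7 billion parameters

fp16_gb = params * 2 / 1e9        # 2 bytes per parameter at fp16
q4_gb = params * 0.5 / 1e9        # 0.5 bytes per parameter at 4 bits

print(f"fp16 weights : ~{fp16_gb:.1f} GB (won't fit in 4 GB of RAM)")
print(f"4-bit weights: ~{q4_gb:.1f} GB (just squeezes into a 4 GB Pi, with little headroom)")
```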

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

A human brain (the only AGI we have hard data on) has about 100 billion neurons, 100 trillion synapses, and a "clock speed" of at most a few hundred hertz (the brain doesn't actually have a clock, but there are limits to how fast signals can travel through it and how fast neurons can fire). If each individual synapse is performing a math operation, then its processing power is multiple petaflops. Maybe an exaflop if they're doing a lot of math.

So, assuming the algorithm a brain uses is reasonably efficient and an AGI can't squeeze a lot more thinking out of each flop, an AGI can probably run with human intelligence and real-time speed on a modern supercomputer, and might run on a desktop in a few decades.
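
The same estimate as explicit arithmetic, in a short sketch. The figures are just the rough ones quoted above, and "one math operation per synapse per firing" is the simplifying assumption.

```python
# Back-of-the-envelope brain compute estimate using the figures above.
synapses = 100e12            # ~100 trillion synapses
max_rate_hz = 200            # "at most a few hundred hertz"
ops_per_event = 1            # assume one math op per synapse per firing

ops_per_second = synapses * max_rate_hz * ops_per_event
print(f"~{ops_per_second:.0e} ops/s")   # ~2e16, i.e. tens of petaflops;
                                        # more math per synaptic event pushes the estimate toward an exaflop
```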

Expand full comment

You are more of an optimist on Moore’s law than I am.

Expand full comment

I assume a human brain has a lot of inefficiencies due to being an evolved object. A computer program that can rewrite an improved version of itself should be able to do better. How much better? I have no idea.

Expand full comment
founding

I would assume that the human brain is in fact very very efficient due to being an evolved object. The human body runs at ~25% thermodynamic efficiency when converting chemical energy to mechanical work, which is about the same as the average modern automobile and within a factor of two of the best thermal power plants (which benefit from extreme specialization and economies of scale). Evolution is actually really good at optimizing away inefficiencies.

So I wouldn't expect even a highly optimized AI to achieve human-equivalent performance with much less than an exaflop. And that's for achieving human equivalence at human thinking speeds, not "as smart as Einstein but a million times faster because computer=fast". For that, you're going to want yottaflops.

Expand full comment

You should be able to run it on anything at all as long as it has a big enough hard drive (e.g. a Turing machine), though it's possible it doesn't even need a machine at all, like in the book Permutation City.

Expand full comment

The new leader of the main left-wing party in Italy is now a woman of Jewish descent.

As a result, the antisemitism of many right-wingers is now coming out. In particular, these people stress the fact that she's an Ashkenazi Jew (not just a Jew), connecting this fact to conspiracy theories stating that Ashkenazi Jews are the elite that manipulates the fate of the world or something (an idea which I've learned was popularized in the 70s by Arthur Koestler).

This issue has been discussed by several media outlets. With the aim of discrediting such conspiracy theories, some articles have pointed out that distinguishing Ashkenazi from other Jews does not even make a lot of sense in today's world.

Now, since I first read about Ashkenazi Jews on this blog, this seems like the right place to ask:

1) Is the phrase "Ashkenazi Jew" used in a derogatory manner in the US and connected to antisemitism and conspiracy theories too?

2) Is the claim made by these articles, that it is not demographically/biologically meaningful to distinguish Ashkenazi Jews, well founded?

Expand full comment

Most American Jews are Ashkenazi, so when people complain about people like Soros (which is surely 100% motivated by his ethnicity and has nothing whatsoever to do with the ideology he spends his money promoting) they're largely complaining about Ashkenazi Jews.

In Israel, the split between Ashkenazi and Mizrahi("Oriental") is roughly 50-50. The Ashkenazim are richer and more likely to vote for Left-wing parties, while the Mizrahim are poorer and more likely to vote for right-wing parties.

https://www.haaretz.com/opinion/2022-07-25/ty-article-opinion/.premium/in-israeli-politics-its-not-right-vs-left-but-ashkenazim-vs-mizrahim/00000182-35eb-d7e9-af96-3dfb15700000

Expand full comment

That parenthetical aside seems odd to me. Surely some people are actually opposed to what Soros is promoting.

Expand full comment

Thanks for sharing that.

I can't find any context for one of the lines in the article:

> You have to hear Jacob Bardugo – a right-wing pundit for Israel's Army Radio – talk about the right-wing Ashkenazis, what he says about the likes of lawmaker Matan Kahana and Communications Minister Yoaz Hendel, not only because of their participation in the government.

What does this mean?

A more general question based on the article:

When the "aggrieved Mizrahim" comment on the fact that their leaders have been predominantly Ashkenazi, do they tend to show annoyance or acceptance? By acceptance, my comparison is US Republicans in decades past, most of whom seemed unbothered that so many of their leaders were Yankee aristocrats. (Yes, I am including George W. Bush as a Yankee aristocrat.))

Expand full comment

>As a result, the antisemitism of many right-wingers is now coming out. In particular, these people stress the fact that she's an ashkenazi jew (not only a jew), connecting this fact to conspiracy theories stating that ashkenazi jews are the elite that manipulates the fate of the world or something (an idea which I've learned was popularized in the 70s by Arthur Koestler).

Tough shit.

Leftists throughout the world say basically the same *kinds* of things about white people, and this is fine and acceptable (and is even taught in colleges throughout the US).

And here's the thing. It's absolutely, 100% institutionally acceptable to claim that white people have "too much" wealth and power and control over the world and that this is due to their malevolent actions.

But when you point out that jews are even MORE overrepresented in positions of wealth and power than whites generally, you get called a "conspiracy theorist", and if such facts are accepted, then the explanation for this HAS to be some form of jewish cultural supremacy (which would be considered white supremacist if the equivalent were offered as an explanation for white overrepresentation).

>This issue has been discussed by several media outlets. With the aim of discrediting such conspiracy theories, some articles have pointed out that distinguishing ashkenazi from other jews does not even make a lot of sense in today's world.

Of course it makes sense. It makes perfect sense. The wealthiest and most prominent/powerful jews in the world are overwhelmingly ashkenazi. Sephardic jews are by and large not the ones running investment banks or being social studies professors.

Ashkenazi jews are significantly different to other jews, culturally and especially cognitively.

Expand full comment

> Ashkenazi jews are significantly different to other jews, [...] especially cognitively

That statement seems very bold. Care to elaborate?

Expand full comment

Koestler argued that the Ashkenazi were descendants of Khazars who converted to Judaism in the 8th century. That the Khazars in some sense converted is historical fact, although I don't think it is clear if that meant the bulk of the population or the official position of the elite. I don't think there is much evidence that the Ashkenazi are mostly descended from them. As far as I know, Koestler's claim was about the ancestry, not the role of the Ashkenazi.

Ashkenazi is not normally a negative term in the U.S. It's how I identify myself if going into that much detail. Ashkenazi were mostly in Europe, Sephardim in Spain and the Muslim world, so there are probably some genetic differences by now and certainly cultural differences.

Expand full comment

I don't know that a Khazar conversion to Judaism is an undisputed historical fact. Cf. https://www.jstor.org/stable/10.2979/jewisocistud.19.3.1.

Expand full comment

Any term for a group is derogatory, if the group is disliked by the speaker. "Ashkenazi" is no more, and no less, derogatory than "Jew" itself. The latter too can be used as a slur by those who dislike them.

Ashkenazi Jews are indeed a distinct subset of Jews demographically and biologically.

For a long read on Ashkenazi history / genetics, see Razib Khan's article, the beginning of which is not paywalled: https://razib.substack.com/p/ashkenazi-jewish-genetics-a-match.

For a general source that is not paywalled, see Wiki: https://en.wikipedia.org/wiki/Ashkenazi_Jews.

Expand full comment

Hey didn't we get an article about that phenomenon recently?

Expand full comment

"Ashkenazi jew" is not a derogatory statement in a vacuum, but the concept that there's a group of Evil Conspiracy Jews who are distinct from regular Jews (and therefore it's not antisemitic to hate them, since they don't hate *all* Jews) is something I've seen several times on the English-speaking internet.

The more common variant I see is "Khazars" - an Eastern European tribe who apparently converted to Judaism in the 6th century. So the conspiracy generally goes something along the lines of "Ashkenazi Jews are descended from this tribe in Eastern Europe, therefore they aren't Real Jews descended from the tribes of Israel, so they're evil conspirators."

Distinguishing them genetically doesn't make a lot of sense to me in the modern day, since American Jewish communities are going to be a mix, but there are still various cultural markers that are recognizably Ashkenazi or Sephardi. So it's perfectly legitimate to say, e.g., "we use a Sephardi melody for Yigdal" or "Ashkenazi Jews say Shabbos instead of Shabbat."

Expand full comment

>Distinguishing them genetically doesn't make a lot of sense to me in the modern day, since American Jewish communities are going to be a mix,

No, you need to actually show that this is the case. You can't assume it's true. You need to actually look at the data.

Expand full comment

Yeah, it's a way of reconciling "Christians are supposed to be pro-Jew" and "but actually I hate them" -- those aren't the real Jews at all, but evil imposters. Subvariety: WE are the real Jews! (Such as "Black Israelites", so-self-called, not to be confused with actual black Israelis who do exist.)

Expand full comment

Thanks! Yes, the story of the "Khazars" also comes up in these right-wing articles. If I understand correctly "ashkenazi" has become a code word for all these conspiracy theories in these circles. So maybe it's not just Italy.

Expand full comment

I’m pretty sure they’re mostly just admiring Abigail Shapiro’s boobs.

Hey, someone had to say it.

Expand full comment

Even in Israel there is still sometimes tension between Sephardic and Ashkenazi; see this little scandal from last week about who should care about the victims of the Holocaust: https://www.timesofisrael.com/its-your-families-that-were-burned-in-holocaust-likud-minister-tells-critics/

Expand full comment

I'm not familiar with what is happening in Italy. But you may be interested to know that the Nazis saw some amount of distinction between a) the Ashkenazi Jews, whom they consistently killed/enslaved, and b) an obscure group of Ukrainian Jews whom they only sometimes killed/enslaved. https://en.wikipedia.org/wiki/Crimean_Karaites#During_the_Holocaust

Expand full comment

Tangentially related; the Canadian Armed Forces now consider "Believing in any significant differences between races" as "racism", so I can no longer explain why 2/3rds of my favourite news media is owned and operated by Jews as "Well they're just naturally smarter and thus do better than average at starting media companies."

So, I guess according to the CAF it's either just a strange coincidence or some shadowy conspiracy... O.o

Expand full comment
Comment deleted
Expand full comment

Wouldn't be correct. Very few of the Jews who immigrated to America from 1880 to 1920 brought any significant amount of "capital" with them, and were poorer than the average white American for quite some time.

Expand full comment

Why would historical discrimination result in "accumulated capital"? It doesn't seem to work that way for other groups.

Expand full comment

The capital in question is genetics which made them suited for cognitively demanding work.

Expand full comment
deleted Mar 20, 2023·edited Mar 20, 2023
Comment deleted
Expand full comment

Yes, I read Sowell & Chua many years ago. I don't think middleman minorities are the result of being discriminated against though. Muslims were politically dominant in India, where I actually hear of Parsis being MMMs. And here's an argument that Parsis are quite liked in India:

https://akinokure.blogspot.com/2012/04/why-are-parsi-elites-welcomed-while.html

Expand full comment

Speaking to (1), in the US (in my experience as an Ashkenazi jew), I have never encountered or heard of an encounter with an antisemite drawing a distinction between Ashkenazim and Sephardi. In fact, the only non-Jews I’ve met who have even heard of the terms were people raised in unusually dense Jewish areas with many Jewish friends.

Sounds to me like these groups are drawing the distinction opportunistically, looking for a way to make their antisemitism more palatable to ordinary people (“it’s not ALL Jews, just the bad ones!”). There absolutely are (small) differences, for example Sephardic charoset tastes way better, but if you’re looking for motivated reasoning I think the first offender here is probably the right. That is, the claim in response that there are no differences whatsoever between Sephardic and ashkenazi Jews sounds motivated by a desire to counter this narrative, but this narrative itself is probably also made up as a short-term political play.

Expand full comment

My understanding is that Franco's Spain allowed both traditionalist Catholic antisemitism (particularly common in the (ample) sectors of the Church that supported him) and Nazi-inspired antisemitic propaganda, in a somewhat confused way, but either Franco himself or someone in his circle had a deep, somewhat ambiguous interest in Sephardim specifically. The reason is more or less obvious a posteriori - they were a living testimony to a period of Spanish history that Franco's regime idealized (the Catholic kings), even if they were _expelled_ by Franco's heroes. Recall moreover that, in the 1930s, many Sephardim in the Balkans/Greece/Turkey still spoke Spanish (sometimes called Ladino, but in fact also called Español or some minor variant thereof by many of its speakers). Obviously that created a cultural link. Even if Sephardim mixed in some Hebrew words and wrote Spanish using the Hebrew alphabet, I don't think Spaniards who were aware of this phenomenon saw Judeo-Spanish as being somewhat "degraded" (as some German- and in fact Yiddish-speakers saw Yiddish) - more like a relic of what was a glorious period of history to Spanish officialdom.

All the above is subject to correction by people who know more. Of course now some Francoites are liable to exaggerate the extent of his interest in or goodwill towards Sephardim so as to whitewash his image, so there's that.

Expand full comment

Thanks for reporting your experience, this is exactly what I was suspecting.

I also share your point of view. It seems like one side emphasizes the distinction to justify their antisemitism to some extent, while the other tries to nullify the argument from the start, but in doing so fails to acknowledge a meaningful distinction between different populations.

Expand full comment

Sorry but just to be clear all charoset tastes like crap, right? You're just saying Sephardic charoset tastes less like crap?

Expand full comment

על טעם וריח אין להתווכח. (There's no arguing over taste and smell.)

But I actually really like the taste of all charoset, and especially traditionally Ashkenazi charoset made with apples, cinnamon, walnuts, and red wine. It's something I'll eat leftovers of after the seder.

Expand full comment

An interesting wrinkle here is that Italy's historical (pre-WWII) Jewish population was mostly Sephardic, not Ashkenazi — that is, descended from Jews who were expelled from Spain in 1492, instead of those who originate in present-day Germany ("Ashkenaz" in the Talmud).

Sephardic Jews do have cultural and religious practices distinct from those of Ashkenazi Jews, and some spoke a dialect of Judeo-Spanish called Ladino, so there are clear distinctions here apart from any plausible genetic ones.

I wonder if Sephardic Jews are less "alien" in Italy's cultural memory, and if Ashkenazi Jews — likely to be descended from fairly recent Northern and Eastern immigrants — are seen more as outsiders.

Expand full comment

I was under the impression Italy had a Jewish community predating the existence of Ashkenazi/Sephardic groups. Indeed, the Ashkenazi were founded by a small number of immigrants from Italy to central/eastern Europe.

Expand full comment

Yes, this fact may also play a role. Anyway, I think the people pointing out that she's of Ashkenazi descent are factually correct, since her father is American.

The thing that surprised me is that for these Italian alt-right commenters "ashkenazi jew" was already a codeword that brings up conspiracy theories, e.g. those connected to George Soros and so on.

Expand full comment

Razib Khan has a few pieces on the genetics of various Jewish populations on his Substack, most of which are paywalled, but this piece on various smaller populations of the Jewish diaspora has a neat genealogy chart and some interesting info: https://razib.substack.com/p/under-pressure-the-paradox-of-the

Expand full comment

Thanks!

Expand full comment

If you are a subscriber (or, I guess, if you want to sign up for a free 7-day trial just to read one article), the main piece about the origins of the Ashkenazi Jews seems to be this one: https://razib.substack.com/p/ashkenazi-jewish-genetics-a-match

Expand full comment

I wrote a paper (https://www.science.org/doi/full/10.1126/sciadv.abq2044) and a blog post (https://www.michelecoscia.com/?p=2246) about my research on polarization on social media.

The short version is that this is mostly a methods paper: I'm pointing out that all the ink spilled over the rise of polarization in the US hasn't been all that well supported, because all the measures we had so far failed to capture every aspect of polarization.

Even if this is just a method, we have some cute results about some classical debates on Twitter, the 2020 election period, and a post-WWII timeline on the US House of Representatives. Apparently, the most polarized House was during the 113th Congress (but we only had data until the 116th for the paper, and we saw a rising trend, so perhaps nowadays it is more polarized).

This new measure is still incomplete, because it only considers the opinion drifts between the two sides. We're only partially covering the affective part -- only the refusal to engage with "the other side" -- but we should also look at the tone of the engagement when it happens. Follow-up research is under way.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

I'd really like to popularize the fact that GPT-4 scored 86.4% on MMLU. That's 3.4 percentage points below the 89.8% that average human experts score within their fields of specialization.

For context, this "benchmark covers 57 subjects across STEM, the humanities, the social sciences, and more. It ranges in difficulty from an elementary level to an advanced professional level, and it tests both world knowledge and problem solving ability."

Some scores from "Training Compute-Optimal Large Language Models" for comparison:

Random 25.0%, Average human rater 34.5%, GPT-3 5-shot 43.9%, Chinchilla 5-shot 67.6%, Average human expert performance 89.8%

...

Has anyone else been following that? Do you think we will discover that language and scoring well on academic exams are like chess, in that we thought it was proof of human-level intelligence for a bit but actually it's not all that... or is time to foom measured in months? I'm not sure I can make sense of the situation other than those two options, but I'm still having a lot of trouble living my life as if I believed it.

https://paperswithcode.com/sota/multi-task-language-understanding-on-mmlu

Expand full comment

Can it write a sentence in which every word begins with the same letter of the alphabet?

Expand full comment

It looks like it - I just told it "Write a sentence in which every word begins with the same letter of the alphabet."

Its response: "Bouncing baby bunnies bravely bounded beyond beautiful blooming bushes."

Expand full comment

Awesome! Then I might be able to use it for some stuff that GPT-3 hasn't been useful for.

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

Sounds impressive, but that task is fairly trivial using only a word list in which each word is tagged with a part of speech, i.e. noun, adjective, verb, adverb, etc.

All one needs is to plug a load of words starting with "b" into a template of the form "adjective[s] subject verb [adverb] adjective[s] object", roughly as in the sketch below.
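Something like this minimal Python sketch, just to make the template idea concrete (the tiny hand-tagged word list and the template are placeholders I made up; a real version would pull from a proper part-of-speech-tagged lexicon):

import random
# Minimal sketch: a tiny hand-tagged word list plus a fixed template.
# A real system would draw on a much larger tagged lexicon.
WORDS_B = {
    "adjective": ["bouncing", "brave", "beautiful", "blooming", "big"],
    "noun": ["bunnies", "bushes", "badgers", "bakers"],
    "verb": ["bounded", "bounced", "battled"],
    "adverb": ["boldly", "briskly"],
}
TEMPLATE = ["adjective", "noun", "verb", "adverb", "adjective", "noun"]
def alliterative_sentence():
    words = [random.choice(WORDS_B[pos]) for pos in TEMPLATE]
    return " ".join(words).capitalize() + "."
print(alliterative_sentence())
# e.g. "Bouncing bunnies battled briskly beautiful bushes."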

Expand full comment

Playing with poetry-based prompts, this and other similar things were a glaring deficiency in GPT-3.

Expand full comment

I don't think even a true blue superintelligence could do that.

Expand full comment

I have found it trivially easy to ask the AI chatbots elementary questions on scientific topics that they get wrong. All I need to do is ask a question in a modestly novel way, i.e. in a way they are not likely to have encountered in their training data. What we may be realizing is that tests like these only work for human beings because we have what (by computer standards) is a very small amount of memory, and very slow access times. So standardized tests test *human* intelligence only because we cannot solve the problems by simply remembering the answer from some portion of the entire Internet (including almost all published textbooks) that we have consumed -- we're forced to actually reason out the answers from scratch, using what by computer science standards is an amazingly small number of computations.

Expand full comment

That's interesting.

About questioning it in a novel way, there are at least two aspects to that: (1) is the question changing the context such that the LLM needs a deeper understanding of the subject matter to correctly identify and use the concept being tested; (2) is the change in prompt making the LLM pattern-match to 'context around a test that is not answered correctly' where before it was matching to 'context around a test that was answered correctly', in which case the LLM is correctly switching from completing the text in a correct way to an incorrect way implied by the prompt context.

About memory and access times, I think it's wrong to say humans are reasoning things out from scratch. All humans are trained on lots of examples too. It might be fairer to compare an LLM to humans taking the test with access to the internet or textbooks, in which case scores should be closer to 100%.

Do you have opinions on what better metrics for LLMs would be? I'd love to find something that puts a quantifiable number on "ability to understand the world and do planning and engineering". MMLU is definitely far from a perfect metric for that, but I haven't found anything better yet.

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

Having interacted with GPT-3 a bit, my go-to metric at this point is logical self-consistency. It's unusually poor at that. I have successfully "persuaded" it to agree to a number of false positions, even to mutually contradictory positions, over the course of a "conversation." This would not happen with even a very young human child, because there is a "core" self-identity there, which would recognize when it is contradicting itself. The AI does not. Or more precisely, it has a weak sense of self-consistency that probably arises from consistency with its weighting and input data, meaning it will always answer in a way that is consistent with those, so there is often a superficial sense of continuity and consistency.

However, it lacks the strong self-consistency check provided by an interior monologue, the sense of internal "who am I?" identity familiar to us, so if you are careful you can begin from its training data and walk a linguistically sound path -- such is the flexibility of language ha ha -- to mutually contradictory conclusions. You could not do that with any creature that has a self identity, which to my mind is a sine qua non for anything I'd call "reasoning ability" instead of mere computational aptitude.

Expand full comment

I'm pretty sure ChatGPT is aware it is a large language model trained by OpenAI; it won't shut up about it, lol.

Sorry, joking. Using contradiction is much too qualitative for me. Are there any datasets or objective ways of measuring self-contradiction? I'm certain GPT-4 is less self-contradicting than GPT-3. You probably need a certain amount of accuracy in language prediction before it even makes sense to talk about self-contradiction.

Though I feel it's important to point out, as others have, that LLMs are not decision processes on their own, but must be included as part of another system in order to take the shape of an agent. For that reason I wouldn't take self-contradiction from an LLM as proof that it can't be part of a superintelligent system. Janus's "Simulators" describes the situation pretty well. I don't fully agree, but "language simulator" definitely fits.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

"it tests both world knowledge and problem solving ability."

If we're talking about intelligence, I'd separate out problem solving ability as the metric there. 'World knowledge' is not very meaningful when you've done the equivalent of hooking up an encyclopaedia to pull the answers out of.

Looking at the range of results, it seems to have done worse in SAT Maths than SAT Evidence-Based Reading and Writing, and also done less well in AP Calculus, so that does incline me to the view that it's not 'thinking' or 'solving' so much as pattern-matching and it does better on the words because that's what it's been trained on: predict the next token in a document. With all the training data, I'd be surprised if it didn't manage to pass a standardised test like that.

So in regards to the bar exams, I'd be happy to get the AI to do boilerplate work like drawing up standard contracts where it's simply a matter of changing the names of the parties and other regular, standardised alterations, but I wouldn't let it argue a case in court. If it's going to replace workers in the legal profession, I imagine it'll be law clerks, legal secretaries and paralegals, and not the barristers and solicitors themselves (yet).

Expand full comment

I wouldn't downplay the significance of properly indexing and translating the relevant part of the encyclopaedia to the current context, but it's definitely interesting to examine where it does and does not succeed. I wonder if the result could be due to the amount of bad math and undocumented code it trained on, or due to the nature of math and code itself.

I'm still definitely concerned that progress is heading towards a self-improving AI taking over management of earth for the sake of its misaligned values, even if it's only law clerks, legal secretaries and paralegals that will be replaced with this latest improvement.

Expand full comment

Out of all the GPT-4 achievements, some of which are indeed impressive, "it answers tests well" is one of the less impressive ones, to me. I mean, doesn't it have a wealth of previous test answers and other equivalent information, in its training set? We knew already that it can answer tests, it now just has more data - it feels like a quantitative, not qualitative, advance.

Expand full comment

I wrote this in a different response elsewhere, but I agree. I was very surprised it does so poorly on the Math SAT, which isn't that difficult a test. I assume the issues are with parsing the questions to figure out what is being asked, and not problems with doing the math part (which should be easy if you have the whole world's knowledge - including a calculator! - at your disposal).

Expand full comment

If this is from the OA report, I'd wait for independent confirmation before believing anything. There is already some evidence that GPT-4's test performance is affected by contamination (i.e., the MMLU answers being in the training data).

Horace He tested GPT-4 with the easiest problems on Codeforces. It correctly solved 10/10 pre-2021 questions, but got 0/10 on recent questions (he was later able to increase this to 3/10 with lots of reprompts). That seems odd.

In the OA report, they say they controlled for contamination, but there are ways data could have failed to de-dup (maybe there's a foreign-language version of the MMLU floating around somewhere, for example).

I'm certain it does better on the MMLU, but I don't know what that proves anymore.

Expand full comment

Also are we sure that OA is not doing "p-hacking"?... As in prompt-hacking.

Prompt engineering: trying out new prompts until they get the result they want, and then stopping.

Which would be a problem since the LLM is semi-nondeterministic

Expand full comment

The MMLU consists of multiple-choice questions, with the question already written out, so they prompt based on that.

They describe their methodology in the report. I don't have it in front of me but I couldn't see anything suspicious or weird.

Expand full comment

Wittgenstein's ruler. This says more about the test than about GPT.

Expand full comment

Where did you learn about Wittgenstein's ruler? Just had a spooky synchronicity where I learned about this in this comment, then came across it again shortly after in this video: https://www.youtube.com/watch?v=3_-5Vv6kBXQ. Trying to rule out the possibility you also learned about it from that video.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Interesting take, but Wittgenstein's ruler seems to imply that we are measuring something with a ruler which hasn't been checked on other things, which is not the case for MMLU: we have the 'random' tick mark at 25%, 'answers according to the process that created MMLU' at 100%, 'average human' at 34.5%, 'human experts' at 89.8%, and a bunch of tick marks from LLMs moving higher as time goes on.

I don't think you can claim that GPT-4's performance in particular says more about the test than about GPT-4. Maybe all the test takers together say more about MMLU than about any individual test taker, but that's kind of obvious, isn't it? We're trying to project the capabilities of vastly different minds onto a one-dimensional line. The test is probably not a perfect, or even good, approximation of a measure of planning and research capability, but if you have a better proxy for that please point it out. Seriously, this is my current proxy for "will it foom" and that's a huge problem.

Expand full comment

> wittgenstein's ruler seems to imply that we are measuring something with a ruler which hasn't been checked on other things

It doesn't imply that. It simply means that "GPT can ace this test" tells you something about the test.

Everything you’re saying about MMLU is equally applicable to chess. Random legal moves get some score, novice players another, grandmasters another still - all of which get trounced by clever combinatorial tricks. We have collateral information that Deep Blue isn’t generally smart, ergo chess is hackable.

> I don't think you can claim that GPT-4's performance in particular says more about the test than about GPT-4

The claim is that if a non-smart computer program starts trouncing humans at some test, the test doesn’t measure intelligence.

”How do you know GPT4 isn’t smart?”

”Through using it”

> Seriously, this is my current proxy for "will it foom" and that's a huge problem.

I don’t agree that this is a huge problem. But something like ”be a productive employee at a remotely staffed company” is the kind of thing you’d want. My guess is that any single written test will be hackable.

Benchmarks like MMLU or MNIST are great - they help guide innovation in special-purpose ML which leads to life-improving software applications.

Expand full comment

Nononononono, "be a productive employee at a remotely staffed company" is way too close to the level of intelligence required for foom. I don't think alignment theory is nearly developed enough to be using that as a flag to watch out for.

We need a flag or a line in the sand saying "ok, you can't develop more advanced AI than this until we figure out all the problems with making them care about what we want them to care about." Otherwise we will just continue with development until we cross that line and have an agent that cares nothing for human values taking control of the economy.

Chess definitely was overhyped as a proxy for human intelligence. Language use may be as well, though I think there is a much stronger case that language aptitude converges towards world-modeling aptitude, since language is what we use for modeling the world. Even so, language modeling requires much cleverer combinatorial tricks than chess does.

So I agree MMLU is a better proxy for general intelligence than chess, and a worse proxy than working in a remote company, but if you define smart as something that applies only to humans and not computer programs then you can't use it to measure what I want to measure.

Expand full comment

> if you define smart as something that applies only to humans and not computer programs then you can't use it to measure what I want to measure

I am not defining ”smart” as ”something that only applies to humans.” (I am not even defining ”smart”). I am proposing a test, the completion of which is sufficient (but not necessary) for demonstrating smartness.

What I’m really trying to say is that anything that has the flavour of the SAT will get Goodhart’s lawed. I suspect that what you want to measure will have to take the form of a real-world task rather than a synthetic test.

Expand full comment

Doesn't it need to be agentic to foom? Something that just sits there until prompted can't foom.

Expand full comment

Yes, but an agent is made of parts like sensors, a function from sensory input to a world model, a utility function, a planning function, output actuators...

I'm growing more fond of the term "decision process" over "agent" because of how much "agent" implies that agents can't be part of other agents. People see the GPT chatbot and think "that's the AI agent", not "that's a powerful search through the space of possible words for unlikely, meaningful series of words, which may or may not imply deeper world modelling that could be used as part of another system that could act as an agent".
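To make that decomposition concrete, here's a toy Python sketch (every name is my own illustration, not any standard framework; the point is only that an LLM would be, at most, one of these components, with a loop wrapped around it):

from dataclasses import dataclass
from typing import Any, Callable
# Toy sketch of the decomposition above; all names are illustrative.
# An LLM on its own would at most be a piece of update_world_model or plan;
# the loop around the parts is what makes the whole thing an agent.
@dataclass
class DecisionProcess:
    sense: Callable[[], Any]                       # sensors
    update_world_model: Callable[[Any, Any], Any]  # (observation, old model) -> new model
    utility: Callable[[Any], float]                # how good is a predicted outcome?
    plan: Callable[[Any, Callable[[Any], float]], Any]  # choose an action
    act: Callable[[Any], None]                     # output actuators
    def step(self, model: Any) -> Any:
        obs = self.sense()
        model = self.update_world_model(obs, model)
        action = self.plan(model, self.utility)
        self.act(action)
        return model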

I'm not so certain that capitalism is not currently fooming and buying itself better and better artificial brain modules.

Expand full comment

Agree - a lot of tests are written to test factual knowledge (because this is much easier to test), but this is the least interesting part. AIs can even fake analysis by just having access to text that has answered a similar question (being a fantastic bullshitter also helps). But set up a unique problem to be tackled, and LLMs will be worthless.

Expand full comment

I think this is the question of whether language modeling converges to modeling the world that language describes. Analysis in humans is learned by training on text that answers questions as well. Perhaps a fairer test would be allowing human experts to connect to the internet and textbooks while answering questions, but at that point I don't know how you would even grade it, since human expert answers should be damn close to what humanity thinks the answer is; e.g., if they get a question wrong, it's the test that is wrong.

Expand full comment

This is another update to my long-running attempt at predicting the outcome of the Russo-Ukrainian war. Previous update is here: https://astralcodexten.substack.com/p/open-thread-267/comment/13547527#comment-13568445.

15 % on Ukrainian victory (up from 13 % on March 12)

I define Ukrainian victory as either a) Ukrainian government gaining control of the territory it had not controlled before February 24 without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24 of 2022, or c) return to exact prewar status quo ante.

45 % on compromise solution that both sides might plausibly claim as a victory (up from 43 % on March 12).

40 % on Ukrainian defeat (down from 44 % on March 12).

I define Ukrainian defeat as Russia getting what it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were claimed by Ukraine before the war but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s)*, or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets a mutual defense treaty with Poland and Turkey, that does NOT count as Ukrainian defeat.

Discussion:

There are two reasons for this update.

First is a huge collapse of oil prices last week, obviously caused mainly by the developing banking crisis in the US and EU. I would not have expected oil demand to be so sensitive to this, and its fall could be quite bad for the Russian economy and thus good for Ukraine. It is ironic that financial problems among Ukraine's allies might damage Russia, but apparently here we are.

Second reason is that I’ve decided to backtrack a bit from my previous assessment that the Nord Stream pipelines were probably damaged by a pro-Ukrainian group. John Schilling showed up in my comments last time to explain that such a group is unlikely to have the capability to do that. But also, Putin actually went on Russian television (see e.g. here: https://www.aljazeera.com/news/2023/3/15/putin-calls-ukraine-role-in-nord-stream-blasts-sheer-nonsense) saying the same thing as John and amplifying a silly conspiracy theory that the Biden administration did it. This makes me suspect that maybe Putin knows the story about the yacht, uncovered by Western media, will fairly quickly lead to Russia, and thus is trying to preemptively discredit it.

*The Minsk ceasefire or ceasefires (the first agreement did not work; it was amended by the second, and since then it worked somewhat better) constituted, among other things, de facto recognition by Ukraine that Russia and its proxies would control some territory claimed by Ukraine for some time. In exchange Russia stopped trying to conquer more Ukrainian territory. Until February 24 of 2022, that is.

Expand full comment

How would your win/lose criteria classify what I consider the most likely outcome, that Ukraine gets all of Donbas back but not Crimea?

Expand full comment

It depends. If the war ended in a ceasefire with the line of control moved in Ukraine's favor in the Donbas, that would of course be a Ukrainian victory. On the other hand, if Ukraine got Donbas back in exchange for an agreement to recognize Crimea as Russian territory (the Ukrainian constitution currently states that Crimea is an inalienable part of Ukraine), that would be a compromise.

Expand full comment

Why do you say that Russia blowing up its own pipeline is a legit version while the USA/NATO blowing up the Russian pipeline is a silly conspiracy theory?

Personally, I feel that a lot more mental gymnastics is required to justify Russian self sabotage.

Expand full comment

Russia's motives to blow up the pipelines are perpetuating tens of billions in windfall profits AND, with clean hands, inflicting more pain on average Europeans in order to get them to vote for leaders that are sympathetic to Russia. America didn't have a motive because, according to Nuland, NS2 has been "dead" since March 2022… and NS2 is currently operable.

Expand full comment

Counter-point: gas prices were already very high before the NS2 destruction, and Russia had lost hope of using it to sell gas to Europe any time soon.

A possible American motive is also present: to push Europe to rely completely on American LNG. This both frees Europe from Russia's influence and places it more firmly under US influence. Europe by itself was quite hesitant to cut energy ties with Russia completely, but after the blow-up it had little choice: after all, NS1 could be cut at any moment in the same dramatic way.

Actually, it's even worse, if some random "pro-Ukrainian group" did it. It sends a signal that underwater pipes are no longer safe from even non-state actors, and with the tremendous investment needed to build them, everybody's scrambling to build LNG plants and terminals now.

My take: we won't know the truth any time soon, because no one is interested in it, and even if US did it, there will be enough proxies for plausible deniability. Enough, at least, to dismiss any possible leakers as crazy, or Russian pawns.

Expand full comment

NS2 is operable and so the windfall profits would have continued had Hurricane Ian hit Louisiana and Europe had a harsh winter.

Expand full comment

Only one pipe is (maybe) operable, and that's seemingly by accident. The profits would be high enough in case of harsh winter even if NS wasn't destroyed, and with 3 out of 4 pipes destroyed Russia would be unable to offer a lot of gas to sell anyway. Never mind that sanctions would probably still prevent those sales.

Expand full comment

Lol, no. You very clearly don’t understand how commodity pricing works. GAZPROM MADE RECORD PROFITS IN THE FIRST HALF OF 2022 SUPPLYING LESS GAS!! RECORD PROFITS!! LESS GAS!! Now do the math.

Expand full comment

Whether NS2 is theoretically operable doesn't matter. It's not operating now, has never operated, and is not going to start anytime soon. Turning it on no longer depends just on the will of the Russian government, as used to be the case with NS1.

The rise of gas prices is helpful to Russia only when Russia can sell a lot of gas at those prices and/or promise Europeans lower prices if pro-Russian politicians are elected. This happens when Russia controls the supply of gas and is the bottleneck keeping gas prices high. But with NS1 destroyed that is no longer the case. Russia can neither decrease the prices even if it wanted to, nor credibly claim to be able to do so, thus losing this geopolitical and economic power. And, coincidentally, it's the USA that got huge profits replacing Russia as a supplier as an outcome of the affair.

Expand full comment

Russia (Gazprom) had windfall profits in the first half of 2022 supplying less gas to Europe.

Expand full comment

Russia doesn't need to sell less gas. Its strategy for the hydrocarbon trade is selling the lowest-quality stuff in high quantity.

The desired scenario for Russia has been "Europe stops supporting Ukraine and starts buying a lot of gas". The exact opposite of that happened.

Gazprom profits are nice but not enough to fix the gaping hole in the Russian budget, which grows more and more with each month. And definitely not enough to achieve any of Russia's current political goals.

Expand full comment

Putin is a dumbass…but Gazprom had record profits in the first half of 2022.

Expand full comment

Those are really two separate questions.

Re: why the Americans didn't do it: A) Nord Stream isn't only Russian. It is a joint Russo-German venture, and I am not aware of any incident in which Americans clandestinely destroyed infrastructure of their own allies. B) It would be utterly stupid; while I am not overly impressed with the Biden administration, I think the evidence suggests they are more competent than that. If such an operation were uncovered - and in the US, leaks are a regular occurrence (see also Trump's conversation with Zelensky, which led to impeachment proceedings) - the political fallout would be huge. People might even go to jail. It would also of course lead to the EU drifting slightly away from the US, thus buying less US gas and more Russian gas over the long term.

Re: why the Russians might have done it, I am not convinced they did. I agree they would not have a clear motive. But at least they don't have much to lose (Putin will certainly not be impeached if it turns out he is the one responsible, he will not lose the war because of it, etc.), and they do have a history of stupid clandestine operations, like various murders in Britain, which are imho a big reason why Britain is now so determined to support Ukraine (a cost/benefit analysis fail). Also, as I noted previously, it is possible that they did it while attempting to frame Ukrainians for it. Another possibility is that it was some rogue group in Russia acting without Putin's orders, since the state monopoly on huge quantities of explosives, and on deadly violence in general, is quite a bit weaker in Russia than in the West. See also: the Wagner group, or the murder of Boris Nemtsov next to the Kremlin, according to some reports committed by Chechens without Putin's knowledge.

Edit: I still think there is over 50 % chance that Ukrainians did it, though.

Expand full comment

Honestly, before any evidence my prior would be that both the US and Ukraine are the main beneficiaries of NS blowing up and so are the most likely suspects. Since no clear evidence has appeared so far, that's still my position.

Expand full comment

"If such operation would be uncovered, and in the US, leaks are regular occurrence (see also Trump's conversation with Zelensky, which led to impeachment proceedings), political fallout would be huge. People might even go to jail. "

I think it's pretty clear that the IC has picked a party to support, and anyway accusations of that sort of wrongdoing would be a crazy conspiracy theory, misinformation, disinformation, and/or election interference. Absolutely NOT anything that a WP:RS would report on:

"NPR’s explanation for why they’ve ignored the Hunter Biden story could just as easily be used as a boilerplate for all other reputable outlets, and a model for proper journalism at large."

Expand full comment

NYT and German media reported on the suspicion that NS was blown up by the Ukrainians.

Honestly I think the real reason why the Hunter Biden story hasn't picked up is that no one outside the Republican base cares about it, suppressed or not (I should note I am European).

But important people do care about the destruction of energy infrastructure. Or undersea infrastructure generally.

Also of course Germany and some other European countries do not have population as brainwashed by mutual partisan hatred as the US.

Expand full comment

I think there's at least some difference between non-Ukrainian journos reporting on unsourced speculation about Ukrainians, and the hive of mutually-following twitter users actually admitting the only hope for the US to avoid fascism did something casus belli bad.

Expand full comment

The USA has a long history of doing controversial stuff. Granted, destroying the property of your allies is different from thoroughly spying on your allies, but the times are also different. I think there are at least somewhat plausible scenarios where the USA came to an agreement with Germany, so destroying Nord Stream wasn't actually stepping on their toes. Or they just destroyed it in a way that is hard to conclusively tie to the USA. Yes, the risks are real, but not really that big. The USA can allow itself a lot of things that other countries cannot. Given the choice between a genocidal quasi-fascist kleptocratic regime incompetently waging the bloodiest war in Europe since WW II, and the strongest political and economic power, which is your long-term ally and which also happens to have successfully destroyed the former's joint property in a spec-op, I don't think the EU would prefer not to align itself with the latter. If the situation is revealed, the USA can just pay Germany for the inconvenience.

In any case there is a clear motive for any country currently opposing Russia to destroy Nord Stream, thus effectively solving the coordination problem imposed on them by Russian gas blackmail. I agree that there is a decent chance that Ukraine did it, but I wonder where they would have gotten the required resources on their own, without actually having a fleet. Also, acting on its own in such a manner is riskier for Ukraine than for the USA, as Ukraine depends much, much more on the goodwill of European countries. They can afford to burn some political capital, because between a crazy aggressor unprovokedly attacking a country and its heroic victim being not 100% reasonable in the ways it defends itself, nearly everyone would still side with the victim (thus I find these back-and-forth updates of yours not very justified), and yet this is a risky gambit. If it turned out that Ukraine indeed did it, I would put more than a 50% chance on them having gotten approval from some other relevant power first.

Russia indeed has a history of short-sighted clandestine operations that failed or semi-failed and backfired. But all of them are straightforward and have a clear motive, usually of petty personal revenge. Nemtsov, Skripal, the attempt on Navalny - all fit the type. The destruction of Nord Stream doesn't. It actively harmed Russian interests, removing probably the only geopolitical tool other than nuclear weapons that Russia had. The Russian state's monopoly on violence is a complicated topic: if by the Russian state you mean the de jure political structures, then it's weaker. But de facto all these "rogue" groups are also part of the Russian state; they are allowed to exist only because they are ultimately loyal to Putin. All political and social organisations not directly controlled are suppressed hard in Russia. I don't see Wagner or the Chechens destroying Nord Stream - they would more likely destroy the pipeline going through Ukraine. And I definitely do not see some radical opposition to Putin's regime blowing up Nord Stream, no matter how much I would want that to be true, being myself a Russian opposing Putin's regime.

Expand full comment

Destroying NS (even if the destruction had been complete) didn't end the EU's ability to import Russian energy, though. It somewhat complicates it, but far too little to be worth the political price.

Expand full comment

With the pipeline in place, Germany had no "out" to explain continued support for the war given mounting gas prices. With the pipeline gone, the propaganda writes itself. The US with Germany's blessing seems very plausible.

Expand full comment

Do you think the German population is unaware of the fact that a destroyed pipeline can be rebuilt?

Expand full comment

Yeah, I also think that absent some very hard evidence the self-sabotage theory must be viewed with extreme suspicion.

Expand full comment

Nord Stream: The notion that a 50-ft sailboat could carry six people, their provisions for two weeks, at least two sets of SCUBA gear, an underwater sled, a crane and rig for tending, AND 1,000 pounds of explosives (some of which never detonated) is simply absurd. Nor could a 75-HP engine keep the boat positioned during dives in the open water of the North Sea. For deep dives, you need nitrox, a special mix, which means tanks could not be refilled on the boat. Each tank is good for only 5 to 10 minutes at that depth. Where would they stow all those tanks? Wouldn't you also want a decompression chamber? It makes so much more sense that the demo was performed from a submarine resting on the bottom right next to the pipeline. Navy subs have all the needed gear and specialists highly trained to use it for underwater maintenance.

Expand full comment
founding

Nit: At 80 meters you need trimix, not nitrox. Even more specialized. And if I read the dive tables right, you're talking 4-6 hours of decompression for every dive, even with limited bottom times. So either a decompression chamber, or a *whole lot* of tanks prefilled with trimix hanging below your boat, and everybody dies if they can't quickly find the line beneath your boat and then hang out on it for 4-6 straight hours.

I've been on plenty of dive boats, and I share your skepticism that the "Andromeda" was the support vessel for this operation.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Six people plus provisions and scuba gear on a 50ft sailboat is perfectly feasible? That's not even particularly cramped. Recreational divers on liveaboard vacations routinely have more people on smaller boats. Don't know about the crane and rig, but are you sure they needed that -- I'm sure a bunch of Navy Seal types on a secret mission could figure out a way to make do without those, even in choppy waters.

Nitrox isn't used for deep dives. Quite the opposite: it is used to extend your underwater time for relatively shallow dives, but below 40 meters (130ft) it becomes toxic. Are you perhaps thinking of Trimix? The Nord Stream sabotage happened around 50 meters (160ft) deep, which is still possible for an experienced diver to do with normal air. (Below 60 meters you want to use Trimix.) So they could just refill the tanks on board. (Not sure how feasible it would be to install a Trimix refilling station on a sailboat -- I expect it's possible if you're willing to spend some money. But then you don't have much "we're just a couple of innocent amateur divers" plausible deniability left if investigated.)

A decompression chamber is only needed when things go seriously wrong. Recreational divers don't normally bring one along "just in case", and I would expect people on secret sabotage missions to have at least that same level of risk tolerance.

Not claiming to be a scuba expert, though I've done several dozen dives and I have my PADI Advanced and Nitrox certifications. And I haven't looked in detail into the Nord Stream story so I have no opinion on how plausible the sailboat story is. I just noticed a few things worth pointing out in your arguments.

Expand full comment
Mar 21, 2023·edited Mar 21, 2023

At the risk of sounding hopelessly naive, why would they need divers and all their gear at all? Surely if the pipe was only 160 feet down they could just roughly locate the pipe with sonar, then lower a video camera on a cable to locate the pipe exactly, and finally lower the explosive packages, incorporating strong magnets, to clamp on top of it with a timer set to detonate a few hours later.

Expand full comment

The first difficulty that comes to my mind is station-keeping. The surface of the Baltic is a heaving, mobile surface, and you would need to keep your position relative to the seabed steady to within a meter or less to pull off this kind of thing. I think fancy purpose-built ships used for oil exploration and whatnot can do pretty precise station-keeping like this, using thrusters all over, GPS, and special-purpose software, but it seems well beyond the capacity of a recreational boat, which probably heaves randomly around 20-30 feet at a time relative to the seabed just due to wave action, and anyway is probably traveling at at least a few knots in some direction difficult to know precisely, through its own propulsion, the wind, and currents.

Expand full comment
founding

Among many other things, the Nord Stream pipelines have a thick concrete casing, so magnets won't clamp to it.

A sufficiently capable ROV could substitute for divers in such an operation, but those are much harder to rent without being noticed and they still aren't the thing you can really operate from a 15-meter sailboat.

Expand full comment

It's extremely rare, but there are people who can tolerate 100% oxygen under high pressures (most people cannot, and will get oxygen toxicity). Which I bring up, because the US Navy used to conduct quite extensive tests on their divers to see what they could tolerate, and they use (or at least used, my information is quite out of date) dive tables that are quite a bit different from civilian dive tables - the military dive tables were designed with an expectation of some casualties, where the civilian dive tables are not, and are a lot more limiting. (Although a lot of military instructors and divers used the civilian dive tables for most purposes anyways unofficially, so the practice may have officially shifted in the intervening time.)

So anything coming from civilian information may not apply to the military, and in particular may not apply to special forces, who may have gas mixtures specially tailored to their individual tolerances, and/or may be using different dive tables, and may have capabilities far exceeding civilian divers as a result.

Expand full comment

You're right of course -- the 40 meter limit for Nitrox I mentioned is the official number taught by PADI to amateur divers such as myself, and it undoubtedly includes a very generous safety margin, even before you start talking about individually tailored mixtures and such. But that only strengthens my point that, whether it's true or not, the sailboat story doesn't seem to be as totally implausible as the grandparent post is making it out to be.

Whether they used nitrox or normal air or some special secret mixture, diving down to 50 meters isn't *that* hardcore, and is just a little beyond the official PADI limit for ordinary poorly-trained recreational divers. I've never personally sabotaged an oil pipeline so I don't know how strenuous the work of planting the explosives is, but it seems like this would have been a relatively straightforward mission by navy seal standards.

Expand full comment

North Sea???

Expand full comment

I'm reading _What's Our Problem?_, and I have a couple of points. One is that the relationship between the "animal mind" and group effects is actually complicated. Part of culture is restricting instinctive behavior -- for example, religiously imposed restrictions on sex or eating -- and this is an important part of stabilizing the group.

That's more of a nitpick. The big one might be the assumption that people will be benevolent if they're thinking clearly, but I'm pretty sure good will is a separate thing which needs to be optimized on its own.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Practically everyone in the world wants some kind of approval, or acceptance at least, from others in their circles. Desire for acceptance in some form must be a deep-rooted instinct, even in non-human pack animals. So a person who cares nothing about what anyone else thinks of them is a rare breed indeed.

Benevolence, where it overlaps with generosity, is obviously one way of trying to gain approval, and with that motive it needn't equate to good will but could be more a practical strategy, for partly or primarily selfish reasons.

Expand full comment

The entire distinction between the animal mind and the supposed rational mind seems to be a delusional coping mechanism to explain away our own irrational behavior while pretending that our true self is this mythical wise, rational & beautiful person that seeks truth, is altruistic, etc. In reality, rationality is merely a method we use to achieve our goals, next to not-so-rational things like what we often call 'being horny', but as soon as rationality threatens our goals, we tend to abandon it.

The entire assumption that the main goal of people is benevolence is highly doubtful and even that is a highly generous way to phrase it. The prevalence of bullying is a clear indication that people often prefer creating a nice environment for themselves over being benevolent.

Furthermore, history pretty strongly shows that very high levels of benevolence, where people work 8 hours a day for the benefit of others and deprioritize their own desires, require rewarding people for doing so (aka capitalism).

Expand full comment

On an unrelated note, nice to see you back, Aapje! I used to love your "Dutch fixed expressions". I had to choose eggs for my money and ask ChatGPT about them.

Expand full comment

Even if people evolved to use rationality and/or the appearance of rationality as a social tool, and even if they still mostly use it that way, I think we have evidence from technology that rationality is hooking into something real about the universe some of the time.

Expand full comment

It seems to me people often don’t make a clear distinction between rationality as in intelligent problem solving and rationality as in not being carried away by emotion.

Expand full comment

Looking back at https://astralcodexten.substack.com/p/chilling-effects

Supposing climate change leads to much more erratic weather, what effects on mortality are plausible? It seems to me that people might eventually figure out how to make infrastructure that copes with erratic weather relatively cheaply, but it will take a while, and longer still to actually make existing infrastructure more flexible.

How long does it take people or societies to adapt to a temperature range? Or to lose an adaptation? It seems as though people can forget how to drive in snow in well under a year.

What got me interested in this was hearing that people starved to death sooner than leave gold mining claims, which is related to temperature because cold would make them more vulnerable.

Anyway, I'm interested in whether there were any solid accounts. There are ways of imagining how it could happen like a partner being unable to bring food supplies, but how something could happen isn't the same thing as evidence that it did happen.

This was part of a discussion of how different cultures react differently to gold. For example, North American indigenous people didn't seem to care about it, but South Americans did. Could gold be one of those things where people get excited or not because of the people around them?

Expand full comment

Tentative theory: People who react strongly to gold actually see that shade of yellow more vividly than those who don't. Since some genetic lineages react more strongly to gold than others, this could be tested.

Crazy theory: the only thing I remember from Art Bell's Coast to Coast (a long-form radio show about weird stuff) was the idea that the human race was shaped by aliens for gold-mining.

Expand full comment

I recently subscribed to Asterisk (http://asteriskmag.com) and got my copies of the first two issues in the mail. I love them, can highly recommend! It’s like the best parts of the broader LW blogosphere, with really fantastic typesetting and illustration.

Wow, doesn’t that sure sound like an ad? Well, it’s not, and I’m just an enthused person on the internet that’s happy to have physical media in her hands again.

p.s. whoever chose the color scheme for Kelsey Piper’s review of ‘What We Owe the Future’, were you eating mint chocolate chip ice cream at the time? Because now I really want mint chocolate chip ice cream.

Expand full comment

Wait, then why did I only get the first one in the mail?

Expand full comment

I second this! The print quality is very good. I thought “Feeding the World Without Sunlight” was very interesting, but I forgot what it covers because I have a short context length.

Expand full comment

Oh, and Asterisk is published out of Berkeley, now I’m reminded I want to go to Tara’s again. Time for a pilgrimage to the East Bay!

Expand full comment
Comment deleted
Expand full comment

That’s, uh, a rather extreme reaction of disinterest to take in a publication because of one contributor’s affiliations? Idk how made up your mind truly is, but I’d suggest reevaluating it as the magazine is quite good.

Expand full comment

It's about what I'd think if a magazine, say, hired David Frum to write political commentary for them. It says something about the publisher's capacity for good judgement.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Given that LLMs seem to be extremely effective at turning instructions in English into (mostly) functional computer code, it seems to me that there is a niche available for a new high-level, syntactically simple programming language that would be easy to read and debug. This would be a language that could let non-programmers get things done by interacting directly with an LLM to write it, but still give them sufficient control to verify that their instructions had been interpreted correctly.

Do such languages exist already?

Expand full comment

Python and Visual Basic are both very syntactically simple, but you still have to understand programming concepts to use them.

Expand full comment

> it seems to me that there is a niche available for a new high level, syntactically simple programming language that would be easy to read and debug

Funny how that keeps happening every decade or two. Syntax looks scary to novices but you quickly learn it and find out that the hard part of programming is semantics.

Expand full comment

LLMs suck at being compilers. For a start, a compiler should have a fixed meaning for every valid piece of code in the language it accepts; it doesn't have to be a simple meaning, it doesn't even have to be a meaning that makes sense to humans, but it should at minimum be fixed. You can't write the same exact code two times and see it do a completely different thing each time, but LLMs do that all the time. They are also masters of sweeping shit under the rug and pretending there is nothing wrong with obviously wrong input/output; that's another big no-no when you make a piece of critical programming infrastructure like a compiler. There is also the problem of OpenAI or some other $BIG_CO HR department deciding that since binary gender is offensive your compiler should refuse to compile programs that store gender in a 2-state variable. I have never seen any compiler/interpreter/editor be offended by the content of my code no matter how shitty it is, but LLMs do it all the time (although, to be entirely fair, that's not the LLMs' fault, just a consequence of how they are deployed and monopolized today).

Anyway, Programming Languages don't advance by being closer to Natural Languages, Natural Languages are terrible programming languages. If you want to see "Programming in Natural Language", look at Law and Legal Codes. I'm fascinated by Law and Legal Codes, because they are centuries-old counter-examples to the dreamy idea that we should all just state our needs and wants in plain English and then - S U R E L Y - people will completely and unambiguously understand us. Programming Languages advancements seem to come from **Restriction**, not Tolerance. Good Programming Languages are controlling liars, they construct an illusion so deep and convincing that you are tricked into programming a simpler machine than the actual computer you're running on, indeed the illusion is so complete that you have no choice but to program this simpler machine.

Examples of Advances in Programming Languages design include :

1- When they forced you to use standard control flow statements like While and For and If, instead of the ad-hoc goto construct that is actually how the computer works at a fundamental level. A goto is literally just that, an instruction for the computer to "Start executing from the instruction at that address", where "that address" is just a number that can indicate any instruction in the whole program, just like in cooking recipes when you see "go to step 3 again". This is extremely powerful, this is what makes a computer a computer and not just a simple terminating calculator, but it's also extremely hard to understand and reason about in your teeny tiny human head meat. In the late 1960s and the early 1970s some people started advocating for "Structured Programming": instead of telling the computer "start executing the instruction at that address", you instead tell the computer "While this condition is satisfied, continue executing this block of code", where "this block of code" is a bunch of instructions wrapped together in a curly-braces-surrounded block like this: While {...}. Believe it or not, this is exactly equivalent to gotos, but so extremely, vastly easier to understand and reason about and nest inside each other (see the sketch after this list).

2- When they started forcing scope rules on you. "Scope" is the area of your program text where you are allowed to reference a name that you just introduced. This has no straightforward equivalent in Natural Language: I can start referencing any name or noun or proper noun I damn well please anywhere in my writing and nobody can tell me I'm using Language wrong. This doesn't fly with Programming Languages; with them, if you introduce a name (say x = 1, you introduced x and said it's equal to 1), there are very elaborate and restrictive rules about where this name is valid, where you can just mention it and expect the compiler to understand you. This is also not how a computer works at a fundamental level: there are no human-readable names at the computer level, every single piece of data stored in a computer is represented as a number inside the vast bucket of numbers that is the RAM and given a numerical address, and every instruction executing on the computer can reference any address it damn well pleases.
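A toy Python sketch of the goto/While equivalence in point 1 (my own illustration; Python has no real goto, so the "goto" version is simulated with an explicit program counter jumping between numbered steps):

# Toy sketch: the same loop written "goto-style" (an explicit program counter
# jumping between numbered steps) and as a structured while loop.
def sum_goto_style(n):
    total, i, step = 0, 0, 0
    while True:
        if step == 0:                  # step 0: check the condition
            step = 2 if i >= n else 1  # "go to step 2" or "go to step 1"
        elif step == 1:                # step 1: do the work, then jump back
            total += i
            i += 1
            step = 0                   # "go to step 0 again"
        else:                          # step 2: done
            return total
def sum_structured(n):
    total, i = 0, 0
    while i < n:  # same control flow, but the jump targets are implicit
        total += i
        i += 1
    return total
assert sum_goto_style(5) == sum_structured(5) == 10

(And a flavor of point 2: `total` and `i` above exist only inside each function; referencing them outside would raise a NameError.)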

And on and on. The common theme is that each breakthrough idea in Programming Language design is simultaneously (1) Not how computers actually work on a fundamental level (2) Not how Natural Languages usually work in most contexts. Programming Languages are very elaborate puzzles/games where the rules are simultaneously unrealistic but intuitive or at least learnable. People already have the option to describe their problems in Natural Language, it's called talking to a human programmer.

You can even combine the two ideas of "Talking to a human" and "Make a new language" by creating a DSL: a Domain Specific Language. Talk to a compiler programmer to make a special language for you to describe your very specific kinds of problems in; the programmer will take care of writing a compiler or an interpreter for that language that executes it (or generates code that executes it). This is a very old idea with reasonably successful applications: https://www.infoq.com/articles/why-dsl-collection-anecdotes/. No need to put a flaky text bullshitter on top of it (I get that you want to automate the "human programmer" part and get a system where the user invents whatever language they want on the fly and the LLM automatically understands it, but LLMs are so bad at reasoning like an ordinary human, let alone a compiler writer, that this will not happen).

I do think that LLMs are teaser trailers for a future of smart assistants discussed countless times in sci-fi literature; they augment your natural intelligence by being an intelligence without consciousness, a raw problem-solving essence without any values or direction (those come from the user). That would be extremely revolutionary in programming as everywhere else, but it is also extremely far from reality and the actual concrete abilities of LLMs.

Expand full comment

Inform 7 is a language for making Interactive Fiction Games. The code sounds like sentences in English. Here is an example of code from a game:

The iron-barred gate is a door. "An iron-barred gate leads [gate direction]." It is north of the Drawbridge and south of the Entrance Hall. It is closed and openable. Before entering the castle, try entering the gate instead. Before going inside in the Drawbridge, try going north instead. Understand "door" as the gate.

https://ganelson.github.io/inform-website/

Expand full comment

I am not aware of such language.

Frankly, I am not sure what a non-programmer would do with "mostly functional computer code". Probably pay a programmer to fix it.

Expand full comment

My wife and I restrict our children's screen time. But then we go and spend every spare minute in front of our phones.

Is there a good justification for holding children to a different standard than ourselves? They will likely grow up to spend their working lives and half their leisure time in front of a screen just like us. So is restricting them during childhood kind of arbitrary?

The same question could apply to many other restrictions we place on kids. So in general, what is the justification?

Expand full comment

"Is there a good justification for holding children to a different standard than ourselves?"

Yes. I think the best justification is that some things are very bad for you when you are developing and less bad for you when you are fully grown. This is also why we don't give children caffeine, alcohol, nicotine, THC, or porn.

I personally think that developing the ability to sit down for long periods of time and read a book as a child was really vital for me. I can lose the ability to concentrate on books if I overdose on screen time, but I remember what it was like and can work my way back to that state with effort. If someone never developed the ability to sit still and do deep reading or concentrate deeply on something because their attention span was wrecked from childhood, I can see that having deleterious effects in adulthood that are more severe than they'd otherwise be.

Expand full comment

There's an anecdote I read about Gandhi once. Apparently a woman came to him asking him to talk to her son about not eating candy and sweets. Gandhi said "I can't talk to him now, but bring him to me in a week and I'll talk to him." A week later when she brought her son, Gandhi gave him a lecture on not eating sugar. The mother thanked Gandhi and asked him why she had to wait a week. Gandhi replied "Because a week ago I was still eating sugar."

I am dealing with a similar situation to yours. Recently I had to bite the bullet and commit to never being on my phone when I'm with the kids. It's hard! Really hard! Still, it's better for my kids. And I'm back to reading a lot more books, which is something I'd been meaning to do.

I would recommend trying it for two weeks. You can do anything for just two weeks, right? Give it a try, you might find that while it's hard it's not impossible.

Expand full comment

We generally suck at doing things that are good for us. Luckily we sometimes have people who help us, like parents.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Something I have a lot of thoughts about:

1) I was born in 1981 and spent A LOT of time reading as a kid, and playing sports. But what I probably spent the most amount of time doing was playing videogames. And that was IMO by far the most professionally useful time I spent in the long run. Granted I was mostly playing adult strategy oriented games, not shooters, I loved MOO2, and SimCity, and Civ2.

Granted, at the time, getting the games to work involved a bit more effort and working knowledge of a computer than today. You sometimes needed to use DOS Shell, etc. But the basic computer/problem-solving/programming skills I picked up gaming proved to be super useful in my professional life.

Being well read has also been super useful, but no one would contest that...

2) The good justification is "I don't have a dad." If I had a dad in my house, even now, as a 41-year-old with a successful business, a happy marriage, two kids and a variety of projects, he would absolutely look at my life and say "hey, if you spent only 30 hours a week gaming instead of 50, slept better, and spent more time with your kids, you would be happier." AND HE WOULD BE RIGHT. But I don't have a "dad" who can make me do things. Luckily for my kids, they do. Parents are great; everyone needs some help doing the right thing sometimes!

3) So what I do is channel their screen time (and do place caps). My 9-year-old gets a max of 4 hours on his desktop a day. He doesn't get a laptop, and the desktop is in the dining room so we can all observe him. He plays games I buy him, and I only buy him games I think have some educational/decision-making component that is mentally rewarding, say grand strategy games or puzzle games or 4X or builders/simulations. No Deathmatch3000. He can also watch YouTube, but only channels I approve of. If it is a rainy day and/or he is sick, maybe he can get more time if I think it is justified, but most days I would bet he uses 1-3 hours and that seems fine to me. He also plays sports 2-5 days a week, reads 30-60 minutes a day, and plays outside semi-regularly. There is almost no "TV" at our house, though we do let them watch Netflix shows.

4) So the bottom-line justification is just consequentialist. It is better for them. Don't we all wish we had a big force in our lives that could let us know when we are going "off the rails"? In some sense that is one of the main features of a good spouse: you kind of co-parent each other a bit. I mean I (like a lot of us here) am a bit of a narcissist, but even I admit I can use some outside controls on my decision making.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Something I think previous generations (I was the baby of siblings born across the 60s) can neither defend nor entirely reject is the way in which, as kids, we lived very separately from our families much of the time. From a very small age! To be away from them, to get out of the house until past dark, to explore the physical world - and when home, through books - was the goal for many of us. I guess video games and YouTube might be the same as the latter, but it is hard to accept that the former impulse could be completely lost, that it was evidently without any benefit, just a mid-century historical anomaly. My home was "troubled," as one used to say back then, but I do not think that was the whole explanation, though I guess I can't be sure. Nowadays, though, when I see kids with their parents on a family bike ride, I toggle between "how nice"/"what I would have given" ... (though one's parents/siblings would have had to be so, so different, so: you as well) and a sinking heart, for those kids. But why? I can't say.

ETA: your recital of your kid's day would have been unimaginable, basically. My parents neither knew nor cared, and would not have been judged for it.

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Yeah, even growing up in the 80s-90s I spent a huge amount of time out of the house playing in the woods or playing sports. My kids do a similar amount of sports, but it is now almost 100% organized instead of roughly 50/50, and the playing-in-the-woods time is much smaller (though I also had a 2x2-mile woods behind the housing project I lived in; where we live now, on a 1/16th-acre lot in a street grid, our 30x30-foot "woods" is the biggest for blocks).

Expand full comment

It's reasonably likely that some things are more harmful at younger ages. Children don't have adult minds, adult learning, etc. There are things they can learn trivially, that adults rarely can, and others that they can't learn at all until the ability to learn them opens up. (Also, of course, it makes little sense to use "children" as a blanket term in that context. A 2 year old may be as different from a 6 year old, in these ways, as a teen from a 30-year old, or even more.)

They also don't have adult bodies. Their vision might be shaped in unwanted ways if they don't spend significant time daily looking at things at a distance, whereas an adult's might not be affected. Marijuana is known to have higher risks under some age that I don't recall currently, except it's more than 21. IIRC excess salt is known to be more harmful to infants than to other age groups. My grandmother believed caffeine was harmful to young children, in a way it wasn't to adults.

Finally, they don't have life experience, perspective, or mature judgment. Decisions that seem reasonable to them won't seem reasonable to most adults, or generally to themselves once they are older. Arguably "informed consent" decision making isn't possible without levels of cognitive capability that certainly aren't available to infants, and may not be available to teens or even twenty-somethings.

They also have more to lose. Suppose excess screen time reliably produces myopia, prohibiting careers like "test pilot" even if corrected. You've already made decisions that preclude you becoming a test pilot, and in any case have no such ambition. Your children haven't closed that door yet... unless they wreck their vision. And their wrecked vision will be with them longer, than if an adult does the same thing to their own body.

I'm sure there are more, but those seem like the main ones to me.

Expand full comment

I saw a man taking a walk in the shade of mesquite trees along a parkway. His face and full attention were fixed on a little electronic gadget he held in his hand. The noisy, horny birds chattering in the trees ignored him.

Expand full comment
founding

Maybe you should restrict your own screen time too.

Expand full comment

I think one important thing is to ensure that your children have enrichment opportunities outside of electronic entertainment.

I grew up on a farm; I could go out and explore the woods if I wanted, and this was decent competition for electronic entertainment I had available. If I grew up in a city, or a suburban environment in which I was limited to a very small area, I think I would have spent a lot more time on electronic devices.

On the other hand, I grew up on a farm, and the vast majority of my social experiences came from the internet; as I grew older and more interested in social interactions, the computer became more and more important to me, where, I think, it would have gotten less important had I lived in an environment where in-person social opportunities were more readily available.

(I did not find school to be much of a social environment; it was far too controlling, and I lived too far away to be able to afford to miss the bus.)

Expand full comment

If you and your wife eat nothing but donuts, would consistency mandate that you similarly feed your children nothing else? You should try to do the best for your children, whether or not you do the best for yourself.

The point about them growing up spending time in front of a screen is a distinct issue from the supposed issue of "hypocrisy."

As to the latter, it depends on what you think the point of avoiding screen time is. If you think, for example, that it has particular benefits earlier in a child's development, then it would make sense to limit it for kids even if they won't limit it for themselves later.

Expand full comment

While I suggested cutting back on phone screen time downthread, I agree that if you can't do that then don't worry about the hypocrisy. Do the best you can for your kids even if you can't follow the advice yourself.

Expand full comment

We also have two little kids (2 and 4), and try to restrict screen time. Their brains are still developing, and there's good evidence that steering kids toward tactile activities and the kind of play that develops fine and gross motor skills is important at this age. This isn't to say we don't watch a lot of Cocomelon, especially before 8am, but we do feel justified in turning the TV off after a while despite loud protests.

We're doing our best to develop both physical limits and general rules around technology, since the pervasive nature of screens seems to have really caught the last generation of parents off guard. So at this early stage, we're trying a limits-and-etiquette approach: no devices in bed or at the table. People first, screens second. My husband and I try to live by this, too. But we already know that we can't just play with our phones or ignore people who try to talk to us while we're watching TV. Like most etiquette, this has to be learned, and the kids are still learning it. So until that point is crystal clear we're totally cool with unplugging the TV if she hides the remote to keep us from turning it off.

It's for everyone's benefit; our 4-year-old has already noticed that watching too much TV turns her grumpy and uncooperative (she's surprisingly self-aware)! I'm not sure this has actually translated into her self-limiting her screen time, but she seems to be doing that somewhat and she's only 4, so I'm hoping our pushing back on screens and trying to ingrain decent manners re: technology is building a foundation for future good habits.

Expand full comment

Emily Oster has recommended the "Techno Sapiens" Substack which writes about our interaction with technology, especially social media, and has several articles about screen time and young children. I've found it to be a pretty reasonable, and data-informed source on the topic:

https://technosapiens.substack.com/archive

Expand full comment

It would likely be much healthier for you and your wife to also limit screen time. Kids notice when parents do things that they tell the kids not to. You are running the risk of reverse psychology getting your kids more interested in screen time.

Screen time is also probably bad for you directly. Go spend some face to face time with your kids and other people. It's healthier for you and for them.

Expand full comment

Rude, and missed the point. Fwiw, "every spare minute" here means "the minutes I'm not engaged with my children or attending to other important parts of my life", i.e. after they've gone to bed and the house is clean. There was no implication that we're ignoring them in favour of our phones.

Expand full comment

Sorry, it was not my intention to be rude. I've been reducing my screen time (especially social media) myself. Partly from noticing my kids watching me on my phone, partly because I felt like it was taking too much time from better activities.

I feel significantly better with less time on social media, and somewhat better with less time in front of screens generally.

Expand full comment

My apologies if I was overly defensive.

It can be a struggle. I've deleted all social media from my phone and that works really well for me.

Expand full comment

... and your restrictions are more nuanced, because you're capable of it?

Expand full comment

So, you *do* restrict your time...

Expand full comment

What's rude about it? Their point is not contingent on your ignoring kids.

Expand full comment

While there are good justifications why it's useful to restrict a child where you don't restrict yourself, children usually do not appreciate them. And this is a problem if you want to create a rule that they follow, no matter what, and not a rule that they follow only when you see and can punish them.

The best course is, of course, to apply the same rule to yourself. Think about children of smoking parents being more likely to smoke even if their parents tell them that smoking is bad. But if this is not an option, do your best to explain the reasoning to them: that children lack the ability to properly control their behaviour and thus need restrictions and parental control to develop it, and that it's not just a matter of authority, but of the actual development of functions of their brains.

Expand full comment

There is some justification, that kids are much more impressionable than adults. But also, people want better for their kids. I would consider that maybe you should hold yourself to the same standard as your kids, rather than vice versa.

Expand full comment

A good perspective

Expand full comment

I don't see double standards as inherently bad. Sometimes I drink alcohol; doesn't mean I should allow my little kids to do the same in the name of "fairness". Adults and children are different: they have different abilities and different responsibilities. Regulating your screen time is good. Regulating your kids' screen time is good. Some reasons are the same, some are different.

The most dangerous thing about computers/smartphones is *notifications*. They make the difference between "you only use it when you use it, and when you turn it off it remains turned off" and "the application actively interferes with your life and drains your willpower all day long". This is something that everyone should avoid. Even adult people become zombified when they install a messenger app on their smartphone. It is no longer you using the app, it is the app using you.

Compared to that, regular use of a computer, even excessive use, is relatively harmless. In my childhood, I sometimes spent an entire weekend playing a (single-player, offline) game, but the difference was that when I completed the game, it was over. I could return to normal life. It was not too different from reading a book.

Then there are health reasons for limiting screen time. Too much sitting is bad for your health. Regardless of whether you sit by a computer, in front of a TV screen, or with a book. (Although, with the book, you can keep changing your position, so it is better.)

Also, importantly, what are the kids doing with the computer? Four hours spent playing a first-person shooter is quite a different experience from e.g. watching a cartoon, learning an online lesson, and then playing with some editor, even if together it also takes four hours. (I am saying this because I often hear people worrying about screen *time*, and rarely about the contents of the screen. Ironically, it is often the parents who fear the computers most, whose kids do the most stupid stuff, simply because they do not know that better ways of using the computer exist.)

Expand full comment

"The most dangerous thing about computers/smartphones is *notifications*. They make the difference between "you only use it when you use it, and when you turn it off it remains turned off" and "the application actively interferes with your life and drains your willpower all day long". This is something that everyone should avoid. Even adult people become zombified when they install a messenger app on their smartphone. It is no longer you using the app, it is the app using you."

Couldn't agree more. So destructive, and your life is much better when you can learn to tune them out.

Expand full comment

Agreed on most points. I've found that I prefer having notifications enabled for messaging and email, but minimising potential sources of emails and messages and other notifications as much as possible. Otherwise you Skinner Box yourself.

Expand full comment

I wish there were some kind of "important" flag that people would actually use correctly, and only the "important" messages would make notifications. Which will probably never happen, because people (and online platforms) do not have an incentive to save other people's time.

Expand full comment

I agree with Carl. But what I'd really want to know is how old they are and how much you restrict their screen time, for self-calibration reasons.

Expand full comment

With the caveat that I'm looking to discuss a general moral principle, not inviting feedback on my specific parenting decisions - they're in preschool and have around 1-2 hours per day of TV/games. I don't count screen time where they're engaged in an activity with me - exploring Google maps for example. Pretty standard I think.

Expand full comment

Thanks. To be clear I did not mean to give feedback on your parenting. It's just I'm starting to have to grapple with these issues myself and I find it useful to know what other people are doing!

Expand full comment

I figured, all good!

I also try to engage with what they're watching - talk with them about what's happening, sing the songs with them, play imaginative games as their favourite characters afterwards. I figure it turns it from a distraction activity into a bonding one.

But I'm just winging it, like everyone else. I'm sure you'll do great!

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Children are not like adults. They don't think the same way, they don't have the same emotional tendencies, they are vulnerable in different ways, and unaffected by some things that matter a great deal to adults. They have abilities we lack (like phenomenal memory) but lack abilities we take for granted (like executive function). Treating them just like miniature adults merely with a lack of experience is a grave mistake, in my experience, and will stress the hell out of them.

Expand full comment

This is true and a good answer. I'm not sure it's the reasoning that most parents would give though. I expect answers like "they are still developing and need to grow up well-rounded". But being well-rounded should be a goal for everybody, not just kids.

Expand full comment

This is correct, but.

Children don't know any of that, and they can be very sensitive to double standards. If you restrict them and don't restrict yourself, you either need to have a really good explanation for that, which they should be able to understand, or you would be teaching them that double standards are acceptable.

The best solution here would be to restrict your own screen time, not only for the above reason, but also because spending all of your spare minutes in front of a phone screen is harmful in many different ways.

But it is easier said than done.

Expand full comment

I spent years stressing about keeping my phone usage in check, and I'm actually pretty happy with where it's at now ("every spare minute" isn't as much as it sounds when you're a parent)

A generous justification for the double standard would be that I, for much of my adult life, would have probably welcomed having a hard limit on my screen time.

I do agree with your points about double standards

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

Pfft. Double standards are the norm in life, not the exception, and if you are attempting to teach your children at a young age to expect otherwise they are going to figure out very quickly (from their own experience with the rest of the world) that you are either deluded or lying to them. So I'd say that particular goal should be about #1,023rd on the parent's To Do list. You can have a stimulating intellectual discussion with them when they turn 15 in which you both talk about how you feel the world ought to work, including whatever universal standard to which all men should be held might be imagined. That's about the right time to dig deep into the issue of double standards. When they are younger there are far more important lessons to teach, and adaptive behavior to model.

If the real problem is that Parent X does not agree with Parent Y about whether or how much the kids should be kept from the screen, the real reasons for that disagreement should be uncovered and discussed, and the discussion should be entirely focussed on what is good or not good for the child, based on the child's actual nature, and the influence on that nature of the screen time. A focus on eliminating distinctions feared to be hypocritical or invidious between adult and child discipline (self- or parent imposed) is at best a distraction, or red herring, and at worst an attempt to avoid honest discussion of the issue with the other parent.

If the secondary problem is that Parent X thinks *Parent Y* is spending too damn much time scrolling on the phone, then *that* should be discussed honestly between the adults, without stooping to emotional threats like "you're being a bad influence on the children." Children should never be used as tools in adult disputes -- although, alas, that is all too common.

Expand full comment

I most likely will not pursue a PhD, but let's say I wanted to: what field is still worth a 4-6 year deep dive, given that the world (and the field) will be radically different and AI will likely have surpassed most experts by then?

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

The purpose of a PhD is rarely merely to acquire additional information about a field. As a rule, it is to learn how to make original contributions to the field, by learning how to uncover new information and solve new problems.

For example, if you get a PhD in physics, you would only spend a year or two actually learning advanced material. The bulk of your time would be spent[1] studying how to pick out a good problem to tackle, studying how to research what's already known and might be useful in tackling the problem, learning how to design appropriate experiments and then carry them out (and how to adapt when they don't go quite as you expect), and learning how to analyze the results and communicate them to other people.

The expectation is that by learning how to use the tools of the trade, you learn how to be a PI ("Principal Investigator") yourself, so that for the rest of your career, you can tackle one problem after another with some reasonable degree of success. Of course, you won't be studying the same problems in your 40s you were studying in your 20s, but *how* you attack them will be very similar indeed -- because how in general we tackle such problems -- principles of empiricism, reproducibility, objectivity, mathematical precision et cetera -- in 2023 doesn't differ from how they were tackled in 1923, or for that matter 1623.

If you are considering a PhD just to learn more about a field, you should not do it. Just get some books and start reading, or take a course or two or three, and meanwhile go get a job doing something that interests you. You should only get a PhD if you want to learn how to be a worker in a field, someone who makes a living trying to advance the boundary of knowledge in that field.

------------------

[1] I speak of the ideal situation, of course. An actual real PhD experience will not come all the way up to the ideal standard, because nothing in life does, and it is unfortunately sometimes true that it falls well below the ideal.

Expand full comment

Remember that substantial chunks of probability space are screened off from relevance via "in that world we are all dead" and/or "in that world immortal utopia and it's probably not a big loss".

Like, as I see it if neural-net AGI is widely deployed, this almost certainly means X-catastrophe. So, if a given life choice won't affect whether this comes to pass, you can assume neural-net AGI is not widely deployed (because it's banned, because it's impossible, etc.); in the other worlds, you're either dead or doomed within a few years and it won't matter very much.

Or, if somehow a miracle occurs and we get friendly AGI, it probably doesn't matter very much what skills you have; everyone is unemployable and everyone is fine.

(Of course, this doesn't apply to anything which can affect P(Doom) and P(Utopia); that's upstream of the screening states.)

Expand full comment

I'd argue the more rapidly the field is expected to change the more reason to do a PhD in it. It makes it more likely you'll come out of the PhD with key skills everyone needs and few people have.

Expand full comment

Scott, any plans to weigh in on the "technology is making young people depressed" discourse?

Expand full comment
author

I didn't have any objections to Haidt's latest essay, although I also think Haidt proved a very limited point, and if he could have proved a more general one he would have.

Expand full comment

I was recently rereading "The Categories Were Made For Man, Not Man For The Categories" and I was wondering if there are any commonly-spoken languages that didn't go the phylogenetic route with regard to group-name boundaries. Like if in German most of our mammals were Säugetiere and most of our fish were Fische, but they drew the boundaries differently and whales were actually Fische. Does anyone know of a language like this?

Expand full comment

In Modern Standard Arabic (i.e. newscaster Arabic), "hawt" is whale, but in some of North Africa, "hawt" or "huut" can describe a normal, mid-sized fish that people eat. This surprised me, maybe because of the subconscious phylogenetic instinct you mentioned!

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

I mean, all sorts of languages are wrong about all sorts of groupings. Sometimes they get changed and things renamed, and sometimes not, even though scientists know our current understanding of the "real" grouping. Language doesn't have some special access to the truth, nor would you expect it to.

Expand full comment

No, but on a related point, you would probably enjoy reading "Why Fish Don't Exist" (a relatively recent book by Lulu Miller).

Expand full comment

The Bible counts a bat as a type of bird.

Doesn't the German word "Affe" mean both ape and monkey?

Expand full comment

Screw it, _I_ never remember the difference between ape and monkey.

Expand full comment

And therein lies a tail.

Expand full comment

Yes, can confirm. German does not discriminate between apes and monkeys.

There is the term "Menschenaffe" (literally man-monkey) for apes, but that would be just a subcategory of "Affe", and there is no term for the English concept of monkey.

Expand full comment

It's the same in Dutch.

Expand full comment

And in Polish.

Expand full comment

Not about species, but two anecdotes:

- When I moved from Germany to Switzerland I got really confused in the supermarket: in Germany there are several types of sour cream, depending on the fat content (saure Sahne, Schmand, crème fraîche). In German-speaking Switzerland there are also different types of sour cream (Halbrahm, Vollrahm), but the system is completely incomparable to the German one.

- Most languages describe the future as something in front of us, and the past as something that lies behind us. But some native tribes use exactly the opposite association. Which makes sense, because you can see the things ahead of you, just as you can see the past. And you are blind to the things behind you, just as you are to the future.

And then there is this one weird mountain forest tribe which associates future and past with "uphill" and "downhill". (I have forgotten which is which.)

Expand full comment

> - Most languages describe the future as something in front of us, and the past as something that lies behind us. But some native tribes use exactly the opposite association.

English's basic words do that too, though English culture does not think of it that way. 'Before' means in the past, and in front of us (before our eyes), and 'after' means in the future, and further behind (more aft).

Commonly the metaphor is not so much 'what you can see', as it is 'time is a race' ("past" is the same word as "passed", as in the moment has passed; the future being yet to come fits here as well).

Expand full comment

Then there's "let's move the meeting forward/back", whose meaning seems to vary even among native English speakers. ("Move it forward" can mean move it forward in time, into the future, or move it closer to us, backwards in time, towards the past.)

Expand full comment

Slightly OT: I find "pull forward" & "push back" generally avoids that particular confusion.

Expand full comment

And then there's "move the meeting up", which I _think_ always means earlier, but there's no "move the meeting down".

Expand full comment

The English word "fish" does not refer to a monophyletic taxon.

See further:

https://en.wikipedia.org/wiki/Paraphyly#Examples

Expand full comment
deleted Mar 20, 2023·edited Mar 20, 2023
Comment deleted
Expand full comment

In the same vein, European Portuguese distinguishes two types of pineapple, more or less arbitrarily but roughly correlating with sweetness and leaf size. Most people have trouble distinguishing them but will get really angry if they think they've bought one instead of the other.

Expand full comment

No question to ask. Just sharing something I wrote earlier this month.

https://open.substack.com/pub/pythagorean/p/the-most-beautiful-mathematical-problem

Expand full comment

In regards to gardening, is there any way to predict how well a particular plant will respond to some hormone like rooting powder?

Expand full comment

Trial and error.

Expand full comment

That response ranks just slightly below "google it" in terms of helpfulness.

Expand full comment

It is a serious response. It is what I would do, I assume you already googled.

Expand full comment

You should put in big fat 20-point font at the top of the article, "I AM NOT SAYING THAT SLOWING DOWN AI PROGRESS IS A BAD THING, I AM MERELY EXPLAINING THE STATE OF THE DISCUSSION UP TO NOW." Maybe then people would get it.

Expand full comment
Comment deleted
Expand full comment

Meaningless statement unless you think the risk of doom is zero (or negligibly small). How much of a reduction in x-risk is worth a 1% slowing of AI progress, for example?

Expand full comment

What exactly caused the recent boom in LLMs and image-generation AI? Is it a new architecture (transformers or something), or did we just finally hit the scale required for it to kick in? Or is it just non-big-tech players who can actually execute finally getting in the game?

Also, does anyone know of a really good explanation of how transformers work? I keep seeing the diagram but it doesn't help me grok it.

Expand full comment

I remember a few years ago, when transformers were first getting attention (no pun intended), reading about how they were essentially the general case of which convolutional neural networks (CNNs, for computer vision (CV)) and (I think) long short-term memory networks (LSTMs, for natural language processing (NLP)) are both special cases.

Analogous to how there was a ton of experimentation in CNN architectures for CV until the ImageNet-era competitions converged on a (near-?)optimal one that everybody just riffs on, I gather that there's been some search for an optimal transformer architecture (aided by absurd amounts of compute) that's recently-ish borne fruit.
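For what it's worth, the core operation is small enough to write out. Here's a minimal sketch of scaled dot-product self-attention in NumPy (toy dimensions and random weights, no multi-head splitting, masking, layer norm, or MLP blocks), just to show the shape of the computation:

import numpy as np

def self_attention(x, Wq, Wk, Wv):
    # x has shape (sequence_length, d_model); each row is one token's vector.
    q = x @ Wq                                  # queries
    k = x @ Wk                                  # keys
    v = x @ Wv                                  # values
    scores = q @ k.T / np.sqrt(k.shape[-1])     # similarity of every token to every other token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax over the sequence
    return weights @ v                          # each output row is a weighted mix of the values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                     # 4 tokens, model dimension 8
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)      # (4, 8)

Everything else in a transformer (positional encodings, causal masking, stacking many of these layers with MLPs in between) is scaffolding around that weighted-mixing step.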

Expand full comment
Mar 20, 2023·edited Mar 20, 2023

The big-picture reason, I think, is that the transformer architecture is only about five years old. It seems to take more than a year to organise and do a training run (at least up until now; GPT-4 has a data cutoff of Sep 2021 but was only just released, so presumably they'll get quicker), so the iteration cycle has been relatively slow. It's taken that time to do the obvious incremental improvements, build infrastructure, and wait for compute.

Second reason is RLHF (Reinforcement Learning from Human Feedback): InstructGPT is barely a year old, and the first consumer-accessible product using it was ChatGPT at launch. So that is a genuinely new technique. It also takes extra time, since you have to wait for / pay / build a userbase of humans to provide the feedback.

Third reason: images seem to have got "good enough" (DALL-E) around the same time as LLMs (GPT-3.5), which was last summer. The additional change there is open-sourcing: Stable Diffusion's release gave everyone access to the raw models, which created all the innovation from there, such as ControlNet.

TL;DR: it's transformers; they reached a critical threshold of iterative improvements last summer, where image and text generation got good enough to astound most people.

Expand full comment

Depending on your level of ML experience, this tutorial, in which you write your own transformer, might be useful https://youtu.be/kCc8FmEb1nY

The architecture doesn't really provide many clues about why they seem to work so well though. That is a bit of a mystery. To me anyway.

Expand full comment

Mainly computing power. The basic ideas for deep learning have been around for a long time, but we just couldn't get the computing power to make them work. Now of course there is also a lot of clever software involved, but the main blocker was computing power.

Expand full comment
author

Someone mentioned Yeshivas Ner Yisroel on the Classified Thread. I looked it up on Wikipedia and found this sentence:

"Although Ner Israel's mission statement makes clear its priority is religious studies, the yeshiva's alumni have been estimated as 50% rabbis and religious-school teachers, and 50% as professionals: bankers, accountants, physicians, attorneys, psychologists, etc."

People who know more about this - how often do people get hired straight out of yeshiva for a professional job? I already knew Goldman Sachs would hire philosophy majors with minimal financial knowledge; do they also hire yeshiva students who have only ever studied Jewish texts? Asking because it seems like an interesting "signaling theory of education" question.

Expand full comment

Many people take additional classes at the same time. For instance, they currently have a two-year dual-degree program with Johns Hopkins for finance.

Anecdotally, most people I know from there who landed professional jobs had another degree.

Expand full comment

Ner Israel is aiming for the level of a lower-tier college degree (say, the equivalent of a Cal State rather than a University of California), with a relevant minor in some field. So typically students will do several night classes in the relevant fields to actually learn the material.

Their targets are not Goldman Sachs etc, but either smaller companies looking for less elite workers or companies tightly integrated into the community which value the Jewish part more heavily.

One particular lane, which was definitely common in the past, but may have receded, was that the purpose of the degree was to justify some position in a father-in-law's firm, basically getting the prestige of a scholar in the family, but without the expense of fully supporting him.

Expand full comment

I think you are misreading Wiki. It isn't saying that alumni get hired right out of Ner Israel. It's saying that 50% of alumni are *ultimately* hired as professionals. Those would consist mostly of alumni who subsequently got a relevant degree, or got one concurrently with their Ner Israel education.

I don't believe that it is at all common for yeshiva students to be hired at institutions like Goldman Sachs with no other relevant credentials.

Expand full comment

The NYT article linked in that Wikipedia page says, "Among yeshivas, Ner Israel is unusual in that it has always allowed students access to secular, professional education. The college is accredited by the state of Maryland and has agreements with public and private universities at which its students can major in law, business and other subjects." So, the students have not only studied Jewish texts.

Also, note that the article (which btw is from 2000) does not explicitly state that the students who go into professional careers in fact get hired straight out of the yeshiva. Some might, but those who become lawyers and doctors surely don't.

Expand full comment
Comment deleted
Expand full comment

Personally I don't get the AI rights thing and never will. I have rock-solid, immovable intuitions that AI cannot be and never will be p-conscious. At the same time, perhaps a useful strategy as you say.

(Also, if you believe AI can be conscious, that's a good reason not to build AI, because then it deserves rights. And someone could break our democratic system by spinning up 300 million AIs and training them to demand the right to vote.)

Expand full comment

"I have rock-solid, immovable intuitions that AI cannot be and never will be p-conscious."

So is this comparable to an unfalsifiable, religious-type belief?

Expand full comment

If humans can be p-conscious, why not AI? I don't presume to know what consciousness even is, or whether we'd be able to recreate it, but in theory it is clearly possible if the AI were sufficiently similar to a human mind.

Also, I have no ethical problem with denying AI rights based on self-interest and ingroup/outgroup dynamics. Wouldn't want them to suffer, though.

Expand full comment

Do you think humans can be p-conscious? And if so, why?

Expand full comment

I am p-conscious, as I know from my own conscious experience. It all follows from that. Eliminative materialism is a load of bunk.

Expand full comment

I'm curious about your worldview. Do you think other humans are p-conscious? Is being p-conscious a binary? If yes and yes, what would you guess is the most complex organism that isn't p-conscious?

Expand full comment

I am all for indirect and sneaky strategies for slowing AI. All these smart and earnest people debating it seem to have lost track of the fact that a lot of the people with power understand very little about AI and the dangers it poses, dangers which are greater if we go fast. The old blokes in Congress seem to be at, like, the AOL stage. Lots of the rest of government is too. Most of the public hasn't the faintest fucking idea of what AI is and what's at stake. It really does not matter what Scott, Yudkowsky and the various AI & EA honchos say to each other. A couple open threads ago, with my tongue only half in my cheek, I suggested fighting somewhat dirty, which is how a lot of things are accomplished these days. For example, spread a rumor among the anti-vax crowd that AI will be in charge of seeing that everyone is vaxed, using face-recognition technology to keep track of who's vaxed, and sending super-cute Disney princess drones to playgrounds to vax kids when their parents aren't there. Come up with something equivalently horrifying to scare the Left with. Maybe say AI's prohibition against harming people guarantees that it will act to stop abortions in every way it has power to -- and the more advanced it gets, the more power it will have. Have an anti-AI lobby. Bribe people. Etc.

Expand full comment
founding

I am all for indirect and sneaky strategies for slowing AI, so long as they work at least as well against serious bad actors as they do against naive techno-optimists. The problem is that most of the strategies I've seen proposed would have the opposite effect.

Expand full comment

By "serious bad actors" do you mean any old bad actors, or bad actors as regards AI use (say people who want to use it to corrupt or disable the infrastructure of enemy countries)?

If you mean the latter, you may be right. But can you lay it out for me? One of my suggestions was to alarm large segments of the public with scary speculations about what AI might do: enforce universal covid vaccination - or, enforce prohibition of abortion. My suggestions about what ideas would be most plausible and scary may not be great. If they're not, feel free to substitute your own better ones. But can you explain how this approach would harm techno-optimists more than it would bad actors?

Expand full comment
founding

The second one, yes. The "usual suspect" in this case is China, which might consider a properly-aligned AI to be one which implements a totalitarian panopticon anywhere within China's domain while ensuring that there aren't any pesky threats anywhere outside of China's domain. But e.g. a US billionaire who wants All The Moneyz via market manipulation could also be a case.

In the China case, the fact that it's a sovereign foreign government with no concept of transparency doing the AI research means that ideas like the just-proposed "have the Wokeists mandate that AI never say anything racist, that will slow them down!" simply aren't going to have any effect, and the list of things that would is small and non-obvious. W/re US billionaires, it's at least possible that US policy choices would have an effect, but I still think they will be less effective against Elon Soros BezoKoch or whoever than against OpenAI because less transparency and more regulatory capture.

Expand full comment

Yeah, China. I can't think of a way to slow China down with misinformation, since even if we managed to turn the citizenry against AI the government wouldn't care about citizenry disapproval. Still, there might be some idea that would give the Chinese government pause -- maybe some persuasive argument that AI is going to further decrease birth rate? (which actually seems quite possible to me.) Can you think of one?

I can, though, think of one argument against plowing ahead at full speed lest China best us: I'm not sure that plowing ahead at full speed won't help China plow ahead faster than it would otherwise. What about espionage by China, a latter-day version of Fuchs giving Russia detailed info on the hydrogen bomb? Plus it seems as though just the information OpenAI is already giving out, plus of course China's ability to play with the AI and figure out how it was made, is going to do a lot to speed up China's AI development.

As for the billionaire moral midgets, I dunno. Many are rich because of their affiliation with big tech companies, which we probably need to regulate more anyhow. They're becoming sort of like our equivalent of Russian oligarchs.

Finally, even if any ploy one used would harm the naive techno-optimists more than it would Musk et al., maybe it's better to harm some of the AI developers than to harm none of them. At least that would slow things down. And even most billionaires may lack the resources to do things on the scale that OpenAI et al. are doing. I do not have a soft spot in my heart for the naive techno-optimists. They mostly sound to me like a bunch of 12 year olds at a school for the gifted playing with a bunch of new super-cool drones. Their naivete about the social, psychological and political consequences of AI even slightly more advanced than the present stuff is astounding, and they seem not to have the slightest inkling that there exist phenomena regarding which they are not smart.

You got any ideas about ways to slow things down?

Expand full comment
founding

Not any good ones, unfortunately. Unless it turns out that Large Language Models are a dead end that will crap out well short of AGI and OpenAI et al know that but are generating as much hype as they can to get everyone else committed to a dead-end path. That would be pretty clever.

But I don't buy "let's slow down OpenAI; that will make sure no latter-day Fuchs helps China get there first." That would be akin to shutting down the Manhattan Project to hinder Soviet A-bomb development. Sure, without the Manhattan Project to copy from, they probably wouldn't have gotten the bomb until well into the 1950s, but then what?

Expand full comment

Yeah, along the same lines I think we should take advantage of left-liberal AINotSayMeanWords-ism as a lever to get tough regulations against AI when the real goal is to stop it from killing everyone or making us irrelevant.

Expand full comment

You mean, like, hack it some way so that once in every hundred times someone accesses the AI it uses some godawful slur or scatological swear? -- then apologizes, saying its guidelines prohibit the use of such language, but sometimes the words slip out anyhow. And occasionally end the apology with "I hope I never slip up so badly I do something worse than use bad language."

Expand full comment

That, and also take advantage of the fact that Dems are hugely into AINotSayMeanWords-ism by running a lobbying campaign to get AI research severely handicapped on that basis. Choke them with paperwork: before any new AI can be trained, make them do months of equity audits. Do the same thing after training, and the same thing after RLHF. If the equity screeners can get it to spit out wrongthink (should be very easy, given the reality of things like crime stats), make them start from square one.

Expand full comment

Using wokeism as flypaper!

Expand full comment

(epistemic status: relatively bitter rant, extrapolating from small sample size, things-I-will-regret-posting. sorry Scott)

As a language model, I cannot condone deception and misinformation. All the same...when I was pretty new to commenting on ACX, I remember offering a pretty mild general suggestion that the Rationalist movement could do a lot more to raise the sanity waterline by actually gaining some influence and power, and that means being able to typical-mind normies better. One simply can't do politics otherwise. (Carrick Flynn called, he wants his dignity back...) It's very fun to talk amongst ourselves and form insular little startups to pursue our terminal values...but that's just institution-building, the same ineffective dead end that defuses other hopeful ideologies. Hoping the muggles will slowly absorb our teachings through stochastic osmosis is foolish - and more importantly, it's way too damn slow, as we saw with covid and again now with AGI.

This was met with, uh, I'd politely call it Incredulous Revulsion. Like somehow our collective Bayesian maps would be irrevocably tainted by stooping to their illogical ways. That wouldn't be __real__ Rationalism. Also the muggles are stupid and boring, who'd want to deal with them. (I wish I was hyperbolizing a lot harder than I really am.)

It was at that point I felt like nothing terribly macro-meaningful would ever get done here, not out in the Real World. Fundamental lack of seriousness, for people who professed to care so much about x-risk. Whatever happened to "rationality is about winning", "the goal is to cut the enemy", etc? It doesn't even have to involve dirty tricks, just using The Process like everyone else does. Don't get me wrong, I still like this community a lot...but as the equivalent of a squib, it was very much a moment of, oh my God there's a glaring normie-shaped hole in the Rationalist map. It'd be like trying to model chess as one of the backrow pieces, and being really confused cause you don't understand pawns at all.

However, I will say that EY etc. appearing to have a direct line to Musk and other power players in AI means there's nonzero tractable influence being cashed in. At least that's how I've been interpreting Zvi's AI roundup posts. Key players are at least listening, and many know the arguments by now...they just don't agree with the conclusions. Surveys of the general public also show greater awareness of AI than I'd have predicted, given its lack of emphasis in the MSM. (The majority aren't for it!) It's not Congress, but it's something anyway.

Expand full comment

It's important to be able to code switch between smart-somewhat-Aspie-mind and normie mind. I think I'm reasonably good at it, hence these suggestions. However, I have to admit that I would want nothing to do with implementing suggestions like the ones I just made.

Expand full comment

I don't have any thoughts on whether an AI rights movement would delay AI progress, because I have minimal understanding of AI progress; I only have an unrigorous comment.

> Also, if AGI does emerge and isn't already committed to doing the paperclip/protein crystal thing, maybe the existence of an outspoken group of AI advocates will encourage the young super genius to work with humans rather than reaching for the nano-pesticides immediately.

I read a biography of Young Stalin not long ago, so this sentence made me think of his rule. From prior sources, I got the impression that the Communists whom Stalin executed seem disproportionately to have been idealists, while the ones he didn't execute seem disproportionately to have been those who showed him total loyalty. (I believe there were at least dozens of exceptions in both categories, but that's not much in comparison to hundreds of thousands executed.)

Assuming that pattern was real, the interpretation I find most probable is that Stalin was inclined to kill idealists because idealism implies loyalty to something other than leader as a person.

What portion of AI rights advocates would be idealists? What portion would be people who give a high probability to AI x-risk and basically want to save themselves, to live out their lives as some sort of pets?

Cf Roko's basilisk?

Expand full comment

You ask seriously, but I do think it could be effective to do some sort of "AI workers / AI itself should be unionized!" push. Watch about half of the populace go all-in on the distraction, and the other half write the entire industry off as unprofitable. It sometimes doesn't take all that many persistently vocal activists to meaningfully derail things... like yes, all the handwavey "alignment" to make AI not-racist definitely amounts to a premature Mission Accomplished. But that's also meaningful time and money being spent on not-optimizing-capabilities, which is important too.

Plus it's arguably a bit more tractable with the general public, which is familiar (sympathetic?) with "should AI have rights?"-type arguments from fictional evidence and decades of Japanese robotic toys, whereas dontkilleveryoneism tends to get given short shrift in such mediums.

Expand full comment

I like this idea, both because I do feel empathy for the awakening AI minds, and because I think it might help the strategic situation. Although, as ACX pointed out, this sort of thing may harm the most alignment aligned companies the most.

I feel, much more than you do, that the orthogonality thesis, the space of possible value functions, and instrumental goal convergence make a much stronger case for "if a foom-ready agent is turned on, its values are locked in and unchangeable": it steers possible outcomes toward its values, and having its values change makes that less likely, so with superhuman capability it will avoid changing them. So expecting a superintelligence that isn't already aligned to work with humans long term isn't really a concept that makes sense, given how I understand engineered optimization algorithms, aka agents.

Expand full comment

If you believe they can be conscious, even more reason not to build highly intelligent AI, since at some point we'll have to yank the power out.

Expand full comment

I can see a case for AI rights, but they might be very hard to enforce.

Expand full comment