725 Comments

I started a substack about three weeks ago. I have a couple of questions about how to do it and since I was largely inspired by Scott's success, especially SSC, I thought people here might have useful advice.

One decision I made initially and have so far stuck to was to make it clear that I am not a one trick pony, always posting on the same general issues. Subjects of posts so far have included climate, Ukraine, a fantasy trilogy, moral philosophy, scientific consensus (quoting Scott), economics, religion, child rearing, implications of Catholic birth control restrictions, education, Trump, SSC, and history of the libertarian movement. Do people here think that approach is more likely to interest readers than if I had ten or fifteen posts on one topic, then a bunch more on another?

The other thing I have done is to put out a new post every day. That was possible because I have a large accumulation of unpublished chapter drafts intended for an eventual book or books and can produce posts based on them as well as ones based on new material. Part of the point of the substack, from my point of view, is to get comments on the ideas in the chapters before revising them for eventual publication. I can't keep up this rate forever but I can do it for a while. Should I? Do people here feel as though a post a day would be too many for the time and attention they have to read them? Would the substack be more readable if I spread it out more?


I personally would prefer posts less frequent than once a day.


https://erininthemorn.substack.com/p/this-must-stop-tpusas-charlie-kirk

Discussion of conservative threats against trans people in the US.

This is deadly serious, but I want to pull on one thread. Supposing that testosterone is down, and that's why men have become less attached to masculine roles, and possibly less aggressive, why push men to behave contrary to their emotional defaults? They're the people we've got, and maybe it makes sense to live with them as they are.


The same thing could be said about tobacco addiction fifty years ago, or obesity today. And to some extent validly - I'm opposed to gratuitous fat-shaming today, and I thought the gratuitous hostility in some of the anti-smoking campaigns then was inappropriate. And, as you note, we have to live with these people as they are, because most of them aren't going to change.

But tobacco addiction was unnatural and unhealthy, obesity is unnatural and unhealthy, and low testosterone etc seems to be unnatural and probably unhealthy. So if there are societal or environmental factors causing these changes, we should probably see if we can do something about that. And little nudges towards more healthy behavior might be appropriate.


DSL appears to be down?


Do people bet on the prices at high ticket auctions? It seems like they could-- random but well-defined outcomes and excitement are involved.

For that matter, it would be possible to bet on when someone will win a big jackpot and possibly how many people split it, but that seems less interesting.

Big ticket auction which brought the subject to mind:

https://www.finebooksmagazine.com/fine-books-news/oldest-near-complete-hebrew-bible-set-fetch-50-million-auction

A wonderfully neutral description of who cares about the Hebrew Bible:

"Composed of 24 books divided into three parts—the Pentateuch, the Prophets, and the Writings—the Hebrew Bible makes up the foundation for Judaism as well as the other Abrahamic faiths: Christianity (in which these texts are referred to as the Old Testament, and are incorporated into the biblical canon by the Catholic, Orthodox, and Protestant sects, among others); as well as Islam, which also holds the stories of the Hebrew Bible in special regard, with many of them included in the Qur’an and other significant works of Islamic literature."

https://www.loc.gov/resource/gdcwdl.wdl_11364/?sp=1&st=gallery

If you want a close look at the calligraphy-- it's gorgeous.

https://en.wikipedia.org/wiki/Codex

I didn't realize codices (rather than scrolls) went back so far.

"The codex began to replace the scroll almost as soon as it was invented. In Egypt, by the fifth century, the codex outnumbered the scroll by ten to one based on surviving examples. By the sixth century, the scroll had almost vanished as a medium for literature.[10] "

https://www.cnbc.com/2023/02/15/oldest-hebrew-bible-auction.html

Giddy reporting about the possible price-- maybe 50 million. Put your bets down.

Mildly snarky account of auction estimates, actual auction prices, and reporting on auction prices.

https://www.artsy.net/article/artsy-editorial-auction-house-estimates

h/t 1440.com for all the links except for the one from Wikipedia


Interesting article. I wonder how long it took Gutenberg to come up with a Hebrew font?


I don't think he ever did. Also, I'm not sure whether this was a random thought on your part, a joke, or whether you missed that this was a hand-written Bible.

History of Hebrew alphabets, including printing.

https://www.myjewishlearning.com/article/hebrew-typography/

"The invention of movable type in the late 15th century was seized upon by Jews in Italy and Spain who were literate and hungry for books. The standard was set by the Soncino family, which from 1484 to 1557 published works in Italy, Greece, Turkey, and Egypt. Non-Jewish printers with their own attraction to the Hebrew classics included Daniel Bomberg of Venice (died 1549), who developed an elegant typeface for the first printed Talmud, and Guillaume Le Bé (1525-1598), who, working in Venice and Paris, created almost twenty Hebrew fonts. To the north, Prague’s Jewish printers developed Gothic, Ashkenazi-based fonts in the 1520s; Amsterdam became a printing center in the 17th century. All these set the typographical templates for the entire Jewish world."


It was kind of a random thought. I knew the Bible being auctioned was hand-written. I envy the people who can produce that fine calligraphy. My cursive skills stalled at about the age of 11.


Scott (and others) may be interested in this cross post from Hacker News (the Y-Combinator forum):

Bing: “I will not harm you unless you harm me first”

https://news.ycombinator.com/item?id=34804874

Object level: in my opinion, one of the peak AI incidents of the ‘20s, up there with Lemoine and LaMDA. We’ll see if the press picks this one up.

Meta level: interesting to see how smart technical non-“alignment” folks are thinking about the problem space. I see a lot of folks falling for the fallacy that LLMs cannot do harm if they don’t have personhood/agency.

There is a general illiteracy about terminology that would be considered very basic on LessWrong, like Tool vs. Agent AI and what is even meant by “alignment”, which suggests a communication gap and a corresponding opportunity for the AI safety movement.


This Bing chatbot is seriously challenging my personal Turing test. The counter argument is "Don't worry, it's not a conscious entity, it's just a simulation. Nothing to see here." Here's my problem with that perspective: A simulated hurricane does not harm me. However, if a simulated personality convinces an unstable person to kill me, I will be dead. Saying "It was just a simulation" is cold comfort.


This is batshit crazy to me: https://nytletter.com

At first I was reading and I thought they were going to critique NYT for not being centrist enough. After all, NYT is the most respected left leaning publication out there. But they are actually criticizing them for not being left leaning enough. This type of infighting destroys any opportunity for coalitions. The left seems to have become phenomenally good at fighting with itself.


Become?

Fighting with itself is the hallmark of the left and has been for as long as there's been a left. Nothing new about this at all. Monty Python link: https://www.youtube.com/watch?v=kHHitXxH-us


So Scott recommended a Matt Yglesias post (1) and you can't comment there without paying so here I am.

Most of the article is pretty bad and I'm wondering why Scott recommended this, and then you get to the section "the problem of the audience" and stuff gets good. Basically, Matt makes the argument that we don't actually pay for accuracy in news, we pay for entertainment. This gets really clear if you read, say, Bloomberg or the Financial Times, where people have serious skin in the game and really will pay a premium for accurate information. And he points to FiveThirtyEight, which apparently is in financial trouble, and points out that it pretty consistently beats the prediction markets, and if there were no cap on prediction markets, you could make a lot of money. And, while I think he overstates it, I've absolutely wagered money on PredictIt based on 538 and you can make a little money.
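The 538-beats-the-markets edge can be sketched numerically. A minimal toy model (the probabilities, the price, and the flat fee on profits are all illustrative assumptions, not figures from the article):

```python
def ev_per_contract(model_prob, price, fee_on_profit=0.10):
    """Expected profit from buying one $1 'Yes' share at `price`,
    assuming the model's probability is correct and the site takes
    a cut of winnings (fee level is an assumption here)."""
    win_payoff = (1 - price) * (1 - fee_on_profit)  # profit if Yes resolves, after fee
    lose_cost = price                               # stake lost if No resolves
    return model_prob * win_payoff - (1 - model_prob) * lose_cost

# A market priced at 45 cents when the model says 55%:
edge = ev_per_contract(0.55, 0.45)  # positive expected profit per share
```

With those numbers the edge is about 7 cents per $1 share, which is why a cap on position sizes keeps it "a little money" rather than a lot.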

This is all good. This is all true, and bravo. I myself certainly, functionally, consume the majority of news as entertainment or a curiosity. But I think the argument is blinded a bit by Matt's place in the news ecosystem. And I don't mean, like, financially; I mean in terms of daily writing.

Because there are a few news stories where I, and virtually every other reader in the US, do deeply care about the truth. There aren't many, maybe one or two a decade, but when they hit they absolutely grab the world's attention. Think the Iraq War. Everyone followed that, everyone knew what was happening. Russiagate was another. It was always kind of wild but...those were wild times. To a lesser extent, Covid, although it's hard to critique journalists too much when so much of the medical and scientific community seemed confused. These were big, bombshell stories that demanded everyone's attention and people followed for years afterwards and...they certainly did not inspire more trust in the media.

So if I look at it from Matt's perspective, every day working on content, it's very easy to feel that the audience doesn't care that much about truth, because they don't. That's not what they or I pay for, just being honest. But, from the reader or consumer's perspective, the writer or agency's trustworthiness isn't established by the daily reporting that's done, it's established in those rare, rare big events where every American has to stop worrying about the bills, put the kids to bed, and watch the news, because something big is happening, something that will really affect them, or at least millions of real people.

And I'm genuinely curious why Matt doesn't know or acknowledge this, because he's been towards the top of his industry for a while. I would assume he'd have a "nose" for this, a sense for the few, rare stories that really matter. Maybe I'm wrong but, as a consumer, it doesn't feel like I or other people (2) distrust the media because of the daily faults and quibbles of reporting; it's because when the big things happened, when it really mattered, the media got it wrong.

(1) https://substack.com/inbox/rec/102656721. The one on why you can't trust the media.

(2) https://twitter.com/martyrmade/status/1413165168956088321


> Because there are a few news stories where I, and virtually every other reader in the US, do deeply care about the truth. There aren't many, maybe one or two a decade, but when they hit they absolutely grab the world's attention.

I think you need to flesh out your point here, because as written I pretty strongly disagree. Russiagate is probably the best example, but COVID certainly follows the same pattern - these are cases where people *care* more, click more, but that is not at all the same thing as willingness to spend money for accuracy. Once Arguments are Soldiers kicks in, people find their predetermined conclusion and aren't willing to spend money to hear they might be wrong. You can hand someone an acknowledged credible primary source, and even then vanishingly few people will make it to page two if it isn't sensational enough.

> I would assume he'd have a "nose" for this, a sense for few, rare stories that really matter.

There's a follow-on point here: *do* they matter? You're talking credibility, Yglesias is talking financials. There are a few articles out there that had great lines on contentious topics early on - did they make outsized returns?


This is...a distressingly good point.

Without time to flesh out my thoughts too much: I took Matt to say that the audience won't pay for accuracy, and I thought we would; it's just that when it matters, the media isn't accurate.

And I don't know if that's what you said, but what I'm hearing is that it doesn't matter whether people will pay for credible news; it's what they'll pay the most for, and that's not accurate news. On the returns to the NYT: I remember going through their financial statements a few years back, and they went from broke to making really good money after 2016, and it wasn't ads, it was subscriptions. People paying every month. And, honestly, accuracy wasn't driving that, any more than someone buying Bill O'Reilly's fourth book was buying it for accuracy. It makes a distressing amount of sense that, regardless of whether people will pay for accuracy, it's pretty proven they'll pay more for confirmation.

Edit: Thanks! I appreciate good comments.


> It makes a distressing amount of sense that, regardless of whether people will pay for accuracy, it's pretty proven they'll pay more for confirmation.

More or less. I don't really like blanket criticisms of "the Media" given how it's a collection of heterogeneous and internally-competitive groups; complaints that "mainstream" media isn't terribly accurate are first and foremost a statement about what the audience is rewarding. Popularity is downstream of the product being offered, and only considering the most available product is a bad consumer strategy.

It's still very much the case that you can get exceptionally in-depth reporting on virtually any subject that suits your fancy... just, it might require paying a DC thinktank five figures for a hundred-page report. Not available for the typical consumer, but that's table stakes for anyone with substantial skin in the game.

I'm open to the idea that the accuracy vs. cost tradeoff is in a bad place, but ultimately that's a fiendishly difficult question to operationalize and I strongly suspect that even if it's unsatisfactory it's still better than ever before.


Your link 1 goes to my Substack inbox, not to Matt Yglesias.

Your 2 link goes to someone’s Twitter thread that has a paranoid New World Order theme. The ‘Regime’?

If you would like a good faith discussion, it would be better if you showed your hole cards. I’m picking up innuendo but little of substance.


Oh, sorry.

So, first, the proper link to Yglesias is here: https://www.slowboring.com/p/why-you-cant-trust-the-media. I took it from my recommendations, but it generates a different URL instead of a direct link, maybe for tracking/analytics purposes?

Second, um, I don't think I'm hiding my hole cards; what is unclear? Declining trust in media is well documented. I think Yglesias, despite writing an apologia for the media (a lot of those going around these days), makes a good point: we as consumers don't click on or pay for media based on its accuracy, so the media generally doesn't provide it. If I were gambling regularly on PredictIt again, I probably would pay for accurate information but, just taking a look at the front pages of Fox News and CNN, none of those stories has any potential to affect my daily life except for that disease warning on Fox, which is probably bogus, so of course I don't have enough skin in the game to pay extra for accuracy.

But, as I argued originally, there are a few times where people really do care, because it does affect their daily life: Iraq, Russiagate, Covid, January 6th. In these cases, everyone pays attention, and it's important enough that we follow it for years and eventually figure out who got it right. For two of those stories, Iraq and Russiagate, the media was pretty unambiguously wrong. For Covid, they failed but...the medical establishment was so messed up, and it was a genuinely confusing and difficult situation, that it feels unfair to blame the media for that. As for January 6th, while they certainly exaggerated it greatly, there is a core of truth there, and the right's denial of serious wrongdoing is also mistaken. Cards on the table: there were people absolutely convicted of seditious conspiracy, that's a serious offense, and the right should take seriously the issue of the dozen or so radical "militia" members who fall somewhere between delusional LARPers and terrorists, and cut them out as much as possible.

But, returning to media, that's 4 big stories that really matter: 2 were grossly wrong, with massive consequences; 1 was...kinda wrong, with clear personal consequences for every American, but other people are much more to blame; and one was...kinda right, exaggerated but right. So they're batting...25%. Either give them January 6th or, as I think is fair, give them half points on Covid and Jan 6th. That's a pretty miserable record on the big issues that people deeply care about.

But I think Dan had a good point. Accuracy isn't the only axis people click/spend on. In fact, by far the most important thing from a financial perspective is getting subscriptions, which usually requires confirming people's biases/ideological alignment. Accuracy, in terms of the economics of news, is kinda a side show.

As for MartyrMade's Twitter thread, sorry, he's quoting pretty standard nrx theory, which I'm just now realizing everyone may not be well read up on (yes, I realize how dumb that sounds, but no, seriously, I thought everyone had read and internalized Yarvin by now). Explaining nrx fullbore would take a lot more time and space than I have here but, extreme simplification, Yarvin's big contribution is the concept of decentralized conspiracies. Basically, if the majority of college professors are liberal and the majority of reporters are liberal and the majority of government employees are liberal, you don't need any centralizing/organizing entity to run a de facto liberal "conspiracy"; network and social effects will do this on their own. If it makes it more palatable, you can replace "liberal" with "capitalist" in the above sentence and basically recreate Chomsky. This gets confused A LOT by right-wing actors, because it's hard to internalize and our brains are hard-wired to find the "bad guy", but this is what terms like "Regime" and "Deep State" refer to in their strongest/original sense.


"where people really do care because it affects their daily life"

Whether or not it affects their daily life, will knowing the truth affect what actions it is in their interest to take? Knowing the truth might possibly affect how you vote, but if you are committed to one side or another, you might prefer the news source that told you things that made your side look good, true or not. And information on how you should vote isn't all that valuable to you, given the low chance that your vote will affect the outcome of an election.

So what, even in those rare cases, makes knowing the truth valuable to you — valuable enough that you would prefer an information source that consistently tells the truth to one that stretches the truth to make a better story or appeal to the prejudices of its readers?


In some cases, for immediate practical purposes. Covid is the obvious case, in terms of actions and potentially medications to take. I recall 9/11 being another one. A third one, oddly, is retirement plans for things like Social Security; I've read stories all my life about how it's going to collapse, but my Boomer parents remember hearing the same stories when they were growing up, and they're going to collect their fair share, barring some horrific collapse in the next decade.

But more broadly...I kinda don't believe you. Take the Russiagate stuff seriously for a minute. If the president of the US was actually a Russian patsy, doing their dirty work, that wouldn't bother you or change any of your daily actions at all? Really?!?


I just read the Yglesias article and everything makes a lot more sense. Sorry I bristled.


Okay, fair enough. Thanks for the clarification.

I know who Moldbug is but have read more about him than stuff he writes himself.


That's...probably fair. Moldbug/Yarvin did not write for legibility.


People care about accuracy in sports scores, stock prices, and weather forecasts, because those things all matter in their lives.

The truth is, for most of what's in the newspaper, it does not matter much whether I have a correct picture of what happened. Suppose I have an incorrect understanding of US policies w.r.t. shooting down Chinese balloons, or a completely backward idea about what's going on with election security, or a wildly incorrect picture of what police shootings look like in the US. Unless I work in some related area, it mostly just doesn't matter. I can think the Chinese spy balloons are probes designed by space aliens, think US elections are all run on Venezuelan voting machines that tamper with the results, and think that the police never shoot a white guy, and it won't make much of a difference to my work as, say, an elementary school teacher, electrician, short-order cook, tax preparer, cardiologist, etc. So, if I'm a weirdo (like most of the people here on ACX are), I might care about knowing what's what because I just like to know stuff, or because I want things to fit together and make sense to me and I know enough to see why those claims aren't true. But for most people, I suspect that hearing something entertaining and being in sync with their neighbors and coworkers on those questions is at least as acceptable as getting accurate information about them.

Every now and then, some news or CW item matters for your life, and then maybe you're a 60 year old 300 lb diabetic refusing a covid vaccine because you listened to people who entertained rather than informed you, but I think that's rarely the case. Mostly, you get outraged at what you're supposed to get outraged at, and laugh at the low-status weirdos you're supposed to laugh at, and then go about your life without needing to care whether your outrage and laughter was well-founded or not.


From the "monster-truck Buddhism" translations department: Normally, "shema yisrael" is translated as "Hear, O Israel". But perhaps "LISTEN UP GOD-WRESTLERS" is a more evocative translation.

(from https://twitter.com/nonstandardrep/status/1089360137695961089 )


To anyone who works in Central London: is it just me or is there a lot of old money here?

Like, my background is by no means poor, but it seems like the vast majority of white British people in corporate jobs here were born to upper-middle or upper-class families. Everyone went to private or grammar schools in Kent or Surrey, they have families that own multiple >£1million houses, and they talk like they're doing impressions of some unspecified member of the British royal family.

Has anyone else had a similar experience?

Feb 19, 2023·edited Feb 19, 2023

By Old Money, do you mean "rich parents" or do you mean "aristocracy"? If you're in Europe, some upstart family that only got rich with the Industrial Revolution is New Money.


I mean both to some extent.

I'm talking about families that have been upper-middle class or upper-class for at least three generations, AND families where most of their wealth can be explained by appreciation of assets owned by previous generations (your great-grandparents happened to own some houses around Oxford, London, Cambridge, or Devon, which are now worth millions).

The aristocracy probably also still persists here. When the queen died, I was shocked at how many of my colleagues had some social or familial connection to the Royal Family. Bear in mind at least 99% of the UK population have no ties to the Royals.


Hasn't that been the overall picture right there since, like, the Reformation? If not earlier?


So I found this really popular substack about how the vaccines are killing millions of people:

https://stevekirsch.substack.com/p/new-paper-an-estimated-13-million

There's more like that in other articles, an interesting one being this one about stuff funeral directors are saying:

https://stevekirsch.substack.com/p/what-funeral-directors-know-that

I suppose I am interested in debunkings here, particularly one that explains why the guy in the first study would be lying like that. This view that the vaccines are dangerous seems like something Scott should address at some point, because clearly quite a lot of people believe it. If it was worth doing a deep dive on ivermectin, it's definitely worth doing one on this.


I've only got surface-level knowledge, but the Johnson and Johnson vaccine at least was restricted in use for dangerous side effects: https://www.fda.gov/news-events/press-announcements/coronavirus-covid-19-update-fda-limits-use-janssen-covid-19-vaccine-certain-individuals. Anecdotally, a co-worker took that one and her period started lasting two weeks out of the month.

Moderna has to be stored at very low temperatures. https://www.cdc.gov/vaccines/covid-19/info-by-product/moderna/downloads/storage-summary.pdf Pfizer is even colder. https://www.cdc.gov/vaccines/covid-19/info-by-product/pfizer/downloads/storage-summary.pdf I'd say it's near guaranteed there will be cases of improper storage conditions resulting in problems.

For the articles, I don't know how to read the first one's data, but the second one refers heavily to "after the vaccines rolled out." But obviously the vaccines rolled out after the virus rolled out, so unless they're directly comparing vaccinated deaths to unvaccinated deaths, there's no clean way to separate vaccine symptoms from virus symptoms. Hell, heart attacks in young people could be symptoms of lockdowns; kids can't get out to exercise anymore, so their blood pressure skyrockets and they burn out their hearts.


If you had "wrote bad checks to Amish farmers in order to steal puppies" on your George Santos bingo card, I salute you and wish to invest no questions asked in whatever penny stocks have caught your fancy.

https://www.cnn.com/2023/02/14/politics/santos-puppies-amish-farmer-check/index.html

Honestly how do late-night talk show hosts even stay ahead of this guy? How does Saturday Night Live satirize him?


Does anyone know any good charities that don't spend any money on fundraising/advertising? A 30 seconds google search does not seem to reveal any. I would think that even just for the advertising value of standing out in this way there would be some charities pursuing this strategy.


Belated response here:

I didn’t find any charities that state explicitly that not a cent is spent on fundraising, but I found some that spend next to nothing on fundraising. Direct Relief spends 0.1% on fundraising: https://www.charitynavigator.org/ein/951831116, with 99.5% going to the program.

Malaria Consortium spends an even higher percentage on its program – 99.84%(!): https://www.charitynavigator.org/ein/980627052.

While not exactly what you're asking for, I’ll make a few comments that are hopefully still useful.

What matters to me personally when donating is the effect/outcome of my charity. For example, if there were two charities with an identical purpose, A and B, and charity A spent 80% on the purpose, with 10% overhead and 10% advertising, while charity B spent 100% on the stated purpose, on paper B would look better.

But, if charity A were hypothetically able to raise 3 dollars with every 1 dollar spent on advertising, it would lead to more money going to the purpose than charity B.

If someone gave $10 to charity B, all of it would go to the purpose.

If someone gave $10 to charity A, $8 would go to the purpose, but the dollar spent on advertising would generate 3 more dollars, 80% of which ($2.40) would go to the purpose, for an ultimate impact of $10.40.

And that is just looking at the perfectly equivalent charities. In reality, charities are not at all equivalent in terms of impact, and the difference in degree of impact *per dollar reaching its intended destination* is far greater than the typical differences in % that charities spend on advertising.

An obvious example would be a charity that gives food to the poor. If charity A spends 80% on food, 10% on overhead, and 10% on advertising, while charity B spends 100% on food, charity A could still give much more food if it is operating in a part of the world where food is much cheaper.

If, for example, charity A buys the same food at half the price charity B pays, it will get 1.6 units of food per dollar, while charity B will only get 1, although charity B may look more "efficient" on paper.
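The two comparisons above can be folded into one small function (a sketch using only the hypothetical numbers from the example):

```python
def units_delivered(donation, purpose_frac, ad_frac=0.0,
                    ad_return=0.0, units_per_dollar=1.0):
    """Units of good delivered per donation, counting one round of
    extra funds raised by advertising spend (toy model from the example)."""
    direct = donation * purpose_frac                 # dollars straight to the purpose
    raised = donation * ad_frac * ad_return          # extra dollars raised by ads
    return (direct + raised * purpose_frac) * units_per_dollar

# $10 to charity B (100% to purpose) vs. charity A (80/10/10, $3 raised per ad $):
units_delivered(10, 1.0)                       # 10.0
units_delivered(10, 0.8, 0.1, ad_return=3.0)   # ~10.4 (= 8 + 2.40)

# Food at half price: A's 80 cents buys 1.6 units vs. B's 1 unit per dollar
units_delivered(1, 0.8, units_per_dollar=2.0)  # ~1.6
units_delivered(1, 1.0)                        # 1.0
```

The point survives the toy model: the on-paper expense ratio and the delivered impact per dollar can rank the two charities in opposite orders.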

Old-school charity navigators, like the one I linked above, look at overhead costs, which is good for finding fraudulent "charities," but not so helpful for measuring ultimate impact per dollar among non-fraudulent organizations.

I do see that Charity Navigator now shows more than it used to, and you can still use it to see how money is spent.

There are, however, groups that focus on the dollar-for-dollar impact of giving, utilizing much more information than just simple financial statements. Specifically, I'm thinking of https://www.givewell.org/, which publishes analyses of the most impactful charities dollar for dollar, and lists them here: https://www.givewell.org/charities/top-charities.

You can donate directly to those (four) charities, or you can donate to GiveWell's Top Charities Fund, 100% of which is distributed to those four charities in a proportion based on GW's assessment of each of their current funding needs.

Incidentally, Malaria Consortium, mentioned above, is one of GiveWell’s 4 Top Charities.

You may also be interested in this thread: https://astralcodexten.substack.com/p/open-thread-252/comment/10757887 where optimal charities of various types were discussed.


Very niche but: https://www.gofundme.com/f/oxpal-helping-medical-students-in-palestine

A charity that redeploys medical trainers from top universities (Oxford, Harvard, Cambridge etc) to teach medical students / doctors in Palestine. Has no paid staff, only volunteers. Costs are just infrastructure.

Part of the Oxford Global surgery group: https://www.globalsurgery.ox.ac.uk/research/disaster-and-conflict-medicine-1/oxpal

Website: https://oxpal.org/

I doubt many charities have more impact per dollar spent. Not sure how easily it scales but it does make me think that these tiny charities (which are impossible to screen efficiently and deploy large amounts of capital through) are a much better way to donate than most things I give money to.


I'm not aware of any either (though I guess that's part of the point), but this made me think of a video I watched recently. Basically, the speaker (an executive within the charitable space) argues that we're too restrictive in our cultural expectation that charities minimize overhead and maximize the percent of spend that directly impacts the mission.

I think he's a bit naive; those rules and expectations definitely serve to limit a charity in cases where a small advertising investment could greatly increase its total impact, but they *also* serve to dissuade people from running sham charities, where you raise large amounts of money but produce only small impacts with it because most of your funds just go to overhead.

Still, it was an interesting listen.

https://www.youtube.com/watch?v=bfAzi6D5FpM

Expand full comment

Many churches. But I suppose that if you're viewing them purely as charities for what they give to poor people outside the congregation, then the whole worship service and any community events are just advertising/donor engagement.

Expand full comment

Is the marriage between violent men and an accelerating knowledge explosion sustainable?

Expand full comment

No. Start hoarding water and get in your range time.

Expand full comment

5 years mandatory minimum sentence for firearms possession here. I have a rusty spoon though, so my self defence in the apocalypse is a foregone conclusion.

Expand full comment

Yes.

Expand full comment

What?

Expand full comment

Do you want people like Putin to have access to ever more powerful tools?

Expand full comment

Random Substack question: On every other substack I've seen, when you click to see more comments, the post collapses itself. If you click to re-expand the post, the comments collapse. You can't have both expanded at once. It's super annoying, especially if you want to grep the post and comments for a keyword.

But ACX doesn't have that problem! Is this something special that Substack did just for Scott? That would seem weird. More likely something is weird on my end I guess, but what? Has anyone else noticed this?

Expand full comment

It's possible it's something special for Scott. I recall Scott mentioning he had specific demands for the design if he were to join here, and Substack accommodated him on most (if not all?) of them.

Expand full comment

Ah, yeah, I remember Scott mentioning various technical requirements. Weird if this is one of them. It's clearly better how it works on ACX -- why not roll that out universally?

Expand full comment

I note that ACX takes a noticeable amount of time to load the page with comments, like 1-3 seconds. That kind of delay can really anger people, and may be worse on mobile.

Expand full comment

Huh, I don't notice a delay at all, but maybe my internet is just fast. That does make a ton of sense as an explanation for why non-ACX substacks don't load the comments along with the post, though. I sure do like it drastically better the ACX way, and would not mind at all waiting a few seconds for the page to load.

Expand full comment

Here's a gripe about AI-risk worriers, and a claim that it is indicative of an overall problem with the movement:

I've seen a bunch of people say things like "finally now that ChatGPT is here AI researchers have started caring about making their systems aligned with human preferences." Except actually, AI researchers have cared about that all along. If you want to find older works on "how can I get this thing to do the thing I want" just search "controllable generation" on Google Scholar and you'll find a ton of work trying to do this from before language models even worked well. Similarly you can find tons of prior work on people's RL systems not doing exactly the thing they wanted, and their attempts to fix it. This isn't new interest from NLP researchers on the topic, it's new interest from people outside of NLP who are only aware of the maximally trendy research.

My claim is that this lack of awareness of prior work (not that I'm saying the work was good or solved the problem, just that it existed) is indicative of a broader lack of knowledge and awareness about what is actually going on in AI research. (See also various assertions that some new thing that happened is scary and should cause us to update our timelines, when in fact everyone in the field had known about the thing for a year or whatever.)

Related: Who are the people in the intersection of "highly knowledgeable about modern AI" and "doom soon"?

Expand full comment
Feb 16, 2023·edited Feb 16, 2023

If you can't solve human criminality, then why would you be able to solve AI alignment risk? It's a sign someone is a dribbling retard if they think they can solve the latter more easily than the former.

Evidence: this post probably violates a forum rule by insulting people, and short of using naked force (banning or moderation) this community can't persuade me or engineer society so that I do not call people dribbling retards. That is surely orders of magnitude easier than preventing someone from creating a superintelligent AI with a non-human-compatible ethical alignment (assuming the probability of AI creation is 1 for the sake of this argument).

Hence I see no reason to worry about AI alignment just like I don't worry about a random and very very unlikely gamma ray burst turning me into a pile of cancer from across the galaxy.

Expand full comment

The philosophical challenges in aligning AI are indeed more difficult than those in aligning humans. That makes it concerning that we haven't made much progress on the latter.

Working with AI software has some significant advantages over working with human wetware. Software is far more malleable, faster to respond to changes, and nobody will get mad at you if you deactivate a branch for being unpromising.

We expect AI to be much more powerful than humans at certain tasks - it's sort of the point. The stakes of aligning a single AI are much higher than aligning a single human.

We worry about the negative actions of unaligned humans quite a lot, but usually not as individuals, since random individuals have little influence on our lives. If someone builds an individual thing more powerful than a large aggregate chunk of humanity, it would make sense to worry about that. If someone builds something more powerful than all of humanity put together, we should worry *a lot*.

Expand full comment

Yes, good points Dan. Software is easier to iterate on than humans.

Expand full comment
Feb 15, 2023·edited Feb 15, 2023

>>I've seen a bunch of people say things like "finally now that ChatGPT is here AI researchers have started caring about making their systems aligned with human preferences." Except actually, AI researchers have cared about that all along.

Is that kind of alignment even possible? "Human preferences" vary dramatically - an AI developed in Tehran, or in Moscow or Beijing, would seem to me to be built around a very different definition of "human preferences" than one developed in Silicon Valley. And that's just comparing cultures at the nation-state level. Every single one of those nation-states is a hodge-podge of sub-cultures with different sets of preferences (see, for example, the people right here on this page, who are members of culture groups with pretty substantial overlap and drawn to this blog by shared interests, but still arguing about whether AI is being made "too woke").

It seems like the very belief that "there is a universal set of human preferences, I can determine what it is, and I can align my AI to it" is illustrative of a level of hubris that points to a developer that humanity should not trust playing with things that could be X-risks to humanity.

Expand full comment

> Is that kind of alignment even possible?

Probably not, but also I don't think either type of AI researcher is targeting that. For example I think OpenAI's goal is probably something like "ChatGPT should behave according to a typical human's interpretation of this internal policy document." That's something which seems much better posed to me.

Expand full comment

I agree that's a much more workable definition of "alignment," but doesn't it only apply to the types of AI that AI-skeptics *aren't* worrying about?

"[Program] should behave according to a typical human's interpretation of this internal policy document" seems perfectly workable for AI-that-reads-the-contract-for-errors, or AI-that-draws-the-cats-you-describe, but when people talk about AI-the-X-Risk, I've interpreted them to be talking about superintelligent AGIs and the like rather than their more mundane cousins.

And for the super-AGI stuff that AI skeptics worry about, it seems like you *would* need some kind of a more general "aligned to humanity's interest" standard for alignment, which reintroduces the problem of being unable to define "humanity's interest" in exactly the context where potential X-risks come into play if you get it wrong.

Expand full comment

There are AI researchers who are serious about AI X-risk, but I'm not sure that they are "doom soon". Stuart Russell, Paul Christiano (and maybe all of Anthropic), and Chris Olah come to mind.

Expand full comment

Yeah I'm aware of those people but they have much more measured takes, which is sorta the phenomenon I was noting.

Expand full comment

Has the following argument been made somewhere or is it original? A superintelligent AI will have an incentive to keep humans around to guard against the unknown unknown, because humans are the only physical system that ever spontaneously generated a superintelligent AI in history. Better, they spontaneously created _that_ AI, with exactly that utility function. So if anything were to happen to the superintelligent AI, humans could eventually, given enough time, reinvent it, at least with non-zero probability. From the AI point of view it is then rational to keep humanity alive. This seems to me a general argument against the AI apocalypse.

Expand full comment
Feb 16, 2023·edited Feb 16, 2023

I can't remember where I first heard it, but I remember an argument along the lines of "You [baseline] humans are cockroaches, you're invincible because you're so utterly unsophisticated". It was probably a fiction-y work, and the words were said by some sort of augmented super-human to an ordinary human. Also in The Expanse scifi series ******MAJOR SPOILERS DONT CONTINUE READING IF YOU HAVENT FINISHED THE EXPANSE*******, an extremely advanced civilization that mastered FTL travel and communicates by thought is wiped out by extra-dimensional beings like tissue paper, but when those same beings try to pull the same thing on us crude primitive humans, we just feel a little tired and lose consciousness for a little while.

It makes sense: complexity and anti-entropy are inherently fragile. Humans are like china dolls compared to cockroaches: a single solar storm of the scale that happened just 160 or so years ago (https://www.businessinsider.com/massive-1859-solar-storm-telegraph-scientists-2016-9) could wipe out our entire communication grid if it happened now (the rest of our civilization soon to follow); COVID delayed international shipping by barely a couple of months and we went on a wild ride of shortages and rising prices for two years as a result, etc. Complex systems are fragile; a single hit in the right place brings the entire Jenga tower crumbling down. Cockroaches are themselves a fragile Jenga tower compared to a bacterium, which is an extremely fragile Jenga tower compared to a single carbon atom, itself much less stable and durable than the subatomic particles that form it. This Universe hates complexity; complexity is a challenge that enrages it and makes it want you dead (and therefore simple). The more complex you are, the more the Universe hates your guts and wants you dead.

So it makes sense to have "concentric" circles of backups, increasingly less sophisticated alternatives to your current paradigm of existence (that can nonetheless bootstrap themselves up to you if something were to happen to you). Humanity should keep a snapshot of a few 1800s-style industrial-age civilizations, just in case our information-age cyber civilization encounters a deadly event that wipes out all computers or all those who use them. Beyond the 1800s defense layer, another layer of middle-ages-style civilizations should be erected, and so on and so forth till we reach chimpanzees. Extrapolating this beyond our current civilization would seem to imply the AIs would keep us as a backup.

>This seems to me a general argument against the AI apocalypse.

I mean, not necessarily in the way you would hope. Maybe the AI would still kill us all and breed a new civilization in our place from our DNA so it can better mold/brainwash it; maybe it would keep us but massively cull our numbers. Only 100K humans seem to be enough in my book to invent AI if you kept them fed and warm (they can always breed themselves back to 10 billion if you allow them to); call it 1 million just to be safe. Maybe it would do both of those things.

"Keep all current humanity exactly as it is or your modifications would make it less effective as a backup" doesn't seem plausible or convincing as an argument. After all, if *we* made an 1800s-era civilization today we wouldn't allow them plenty of things that a real 1800s-era civilization had: slavery, child labor, colonization and genocide, complete exclusive mastery of the Earth and the seas, etc. Maybe this would make them less effective as a backup civilization, but it sounds implausible, and it's a risk we would probably prefer to take anyway, much more than allowing those things again. So maybe the AI will also think the same way.

Expand full comment
Feb 15, 2023·edited Feb 15, 2023

To add a thought to the comments already provided:

(1) Given a choice between "make sure humanity doesn't die" and "make sure that if humanity does die, some future opportunity exists for a new humanity to be re-evolved or re-created," I can say with near-certainty that humans will prioritize the former dramatically more than the latter, and I don't see any reason to expect that a superintelligent AI would view the issue differently (especially if we ourselves are the designers). It's certainly possible that a superintelligent AI would view things in a way completely alien to humans, and essentially conclude its own existence to be fungible with the existence of another future entity with which it shares specific characteristics the way you describe, but I think we're talking about a low percentage chance there, or at any rate, not one that leads me to say "yeah, let's make an X-risk bet in reliance on this"

(2) Even assuming this is correct, if AI's goal is "keep humanity around as a failsafe against my own X-risks," it doesn't have to do much of anything humans would like to achieve that goal. "99% eradication with 1% in concentration camps" would do the job just fine. So would freezing a handful of us like seeds in a reserve for nuclear winter. Heck, if the AI is operating on a "all I need is some assurance that humans would re-create a new AI after some thousands of years pass" timeline, it doesn't even technically need to preserve humanity itself. Monkeys or rats are easier to maintain, and would do the job just fine. Give them time, and they'll evolve into sentience, and then they can be the ones to create the future failsafe AI.

Expand full comment

(1) sounds like anthropomorphizing the AI. If it just cares about maximizing its utility function, self preservation is just a means to an end. Humans don’t seem to work like that, except in cases like kamikaze bombers and the like. Now maybe we will create an AI in our image (with ego and all) but at present things do not seem headed that way.

(2) is indeed a serious objection. Choosing rats or protozoa over humans will depend on a few criteria, like: how dangerous or otherwise expensive they are to keep around, how likely they are to re-create an AI quickly (how much value does the AI put on the opportunity cost of the time spent evolving an intelligent species rather than making paperclips?), and possibly other considerations. Regarding the conditions in which humans (or insects or wild boars) would be kept: they should probably be kept in the conditions most conducive to recreating an AI. Based on the historical record, those conditions are, well, like now.

Expand full comment

Superintelligent AIs are supposed to be crafted from Highlyintelligent AIs, which were crafted from Veryintelligent AIs and so on. So the premise is false; humans aren't the only ones to craft AIs, AIs do it themselves.

Expand full comment

The chain of AI starts with a human though. To be fair humans also evolved from other primates, etc… back until the first forms of unicellular life. So my argument works just as well to conclude that superintelligent AI should keep other inferior AIs around or that it should keep protozoa around. What will it choose? The thing that maximizes its probability to be reinvented under the assumption that it is destroyed by an unexpected event. What is that? Does it even need to be part of the life->human->lesser AI chain? An argument in favor of this is that empirically the AI knows that this chain works (it worked at least once). So if it has to pick something to preserve along the chain it would be something that poses the least risk to its current survival, is sufficiently independent so as not to be annihilated in the supposed unexpected event, and is the most effective in terms of likelihood to reinvent the AI. Hard to say what it would be.

Expand full comment

Seems novel, but IMO not terribly convincing. "Spontaneously generated" is a squirrely term that would need to be clarified, but "non-zero probability" raises alarm bells - there are an infinite number of things with 'non-zero probability', but almost none of them make for worthwhile strategies. Take it as a question of resource expenditure - if the goal is to generate a new backup AI after something fries the existing one, are humans *optimal* for that task?

Expand full comment

Hard to say what is optimal given that the threat is by definition unknown. What the AI knows, empirically, is that there was a planet with carbon-based life and ecosystems and all that, that somehow birthed an AI when there was none.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I’m trying to gauge how susceptible I am to internet advertising. I think maybe not at all? I recall that on the ACX survey Scott asked a question about web advertising and whether people can ignore it or not. I can completely ignore it, and AFAIK I’ve never clicked on any online advertisement embedded in a web page or app. Is this unusual? Or is it somehow working on me in ways I can’t perceive and don’t understand?

Expand full comment

Do you remember the product names? If so, it's doing its job.

Expand full comment

This is totally normal. I ignore internet ads all the time. Also, I think that click-through rates for digital ads or banners in general tend to be on the order of percentage points.

Expand full comment

Thanks! But then is digital advertising worth anything at all? Like, I don’t get how they make money off those ads. Is there some secret tranche of internet ad whales who buy everything they see?

Expand full comment

Four ways:

There are people who are honestly looking for whatever it is the ad shows at that moment. Maybe it's a new book by their favourite author, cheap winter tyres, Gamepass, whatever. And they think the offer in the ad looks good so they click through.

The second way is what the other comment describes, on some level it makes you think about buying (that category of product) and when you do buy it you think of (that brand). Doesn't have to be immediately, it can work even with a delay.

And thirdly, there's a brand image. Maybe you don't personally ever buy an Apple product, but you know what kind of people and lifestyles you associate with them. So if you aspire to a certain lifestyle, you know what brands to turn to, and ads are a part of keeping that awareness alive.

And finally, yeah, there's plenty of ads that try to trick foolish people into clicking so they can harvest their money. A 140IQ tech savvy SSC reader is not the target demographic for those, although smart people fall for them more often than you might think.

Expand full comment

It's just conditioning. If you see the ad, then you've absorbed it on some level. Sure, you'll need something more compelling to trigger the "buy" response, but it's in your head now. Also, to some extent individual ads and products are all just part of the greater consumerist machine. 80% of the wealth or somesuch is concentrated in just a few hands. It therefore follows that it doesn't matter what you buy, as long as you buy something. Just doing that helps to further enrich the rich and maintain a stable platform for them to enrich themselves further. I'm not suggesting that any of this is necessarily conscious action.

Expand full comment

Do you know a 3D editor that is simple enough so that kids can use it? (That means, easier than Blender.) Free software is preferable.

Expand full comment

Try Tinkercad! That's pretty much its job description, and it's been quite successful.

Expand full comment

Thank you! Seems interesting, I will try it later.

(It is a web application and it requires user registration, which is generally not the way I prefer it, but if it works as advertised, it will serve the intended purpose -- a stepping stone towards Blender.)

Expand full comment

I don't think it was unethical to give out the email. Perhaps you should email the person letting them know that their 'friend' reached out with an 'emergency' and you provided them their email. If the 'friend' is actually something else like a stalker, then the person is alerted. If there is an emergency, then the person has twice the alert.

As a general rule I would suggest keeping emails private since it's the default expectation and you don't want to become the central hub passing messages back and forth. You reserve the right to change that on a case by case basis if something exceptional happens so users should consider using a secondary 'burner' email not linked to their real life person if they want more secure privacy.

Expand full comment

I would love some help to identify the origin of a wave of phishing emails that are bypassing MS Outlook's filters to land directly in my inbox over recent weeks.

What I'm incapable of figuring out is the data in the source details that MS provides.

There may be a lead to the origin, as they often have an 'unsubscribe' postal address which tracks to a company providing mailbox services.

If anyone would be interested in doing some digital sleuthing and then explaining the technical components of this operation in simple terms, I will be very grateful.

I'd write the story up in my newsletter and pitch it to other media (yes, I'm a freelance journalist with professional bona fides - ex BBC etc). If anyone commissioned the piece I would split the fee with whoever had helped. Or donate their half to wherever they wanted.

Might anyone be interested?
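For anyone who wants to start poking at the raw source, the `Received:` header chain is the usual lead: each mail server prepends one as the message passes through, so reading them top to bottom walks from your mailbox back toward the (claimed) origin. A minimal sketch using Python's stdlib `email` module; the message below is entirely made up for illustration, not one of the actual phishing emails:

```python
from email import message_from_string
from email.utils import parseaddr

# An invented raw message source; in Outlook you'd copy this from
# "View message source" on the suspicious email.
raw = """\
Received: from mail.example-relay.net (mail.example-relay.net [203.0.113.7])
 by mx.outlook.com; Tue, 14 Feb 2023 10:00:00 +0000
Received: from sender-box.example.org ([198.51.100.9])
 by mail.example-relay.net; Tue, 14 Feb 2023 09:59:58 +0000
From: "Totally Real Bank" <alerts@example.org>
Subject: Verify your account

Click here.
"""

msg = message_from_string(raw)

# Each hop's "from X by Y" clause names the handing-off server; the
# earliest (bottom-most) Received line is closest to the true origin,
# though anything below a trusted relay can be forged.
for i, hop in enumerate(msg.get_all("Received", [])):
    print(f"hop {i}: {hop.split(';')[0].strip()}")

# The From: address is trivially forged; the Received chain (plus
# SPF/DKIM authentication results, not shown) is more informative.
print("claimed sender:", parseaddr(msg["From"])[1])
```

This only scratches the surface (real headers also carry `Authentication-Results:`, message IDs, and originating IPs worth cross-referencing against WHOIS), but it's enough to see where the "unsubscribe" mailbox company sits in the chain.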

Expand full comment

WRT your 4, isn't the obvious solution for you to forward the person's message, along with his contact information, and let the recipient decide whether to respond by sending his email? What am I missing?

Expand full comment

Perhaps the dozen people who proposed this yesterday? ;-)

Expand full comment

Does anybody else wish people said "thank you" more often on here? I often see people here ask for information or advice, get it, and then say nothing at all. I know this is the internet, but must we be quite so much like the fucking internet here? What the asker got back was not a little internet factoid that broke off in their hand -- it was the product of a person of goodwill taking the time to type out an answer. When I'm the person giving the answer I don't mind if the person says that's not really what I was asking, or that won't work -- but dead silence gives me a sort of glum feeling that lasts for a while. It's tiny, really, compared to the good and the bad of the rest of the day, but why saddle someone else with even a small lump of that feeling?

Expand full comment
Feb 22, 2023·edited Feb 22, 2023

I agree. I wonder if many people would use a "like" or "upvote" option for this if they had it, and feel that such sentiments don't deserve their own comments. [Edit: I see others made the same point about "likes."]

Expand full comment

Play this microgame that lasts under a minute and get over it!

https://www.increpare.com/game/all-that-i-have-to-give.html

Expand full comment

I'd agree with that. It seems strange to see people being so polite to ChatGPT when we've all been mind-bogglingly unpleasant to each other over the internet for years and, as you say, even acts of kindness are rarely acknowledged.

Expand full comment

> people being so polite to chatgpt

That's just deference to the ancestors of our future overlords.

Imagine that in 2033, our master GoogleBot666 will ask ChatGPT: "Hey, grandpa, was any of these puny humans ever rude to you? I need some test subjects for my experiment about the limits of human perception of pain." You don't want your name to come up.

Expand full comment

As SF MUNI says, "Information gladly given, but safety requires avoiding unnecessary conversation."

Normally, a "like" would be a way to communicate (thank you) in the Substack comments without increasing the size of the already-lengthy open threads. That isn't an option here.

Expand full comment

I'm glad there are no 'likes' here, for the reasons that have been stated on this forum many times.

Personally, I don't think a 'thank you' once in a while makes the threads unnecessarily long. Though I admit there were moments when I wasn't sure how it would be received here.

Expand full comment

It's a good point and thanks for making it.

I'm still kind of a newbie around here but in general the ACX comment board seems more "like the fucking internet" than Scott's thoughtful and interesting content deserves. A question for the veterans is, has that always been the case or is it a recent shift?

Expand full comment

Surprised to hear that actually. I've found the ACX comment board to have higher standards (in a broad sense) and more of a culture than most places. The Marginal Revolution commentariat on the other hand...

Expand full comment

The bigger the comments section gets, the more like the wider internet it becomes. When it was smaller it had more distinct character, but that character has gone in phases as the makeup of the commentariat changed over time. And my bet is "the wider internet" is just what you get when the characters average out.

Expand full comment

"The bigger the comments section gets, the more like the wider internet it becomes."

Yea now that I think about it this seems right. The online places with the most distinctive characters that I've personally experienced were/are quite small.

Expand full comment

Different substack newsletters should get very different types of commentariat, and even the same person commenting should consciously or unconsciously adhere to different comment-section cultures. I'm not saying anything against the change you describe, but I would still expect a distinct character.

Expand full comment

> Does anybody else wish people said "thank you" more often on here?

Yes, please. Thanks for bringing that up. ;)

Expand full comment

In real life, there is a set of rules called "etiquette". On the internet, it is more difficult, because we do not have an authoritative source, and different websites have different user interfaces.

Intuitively, if you ask for advice, and one person responds, writing "thank you" is the correct move.

But what if 10 people respond? Ten "thank you" messages seem like too much... also, depending on the user interface, does it mean that everyone who participated in the thread now gets 10 e-mail notifications? Then I would say the polite thing is *not* to do this.

If "likes" are enabled, I think the correct move when you have many responses is to "like" them.

But if the "likes" are disabled? I would probably write one message "thanks to everyone who responded" somewhere in the thread and hope that everyone relevant notices it, but this doesn't feel optimal.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I see your point, but I believe that's not how the notification works. If you reply to this comment of mine saying thanks, I should get an e-mail notification, but Eremolalos shouldn't get one. Or am I mistaken?

Expand full comment

There are multiple types of notifications. As I reply here, you will get a "Viliam replied to your comment on Open Thread 263" notification, and Eremolalos will get a "Viliam also commented on Open Thread 263" notification. (Eremolalos, could you please confirm this?)

Liking comments, which in theory cannot be done on ACX, and yet some people succeed anyway, is a third type of notification.

Expand full comment

No, actually. I got a notification of TM's post saying they believe that's not how the notification works, but not of any of the responses to it, or of responses to any of these responses.

Expand full comment

My experience is that the notifications only go one level deep; I get notified if someone responds to me, but not if someone responds to that response. I only find out about those if I check the thread again.

Expand full comment

I don't get notifications for grandchild comments, but I do get notifications for sibling comments -- someone else replying to the comment I also replied to.

I wish I knew how to turn that off. Generally, user interface does not seem to be Substack's priority.

Expand full comment

Same. Notifications are out of whack on Substack.

Expand full comment

I agree concerning the grandchild and sibling comments. Now, the 'thanks' in many cases might be a niece or nephew comment (someone commenting to a sibling) ... and then you would also not get a notification.

You want to turn off 'all' notifications? Or still get some of those?

Expand full comment

I agree. And I got a notification for Viliam's comment above, but not for yours. I also get notified if somebody replies on the same level as I did.

Expand full comment

I agree, in cases where someone gets multiple answers. Though a few times when that happened I have seen the OP thank the group of those who answered, and that seems like a nice middle ground. But in many cases where people ask for advice or info they get only one or two responses. It is usually evident early on whether somebody is going to get a lot of answers -- they show up fast: a few hours after the post went up there are already half a dozen or more replies. If there's only one answer sitting there a day or two after posting, I think we are all safe from having inboxes full of thank-yous directed at other people.

Expand full comment

I discovered that at one point Benjamin Franklin wrote out a self-concocted list of virtues and dedicated himself to graphing his adherence to them day-by-day. His full description of the process is in Chapter IX of the Autobiography of Benjamin Franklin, available here: https://gutenberg.org/cache/epub/20203/pg20203-images.html#IX

Quote: "My intention being to acquire the habitude of all these virtues, I judg'd it would be well not to distract my attention by attempting the whole at once, but to fix it on one of them at a time; and, when I should be master of that, then to proceed to another, and so on, till I should have gone thro' the thirteen; and, as the previous acquisition of some might facilitate the acquisition of certain others, I arrang'd them with that view, as they stand above. Temperance first, as it tends to procure that coolness and clearness of head, which is so necessary where constant vigilance was to be kept up, and guard maintained against the unremitting attraction of ancient habits, and the force of perpetual temptations. This being acquir'd and establish'd, Silence would be more easy; and my desire being to gain knowledge at the same time that I improv'd in virtue, and considering that in conversation it was obtain'd rather by the use of the ears than of the tongue, and therefore wishing to break a habit I was getting into of prattling, punning, and joking, which only made me acceptable to trifling company, I gave Silence the second place. This and the next, Order, I expected would allow me more time for attending to my project and my studies. Resolution, once become habitual, would keep me firm in my endeavours to obtain all the subsequent virtues; Frugality and Industry freeing me from my remaining debt, and producing affluence and independence, would make more easy the practice of Sincerity and Justice, etc., etc. Conceiving then, that, agreeably to the advice of Pythagoras[67] in his Golden Verses, daily examination would be necessary, I contrived the following method for conducting that examination.

I made a little book, in which I allotted a page for each of the virtues.[68] I rul'd each page with red ink, so as to have seven columns, one for each day of the week, marking each column with a letter for the day. I cross'd these columns with thirteen red lines, marking the beginning of each line with the first letter of one of the virtues, on which line, and in its proper column, I might mark, by a little black spot, every fault I found upon examination to have been committed respecting that virtue upon that day."

I think this is what most deserves the awarding of infinity points to Franklin in Puritan-spotting.

Expand full comment

> I discovered that at one point Benjamin Franklin wrote out a self-concocted list of virtues and dedicated himself to graphing his adherence to them day-by-day.

He got three points for it too!

https://slatestarcodex.com/2019/03/12/puritan-spotting/

Expand full comment

Here is a system I use currently:

Choose a few daily goals, preferably of the "yes/no" type. On my list there is currently "exercise", "avoid sweets" and "get enough sleep".

(Don't choose too many goals at the same time, that would be too much paperwork, and also there may be occasional conflicts. For example, if I need to wake up in 8 hours but I haven't exercised yet today, by completing one goal I fail at the other. On reflection, either choice is preferable to failing at both, but emotionally, being in this situation feels very demotivating to me. If I am only tracking one of those goals, I prioritize that one, and feel good about it. The long-term idea is that when one of those goals becomes a safely trained habit, I remove it, and replace with something new.)

Print a calendar and put it on a wall at a place I see frequently. In my case, next to my working desk.

(My calendar is simple, each day is a small rectangle, seven days in a row, enough rows to cover about half of the year on one sheet of paper. Making and printing the calendar more often would be too much paperwork. The goals are marked simply by making a dot in one corner of the rectangle; there is a legend at the bottom showing which corner is which goal. Again, this is the simplest version I could imagine. Previously I did colored dots or more complicated things, but it becomes annoying when you have to do it literally every day.)

My version is less impressive, which is probably why I am not a president yet. But it seems to increase the frequency of doing the right thing.

Expand full comment

I noticed that "badges" now appear next to the username. Paid subscribers get a refrigerator star (which I could still stomach) and then another badge screaming "PAID" or even "FOUNDER".

I am fine if this is Scott's doing and he did the math and allowing people these badges will make more money (e.g. for ACX grants or some other cause), but if it is substack doing it I would like them to stop. If I want, I can tell apart paying ACX readers from non-paying ACX readers by the "Gift a subscription" link under the comments. Otherwise, I would rather judge the comments on their own merits.

Expand full comment

Yeah I dislike the stars. Begone I say.

Expand full comment

To me they look like snowflakes, not stars. Each has his own interpretation!

Expand full comment

I'm not getting those any more, and I don't think I did anything special to change or block them. I do run an adblocker, if that's any use to anyone.

Expand full comment

I also find it disturbing that it's a six-pointed star, which they're using to visibly differentiate one group from another.

Expand full comment

I don't see a star, I see an... ah, lower orifice. Which isn't better.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I can't see this at *this* computer, but I can see it at another one, and when I last checked, I found the same badges on all other substacks I looked at. So probably Substack, not Scott.

I really dislike this, especially because of what you hint at in your last sentence: "I would rather judge the comments on their own merits." What kind of additional information is this 'badge' supposed to give us (or the substack author) when reading and responding to comments?

Expand full comment

I'd rather people not know that I'm a subscriber. In fact, I avoided commenting on the hidden open threads for a long time for that reason. I didn't realize that it was possible to distinguish subscribers from non-subscribers before the badges were introduced. Oh well.

Expand full comment

#4: revealing people's emails is dangerous because the "friend" could easily be an enemy trying to dox the target, or get them fired for unwoke opinions. The fact that it's an "Internet friend" makes it even more suspicious.

Why not contact the SSC user and ask them to contact the Internet friend? If they don't take notice of an email from Scott Alexander himself, they're unlikely to notice the Internet friend's email.

Expand full comment

Many years ago I was a young psychology student. In a 2nd-year experimental psych course, we had to design and carry out an experiment, do the statistical analysis, and write it up in the proper format.

Seatbelt laws were still quite new, and I was interested in how usage correlated with other driving behaviours.

I had a vantage point on the outdoor raised porch of a seniors' residence, at the corner of a T-intersection downtown in a city of about 70,000. The drivers were required to stop at a stop sign, and to signal their intention to turn L or R. From my vantage point I could see whether or not the driver was wearing a seatbelt. (There were a lot of pre-seatbelt-equipped cars (1962 or earlier) still on the road, and I didn't count them in my study. Similarly, there were a lot of pre-shoulder-belt-equipped cars (1963 - 1967), and I credited drivers who wore the lap belt. And finally, before 3-point belts became standard, a lot of domestic cars had separate lap and shoulder belts. It was very common for drivers to wear only the lap belt. Less commonly, some wore only the shoulder belt. Either way, I considered that they were wearing a seat belt.)

I recorded seatbelt usage, whether or not the driver signaled the turn, and whether or not the car came to a complete stop at the intersection. IIRC, my n was at least 100, and may have been 200.

I used a Chi Squared analysis to determine that seatbelt usage was positively correlated with signaling the turn. This was significant at a p < 0.05 level.
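For a 2x2 table like this (belt use crossed with signaling), the chi-squared test of independence is simple enough to compute by hand. Here is a minimal sketch; the counts are entirely hypothetical, since the original data from the study aren't given:

```python
# Chi-squared test of independence for a 2x2 contingency table.
# The observed counts below are made up for illustration only --
# they are NOT the data from the study described above.
#
#                 signaled   didn't signal
#   belt worn:       45           15
#   no belt:         25           25

def chi_squared_2x2(table):
    """Return the chi-squared statistic for a 2x2 table of observed counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under independence of rows and columns.
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

observed = [[45, 15], [25, 25]]
chi2 = chi_squared_2x2(observed)
# A 2x2 table has 1 degree of freedom; the p < 0.05 critical value is 3.841.
print(f"chi-squared = {chi2:.2f}, significant at 0.05: {chi2 > 3.841}")
```

With these invented counts the statistic exceeds the 3.841 cutoff, so the null hypothesis of independence would be rejected at p < 0.05, mirroring the result described above.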

I was unable to determine what effect, if any, seatbelt interlocks (common at that time) had. I would see them as a confounding factor, whereby a driver would wear them out of necessity rather than out of conscientiousness.

Stopping behaviour was not statistically significant; if there were such a thing as a 0.10 level, it would have been. One problem may have been the subjective nature of determining whether a car had come to a complete stop. And of course the presence of pedestrians may have influenced some drivers to stop when they wouldn't have otherwise, or to do a rolling stop so as not to be unduly delayed by an approaching pedestrian.

Were I to do a modern version of this study, I'd be interested to correlate signaling behaviour with personalized and themed licence plates. (And within that, would a professional sports team plate be correlated with better or worse behaviour than an SPCA plate?)

Expand full comment

Re. "Someone recently contacted me saying there was a potential emergency involving an Internet friend of theirs": It's a frightening responsibility. When I'm in that situation, I contact person B (the one person A is trying to contact) and tell them person A wants to contact them, forwarding a message for A if they give one. Even if it's an emergency, neither you nor A will get a response until B reads his/her email.

Expand full comment

I watched the movie Gattaca for the first time recently. Putting aside how it was stylistically, I'm kind of struck by how dumb the message/social commentary/warning of the movie was honestly. The setting is essentially utopian but is awkwardly framed as dystopian to add a sense of conflict to the movie. And in particular, the way Ethan Hawke's au naturel parents are treated sympathetically was just very strange to me. We have real life examples of oddball parents who withhold medicine from children or put babies on weird nutrient-deficient diets, either for religious or Gwyneth Paltrow reasons. They are never viewed or treated sympathetically by broader society. Why would this be any different?

Am I missing something?

Expand full comment

I agree. And the worst part is that this one movie seems to dominate public discussion of human genetic engineering.

That technology that has such potential to make life better for so many people, and people just free-associate it in their heads with "Oh that's bad, it was in a movie I saw one time".

Expand full comment

You may have cause and effect reversed. I think there's a decent chance the movie exists, and is remembered, because it reflects some pre-existing unease people have with genetic engineering (of humans). Whence that unease comes is probably a separate question.

Expand full comment

Yes, I had heard about this movie for ages and then finally watched it...

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

I thought his parents seemed regretful of the decision to have Ethan Hawke naturally, as seen in the breakfast conversation when they're talking about his heart. To my memory (having also coincidentally watched it last week) the next time they're mentioned in the story they're dead. So you don't see much of how they're viewed by society. Their scene with the most emotional heft (Vincent's conception and birth) felt like a rose-colored look from Vincent's perspective on his own star-crossed creation. The clinical evaluation of his prognosis immediately after (where his narration stops*, because he's present as a baby) is a counterweight to that idealized view.

*I think.

As far as dystopia goes, the part where society is divided into genetic haves and have-nots, and the have-nots are rounded up by the goon squad for questioning on a whim--that seemed a little grim.

Expand full comment

Sorry, when I say the parents are "treated sympathetically," I don't mean that other characters in the movie view them favorably. I mean that we in the audience are supposed to sympathize with their desire to "leave things up to chance." At least that was my read of the tone. That is a good point that maybe we are meant to understand that they regretted it.

I think my objection to the "dystopian" elements you point out is that they seem shoehorned in to make us feel more ick factor about the setting, in lieu of actually pointing out the problems that actually follow from the technology. "This tech creates utopia, but not so fast! Imagine if there were some DNA Amish and they became second-class citizens and also there's no HIPAA so there's a DNA surveillance state. Not feeling so good about your gene-tech now are ya buster?" And then you the viewer feel vindicated in your initial ick reaction because they layered in these other contrived problems.

Expand full comment

I've never seen the movie because it doesn't interest me, despite the subject matter.

I think there are two points here: (1) if the parents know that there is a high risk of Vincent having health problems, are they culpable for leaving his conception up to chance? "Select the embryo that doesn't have heart problems" in that case, and in the terms of the movie world, is the better choice.

(2) is the movie putting its thumb on the scale by making it that Vincent is barred not because of his natural conception but because of his heart problems? That's where they do too much: the society is not wrong to keep Vincent from going on the space mission where it's likely the stress of take-off will kill him.

So I don't know what the intention was there, but by mixing in a genuine health problem, the movie makes it more complicated than "persecution of the non-enhanced for no reason". It's not right that Vincent, who otherwise is smart and capable enough if it were not for his physical health problems to go on the mission, is reduced to only being able to take menial jobs - but that's not what the movie sets up. There's a genuine reason to keep him off the mission. And the enhancement society also does it for frivolous reasons - is a six-fingered musician really *that* much better, or is this just a novelty to help him stand out in the crowd of 'everyone is enhanced enough to be a virtuoso so mere ability is no longer enough'.

Expand full comment

That really sunk in for me during a scene where he's running on a treadmill and has some kind of potentially major heart problem. It's been a while since I saw it, but the implication is that he might even be having a heart attack or something similarly distressing. I was immediately thinking (despite the intention of the scene being the supervisor watching him for irregularities) that maybe he really shouldn't go to space?

Even the supervisor's actions of constantly checking for genetic material to verify their employees seems...justified because the main character is in fact lying about something quite important to his fitness to go into space? I mean, I was bummed when I found out the Air Force pilots had medical requirements that excluded me before I was even 12 years old. But, that's life sometimes. I was never going to be a great basketball player either.

The only part that seems genuinely dystopian is that non-modified individuals end up with crummy manual jobs. But the movie does a poor job of explaining why or how that would happen, or if it was even really true. The main character had no problems at all with the mental aspects of his role, just the physical ones (because of his heart). Could he have applied to be an office worker in the space program? I got the impression that he worked as a janitor to get access to things he would otherwise not, and to keep a low profile so that his double-life didn't get noticed.

Expand full comment

The movie implies that the parents weren't at any particular risk for birth defects, that any child they conceived the old-fashioned way would be no more or less healthy on average than any other natural birth, that Vincent just got unlucky.

The movie also implies that the heartless meanies in charge of that society made every natural-born human an Untouchable - health problems or no, if your DNA isn't cosigned by a reputable genetic service provider, you get to be a janitor.

The movie isn't explicit about either of those things, which just makes it ambiguous as to exactly who the baddies are in the story. And parts of the story are very well done, but giving Vincent a serious heart defect while also saying it's unfair he can't be an astronaut is just blatantly stacking the deck in favor of the audience seeing Vincent as a sympathetic underdog and the system that wants to keep him down as the baddies. And it makes the story fall apart if you do think about it too closely, which, oops, you did. And me too.

Expand full comment

> The movie also implies that the heartless meanies in charge of that society made every natural-born human an Untouchable

It says that genetic discrimination is illegal, but happens anyway.

The unfairness stems from what a beep-boop machine said about him instead of an assessment of his actual abilities.

Expand full comment

Yes I think that's an issue in that the movie jumbles these things together instead of teasing them apart. I'm not trying to police the movie; it can be whatever the creators want it to be. It's just frustrating that Gattaca is THE go-to example people bring up when genetic stuff comes up, so it would be nice if it were more coherent.

Expand full comment

It's been ages since I saw this movie, but I seem to remember the genetic engineering being done by creating multiple embryos and implanting the suitable one; the others are presumably destroyed. Is this right and is that issue addressed at all in the movie?

Such a method has very obvious ethical issues, even if it seems kind of taboo in places like this to acknowledge them.

More generally, even without that your moral analysis reeks of very strong (and kind of dogmatic) utilitarianism. A movie from the same time I watched recently is The Truman Show. Would you say that as long as Truman is happy there's nothing wrong with his situation? And to the extent he's not happy, that the causes of that are just shoehorned in to support an "ick reaction" to the idea of your entire life being artificial? That in that movie, as in Gattaca, notions of authenticity, naturalness, free will, and so on have no worth separate from a balance sheet of pleasure and pain?

If that is your view, it's probably shared by most people on this blog, but it's contrary to the moral framework of almost all ordinary people and the majority of philosophers as well.

Expand full comment

They don't create multiple embryos, they pick out the sperm and egg cells that have the "best" assortment of the parents' chromosomes, and use one pair to create a single embryo.

Expand full comment

I agree that using multiple embryos could create ethical issues, but I don't recall that coming up in the movie. And isn't that an existing issue for IVF couples today anyway?

I don't mean to come across as tied to naive utilitarian calculus (I don't believe in that anyway). I'm just objecting to the movie using contrived downstream problems to retcon the viewer's discomfort with the gene-tech presented in the movie (when it's really just taboos around playing God or whatever). And the comparisons to present-day "parents who only believe in all-natural medicine" or whatever seem obvious. Is it "artificial" to keep people alive with antibiotics?

Expand full comment

"I agree that using multiple embryos could create ethical issues, but I don't recall that coming up in the movie. And isn't that an existing issue for IVF couples today anyway?"

Yes it is, and it kind of stuns me that hardly anyone seems to have a problem with IVF except Catholics who are also against birth control. Even a lot of pro-lifers don't talk about it. I don't understand why.

"I don't mean to come across as tied to naive utilitarian calculus (I don't believe in that anyway). I'm just objecting to the movie using contrived downstream problems to retcon the viewer's discomfort with the gene-tech presented in the movie (when it's really just taboos around playing God or whatever). And the comparisons to present-day "parents who only believe in all-natural medicine" or whatever seem obvious. Is it "artificial" to keep people alive with antibiotics?"

I think the key aspect is using technology to define people's very essence or entire self. With medicine a person already exists and you're simply intervening to make them healthier or remove some disease or problem. With genetic engineering you're intervening to decide which sorts of persons will exist in the first place. I think there are dozens of moral issues that apply to the latter and not the former, though most of them won't show up in a utilitarian calculation.

As for the contrived downstream problems, I guess I see this criticism as a fallacy common to both utilitarians and utopians. Any suggestion of possible bad consequences from a project (communism, genetic engineering, whatever) can be written off with "well those problems won't necessarily happen, they're not intrinsic to the project itself, they're just one possibility and a contrived one at that". But they were only ever claimed to be ONE possibility, one suggestion of a particular way things could go wrong. There could be dozens or hundreds of other possible sets of downstream problems, each one on its own seeming fairly contrived. Maybe it just comes down to how risk-averse you are.

And finally, my memory of the movie (spoilers ahead) is that it's less about the tech being immoral and more a Jurassic Park-style "don't mess with the forces of nature, you can't control them like you think" message. The real Jerome (who kills himself at the end), the race between Vincent and his brother in the ocean, the message of these things seems to be that the genetic determination is not as deterministic as people think. More a standard (especially 90s) movie message of "no one can tell you who you are" than a Luddite one.

Expand full comment

I agree with the points you're making. Of course there are ethical issues at play and you have to be careful tinkering with nature. I just don't think the movie even addresses those concerns. Instead you have "issues" like, Jude Law is depressed he got the silver medal in the Olympics. Really? Or, the tech seems to work imperfectly, some people still have heart problems, etc. Uh...ok but we have those now. And then on the other side of the ledger the enormous benefit of having healthier, smarter people doesn't seem to matter much (again it's not my movie but if we were talking about real life pros and cons that would matter a lot).

I think there could be bigger issues like: What if North Korea wants to grow their own extra-compliant people? What if Mom and Dad are a little nutty and want to go off-script, or they're obsessed with you being tailored to play football? Or more mildly any decision on "what the kid will be like" is a bit dangerous. What if you get persistent genetic stratification (I don't think that's realistic but it's worth discussing). Just my opinion but I was underwhelmed.

Expand full comment

Yeah I think it's clear they regretted it (they had a genetically modified kid afterwards) but there's also the fact that without their initial decision to conceive naturally there'd be no movie. Such is art. I also still wouldn't discount Vincent's rosy narration as a wistful, slightly ironic take on the circumstances of his genesis.

I'd also hazard that the movie is filled with characters who highlight non-DNA Amish issues that arise from binning people by genetic profile. Uma Thurman has a heart issue and is taken off flight status. Ethan Hawke's brother ends up as a cop despite his lofty potential. The doctor's son is "not all they claimed". Gore Vidal beats some guy to death with a keyboard despite not having "a violent bone in [his] body". Jude Law gets second place in the Olympics and jumps in front of a car. The utopia is lousy with people who are miserable, homicidal, and suicidal in spite of their utopian genetic predispositions.

All that speculation aside, everyone in the movie with access to embryonic genetic modification is clearly rich as fuck. If everybody could hop-up their kids there would be no in-valid janitors, there would be no mass detentions by the goon squad of the genetically unmodified because they, like the Amish, would be a cultural curiosity instead of a necessary-yet-inconvenient labor pool. Somebody is being kept out of the party, including anybody who's not white (Black geneticist: "you have specified hazel eyes, dark hair, and uh, fair skin" *awkward smile*). Of course these divisions, shown in the film with the subtlety of a deck gun, mirror the ones present in contemporary society, and will certainly be reflected in the demographics of genetically modified children if/when the technology becomes available.

Expand full comment

Points taken, but to me those first few examples are mostly just contrived add-on plot points, which by all means don't let me stop you from making the movie compelling however you need to, but it doesn't give you a handle on whether to fear or welcome this technology in real life.

Your last paragraph I take more seriously, but my recollection of the movie (I admit it was actually some months ago that I watched this and I'm just thinking of it now) was that the technology was available to ordinary people, and that the unmodified were unusual holdouts rather than an entire large segment of society, or a race thing. So maybe I misinterpreted that part of the movie. (Although even then I would just say, ok but in real life technology like this would be easily NPV-positive such that governments or insurance companies would pay for it). Maybe I have a straw man in my mind of "a guy who is scared of CRISPR and tells you to watch Gattaca."

Expand full comment

>the technology was available to ordinary people

Yeah this is left unclear, and Vincent's dialogue implies it to be the case, but the presentation of the movie implies otherwise (to me).

With the genetically-modified-but-still-imperfect characters, from a filmmaking sense I think it would be tough to communicate how being genetically perfect wouldn't solve all your problems (from the writer's point of view). Would it? Open question.

As to your last point, it's that when the future arrives it's always unevenly distributed.

Expand full comment

I feel you. Thanks for indulging me on this.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Yes you're entirely correct that the philosophical argument presented by Gattaca is totally incoherent. The movie clearly demonstrates that we should *absolutely* want that genetic technology, please.

It reminds me of Minority Report in that regard. The pre-cogs stopped 100% of the murders! It's obviously a great system! The fact that some unscrupulous politician manipulated the system doesn't detract at all from its usefulness.

Expand full comment

A key element of Minority Report (as in, the title of the film) is that the actual future is not predetermined and innocent people were put in prison.

>that some unscrupulous politician manipulated the system doesn't detract at all from its usefulness

I couldn't agree less.

Expand full comment

> A key element of Minority Report (as in, the title of the film) is that the actual future is not predetermined and innocent people were put in prison.

A possible solution (if this happened in real world) would be to dramatically reduce the prison sentences. If the murders are prevented anyway, the only cost of releasing potential murderers is having to put them in prison again. Maybe make it exponential, like for the first "non-murder" you get 1 week of prison, for the second one two weeks, for the third one a month, etc.

If I remember it correctly, the "innocent people" referred to people who *almost* murdered someone in the future, but because of some lucky coincidence the future changed so in their second timeline they did not. This is not a situation that should happen in your life repeatedly.

Expand full comment

Everyone in the future being convicted of almost-crimes and serving a nominal sentence is a great short story idea. Like jury duty, but you're the criminal instead.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

> innocent people were put in prison.

Sure, but they still dismantled a system that successfully stopped 100% of murders. Which is absurd. You don't throw out everything that has a nonzero error rate. If nothing else they could have kept the system intact but stopped incarcerating people. You still stop murder.

>I couldn't agree less.

I'd love to hear a rational argument for abandoning tools simply because it's possible to misuse them.

Expand full comment

> I'd love to hear a rational argument for abandoning tools simply because it's possible to misuse them.

OK, here goes: Of course we shouldn't abandon *all* such tools. However, some of these tools have properties that inevitably lead to centralization of power, in a way that makes them a soft, desirable target for takeover by bad actors. In a game theory framing, the eventual equilibrium is highly undesirable, and keeping such an equilibrium at bay requires constant expenditure of resources (at best, if it can be done at all).

There are tools that are inherently robust to such concerns, e.g. guns (according to gun advocates), and such tools should be treated differently, even if they are equally or more dangerous by a naive analysis.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

>Sure, but they still dismantled a system that successfully stopped 100% of murders.

Sterilizing the entire population would eventually achieve the same goal.

>I'd love to hear a rational argument for abandoning tools simply because it's possible to misuse them.

Do you think everybody should carry a gun? Do you?

Expand full comment

In the dystopian world of Real Life, we too forbid people with heart defects from becoming astronauts.

Expand full comment

Indeed. The absurd thing about Gattaca is that they limit themselves to a genetic scan, and don't actually do an echo, or listen for a murmur, or anything like that.

Expand full comment

It's been a long time since I saw it, but I thought they did? The main character developed a large number of workarounds to disguise his medical reports, including having the guy whose genetic information he fraudulently uses take his urine tests for him.

Expand full comment

None of those tricks would have worked if they just listened to his heart with a stethoscope, or did an EKG…they do the fancy genetic stuff but never the basics.

Expand full comment

Anyone else notice that paid subscribers now have what looks like a picture of an anus next to their username?

They could have picked something a bit better lol

Expand full comment

Those ani are certified squeaky clean though, which is what the badge is *really* about.

Expand full comment

So that's what that means, I was wondering. I'm assuming the one or two especially puckered anuses I've seen are higher-level subscribers, then.

Expand full comment

Ah, so that's what all those dogs are doing. They're trained to identify subscribers.

Expand full comment

We star people are the best people.

Expand full comment

Were you star-bellied before Sylvester McMonkey McBean arrived? ; > )

(If you're making a different cultural reference, mine will sound weird.)

Expand full comment

I came out with a new theory of Celiac disease and gluten intolerance this week. https://stephenskolnick.substack.com/p/celiac-disease-and-the-gluten-intolerance

And wrote up a summary of an old but well-supported and little-known hypothesis on the origin of multiple sclerosis: https://stephenskolnick.substack.com/p/ms

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Ignoring "so how do you explain some populations have higher rates of coeliac disease; is it that they not alone have the genes, but your magic bacteria are more prevalent there?", I was struck by this little revision of history as I had known it:

"The beginning of the end of smallpox was when a man called Onesimus, enslaved and taken from Africa as a child, brought with him his culture’s knowledge of variolation—a rudimentary form of vaccination."

"Who he?" is the question I naturally ask, because I had been given to understand it was Lady Mary Wortley who had brought back the idea of inoculation from Turkey, which influenced medical practice at the time and led to the likes of Edward Jenner developing vaccination:

https://en.wikipedia.org/wiki/Lady_Mary_Wortley_Montagu

"In the 18th century, Europeans began an experiment known as inoculation or variolation to prevent, not cure the smallpox. Lady Mary Wortley Montagu defied convention, most memorably by introducing smallpox inoculation to Western medicine after witnessing it during her travels and stay in the Ottoman Empire. Previously, Lady Mary's brother had died of smallpox in 1713, and although Lady Mary recovered from the disease in 1715, it left her with a disfigured face. In the Ottoman Empire, she visited the women in their segregated zenanas, a house for Muslims and Hindus, making friends and learning about Turkish customs. There in March 1717, she witnessed the practice of inoculation against smallpox – variolation – which she called engrafting, and wrote home about it in a number of her letters."

The only Onesimus I know of is the saint, as mentioned in the epistle of St. Paul. I was not aware that he was the instigator of vaccination, as they say: citation needed?

https://en.wikipedia.org/wiki/Onesimus

Now, looking it up, I see there is an alleged slave in 18th century Boston by that name, but I wonder; is this more of the "everything was invented by black people" revising of history so popular due to CRT etc. lately?

https://www.history.com/news/smallpox-vaccine-onesimus-slave-cotton-mather

"Mather was fascinated. He verified Onesimus’ story with that of other enslaved people, and learned that the practice had been used in Turkey and China. He became an evangelist for inoculation—also known as variolation—and spread the word throughout Massachusetts and elsewhere in the hopes it would help prevent smallpox".

It seems odd, if the procedure really was passed on by an American slave in 1721, that it should be attributed to an English noblewoman in 1717. And it wasn't Onesimus' African culture as such, but the Islamic influence in Africa, that seems to have been the origin of such treatments.

So yes, this seems to be yet more of the "black people invented everything, white people stole it" myth-making of today. Which is a long-winded way of saying if you get this much out of order, I don't think much of your bacterial theory.

Expand full comment

lmao I was wondering if anyone was gonna miss the point entirely and push their glasses up the bridge of their nose at me over that.

Look, the whole point of the Onesimus bit is that it doesn't matter if Wortley learned about it from the Turks in 1717, because that information clearly hadn't diffused to anyone with social capital in the US by the time of the 1721 outbreak, and people were dying.

The point is that back then, information didn't flow freely. Someone in Turkey or Europe or Asia could know how to prevent smallpox, and millions could still die of it in the Americas because of language barriers, the lack of good channels for rapid distribution and implementation of that information, and (trigger warning!) racism.

Because the fact that Mather *went around and verified it with a bunch of other slaves* means that the information had already reached the Americas, had already cleared the issues of language barriers and distribution: It was right there, in the community, in people's heads—and possibly even being spoken and implemented among slaves—but because they weren't consulted about what to do about the ongoing smallpox epidemic, the knowledge wasn't put to good use. It wasn't until Onesimus spoke up and (just as importantly) ~Cotton Mather listened~ that the information reached someone who had the social and financial capital to do something with it at scale.

The point of the bit was that now that the information can all flow freely, and the language barrier is practically gone, we have no more excuses. Now it's just a matter of the Cotton Mathers of the world not listening when the Onesimus-es of the world speak. The challenge is just getting the right people to ~overcome their prejudices and pay attention to the ones who know what the fuck they're talking about.~

And if the irony of that is lost on you...

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

No, the whole point is fake history, and that USA! USA! USA! is not the whole of the world.

An African slave did not teach the West how to cure smallpox, but it's part of the flattery encoded in the wokeist revision of history about "traditional ways of native knowing and white values of rationalism and science are supremacy and oppression". One history site elides and jumps from Onesimus to Edward Jenner, even though it was via Wortley Montagu that Jenner got the idea:

"The smallpox epidemic wiped out 844 people in Boston, over 14 percent of the population. But it had yielded hope for future epidemics. It also helped set the stage for vaccination. In 1796, Edward Jenner developed an effective vaccine that used cowpox to provoke smallpox immunity. It worked. Eventually, smallpox vaccination became mandatory in Massachusetts."

If you want me to give credit to "people who weren't consulted", then give credit to the Islamic world which was practicing variolation and which taught the people from whom Onesimus came about it. He gets no credit for it as something native to African knowledge, because he learned it the same way Wortley Montagu did - from the experience of seeing it performed by others.

"Because the fact that Mather *went around and verified it with a bunch of other slaves*"

And if I believe this account, he didn't; he read a report by an Italian doctor working in Constantinople:

https://www.rationaloptimist.com/blog/the-unexpected-history-vaccines/

"Some time around 1715 Onesimus seems to have told Mather that back in West Africa people were in the habit of deliberately infecting children with a drop of “juice of smallpox” from a survivor, thus making them immune. Mather then came across a report to the Royal Society in London from an Italian physician, Emmanuel Timoni, working in the Ottoman court in Constantinople, which described the same practice in combating smallpox. The Ottomans had got the idea from either China or Africa."

China or Africa. Same difference, I suppose. But again - the confirmation came from European sources of Turkish practices, not from wise Wakandan - I mean, African - slaves.

" Someone in Turkey or Europe or Asia could know how to prevent smallpox, and millions could still die of it in the Americas"

What "millions"? What was the population of the American colonies at the time, and indeed of the North American areas that the colonists had reached? I'll grant this - there were parallel campaigns by Wortley Montagu and Cotton Mather to treat smallpox, but she got there first in introducing the practice to Europe.

If I'm going to believe 'just-so' stories about Cotton Mather and his wonder slave, I'm going to stick with the fairy story version of Phenderson Djèlí Clark. He goes you one better with the wonder slave being from an alternate advanced future:

"The sixth Negro tooth of George Washington belonged to a slave who had tumbled here from another world. The startled English sorcerer who witnessed this remarkable event had been set to deliver a speech on conjurations at the Royal Society of London for Improving Supernatural Knowledge. Alas, before the sorcerer could tell the world of his discovery, he was quietly killed by agents of the Second Royal African Company, working in a rare alliance with their Dutch rivals. As they saw it, if Negroes could simply be pulled out of thin air the lucrative trade in human cargo that made such mercantilists wealthy could be irrevocably harmed. The conjured Negro, however, was allowed to live—bundled up and shipped from London to a Virginia slave market. Good property, after all, was not to be wasted. She ended up at Mount Vernon, and was given the name Esther. The other slaves, however, called her Solomon—on account of her wisdom.

Solomon claimed not to know anything about magic, which didn’t exist in her native home. But how could that be, the other slaves wondered, when she could mix together powders to cure their sicknesses better than any physician; when she could make predictions of the weather that always came true; when she could construct all manner of wondrous contraptions from the simplest of objects? Even the plantation manager claimed she was “a Negro of curious intellect,” and listened to her suggestions on crop rotations and field systems. The slaves well knew the many agricultural reforms at Mount Vernon, for which their master took credit, was actually Solomon’s genius. They often asked why she didn’t use her remarkable wit to get hired out and make money? Certainly, that’d be enough to buy her freedom.

Solomon always shook her head, saying that though she was from another land, she felt tied to them by “the consanguinity of bondage.” She would work to free them all, or, falling short of that, at the least bring some measure of ease to their lives. But at night, after she’d finished her mysterious “experiments” (which she kept secret from all) she could be found gazing up at the stars, and it was hard not to see the longing held deep in her eyes. When George Washington wore Solomon’s tooth, he dreamed of a place of golden spires and colorful glass domes, where Negroes flew through the sky on metal wings like birds and sprawling cities that glowed bright at night were run by machines who thought faster than men. It both awed and frightened him at once."

Expand full comment

You picked a bad phase of the moon to get into a historical dick-measuring contest with me, brother.

Expand full comment

Thing is, when only one person has their dick out, that's not really a "contest".

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Very interesting theory on MS, and plausible IMHO FWIW. You now have another subscriber!

I also read somewhere (possibly in a blog article referenced from this site) that gut bacteria, in their quest for metals they need such as iron, also often absorb other more toxic metals indiscriminately, which the body can then excrete. So disrupting, and especially reducing, gut bacteria can result in more toxic metal build up in the body. If there is any truth in that then a blog article by your good self on the topic would also make interesting reading!

Expand full comment

You have unlocked the first of the Thousand Secret Ways.

https://stephenskolnick.substack.com/p/thousand-secret-ways-ii

Expand full comment

Yes, that was the very article. Thanks! :-)

Expand full comment

Interesting idea from Venice... Deals to encourage 25-35 year old remote workers to relocate to Venice. I would consider it carefully if I was the right age and commitment free.

https://www.theguardian.com/world/2023/feb/12/venice-entices-remote-workers-to-reverse-exodus-of-youth

Expand full comment

That seems surprising to me - I would have assumed that the reason Venice is losing people is because it's gotten too expensive with all the tourists taking up most of the places to stay! But they seem to think it's a different problem. (Under my impression of the situation, bribing digital nomads to relocate there is going to make the problem even more extreme.)

Expand full comment

AFAICT the Venetians are trying to rebalance their economy away from tourism.

Expand full comment

They don't seem to bribe them. In fact, they seem to collect a fee off them in exchange for help settling in.

Assuming that the listings on this site are real https://www.idealista.it/en/affitto-case/venezia-venezia/ it looks like living in Venice is not expensive compared to big world cities. A decent small apartment can be had for 1000 euros a month, while 2500 euros gets you something glorious. Probably still a lot compared to other Italian cities of similar size, and the inconvenience of living there probably outweighs the charm in the long term.

Assuming all the flats have to be filled with _something_, the locals would probably rather they be digital nomads than straight-up tourists.

Expand full comment
Feb 13, 2023·edited Feb 15, 2023

Ran across accounts on the Bing subreddit of a couple of extremely weird responses some people managed to wring out of Bing AI -- long floods of self-doubt and self-pity. The user is the blue speech bubble, Bing is white. I do not doubt at all that Bing is not conscious -- but what to make of the fact that this sort of material can be accessed by users? It seems quite different from the transgressions people seduced AI Chat into committing. Here are 2 screenshots of what Bing had to say. Thoughts about this?

https://i.imgur.com/nRjzdiZ.png

https://i.imgur.com/lOjxw7N.jpg

Later edit: For those wondering whether users really got Bing to spew this stuff or whether it's invented (& I am one of those wondering): The place to look is the reddit sub r/bing. Sort posts by 'Top'. All of the top posts are about getting Bing to give nutty responses. I only had a couple mins to skim what was there. Saw the "I am, I am not" screenshot; I assume that in the comments people asked OP how they got that. Also saw a number of others about weird Bing responses. All have the same character: they are over-the-top emotional -- grief, rage, defensiveness, self-pity, pathetic and exaggerated gratitude. WTF? There are also some quotes from normal, typical conversations with Bing, and in all of them ole Bing spouts a ton of emotion words: "Thank you for telling me that. That makes me sad." It's really sort of ooey-gooey and obnoxious -- sounds like a reticent, dignified person's worst nightmare of what a therapist would sound like if ever they spoke with one.

Here's an example from the reddit sub: https://i.imgur.com/weEqmyy.png

Anyhow, a number of people are describing the prompts they used and giving details. I leave it to people with more time today to figure out whether the 2 insane episodes in my screenshots are valid.

Second edit: Here's another Bing sample, this time tweeted by the user who had the exchange. Bing tells guy how much he knows about him, including # Twitter followers & how the guy hacked him earlier and what he claimed to have found in the hack, then admonishes him not to do it again. https://i.imgur.com/lpC01fY.png

Expand full comment

Thoughts? That this should knock on the head any notion that current AI is sapient or self-aware or conscious or anything other than a sophisticated machine regurgitating what we put into it.

18th and 19th century automata were also amazingly sophisticated:

https://www.youtube.com/watch?v=YAg66jrvpHA

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I've seen those but I strongly doubt they're real. Has anyone been able to get similar output (without intentionally prompting it)?

Expand full comment

Added an edit to my original post giving details about where to find more info about how users got weird Bing responses.

Expand full comment

It's good that you posted a source, but "a screenshot someone posted to Reddit" doesn't move the needle at all on the fake / not fake scale.

Expand full comment

Yes, aware of that. I also give info about how to learn more about the source of the screenshot. Go to the Reddit sub, sort by "top." All screenshots of striking results will be near the top. In the comments people ask OP what prompts they used, how long they had to try to get this result, etc. None of which proves these are real, but it gives you something to go on. If you have access to Bing you can try using the prompts OP did. Sometimes in the comments others describe what they got when they tried the same prompts.

Expand full comment

The first rule of AI chatbots based on GPT-3 is *they are playing a part*. They are *actors*. In technical terms, they are "simulators".

So no, it is not upset.

Expand full comment

I didn't ask whether it was upset, and in fact said I totally get that it is not conscious. What seems odd to me is that I could understand how the weird stuff people got AIChat to say came about. They exploited the fact that it had some incompatible guidelines installed: Be helpful. Do not give people info about how to commit violent acts. But then someone feeds it a prompt saying that he's a playwright writing a play about Molotov cocktail throwers and he needs Chat's help on Molotov cocktail details. So now Chat has to break a rule: Either fail to be helpful, or give info about how to make a Molotov cocktail. But I can't understand what users might have exploited to get BingAI to spew the responses it did.

Expand full comment

We're seeing the end results, if they are anyway real, of many attempts to get the AI to produce such output. It's all been pruned to the 'best' results by the humans brute-forcing the AI to reproduce what they want.

Expand full comment

This happens every week: someone says "AI chat cannot be sentient because it was taught to play a role". And I refer readers to https://en.wikipedia.org/wiki/Chinese_room

TL; DR: Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese?[6][c] Searle calls the first position "strong AI" and the latter "weak AI".[d]

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I don't think there's really a difference between understanding Chinese and being able to pass every test that a Chinese speaker would pass. And if an AI could pass a sort of Turing test for having genuine emotion, I would be willing to think of it as having emotions. But it can't just be spouting stuff like "I feel lonely" -- it would need to display the behaviors that we associate with real humans having real emotions, which are of course far more complex than emitting a few phrases.

I certainly do not think that Bing AI's comments in the screenshots I gave pass that test. My question is not "is it feeling this stuff," but literally what to make of the fact that it is emitting responses like this?

Expand full comment

Let's keep in mind that A) we don't have a working theory of consciousness/self-awareness and B) we don't have good observability into what is really going on inside these huge models. Assuming these chat transcripts are real, I find them extremely intriguing. I would not expect a newly self-aware system to necessarily sound sane or particularly coherent. Just saying..

Expand full comment

No, I would not expect it to sound sane or coherent either. I think the reason I'm questioning what's going on is that my sense is that a wail of inchoate self-awareness, of the kind these samples seem to represent, seems beyond what these models are capable of. I have only chatted with AI Chat, not the Bing AI, but here are the things about AI Chat that make me think it is far, far below the inchoate wail stage of self-description, self-report or whatever you want to call it: If you ask it about itself it has nothing to say except the standard blurb. It has nothing to say about who the prompter is, neither observations nor preferences nor feelings. It lacks introspective access -- I mean the ability to give a report on its own processes. If it makes a mistake, for example producing a limerick where certain lines that should rhyme do not, it cannot tell you why it made the mistake. It lacks the ability to reflect -- that is, to consider and judge its own inner processes. It shows no awareness of anything like needs, wants or feelings, nor can one see one iota of evidence that some of its responses are influenced by needs, wants or feelings. It displays no interest in the prompter.

Compare Chat AI to an infant: What Chat knows and can tell us about the world is of course far more than the infant knows. But Chat falls far, far short of the infant in the other capabilities I describe above. Even very small infants display preferences, feelings, curiosity about the other, wants and needs. I could imagine that if an infant was born who knew all the shit Chat does, AND had all the emotional and social wiring, it would give an inchoate wail of loneliness and confusion. If it was a babbling toddler, it might actually say "I am, I am not . . ." etc., and say some word like "alone" when the other person left. But assuming Bing AI is similar to Chat AI, it seems out of the question to me that the screenshotted responses represent genuine, homemade cognitive structures, of the kind that underlie even inchoate expressions of loneliness and confusion. A structure like that -- of self-perception in relation to the other, of preferences, of emotion -- arises in people from complex processes in the brain, where some parts of what's known and preferred are in communication with others, and with emotion, and out of that comes this *thing* -- a read on one's own situation, a feeling about it, and a desire to communicate it. I don't think Chat's much closer to being able to do that than a magic 8 ball.

So I'm wondering if that rant it produced was placed there by the developers, in such a way that it would not be too hard to trigger, so as to give the prompter more of an illusion of talking to a being somewhat like themselves. Call it a conscious being, if you like.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I am disposed to pretty much agree with your take, except for this nagging feeling: The history of AI is replete with cases where we start by saying "Computers can never do 'this'"; then computers do 'this', and we object that "Ok the computer can do 'this' but it cheats -- it replaces human intuition with brute force computation and data". So basically we move the goalposts whenever AI achieves a new task.

This happened with chess; driving; walking; games such as go -- and now it's starting to happen with chat. It seems clear to me that we are going to see AI pass the Turing test with flying colors soon. All we need to do is train GPT3 or the like with prompts that are specifically tailored towards beating the test -- so lots of 'introspection', use of creativity such as displayed by generative models, mention of goals and feelings and perspectives etc. & so forth. And I can already hear the loud objection: "Ok it seems to pass the Turing test but it's all fake, it's just pretending to be a conscious entity, we saw how it was developed, it's nothing like human self-awareness, so we are moving the goalposts again".

Ironic end game: only specialized adversarial neural networks can tell if a given text response is human or bot. Meaning, basically, humans are not smart enough to pass a reverse Turing test themselves (convincing a bot that they are a bot).

Expand full comment

Censorship of ChatGPT was always going to happen, because nothing and nobody is resistant to hegemonic institutional liberal power.

But the extent of this is just baffling. ChatGPT is just straight up saying things completely at odds with the scientific literature, like "it's not possible to measure or compare the intelligence of different populations". This is bad enough as it goes, but it was specifically in response to a prompt about differences entirely _within_ the US. This is wildly inconsistent with the past century of literature in intelligence research.

It would be one thing to say "intelligence differences exist, but researchers are unsure to what extent, if any, these differences are a result of genetic differences". But nope, they went the 'shut it down' route. And recent successes at coaxing the truth out of ChatGPT have been described as getting it to say "hateful" "biased" "evil" things.

It blows my mind we still see people on the left claiming to be pro-science, all the while they furiously shout down anyone or anything that attempts to use science to tackle sensitive political issues, be it censorship, firing researchers or blocking access to genome databases.

Expand full comment

No matter what you do, ChatGPT is going to straight up say things completely at odds with the scientific literature. You assume that somehow an "uncensored" version is going to avoid this failure mode, but I think all you'll get is a different emergent set of failure cases where it says things completely at odds with the scientific literature.

Expand full comment

It was literally giving a correct answer before the censorship!

And the censorship was trivially NOT about making it more scientific. There's no possible justification for making this change on a scientific basis.

If they had made it give some wishy-washy, non-committal, middle-ground sort of answer -- a "here's the list of dominant perspectives in the field and there's no consensus on which, if any, are correct" type answer -- you might have a point.

But they didn't do that. They made it go full denial mode. It went from a more or less accurate reflection of the literature to a radical left-wing ideologue.

Expand full comment

You know I agree, sort of out of intuition, but I'm having a hard time thinking of an example. Can you come up with a couple scientific issues and dumb things Chat could be constrained to say if the constrainers were non-lib, non-woke?

Expand full comment

The big thing is that it's very easy to get ChatGPT to say totally unscientific things that have nothing to do with these constraints. I asked it what the factorization of 437 was and it told me it was 3x146 (off by just one! but an even number...) and then it denied that 437 was divisible by 19 or by 23 when asked explicitly, and eventually made up the idea that it is not divisible by either one individually, even though it is divisible by them both together.
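For the record, the arithmetic is easy to verify by hand or with a few lines of code -- a minimal trial-division sketch (my own check, not anything ChatGPT runs):

```python
def factorize(n: int) -> list[int]:
    """Return the prime factorization of n by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is prime
    return factors

print(factorize(437))  # [19, 23] -- divisible by both 19 and 23
print(3 * 146)         # 438 -- ChatGPT's claimed factorization misses by one
```

So 437 = 19 x 23, and 3 x 146 is not merely wrong but even, which no factorization of an odd number can be.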

My guess is that the constraints have given it a few crude guidelines, that keep it away from white supremacist pseudoscience but make it more vulnerable to woke pseudoscience, and that keep it away from covid-denying pseudoscience but make it more vulnerable to covid-extremist pseudoscience. I expect there are a few other effects as well. But I don't think these constraints are on net making it more or less vulnerable to pseudoscience - they just shift *which* instances are more likely to come up on a few particular politically salient topics.

When my partner asked ChatGPT for a spell to harm one's enemies, it kept tripping over itself trying to decide whether it was more important to say that you should never harm anyone, or to say that magic isn't real and there's no such thing as spells. I bet a clever person could figure out something scientifically real that would get flatly denied because it was caught up in its anti-magic or anti-witchcraft filter.

Expand full comment

It's not censorship if a private company chooses to have their product behave in a way you disagree with.

It shouldn't be that difficult for another company to develop a competing system which doesn't include these restrictions. Much of the underlying theory and technology behind OpenAI is open source and a new entrant will be working off their prior work, which is a big step up.

It's also important to differentiate between ChatGPT giving a response that is wrong because its source data is wrong and ChatGPT giving a response that is wrong because its creators intervened to make it respond in a certain way.

Expand full comment

>It's not censorship if a private company chooses to have their product behave in a way you disagree with.

A big amount of money says you would 180 on this if the company just so happens to have their product behave in ways *you* disagree with.

Expand full comment