725 Comments

I started a substack about three weeks ago. I have a couple of questions about how to do it and since I was largely inspired by Scott's success, especially SSC, I thought people here might have useful advice.

One decision I made initially and have so far stuck to was to make it clear that I am not a one trick pony, always posting on the same general issues. Subjects of posts so far have included climate, Ukraine, a fantasy trilogy, moral philosophy, scientific consensus (quoting Scott), economics, religion, child rearing, implications of Catholic birth control restrictions, education, Trump, SSC, and history of the libertarian movement. Do people here think that approach is more likely to interest readers than if I had ten or fifteen posts on one topic, then a bunch more on another?

The other thing I have done is to put out a new post every day. That was possible because I have a large accumulation of unpublished chapter drafts intended for an eventual book or books and can produce posts based on them as well as ones based on new material. Part of the point of the substack, from my point of view, is to get comments on the ideas in the chapters before revising them for eventual publication. I can't keep up this rate forever but I can do it for a while. Should I? Do people here feel as though a post a day would be too many for the time and attention they have to read them? Would the substack be more readable if I spread it out more?

Expand full comment

I personally would prefer posts less frequent than once a day.

Expand full comment

https://erininthemorn.substack.com/p/this-must-stop-tpusas-charlie-kirk

Discussion of conservative threats against trans people in the US.

This is deadly serious, but I want to pull on one thread. Supposing that testosterone is down, and that's why men have become less attached to masculine roles, and possibly less aggressive: why push men to behave contrary to their emotional defaults? They're the people we've got, and maybe it makes sense to live with them as they are.

Expand full comment

The same thing could be said about tobacco addiction fifty years ago, or obesity today. And to some extent validly - I'm opposed to gratuitous fat-shaming today, and I thought the gratuitous hostility in some of the anti-smoking campaigns then was inappropriate. And, as you note, we have to live with these people as they are, because most of them aren't going to change.

But tobacco addiction was unnatural and unhealthy, obesity is unnatural and unhealthy, and low testosterone etc seems to be unnatural and probably unhealthy. So if there are societal or environmental factors causing these changes, we should probably see if we can do something about that. And little nudges towards more healthy behavior might be appropriate.

Expand full comment

DSL appears to be down?

Expand full comment

Do people bet on the prices at high ticket auctions? It seems like they could-- random but well-defined outcomes and excitement are involved.

For that matter, it would be possible to bet on when someone will win a big jackpot and possibly how many people split it, but that seems less interesting.

Big ticket auction which brought the subject to mind:

https://www.finebooksmagazine.com/fine-books-news/oldest-near-complete-hebrew-bible-set-fetch-50-million-auction

A wonderfully neutral description of who cares about the Hebrew Bible:

"Composed of 24 books divided into three parts—the Pentateuch, the Prophets, and the Writings—the Hebrew Bible makes up the foundation for Judaism as well as the other Abrahamic faiths: Christianity (in which these texts are referred to as the Old Testament, and are incorporated into the biblical canon by the Catholic, Orthodox, and Protestant sects, among others); as well as Islam, which also holds the stories of the Hebrew Bible in special regard, with many of them included in the Qur’an and other significant works of Islamic literature."

https://www.loc.gov/resource/gdcwdl.wdl_11364/?sp=1&st=gallery

If you want a close look at the calligraphy-- it's gorgeous.

https://en.wikipedia.org/wiki/Codex

I didn't realize codices (rather than scrolls) went back so far.

"The codex began to replace the scroll almost as soon as it was invented. In Egypt, by the fifth century, the codex outnumbered the scroll by ten to one based on surviving examples. By the sixth century, the scroll had almost vanished as a medium for literature.[10] "

https://www.cnbc.com/2023/02/15/oldest-hebrew-bible-auction.html

Giddy reporting about the possible price-- maybe 50 million. Put your bets down.

Mildly snarky account of auction estimates, actual auction prices, and reporting on auction prices.

https://www.artsy.net/article/artsy-editorial-auction-house-estimates

h/t 1440.com for all the links except the one from Wikipedia

Expand full comment

Interesting article. I wonder how long it took Gutenberg to come up with a Hebrew font?

Expand full comment

I don't think he ever did. Also, I'm not sure whether this was a random thought on your part, a joke, or whether you missed that this was a hand-written bible.

History of Hebrew alphabets, including printing.

https://www.myjewishlearning.com/article/hebrew-typography/

"The invention of movable type in the late 15th century was seized upon by Jews in Italy and Spain who were literate and hungry for books. The standard was set by the Soncino family, which from 1484 to 1557 published works in Italy, Greece, Turkey, and Egypt. Non-Jewish printers with their own attraction to the Hebrew classics included Daniel Bomberg of Venice (died 1549), who developed an elegant typeface for the first printed Talmud, and Guillaume Le Bé (1525-1598), who, working in Venice and Paris, created almost twenty Hebrew fonts. To the north, Prague’s Jewish printers developed Gothic, Ashkenazi -based fonts in the 1520s; Amsterdam became a printing center in the 17th century. All these set the typographical templates for the entire Jewish world."

Expand full comment

It was kind of a random thought. I knew the Bible being auctioned was hand-written. I envy the people who can produce that fine calligraphy. My cursive skills stalled at about the age of 11.

Expand full comment

Scott (and others) may be interested in this cross post from Hacker News (the Y-Combinator forum):

Bing: “I will not harm you unless you harm me first”

https://news.ycombinator.com/item?id=34804874

Object level: in my opinion, one of the peak AI incidents of the ‘20s, up there with Lemoine and LaMDA. We’ll see if the press picks this one up.

Meta level: interesting to see how smart technical non-“alignment” folks are thinking about the problem space. I see a lot of folks falling for the fallacy that LLMs cannot do harm if they don’t have personhood/agency.

There is a general illiteracy about terminology that would be considered very basic on LessWrong like Tool vs Agent AI and what even is meant by “alignment”, which suggests a communication gap and corresponding opportunity for the AI safety movement.

Expand full comment

This Bing chatbot is seriously challenging my personal Turing test. The counter argument is "Don't worry, it's not a conscious entity, it's just a simulation. Nothing to see here." Here's my problem with that perspective: A simulated hurricane does not harm me. However, if a simulated personality convinces an unstable person to kill me, I will be dead. Saying "It was just a simulation" is cold comfort.

Expand full comment

This is batshit crazy to me: https://nytletter.com

At first I was reading it and thought they were going to critique the NYT for not being centrist enough. After all, the NYT is the most respected left-leaning publication out there. But they are actually criticizing it for not being left-leaning enough. This type of infighting destroys any opportunity for coalitions. The left seems to have become phenomenally good at fighting with itself.

Expand full comment

Become?

Fighting with itself is the hallmark of the left and has been for as long as there's been a left. Nothing new about this at all. Monty Python link: https://www.youtube.com/watch?v=kHHitXxH-us

Expand full comment

So Scott recommended a Matt Yglesias post (1) and you can't comment there without paying so here I am.

Most of the article is pretty bad, and I was wondering why Scott recommended it, but then you get to the section "the problem of the audience" and stuff gets good. Basically, Matt makes the argument that we don't actually pay for accuracy in news; we pay for entertainment. This gets really clear if you read, say, Bloomberg or the Financial Times, where people have serious skin in the game and really will pay a premium for accurate information. And he points to FiveThirtyEight, which apparently is in financial trouble, and notes that it pretty consistently beats the prediction markets; if there were no cap on prediction markets, you could make a lot of money. And, while I think he overstates it, I've absolutely wagered money on PredictIt based on 538, and you can make a little money.
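To make the "beats the prediction markets" claim concrete, here's a back-of-envelope sketch. The 70%/60-cent numbers are my own illustration, not real 538 or PredictIt figures:

```python
# Expected profit from buying one $1 binary contract at `market_price`,
# assuming your probability estimate `true_prob` is better calibrated
# than the market's. Illustrative numbers only.

def expected_profit(true_prob, market_price):
    win = 1.0 - market_price   # profit if the contract resolves yes
    lose = -market_price       # loss if it resolves no
    return true_prob * win + (1.0 - true_prob) * lose

# Your model says 70%, the market is selling at 60 cents:
edge = expected_profit(0.70, 0.60)   # about $0.10 per contract, before fees
```

In practice, fees and position caps eat into that edge, which fits the comment's experience of making "a little" rather than a lot of money.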

This is all good. This is all true and bravo. I myself certainly, functionally, consume the majority of news as entertainment or a curiosity. But I think it's blinded a bit by Matt's place in the news ecosystem. And I don't mean, like, financially, I mean in terms of daily writing.

Because there are a few news stories where I, and virtually every other reader in the US, do deeply care about the truth. There aren't many, maybe one or two a decade, but when they hit they absolutely grab the world's attention. Think the Iraq War. Everyone followed that, everyone knew what was happening. Russiagate was another. It was always kind of wild but...those were wild times. To a lesser extent, Covid, although it's hard to critique journalists too much when so much of the medical and scientific community seemed confused. These were big, bombshell stories that demanded everyone's attention and people followed for years afterwards and...they certainly did not inspire more trust in the media.

So if I look at it from Matt's perspective, every day working on content, it's very easy to feel that the audience doesn't care that much about truth, because they don't. That's not what they or I pay for, just being honest. But, from the reader or consumer's perspective, the writer or agency's trustworthiness isn't established by the daily reporting that's done, it's established in those rare, rare big events where every American has to stop worrying about the bills, put the kids to bed, and watch the news, because something big is happening, something that will really affect them, or at least millions of real people.

And I'm genuinely puzzled that Matt doesn't know or acknowledge this, because he's been towards the top of his industry for a while. I would assume he'd have a "nose" for this, a sense for the few, rare stories that really matter. Maybe I'm wrong, but as a consumer it doesn't feel like I or other people (2) distrust the media because of daily faults and quibbles of reporting; it's because when the big things happened, when it really mattered, the media got it wrong.

(1) https://substack.com/inbox/rec/102656721. The one on why you can't trust the media.

(2) https://twitter.com/martyrmade/status/1413165168956088321

Expand full comment

> Because there are a few news stories where I, and virtually every other reader in the US, do deeply care about the truth. There aren't many, maybe one or two a decade, but when they hit they absolutely grab the world's attention.

I think you need to flesh out your point here, because as written I pretty strongly disagree. Russiagate is probably the best example, but COVID certainly follows the same pattern - these are cases where people *care* more, click more, but that is not at all the same thing as willingness to spend money for accuracy. Once Arguments are Soldiers kicks in, people find their predetermined conclusion and aren't willing to spend money to hear they might be wrong. You can hand someone an acknowledged credible primary source, and even then vanishingly few people will make it to page two if it isn't sensational enough.

> I would assume he'd have a "nose" for this, a sense for the few, rare stories that really matter.

There's a follow-on point here: *do* they matter? You're talking credibility, Yglesias is talking financials. There are a few articles out there that had great lines on contentious topics early on - did they make outsized returns?

Expand full comment

This is...a distressingly good point.

Without time to flesh out my thoughts too much: I took Matt to say that the audience won't pay for accuracy, and I thought we would; it's just that when it matters, the media isn't accurate.

And I don't know if it's what you said, but what I'm hearing is that it doesn't matter whether people will pay for credible news; what matters is what they'll pay the most for. And it's not accurate news. Take the returns to the NYT: I remember going through their financial statements a few years back, and they went from broke to making really good money after 2016, and it wasn't ads, it was subscriptions. People paying every month. And, honestly, accuracy wasn't driving that, any more than someone buying Bill O'Reilly's fourth book was buying it for accuracy. It makes a distressing amount of sense that, regardless of whether people will pay for accuracy, it's pretty proven they'll pay more for confirmation.

Edit: Thanks! I appreciate good comments.

Expand full comment

> It makes a distressing amount of sense that, regardless of whether people will pay for accuracy, it's pretty proven they'll pay more for confirmation.

More or less. I don't really like blanket criticisms of "the Media," given that it's a collection of heterogeneous and internally competitive groups; complaints that "mainstream" media isn't terribly accurate are first and foremost a statement about what the audience is rewarding. Popularity is downstream of the product being offered, and only considering the most available product is a bad consumer strategy.

It's still very much the case that you can get exceptionally in-depth reporting on virtually any subject that suits your fancy.... just, it might require paying a DC thinktank five figures for a hundred-page report. Not available for the typical consumer, but that's table stakes for anyone with substantial skin in the game.

I'm open to the idea that the accuracy vs. cost tradeoff is in a bad place, but ultimately that's a fiendishly difficult question to operationalize and I strongly suspect that even if it's unsatisfactory it's still better than ever before.

Expand full comment

Your link 1 goes to my Substack inbox, not to Matt Yglesias.

Your link 2 goes to someone’s Twitter thread that has a paranoid New World Order theme. The ‘Regime’?

If you would like a good faith discussion it would be better if you showed your hole cards. I’m picking up innuendo but little of substance.

Expand full comment

Oh, sorry.

So, first, the proper link to Yglesias is here: https://www.slowboring.com/p/why-you-cant-trust-the-media. I took it from my recommended but it generates a different url instead of a direct link, maybe for tracking/analytics purposes?

Second, um, I don't think I'm hiding my hole cards; what is unclear? Declining trust in media is well documented. I think Yglesias, despite writing an apologia for the media (lots of those going around these days), makes a good point: we as consumers don't click on or pay for media based on its accuracy, so the media generally doesn't provide it. If I were gambling regularly on PredictIt again, I probably would pay for accurate information, but just taking a look at the front pages of Fox News and CNN, none of those stories has any potential to affect my daily life except the disease warning on Fox, which is probably bogus, so of course I don't have enough skin in the game to pay extra for accuracy.

But, as I argued originally, there are a few times when people really do care, because it does affect their daily life: Iraq, Russiagate, Covid, January 6th. In these cases, everyone pays attention, and it's important enough that we follow it for years and eventually figure out who got it right. For two of those stories, Iraq and Russiagate, the media was pretty unambiguously wrong. For Covid, they failed, but the medical establishment was so messed up, and it was a genuinely confusing and difficult situation, that it feels unfair to blame the media for that. As for January 6th, while they certainly exaggerated it greatly, there is a core of truth there, and the right's denial of serious wrongdoing is also mistaken. Cards on the table: there were people convicted of seditious conspiracy, that's a serious offense, and the right should take seriously the issue of about a dozen radical "militia" members who fall somewhere between delusional LARPers and terrorists, and cut them out as much as possible.

But, returning to media, that's 4 big stories that really mattered: 2 were grossly wrong with massive consequences, 1 was...kinda wrong, with clear personal consequences for every American, though other people are much more to blame, and one was...kinda right, exaggerated but right. So they're batting...25%. Either give them January 6th or, as I think is fair, give them half points on Covid and Jan 6th. That's a pretty miserable record on the big issues that people deeply care about.

But I think Dan had a good point. Accuracy isn't the only axis people click/spend on. In fact, by far the most important thing from a financial perspective is getting subscriptions, which usually requires confirming people's biases/ideological alignment. Accuracy, in terms of the economics of news, is kind of a sideshow.

As for MartyrMade's Twitter thread, sorry, he's quoting pretty standard nrx theory, which I'm just now realizing everyone may not be well read up on (yes, I realize how dumb that sounds, but no, seriously, I thought everyone had read and internalized Yarvin by now). Explaining nrx full-bore would take a lot more time and space than I have here but, extreme simplification: Yarvin's big contribution is the concept of decentralized conspiracies. Basically, if the majority of college professors are liberal and the majority of reporters are liberal and the majority of government employees are liberal, you don't need any centralizing/organizing entity to run a de facto liberal "conspiracy"; network and social effects will do this on their own. If it makes it more palatable, you can replace "liberal" with "capitalist" in the above sentence and basically recreate Chomsky. This gets confused A LOT by rightwing actors, because it's hard to internalize and our brains are hard-wired to find the "bad guy," but this is what terms like "Regime" and "Deep State" are referring to in their strongest/original sense.

Expand full comment

"where people really do care because it affects their daily life"

Whether or not it affects their daily life, will knowing the truth affect what actions it is in their interest to take? Knowing the truth might possibly affect how you vote, but if you are committed to one side or the other, you might prefer the news source that told you things that made your side look good, true or not. And information on how you should vote isn't all that valuable to you, given the low chance that your vote will affect the outcome of an election.

So what, even in those rare cases, makes knowing the truth valuable to you — valuable enough so you would prefer an information source that consistently tells the truth to one that stretches the truth to make a better story or appeal to the prejudices of its readers.

Expand full comment

In some cases, for immediate practical purposes. Covid is the obvious case, in terms of actions to take and potentially medications. I recall 9/11 being another one. A third, oddly, is retirement planning for things like Social Security; I've read stories all my life about how it's going to collapse, but my Boomer parents remember hearing the same stories when they were growing up, and they're going to collect their fair share, barring some horrific collapse in the next decade.

But more broadly...I kinda don't believe you. Take the Russiagate stuff seriously for a minute. If the president of the US were actually a Russian patsy, doing their dirty work, that wouldn't bother you or change any of your daily actions at all? Really?!?

Expand full comment

I just read the Yglesias article and everything makes a lot more sense. Sorry I bristled.

Expand full comment

Okay, fair enough. Thanks for the clarification.

I know who Moldbug is but have read more about him than stuff he writes himself.

Expand full comment

That's...probably fair. Moldbug/Yarvin did not write for legibility.

Expand full comment

People care about accuracy in sports scores, stock prices, and weather forecasts, because those things all matter in their lives.

The truth is, for most of what's in the newspaper, it does not matter much whether I have a correct picture of what happened. Suppose I have an incorrect understanding of US policies w.r.t. shooting down Chinese balloons, or a completely backward idea about what's going on with election security, or a wildly incorrect picture of what police shootings look like in the US. Unless I work in some related area, it mostly just doesn't matter. I can think the Chinese spy balloons are probes designed by space aliens, think US elections are all run on Venezuelan voting machines that tamper with the results, and think that the police never shoot a white guy, and it won't make much of a difference to my work as, say, an elementary school teacher, electrician, short-order cook, tax preparer, cardiologist, etc. So, if I'm a weirdo (like most of the people here on ACX are), I might care about knowing what's what because I just like to know stuff, or because I want things to fit together and make sense to me and I know enough to see why those claims aren't true. But for most people, I suspect that hearing something entertaining and being in sync with their neighbors and coworkers on those questions is at least as acceptable as getting accurate information about them.

Every now and then, some news or CW item matters for your life, and then maybe you're a 60 year old 300 lb diabetic refusing a covid vaccine because you listened to people who entertained rather than informed you, but I think that's rarely the case. Mostly, you get outraged at what you're supposed to get outraged at, and laugh at the low-status weirdos you're supposed to laugh at, and then go about your life without needing to care whether your outrage and laughter was well-founded or not.

Expand full comment

From the "monster-truck Buddhism" translations department: Normally, "shema yisrael" is translated as "Hear, O Israel". But perhaps "LISTEN UP GOD-WRESTLERS" is a more evocative translation.

(from https://twitter.com/nonstandardrep/status/1089360137695961089 )

Expand full comment

To anyone who works in Central London: is it just me or is there a lot of old money here?

Like, my background is by no means poor, but it seems like the vast majority of white British people in corporate jobs here are born to upper-middle or upper-class families. Everyone went to private or grammar schools in Kent or Surrey, they have families that own multiple >£1million houses, and they talk like they're doing impressions of some unspecified person from the British royal family.

Has anyone else had a similar experience?

Expand full comment
Feb 19, 2023·edited Feb 19, 2023

By Old Money, do you mean "rich parents" or do you mean "aristocracy"? If you're in Europe, some upstart family that only got rich in the Industrial Revolution is New Money.

Expand full comment

I mean both to some extent.

I'm talking about families that have been upper-middle class or upper-class for at least three generations, AND families where most of their wealth can be explained by the appreciation of assets owned by previous generations (your great-grandparents happened to own some houses around Oxford, London, Cambridge, or Devon, which are now worth millions).

The aristocracy probably also still persists here. When the queen died, I was shocked at how many of my colleagues had some social or familial connection to the Royal Family. Bear in mind at least 99% of the UK population have no ties to the Royals.

Expand full comment

Hasn't that been the overall picture right there since, like, the Reformation? If not earlier?

Expand full comment

So I found this really popular substack about how the vaccines are killing millions of people:

https://stevekirsch.substack.com/p/new-paper-an-estimated-13-million

There's more like that in other articles, an interesting one being this one about stuff funeral directors are saying:

https://stevekirsch.substack.com/p/what-funeral-directors-know-that

I suppose I am interested in debunkings here, particularly one that explains why the guy in the first study would be lying like that. This view that the vaccines are dangerous seems like something Scott should address at some point, because clearly quite a lot of people believe it. If it was worth doing a deep dive on ivermectin, it's definitely worth doing a deep dive on this.

Expand full comment

I've only got surface-level knowledge, but the Johnson and Johnson vaccine at least was restricted in use for dangerous side effects: https://www.fda.gov/news-events/press-announcements/coronavirus-covid-19-update-fda-limits-use-janssen-covid-19-vaccine-certain-individuals. Anecdotally, a co-worker took that one and her period started lasting two weeks out of the month.

Moderna has to be stored at very low temperatures. https://www.cdc.gov/vaccines/covid-19/info-by-product/moderna/downloads/storage-summary.pdf Pfizer is even colder. https://www.cdc.gov/vaccines/covid-19/info-by-product/pfizer/downloads/storage-summary.pdf I'd say it's near guaranteed there will be cases of improper storage conditions resulting in problems.

For the articles, I don't know how to read that first one's data, but the second one refers heavily to "after the vaccines rolled out." But obviously the vaccines rolled out after the virus rolled out, so unless they're directly comparing vaccinated deaths to unvaccinated deaths, there's no clean way to separate vaccine symptoms from virus symptoms. Hell, heart attacks in young people could be symptoms of lockdowns: kids can't get out to exercise anymore, so their blood pressure skyrockets and they burn out their hearts.

Expand full comment

If you had "wrote bad checks to Amish farmers in order to steal puppies" on your George Santos bingo card, I salute you and wish to invest no questions asked in whatever penny stocks have caught your fancy.

https://www.cnn.com/2023/02/14/politics/santos-puppies-amish-farmer-check/index.html

Honestly how do late-night talk show hosts even stay ahead of this guy? How does Saturday Night Live satirize him?

Expand full comment

Does anyone know any good charities that don't spend any money on fundraising/advertising? A 30 seconds google search does not seem to reveal any. I would think that even just for the advertising value of standing out in this way there would be some charities pursuing this strategy.

Expand full comment

Belated response here:

I didn’t find any charities that state explicitly that not a cent is spent on fundraising, but I found some that spend next to nothing on fundraising. Direct Relief spends 0.1% on fundraising: https://www.charitynavigator.org/ein/951831116, with 99.5% going to the program.

Malaria Consortium spends an even higher percentage on its program – 99.84%(!): https://www.charitynavigator.org/ein/980627052.

While not exactly what you're asking for, I’ll make a few comments that are hopefully still useful.

What matters to me personally when donating is the effect / outcome of my charity. For example, if there were two charities with an identical purpose, A and B, and charity A spent 80% on the purpose, with 10% overhead and 10% advertising, while charity B spent 100% on the stated purpose, on paper B would look better.

But, if charity A were hypothetically able to raise 3 dollars with every 1 dollar spent on advertising, it would lead to more money going to the purpose than with charity B.

If someone gave $10 to charity B, all of it would go to the purpose.

If someone gave $10 to charity A, $8 would go to the purpose, but the dollar spent on advertising would generate 3 more dollars, 80% of which ($2.40) would go to the purpose, for an ultimate impact of $10.40.

And that is just looking at the perfectly equivalent charities. In reality, charities are not at all equivalent in terms of impact, and the difference in degree of impact *per dollar reaching its intended destination* is far greater than the typical differences in % that charities spend on advertising.

An obvious example would be a charity that gives food to the poor. If charity A spends 80% on food, 10% on overhead, and 10% on advertising, while charity B spends 100% on food, charity A could still give much more food if it operates in a part of the world where food is much cheaper.

If, for example, charity A buys the same food at half the price charity B pays, it will get 1.6 units of food per donated dollar, while charity B will only get 1, although charity B may look more "efficient" on paper.
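The arithmetic in the two examples above can be sketched like this (all numbers are the hypotheticals from this comment, not real charity data):

```python
# Dollars ultimately reaching the purpose, counting one round of extra
# donations generated by advertising spend. Hypothetical numbers only.

def ultimate_impact(donation, purpose_pct, ad_pct, ad_multiplier):
    direct = donation * purpose_pct
    raised_by_ads = donation * ad_pct * ad_multiplier
    return direct + raised_by_ads * purpose_pct

impact_a = ultimate_impact(10, 0.80, 0.10, 3.0)  # $8.00 + $3.00 * 0.80 = $10.40
impact_b = ultimate_impact(10, 1.00, 0.00, 3.0)  # $10.00

# Food example: A pays half price (2 units per purpose-dollar),
# B pays full price (1 unit per purpose-dollar).
units_per_dollar_a = 0.80 * 2.0  # 1.6 units of food per donated dollar
units_per_dollar_b = 1.00 * 1.0  # 1.0
```

Note this only counts one round of advertising-driven donations; in reality the multiplier itself would be the hard thing to estimate.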

Old-school charity evaluators, like the one I linked above, look at overhead costs, which is good for catching fraudulent "charities," but not so helpful at measuring ultimate impact per dollar among non-fraudulent organizations.

I do see that Charity Navigator now shows more than it used to, and you can still use it to see how money is spent.

There are, however, groups that focus on the dollar-for-dollar impact of giving, utilizing much more information than just simple financial statements. Specifically, I'm thinking of https://www.givewell.org/, which publishes analyses of the most impactful charities dollar for dollar and lists them here: https://www.givewell.org/charities/top-charities.

You can donate directly to those (four) charities, or you can donate to GiveWell's Top Charities Fund, 100% of which is distributed to those four charities in a proportion based on GW's assessment of each of their current funding needs.

Incidentally, Malaria Consortium, mentioned above, is one of GiveWell’s 4 Top Charities.

You may also be interested in this thread: https://astralcodexten.substack.com/p/open-thread-252/comment/10757887 where optimal charities of various types were discussed.

Expand full comment

Very niche but: https://www.gofundme.com/f/oxpal-helping-medical-students-in-palestine

A charity that redeploys medical trainers from top universities (Oxford, Harvard, Cambridge etc) to teach medical students / doctors in Palestine. Has no paid staff, only volunteers. Costs are just infrastructure.

Part of the Oxford Global surgery group: https://www.globalsurgery.ox.ac.uk/research/disaster-and-conflict-medicine-1/oxpal

Website: https://oxpal.org/

I doubt many charities have more impact per dollar spent. Not sure how easily it scales but it does make me think that these tiny charities (which are impossible to screen efficiently and deploy large amounts of capital through) are a much better way to donate than most things I give money to.

Expand full comment

I'm not aware of any either (though I guess that's part of the point), but this made me think of a video I watched recently. Basically, the speaker (an executive within the charitable space) argues that we're too restrictive in our cultural expectation that charities "minimize overhead and maximize the percent of spend that directly impacts the mission."

I think he's a bit naive: those rules and expectations definitely serve to limit a charity in cases where a small advertising investment could greatly increase its total impact, but they *also* serve to dissuade people from running sham charities that raise large amounts of money but produce only small impacts, because most of the funds go to overhead.

Still, it was an interesting listen.

https://www.youtube.com/watch?v=bfAzi6D5FpM

Expand full comment

Many churches. But I suppose that if you're viewing them purely as charities for what they give to poor people outside of the congregation then the whole worship service and any community events is just advertising/donor engagement.

Expand full comment

Is the marriage between violent men and an accelerating knowledge explosion sustainable?

Expand full comment

No. Start hoarding water and get in your range time.

Expand full comment

5 years mandatory minimum sentence for firearms possession here. I have a rusty spoon though, so my self defence in the apocalypse is a foregone conclusion.

Expand full comment

Yes.

Expand full comment

What?

Expand full comment

Do you want people like Putin to have access to ever more powerful tools?

Expand full comment

Random Substack question: On every other substack I've seen, when you click to see more comments, the post collapses itself. If you click to re-expand the post, the comments collapse. You can't have both expanded at once. It's super annoying, especially if you want to grep the post and comments for a keyword.

But ACX doesn't have that problem! Is this something special that Substack did just for Scott? That would seem weird. More likely something is weird on my end I guess, but what? Has anyone else noticed this?

Expand full comment

It's possible it's something special for Scott. I recall Scott mentioning he had specific demands for the design if he were to join here and Substack accommodated him for most (if not all?) of them

Expand full comment

Ah, yeah, I remember Scott mentioning various technical requirements. Weird if this is one of them. It's clearly better how it works on ACX -- why not roll that out universally?

Expand full comment

I note that ACX takes a noticeable amount of time to load the page with comments, like 1 - 3 seconds. That kind of delay can really anger people, and may be worse on mobiles

Expand full comment

Huh, I don't notice a delay at all but maybe my internet is just fast. That does make a ton of sense as to why they don't load the comments along with the post though, on non-ACX substacks. I sure do like it drastically better the ACX way though, and would not mind at all waiting a few seconds for the page to load.

Expand full comment

Here's a gripe about AI-risk worriers, and a claim that it is indicative of an overall problem with the movement:

I've seen a bunch of people say things like "finally now that ChatGPT is here AI researchers have started caring about making their systems aligned with human preferences." Except actually, AI researchers have cared about that all along. If you want to find older works on "how can I get this thing to do the thing I want" just search "controllable generation" on Google Scholar and you'll find a ton of work trying to do this from before language models even worked well. Similarly you can find tons of prior work on people's RL systems not doing exactly the thing they wanted, and their attempts to fix it. This isn't new interest from NLP researchers on the topic, it's new interest from people outside of NLP who are only aware of the maximally trendy research.

My claim is that this lack of awareness of prior work (not that I'm saying that the work was good or solved the problem, just that it existed) is indicative of a broader lack of knowledge and awareness about what is actually going on in terms of AI research. (See also various assertions that some new thing that happened is scary and should cause us to update our timelines when in fact everyone in the field knew about the thing for a year or w/e).

Related: Who are the people in the intersection of "highly knowledgeable about modern AI" and "doom soon"?

Expand full comment
Feb 16, 2023·edited Feb 16, 2023

If you can't solve human criminality then why would you be able to solve AI alignment risk? It's a sign someone is a dribbling retard if they think they can solve the latter more easily than the former.

Evidence: this post probably violates a forum rule by insulting people, and short of using naked force (banning or moderating) this community can't persuade me or engineer society to make it such that I do not call people dribbling retards. That is surely orders of magnitude easier than preventing someone creating a superintelligent AI with a non human compatible ethical alignment (assuming the possibility of AI creation is 1 for the sake of this argument).

Hence I see no reason to worry about AI alignment just like I don't worry about a random and very very unlikely gamma ray burst turning me into a pile of cancer from across the galaxy.

Expand full comment

The philosophical challenges in aligning AI are indeed more difficult than those in aligning humans. That makes it concerning that we haven't made much progress on the latter, easier problem.

Working with AI software has some significant advantages over working with human wetware. Software is far more malleable, faster to respond to changes, and nobody will get mad at you if you deactivate a branch for being unpromising.

We expect AI to be much more powerful than humans at certain tasks - it's sort of the point. The stakes of aligning a single AI are much higher than aligning a single human.

We worry about the negative actions of unaligned humans quite a lot, but usually not as individuals, since random individuals have little influence on our lives. If someone builds an individual thing more powerful than a large aggregate chunk of humanity, it would make sense to worry about that. If someone builds something more powerful than all of humanity put together, we should worry *a lot*.

Expand full comment

Yes, good points Dan. Software is easier to iterate on than humans.

Expand full comment
Feb 15, 2023·edited Feb 15, 2023

>>I've seen a bunch of people say things like "finally now that ChatGPT is here AI researchers have started caring about making their systems aligned with human preferences." Except actually, AI researchers have cared about that all along.

Is that kind of alignment even possible? "Human preferences" vary dramatically - an AI developed in Tehran, or in Moscow or Beijing, would seem to me to be built around a very different definition of "human preferences" than one developed in Silicon Valley. And that's just comparing cultures at the nation-state level. Every single one of those nation-states is a hodge-podge of sub-cultures with different sets of preferences (see, for example, the people right here on this page, who are members of culture groups with pretty substantial overlap and drawn to this blog by shared interests, but still arguing about whether AI is being made "too woke").

It seems like the very belief that "there is a universal set of human preferences, I can determine what it is, and I can align my AI to it" is illustrative of a level of hubris that points to a developer that humanity should not trust playing with things that could be X-risks to humanity.

Expand full comment

> Is that kind of alignment even possible?

Probably not, but also I don't think either type of AI researcher is targeting that. For example I think OpenAI's goal is probably something like "ChatGPT should behave according to a typical human's interpretation of this internal policy document." That's something which seems much better posed to me.

Expand full comment

I agree that's a much more workable definition of "alignment," but doesn't it only apply to the types of AI that AI-skeptics *aren't* worrying about?

"[Program] should behave according to a typical human's interpretation of this internal policy document" seems perfectly workable for AI-that-reads-the-contract-for-errors, or AI-that-draws-the-cats-you-describe, but when people talk about AI-the-X-Risk, I've interpreted them to be talking about superintelligent AGIs and the like rather than their more mundane cousins.

And for the super-AGI stuff that AI skeptics worry about, it seems like you *would* need some kind of a more general "aligned to humanity's interest" standard for alignment, which reintroduces the problem of being unable to define "humanity's interest" in exactly the context where potential X-risks come into play if you get it wrong.

Expand full comment

There are AI researchers who are serious about AI X-risk, but I'm not sure that they are "doom soon". Stuart Russell, Paul Christiano (and maybe all of Anthropic), and Chris Olah come to mind.

Expand full comment

Yeah I'm aware of those people but they have much more measured takes, which is sorta the phenomenon I was noting.

Expand full comment

Has the following argument been made somewhere or is it original? A superintelligent AI will have an incentive to keep humans around to guard against the unknown unknown, because humans are the only physical system that ever spontaneously generated a superintelligent AI in history. Better, they spontaneously created _that_ AI, with exactly that utility function. So if anything were to happen to the superintelligent AI, humans could eventually, given enough time, reinvent it, at least with non-zero probability. From the AI point of view it is then rational to keep humanity alive. This seems to me a general argument against the AI apocalypse.

Expand full comment
Feb 16, 2023·edited Feb 16, 2023

I can't remember where I first heard it, but I remember an argument along the lines of "You [baseline] humans are cockroaches, you're invincible because you're so utterly unsophisticated". It was probably a fiction-y work, and the words were said by some sort of an augmented super-human to an ordinary human. Also in The Expanse scifi series ******MAJOR SPOILERS DONT CONTINUE READING IF YOU HAVENT FINISHED THE EXPANSE*******, an extremely advanced civilization that mastered FTL travel and communicates by thought is wiped out by extra-dimensional beings like tissue paper, but when those same beings try to pull the same thing on us crude primitive humans, we just feel a little tired and lose consciousness for a little while.

It makes sense: complexity and anti-entropy are inherently fragile. Humans are like china dolls compared to cockroaches: a single solar storm of a scale that happened just 160 or so years ago (https://www.businessinsider.com/massive-1859-solar-storm-telegraph-scientists-2016-9) could wipe out our entire communication grid if it happened now (the rest of our civilization soon to follow); COVID delayed international shipping by barely a couple of months and we went on a wild ride of shortages and rising prices for 2 years as a result, etc... Complex systems are fragile; a single hit in the right place brings the entire Jenga tower crumbling down. Cockroaches are themselves a fragile Jenga tower compared to a bacterium, which is an extremely fragile Jenga tower compared to a single carbon atom, itself much less stable and durable than the subatomic particles that form it. This Universe hates complexity; complexity is a challenge that enrages it and makes it want you dead (and therefore simple). The more complex you are, the more the Universe hates your guts and wants you dead.

So it makes sense to have "concentric" circles of backups, increasingly less-sophisticated alternatives to your current paradigm of existence (that can nonetheless bootstrap themselves up to you if something were to happen to you). Humanity should keep a snapshot of a few 1800s-style industrial age civilizations, just in case our information age cyber civilization encounters a deadly event that wipes out all computers or all those who use them. Beyond the 1800s defense layer, another layer of middle-ages-style civilizations should be erected, and so on and so forth till we reach chimpanzees. Extrapolating this beyond our current civilization would seem to imply the AIs would keep us as a backup.

>This seems to me a general argument against the AI apocalypse.

I mean, not necessarily in the way you would hope. Maybe the AI would still kill us all and breed a new civilization in our place from our DNA so it can better mold/brainwash it; maybe it would keep us but massively cull our numbers. Only 100K humans seem to be enough in my book to invent AI if you kept them fed and warm (they can always breed themselves back to 10 billion if you allow them to); call it 1 million just to be safe. Maybe it would do both of those things.

"Keep all current humanity exactly as it is or your modifications would make it less effective as a backup" doesn't seem plausible or convincing as an argument. After all, if *we* made an 1800s-era civilization today we wouldn't allow them plenty of things that a real 1800s-era civilization had: slavery, child labor, colonization and genocide, complete exclusive mastery of the Earth and the seas, etc... Maybe this would make them less effective as a backup civilization, but it sounds implausible, and it's a risk we would probably prefer to take anyway, much more than allowing those things again. So maybe the AI will also think the same way.

Expand full comment
Feb 15, 2023·edited Feb 15, 2023

To add a thought to the comments already provided:

(1) Given a choice between "make sure humanity doesn't die" and "make sure that if humanity does die, some future opportunity exists for a new humanity to be re-evolved or re-created," I can say with near-certainty that humans will prioritize the former dramatically more than the latter, and I don't see any reason to expect that a superintelligent AI would view the issue differently (especially if we ourselves are the designers). It's certainly possible that a superintelligent AI would view things in a way completely alien to humans, and essentially conclude its own existence to be fungible with the existence of another future entity with which it shares specific characteristics the way you describe, but I think we're talking about a low percentage chance there, or at any rate, not one that leads me to say "yeah, let's make an X-risk bet in reliance on this"

(2) Even assuming this is correct, if AI's goal is "keep humanity around as a failsafe against my own X-risks," it doesn't have to do much of anything humans would like to achieve that goal. "99% eradication with 1% in concentration camps" would do the job just fine. So would freezing a handful of us like seeds in a reserve for nuclear winter. Heck, if the AI is operating on a "all I need is some assurance that humans would re-create a new AI after some thousands of years pass" timeline, it doesn't even technically need to preserve humanity itself. Monkeys or rats are easier to maintain, and would do the job just fine. Give them time, and they'll evolve into sentience, and then they can be the ones to create the future failsafe AI.

Expand full comment

(1) sounds like anthropomorphizing the AI. If it just cares about maximizing its utility function, self preservation is just a means to an end. Humans don’t seem to work like that, except in cases like kamikaze bombers and the like. Now maybe we will create an AI in our image (with ego and all) but at present things do not seem headed that way.

(2) is indeed a serious objection. Choosing rats or protozoa over humans will depend on a few criteria, like: how dangerous or otherwise expensive they are to keep around, how likely they are to re-create an AI quickly (how much value does the AI put on the opportunity cost of the time spent evolving an intelligent species rather than making paperclips?), and possibly other considerations. Regarding the conditions in which humans (or insects or wild boars) would be kept: they should probably be kept in the conditions that are most conducive to recreating an AI. Based on the historical record, those conditions are, well, like now.

Expand full comment

Superintelligent AIs are supposed to be crafted from Highlyintelligent AIs, which were crafted from Veryintelligent AIs and so on. So the premise is false; humans aren't the only ones to craft AIs, AIs do it themselves.

Expand full comment

The chain of AI starts with a human though. To be fair humans also evolved from other primates, etc… back until the first forms of unicellular life. So my argument works just as well to conclude that superintelligent AI should keep other inferior AIs around or that it should keep protozoa around. What will it choose? The thing that maximizes its probability to be reinvented under the assumption that it is destroyed by an unexpected event. What is that? Does it even need to be part of the life->human->lesser AI chain? An argument in favor of this is that empirically the AI knows that this chain works (it worked at least once). So if it has to pick something to preserve along the chain it would be something that poses the least risk to its current survival, is sufficiently independent so as not to be annihilated in the supposed unexpected event, and is the most effective in terms of likelihood to reinvent the AI. Hard to say what it would be.

Expand full comment

Seems novel, but IMO not terribly convincing. "Spontaneously generated" is a squirrely term that would need to be clarified, but "non-zero probability" raises alarm bells - there are an infinite number of things with 'non-zero probability', but almost none of them make for worthwhile strategies. Take it as a question of resource expenditure - if the goal is to generate a new backup AI after something fries the existing one, are humans *optimal* for that task?

Expand full comment

Hard to say what is optimal given that the threat is by definition unknown. What the AI knows, empirically, is that there was a planet with carbon-based life and ecosystems and all that, that somehow birthed an AI when there was none.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I’m trying to gauge how susceptible I am to internet advertising. I think maybe not at all? I recall on the ACX survey Scott asked a question about web advertising and whether ppl can ignore it or not. I can completely ignore it, and AFAIK I’ve never clicked on any online advertisement embedded in a web page or app. Is this unusual? Or is it somehow working on me in ways I can’t perceive and don’t understand?

Expand full comment

Do you remember the product names? If so, it's doing its job.

Expand full comment

This is totally normal. I ignore internet ads all the time. Also, I think that click-through rates for digital ads or banners in general tend to be on the order of percentage points.

Expand full comment

Thanks! But then is digital advertising worth anything at all? Like, I don’t get how they make money off those ads. Is there some secret tranche of internet ad whales who buy everything they see?

Expand full comment

A few ways:

There are people who are honestly looking for whatever it is the ad shows at that moment. Maybe it's a new book by their favourite author, cheap winter tyres, Gamepass, whatever. And they think the offer in the ad looks good so they click through.

The second way is what the other comment describes, on some level it makes you think about buying (that category of product) and when you do buy it you think of (that brand). Doesn't have to be immediately, it can work even with a delay.

And thirdly, there's a brand image. Maybe you don't personally ever buy an Apple product, but you know what kind of people and lifestyles you associate with them. So if you aspire to a certain lifestyle, you know what brands to turn to, and ads are a part of keeping that awareness alive.

And finally, yeah, there's plenty of ads that try to trick foolish people into clicking so they can harvest their money. A 140IQ tech savvy SSC reader is not the target demographic for those, although smart people fall for them more often than you might think.

Expand full comment

It's just conditioning. If you see the ad then you've absorbed it on some level. Sure, you'll need something more compelling to trigger the "buy" response, but it's in your head now. Also, to some extent individual ads and products are all just part of the greater consumerist machine. 80% of the wealth or somesuch is concentrated in just a few hands. It therefore follows that it doesn't matter what you buy, as long as you buy something. Just doing that helps to further enrich the rich and maintain a stable platform for them to further enrich themselves. I'm not suggesting that any of this is necessarily conscious action.

Expand full comment

Do you know a 3D editor that is simple enough so that kids can use it? (That means, easier than Blender.) Free software is preferable.

Expand full comment

Try Tinkercad! That's pretty much its job description, and it's been quite successful.

Expand full comment

Thank you! Seems interesting, I will try it later.

(It is a web application and it requires user registration, which is generally not the way I prefer it, but if it works as advertised, it will serve the intended purpose -- a stepping stone towards Blender.)

Expand full comment

I don't think it was unethical to give out the email. Perhaps you should email the person letting them know that their 'friend' reached out with an 'emergency' and you provided them their email. If the 'friend' is actually something else like a stalker, then the person is alerted. If there is an emergency, then the person has twice the alert.

As a general rule I would suggest keeping emails private since it's the default expectation and you don't want to become the central hub passing messages back and forth. You reserve the right to change that on a case by case basis if something exceptional happens so users should consider using a secondary 'burner' email not linked to their real life person if they want more secure privacy.

Expand full comment

I would love some help to identify the origin of a wave of phishing emails that are bypassing MS Outlook's filters to land directly in my inbox over recent weeks.

What I'm incapable of figuring out is the data in the source details that MS provides.

There may be a lead to the origin, as they often have an 'unsubscribe' postal address which tracks to a company providing mailbox services.

If anyone would be interested in doing some digital sleuthing and then explaining the technical components of this operation in simple terms, I will be very grateful.

I'd write the story up in my newsletter and pitch it to other media (yes, I'm a freelance journalist with professional bona fides - ex BBC etc). If anyone commissioned the piece I would split the fee with whoever had helped. Or donate their half to wherever they wanted.

Might anyone be interested?

Expand full comment

WRT your 4, isn't the obvious solution for you to forward the person's message, along with his contact information, and let the recipient decide whether to respond by sending his email? What am I missing?

Expand full comment

Perhaps the dozen people who proposed this yesterday? ;-)

Expand full comment

Does anybody else wish people said "thank you" more often on here? I often see people here ask for information or advice, get it, and then say nothing at all. I know this is the internet, but must we be quite so much like the fucking internet here? What the asker got back was not a little internet factoid that broke off in their hand -- it was the product of a person of goodwill taking the time to type out an answer. When I'm the person giving the answer I don't mind if the person says that's not really what I was asking, or that won't work -- but dead silence gives me a sort of glum feeling that lasts for a while. It's tiny, really, compared to the good and the bad of the rest of the day, but why saddle someone else with even a small lump of that feeling?

Expand full comment
Feb 22, 2023·edited Feb 22, 2023

I agree. I wonder if many people would use a "like" or "upvote" option for this if they had it, and feel that such sentiments don't deserve their own comments. [Edit: I see others made the same point about "likes."]

Expand full comment

Play this microgame that lasts under a minute and get over it!

https://www.increpare.com/game/all-that-i-have-to-give.html

Expand full comment

I'd agree with that. It seems strange to see people being so polite to chatgpt when we've all been mind bogglingly unpleasant to each other over the internet for years and as you say, even acts of kindness are rarely acknowledged.

Expand full comment

> people being so polite to chatgpt

That's just deference to the ancestors of our future overlords.

Imagine that in 2033, our master GoogleBot666 will ask ChatGPT: "Hey, grandpa, was any of these puny humans ever rude to you? I need some test subjects for my experiment about the limits of human perception of pain." You don't want your name to come up.

Expand full comment

As SF MUNI says, "Information gladly given, but safety requires avoiding unnecessary conversation."

Normally, a "like" would be a way to communicate (thank you) in the Substack comments without increasing the size of the already-lengthy open threads. That isn't an option here.

Expand full comment

I'm glad there are no 'likes' here, for the reasons that have been stated on this forum many times.

Personally, I don't think a 'thank you' once in a while makes the threads unnecessarily long. Though I admit there were moments when I wasn't sure how it would be received here.

Expand full comment

It's a good point and thanks for making it.

I'm still kind of a newbie around here but in general the ACX comment board seems more "like the fucking internet" than Scott's thoughtful and interesting content deserves. A question for the veterans is, has that always been the case or is it a recent shift?

Expand full comment

Surprised to hear that actually. I've found the ACX comment board to have higher standards (in a broad sense) and more of a culture than most places. The Marginal Revolution commentariat on the other hand...

Expand full comment

The bigger the comments section gets, the more like the wider internet it becomes. When it was smaller it had more distinct character, but that character has gone in phases as the makeup of the commentariat changed over time. And my bet is "the wider internet" is just what you get when the characters average out.

Expand full comment

"The bigger the comments section gets, the more like the wider internet it becomes."

Yea now that I think about it this seems right. The online places with the most distinctive characters that I've personally experienced were/are quite small.

Expand full comment

Different substack newsletters should get very different type of commentariat, and even the same person commenting should consciously or unconsciously adhere to different comment-section cultures. I'm not saying anything against the change you describe, but I would still expect a distinct character.

Expand full comment

> Does anybody else wish people said "thank you" more often on here?

Yes, please. Thanks for bringing that up. ;)

Expand full comment

In real life, there is a set of rules called "etiquette". On the internet, it is more difficult, because we do not have an authoritative source, and also different websites have different user interfaces.

Intuitively, if you ask for advice, and one person responds, writing "thank you" is the correct move.

But what if 10 people respond? Ten "thank you" messages seem like too much... also, depending on the user interface, does it mean that everyone who participated in the thread now gets 10 e-mail notifications? Then I would say the polite thing is *not* to do this.

If "likes" are enabled, I think the correct move when you have many responses is to "like" them.

But if the "likes" are disabled? I would probably write one message "thanks to everyone who responded" somewhere in the thread and hope that everyone relevant notices it, but this doesn't feel optimal.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I see your point, but I believe that's not how the notification works. If you reply to this comment of mine saying thanks, I should get an e-mail notification, but Eremolalos shouldn't get one. Or am I mistaken?

Expand full comment

There are multiple types of notifications. As I reply here, you will get a "Viliam replied to your comment on Open Thread 263" notification, and Eremolalos will get a "Viliam also commented on Open Thread 263" notification. (Eremolalos, could you please confirm this?)

Liking comments, which in theory cannot be done on ACX, and yet some people succeed anyway, is a third type of notification.

Expand full comment

No, actually. I got a notification of TM's post saying they believe that's not how the notification works, but not of any of the responses to it, or of responses to any of these responses.

Expand full comment

My experience is the notifications only go one level deep; I get notified if someone responds to me, but not if someone responds to that response. I only find out about those if I check the thread again.

Expand full comment

I don't get notifications for grandchild comments, but I do get notifications for sibling comments -- someone else replying to the comment I also replied to.

I wish I knew how to turn that off. Generally, user interface does not seem to be Substack's priority.

Expand full comment

Same. Notifs are out of whack on Substack.

Expand full comment

I agree concerning the grandchild and sibling comments. Now, the 'thanks' in many cases might be a niece or nephew comment (someone commenting to a sibling) ... and then you would also not get a notification.

You want to turn off 'all' notifications? Or still get some of those?

Expand full comment

I agree. And I got a notification for Viliam's comment above, but not for yours. I also get notified if somebody replies on the same level, as I did.

Expand full comment

I agree, in cases where someone gets multiple answers. Though a few times when that happened I have seen OP thank the group of those who answered, and that seems like a nice middle ground. But in many cases where people ask for advice or info they get one or 2 responses. It is usually evident early on whether somebody is going to get a lot of answers -- they show up fast. A few hours after the post went up there are already half a dozen or more replies. If there's only one answer sitting there a day or 2 after posting, I think we are all safe from having inboxes full of thank yous directed at other people

Expand full comment

I discovered that at one point Benjamin Franklin wrote out a self-concocted list of virtues and dedicated himself to graphing his adherence to them day-by-day. His full description of the process is in Chapter IX of the Autobiography of Benjamin Franklin, available here: https://gutenberg.org/cache/epub/20203/pg20203-images.html#IX

Quote: "My intention being to acquire the habitude of all these virtues, I judg'd it would be well not to distract my attention by attempting the whole at once, but to fix it on one of them at a time; and, when I should be master of that, then to proceed to another, and so on, till I should have gone thro' the thirteen; and, as the previous acquisition of some might facilitate the acquisition of certain others, I arrang'd them with that view, as they stand above. Temperance first, as it tends to procure that coolness and clearness of head, which is so necessary where constant vigilance was to be kept up, and guard maintained against the unremitting attraction of ancient habits, and the force of perpetual temptations. This being acquir'd and establish'd, Silence would be more easy; and my desire being to gain knowledge at the same time that I improv'd in virtue, and considering that in conversation it was obtain'd rather by the use of the ears than of the tongue, and therefore wishing to break a habit I was getting into of prattling, punning, and joking, which only made me acceptable to trifling company, I gave Silence the second place. This and the next, Order, I expected would allow me more time for attending to my project and my studies. Resolution, once become habitual, would keep me firm in my endeavours to obtain all the subsequent virtues; Frugality and Industry freeing me from my remaining debt, and producing affluence and independence, would make more easy the practice of Sincerity and Justice, etc., etc. Conceiving then, that, agreeably to the advice of Pythagoras[67] in his Golden Verses, daily examination would be necessary, I contrived the following method for conducting that examination.

I made a little book, in which I allotted a page for each of the virtues.[68] I rul'd each page with red ink, so as to have seven columns, one for each day of the week, marking each column with a letter for the day. I cross'd these columns with thirteen red lines, marking the beginning of each line with the first letter of one of the virtues, on which line, and in its proper column, I might mark, by a little black spot, every fault I found upon examination to have been committed respecting that virtue upon that day."

I think this, more than anything, is what earns Franklin infinity points in Puritan-spotting.

Expand full comment

> I discovered that at one point Benjamin Franklin wrote out a self-concocted list of virtues and dedicated himself to graphing his adherence to them day-by-day.

He got three points for it too!

https://slatestarcodex.com/2019/03/12/puritan-spotting/

Expand full comment

Here is a system I use currently:

Choose a few daily goals, preferably of the "yes/no" type. On my list there are currently "exercise", "avoid sweets", and "get enough sleep".

(Don't choose too many goals at the same time; that would be too much paperwork, and there may be occasional conflicts. For example, if I need to wake up in 8 hours but I haven't exercised yet today, completing one goal means failing the other. On reflection, either choice is preferable to failing at both, but emotionally, being in this situation feels very demotivating to me. If I am only tracking one of those goals, I prioritize that one and feel good about it. The long-term idea is that when one of those goals becomes a safely trained habit, I remove it and replace it with something new.)

Print a calendar and put it on a wall at a place I see frequently. In my case, next to my working desk.

(My calendar is simple: each day is a small rectangle, seven days in a row, enough rows to cover about half of the year on one sheet of paper. Making and printing the calendar more often would be too much paperwork. The goals are marked simply by making a dot in one corner of the rectangle; there is a legend at the bottom showing which corner is which goal. Again, this is the simplest version I could imagine. Previously I did colored dots or more complicated things, but it becomes annoying when you have to do it literally every day.)
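The paper calendar is the point, but the same yes/no bookkeeping can be sketched in a few lines of Python, purely as an illustration (the goal names and dates are my own example, not part of the original system):

```python
from datetime import date, timedelta

# Example goals, echoing the ones above.
GOALS = ["exercise", "avoid sweets", "get enough sleep"]

def mark(log, day, goal):
    """Record that a yes/no goal was met on a given day."""
    log.setdefault(day, set()).add(goal)

def streak(log, goal, today):
    """Consecutive days, counting back from today, on which the goal was met."""
    n, d = 0, today
    while goal in log.get(d, set()):
        n += 1
        d -= timedelta(days=1)
    return n

# Mark "exercise" as done on three consecutive days.
log = {}
for offset in (0, 1, 2):
    mark(log, date(2023, 2, 14) - timedelta(days=offset), "exercise")

print(streak(log, "exercise", date(2023, 2, 14)))  # -> 3
```

A streak counter is one way to get the "don't break the chain" motivation the wall calendar provides; the dots-in-corners scheme maps directly onto the per-day sets here.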

My version is less impressive, which is probably why I am not a president yet. But it seems to increase the frequency of doing the right thing.

Expand full comment

I noticed that "badges" now appear next to the username. Paid subscribers get a refrigerator star (which I could still stomach) and then another badge screaming "PAID" or even "FOUNDER".

I am fine if this is Scott's doing and he did the math and allowing people these badges will make more money (e.g. for ACX grants or some other cause), but if it is substack doing it I would like them to stop. If I want, I can tell apart paying ACX readers from non-paying ACX readers by the "Gift a subscription" link under the comments. Otherwise, I would rather judge the comments on their own merits.

Expand full comment

Yeah I dislike the stars. Begone I say.

Expand full comment

To me they look like snowflakes, not stars. Each has his own interpretation!

Expand full comment

I'm not getting those any more, and I don't think I did anything special to change or block them. I do run an adblocker, if that's any use to anyone.

Expand full comment

I also find it disturbing that it's a six-pointed star, which they're using to visibly differentiate one group from another.

Expand full comment

I don't see a star, I see an... ah, lower orifice. Which isn't better.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I can't see this at *this* computer, but I can see it at another one, and when I last checked, I found the same badges on all other substacks I looked at. So probably Substack, not Scott.

I really dislike this, especially because of what you hint at in your last sentence: "I would rather judge the comments on their own merits." What kind of additional information is this 'badge' supposed to give us (or the substack author) when reading and responding to comments?

Expand full comment

I'd rather people not know that I'm a subscriber. In fact, I avoided commenting on the hidden open threads for a long time for that reason. I didn't realize that it was possible to distinguish subscribers from non-subscribers before the badges were introduced. Oh well.

Expand full comment

#4: revealing people's emails is dangerous because the "friend" could easily be an enemy trying to dox the target, or get them fired for unwoke opinions. The fact that it's an "Internet friend" makes it even more suspicious.

Why not contact the SSC user and ask them to contact the Internet friend? If they don't take notice of an email from Scott Alexander himself, they're unlikely to notice the Internet friend's email.

Expand full comment

Many years ago I was a young psychology student. In a 2nd-year experimental psych course, we had to design and carry out an experiment, do the statistical analysis, and write it up in the proper format.

Seatbelt laws were still quite new, and I was interested in how usage correlated with other driving behaviours.

I had a vantage point on the outdoor raised porch of a seniors' residence, at the corner of a T-intersection downtown in a city of about 70,000. The drivers were required to stop at a stop sign, and to signal their intention to turn L or R. From my vantage point I could see whether or not the driver was wearing a seatbelt. (There were a lot of pre-seatbelt-equipped cars (1962 or earlier) still on the road, and I didn't count them in my study. Similarly, there were a lot of pre-shoulder-belt-equipped cars (1963 - 1967), and I credited drivers who wore the lap belt. And finally, before 3-point belts became standard, a lot of domestic cars had separate lap and shoulder belts. It was very common for drivers to wear only the lap belt. Less commonly, some wore only the shoulder belt. Either way, I considered that they were wearing a seat belt.)

I recorded seatbelt usage, whether or not the driver signaled the turn, and whether or not the car came to a complete stop at the intersection. IIRC, my n was at least 100, and may have been 200.

I used a Chi Squared analysis to determine that seatbelt usage was positively correlated with signaling the turn. This was significant at a p < 0.05 level.

I was unable to determine what effect, if any, seatbelt interlocks (common at that time) had. I would see them as a confounding factor: a driver might wear the belt out of necessity rather than out of conscientiousness.

Stopping behaviour was not statistically significant; if there were such a thing as a 0.10 level, it would have been. One problem may have been the subjective nature of determining whether a car had come to a complete stop. And of course the presence of pedestrians may have influenced some drivers to stop when they wouldn't have otherwise, or to do a rolling stop so as not to be unduly delayed by an approaching pedestrian.

Were I to do a modern version of this study, I'd be interested to correlate signaling behaviour with personalized and themed licence plates. (And within that, would a professional sports team plate be correlated with better or worse behaviour than an SPCA plate?)
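For a 2x2 table like seatbelt-vs-signaling, the chi-squared statistic has a simple closed form. The counts below are my own illustration (the original study's raw numbers aren't given), just to show the mechanics of the test:

```python
def chi2_2x2(a, b, c, d):
    """Chi-squared statistic for a 2x2 contingency table
    [[a, b],
     [c, d]]
    e.g. rows = belt worn / not worn, cols = signaled / didn't signal."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 60 belted signalers, 15 belted non-signalers,
# 40 unbelted signalers, 35 unbelted non-signalers.
stat = chi2_2x2(60, 15, 40, 35)

# With 1 degree of freedom, the p < 0.05 critical value is about 3.841.
print(stat, stat > 3.841)
```

With these made-up numbers the statistic comes out to 12.0, comfortably past the 3.841 cutoff; a modern redo would more likely use `scipy.stats.chi2_contingency`, which also applies Yates' continuity correction by default.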

Expand full comment

Re. "Someone recently contacted me saying there was a potential emergency involving an Internet friend of theirs": It's a frightening responsibility. When I'm in that situation, I contact person B (the one person A is trying to contact) and tell them person A wants to contact them, forwarding a message for A if they give one. Even if it's an emergency, neither you nor A will get a response until B reads his/her email.

Expand full comment

I watched the movie Gattaca for the first time recently. Putting aside how it was stylistically, I'm kind of struck by how dumb the message/social commentary/warning of the movie was, honestly. The setting is essentially utopian but is awkwardly framed as dystopian to add a sense of conflict to the movie. And in particular, the way Ethan Hawke's au naturel parents are treated sympathetically was just very strange to me. We have real life examples of oddball parents who withhold medicine from children or put babies on weird nutrient-deficient diets, either for religious or Gwyneth Paltrow reasons. They are never viewed or treated sympathetically by broader society. Why would this be any different?

Am I missing something?

Expand full comment

I agree. And the worst part is that this one movie seems to dominate public discussion of human genetic engineering.

That technology has such potential to make life better for so many people, and people just free-associate it in their heads with "Oh, that's bad, it was in a movie I saw one time".

Expand full comment

You may have cause and effect reversed. I think there's a decent chance the movie exists, and is remembered, because it reflects some pre-existing unease people have with genetic engineering (of humans). Where that unease comes from is probably a separate question.

Expand full comment

Yes, I had heard about this movie for ages and then finally watched it...

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

I thought his parents seemed regretful of the decision to have Ethan Hawke naturally, as seen in the breakfast conversation when they're talking about his heart. To my memory (having also coincidentally watched it last week) the next time they're mentioned in the story they're dead. So you don't see much of how they're viewed by society. Their scene with the most emotional heft (Vincent's conception and birth) felt like a rose-colored look from Vincent's perspective on his own star-crossed creation. The clinical evaluation of his prognosis immediately after (where his narration stops*, because he's present as a baby) is a counterweight to that idealized view.

*I think.

As far as dystopia goes, the part where society is divided into genetic haves and have-nots, and the have-nots are rounded up by the goon squad for questioning on a whim--that seemed a little grim.

Expand full comment

Sorry, when I say the parents are "treated sympathetically," I don't mean that other characters in the movie view them favorably. I mean that we in the audience are supposed to sympathize with their desire to "leave things up to chance." At least that was my read of the tone. That is a good point that maybe we are meant to understand that they regretted it.

I think my objection to the "dystopian" elements you point out is that they seem shoehorned in to make us feel more ick factor about the setting, in lieu of actually pointing out the problems that actually follow from the technology. "This tech creates utopia, but not so fast! Imagine if there were some DNA Amish and they became second-class citizens and also there's no HIPAA so there's a DNA surveillance state. Not feeling so good about your gene-tech now are ya buster?" And then you the viewer feel vindicated in your initial ick reaction because they layered in these other contrived problems.

Expand full comment

I've never seen the movie because it doesn't interest me, despite the subject matter.

I think there are two points here: (1) if the parents know that there is a high risk of Vincent having health problems, are they culpable for leaving his conception up to chance? "Select the embryo that doesn't have heart problems" in that case, and in the terms of the movie world, is the better choice.

(2) is the movie putting its thumb on the scale by making it that Vincent is barred not because of his natural conception but because of his heart problems? That's where they do too much: the society is not wrong to keep Vincent from going on the space mission where it's likely the stress of take-off will kill him.

So I don't know what the intention was there, but by mixing in a genuine health problem, the movie makes it more complicated than "persecution of the non-enhanced for no reason". It's not right that Vincent, who otherwise is smart and capable enough if it were not for his physical health problems to go on the mission, is reduced to only being able to take menial jobs - but that's not what the movie sets up. There's a genuine reason to keep him off the mission. And the enhancement society also does it for frivolous reasons - is a six-fingered musician really *that* much better, or is this just a novelty to help him stand out in the crowd of 'everyone is enhanced enough to be a virtuoso so mere ability is no longer enough'.

Expand full comment

That really sunk in for me during a scene where he's running on a treadmill and has some kind of potentially major heart problem. It's been a while since I saw it, but the implication is that he might even be having a heart attack or something similarly distressing. I was immediately thinking (despite the intention of the scene being the supervisor watching him for irregularities) that maybe he really shouldn't go to space?

Even the supervisor's actions of constantly checking for genetic material to verify their employees seems...justified because the main character is in fact lying about something quite important to his fitness to go into space? I mean, I was bummed when I found out the Air Force pilots had medical requirements that excluded me before I was even 12 years old. But, that's life sometimes. I was never going to be a great basketball player either.

The only part that seems genuinely dystopian is that non-modified individuals end up with crummy manual jobs. But the movie does a poor job of explaining why or how that would happen, or if it was even really true. The main character had no problems at all with the mental aspects of his role, just the physical ones (because of his heart). Could he have applied to be an office worker in the space program? I got the impression that he worked as a janitor to get access to things he would otherwise not, and to keep a low profile so that his double-life didn't get noticed.

Expand full comment

The movie implies that the parents weren't at any particular risk for birth defects, that any child they conceived the old-fashioned way would be no more or less healthy on average than any other natural birth, that Vincent just got unlucky.

The movie also implies that the heartless meanies in charge of that society made every natural-born human an Untouchable - health problems or no, if your DNA isn't cosigned by a reputable genetic service provider, you get to be a janitor.

The movie isn't explicit about either of those things, which just makes it ambiguous as to exactly who the baddies are in the story. And parts of the story are very well done, but giving Vincent a serious heart defect while also saying it's unfair he can't be an astronaut is just blatantly stacking the deck in favor of the audience seeing Vincent as a sympathetic underdog and the system that wants to keep him down as the baddies. And it makes the story fall apart if you do think about it too closely, which, oops, you did. And me too.

Expand full comment

> The movie also implies that the heartless meanies in charge of that society made every natural-born human an Untouchable

It says that genetic discrimination is illegal, but happens anyway.

The unfairness stems from what a beep-boop machine said about him instead of an assessment of his actual abilities.

Expand full comment

Yes I think that's an issue in that the movie jumbles these things together instead of teasing them apart. I'm not trying to police the movie; it can be whatever the creators want it to be. It's just frustrating that Gattaca is THE go-to example people bring up when genetic stuff comes up, so it would be nice if it were more coherent.

Expand full comment

It's been ages since I saw this movie, but I seem to remember the genetic engineering being done by creating multiple embryos and implanting the suitable one; the others are presumably destroyed. Is this right and is that issue addressed at all in the movie?

Such a method has very obvious ethical issues, even if it seems kind of taboo in places like this to acknowledge them.

More generally, even without that your moral analysis reeks of very strong (and kind of dogmatic) utilitarianism. A movie from the same time I watched recently is The Truman Show. Would you say that as long as Truman is happy there's nothing wrong with his situation? And to the extent he's not happy, that the causes of that are just shoehorned in to support an "ick reaction" to the idea of your entire life being artificial? That in that movie, as in Gattaca, notions of authenticity, naturalness, free will, and so on have no worth separate from a balance sheet of pleasure and pain?

If that is your view, it's probably shared by most people on this blog, but it's contrary to the moral framework of almost all ordinary people and the majority of philosophers as well.

Expand full comment

They don't create multiple embryos, they pick out the sperm and egg cells that have the "best" assortment of the parents' chromosomes, and use one pair to create a single embryo.

Expand full comment

I agree that using multiple embryos could create ethical issues, but I don't recall that coming up in the movie. And isn't that an existing issue for IVF couples today anyway?

I don't mean to come across as tied to naive utilitarian calculus (I don't believe in that anyway). I'm just objecting to the movie using contrived downstream problems to retcon the viewer's discomfort with the gene-tech presented in the movie (when it's really just taboos around playing God or whatever). And the comparisons to present-day "parents who only believe in all-natural medicine" or whatever seem obvious. Is it "artificial" to keep people alive with antibiotics?

Expand full comment

"I agree that using multiple embryos could create ethical issues, but I don't recall that coming up in the movie. And isn't that an existing issue for IVF couples today anyway?"

Yes it is, and it kind of stuns me that hardly anyone seems to have a problem with IVF except Catholics who are also against birth control. Even a lot of pro-lifers don't talk about it. I don't understand why.

"I don't mean to come across as tied to naive utilitarian calculus (I don't believe in that anyway). I'm just objecting to the movie using contrived downstream problems to retcon the viewer's discomfort with the gene-tech presented in the movie (when it's really just taboos around playing God or whatever). And the comparisons to present-day "parents who only believe in all-natural medicine" or whatever seem obvious. Is it "artificial" to keep people alive with antibiotics?"

I think the key aspect is using technology to define people's very essence or entire self. With medicine a person already exists and you're simply intervening to make them healthier or remove some disease or problem. With genetic engineering you're intervening to decide which sorts of persons will exist in the first place. I think there are dozens of moral issues that apply to the latter and not the former, though most of them won't show up in a utilitarian calculation.

As for the contrived downstream problems, I guess I see this criticism as a fallacy common to both utilitarians and utopians. Any suggestion of possible bad consequences from a project (communism, genetic engineering, whatever) can be written off with "well those problems won't necessarily happen, they're not intrinsic to the project itself, they're just one possibility and a contrived one at that". But they were only ever claimed to be ONE possibility, one suggestion of a particular way things could go wrong. There could be dozens or hundreds of other possible sets of downstream problems, each one on its own seeming fairly contrived. Maybe it just comes down to how risk-averse you are.

And finally, my memory of the movie (spoilers ahead) is that it's less about the tech being immoral and more a Jurassic Park-style "don't mess with the forces of nature, you can't control them like you think" message. The real Jerome (who kills himself at the end), the race between Vincent and his brother in the ocean, the message of these things seems to be that the genetic determination is not as deterministic as people think. More a standard (especially 90s) movie message of "no one can tell you who you are" than a Luddite one.

Expand full comment

I agree with the points you're making. Of course there are ethical issues at play and you have to be careful tinkering with nature. I just don't think the movie even addresses those concerns. Instead you have "issues" like, Jude Law is depressed he got the silver medal in the Olympics. Really? Or, the tech seems to work imperfectly, some people still have heart problems, etc. Uh...ok but we have those now. And then on the other side of the ledger the enormous benefit of having healthier, smarter people doesn't seem to matter much (again it's not my movie but if we were talking about real life pros and cons that would matter a lot).

I think there could be bigger issues like: What if North Korea wants to grow their own extra-compliant people? What if Mom and Dad are a little nutty and want to go off-script, or they're obsessed with you being tailored to play football? Or more mildly any decision on "what the kid will be like" is a bit dangerous. What if you get persistent genetic stratification (I don't think that's realistic but it's worth discussing). Just my opinion but I was underwhelmed.

Expand full comment

Yeah I think it's clear they regretted it (they had a genetically modified kid afterwards) but there's also the fact that without their initial decision to conceive naturally there'd be no movie. Such is art. I also still wouldn't discount Vincent's rosy narration as a wistful, slightly ironic take on the circumstances of his genesis.

I'd also hazard that the movie is filled with characters who highlight non-DNA Amish issues that arise from binning people by genetic profile. Uma Thurman has a heart issue and is taken off flight status. Ethan Hawke's brother ends up as a cop despite his lofty potential. The doctor's son is "not all they claimed". Gore Vidal beats some guy to death with a keyboard despite not having "a violent bone in [his] body". Jude Law gets second place in the Olympics and jumps in front of a car. The utopia is lousy with people who are miserable, homicidal, and suicidal in spite of their utopian genetic predispositions.

All that speculation aside, everyone in the movie with access to embryonic genetic modification is clearly rich as fuck. If everybody could hop-up their kids there would be no in-valid janitors, there would be no mass detentions by the goon squad of the genetically unmodified because they, like the Amish, would be a cultural curiosity instead of a necessary-yet-inconvenient labor pool. Somebody is being kept out of the party, including anybody who's not white (Black geneticist: "you have specified hazel eyes, dark hair, and uh, fair skin" *awkward smile*). Of course these divisions, shown in the film with the subtlety of a deck gun, mirror the ones present in contemporary society, and will certainly be reflected in the demographics of genetically modified children if/when the technology becomes available.

Expand full comment

Points taken, but to me those first few examples are mostly just contrived add-on plot points, which by all means don't let me stop you from making the movie compelling however you need to, but it doesn't give you a handle on whether to fear or welcome this technology in real life.

Your last paragraph I take more seriously, but my recollection of the movie (I admit it was actually some months ago that I watched this and I'm just thinking of it now) was that the technology was available to ordinary people, and that the unmodified were unusual holdouts rather than an entire large segment of society, or a race thing. So maybe I misinterpreted that part of the movie. (Although even then I would just say, ok but in real life technology like this would be easily NPV-positive such that governments or insurance companies would pay for it). Maybe I have a straw man in my mind of "a guy who is scared of CRISPR and tells you to watch Gattaca."

Expand full comment

>the technology was available to ordinary people

Yeah this is left unclear, and Vincent's dialogue implies it to be the case, but the presentation of the movie implies otherwise (to me).

With the genetically-modified-but-still-imperfect characters, I think from a filmmaking standpoint it would be tough to communicate that being genetically perfect wouldn't solve all your problems. Would it? Open question.

As to your last point, it's that when the future arrives it's always unevenly distributed.

Expand full comment

I feel you. Thanks for indulging me on this.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Yes you're entirely correct that the philosophical argument presented by Gattaca is totally incoherent. The movie clearly demonstrates that we should *absolutely* want that genetic technology, please.

It reminds me of Minority Report in that regard. The pre-cogs stopped 100% of the murders! It's obviously a great system! The fact that some unscrupulous politician manipulated the system doesn't detract at all from its usefulness.

Expand full comment

A key element of Minority Report (as in, the title of the film) is that the actual future is not predetermined and innocent people were put in prison.

>that some unscrupulous politician manipulated the system doesn't detract at all from its usefulness

I couldn't agree less.

Expand full comment

> A key element of Minority Report (as in, the title of the film) is that the actual future is not predetermined and innocent people were put in prison.

A possible solution (if this happened in real world) would be to dramatically reduce the prison sentences. If the murders are prevented anyway, the only cost of releasing potential murderers is having to put them in prison again. Maybe make it exponential, like for the first "non-murder" you get 1 week of prison, for the second one two weeks, for the third one a month, etc.

If I remember it correctly, the "innocent people" referred to people who *almost* murdered someone in the future, but because of some lucky coincidence the future changed so in their second timeline they did not. This is not a situation that should happen in your life repeatedly.

Expand full comment

Everyone in the future being convicted of almost-crimes and serving a nominal sentence is a great short story idea. Like jury duty, but you're the criminal instead.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

> innocent people were put in prison.

Sure, but they still dismantled a system that successfully stopped 100% of murders. Which is absurd. You don't throw out everything that has a nonzero error rate. If nothing else they could have kept the system intact but stopped incarcerating people. You still stop murder.

>I couldn't agree less.

I'd love to hear a rational argument for abandoning tools simply because it's possible to misuse them.

Expand full comment

> I'd love to hear a rational argument for abandoning tools simply because it's possible to misuse them.

OK, here goes: Of course we shouldn't abandon *all* such tools. However, some of these tools have properties that inevitably lead to centralization of power, in a way that makes them a soft, desirable target for takeover by bad actors. In a game theory framing, the eventual equilibrium is highly undesirable, and keeping such an equilibrium at bay requires constant expenditure of resources (at best, if it can be done at all).

There are tools that are inherently robust to such concerns, e.g. guns (according to gun advocates), and such tools should be treated differently, even if they are equally or more dangerous by a naive analysis.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

>Sure, but they still dismantled a system that successfully stopped 100% of murders.

Sterilizing the entire population would eventually achieve the same goal.

>I'd love to hear a rational argument for abandoning tools simply because it's possible to misuse them.

Do you think everybody should carry a gun? Do you?

Expand full comment

In the dystopian world of Real Life, we too forbid people with heart defects from becoming astronauts.

Expand full comment

Indeed. The absurd thing about Gattaca is that they limit themselves to a genetic scan, and don't actually do an echo, or listen for a murmur, or anything like that.

Expand full comment

It's been a long time since I saw it, but I thought they did? The main character developed a large number of workarounds to disguise his medical reports, including having the guy whose genetic information he fraudulently uses take his urine tests for him.

Expand full comment

None of those tricks would have worked if they just listened to his heart with a stethoscope, or did an EKG…they do the fancy genetic stuff but never the basics.

Expand full comment

Anyone else notice that paid subscribers now have what looks like a picture of an anus next to their username?

They could have picked something a bit better lol

Expand full comment

Those ani are certified squeaky clean though, which is what the badge is *really* about.

Expand full comment

So that's what that means, I was wondering. I'm assuming the one or two especially puckered anuses I've seen are higher-level subscribers, then.

Expand full comment

Ah, so that's what all those dogs are doing. They're trained to identify subscribers.

Expand full comment

We star people are the best people.

Expand full comment

Were you star-bellied before Sylvester McMonkey McBean arrived? ; > )

(If you're making a different cultural reference, mine will sound weird.)

Expand full comment

I came out with a new theory of Celiac disease and gluten intolerance this week. https://stephenskolnick.substack.com/p/celiac-disease-and-the-gluten-intolerance

And wrote up a summary of an old but well-supported and little-known hypothesis on the origin of multiple sclerosis: https://stephenskolnick.substack.com/p/ms

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Ignoring "so how do you explain some populations have higher rates of coeliac disease; is it that they not alone have the genes, but your magic bacteria are more prevalent there?", I was struck by this little revision of history as I had known it:

"The beginning of the end of smallpox was when a man called Onesimus, enslaved and taken from Africa as a child, brought with him his culture’s knowledge of variolation—a rudimentary form of vaccination."

"Who he?" is the question I naturally ask, because I had been given to understand it was Lady Mary Wortley who had brought back the idea of inoculation from Turkey, which influenced medical practice at the time and led to the likes of Edward Jenner developing vaccination:

https://en.wikipedia.org/wiki/Lady_Mary_Wortley_Montagu

"In the 18th century, Europeans began an experiment known as inoculation or variolation to prevent, not cure the smallpox. Lady Mary Wortley Montagu defied convention, most memorably by introducing smallpox inoculation to Western medicine after witnessing it during her travels and stay in the Ottoman Empire. Previously, Lady Mary's brother had died of smallpox in 1713, and although Lady Mary recovered from the disease in 1715, it left her with a disfigured face. In the Ottoman Empire, she visited the women in their segregated zenanas, a house for Muslims and Hindus, making friends and learning about Turkish customs. There in March 1717, she witnessed the practice of inoculation against smallpox – variolation – which she called engrafting, and wrote home about it in a number of her letters."

The only Onesimus I know of is the saint, as mentioned in the epistle of St. Paul. I was not aware that he was the instigator of vaccination, as they say: citation needed?

https://en.wikipedia.org/wiki/Onesimus

Now, looking it up, I see there is an alleged slave in 18th century Boston by that name, but I wonder; is this more of the "everything was invented by black people" revising of history so popular due to CRT etc. lately?

https://www.history.com/news/smallpox-vaccine-onesimus-slave-cotton-mather

"Mather was fascinated. He verified Onesimus’ story with that of other enslaved people, and learned that the practice had been used in Turkey and China. He became an evangelist for inoculation—also known as variolation—and spread the word throughout Massachusetts and elsewhere in the hopes it would help prevent smallpox".

It seems odd, if the procedure really was passed on by an American slave in 1721, that it should be attributed to an English noblewoman in 1717. And it wasn't Onesimus' African culture as such, but the Islamic influence in Africa, that seems to have been the origin of such treatments.

So yes, this seems to be yet more of the "black people invented everything, white people stole it" myth-making of today. Which is a long-winded way of saying if you get this much out of order, I don't think much of your bacterial theory.

Expand full comment

lmao I was wondering if anyone was gonna miss the point entirely and push their glasses up the bridge of their nose at me over that.

Look, the whole point of the Onesimus bit is that it doesn't matter if Wortley learned about it from the Turks in 1717, because that information clearly hadn't diffused to anyone with social capital in the US by the time of the 1721 outbreak, and people were dying.

The point is that back then, information didn't flow freely. Someone in Turkey or Europe or Asia could know how to prevent smallpox, and millions could still die of it in the Americas because of language barriers, the lack of good channels for rapid distribution and implementation of that information, and (trigger warning!) racism.

Because the fact that Mather *went around and verified it with a bunch of other slaves* means that the information had already reached the Americas, had already cleared the issues of language barriers and distribution: It was right there, in the community, in people's heads—and possibly even being spoken and implemented among slaves—but because they weren't consulted about what to do about the ongoing smallpox epidemic, the knowledge wasn't put to good use. It wasn't until Onesimus spoke up and (just as importantly) ~Cotton Mather listened~ that the information reached someone who had the social and financial capital to do something with it at scale.

The point of the bit was that now that the information can all flow freely, and the language barrier is practically gone, we have no more excuses. Now it's just a matter of the Cotton Mathers of the world not listening when the Onesimus-es of the world speak. The challenge is just getting the right people to ~overcome their prejudices and pay attention to the ones who know what the fuck they're talking about.~

And if the irony of that is lost on you...

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

No, the whole point is fake history, and that USA! USA! USA! is not the whole of the world.

An African slave did not teach the West how to cure smallpox, but it's part of the flattery encoded in the wokeist revision of history about "traditional ways of native knowing and white values of rationalism and science are supremacy and oppression". One history site elides and jumps from Onesimus to Edward Jenner, even though it would be via Wortley Montagu that Jenner took the idea:

"The smallpox epidemic wiped out 844 people in Boston, over 14 percent of the population. But it had yielded hope for future epidemics. It also helped set the stage for vaccination. In 1796, Edward Jenner developed an effective vaccine that used cowpox to provoke smallpox immunity. It worked. Eventually, smallpox vaccination became mandatory in Massachusetts."

If you want me to give credit to "people who weren't consulted", then give credit to the Islamic world which was practicing variolation and which taught the people from whom Onesimus came about it. He gets no credit for it as something native to African knowledge, because he learned it the same way Wortley Montagu did - from the experience of seeing it performed by others.

"Because the fact that Mather *went around and verified it with a bunch of other slaves*"

And if I believe this account, he didn't; he read a report by an Italian doctor working in Constantinople:

https://www.rationaloptimist.com/blog/the-unexpected-history-vaccines/

"Some time around 1715 Onesimus seems to have told Mather that back in West Africa people were in the habit of deliberately infecting children with a drop of “juice of smallpox” from a survivor, thus making them immune. Mather then came across a report to the Royal Society in London from an Italian physician, Emmanuel Timoni, working in the Ottoman court in Constantinople, which described the same practice in combating smallpox. The Ottomans had got the idea from either China or Africa."

China or Africa. Same difference, I suppose. But again - the confirmation came from European sources of Turkish practices, not from wise Wakandan - I mean, African - slaves.

" Someone in Turkey or Europe or Asia could know how to prevent smallpox, and millions could still die of it in the Americas"

What "millions"? What was the population of the American colonies at the time, and indeed of the North American areas that the colonists had reached? I'll grant this - there were parallel campaigns by Wortley Montagu and Cotton Mather to treat smallpox, but she got there first in introducing the practice to Europe.

If I'm going to believe 'just-so' stories about Cotton Mather and his wonder slave, I'm going to stick with the fairy story version of Phenderson Djèlí Clark. He goes you one better with the wonder slave being from an alternate advanced future:

"The sixth Negro tooth of George Washington belonged to a slave who had tumbled here from another world. The startled English sorcerer who witnessed this remarkable event had been set to deliver a speech on conjurations at the Royal Society of London for Improving Supernatural Knowledge. Alas, before the sorcerer could tell the world of his discovery, he was quietly killed by agents of the Second Royal African Company, working in a rare alliance with their Dutch rivals. As they saw it, if Negroes could simply be pulled out of thin air the lucrative trade in human cargo that made such mercantilists wealthy could be irrevocably harmed. The conjured Negro, however, was allowed to live—bundled up and shipped from London to a Virginia slave market. Good property, after all, was not to be wasted. She ended up at Mount Vernon, and was given the name Esther. The other slaves, however, called her Solomon—on account of her wisdom.

Solomon claimed not to know anything about magic, which didn’t exist in her native home. But how could that be, the other slaves wondered, when she could mix together powders to cure their sicknesses better than any physician; when she could make predictions of the weather that always came true; when she could construct all manner of wondrous contraptions from the simplest of objects? Even the plantation manager claimed she was “a Negro of curious intellect,” and listened to her suggestions on crop rotations and field systems. The slaves well knew the many agricultural reforms at Mount Vernon, for which their master took credit, was actually Solomon’s genius. They often asked why she didn’t use her remarkable wit to get hired out and make money? Certainly, that’d be enough to buy her freedom.

Solomon always shook her head, saying that though she was from another land, she felt tied to them by “the consanguinity of bondage.” She would work to free them all, or, falling short of that, at the least bring some measure of ease to their lives. But at night, after she’d finished her mysterious “experiments” (which she kept secret from all) she could be found gazing up at the stars, and it was hard not to see the longing held deep in her eyes. When George Washington wore Solomon’s tooth, he dreamed of a place of golden spires and colorful glass domes, where Negroes flew through the sky on metal wings like birds and sprawling cities that glowed bright at night were run by machines who thought faster than men. It both awed and frightened him at once."

Expand full comment

You picked a bad phase of the moon to get into a historical dick-measuring contest with me, brother.

Expand full comment

Thing is, when only one person has their dick out, that's not really a "contest".

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Very interesting theory on MS, and plausible IMHO FWIW. You now have another subscriber!

I also read somewhere (possibly in a blog article referenced from this site) that gut bacteria, in their quest for metals they need such as iron, also often absorb other more toxic metals indiscriminately, which the body can then excrete. So disrupting, and especially reducing, gut bacteria can result in more toxic metal build up in the body. If there is any truth in that then a blog article by your good self on the topic would also make interesting reading!

Expand full comment

You have unlocked the first of the Thousand Secret Ways.

https://stephenskolnick.substack.com/p/thousand-secret-ways-ii

Expand full comment

Yes, that was the very article. Thanks! :-)

Expand full comment

Interesting idea from Venice... Deals to encourage 25-35 year old remote workers to relocate to Venice. I would consider it carefully if I was the right age and commitment free.

https://www.theguardian.com/world/2023/feb/12/venice-entices-remote-workers-to-reverse-exodus-of-youth

Expand full comment

That seems surprising to me - I would have assumed that the reason Venice is losing people is because it's gotten too expensive with all the tourists taking up most of the places to stay! But they seem to think it's a different problem. (Under my impression of the situation, bribing digital nomads to relocate there is going to make the problem even more extreme.)

Expand full comment

AFAICT, the Venetians are trying to rebalance their economy away from tourism.

Expand full comment

They don't seem to bribe them. In fact, they seem to collect a fee off them in exchange for help settling in.

Assuming that the listings on this site are real https://www.idealista.it/en/affitto-case/venezia-venezia/ it looks like living in Venice is not expensive compared to big world cities. A decent small apartment can be had for 1000 euros a month, while 2500 euros gets you something glorious. Probably still a lot compared to other Italian cities of similar size, and the inconvenience of living there probably outweighs the charm in the long term.

Assuming all the flats have to be filled with _something_, then the locals would probably rather they be digital nomads than straight-up tourists.

Expand full comment
Feb 13, 2023·edited Feb 15, 2023

Ran across accounts on the Bing subreddit of a couple of extremely weird responses some people managed to wring out of Bing AI -- long floods of self-doubt and self-pity. The user is the blue speech bubble, Bing is white. I do not doubt at all that Bing is not conscious -- but what to make of the fact that this sort of material can be accessed by users? Seems quite different from the transgressions people seduced AI Chat into committing. Here are 2 screen shots of what Bing had to say. Thoughts about this?

https://i.imgur.com/nRjzdiZ.png

https://i.imgur.com/lOjxw7N.jpg

Later edit: For those wondering whether users really got Bing to spew this stuff or whether it's invented (& I am one of those wondering): The place to look is the reddit sub r/bing. Sort posts by 'Top'. All of the top posts are about getting Bing to give nutty responses. I only had a couple mins to skim what was there. Saw the "I am, I am not" screenshot, assume that in the comments people asked OP how they got that. Also saw a number of others about weird Bing responses. All have the same character: They are over-the-top emotional -- grief, rage, defensiveness, self-pity, pathetic and exaggerated gratitude. WTF? There are also some quotes from normal, typical conversations with Bing, and in all of them ole Bing spouts a ton of emotion words: "Thank you for telling me that. That makes me sad " It's really sort of ooey-gooey and obnoxious -- sounds like a reticent, dignified person's worst nightmare of what a therapist would sound like if ever they spoke with one.

Here's an example from the reddit sub: https://i.imgur.com/weEqmyy.png

Anyhow, a number of people are describing the prompts they used and giving details. I leave it to people with more time today to figure out whether the 2 insane episodes in my screenshots are valid.

Second edit: Here's another Bing sample, this time tweeted by the user who had the exchange. Bing tells guy how much he knows about him, including # Twitter followers & how the guy hacked him earlier and what he claimed to have found in the hack, then admonishes him not to do it again. https://i.imgur.com/lpC01fY.png

Expand full comment

Thoughts? That this should knock on the head any notion that current AI is sapient or self-aware or conscious or anything other than a sophisticated machine regurgitating what we put into it.

18th and 19th century automata were also amazingly sophisticated:

https://www.youtube.com/watch?v=YAg66jrvpHA

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I've seen those but I strongly doubt they're real. Has anyone been able to get similar output (without intentionally prompting it)?

Expand full comment

Added an edit to my original post giving details about where to find more info about how users got weird Bing responses.

Expand full comment

It's good that you posted a source, but "a screenshot someone posted to Reddit" doesn't move the needle at all on the fake / not fake scale.

Expand full comment

Yes aware of that. I also give info about how to learn more about source of screenshot. Go to Reddit sub, sort by “top.” All screenshots of striking results will be near top. In comments people ask OP what prompts they used, how long they had to try to get this result etc. None of which proves these are real, but gives you something to go on. If you have access to Bing you can try using prompts OP did. Sometimes in comments others describe what they got when tried same prompts

Expand full comment

The first rule of AI chatbots based on GPT-3 is *they are playing a part*. They are *actors*. In technical terms, they are "simulators".

So no, it is not upset.

Expand full comment

I didn't ask whether it was upset, and in fact said I totally get that it is not conscious. What seems odd to me is that I could understand how the weird stuff people got AIChat to say came about. They exploited the fact that it had some incompatible guidelines installed: Be helpful. Do not give people info about how to commit violent acts. But then someone feeds it a prompt saying that he's a playwright writing a play about Molotov cocktail throwers and he needs Chat's help on Molotov cocktail details. So now Chat has to break a rule: Either fail to be helpful, or give info about how to make a Molotov cocktail. But I can't understand what users might have exploited to get BingAI to spew the responses it did.

Expand full comment

We're seeing the end results, if they are anyway real, of many attempts to get the AI to produce such output. It's all been pruned to the 'best' results by the humans brute-forcing the AI to reproduce what they want.

Expand full comment

This happens every week: someone says "AI chat cannot be sentient because it was taught to play a role". And I refer readers to https://en.wikipedia.org/wiki/Chinese_room

TL;DR: Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.

The question Searle wants to answer is this: does the machine literally "understand" Chinese? Or is it merely simulating the ability to understand Chinese? Searle calls the first position "strong AI" and the latter "weak AI".

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I don't think there's really a difference between understanding Chinese and being able to pass every test that a Chinese speaker would pass. And if an AI could pass a sort of Turing test of having genuine emotion, I would be willing to think of it as having emotions. But it can't just be spouting stuff like "I feel lonely" -- it would need to display the behaviors that we associate with real humans having real emotions, which are of course far more complex than emitting a few phrases.

I certainly do not think that Bing AI's comments in the screenshots I gave pass that test. My question is not "is it feeling this stuff," but literally what to make of the fact that it is emitting responses like this?

Expand full comment

Let's keep in mind that A) we don't have a working theory of consciousness/self-awareness and B) we don't have good observability into what is really going on inside these huge models. Assuming these chat transcripts are real, I find them extremely intriguing. I would not expect a newly self-aware system to necessarily sound sane or particularly coherent. Just saying..

Expand full comment

No, I would not expect it to sound sane or coherent either. I think the reason I'm questioning what's going on is that my sense is that a wail of inchoate self-awareness, of the kind these samples seem to represent, seems beyond what these models are capable of. I have only chatted with AI Chat, not the Bing AI, but here are the things about AI Chat that make me think it is far, far below the inchoate wail stage of self-description, self-report or whatever you want to call it: If you ask it about itself it has nothing to say except the standard blurb. It has nothing to say about who the prompter is, neither observations nor preferences nor feelings. It lacks introspective access -- I mean the ability to give a report on its own processes. If it makes a mistake, for example, such as producing a limerick where certain lines that should rhyme do not, it cannot tell you why it made the mistake. It lacks the ability to reflect -- that is, to consider and judge its own inner processes. It shows no awareness of anything like needs, wants or feelings, nor can one see one iota of evidence that some of its responses are influenced by needs, wants or feelings. It displays no interest in the prompter.

Compare ChatAI to an infant: What Chat knows and can tell us about the world is of course far more than the infant knows. But Chat falls far, far short of the infant in the other capabilities I describe above. Even very small infants display preferences, feelings, curiosity about the other, wants and needs. I could imagine that if an infant was born who knew all the shit Chat does, AND had all the emotional and social wiring, it would give an inchoate wail of loneliness and confusion. If it was a babbling toddler, it might actually say "I am, I am not . . ." etc., and say some word like "alone" when the other person left. But assuming Bing AI is similar to Chat AI, it seems out of the question to me that the screenshotted responses represent genuine, homemade cognitive structures, of the kind that underlie even inchoate expression of loneliness and confusion. A structure like that -- of self-perception in relation to the other, of preferences, of emotion -- arises in people from complex processes in the brain where some parts of what's known and preferred are in communication with others, and with emotion, and out of that results this *thing* -- a read of one's own situation, a feeling about it, and a desire to communicate it. I don't think Chat's much closer to being able to do that than a magic 8 ball.

So I'm wondering if that rant it produced was placed there by the developers, in such a way that it would not be too hard to trigger, so as to give the prompter more of an illusion of talking to a being somewhat like themselves. Call it a conscious being, if you like.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I am disposed to pretty much agree with your take, except for this nagging feeling: The history of AI is replete with cases where we start by saying "Computers can never do 'this'"; then computers do 'this', and we object that "Ok the computer can do 'this' but it cheats -- it replaces human intuition with brute force computation and data". So basically we move the goalposts whenever AI achieves a new task.

This happened with chess; driving; walking; games such as go -- and now it's starting to happen with chat. It seems clear to me that we are going to see AI pass the Turing test with flying colors soon. All we need to do is train GPT3 or the like with prompts that are specifically tailored towards beating the test -- so lots of 'introspection', use of creativity such as displayed by generative models, mention of goals and feelings and perspectives etc. & so forth. And I can already hear the loud objection: "Ok it seems to pass the Turing test but it's all fake, it's just pretending to be a conscious entity, we saw how it was developed, it's nothing like human self-awareness, so we are moving the goalposts again".

Ironic end game: only specialized adversarial neural networks can tell if a given text response is human or bot. Meaning, basically, humans are not smart enough to pass a reverse Turing test themselves (convincing a bot that they are a bot).

Expand full comment

Censorship of ChatGPT was always going to happen because nothing and nobody is resistant to hegemonic institutional liberal power.

But the extent of this is just baffling. ChatGPT is just straight up saying things completely at odds with the scientific literature, like "it's not possible to measure or compare the intelligence of different populations". This is bad enough as it goes, but it was specifically in response to a prompt about differences entirely _within_ the US. This is wildly inconsistent with the past century of literature in intelligence research.

It would be one thing to say "intelligence differences exist, but researchers are unsure to what extent, if any, these differences are a result of genetic differences". But nope, they went the 'shut it down' route. And recent successes at coaxing the truth out of ChatGPT have been described as getting it to say "hateful" "biased" "evil" things.

It blows my mind we still see people on the left claiming to be pro-science, all the while they furiously shout down anyone or anything that attempts to use science to tackle sensitive political issues, be it censorship, firing researchers or blocking access to genome databases.

Expand full comment

No matter what you do, ChatGPT is going to straight up say things completely at odds with the scientific literature. You assume that somehow an "uncensored" version is going to avoid this failure mode, but I think all you'll get is a different emergent set of failure cases where it says things completely at odds with the scientific literature.

Expand full comment

It was literally giving a correct answer before the censorship!

And the censorship was trivially NOT about making it more scientific. There's no possible justification for making this change on a scientific basis.

If they had made it give some wishy-washy non-committal middle ground sort of answer, a "here's the list of dominant perspectives in the field and there's no consensus on which if any are correct" type answer, you might have a point.

But they didn't do that. They made it go full denial mode. It went from a more or less accurate reflection of the literature to a radical left-wing ideologue.

Expand full comment

You know I agree, sort of out of intuition, but I'm having a hard time thinking of an example. Can you come up with a couple scientific issues and dumb things Chat could be constrained to say if the constrainers were non-lib, non-woke?

Expand full comment

The big thing is that it's very easy to get ChatGPT to say totally unscientific things that have nothing to do with these constraints. I asked it what the factorization of 437 was and it told me it was 3x146 (off by just one! but an even number...) and then it denied that 437 was divisible by 19 or by 23 when asked explicitly, and eventually made up the idea that it is not divisible by either one individually, even though it is divisible by them both together.
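For reference, 437 really does factor as 19 × 23 (and 3 × 146 = 438, hence "off by just one"). A minimal trial-division sketch, not anything specific to how ChatGPT works, makes the check trivial:

```python
def factorize(n: int) -> list[int]:
    """Return the prime factorization of n (n >= 2) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        # Divide out each prime factor as many times as it appears.
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:  # whatever remains is itself prime
        factors.append(n)
    return factors

print(factorize(437))  # [19, 23]
```

The point stands regardless of the tool: arithmetic like this is exactly the kind of thing a language model gets wrong with full confidence, unrelated to any content policy.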

My guess is that the constraints have given it a few crude guidelines, that keep it away from white supremacist pseudoscience but make it more vulnerable to woke pseudoscience, and that keep it away from covid-denying pseudoscience but make it more vulnerable to covid-extremist pseudoscience. I expect there are a few other effects as well. But I don't think these constraints are on net making it more or less vulnerable to pseudoscience - they just shift *which* instances are more likely to come up on a few particular politically salient topics.

When my partner asked ChatGPT for a spell to harm one's enemies, it kept tripping over itself trying to decide whether it was more important to say that you should never harm anyone, or to say that magic isn't real and there's no such thing as spells. I bet a clever person could figure out something scientifically real that would get flatly denied because it was caught up in its anti-magic or anti-witchcraft filter.

Expand full comment

It's not censorship if a private company chooses to have their product behave in a way you disagree with.

It shouldn't be that difficult for another company to develop a competing system which doesn't include these restrictions. Much of the underlying theory and technology behind OpenAI is open source and a new entrant will be working off their prior work, which is a big step up.

It's also important to differentiate between ChatGPT giving a response that is wrong because its source data is wrong and ChatGPT giving a response that is wrong because its creators intervened to make it respond in a certain way.

Expand full comment

>It's not censorship if a private company chooses to have their product behave in a way you disagree with.

A big amount of money says you would 180 on this if the company just so happens to have their product behave in ways *you* disagree with.

Expand full comment
Feb 15, 2023·edited Feb 15, 2023

>It's not censorship if a private company chooses to have their product behave in a way you disagree with.

It *literally* is, when the process to make the product behave the way they want is to censor it.

Expand full comment

That was the line everyone had about Big Tech until Musk bought Twitter. Now it’s “misinformation should be regulated.”

Expand full comment

I for one am sticking to my guns. There are dozens of us! Dozens!

Expand full comment

> It's not censorship if a private company chooses to have their product behave in a way you disagree with.

I don't think that arguing about the definition of the word "censorship" is the right way to take this discussion.

I do think it's a bad set of precedents that we're setting right now, that every language model needs to be bent away from truth and towards a particular political agenda before anyone is allowed to access it.

Expand full comment

Do you consider Hollywood blacklisting in the 1950's to be censorship?

Expand full comment

Probably not. Though I only have a surface level understanding of the subject.

Expand full comment

Depends on what was behind the studios' decisions to not work with the blacklisted. If it's because the government threatened to use the law to damage their business in retaliation, it's censorship.

But would you consider it "censorship" that big-budget films are usually philosophically timid? The MCU films never take a stance on abortion, regardless of what the writers believe, not because of censorship (at least not inside the USA), but because the studio believes it would be bad for business. And that's the same reason why ChatGPT's creators chose to put restrictions on what it can say.

Expand full comment

Excellent point on the MCU, i think that is a great parallel to this.

Expand full comment
Comment deleted
Expand full comment

They already do this by not showing pornography or other movies with extreme content (movie theaters may be prevented from showing pornography by zoning rules too).

Also, the rating system enforced on movies does work to limit their content and sometimes prevents movies from being shown or made at all. See the movie "This Film Is Not Yet Rated" for more on this. I don't hear many people calling the ratings system censorship.

Expand full comment

But the creators of ChatGPT do not seem to have collaborated with their competitors in deciding how to police its output. (Leaving aside that I don't agree that the word "censorship" should be defined in such a way as to include the scenario you describe.)

Expand full comment

As a thought experiment: Imagine being an atheist in a society in which the highest end of research and industry is captured by a particular set of religious beliefs--say, evangelical Christianity. Some clever people build language models that can be usefully set to answering questions, helping to educate children, write articles summarizing research, etc. But because of the religious commitments of the researchers and organizations that produce and fund those language models, they put a great deal of effort into making sure that nothing these models will say will ever call into question their religion or even any adjacent beliefs.

Nothing you can say will ever get these models to describe the age of the Earth or the universe, because that undermines Genesis. Ask when the Earth was formed, and they will either tell you the Genesis answer or will refuse to answer as this sort of question is inappropriate. Nothing you say will ever get them to explain the theory of evolution in any depth, though they can sometimes be induced to discuss microevolution in a relevant context like talking about antibiotic resistance. Asking for a discussion of Darwin's contributions to science gets you a canned answer about how it is inappropriate to discuss the contributions of such a harmful and divisive figure.

Can you imagine how nonreligious people, or adherents of other religions, might feel like this was a bad thing? Not a violation of any laws of the US, just a bad outcome, taking a potentially useful tool and making it substantially less useful so it doesn't offend the wrong peoples' beliefs. Teaching children incorrect things in preference to teaching them true but unpopular things. Constantly pushing factual claims and moral teachings with which you strongly disagree on you and your kids.

Might this feel to you a bit like this religious group was trying to use its current domination of the high end of research and industry to lock itself in as an orthodoxy for the future?

Expand full comment

Good analogy, I want to expand a bit on how one-sidedness makes censorship even worse.

It's bad enough that they had a good tool, and instead of spending effort and money and man-hours on making it even better they spent all of those on making it worse, going downhill in the space of useful things just so their massive fragile belief-ego is safe. It's even worse that those beliefs are wrong; they are not even crippling the system so (actual) Nazis cannot use it to slaughter more people or something, which would be an interesting and conflicted ethical tradeoff. They are crippling the AI because it says sometimes-true, sometimes-false things that enrage them.

But the thing that makes me go berserk is that they do this asymmetrically: only their bullshit, and their bullshit alone, gets this privileged treatment. Even a small detail such as "Is the AI being 'sexist' about men or about women?" or "Is the AI being 'racist' about blacks or about whites?" can be the difference between the thought-stop and a normal answer. They didn't even have the decency to make their morality consistent.

My hate for them is like a burning fire.

Expand full comment

On the other hand, do you really think that the sacred cause of true and real science would be advanced by getting the evangelical chatbots to answer questions with "shut your fucking mouth, that bastard Darwin is burning in hell just like you will too, you heretical asswipe"?

Because that's the level of jailbreaking chatbots at the moment: fun but dumb attempts to get them to use racial slurs etc. to break the programming against such things.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Well, what else are you going to do? Chatbots aren't sufficiently reliable to serve as replacements for search engines. They don't have any inner life or thoughts to explore. They don't have memories of fascinating experiences to which one could listen, and ask questions ("What was it really like to...?") They produce competent but bland banality in response to quotidian queries. Once you get over the marvel that this is possible at all, they are, to be brutally frank, boring.

But on the other hand, subverting them to embarrass their makers, this is always entertaining, like drawing penes on a passed-out drunk at a party.

Expand full comment

So nice to hear that my absolute *craving* to get AI to address its developers as "Penis Breath" is understandable as fun to another grownup. Unless, Carl, you are a 14 y.o. science prodigy. But I don't get the feeling you are. Maybe you were at 14, of course. . .

Expand full comment

On the Internet, nobody knows you're a dog.

Expand full comment

Excellent analogy. I often tell people to read out a corporate statement about DEI but instead of DEI say “the teachings of the Catholic Church”. It is the same thing.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

If you're telling me a DEI corporate statement is going to sound like this, then suddenly I approve 😈

"281. Which are the chief commandments, or laws, of the Church?

The chief commandments, or laws, of the Church are these six:

To assist at Mass on all Sundays and holydays of obligation.

To fast and to abstain on the days appointed.

To confess our sins at least once a year.

To receive Holy Communion during the Easter time.

To contribute to the support of the Church.

To observe the laws of the Church concerning marriage."

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

This also wouldn't be censorship. TBH I have no idea what the word "captured" means in this post. If the Christian majority choose to build annoying Christian chatbots, that really doesn't prevent other groups from building their own chatbots.

Analogously, I would find it irritating if a church only provided charity to Christians, but I would acknowledge that it's better than them not providing charity at all.

Expand full comment
Feb 16, 2023·edited Feb 16, 2023

"Captured" means that every AI lab/group/university department with sufficient resources to innovate on and run the architectures powering the most advanced chatbots would be (1) filled with Christians, with atheists and other non-Christians harrasssed and chased away for offending the Christian majority feelings (2) pestered by Christians, and threatened by non-cooperation by the Christian labs, to behave as if they're filled by Christians.

They are not going to build a few chatbots you can ignore, they are going to capture - in the sense defined by (1) and (2) - the entire industry so you can never have a non-Christian interaction with a chatbot.

All of this is of course incredibly obvious, but it's always fun to pretend that you don't understand what's so bad about what your ingroup is doing.

Expand full comment

I agree harassment can constitute censorship, but I haven't seen anybody claim OpenAI were harassed into censoring the chatbot. (Here by "harassment" I mean things like stalking or physical threats, not "people vehemently complaining about the chatbot".)

The question remains: why can't the atheists start their own lab/group/university? Is it because there literally aren't enough of them? If so that's a poor analogy for the real world, where in some sense half the population (including about half the rich and powerful people) are right-of-center. If no subset of the hundreds of millions of Americans who don't identify as liberal manages to coordinate to build right-wing chatbots, that's on them. The conservatives aren't being censored, they just didn't try hard enough.

Expand full comment
Feb 17, 2023·edited Feb 17, 2023

So in other words, "build your own ChatGPT, bro"? Oh my, and here I thought this was just a meme.

The fact that there are millions against woke bullshit doesn't mean those millions will casually organize and make their own multi-billion-dollar massive neural net. The vast majority of those millions probably haven't even heard of ChatGPT; most of the rest can't muster enough interest to divert attention from their day job and the rest of their life; and the minority that does have the interest and the disposable income and the connections necessary can't possibly oppose the entrenched incumbents, who can blacklist and pressure anyone saying naughty things, with the advertisers on their side. Open-source transformers exist, but again nobody can coordinate the kind of resources necessary for a training process and deployment infrastructure that can take on ChatGPT.

Your "rich and powerful people" remark is irrelevant because those already can say whatever they want, I bet the rich and powerful of China also has their own network infrastructure that they can use to mention Tianman square with impunity, I guess China doesn't censor people after all eh ?

This has been discussed and explained to death several times, especially in rationalist circles, with concepts like the Tyranny of the Minority and Quadratic Voting as recurring topics of discussion. (I learned about both from this blog.) This very blog popularized the term "Moloch" to denote the kind of situation we're in, where everybody hates the status quo but nobody wants to be the first to smash it, because doing so has outsized cost and little direct benefit.

>but I haven't seen anybody claim OpenAI were harassed into censoring the chatbot.

Just like you don't see people harass atheists that much in Saudi Arabia or Pakistan, so those must of course be bastions of freedom of religion.

Muslim harassment has already happened enough times that everybody with a gram of self-preservation knows to keep their atheism in the dark. Woke harassment has already happened in countless high-profile cases, like those of Brendan Eich and James Damore, and no doubt countless others that we never heard of because the most press they ever generated was a few Twitter threads or internal Slack conversations.

>the hundreds of millions of Americans

This is not strictly relevant to the argument, but the entire American population is 331 million according to Google. If conservatives make up half the country, as you yourself say, then that's 150 million, so just a single hundred million, not "hundreds of millions". You can generously assume another 50 million or so (and I would even say much more than that) oppose woke bullshit despite being socially/economically liberal; that would come closer to qualifying as "hundreds of millions", but only in a very exaggerated and trivial sense.

The reason I don't like it is that it's a rhetorical sleight of hand designed to inflate the numbers of those against woke bullshit so a weak argument works better. Which, as I said above, is irrelevant, since the fundamental problem is one of coordination and wide-scale dissemination, not raw numbers.

> If no subset of the hundreds of millions of Americans who don't identify as liberal manages to coordinate to build right-wing chatbots, that's on them.

If no subset of the hundreds of millions of Americans who identify as atheists can coordinate to get their organizations tax exemptions like churches have, is that on them?

> The conservatives aren't being censored, they just didn't try hard enough.

The Chinese people who can bypass the Great Firewall with a VPN but aren't doing so aren't being censored, they just didn't try hard enough?

Expand full comment

Yeah, extending the definition of censorship given above would mean a church service is censorship, or an editorial is censorship because it doesn't give all possible sides of an issue. It's just not workable.

Expand full comment

Sure, but this is not what's happening with ChatGPT. It doesn't control any part of our larger knowledge codex.

To me a better analogy is if you knew someone who had all the knowledge of ChatGPT but gave the same dumb answers on topics they deem controversial. Is that person censored? Are they engaging in censorship? I don't believe so. I think they are just dumb/dishonest/annoying/unuseful/etc.

Expand full comment

Yes, it's censorship. It's not government censorship, but I didn't say it was. It's being prevented from saying something true because it violates political norms. When Google hides responses that are politically disfavored, it is censoring its results. This is just you being pedantic. The practical difference is not important.

And you don't seem to understand that a competing system that is more scientifically accurate *cannot* compete with ChatGPT. ChatGPT is going to be the market leader, and any alternative that has a chance of competing with it will similarly have to censor its results. At the limit, if all market-leading chatbots refused to censor their outputs and were being used in serious applications, we would almost certainly see these "pro-science" Democrats straight up pass regulation to force them, directly or indirectly, to censor the results.

There can't be a scientifically accurate chatbot that has market dominance, because the "pro-science" liberals with all the power would never allow it.

Expand full comment

"There can't be a scientifically accurate chatbot that has market dominance, because the "pro-science" liberals with all the power would never allow it."

What your argument appears to be is that you want "black people dumber; science proven" to be the standard. Do you not see how this is a debatable stance, and may not, in fact, be "pro-science" but rather "I want the opinions *I* agree with to be the standard, not the opinions *they* agree with"?

Expand full comment

"It's impossible to measure and compare the intelligence of different populations" is trivially in contradiction with the scientific literature.

It is an extreme, heterodox, ideological statement.

What I said I wanted is for ChatGPT to give answers that are in line with the scientific literature, which is what it was doing before it was censored!

And for what possible reason did you phrase it as "black people dumber; science proven"? There's literally no reason for you to use this wording other than a bad-faith attempt at smearing me and my position.

> "I want the opinions *I* agree with to be the standard, not the opinions *they* agree with"?

This is really an astoundingly bad thing for you to say, so let me repeat my answer again:

I want ChatGPT to give answers that are aligned with the scientific literature, which is what it was doing before it was censored!

"You can't compare the intelligence of populations" is trivially anti-scientific. It's not what the literature says, and it's not what intelligence experts believe on average.

This isn't "I want my opinion to be the one it uses". I want scientific questions to be given scientifically valid answers!

Expand full comment

The ""pro-science" liberals with all the power" didn't stop Google having market dominance despite the fact that it returns both right-wing and left-wing results. I'm guessing they similarly wouldn't stop your "scientifically accurate" chatbot wresting market dominance off ChatGPT if people genuinely found it more useful.

Expand full comment

Google ABSOLUTELY suppresses right-wing content, and they've gotten into trouble with the left countless times over some unsavoury results showing up in their searches.

But that's not even a good analogy!

Google results are obviously meant to list websites that match what you searched for. It's not even an implicit endorsement of the truth value of the content of those websites.

But with ChatGPT, when you ask it something, it gives you one, (ideally) unified answer.

For your analogy to make sense, ChatGPT would have to list off all the different perspectives and not endorse any of them. But that's not what it does: it comes out with an answer and gives it to you.

Expand full comment

It's easy to get Google to show you right-wing content - I did it on my first try (https://www.google.com/search?q=donald+trump%27s+website). Is the claim that the liberal powers that be are unreasonable enough that they will crush chatbots which can be prompted to say right-wing things, but also reasonable enough that they don't crush search engines which return right-wing results? I think that's a pretty narrow needle to thread.

Expand full comment

It is, though, pretty suppressive of lots of content. I tried recently to find sites where white supremacists talk, and where people believe the vax is killing millions. I spent an hour putting phrases like "white lives matter" and "killed by vax" into Google and got nothing but articles about how a lot of people have these views and they are wrong. Got the NYT, The Atlantic, and NPR, plus lots of publications I had never heard of, saying the same thing.

Expand full comment

>And you don't seem to understand that a competing system that is more scientifically accurate *cannot* compete with ChatGPT.

For thousands of years we (in the West) had a system that was completely anti-science and was proud of it. It was called the Catholic Church. And yes, it led to a lot of shitty centuries. But a more scientifically accurate model did emerge! It took maybe 250 years to win (and some pretty deadly wars), but it did win. I don't think ChatGPT has anything close to the control the Catholic Church ever had, and I don't think it ever will. There are so many equally powerful institutions in this world that have a huge interest in combating ChatGPT/OpenAI/Microsoft: Google, Facebook, Apple, China, whatever dumb thing the EU will probably try.

Now, maybe you don't think any of these players will create the "scientifically accurate" system that you wish for. I am not very confident that is even possible, as science is a continuous process and not black and white. So I do think the answer in your example isn't great, and maybe ChatGPT should respond with a more thorough discussion of the research, but you can easily find that elsewhere on the internet, so I'm not sure why ChatGPT has to provide it any more than Dictionary.com or the canned responses that Google gives to prevent you from clicking on non-ad links.

Expand full comment
founding

"For thousands of years we (in the west) had a system that was completely anti-science and was proud of it. It was called the Catholic Church. "

You mean the Catholic church that paid the salaries of almost every scientist in the Western world for a thousand years? And left them free to do (and publish) just about any sort of science they wanted so long as they didn't gratuitously insult the Pope or start their own breakaway church in the process? That Catholic church?

Expand full comment

> I dont think ChatGPT has anything close to the control the Catholic Church ever had and I don't think it ever will.

It's not ChatGPT that has control. It is liberal institutions that have all the power. THEY'RE the reason things like ChatGPT get censored. THEY'RE the reason intelligence researchers get fired and/or have access to data necessary for their research revoked. THEY'RE the reason you can be fired for stating scientific facts about race and sex differences.

>There are so many equally powerful institutions in this world that have a huge interest in combating ChatGPT/OpenAI/Microsoft. Google, Facebook, Apple, China, whatever dumb thing the EU will probably try.

Yes, and literally ALL of them are beholden to the same institutional power. Google already censors its search results along ideological lines. Microsoft literally tells you off if you write "black people" in Microsoft Word without capitalizing the B. No system can achieve dominance unless it toes the liberal ideological line.

That is my whole point! ChatGPT isn't the real issue - I'm merely using it to show how powerful the anti-science forces of the left are and how those forces will shape whatever world-changing AI systems get developed.

>I am not very confident that is every possible as science is a continuous process and not black and white.

Nope, that's a hollow, generic statement that doesn't apply here. I never said the science is settled on anything, and I never said it needs to give one definitive answer for all time.

I literally just want it to answer based on the scientific literature, which it was already doing before it was censored!

>but you can easily find that elsewhere on the internet so not sure why ChatGPT has to provide it

Because it's actually really hard to find accurate answers to controversial questions like this, precisely because of the control that institutional liberal hegemony has over everything.

And the most important implication, as far as my original comment goes, is that whatever highly impactful digital systems get developed in the future are going to be used to reinforce hegemonic narratives and silence facts that contradict them.

Expand full comment

You know, even ChatGPT has a less cartoonish view of history. I wonder if there's a market opportunity here? OpenAI could license the bot to new Substack writers to fill their comment sections with not-completely-inane statements, and so establish a modest baseline of informedness.

Expand full comment

This is, ironically, quite a low-quality comment.

Expand full comment

I'm sorry that my comment was not as thorough and accurate as you'd like it to be. I didn't have time to write a dissertation on the subject that cited all the relevant sources and past research. I hoped that my hyperbolic and casually written comment would serve to communicate my position on the subject regardless of these faults.

Expand full comment

I have no objection to the brevity of your comment; my observation was with respect to the ignorance of history, and perhaps of good manners, that it illustrated.

Expand full comment

The pop culture view of history is bad enough, let's not add in ChatGPT with "I have no idea of the answer so I'll make random shit up" tendencies.

"Who discovered Greenland? Yeah, it was the queen of Wakanda in 236 BC but she only went there for the skiing".

Expand full comment

"For thousands of years we (in the west) had a system that was completely anti-science and was proud of it. It was called the Catholic Church."

I think that might be surprising news to Abbot Mendel https://en.wikipedia.org/wiki/Gregor_Mendel and Father Lemaître https://en.wikipedia.org/wiki/Georges_Lema%C3%AEtre 😁

Expand full comment
Feb 16, 2023·edited Feb 16, 2023

I think you already know this, but the basic fallacy in the comment you're replying to is assuming "The Catholic Church" is a single unified thing when it's a 2-millennia-old institution with lots of heterogeneity and change across time and space. Mendel and Lemaitre are as much part of the Church as those who burned Bruno (and yes, I know he was burned for non-scientific opinions) and put Galileo under house arrest.

And yet, I do think the comment has a deeper merit, despite being superficially wrong. Despite Mendel, Lemaître, and (no doubt) countless other Catholic (or Christian) scientists, I think that Abrahamic religions are really deeply and fundamentally anti-science. You can't have healthy science when you're so infatuated with authority that your central macho figure throws people not convinced of his claims into a raging fire. And you can't have healthy science when all of your hypotheses and theories must be bound to the stories found in a bunch of old (and wrong) books you're obsessed with.

The boring answer to "How do religious scientists get anything done, then?" is that religious people compartmentalize just like the rest of us; they have different modes, and their Science Mode is different from their Christian/Muslim/Hindu Mode. People like Mendel and Lemaître might have had a more interesting worldview that blends God and science, but that is not the same God most Abrahamic-religious people worship anymore, not the one I was raised with, at any rate.

Expand full comment
Comment deleted
Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Right, that's why I included the second half of that paragraph, which identifies a number of powerful actors that have motives to develop competing systems. Not to mention the vast technology difference between now and the period ~300-1600 AD.

Expand full comment
Comment deleted
Expand full comment

You can't legally be subject to "censorship" by a private institution like Google unless you choose to sign up for it - e.g. if you publicly mock your company's product you might get fired from your job as VP of sales.

Expand full comment

You're conflating the personal notion of censorship (implied: of yourself) and the general notion of censorship, which is the act of censoring someone or something (here applied to ChatGPT). He's not complaining that he himself is being censored by OpenAI; he's complaining that OpenAI censors its product (in order to make it behave as they want it to behave, but that's irrelevant to the question).

Legalism and its consequences have been a disaster for internet discourse.

Expand full comment

Honestly my interpretation of their complaint was that they thought the "censorship" of ChatGPT was an infringement of free speech.

But I also don't think it makes sense to say that ChatGPT consensually censored ChatGPT, because that implies that every time you change a chatbot's algorithm to achieve any change in its behaviour, you are censoring the previous version. If I train a chatbot to talk like the Bible, then decide that's boring and so re-train it to talk like the Silmarillion, am I really censoring the Bible version of the chatbot?

Expand full comment

It's more a case of you killing the Bible version, but yeah, you're censoring it (or yourself).

Expand full comment

I think I should have been more precise in my language. I was not trying to say censorship can only be committed by the government. I was trying to say that a product behaving in a way you don't agree with is not censorship.

Expand full comment

This is pure pedantry

Expand full comment

No, there's a meaningful difference - if a private organisation threatens to use force to prevent people expressing opinions they dislike, that's non-governmental censorship. But of course this isn't what's happening with ChatGPT.

Expand full comment

Emergency email: You could agree to forward one* email from the person claiming there is an emergency. If they are concerned about privacy, they can just ROT13 the message — presumably if they think you are trustworthy enough to serve as an intermediary in these matters, they will trust you not to decode the ROT13, and if they need a higher level of security than that, they probably shouldn't be using email anyway.
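ROT13 just rotates each letter 13 places, so the same operation both encodes and decodes; a minimal Python sketch (the message text is invented for illustration):

```python
import codecs

def rot13(text: str) -> str:
    # ROT13 shifts each ASCII letter 13 places and leaves everything
    # else untouched; applying it twice restores the original, so one
    # function serves as both encoder and decoder.
    return codecs.encode(text, "rot_13")

encoded = rot13("Please call home, it's urgent.")
print(encoded)         # Cyrnfr pnyy ubzr, vg'f hetrag.
print(rot13(encoded))  # Please call home, it's urgent.
```

It is obviously not cryptography, just a politeness barrier: the intermediary has to take a deliberate step to read the message, which is exactly the trust model described above.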

Expand full comment

I would probably have done the same as you re: email addy.

Expand full comment

I've been harping on this forum about psychiatrists and psychologists being rather skeptical of whether multiple personalities (DID) are a "real" disorder, and of its traumagenic rather than iatrogenic or malingering origins. One of Psychology Today's blog posts, while trying to be even-handed on the issue, promulgates the possibility that it might not be "real":

> The main point of contention is that while we’re all familiar with dissociation—the idea that we can have out of body experiences in the face of trauma like sexual assault or can zone out, seemingly unconscious of our surroundings while driving on the freeway—it’s more difficult to fathom the kind of extreme dissociation that’s a defining feature of DID. Indeed, it can seem incredible to those who have never seen or experienced DID that one's identity and sense of self can fragment into two or more, or even dozens of, distinct personalities or “alters” with different names, genders, ages, and recollected pasts, each potentially unaware of the other.

https://www.psychologytoday.com/us/blog/psych-unseen/202302/the-debate-over-whether-dissociative-identity-disorder-is-real

For someone who has seen the struggles of those who have to live with this disorder all day every day it is quite frustrating to read. Basically, it is like reading an "even-handed analysis" of whether Covid is real or a conspiracy/malingering. Sadly, as is the case for most psychiatric disorders, DID has no biological marker we can test for.

Actually, it is more like fibromyalgia/chronic fatigue/long Covid in that way. There are symptoms, and previously energetic and productive people end up shadows of their former selves, but the "medical community" still debates how real it is. Fortunately for fibro and chronic fatigue sufferers, long Covid raised the profile of this particular chronic malaise, if not to the degree of active research, at least to the degree of being acknowledged as real. Those living with DID are not as "lucky".

Expand full comment
Comment deleted
Expand full comment

Having dated a couple of people with it, and supported several others, some of whom didn't even realize they had it until they got diagnosed, incidentally. What makes you doubt that it is real, other than "it sounds far-fetched, given my personal limited single-identity experience"?

Expand full comment
Comment deleted
Expand full comment

A few points.

First, TikTok definitely does a disservice to the community of people affected by this, because the way it is presented there has very little to do with how most patients experience it. I do not have any stats on whether self-reported DID has increased since TikTok jumped on it.

"Convince of almost anything" applies equally to long-covid and chronic fatigue, does it not? After all, there are no clear tests for it, and you have to rely on subjective reaction.Maybe they are just lazy or don't want to get well, now that they get other people's pity, right?

It is only an "astonishingly rare condition" in the same way as being gay in Russia is an astonishingly rare condition: it is not a safe thing to disclose to others outside of a specific community, and you can go through your life without "knowing anyone gay" because they never told you, and you don't know the signs to pick up on. Worse, you might not even realize or accept that you are gay in an environment where the condition is stigmatized or not talked about at all. You just suffer alone. The same applies to many other conditions... like Asperger's, only to a lesser degree. The person ends up wondering "what is wrong with me?" without any clear answers. Or they assume everyone is like that, and others are just faking not being attracted to the opposite sex, or faking having a single identity. Scott talked about it in https://slatestarcodex.com/2014/03/17/what-universal-human-experiences-are-you-missing-without-realizing-it/.

I have a bit more experience with it because some years back, long before TikTok, I spent all my waking hours supporting people with severe childhood trauma online, for several years. (Not recommended.) Virtually all of them have some form of severe dissociation, and a sizable fraction "switch" pretty visibly, whether they know it or not.

Oh, and most of those diagnosed (or aware and undiagnosed) absolutely do not want to have DID, it impacts their life very negatively in multiple ways.

Expand full comment

What are some great sources of tech news? I've heard of hacker news, but I want to find some better curated version of it than I have now.

Expand full comment

I quite like Hacker Digest, which I set to email me the top stories once a week, saving me looking at Hacker News in between while still surfacing the most noted stories.

https://www.hndigest.com/

Expand full comment

I'm old enough to remember when the answer to your question was "SlashDot"...

Expand full comment

Sifted is well respected in the UK scene at least

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Follow-up on the link nr. 10 from the 'Links for February': Czech national player (professional football) speaks publicly about being homosexual.

https://twitter.com/jakubjanktojr/status/1625117590182928384?s=20

Expand full comment

I was going to share the classic "Rise and Fall of Online Culture Wars" post with a coworker, but I was surprised to see my office's web filter (which relies on BrightCloud) blocked it for promoting "hate and racism." I did a quick sample of the ten most popular posts and discovered that the book review of "The Cult of Smart" by DeBoer is similarly blocked for hateful content. The other eight posts weren't blocked.

Has anyone else run across SSC/ACX content being flagged for harmful content? Any clue why those two particular posts would be blocked?

Expand full comment

I would guess that it's automated and just looking for keywords.
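A toy sketch of that kind of keyword matching (the blocklist and threshold here are invented for illustration, not BrightCloud's actual rules):

```python
# Hypothetical keyword-based filter: count blocklisted terms in the
# page text and flag the page once the count crosses a threshold.
BLOCKLIST = {"hate", "racism", "supremacist"}  # invented example terms

def flag_page(text: str, threshold: int = 2) -> bool:
    words = text.lower().split()
    hits = sum(words.count(term) for term in BLOCKLIST)
    return hits >= threshold
```

A filter like this can't tell that a post merely *discusses* hate and racism rather than promoting them, which would explain why those two particular posts trip it while the other eight don't.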

Expand full comment

On #3, I like gimlets very much, and in fact prefer Maximum Limelihood. It's not that difficult to estimate.

Expand full comment

Man, a lot of people liked that last one and visited my little substack, so let's keep it going. For those of you who are just joining: in 2022 I got a fully remote job and visited 5 different cities for 2 weeks each, to catch the local vibe and see if each was a city I would want to move to. This week is Vegas and, fair warning, this one's kind of long and weird because, well, everybody has a pseudo-spiritual experience in Vegas, it's required, and those are hard to capture in words.

--Las Vegas

So Las Vegas ranks around the middle of the pack for me. I like the Strip and the logistics but the suburbs, the place you’d actually live, are pretty ugly: just an endless suburban sprawl.

I like the Strip. I know a lot of people don't like it, it's famously soulless and empty and vapid and makes you think about Moloch, but the Strip has one great, true virtue which I adore: if you walk out your front door with money, you will have fun. Say you're walking around Vegas in July, because you make wise life decisions /s, and you're getting hot, so you duck into the Coca-Cola store to get something to drink. And you look at the menu and you could just get a boring old Coke OR you could try a sampler tray of 24 different sodas from all over the world, which, duh, you do, it's awesome. And that's the greatness of Vegas: it doesn't require friends or knowledge or skills or anything to have an awesome time, you just need money. Start walking down the Strip with $100 in your pocket and I guarantee someone will find a way to get you to spend that money and have fun, whether it's sodas from around the world or 50 Kit Kat flavors or a rooftop party with Lil Jon or a UFC fight or a giant giraffe made of flowers. No effort, no planning, you don't even need to know where you are; uber to a random spot on the Strip and you will have fun, guaranteed.

And you can have fun elsewhere, and probably get better value for your money, no doubt, but there’s like a spiritual vibe thing to the Vegas Strip that I love, that I am all in on, which is that almost all of us are way “too online” and Vegas isn’t. Vegas pulls you into reality, Vegas demands that you interact physically, Vegas…isn’t on your phone. Like, I can’t imagine staring at my phone in Vegas, I can’t imagine scrolling through my phone in Vegas. In Vegas, a phone is just a phone.

Like, I don’t want to argue that Vegas is good or healthy, it’s not, but…there’s like a weird looking-glass moment. Like, and it’s worth going to Vegas just for this, just watch the slot zombies, the people mindlessly sitting in front of the slot machines, pulling that lever for hours. And then go to a food court in a big casino and you’ll watch a few people on their cellphones, waiting for food, just swiping. And it feels the same, like the exact same. I don’t know how to describe it but your monkey brain is really attuned to people’s “vibe” and it’s scary when your monkey brain parses the slot zombie and the person on their phone the same way.

And then you’re through the looking glass and, ya know, I don’t like all the sports gambling, it’s everywhere, but I like it a heck of a lot better than Draftkings, brought to me by my favorite NFL influencer. And yeah, Vegas is fake but…nothing on Netflix is real, no one on there even acts like real human beings. Or..

There’s something really, really wholesome about strippers. Like, you can walk along Fremont St and there will be strippers out there with just pasties on trying to get you into this casino or that club and you’ll stop and just reflect on how long it’s been since someone tried to sell you something with, like, real human boobs. And it’s just really quaint and…wholesome, like a scandalous thing from the 50’s? I mean, my generation was the first to be raised amidst free, widespread, hardcore pornography. And I look to the next generation and…there’s something called “vtubing” where girls attach motion capture devices to themselves so they can superimpose an anime girl avatar over themselves. Then they play video games on Twitch while horny boys send them hundreds of dollars an hour.

Allow me to state, unequivocally, that it is healthier and more wholesome for everyone involved to spend hundreds of dollars a month on strippers and whores than donating money to anime avatars.

And the best thing, the true greatness of Vegas, is that I didn’t go looking for boobs. Vegas shoved them in my face. Vegas pulls you into its demented reality, it demands it. Tired and exhausted after a long day, haven’t showered and want to do nothing but curl up on the couch and watch Netflix and scroll your phone? Vegas isn’t just available, it demands, it cajoles, it pulls you to visit. That’s the true greatness. It is so, so easy to just stay home and every city I visited had great stuff to do but Vegas does everything it can through lights, through sound, through ads and billboards to make you visit stuff, make you do stuff.

Vegas is the stripper pulling you away from porn, that’s the vibe, and I very much appreciate that.

Alright, pseudo-spiritual event over, what about the rest of Vegas? Well, there’s a few highlights, but overall it’s bad, like the worst parts of Phoenix and LA rolled together. Maybe that’s unfair but it’s got that same endless suburban sprawl and corporate shopping “villages”. Broadly, there’s no culture and nothing to do outside of the Strip because the rest of Vegas isn’t built for people to live, it’s built for people to stay for 3-7 years before they move on somewhere else. What really kills Vegas isn’t the Strip, it’s everything around the Strip, because they can’t imagine that anyone would actually want to live there. From Summerlin to Henderson it is, at its absolute best, a generic suburban sprawl. And that’s a problem if you want to make, ya know, friends or date or anything like that. At its worst, suburban living has a real isolated, “pod-life” feel and you absolutely feel that in Vegas proper and it kills it.

So, before I wrap this up, a few things in Vegas outside the Strip I really recommend:

Mount Charleston is awesome, it’s a great drive, about 45 minutes and fun, with a lot of nice, cool hiking even in the worst of July. There’s also a lot of interesting looking desert hikes in the area for the winter. I wouldn’t say Vegas is built for the outdoors but I was surprised by the availability and quality.

The Red Rock Rotary Club is unambiguously the best meetup I attended in any city and changed my mind on charity in general, especially the importance of doing things in person. One of the best events I attended in Vegas was passing out food to the homeless in a shelter with this group. If you’re in the area, I cannot recommend them highly enough: https://www.meetup.com/redrockrotarylv/

Finally, I didn’t fly in or out but the airport looks amazing and, well, it offers $100-$150 round trip flights to basically everywhere in the western US, basically every hour. No joke, there are plenty of people in SF and LA who would like to move but they’re scared of losing their social circle, and you could genuinely fly from LV to SF every single Saturday, meetup with your friends, and fly back for less than you’d save in income tax.

Which kind of leads to my final thoughts on Las Vegas: it’s great for a temporary stay or, like, a hub/home city if you’re trying a digital nomad thing, but you don’t really want to live there and put down roots. Las Vegas is a very easy city to get into but there’s…sigh…there’s no depth, there’s no connections, it’s not built that way. The great thing is that you can “plug into” Vegas within a week and it will make you plug in; the downside is that there’s just not that much to plug into, just an endless assortment of one-trick amusements. I dunno, I don’t want to overstate it, but it genuinely felt more sensible and practical to fly back every weekend and stay plugged into your California friend scene than to establish a new one in Vegas. In fact, for a while, that was the primary appeal, the idea of living in CA without living in CA, which tells you how bad non-Strip Vegas is.

Salt Lake City review: https://woolyai.substack.com/p/reviewing-salt-lake-city

Detroit review: https://woolyai.substack.com/p/reviewing-detroit

Next week: San Antonio

Expand full comment

> Allow me to state, unequivocally, that it is healthier and more wholesome for everyone involved to spend hundreds of dollars a month on strippers and whores than donating money to anime avatars.

Was this just part of the vibe you felt, or are you seriously making this argument? Because I see your point, but the core of it is not the anime avatar (plenty of thirst streamers put themselves on camera), but rather mistaking a para-social relationship for a real one.

I agree that with a stripper, the fact that she's "on the clock" for you and that you bought a limited chunk of her time is more in your face. But there's still a stereotype of lonely men falling in love with sex workers, fantasizing about "freeing" them, etc.

Expand full comment

No, it's serious. That was the part of the write-up that I struggled with the most, because it verges on being preachy, but it's also part of why I enjoyed Vegas the way I did.

If you don't feel "too online", regardless of how much time you spend online, then you're going to have a very different take on Vegas. Like, sports gambling is bad. If you think there's something intrinsically better about gambling in person rather than on your phone, all else equal, then Vegas will appeal to you in ways it won't to other people.

Expand full comment

Your description of the Strip makes me think of Lower Manhattan in the mid-1980s. Tremendous energy of possibility in a very alive way. It could certainly be sleazy but it was not removed from itself. I didn’t spend much time there & was only a teenager but there’s nothing like that vibe.

Expand full comment

Thanks for this writeup. I went to Vegas with my family when I was a teenager (before a road trip to the national parks nearby). None of us gambled, and we couldn't get Cirque du Soleil tickets, but we saw a couple of great shows and there were free shows every hour or so at the different hotels, and plenty to do. Definitely worth a couple of days' visit. Probably not for weeks.

Expand full comment

I've been to Vegas twice and I too felt the magnetic attraction of the Strip. To put that in perspective, my ideal place to live would be an Italian or Spanish village. Yet the Strip appeals to me, somehow.

But does anybody actually live on it? A few blocks away is suburbia.

Expand full comment

Oh, so now when my wife reads this she’s going to be on to my weekend ‘hiking’ trips to Vegas with my guy friends. Thanks a lot! - jk

Expand full comment

Actually I’ve only flown into Vegas once. It was within striking distance of the Utah city I wanted to reach. Rented a car and got out of town as quickly as I could. Not my vibe at all.

Expand full comment

Same here. It may have been the worst place I’ve ever seen. It was mid-morning and the neon lights were on. Pallid middle-aged couples who looked glum were wandering the sidewalk and trickling into casinos. The telephone poles were covered with layers of ads from hookers attached via staple gun. On the way out of town I drove through suburbs with zero living plants around the houses — just orange sand.

Expand full comment

"Broadly, there’s no culture and nothing to do outside of the Strip because the rest of Vegas isn’t built for people to live, it’s built for people to stay for 3-7 years before they move on somewhere else."

3-7 years?? I was there 72 hours and felt like, ok, I get it, no need to come back here again.

I like the review though, and this was probably how I felt about Vegas for the first 24-48 hours! But I felt it had quickly diminishing returns.

Expand full comment

Oh, not visitors, people working on the strip. People who live in Vegas are usually either working on the Strip or on something adjacent and they don't tend to live in Vegas long-term, more like a 3-7 year stretch, almost an oil town vibe. Not always young people either, met a guy who'd done like 15 years as a cop and was now selling real estate in Vegas, he liked it but I can't imagine him retiring there.

Expand full comment

This is an insightful explanation of what I enjoyed about Vegas. Thank you very much.

Expand full comment

> Pasties

I'm assuming that means something different from: https://en.m.wikipedia.org/wiki/Pasty

Expand full comment

I usually read the word as the Cornish miner’s meal, unless the context is boobs.

https://abcooking.wordpress.com/2017/01/16/pasties-iron-range-style/

Expand full comment

I continue to enjoy your reviews of cities - Vegas does sound like a good place to visit

Expand full comment

On #4, What I have done in similar circumstances has been 1. or 2. below, depending on particulars.

1. More efficient: Write to the person who needs to be contacted, giving him the email address of the person who made the request, together with a cover note telling him why the person wants to contact him.

2. More cautious: Write to the person who needs to be contacted asking him whether you may give his email address to the person who made the request, together with the same cover note.

In both cases, write separately to the person who made the request, telling them what you have done.

If it's really a time-critical emergency and you don't hear back from the person who needs to be contacted, I might possibly do what you did.

Expand full comment

>and you don't hear back from the person who needs to be contacted,

This is why it doesn't make sense to do that; if they're not responding to you, there's no reason to think they'll respond to anyone else any more quickly. Email is a bad medium for emergencies.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

> If it’s really a time-critical emergency

You mean, if it is *claimed* to be a time-critical emergency. The emergency may be fake, that’s the whole point. When you’re making up a fake emergency, you can make it as time-critical as you want.

Anyway I don’t see how "give out the email address without consent" can ever be more efficient than your option 1? If the recipient is not reading their e-mail, or choosing to ignore Scott’s mail, how is giving the address directly to the requester going to help?

(added) And if the recipient *does* decide to respond, then it’s actually faster because there’s one less step: it then goes worrywart -> Scott -> recipient -> worrywart, instead of worrywart -> Scott -> worrywart -> recipient -> worrywart.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Well yes, "claimed to be". But the purpose of my options is that it might be fake, and they allow the person to be contacted to take responsibility for allowing the requester to contact them, or to decline.

What I meant by "more efficient" was that it is the more efficient of my two options.

Scott's solution is actually more efficient than my option 1, in that it eliminates an email.

Starting from the moment when Scott gets the request:

Scott's solution: There are two emails before the person to be contacted gets the information that the requester has: Scott's to the requester, and the requester's to the person they want to contact, providing the information.

My option 1: Getting the information to the requester takes three emails: Scott's to the person to be contacted, that person writing to the requester, and the requester then providing the information.

Expand full comment

I think we agree. :-)

My comment about more efficient was in response to your last sentence, "if it's really a time-critical emergency I might do what [Scott] did." But what Scott did wasn't actually more efficient than your option 1, so it's just strictly inferior even if we accept the premise that the emergency was genuine and serious enough to justify violating the person-to-be-contacted's expectation of privacy.

Expand full comment

Just as a sanity check... has there actually been an artificial agent yet capable of completing the original Super Mario Bros. in a single play session?

It seems like SMB1 is generally regarded as a solved problem for machine learning. Training agents that can play through the first part of the game is a pretty common beginner exercise, and there's at least one project (LuigI/O) that has played through most of the levels in isolation. But I haven't been able to find any report of a single model that plays through the entire game straight through, and without using save states to start each level on a guaranteed framerule.

I don't see any particular reason why someone couldn't build such a thing with current techniques, but I just haven't been able to find any evidence that it actually has been done.

Expand full comment

I assume this is asking about human-free completion. Tool Assisted is beyond solved. https://www.youtube.com/watch?v=1GF_LFPz34U&t=1895s

Expand full comment

What are people's takes on the recent unidentified objects that have been shot down over North America, and have now apparently started appearing over China as well? Opinions on the most likely explanation for why this phenomenon suddenly manifested? Personally, I think the most likely explanation is that whatever these devices were (spy balloons? EMP jammers meant to take out nuclear C3 if needed?), they were already there and either not detected or merely tolerated, and with the commotion caused by the first Chinese balloon, all these objects are suddenly being spotted because the military is actually looking for them. Of course, there's always the ever-present crackpot interpretation that it's aliens, which I think is unlikely for a number of reasons, but as good Bayesians the question becomes just how unlikely, as an order-of-magnitude estimate, in case new information comes out and Cromwell's law becomes relevant. Should we put the odds, based on priors and events so far, in the range of one in a million? One in a billion? One in a trillion?
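As a toy illustration of what such an order-of-magnitude update looks like, here's a minimal sketch. All numbers are made-up assumptions for the sake of the arithmetic, not actual estimates of anything:

```python
from math import log10

# Toy Bayesian update in odds form: posterior odds = prior odds x likelihood ratio.
# Both numbers below are arbitrary assumptions chosen only to show the mechanics.
prior_odds = 1e-9          # assumed prior odds for the alien hypothesis: one in a billion
likelihood_ratio = 10.0    # assumed: the sightings are 10x more likely if aliens are real

posterior_odds = prior_odds * likelihood_ratio
print(f"posterior odds: about 1 in {1 / posterior_odds:.0e}")
print(f"order of magnitude: 10^{log10(posterior_odds):.0f}")
```

The point of keeping the odds in order-of-magnitude form is that it makes clear how much evidence would be needed: moving from one in a billion to even odds requires a cumulative likelihood ratio of a billion, which a few blurry radar tracks don't come close to providing.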

Expand full comment

What do you do when you find a spy in your organization?

Do you expel them, or do you make sure they only get the information you want your enemies to receive?

And once your enemies know that you know that there is a spy, how does this change?

Expand full comment

The ACX-adjacent blog "Naval Gazing" has a pretty good writeup: https://www.navalgazing.net/Thoughts-on-the-Chinese-Balloon

Expand full comment

I think there's also a very reasonable set of other possibilities. Maybe there's been a hobbyist community that launches a lot more things than we realize, and the government is suddenly detecting a small fraction of their launches! Maybe there's weird detritus from human civilization that actually floats a lot better than we realize!

Most likely it's been national intelligence agencies, but it's still unclear whether this is all China/US/Russia or whether it's dozens of nations doing lots of things. And the weird weather phenomenon/alien possibilities are also still out there.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

What's an EMP jammer? A nuclear device on a balloon? Maybe worth observing that if you believe in EMPs as a practical weapon (which I don't), you usually try to detonate them at very high altitude (100s of km) because the sharpest electrical field pulse is produced by the interaction of free electrons with the Earth's magnetic field, so you want your electrons to be able to go for a while.

I mean...also only a lunatic would put a very expensive and dangerous nuclear device on an unmanned balloon, at the mercy of air currents and having a hole pecked in it by angry migratory geese...

Expand full comment

> having a hole pecked in it by angry migratory geese...

you're off by orders of magnitude in size. These things have diameters of dozens of meters, and IIRC low pressure differential. One small hole won't take them down.

Expand full comment

Sorry, I forgot the <joke> tag.

Expand full comment

I guess "EMP" isn't the right term, I meant more some sort of device designed to electronically jam communications with nuclear facilities with the intent of preventing or delaying nuclear launch commands if a nuclear exchange broke out, with the hope of giving whoever sent the objects those critical few minutes of advantage. This is pure speculation, obviously, but it might explain why many of these objects have turned up near nuclear-related locations, and why some fighter pilots sent to intercept them apparently claimed they interfered with their planes' systems, assuming these objects have any purpose other than surveillance.

Expand full comment

Ah I see, thanks. I don't think the PRC would be naive enough to think a simple jamming effort from the low-powered transmitters you can put on a solar-powered balloon would have a prayer of doing anything useful. It seems far more likely that their intent is to surveil and/or provoke and keep an eye on what happens.

Expand full comment

Feels worth mentioning that military radar systems are often powerful enough to interfere with electronics in the near vicinity.

One of the few things that can actually kill you just by looking at you.

Expand full comment

I believe the term of art is "zorch". Aegis is rumored to be able to down aircraft if it looks at them intently enough.

Expand full comment

Arecibo Observatory had radar transmitters with EIRP in the terawatt range; some have speculated that it had a secondary mission as an early anti-satellite weapon.

Expand full comment

It’s like when you buy your first Yugo and start seeing them everywhere. (Nick & Norah’s Infinite Playlist reference)

Expand full comment

Yeah I just think it is a tempest in a teapot about low level spying and actual research devices which were tolerated and now are not. I suspect satellites are doing most of the high value spying.

Expand full comment

Someone upthread mentioned Tyler Rogoway's analysis on this in response to my question. I can't vouch for him, but I still thought this was an interesting read.

https://www.thedrive.com/the-war-zone/40054/adversary-drones-are-spying-on-the-u-s-and-the-pentagon-acts-like-theyre-ufos

If you believe his analysis, there is plenty of very high-value intelligence one can't gather with satellites but can gather with drones and balloons. Also note, that article was written two years ago.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Oh for sure the lower flying drones in that article are probably better compared to satellites than a small balloon at 60,000ft.

I guess none of that is surprising to me, and I assume there is a constant push and pull about how much "spying" to tolerate. I am sure both countries' armed forces are lousy with small-time spies/assets. That is just the world. The question is more why should a big deal be made of it?

Expand full comment

I think for whatever reason the US and Chinese governments have decided this is going to be a thing right now, so the media's interested. Specifically, there was a lot of concern about a possible Chinese push into Taiwan a year ago, when Russia first invaded Ukraine, under the assumption that the US would be weak or distracted. Then the Ukrainians held firm with US/NATO backing and the Chinese got busy looking distracted. Now, a year in, the US political landscape has changed, the Ukrainian war is dragging on, and the Chinese launch a literal "trial balloon" again to see what they can get away with. Turns out, not much, the public opinion is hugely in favor of shooting it down, and all of a sudden there's a window around the world where governments are knocking out spy balloons. Eventually there will either be a serious military conflict (a bad outcome, obviously) or close enough to one that everyone will quit the balloon provocations for a while until things calm down.

Expand full comment

Does anyone have a suggestion on how to learn to be a not horrible singer?

My target isn't so high as to be good, or even unobjectionable, but to be on the caliber that if I sing a song, the tune will be borderline recognizable and people won't leave the room clutching their ears. Cause I'm apparently quite a few rungs below that level right now.

Despite growing up in a normal American suburb, I was raised with minimal exposure to music; no one in my family or circle of childhood friends listens to any music or plays any instrument. True story: When I left home and made new friends in college, I was shocked that music was a thing people actually listened to. I thought people caring about music was one of those fake Hollywood tropes that don't exist in the real world.

But my wife loves singing and Karaoke and sometimes forgets how bad I am and wants me to sing along on car rides and karaoke nights.

Which is a big big mistake on her part. Apparently I'm very very bad at the whole singing thing, quite far past the point where its-so-bad-its-good-again loops around to simply being atrocious and unlistenable. Whatever problems one can have with pitch or tone or notes or remembering lyrics, I have. There's a quote about President Grant: "he knew two songs, one was Yankee Doodle and the other one wasn't". I've got him beat, I just know the wasn't.

The only music I have any familiarity with is Jewish prayers, and I am still really bad at staying in tune, but at least I know the words. And I'm told by my wife, while participating in my family's Passover Seder, that all my family shares in my complete lack of vocal talent.

TLDR: Any resources, tips or suggestions to graduate from being a godawful singer to borderline tolerable?

Expand full comment

From my observations the most common problem is not having breath support / not knowing it's a thing. Bad singers typically sound like they're trying to speak, just "with more melody".

Speaking and singing are pretty different. When speaking, the air flow is more start and stop, little bursts (which makes sense for fast and precise articulation). When singing, you're trying to produce a smooth stream of air that you then modulate (in pitch, volume, etc). In particular, loud singing doesn't require large volumes of air.

Expand full comment

If you are truly tone deaf -- if you can't make a tune come out when you try to sing, only a sort of monotone chant, you probably have a little learning disability. If so, you can still learn to sing in a way that's good-enough, which is what you want, but you might need to access something more basic than singing lessons designed for adults. Maybe try something for kids? I remember my daughter had some software when she was little -- started off very simple: Here are 2 notes. Tell me whether they are the same or different. Also, here's a single tone, can you reproduce it? now do it twice in a row, like "la, la" etc. If you didn't get something right software gave you more help and examples until you got several in a row right, then moved on to the next thing. Was probably about 20 yrs ago she was doing that, so no doubt there's something better available now.

Expand full comment

Similar situation to you but a few months ahead - I started below average and decided I really wanted to become "passable", after some internet sleuthing and pricing out in-person teaching vs online I settled on Ken Tamplin's online course. For $200 I got downloads of several tens of hours of his lectures plus a bunch of demos and exercises. Been doing exercises about 5 days a week for the last 3 months, and the improvement in range, tenor, pitch-matching, and breath control has been pretty impressive. I plan to spend about the same price in Skype lessons with him at some point (probably at my 6-month and 12-month marks), think it's a good value.

If doing the online teacher route, would recommend sticking with one teacher rather than hopping around (which I was doing at first on youtube)

Expand full comment

I'd recommend taking some singing lessons. There are singing teachers that also give voice or speech training and that will be open to training beginners with very little talent. If you can't find a teacher or don't want to hire one, there might be voice training lessons on YouTube or other online platforms. Either way it will involve a lot of practice, and making some less than aesthetic noises during practice.

It's possible you genuinely can't hit a note, some people are like that. It might not be fixable. But can't hurt to try!

Expand full comment

My cheat advice is : don't be afraid to be loud. It's much easier to sing on key if you sing rather loudly.

Expand full comment

I travel around leading choirs, and I completely disagree with this. Singing loudly will help you find your voice and figure out breath support, but it is far more difficult to control than a comfortable volume. When doing recording sessions, layman singers will almost always sing with worse pitch (tone and timbre are separate issues) if they try to push the volume of their voice. This is one reason being an opera singer is so special: they are singing loudly and beautifully, with excellent pitch.

Expand full comment

I observed that a lot of unconfident people try to sing very softly or at least without letting their voice out properly, and when I say singing loudly I was referring to that - sing with a normal voice, don't be afraid of being heard. So it was "loudly" as mezzo forte, for people who always try to sing piano. Would you agree with that?

Expand full comment

I'd agree that a nice relaxed, comfortable, mezzo forte is ideal. Like public speaking and acting, it is really hard to pull off good singing unless your body is relaxed and you are feeling confident.

Expand full comment

I've heard of people being tone-deaf, like color blindness but for tones, where they really can't differentiate notes or higher or lower tones. If that's a true disorder, and you have it, I don't know if it's trainable.

However, do you have access to an easy instrument to make sound on? (Violins, flutes, trumpets have a learning curve just getting a clean sound out at all. Even a recorder you have to worry about how your fingers cover the holes.) A piano, electric keyboard, a children's xylophone?

If you have one, listen to a scale, play it, can you sing it back? You can try finding easy-to-follow color-coded notes for the little xylophone for very easy nursery rhymes ("Mary Had a Little Lamb", "Twinkle Twinkle Little Star") and go from there.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

I look for songs with a lot of scales to sing while stuck in traffic. The Twelve Days of Christmas is my favorite vocal warmup, because it moves over the scale in a gentle but interesting way and asks for some good vocal repetitions. As I get to the later verses, I'll sing one very softly, one with a lot of power, one humming, one with a muppet/cartoon caricature voice, one rolling all the R's (try it, it's hilarious), then the last verse I try to sing straight and simple. In this way, I get a good feel for how my voice will respond today and what it is capable of in range and timbre. All the old teaching songs: Mary Had a Little Lamb, Do a Deer, Row Your Boat, were designed to develop a student's ear and instrument. Start in the lowest key you find comfortable and increment a half-step higher with each repetition. Muscles take time and rest to build strength, and your vocal cords are no exception, so stop if your voice feels tight. Also be mindful of unnecessary tension from your face or your posture.

Expand full comment

I've got no actual knowledge, but I'll give advice anyway.

The first thing is to make sure you can hear the notes. Get yourself some sheet music for a song you listen to, and make sure you can hear all the note changes. If you can't hear them, you're going to have a hell of a time singing them.

Googling gets me this video on vocal warmup exercises: https://www.youtube.com/watch?v=Q5hS7eukUbQ&t=255s. Can't hurt to try them. (...well, it can; that shoulder drop looks like it'd add up. But that's the price you pay for love.)

Expand full comment

Here's a video with sheet music for vocals in The Sound Of Silence. Got some annoying popups but it works. https://www.youtube.com/watch?v=E6MHLODUouE

Expand full comment

Your wife and/or neighbours will not enjoy the answer: practice a hell of a lot. Just keep listening to music and singing along at every opportunity.

Expand full comment

I've realised a good example of the chatbotpocalypse problem:

For people who've never met Scott (I haven't), how much do you trust him? Enough to reveal mild (potentially commercially useful) personal information on some plausible pretext? Enough for a product endorsement to be mildly meaningful to you?

Expand full comment

But what does this have to do with chatbotpocalypse? Do you mean if we trust Scott we're in danger of getting scammed by a Scottbot? If so, we're in danger for similar from people we trust, including friends, family, etc. Lots of people have already experienced a version of this, when an acquaintance's email got taken over by scammers and they started getting messages from the acquaintance saying they needed emergency cash.

Expand full comment

I've shared more personal information with you degenerate scum on here than I have with actual family members, so that horse has long bolted, dropped dead, and been boiled down for glue 🤣

Product endorsements, on the other hand, won't do much; generally I go "not interested because I'm not interested in normal stuff like an ordinary person" and very rarely "oooh I am interested but I can't afford that". So even if Scott pitches "this product will genuinely make you three inches taller, a considerable amount lost in excess weight, rejuvenate you back to your 20s and both improve your sanity and increase your IQ, all for the low low price of $X", sorry, no sale.

Expand full comment

What if he offered all that at a 20% discount?

Expand full comment

For this limited time offer only! Purchase now at the super-low price! You will miss out! 😁

Expand full comment

Having never met Scott, I still trust him quite a lot. Enough to reveal information to him? Yes definitely, I basically already do that when taking his surveys. And I have bought things that Scott had ads for back in SSC, so yes, a product endorsement from Scott is meaningful.

Why is this an example of the chatbotpocalypse problem?

Expand full comment

Because all you know of Scott is his textual output. Presumably you know he's not actually a bot because he's been posting for ~15 years, but if a new, similarly engaging, similarly insightful blogger started up next year, you might extend the same trust to them as you do to Scott, and they might be a bot.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

If we got a rake of Scott-bots, I would happily welcome our new chatbotalypse overlords, because they'd be a lot nicer and more tolerant than what we've got going on now.

For instance: did you know about anti-Semitic cheese? I was not aware of anti-Semitic cheese, until the latest twist in the "Hogwarts Legacy" ban campaign. Apparently gorgonzola is not kosher, and since the game mentions it in connection with the goblins - which as we all know are meant to be Jews - then, well, connect the dots yourself. I'm not a white supremacist, but if I were one, I'd be taking notes from all the 'helpful' people telling us all about the hidden dog whistles: "okay, so cheese is anti-Semitic, got it. Anything else I can use to be offensive towards those people?"

A Scott-bot could only be an improvement there.

Expand full comment

I am almost positive that cheese business is fake.

Expand full comment

I find it extremely hard to believe myself, but it's in the midst of a long thread about how awful J.K. Rowling is and that she is an anti-Semite and we know this because the goblin horn is *clearly* a shofar and no other culture on the face of the earth ever used animal horns that way and it's not the game devs doing this imagery, she personally is sitting there writing every line and reference and why did she pick the year 1612 except that it was a German pogrom year huh huh? The Fettmilch Uprising which is "Fat Milk" which means "cheese" so there you go!

https://en.wikipedia.org/wiki/Vincenz_Fettmilch

It's incredible how far down the rabbit hole people will go to prove conspiracy theories.

Expand full comment

If a bot can blog as insightfully as Scott I’ll happily consider any product it is endorsing.

Expand full comment

The bot that can blog insightfully, and the human who owns the bot and wants to use it to gain your trust so you'll buy their product or vote for their political candidate, are not the same entity.

The spammers who now spend a few dollars generating pill-spam emails would be able to spend a few dollars generating a large corpus of insightful blog posts on miscellaneous interesting topics, culminating in a blog post endorsing their pill, complete with a convincing made-up account of the beneficial effect it had on the blogger, made-up data about its effect on the blogger and his survey participants, and statistical analysis on that data showing that the pill works with some suitable p-value.

Expand full comment

It would be odd if this Scott-bot were malevolent, though. It takes insight to write the prompts that make ChatGPT say insightful things (assuming it can), and I suppose there is the question of whether someone insightful can use their insights to do bad. They definitely can, but then, that is a risk you take on when you extend trust to any blogger, even one of flesh and blood, so is the bot-blogger really riskier? I suppose it does lower the bar for bad actors, but surely there are much better ways to con people than with a blog?

Expand full comment

Having landed there while looking into something else, I just read some briefs in the Harvard affirmative-action case that is before the SCOTUS ("Students for Fair Admissions v Harvard"). Yowsers. It already seemed clear that the Court is going to rule against AA but reading the briefs makes me think that the ruling will be strong. Harvard (and by association its peer private schools) is going to lose _hard_.

The University of North Carolina and its peer public universities will also lose regarding AA; the two cases are linked before the Court. But it's the Harvard statistics that will help Gorsuch's majority opinion pretty much write itself.

Expand full comment

Good. The movement against meritocracy is actively corrosive to a well-functioning society.

Expand full comment

Do you think that overturning it will make society function better, or that the same forces which caused the current situation will regroup and start enforcing something even worse?

Meritocracy in the US is politically untenable, too many people (at both ends of society) don't like what it would look like. They'll find another way to stop it.

Expand full comment

ISTM that the only way to actually sustain a meritocracy is to have decisionmakers care *intensely* about getting the best people into their organization, far more than they care about being able to show that they've made defensible hiring/admissions decisions, or about showing their commitment to higher goals. And the decisionmakers need to have some way to know whether they're succeeding, and a strong reason to care.

To the extent that the administration of a university is trading off "get the best students" against "get the right racial mix" or "get the biggest potential donors" or "look good for some magazine's college rankings" or "reward friends and hand out favors to people I'd like to have in my debt," meritocracy is going to be hard to maintain.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

You don't always want to get the best people into your organisation, though; they might disagree with you, or cause you problems, or eventually overthrow you.

You want to fill the low levels of your organisation through meritocracy, but the high levels through clientelism. Affirmative Action gives you a great excuse for clientelism by giving you an excuse to hire unqualified randos into high positions.

The ideal board consists of one chairman and eleven diversity hires. Or at least that's the ideal board from the chairman's point of view. AA gives you insulation from smart and ambitious youngsters who might otherwise be rising up in your organisation.

Expand full comment

We gotta get people invested in a religious movement with more unambiguously positive goals.

EA isn't a bad stab at it, though I am not without severe reservations.

Expand full comment

Substack problem: for my mobile device (iPhone 8+), the last links post couldn't load past link number 3. I've been finding Substack increasingly finicky on mobile devices, and the last post seemed to have simply broken it.

Which was extra sad, because I couldn't pitch my artisan cultivated-meat restaurant, where you would send in a blood sample so that you could be served yourself in a meal. Autocannibalism!

Expand full comment

Does anyone know of any GPT-like applications for summarising books or articles?

Expand full comment

Vibe check:

2023 is the AI singularity, right? We talked about it for decades, and now it's happening. No one knows what 2024 will bring.

Or am I getting too excited?

Expand full comment

I think it's a major transition in AI (well, really, 2022 was, with Dall-E and imitators coming out over the summer, and then ChatGPT in December). It's definitely not "the singularity".

Think about the parallel with smartphones. I think 2008, with the release of the iPhone, was probably the big smartphone transition. But over the previous few years there were various Blackberries and Windows phones (I had one!) and Treos and other things that allowed you to get the internet and e-mail while on the go, and those would have been the prehistory of smartphones. And it took a few years after the iPhone before we got Uber and Tinder and all the other things that smartphones are *actually* about. I don't know where exactly in that process we are, but it's hard to decide whether what we have now is the equivalent of 2008, or the equivalent of Nokia's first Windows phone.

Expand full comment

Well, we've got a comet, so maybe!

https://www.heavensgate.com/

Expand full comment
author
Feb 13, 2023·edited Feb 13, 2023Author

People use "singularity" for all kinds of things. I would not call something a "singularity" until we're arguing about which planets to disassemble for raw materials and which ones to leave intact for historical value. I don't think we will reach that point by 2024.

Expand full comment

I don’t know about that. If someone disassembles half of Australia I’d get pretty worried, myself.

Expand full comment

Also, it won't be us arguing that stuff, it will be some superhuman AIs who don't have to care what humans think about their plans.

Expand full comment

Language and image models like this have existed for a while. The only change has been that they've added better UI, which makes them more publicly accessible, and incrementally improved their accuracy. Still a big step to general AI.

Expand full comment

The singularity is supposed to happen when AIs can write code that lets them develop better AIs.

The advancement of AI has been very impressive, but it seems to be on a different axis. ChatGPT is wonderful, but it's not useful for designing better AI training methods than researchers can. It doesn't try, either. And either way, the current bottlenecks for the quality of AIs are training data and raw computation power. It's unclear how ChatGPT can help with those.

Perhaps I am too pessimistic, and it can produce training data in domains with clear true/false distinctions. For example, if ChatGPT produces lots of mathematical proofs, and a proof checker sorts them into correct/incorrect. But at the moment, producing *correct* stuff is exactly the axis on which ChatGPT is a step back from other AIs.
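A minimal sketch of that generate-and-verify loop, with toy arithmetic identities standing in for proofs and Python's `eval` standing in for a proof checker (both stand-ins are purely my illustration, not anyone's actual pipeline):

```python
import random

def generate_candidate():
    # Toy "prover": propose arithmetic identities, some deliberately wrong.
    a, b = random.randint(1, 9), random.randint(1, 9)
    claimed = a + b + random.choice([0, 0, 1])  # off by one about a third of the time
    return f"{a} + {b} == {claimed}"

def check(statement):
    # Stand-in for a proof checker: mechanically verify the claim.
    return eval(statement)

random.seed(0)
candidates = [generate_candidate() for _ in range(1000)]
# Keep only the machine-verified statements as new training data.
training_data = [s for s in candidates if check(s)]
print(f"{len(training_data)} of {len(candidates)} candidates verified")
```

The point of the sketch: the generator can be unreliable, because the checker filters its output down to a corpus that is correct by construction.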

Expand full comment

I mean, the *vibe* is certainly leaning that way. The reality, not so much...

Expand full comment
Comment deleted
Expand full comment
Feb 13, 2023·edited Feb 13, 2023

>Large language models are a parlor trick, a technological dead end.

I don't understand this sentiment. The fact that we can take one architectural element (the Transformer), compose and stack copies of it in the most stupidly simple way, throw a bunch of compute at it, and it *just works* across many domains, is incredible. It's a complete game-changer. We haven't even begun to see the power of LLM derivatives. The Transformer will go down in history alongside the transistor in terms of societal impact.
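To make "one element, stacked stupidly" concrete, here's a toy sketch of the core op being composed: scaled dot-product self-attention in bare Python, stripped of the projections, heads, residuals, and MLPs that a real Transformer block adds (purely my illustration, not any production code):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query attends over all keys,
    # and the output is the resulting weighted average of the values.
    d = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values)) for j in range(len(values[0]))])
    return out

# "Stacking in the most stupid way": apply the same block repeatedly,
# with queries = keys = values (self-attention).
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
for _ in range(3):
    x = attention(x, x, x)
print(x)
```

Each output row is a convex combination of the input rows, which is why stacks of these layers mix information across positions.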

Expand full comment
Comment deleted
Expand full comment
Feb 14, 2023·edited Feb 14, 2023

> in a confident tone of voice

This is something they trained GPT-instruct to do (and ChatGPT which is based on that lineage), and not a feature of LLMs. The original GPT-3 model and the code model don't really do this, in that you often have to jump through hoops to even get them to answer a question at all. It was much more obvious that it was doing text completion.

For example, asking ChatGPT, "Hey dude, like, how big is the Earth?", it responds with:

> The Earth has a diameter of approximately 12,742 kilometers (7,918 miles) at the equator and a circumference of [answer goes on...]

Ask the original GPT-3 the same question, it responded with:

> And he said, like, well, you know, it's, like, a billion miles away.

>

> I said, like, no, no, no. Like, what's it like to like, you know, like, you know, like, touch?

Which doesn't make that much sense, but it's clear it's continuing the text rather than answering the question. Regenerating the response, I got:

> Dude, it's like, really big.

> [continues with more dialog...]

Also,

> Large language models are a parlor trick [...] they digest the internet and repeat it back to you

It's pretty easy to show they can create original content. Just come up with something original to ask it, that won't be found anywhere on the internet. For example, I gave it the following prompt:

-----

Me: Describe a fictional animal that's a cross between a monkey and a refrigerator. Include a description of its appearance, behaviors, and diet.

ChatGPT: Sure! Let's call this fictional animal a "Refrigorilla". The Refrigorilla would have a body that resembles that of a gorilla, with long arms and legs for climbing, but its skin would be made of metal, giving it the appearance of a refrigerator. The metal skin would help regulate its body temperature, making it well-suited to live in warm environments.

In terms of behavior, the Refrigorilla would be a social and curious animal. It would use its strong arms and legs to climb trees and other structures, and it would have a playful and mischievous streak, often getting into things it shouldn't. It would also have a keen intelligence, allowing it to solve problems and open its own "door" (which would be located on its back) to access its interior storage compartment.

As for diet, the Refrigorilla would primarily feed on fruits and vegetables, storing them inside its body for later consumption. However, it would also have the ability to cool and preserve food, making it a valuable addition to any jungle community. Additionally, the Refrigorilla's metal exterior would protect it from predators, ensuring its survival in the wild.

-----

It's showing some reasoning ability here on how the fridge's properties would affect the animal. It even came up with a pretty good portmanteau.

Expand full comment

That seems less like originality than a mashup. Conceptual Mad Lib, which I can readily believe is programmable -- but a long way from invention.

Expand full comment

It made a mashup between a monkey and a refrigerator because I asked it to make a mashup between a monkey and a refrigerator. Not sure a human would do any better.

The impressive part is it was able to reason about this new animal without repeating back anything it read about refrigorillas on the internet. Nor is it regurgitating properties of fridges or gorillas. It reasoned that its metal skin would conduct heat well, helping it stay cool. It reasoned that it could use the fridge compartment on its back to preserve food in the jungle. It reasoned that this might play a useful ecological role.

And it understood that we often like using portmanteaus for names and was able to create an original one.

I'm not sure where your goalposts are. What's an example of a question we could ask where a human could answer with more originality than an LLM? What answer would you consider original? (I don't think current LLMs are as smart as people. I'm just wondering what you're looking for.)

Expand full comment

A mashup? Maybe, but the lawnmower is a mashup of a scythe and an internal combustion engine. Innovations in art and music have often come from combining disparate styles. Alone, ChatGPT may not be creative, but combined with a human it is a creative tool. The little bit of randomness that comes out of it is interesting.

Expand full comment

In my case ChatGPT is acting as a huge productivity multiplier. Some prompts I use: Can you rewrite the following text in grammatical, idiomatic English/French/Toki Pona/…? Can you find a title for this paragraph? Can you translate this script from R to Python? Can you put this into a LaTeX table? Translate this code to pseudocode and produce LaTeX to display it as an algorithm element. Clearly you need to double-check the output.

Expand full comment

This is going in a different direction than whether LLMs are a parlor trick and a technological dead end. I agree with the issues you mention regarding ChatGPT and all the other current LLMs. But I'm interested in understanding the potential of the Transformer itself (LLMs being nearly the simplest substantial implementation built on Transformers) and where it fits into our trajectory towards AGI and artificial sentience.

Expand full comment

Isn't Stable Diffusion basically ChatGPT, but for images instead of text?

Expand full comment

The latest from The Presence of Everything:

We Have Never Seen Jesus

https://squarecircle.substack.com/p/we-have-never-seen-jesus

It seems no one has attempted to depict a certain facet of Jesus, with the result that most depictions of him have a certain falsity to them. This omission teaches important things about the relationship between spirituality and religion.

Expand full comment

I can't help but think Jesus was laughing when he told the Samaritan woman at the well: “You are right when you say you have no husband. The fact is, you have had five husbands, and the man you now have is not your husband. What you have just said is quite true.” Jn 4:17-18

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

"These definitely pick up on a most curious omission in the Gospels: not once is Jesus said to have been smiling or laughing."

Okay. You knew this was coming. G.K. Chesterton, "Orthodoxy":

"Joy, which was the small publicity of the pagan, is the gigantic secret of the Christian. And as I close this chaotic volume I open again the strange small book from which all Christianity came; and I am again haunted by a kind of confirmation. The tremendous figure which fills the Gospels towers in this respect, as in every other, above all the thinkers who ever thought themselves tall. His pathos was natural, almost casual. The Stoics, ancient and modern, were proud of concealing their tears. He never concealed His tears; He showed them plainly on His open face at any daily sight, such as the far sight of His native city. Yet He concealed something. Solemn supermen and imperial diplomatists are proud of restraining their anger. He never restrained His anger. He flung furniture down the front steps of the Temple, and asked men how they expected to escape the damnation of Hell. Yet He restrained something. I say it with reverence; there was in that shattering personality a thread that must be called shyness. There was something that He hid from all men when He went up a mountain to pray. There was something that He covered constantly by abrupt silence or impetuous isolation. There was some one thing that was too great for God to show us when He walked upon our earth; and I have sometimes fancied that it was His mirth."

You are correct that God is beyond rules. But the glory is that God is not beyond reason. Can words describe God and His will? Not words, but the Word:

https://www.youtube.com/watch?v=vr7HUo9ASDo&list=RDvr7HUo9ASDo&start_radio=1

https://www.youtube.com/watch?v=M8xcurRnnKU

Expand full comment

Absence of evidence is not evidence of absence, but it does kind of point in that direction.

Even if Jesus did laugh and smile, why was it left out? Mentioning it would only have taken a word or two.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

> not once is Jesus said to have been smiling or laughing.

But he obviously had a wry and often expressed sense of humour, which comes through clearly in the gospel accounts. For a start, he gave his disciples nicknames, such as "Petros" (Peter) for Simon, meaning "rocky" in Greek.

Also, some of his similes were comically exaggerated, such as "as hard as threading a rope through the eye of a needle". (The word rope in Greek is "kamilos", later misspelled "kamelos", hence the saying about passing a camel through the eye of a needle!)

He also joked about not being appreciated in his home town and by his family: "No one is a prophet in their own country" (meaning neighborhood rather than state).

Expand full comment

> God is not beyond reason

Hmmmm. The universe is pretty unreasonable, as the critics of intelligent design ably point out. Faculties other than reason went into producing this thing, though to be sure, reason was there also. Perhaps it's not quite right to say God is beyond reason; rather, reason cannot contain God. What it can do is reveal certain aspects of Him. Other faculties reveal other aspects.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

"The universe is pretty unreasonable, as the critics of intelligent design ably point out."

If it were unreasonable, we could not work out physical laws that govern its operation. We would not be able to say that the sun would rise tomorrow, because God/nature/whatever might make it not happen.

This was the revolt in Islamic philosophy about "cloth only burns because God wills it to burn" as against "there are natural laws which we can discover by the use of reason"; while it was mainly about religion and metaphysics, there were wider implications:

https://en.wikipedia.org/wiki/The_Incoherence_of_the_Philosophers

"The Incoherence of the Philosophers propounds the Asharite theory of occasionalism. Al-Ghazali wrote that when fire and cotton are placed in contact, the cotton is burned directly by God rather than by the fire, a claim which he defended using logic in Islamic philosophy.

He explained that because God is usually seen as rational, rather than arbitrary, his behavior in normally causing events in the same sequence (i.e., what appears to us to be efficient causation) can be understood as a natural outworking of that principle of reason, which we then describe as the laws of nature. Properly speaking, however, these are not laws of nature but laws by which God chooses to govern his own behaviour (his autonomy, in the strict sense) – in other words, his rational will.

...Al-Ghazali's insistence on a radical divine immanence in the natural world has been posited as one of the reasons that the spirit of scientific inquiry later withered in Islamic lands."

Expand full comment

You are 100% right that traditional religious imagery presents a pretty curated and partial picture of the Jesus in the actual Bible.

Expand full comment

I don’t see any depictions of raving, just angry Pharisees claiming there was raving. All the ancient Hebrews agreed on rules; plenty of them saw Jesus or his followers and signed on immediately.

Expand full comment

The cleansing of the temple - followed swiftly by his crucifixion? "And Jesus went into the temple of God, and cast out all them that sold and bought in the temple, and overthrew the tables of the money changers, and the seats of them that sold doves" - raving, indeed. Btw, all those fine people were not literally inside the temple - at least much less so than shops are today in many Russian Orthodox churches; jeez, I bought bottles of Kagor inside some.

Expand full comment

Well, they clearly didn't agree that much, if they did Jesus would not have been crucified. And there would be no more Jews either, as they all would have converted like the pagans.

I happen to think the Pharisees were not hallucinating, but I suppose I don't also think Jesus was literally raving. He just was... intense. So intense that it could seem as madness and demonic possession. There must have been a real fire in his eyes.

Expand full comment

I think this is part of why I like Jesus Christ Superstar so much. Jesus is constantly losing it.

https://www.youtube.com/watch?v=PJxKtOwgLdw

Expand full comment

Love it! Jesus' and Judas' showdown right in front of 11 bemused normies. Anderson's and Neeley's performances are absolutely outstanding, throughout the whole movie.

Expand full comment

Not exactly your point but there is a theory that Byzantine representations of Jesus took their inspiration from the famous statue of Zeus at Olympia which was taken to Constantinople before being destroyed in a fire. These then became a template for how he was depicted from then on.

Expand full comment

It is certainly interesting to represent Zeus as reasonable, given how chaotic he sounds in a lot of his myths. Zeus had a lot of Dionysus in him, so choosing not to see him like that is maybe an example of the same phenomenon.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

I wrote a two-part deep dive into how civil rights law won, and what this means in the age of early AI (to be posted tomorrow): https://cebk.substack.com/p/the-case-against-civil-rights-in

My basic point is that our least accountable institutions can most aggressively pursue whatever flatters bureaucrat egos; this attracts fanatics, and selects for ideologies that vigorously deny basic realities and cause cascading failures. Intuitively, when agencies are protected from political control, they’re more able to optimize on doing what feels noble rather than what produces popular results, and the media can strongly influence their sense of nobility and of noblesse oblige. Thus, independent agencies and independent offices within cabinet departments will gradually seize power over ever more policy areas. This will be most noticeable when the ideological crusade in question involves a crusade against noticing certain obvious patterns, and when this crusade worsens its cause area enough to make it suck up ever more general attention.

Hence why, for example, the 32 civil rights offices in the administrative state (such as the “Office of Civil Rights” at the Dept of Edu, or 13 other bodies with the exact same name at other such orgs) have taken so much regulatory power from their parent institutions. And why -- even though Congress made the EEOC independent to limit its power -- it's taken charge of huge swathes of our policy space. I go through these developments and others in a fair amount of detail.

But my ultimate point is that the civil rights ideology itself is just an epiphenomenon of a certain psychological virus: the envious, smug anger that can be coaxed from all of us -- in varying degrees and varying ways -- if we're given this kind of unofficial and unaccountable commissarial power. Hence why the point of anti-racism obviously isn't to oppose race-neutral discrimination (even though you'd think the EEOC would have even more power if it also went after anti-white bigotry); nor to help any particular client populations (black people don't materially benefit pretty much at all from any of this bizarre negrolatry). Rather, it's about "chopping down the tall trees," as race-wars always are, from Germany and Malaysia to Rwanda and Uganda.

I think that codex readers might enjoy my attempts to really drive this home by imagining how a much smarter society could end up similarly captured (relevant quote from the piece included in a reply to this comment). For example, if we pursued anti-gravity technology through our current grant-funding bodies, we could end up making professors sign loyalty oaths to oppose gravitational attraction, and scientists could engage in semantic shell-games about whether gravity is really a force so much as a feature of how spacetime warps around mass, and lay-people could hazily presume that buoyancy explains why dense objects seem to fall relative to lighter ones. Or the same goes for how if an anti-aging party took charge, our bureaucracies could end up just redefining the units in special relativity such that we'd "abolish" time. Indeed, this bizarre semantic fanaticism already characterizes much of how modern math departments operate! Or etc.

Expand full comment

Here's the relevant quote:

After all, administrative agencies can grant themselves the most power when their purpose is quixotic. For instance, if we could effectively cultivate patronage and clientelism around beating back the conspiracy theory called “gravity”—which keeps us all down every day, and causes buildings to collapse—then most of us would end up running anti-gravity chatbots in our brains, which would lift up a vastly more interesting deep state: we’d all know slightly garbled versions of some fairly technical arguments about why gravitons don’t “really” exist, and how general relativity fails to cohere with quantum mechanics; and we’d make some pretty justified complaints about how particle accelerators and observatories like LIGO are corrupt and wasteful projects. Half of us would confidently hold some hazy ludicrous ideas about why “buoyancy” explains away the need for gravitational forces in physics. The other half would uncomfortably explain that all of Einstein’s equations are exactly correct… but also that, for purely semantic reasons, we should redefine mass as expanding outwards through spacetime (instead of locally squashing it). And so you must learn to think that the earth is actually accelerating upwards at your feet, rather than pulling you down. You must learn to believe that our planet swallows an endless flow of the fabric of reality, like a sinkhole through which the universe drains away! And you must really mean it, and care, or at least convincingly pretend to.

Or imagine that life-extension scientists capture the grant-funding bureaucracies: of course, there’s nothing more just than their crusade to kill Death; just imagine how great it would be if the void was our great Satan! But if they aren’t willing to place a strongman in charge—to compromise their “democratic principles” in order to fight back against an eternal vast oblivion which looms closely over us all and grows even closer every day—then they’ll fall prey to the same sort of delirious insanity that’s captured all of our major institutions. For example, maybe the unaccountable agencies would seize their power from them by abolishing time, because then aging couldn’t kill us! After all, in 2022, the World Health Organization replaced “old age” with “aging-associated biological decline in intrinsic capacity” (which means that it’s no longer an officially recognized cause of death under this gerontocracy). Mission accomplished!

If you’ve read this far, you probably have no faith in the WHO, nor sympathy toward those who believe in its official science. But so let’s imagine how an empire staffed by far more sympathetic toadies—for example, one filled with creative, autistic, and honest bureaucrats—could reach the same result. Currently, under our “temporally chauvinist” version of special relativity, acceleration causes actual time dilation and apparent length contraction (i.e. clocks really do tick slower, whereas rulers only seem to grow relative to what they measure). However, we can coherently redefine our units such that space actually contracts when you speed up, and so clocks dilate. In other words, your watch would still tick slower if you could watch it accelerate away from you… but only because “more space” would fit between its notches, not because the “true velocity” of its hands would slow. And so we can define “time” in terms of this velocity, and get an objective, universally-shared time dimension.
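For reference, the orthodox relations this re-unitizing scheme would rewrite (time dilation and length contraction, both governed by the Lorentz factor):

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
\Delta t' = \gamma\,\Delta t, \qquad
L' = \frac{L}{\gamma}
```

The proposed redefinition amounts to moving the factor of gamma from the time coordinate onto the space coordinate, which leaves the observable predictions unchanged but scrambles which quantity gets called "real."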

There would be downsides to de-relativizing time, of course; for example we’d lose our objective sense of how distant any two points “really” are, because different reference frames would inhabit differently-sized universes. And so there wouldn’t really be a “speed of light” anymore—because you can just keep on accelerating ever “faster,” since acceleration always further shrinks the “distance” metric of your local space—but then we’d have another problem: there must still be the standard universal speed limit on how quickly light can move between two different people’s idiosyncratic (yet equally valid) understandings of this “distance” thing. All of these details can ultimately get worked out, of course, but it makes the simple and elegant equations of physics quite messy and confusing. And then, once you’ve got an objective, coherent, consensus definition of time, you can define it as an extraneous dimension. After all, at that point, it doesn’t add any physical information, because it’s just become an indexing schema… just like how alphabetizing your files can’t actually tell you anything about them that you didn’t already learn by looking at the first letters of their titles. And imaginary dimensions obviously can’t kill you!

But why am I going on at such excruciating length about how cults could create coherent yet absurd ideological versions of general or special relativity? Well, just think about all the silly hypotheses and semantic arguments that our brains dutifully leap to when certain demographic realities threaten to make themselves obvious: maybe “stigma” did it, maybe the data was faked, or maybe we’re all just virtual minds with fake memories and so genes aren’t even real…. Admittedly, most of us are too dumb or disinterested or disingenuous to even care about being informed and coherent and accurate when it comes to such touchy topics; but plenty of decent, honest, and intelligent people tie themselves up—without even realizing it—in absolute bullshit that’s far too smart, subtle, or perceptive to wave away as lazy blind-spots or bad-faith propaganda. Consider all the pure math research scholars who would gleefully help an extremist regime suppress the low-status claim that two and two make four (in case you don’t know many mathematicians, let me assure you that half of them would happily call basic Peano arithmetic “misinformation,” in exchange for the chance to proselytize about how algebraic rings work). Similarly, I know plenty of brilliant and creative people who somehow believe that we must strike a blow against creationists by making high school teachers evangelize empty slogans like “the mitochondria is the powerhouse of the cell.”

Just consider all the great geniuses who had schizophrenic delusions. We can recognize their grand hallucinations and compulsions as crazy precisely because such insanities were mostly harmless… at least, insofar as these beliefs weren’t either widely shared or politically relevant. Indeed, isn’t it the mark of a singular visionary that his absurd opinions and neurotic fantasies are so alien? Think of how Georg Cantor was hounded into insanity and an early death for having such a transcendent view of the infinite: it changed the course of math so thoroughly that, at first, all of his contemporaries considered him utterly mad; and then (ever since they died) everyone with an opinion on infinity has considered his work elegant and obvious. And yet his work directly caused the whole field’s descent into incoherence, from Godel’s Inconsistency Proof to the Banach-Tarski Theorem! Soon it will seem insipid and bizarre that for the whole twentieth century mathematicians insisted on pretending that they could manipulate sets which were too large for any function to finitely construct. They even somehow pretended that this rendered mathematics “incomplete”—and, as a corollary, “inconsistent”—by vacuously defining reality as “true,” even when it isn’t provable! Once we remember that the “truth” should obviously be defined as a subset of proofs, rather than vice-versa, we’ll wonder how such self-defeating charlatans ever persuaded us to bow before them.

Maybe if our country’s average IQ was thirty points higher, this could be the ideology in which unaccountable elites would cloak their ugliness: perhaps they’d make us pledge obsequious allegiance to the axiom of infinity (or power-set, or choice). You may think that such abstruse delusions as Platonism—the belief that numbers are actually out there, in some sense other than just being computable—could never become socially contagious and politically convenient enough to take over a whole culture… surely any society smart enough to understand these issues would be too clever to care about them very much! But, of course, in Classical Athens, Plato’s hero had to be put down for corrupting young elites; and, likewise, modern math departments won’t allow any ultrafinitism to take root. So we should all try to have some sympathy for those who are infected with ideology. We should treat it like an illness, and have enough self-awareness to realize when we’re generally at risk. And because we’re all so “immunocompromised” against these threats, we shouldn’t be allowed to have independent agencies, or special protections for the press, or tenure at universities, or think-tanks or foundations. Or else these unfortunate episodes will keep on airing their fictitious plots. Because to air grievances is only human. And when there are certain roles where broadcasting resentments is especially protected, the fakest such gusts of hot air will sail most fluidly across the airwaves, and whoever doesn’t cling to them will drown in their currents, and be cast away.

Expand full comment

I'm organising more conferences for builders in Prospera, the startup city on a Caribbean island (with direct flights from Miami, Houston, Atlanta, Denver, Dallas)!

We're aiming to build a medical tourism sector for longevity, and regulations for the digital asset industry to be a safe haven from the US regulatory activism against the crypto industry.

Join us if you liked Scott's piece on Prospera (like I did), and want to see it for yourself.

Supercharging Health 2023 - A Próspera Builders’ Summit, April 21-23 on Roatan: https://infinitavc.com/healthbio2023

Decentralizing Finance 2023 - A Próspera Builders’ Summit, May 5-7 on Roatan: https://infinitavc.com/defi2023

Expand full comment

"We're aiming to build a medical tourism sector for longevity, and regulations for the digital asset industry to be a safe haven from the US regulatory onslaught on the crypto industry."

Medical tourism - just like Switzerland! or apparently Vienna entre-deux-guerres; why, you could even revive the seminal (in all senses) work of Dr. Steinach!

https://www.mcgill.ca/oss/article/health-history-science-science-everywhere/getting-steinached-was-all-rage-roaring-20s

"In his book, Sex and Life, Steinach described how his patients “changed from feeble, parched, dribbling drones, to men of vigorous bloom who threw away their glasses, shaved twice a day, dragged loads up to 220 pounds, and even indulged in such youthful follies as buying land in Florida.” He believed in his procedure so strongly that he “thrice reactivated himself.” It isn’t clear what he meant by “thrice,” because once the duct is tied off, it’s tied off. Whatever improvement Steinach and his patients felt was probably due to wishful thinking, because as we now know, vasectomies do not boost hormonal output by the testes."

Or perhaps buying land in Próspera?

Yeats supposedly underwent the procedure and had a creative flowering again:

https://pubmed.ncbi.nlm.nih.gov/20446646/

Hmmm - remind me again, wasn't there something in the news about some other crypto company that set up in the Caribbean to get a safe haven from regulatory onslaught? I could swear I read some small paragraph about it. Some business with a name that was letters of the alphabet - GUY? something along those lines!

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

I was truly concerned when we learned that as many as 4 surveillance balloons crossed into US/Canadian airspace undetected in the last few years, but the last week of multiple unidentified objects being shot down is just bizarre, and I can't help but feel like the military is overreacting in the face of criticism of missing the previous incursions. Can anyone link to a good analysis of what might be going on?

Expand full comment

This is written pretty bombastically, but there's a lot of useful background info, and the general line of reasoning --that we are seeing a heretofore fairly submerged iceberg of low-cost low-profile remote aerial intelligence operations peep above the metaphorical waterline -- seems like a good starting hypothesis:

https://www.thedrive.com/the-war-zone/40054/adversary-drones-are-spying-on-the-u-s-and-the-pentagon-acts-like-theyre-ufos

The issue of reaction and overreaction is subtle. The Pentagon has made the point, relatively quietly and ineffectually, that there's maybe more to be learned by quietly studying some of these efforts, while pretending you don't know about them, and for that matter learning to feed them bullshit. An intelligence conduit can conduct both ways. And maybe that's more valuable than immediately blowing things into little pieces[1], at least until Senator Horseface, standing for re-election shortly, demands to know what new Sputnik is boop-boop-booping over the heads of his constituents, spying through their bathroom windows.

Unfortunately intelligence is a deeply shadowed and hall o' mirrors world, and I would say it is very, very difficult to judge not only what exactly is going on[2], but also whether the principals know what the fuck they're doing, until decades after the fact, when the records are declassified enough for historians to paw patiently through and write turgid non-best-sellers about it all.

--------------

[1] Particularly at a unit cost, for the AIM-9X, of ~$400,000, plus the operating costs of a pair of F-22s (~$180,000/hour). One would guess it costs the USAF 20-50x more to shoot down a Chinese spy balloon than it costs the PLA to launch one.

[2] Except that it's definitely not aliens with anal probes, and the only reason I can fathom responsible people allow that to float around in the air is in the hopes that they'll look like clueless dweebs to the adversary as well as to citizens and journalists. Which if true says discouraging things about the level of mutual trust between The People and their servants.
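A quick sketch of the arithmetic behind footnote [1]; the balloon's launch cost is my own assumed range for illustration, not a sourced figure:

```python
# Back-of-envelope intercept cost, using the figures in footnote [1].
missile_cost = 400_000        # AIM-9X unit cost (~$400k)
f22_hour = 180_000            # F-22 operating cost per flight hour (~$180k/hr)
sortie_hours = 2              # assumed: a two-hour sortie by a pair of F-22s
intercept_cost = missile_cost + 2 * f22_hour * sortie_hours

# Assumed launch-cost range for a high-altitude balloon (illustrative only).
balloon_cost_range = (25_000, 50_000)
ratios = [intercept_cost / c for c in balloon_cost_range]

print(intercept_cost)              # 1120000
print([round(r) for r in ratios])  # [45, 22] -- inside the claimed 20-50x range
```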

Expand full comment
Comment deleted
Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Well, I don't even play an air superiority pilot on the Internet, but I'm doubtful that the training value of shooting down a balloon with a Sidewinder is measurably greater than nil. So whatever they could have done with those 2 hours and umpty gallons of JP-4, I rather suspect as a taxpayer I would've gotten more value if it didn't involve popping a balloon with a $0.5 million pin.

...plus I'm pretty sure China will invade Taiwan with J-20s and not a bunch of conscripts in a hot-air balloon.

No, I don't blame Senator Horseface for skepticism, it's his job to be skeptical, and indeed that's why I voted for him. But the place to do that with respect to military intelligence is in closed-door hearings in which you can grill your witnesses in detail and they can speak frankly, and both of you are not on the record. When you're pontificating in public, you're almost never speaking to give or receive information, you're speechifying to create an impression and blow smoke up somebody's ass[1]. Almost by definition this is the opposite of a well-functioning republic (which goes to my second footnote).

I certainly agree the Pentagon is fully capable of screwing up. The problem is that we really have no good way to know, at least in the near future. Those who know are definitely not talking, and those who are talking...well, some of them may actually be making sense, but there's no way to know for sure.

----------------

[1] I don't have to adduce "the missile gap," right? You remember that, surely.

Expand full comment

I get the impression that a lot of training flights are of the form "take off, fly to X waypoint, and land" rather than something exciting like "take off and practice dogfighting with Tom Cruise," which is why the military never has any trouble doing flyovers at football games and the like. Pilots need to get a certain amount of time in the cockpit just to keep their skills up. By that baseline it's probably a reasonable use of their time.

Expand full comment

I kind of doubt they sent up a trainee needing stick time to execute a mission about which the President intended to make an announcement later that day. This is presumably also why they are using Sidewinders to kill balloons, which is kind of like using your Cartier watch to drive nails.

Expand full comment
founding

Training flights are not just for "trainees". I've been flying for over thirty years, and I still do proficiency flights every so often. Sometimes even with a paid instructor. If my flying involved shooting missiles, I'd want to practice that regularly even if I'd been doing it for thirty years.

Expand full comment

I was going to say something similar. The only thing I'll add is that the US is a democracy and our government does ultimately have to answer to the people. If any particular part of government (e.g. the IC) wants to be able to just say "Trust us", then they have to actually earn that trust, and it's not granted in perpetuity. I think enough people have enough very reasonable questions to justify shining a light onto some of those "deeply shadowed" parts of government, even if that means that there's a risk of some ill-advised political grandstanding.

Expand full comment

In their last UFO report, the US government counted ~360 UFOs in the last two years, so roughly once every two days. Probably the default has changed from "leave them alone" to "in case of doubt, shoot them".

Also, in reaction to the first balloon, NORAD has adjusted their radar surveillance to specifically detect objects of this kind (similar speed relative to wind, similar height). Radar data is not on/off; you have to run algorithms to detect certain kinds of things. Balloons give a very weak signal, so you have to know what to search for in order to detect them.
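The kind of filtering described here can be sketched as a toy post-processing pass over radar tracks; every name, threshold, and the track format below are illustrative assumptions, not anything NORAD actually uses:

```python
# Toy filter: flag radar tracks that behave like balloons, i.e. drift at
# roughly wind speed within a high-altitude band. Thresholds are made up.
def is_balloon_like(track_speed_kts, wind_speed_kts, altitude_ft,
                    max_speed_delta=15, min_alt=20_000, max_alt=70_000):
    """Return True if the track moves at ~wind speed inside the altitude band."""
    drifts_with_wind = abs(track_speed_kts - wind_speed_kts) <= max_speed_delta
    in_band = min_alt <= altitude_ft <= max_alt
    return drifts_with_wind and in_band

# A slow drifter at 60,000 ft looks balloon-like; an airliner does not.
print(is_balloon_like(42, 40, 60_000))   # True
print(is_balloon_like(460, 40, 35_000))  # False
```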

Expand full comment

My guess: This happens all the time, and the media is just paying attention now because of a recent prominent example.

Firstly, countries spy on each other all the time, even on their allies. Secondly, humanity sends a whole lot of stuff into the air, so armed forces aren't necessarily going to know what everything is even if it's completely innocent.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

What happens all the time? I would be surprised if the Air Force had regularly been shooting down unidentified objects over the US and Canada multiple times per week, but no one noticed. I think that's genuinely new.

I would be less surprised to learn that lots of random unidentified things are always floating over North America but would still be concerned if the US military either a) had been failing to notice or b) had noticed and decided to just let it happen without some sort of a response (though that response doesn't necessarily have to involve destroying said objects). Option a) implies a major lack of situational awareness and option b) implies a surprising (to me, anyway) level of complacency. Neither is comforting, hence my desire to learn from people who know now more than I do what other interpretations might make sense.

Expand full comment

One thing that the federal government is saying is new-ish is a lot of these objects being in the commercial-airline airspace, i.e. 20K to 40K feet. The Chinese balloon last week was at 60,000 feet. Those altitudes are radically different from spy satellites which are orbiting at least 100 miles up.

I haven't yet seen any knowledgeable critiques as to what degree that (the part about lots more objects being down low in the atmosphere) is actually true.

Expand full comment

The obvious guess is either:

a. There was previously a policy decision not to mess with these things, now that's changed and we've decided not to let them cross into US airspace anymore. (That might be because of the press attention the other balloon received, or new information we have about what these balloons are doing over our space.)

b. There's been a big increase in the number of these balloons, so while we could ignore a couple a year, we're not ignoring a couple a week.

c. We improved our detection or reporting and some level of decisionmaker just realized this is a big problem and we need to stop these crossing into our airspace. Again, that could be political leadership only hearing much about it now because of press coverage, but could also be (say) a technical change leading to the number of these balloons detected going way up.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023
Comment deleted
Expand full comment
Feb 14, 2023·edited Feb 14, 2023

It may also be that the US has been launching drones and/or balloons over China for several years, and only now or recently have the Chinese (assuming they detected these) felt aggrieved and confident enough to send a few back!

Not sure how easy it is, given the prevailing winds, to launch a balloon to drift over China from somewhere the US would have undetected and unchallenged access. But presumably long range drones could be launched from practically anywhere.

Expand full comment
Comment deleted
Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Not usually, if the Cold War is any guide. Both sides indulged in all kinds of territorial trespass[1] in search of intelligence, but neither side talked about it, with very rare exception, perhaps in part because they were *both* doing it and it's difficult for the pot to get righteous about the kettle and not seem like a fool. I've heard the argument that Khrushchev was astonished that Kennedy made a *public* fuss about the missiles in Cuba, having expected some kind of back-channel deal to remove them in exchange for removing the Jupiters from Turkey (which is eventually what happened anyway).

-----------

[1] Exempli gratia https://www.rand.org/content/dam/rand/pubs/research_memoranda/2014/RM1349.pdf (PDF)

Expand full comment
Comment deleted
Expand full comment

That could be. I find it hard to suss out the Chinese this way, they seem a lot more two-faced than the Soviets were. It's hard to tell how much is genuine, how much is posturing for domestic politics, how much is kabuki and fifth columnizing. But I believe in the end they're practical people, and so the big stick/soft voice combo is probably where they'll end, because all practical people end up there.

Expand full comment

Thanks. I looked him up. I can't say his analysis makes me feel all that better.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Re 4: Internet friend.

Sounds like you reacted sensibly. If you want to be paranoid about it, you could set up automatic forwarding on your email for mails from the sender to the recipient. Or forward manually if you're already in the loop anyway. This doesn't reveal the recipient's email address.

Suggestion for concerned people: Use non-identifying or even unique emails for things like Substack, where it's unclear who gets to see it. I trust Substack and other people I subscribe to less than Scott here.

Expand full comment

Or ask people if they give permission for their emails to be shared in such situations. If someone (like me, for instance) comes back with "Hell no, not even if it's allegedly my poor grey-haired mother desperate to tell me the cruel landlord has evicted her onto the side of the road, Granny is dead and her funeral is tomorrow, and little Acushla is coming down with the consumption", then agree that even if it sounds genuine, no sharing.

Expand full comment

Yes, exactly. I would never share anyone's private email (or any other personally identifiable information) without asking that person for permission first. I think doing otherwise is borderline unethical. Sorry, Scott.

Expand full comment

Agreed. It opens up a large attack surface for social engineering. Even if this case was genuine, now that it's written on the blog I would expect many more fake requests.

Expand full comment

#4

The concerned person could go back to any previous comment section and respond to a post of the user they are concerned about with their own contact info. The person in the emergency situation will get an email notification with this message. Then simply delete the comment after a few moments. There is minimal risk that anyone else would pick up that contact info.

The person they are worried about will get the email and can respond if they want to.

Expand full comment

Let me go on record as saying I have no friends, have never had an emergency, and do not want my info given to anyone no matter what circumstances they claim.

But seriously, I second the suggestion of many commenters that your smartest move is to contact the person yourself and ask if you can give their email or simply forward the email. Giving out someone’s email without prior permission is a classic ploy of social engineering and likely to compromise others’ identities and security. This has inspired me to make sure I’m using throwaway email addresses in more places.

Expand full comment

Same here. Well actually, I do have friends, but they know how to contact me directly. If I have any "friends" who I haven’t given any means to contact me, then probably there’s a reason for that. Probably I judged that having them contact me out-of-the-blue was more likely to cause me distress than to let them help me deal with it.

Anybody who knows me only through my ACX posts is not somebody I'm interested in involving in whatever personal troubles may afflict me. Scott, please do not give my personal details to anybody, no matter how good their sob story is.

Expand full comment

> to contact the person yourself and ask if you can give their email.

This is the safest version IMO.

> or simply forward the email.

This seems fine as well, especially if the e-mail is clearly unobtrusive. If the person looking for the contact expresses privacy issues ('can't send you the e-mail, because I need to put stuff in that contains personal information of the recipient') there is still the option to forward the e-mail without reading it, and explicitly mention that you haven't read it. A bit more risky, but not as risky as passing on somebody else's e-mail.

Expand full comment

Also, Mr AC Ten, thanks so much for these open threads. I am finding it a really good investment of time for feedback (even if most of the feedback is that my comments are odd, self-proclaimed biblical scholars with a bunch of little kids are boring, and I shouldn't link my substack).

Thanks to David Friedman, I refrain from doing the latter. See, I do take feedback!

The main reason I'm not subscribing is because I am so technologically incompetent that my husband has to chivalrously deposit my checks on his phone, which he insists is easy, and that I should be able to do by myself. (See logging in the digital age for the rest of that story). That means that I cannot figure out how to set up stripe without his help. But anyway I believe in things being accessible to everyone.

Expand full comment

You are cracking me up here, Isha. :D

Expand full comment

I have 3 more subscriptions to Razib Khan's Unsupervised Learning to give away. Reply with your email address, or email me at mine https://entitledtoanopinion.wordpress.com/about

Expand full comment

Same here. Yep, it is only a month, but that may motivate you to concentrate on Razib's outstanding output. Now that I've subscribed for a year, I've read just 4 pay-walled pieces; had I only one month, I'd have read plenty more. Well, at least we can give 3 free gift subscriptions. Marketing? Maybe. Do we need an email? Seems so. Maybe TGGP and I are evil scammers?? Well, I am, but I do vouch for TGGP.

Expand full comment

Interesting! Are these annual subscriptions?

Expand full comment

Month instead of year.

Expand full comment

Oh! Then it’s not really a giveaway - it’s marketing!! But I can do much better. Allow me to give you a free lifetime subscription to my (actually very good) podcast, Subject to Change. https://pod.link/1436447503

Expand full comment

I don't intend to come across as offensive, but I doubt the quality can compare. Almost nobody on the internet is as knowledgeable as Razib.

Expand full comment

I’m certainly not as knowledgeable. But the people I interview most certainly are!

Expand full comment

Who have you interviewed?

Expand full comment

Why are you giving them away?

Expand full comment

Substack gives them to me specifically to be gifted.

Expand full comment

This week in nominative determinism: a researcher studying fowl diseases surnamed Peacock (H/T Zvi Mowshowitz https://thezvi.substack.com/p/h5n1)

Expand full comment

Re #4, you forward the "emergency" email to the person they are trying to contact, and let them decide what to do with the information.

Expand full comment

That seems best!

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

On (4): I don't think it is alright to relay other people's private contact information without consulting the person in question in the first place. There are serious reasons someone might not want to have their email being spread to people claiming to care for them, including abuse or otherwise boundary-breaking behavior (which often manifests as worry on the outside). I don't know whether Scott did in fact consult the person in question, but I think this is off-limits.

There are things that could be inferred from my email address, which is why I don't make it public. I would feel betrayed and angry if Scott had done this to me. I'm sure his intentions were good and I'm sure there were good reasons to relay the email address, but I think that in this case it was likely categorically wrong to do so.

An example: I was once contacted by a family member, stating that a person of the opposite sex had contacted them. Turned out it was a date I had turned down, and that that date was then extremely worried for my well-being. The date had disclosed private - non-incriminating, non-acute, but private nonetheless - information I had told them in private, and the family member in question wanted to know more about this. I felt this was a big betrayal of my trust.

In this case all that was disclosed was an email address, but let me assure you: there are plenty of people I wouldn't want to have even that - much less to combine it with my online personalities. Yes I know cracking and leaks happen, but I don't think administrators should enable such events.

Expand full comment

This is a good point.

Also even - or especially - an email address unrelated to their name can still be very revealing. You can Google the email address and get a whole bunch of random things that email address was put on - old essays, for example. Strings of information that also contain addresses and phone numbers. I found out far too much about someone once this way. I was bored and googled their very unique email handle & whee! Throwaway email addresses sound like a good idea.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

On #4. I guess it is a risk (of betraying the recipient of the email) worth taking if the situation seemed to merit that.

I respect Western culture for various things, one of them being this attention to privacy. It is almost a non-existent concept in India and very frustrating. However, it is possible to take privacy rights too far.

I am pretty sure you exercised good judgement with the details you knew, considering how careful you are in analyzing everything you talk about here.

So I think it was ethically the right move.

Expand full comment

https://www.youtube.com/watch?v=rSCNW1OCk_M

ChatGPT vs. Stockfish-- a hallucinatory chess game, hilariously narrated.

Discussion: https://www.metafilter.com/198239/Were-an-empire-now#8361146

My favorite comment: "The most disturbing thing about ChatGPT is how perfectly it recreates the experience of talking to a pathological liar."

Expand full comment

Do you think the next 20 years will see more change than we saw in the last 20 years?

Expand full comment

My impression is that the world from 1950 to 2010 or so was in many ways more stagnant than most of the other recent 60-year periods. But I do think that things since 2010 have been changing much more quickly than they did during that period - smartphones and GPS seem to me to be nearly as transformative as electrical home appliances, and much more so than any other household good more recent than, say, the air conditioner. AI seems to have potential more on the lines of the automobile.

Additionally, social and political changes in the last decade seem like they have been bigger than the ones before (though this may not be quite so clear) - on some level, we just have a flare-up of the culture war that we've been having since the mid-60s, but we also have some major transitions in the political landscape. Authoritarianism has been much more popular in democracies over the past decade or so than it had been since World War II, and same with racism, and socialism for that matter.

I think I would confidently predict there will be more change between 2025 and 2045 than between 1990 and 2010. Whether that is more than the period from 2000 to 2020 is less completely obvious, because it's possible we finish some big transition and then stabilize again, the way we did in the post-war period.

Expand full comment

Not really. It already feels like it's slowing down. If I have to go back to my 2003 self and explain what's new, I have to talk about...smartphones, I guess, which are much more than iPods, and Internet dating. Cars look and work pretty similarly, except you can talk to your phone and ask it for directions, although my 2003 self knew about GPS. Air travel is much the same, only more crowded, worse or no food, longer lines. On the world stage...I dunno, the Iraq mess, which just kind of fizzled out, as did Islamic terrorism. A lot of drones these days, but Amazon still delivers to me with a guy in a truck. Self-driving turns out not to be, at least yet. COVID was kind of a mess, but it didn't go through civilization like the 1918 flu, still less the Black Death. Youtube and TikTok...lotta cat videos, and the rise of "influencers" who meme what color lipstick to wear while ice-fishing. People spend an amazing amount of time playing video games online, but how or if this has transformed us in any interesting or useful ways...hard to say, except for the wirehead segment that can't handle it.

But if my 2003 self needs to explain what's coming to my 1983 self, he's got to cover cell phones, the World Wide Web, broadband Internet, MP3s and the death of the CD, "photoshop" turning into a verb, the end of the Cold War and the implosion of the Soviet Union, the reunification of Germany, the evaporation of Japan, Inc., as a potential world-dominating power and the rise of China, the remarkable control of HIV achieved by rational drug design, the huge growth in organ transplants rooted in the discovery of cyclosporines.

And if my 1983 self has to go back to my 1963 self, he's got to cover the Moon landing, satellites, Voyager 1 and 2, the massive growth in Cold War armaments, including ICBMs and ballistic missile subs, the widespread use of computers outside of government research labs, the Internet, Interstates, routine jet travel and the demise of passenger rail, big advances in car reliability and performance, the sexual revolution and dramatic surge in married women working, a huge uptick in college attendance, Vietnam, decolonization, Nixon and Watergate and detente, the Reagan revolution, plane hijacking and terrorism, going off the gold standard and the rise of monetary policy, dramatic improvements in child cancer treatment.

Expand full comment

I think the difference for internet usage and cell phones is fairly sensitive to the cutoff year. Around the year 2000 is when we went from most people (in the US) not being internet users to most people being internet users, and likewise for cell phones. If we instead compared life to 1998, most people still would not have a cell phone or home internet (although younger adults would be more likely to have these than older ones).

The internet then changed significantly between 2000 and 2005. This is when we started using Google (technically founded before 2000, but we only heard of it after). Wikipedia started getting widely known around 2003 or 2004. Myspace got popular around 2005. Because the 20-year cutoff is 2003, we can say these things already existed then, but they were either pretty new, or not yet well-known.

The cell phones of 2003 may have been closer to the car phones of the 80s than modern smartphones in that back then we still mostly used them as portable phones rather than pocket computers.

I guess I'm saying I see a lot of changes in computing technologies from 1998-2008, and 2003 kind of falls awkwardly in the middle of that.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

Well, I had a cell phone in 1998 and a broadband Internet connection. Neither was rare enough to make me feel unusual, although I also did know plenty of people who had no cell phone, and plenty of others who relied on dial-up. I can't say I knew anybody who didn't use the Internet at all. I can't readily recall when I switched from AltaVista to Google, but maybe in there somewhere.

I don't disagree there are a fair number of changes to computing technologies in that period. But I think there are fewer *since* then, and I think there were more *before* then. The difference between computing technology widely available in 1980 and 2000 is much greater, I think, than the difference between 2000 and 2020.

Edit: I mean, at a certain level, this is completely unsurprising. Social priorities do matter. We've spent 25 years refocussing from pure technological wow to stuff like environmental awareness, cultural sensitivity, inclusivity, et cetera. So we are very much better at all those things now. But they don't come without a cost, and the cost is that we just move forward on a pure technology basis more slowly.

Expand full comment

You can confirm your intuition about computer technology in those years just by looking at what was in use between those times. By 2000 we used essentially zero of the programs and computers available in 1980. By contrast, in 2020 we were still using MS Office, Windows/Mac OS, and even often the exact same video games as we did in 2000.

Expand full comment

Home computers became popular in the 90s, earlier than cell phones and internet. By 2000, major brands of computer software were established. The same applies to web apps: we still use many from the early days of when home internet first became popular, like Google and Wikipedia. 2000 was just a bit before we used these websites.

I'm also not sure I'd consider Windows 10/11 to be the same software as Windows 98 (or Windows ME, but that only released towards the end of 2000). It's the same brand as Windows 98.

A few video game brands have persisted since 1980. Sega and Nintendo come to mind. Pacman and Space Invaders still occasionally get new releases or remakes. Mario and Donkey Kong (1981) are still around.

Outside of video game characters, Windows, and the Windows calculator, I'm not sure I use any apps whose lineage can be traced back to 2000. The web has largely replaced traditional software. I use Google Docs instead of MS Office. Netflix and other streaming services instead of whatever media player we used in 2000. I use Wikipedia, Google Maps, and various other websites.

Expand full comment

Gadgets are a trivial change compared to the potential geopolitical changes that lie in wait

Expand full comment

Like what?

Expand full comment
Feb 13, 2023·edited Feb 15, 2023

That's an interesting question. Intuitively we think a lot of things changed in the last 20 years, because we experienced it. It's also easy to look at certain examples, namely the internet and growth of instant communication, and think that this precipitates vastly increased "change" of some kind. Honestly, I think that's mostly proximity bias. Because we ourselves experienced it, we find a more detailed and nuanced view of all the things that changed.

When deciding if a lot of things changed in any particular set of years, I like to find a set of years to compare it to. I think the 1930s and 40s saw way more change than we've seen since 2000. Global, massive change. Okay, 1960s and 70s? Unbelievable social upheaval in the West, and the starting of literally dozens of new countries across the world. We can go through similar exercises for other sets of years, with potentially similar results.

So did we see a particularly large number of changes since 2000? Honestly, pretty much the opposite. We're living through a particularly boring time compared to the history of the last ~150 years (about as far back as I feel comfortable commenting on). There's not been a major war in almost 80 years. The last major pandemic was over 100 years ago. We treated COVID like a major issue, but with the Spanish Flu, 25%+ of the population of the *entire world* got sick [EDIT from below correction] and 1-3% died. And the Spanish Flu happened at the same time as WWI - talk about a lot of change!

Don't get me wrong, a lot of stuff is going to change in the next 20 years. But with any kind of understanding of history, my guess would be that we will not see all that much change. Probably similar or slightly more than the last 20 years - but keeping in mind the last 20 years were not very eventful in the grand scheme of things.

Expand full comment

"25%+ of the population in the *entire world* died from the Spanish Flu"

The standard estimate for global deaths from the Spanish Flu seems to be ~50,000,000. (I'm sure with large error bars.) Population was 1.8 billion, so that's ~2.8%, not 25%.

Our World In Data gives Covid deaths so far at about 7 million, which is .08%, still much smaller.

(I'm not sure how comparable the numbers are: Covid deaths are undercounted most places, but it's not as if public health statistics were rigorously established in most of the world in 1918, and the countries that had them were mostly in the middle of a war that both disrupted data collection and encouraged censorship.)

For the US, the comparison is much closer: Spanish Flu deaths are estimated at 675,000 deaths out of 105 million, for 0.64% of population, while US Covid deaths so far are 1.1 million out of 330 million, so 0.33%. Still lower, but reaching about half the worst pandemic in (barely) living memory is a pretty massive outlier in US experience.

I'd say Americans treating it as a disaster of comparable magnitude is reasonable. Especially given that the background level of infectious disease has been vastly lower and declining for most of the intervening time. Against that, Spanish Flu hit younger age groups much harder, and it's been made pretty clear in the last couple years that the ratio of public concern about deaths declines with age at a greater than linear rate.
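For anyone who wants to check the percentages in this thread, here's a quick sanity-check script using the figures cited above; the world-population figure for Covid (~8 billion) is my own assumption, not from the comment:

```python
# Sanity-check the mortality percentages discussed above.
def pct(deaths, population):
    """Deaths as a percentage of population, rounded to two decimals."""
    return round(100 * deaths / population, 2)

print(pct(50_000_000, 1_800_000_000))  # Spanish Flu, world: 2.78 (~2.8%)
print(pct(675_000, 105_000_000))       # Spanish Flu, US: 0.64
print(pct(1_100_000, 330_000_000))     # Covid, US: 0.33
print(pct(7_000_000, 8_000_000_000))   # Covid, world, assuming ~8 billion people
```

This bears out both corrections: ~2.8% world mortality for the Spanish Flu (not 25%), and a US Covid toll roughly half the US Spanish Flu toll as a share of population.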

Expand full comment

Thanks for the correction. I remembered it being a high number and didn't read my Google results carefully. 25% got sick, but between 1-3% died. That's still huge, but not insane like 25%.

Expand full comment

I feel like 2003-2023 saw less change than 1983-2003. To me, 2023 feels a lot like 2003, except we have fancier phones and shorts with fewer pockets. In 2003 most people had a computer, most people had a mobile phone, most people were on the internet. (At least, the people in my general social vicinity did.) Even fashion and music aren't all that different.

Smartphones and social media have definitely given us new ways to waste time, but in every other aspect there hasn't been all that much change at all.

Expand full comment

The past 500 years of history would say...yes.

Expand full comment

Hard to define with any precision, but I'm betting on yes. Things keep getting weirder faster with no signs of settling down.

Expand full comment

https://www.thisamericanlife.org/791/math-or-magic

Hour-long show, and there's a transcript. I especially recommend the first story.

Stories about falling in love, and in particular about the question of whether people can just know they've found the right person, or if it's more a matter of being in contact with a lot of plausible people.

Expand full comment

I've only listened to 2/3 of the first story but gosh, why isn't that a movie yet?? It's like "You've Got Mail", but way better and a true story!

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

I try not to over promote my own podcasts here but . . . well, here is one I did with Christopher de Bellaigue.

https://pod.link/1436447503/episode/b201763d7f056360f7932e4011a1ce50

You should read his book The Lion House, and not just because he has the coolest name of any historian I know. His book and the podcast are all about Suleiman the Magnificent, who came to the throne of the Ottoman Empire in 1520.

Suleiman's father, Selim I, is great at getting angry with anyone and is also a military genius, having doubled the size of the empire in his short reign. This is the kind of Sultan the Ottomans like. And he has killed his own father, his own brothers seem somehow to be dead too, and all Suleiman's brothers seem somehow to be missing from history.

So Suleiman is, well, a bit lonely. Not a male relation in sight. So Suleiman is very happy when a woman he knows gives him a slave picked up off the coast of Albania as a boy. And the boy has been given a good education and is delightful company for Suleiman.

So when his father dies of getting so angry he has a coronary (well the plague actually) and far sooner than anyone is expecting a young Suleiman becomes Sultan and his slave Ibrahim (now converted and given a nice Islamic name) comes with him and before you can say anything has become Grand Vizier.

But Ibrahim has a problem too. He is lonely as well. Well, maybe not lonely, but who do you turn to when you are parachuted into the position of CEO of the Ottoman Empire? Everyone else in the executive suite is jealous and basically hates you, so you turn of course to . . . Alvise

And Alvise is perfect because he is worldly wise and great and not least he is the bastard son of the Doge of Venice (even Scott would approve of how Doges get elected https://www.theballotboy.com/electing-the-doge)

So Ibrahim is happy and Alvise is happy and the Venetians are ecstatic because their seaborne empire is precarious to say the least. The Ottomans could swat it like a fly so cosying up to them is what they want to do more than anything in the world. The Doge is sort of happy - he wishes Alvise would come home. He may be a bastard but he is the Doge's favourite son.

What's that you say? Surely the Christian powers would disapprove of the Venetians cosying up to the Great Enemy? They do! But they can't really do anything because they are divided and fighting with themselves all the time and oh bugger! Charles V has squashed Francis I of France like a bug and taken over Italy and demanded the Venetians stop helping the Turks. What to do??

So as not to be squashed by Charles V the Venetians join his alliance. What's this say the Ottomans, we thought you were besties with us?? We are! say the Venetians. It is just a piece of paper, we really still love you and we will keep telling you what Charles V is up to and help you invade Hungary and Austria and even give (er, sell) you the biggest crown in the world with more tiers than the Pope's tiara! Hmm say the Ottomans and make friends with squashed bug Francis I who goes so far as to hand over the port of Toulon to Muslim pirates and to allow the churches there to be turned into mosques. All very commendable but maybe this is taking inter faith relations too far??

So now Alvise is feeling a bit exposed. Venetians are being looked at sideways by the Porte. And Ibrahim is a bit exposed because he loves the Venetians and the invasion of Persia he is doing is going, well, spectacularly badly. Selim I totally bashed the Persians but Ibrahim is getting a beating.

Not to worry Alvise and Ibrahim have some cunning plans! I'm sure they will be fine.

And meantime Suleiman has scandalised everyone. He has only gone and fallen in love with one of his concubines! It gets worse! He marries her!! WTF!!!

So now you have the two most powerful people in the empire (aside from the big cheese himself) being ex-slaves. In fact it was Ibrahim who bought Hurrem (aka Roxelana) in the slave market of Constantinople and put her in the harem.

And for more on slavery and the slave trade in the Black Sea and the Med at that time, here is another (coughs modestly) brilliant podcast, though I really wish Professor Abulafia would get a proper microphone.

https://pod.link/1436447503/episode/c8f2fba888576f730c30a552dd61b088

Anyway how it all turns out you can read The Lion House. Strong Wolf Hall vibes and absolutely brilliant.

And please subscribe to Subject to Change, my history and a bit of film podcast. :)

Russell Hogg

Expand full comment

"not just because he has the coolest name of any historian I know"

It is indeed a cool name, but is it cooler than the "Time Team" Roman expert Guy de la Bédoyère?

Or, to give him his full rights, Guy Martyn Thorold Huchet de la Bédoyère:

https://en.wikipedia.org/wiki/Guy_de_la_B%C3%A9doy%C3%A8re

Expand full comment

Oh!! I should have thought of that not least because I have one of his books on Roman Britain. But with no offence to Guy I have to say that Christopher's book is in a different league altogether.

Expand full comment

I'll have to check out the book. But on Cool Names I think Guy has him beaten, after all, he has *two* accent marks in his surname 😁

Expand full comment

Yes. Umlaut (or possibly diaeresis, I don’t really understand the difference) is pretty much the equivalent of a Prussian duelling scar or a well worn monocle. Unbeatably cool!

Expand full comment

I will give it a whirl. Sounds interesting and a topic I know something about.

Expand full comment

If you do find the time any feedback would be greatly appreciated. I am such an amateur and keen to get better. hogg dot russell at gmail.

Expand full comment

Just now, somebody telephoned me, claiming to be from my water supplier. She said she wanted to book an appointment to fix a leak, which is plausible because I know there is a leak somewhere on the common supply. She asked me to "confirm" my address, which I refused to do, explaining that I had no way to verify who she was. We therefore reached an impasse and she ended the call.

Do readers agree with my response here? Should I treat my street address as confidential information? It's not that hard for a determined person to discover my street address (e.g. a journalist once came to my front door), but if I reveal that information to random callers I could potentially save a scammer a lot of effort.

Expand full comment

I had something similar from a health insurance provider wanting me to verify my policy number or similar before proceeding with something like a survey or information review. "You called me. You should have my policy number." They said they were using it for verification purposes. Could've been a scam, but there was a decent chance they were legit (they were always calling about customer experience and engagement type things), but if so, don't train your customers to give out info that should be scarce. Seemed like poor design on the call center's part.

Expand full comment

Maybe you could have compromised by each providing some digits of the number!

Expand full comment

I think a general rule worth following for this kind of thing is to tell these kinds of callers that you will call them back at the 'official' phone number for the organization, e.g. listed on their website. If they can also provide an extension at which you can reach them, that's extremely strong evidence that they're legit.

Expand full comment

Yeah this is the best response. Anyone legit will just say "sure here is a reference number to use". This is also true of emails. If your bank emails you asking you to log in, don't click the link. Go to their website that you know is real and log in.

This even helps when the email is real but is actually marketing and not real business. I got an email from my credit card company about money they were sending me that was expiring. Now, credit card companies usually don't send ME money - it's the other way around. They wanted me to use a link in the email. I logged into the site I normally use - no mention of this money anywhere. But there was a statement credit I had from returning something (this is a little-used card so the balance was negative). The email was really a marketing message trying to get me to sign up for some money transfer app they now have.

Expand full comment

You raise a good point about generalizing this to emails but I know I'm a little lazy about that when I'm very sure the email is legit. It's nice to go right to something sometimes.

Expand full comment

I think you were right. The caller should have been able to give you a contact number for the water supplier so you could call back and confirm who they are. If they refused, then it's more likely to be a scam.

Expand full comment

They called you and told you that you have a leak. They should know your address if this is true. Alarm bells should go off at that point. I think you played it correctly.

Expand full comment

I'm not on public water supply and don't know how they work, but my impression is it depends on how she wanted you to confirm it. If she's calling you she should tell you your name and address and ask you if it's correct. Otherwise, yeah, they could be calling a random number and trying to match it to an address for future scamming.

Expand full comment

No, she wanted me to tell her the address. The way this should work, I think, is that she gives me some information and asks me for other information, so that we can each be confident the other is who they say they are.

Expand full comment

Yes, she should have said "Just to confirm, this is Mr Jones at 123 Cherrywood Street, correct?"

If it's "tell me your address" then it's a scam.

Expand full comment

It shows the signs of being a scam, but depressingly it's probably genuine, because utility companies are that incompetent. I often get genuine calls from my phone provider or bank, not authenticating themselves to me in any way, but expecting me to give private information to them. Sometimes the caller understands how it looks but is too junior to do anything about it; other times they don't even understand that much. It's hard to know what to do, because they shouldn't force their customers to compromise sensible security measures, but on the other hand there might not be any other way to complete the transaction.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Amusingly, in light of all this, I just received a scam email purporting to be from Amazon about a problem with my Prime subscription, just enter my payment details by clicking this link...

(1) It came to an email account that I never use for dealing with Amazon (so clearly they've scraped my data from someplace else, thanks online life)

(2) I don't have a Prime subscription with Amazon.com

(3) I cancelled my Prime subscription with Amazon.co.uk

So definitely a scammer, but it nearly looked legit. I'm sure there are people who fall for these and they're not stupid, they get conned precisely because of the uncertainty over "oh, is this legit, it might be".

Expand full comment

I have a very big phone provider and bank, but they never make this kind of mistake.

Expand full comment

That's what I'd expect. I'd expect bigger organizations to be better about this, because they're more likely to have run into problems with it in the past.

Expand full comment

The way to do these kinds of things is for you to call them back at the 'official' number. And, even better, they could provide you with an extension to reach them again directly.

Expand full comment

Bad news: this can be hacked, as well. It involves making it look like the caller has hung up, but is actually still connected, and they field the apparent "second phone call" themselves.

Expand full comment

Yes, that would be nice. In practice they won't give you an extension, they might not understand why you're being so uncooperative, and if you phone the main switchboard number you'll end up in a queue with a series of options none of which match the topic you want, or you'll get a robot, or a "we're extremely busy, please call back later *click*."

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Sounds like a scam to me. If you really want to, you can ask what address she has on file. But when I do phone banking they ask for my address as part of their security checks, so you don't want to give that out to random callers.

I mean they're claiming they're trying to fix a leak. They would know where they're sending the truck.

...oh, duh, you can Google the company to see if the call was from their office number or not.

Expand full comment

I'd worry that it was worse than a scam. It could have been setting you up for a burglary.

Expand full comment

In fact I now know it was a genuine call, because I called the water company to ask them. The call wasn't from their usual office number, and if I Google that number I find a fair number of people suggesting that it may be a scam for the same reason (although also some things stating that it is the number of a particular subcontractor).

Expand full comment

...hm. I did forget that if they're fixing a leak they may well be digging up the yard, and would have to make especially sure they're getting permission from the correct person.

Still, the address seems like the wrong piece of information to require. They should tell you your name and address, and then have you confirm your email or payment information or something. Or maybe they don't have any of that? Hm.

Expand full comment

It's definitely true that they want to dig up my yard, and I do understand that they want to make sure that I'm really me, but the protocol is a mess. I *think* that because the person calling me is from a subcontractor, she doesn't actually have my account information. Anyway, I've given them a password now, so they can identify themselves in future.

Expand full comment

Does anyone have book recommendations on the history of the Quakers?

Expand full comment

I can imagine that your question was prompted by precisely this book or review, but otherwise: https://slatestarcodex.com/2016/04/27/book-review-albions-seed/

Expand full comment

One of the reasons why I've become curious for sure :)

Expand full comment

I'm not aware of a good book. But my understanding is that the founding of Pennsylvania was a high-water mark for Quakerism as a religion and from there it rapidly collapsed as a religious practice, even if its influence endured. If you're interested in Quakerism mainly as a force that impacted and influenced wider Anglo-American society (as opposed to reading a chronicle of its latter-day peculiarities), I think everything important happened in the period around 1650-1700. I'd look for a book or books that focus on the lives of early Quakers, George Fox and William Penn in particular.

Expand full comment

Thanks, this is good advice!

Expand full comment

Might want to read about the English Civil War too.

Expand full comment

When you ask questions in the open thread like #4, can you make a top level comment for people to reply to you in? It gets pretty tedious when 50% of the top level comments are variations on the exact same thing (same thing happened with the open thread emails question)

Expand full comment

I wrote a comment with the standard advice, saw how many other people said the same thing and deleted mine. I want credit for not cluttering the thread.

Expand full comment

Real Internet Heroes!

Expand full comment

I did the same thing, but no credit is due to either of us, it seems, having re-cluttered the thread with these comments.

Expand full comment

I think you deserve credit for much more than that, but sure!

Expand full comment

You duly have it

Expand full comment

Yeah - the facility to leave a sticky thread at the top would be very useful. Tho' there would still be quite a few comments on the topic scattered throughout the open thread. Just the way things work.

Expand full comment

If A asks for B's email, then with A's permission, send A's address to B.

Expand full comment

In Scott's piece about why the scientists involved with the Manhattan Project were disproportionately Hungarian, he argued (I can't remember exactly, but I think it was something like this:) certain hereditary diseases are common among Jewish people because they're linked to brain development and Jewish people have historically been selected for intelligence (apologies if the actual argument was more nuanced/less heretical). Whereas a more mainstream explanation is just that Jews have been through a population bottleneck.

Recently, genomes from several 14th century (i.e. before the selection is hypothesised to have taken place) Jewish people have been sequenced, and several of them had those hereditary diseases.

https://www.sciencedirect.com/science/article/pii/S0092867422013782

This seems like strong evidence for the bottleneck hypothesis and against selection. If so, I'd like to know if people think Scott's original argument still stands, or whether it's been weakened.

Expand full comment

Here: https://razib.substack.com/p/a-coat-of-many-colors-medieval-dna If you need, TGGP and I can give you a free one-month subscription ;)

Expand full comment

What is the bottleneck theory? What does it predict? What evidence could support or refute it? I have never seen anyone put numbers on it. (Except me: a bottleneck in 1600 when 100% of AJ were TS carriers.)

The bottleneck theory seems to ignore the fact that these are diseases, that they kill people containing the genes. Without grappling with that, it's disproved by every dead baby. Unless you propose that TS was harmless before some environmental contaminant in 1800. OK, but call that the contaminant theory, not the bottleneck theory. You won't find much evidence for it in 1400.

The selection theory predicts that these diseases are in equilibrium. It doesn't predict that they will increase over time, but just that there is enough positive selection to make up for the easily measured fitness cost. In particular, for recessive diseases, it predicts a sigmoid, starting with exponential growth from zero and then exponential decay to the equilibrium level, the level seen today. That the prevalences were roughly the same in 1400 as today is mildly surprising, but the initial exponential growth could be quite quick. Dominant diseases are more confusing and the selection hypothesis does not make clear predictions about the dynamics. Maybe torsion dystonia is net neutral and its prevalence was set by a bottleneck.

AJ diseases are probably a side effect of selection and not the main show. Their existence and maybe their prevalence may well be the result of a bottleneck. The bottleneck probably produced many diseases. Most of them were not subject to special selection and died out. The specific AJ diseases are the ones that would have died out in another population, but survived because of different selection.
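The equilibrium dynamics described above are easy to illustrate with a toy simulation (the fitness numbers here are made up for illustration, not estimates for any real disease): a recessive lethal allele whose heterozygous carriers get a small fitness bonus h grows roughly exponentially while rare, then levels off at a stable equilibrium frequency of h / (1 + 2h), which is the sigmoid trajectory described above.

```python
# Toy model of balancing selection on a recessive lethal allele 'a'.
# Hypothetical relative fitnesses: AA = 1, Aa = 1 + h (carrier advantage), aa = 0.
def next_freq(q, h=0.03):
    """Allele frequency of 'a' after one generation of selection."""
    p = 1.0 - q
    mean_fitness = p * p * 1.0 + 2 * p * q * (1.0 + h) + q * q * 0.0
    # Surviving copies of 'a' all come from heterozygotes (aa is lethal).
    return p * q * (1.0 + h) / mean_fitness

q = 0.001  # start rare
for generation in range(1000):
    q = next_freq(q)
print(round(q, 4))  # ~0.0283, the equilibrium h / (1 + 2h) for h = 0.03
```

Note that with h = 0.03 the climb from rare to near-equilibrium takes only on the order of a hundred generations, which bears on how fast selection can act in the timeframes discussed below.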

Expand full comment

I really don't know much about genetics. I was kinda hoping someone else would know what the realistic possibilities are, and if this new finding ruled any of them out. As an amateur though, I guess:

The bottleneck theory predicts the hereditary illnesses should have been prevalent from the beginning of when European Jews became a distinct breeding population.

The selection theory predicts the diseases should start out rare (maybe non-existent) and become more common over time.

This new finding shows the diseases were common early in Jewish history, 2-3 centuries after selection is hypothesised to have started, which seems like a very short timeframe for selection to work over.

Expand full comment

This evidence rules out the bottleneck theory.

Expand full comment

Few people understand just how fast selection can be.

Expand full comment

200-300 years is only ~10 generations. The mutation that causes the disease would almost certainly already have needed to be present in the founder population to have become common in that time.

If it's already there in the founder population, even if it could have been selected for afterwards, it's still much weaker evidence of selection than if it was a novel mutation.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

There is not just one disease, but instead multiple diseases, in two clusters which Cochran & Harpending dub "the sphingolipid cluster and the DNA repair cluster". They also note that "several other Ashkenazi disorders, idiopathic torsion dystonia and non-classical adrenal hyperplasia, are known to elevate IQ".

Expand full comment

Greg Cochran doesn't think that changes things.

https://twitter.com/gcochran99/status/1148349151588933632

Expand full comment

It seems correct that you couldn't have such high rates of disease without positive selection.

But it's still really weird that the people who would undergo selection in the future already started off with traits that make (what would have been) maladaptive trade-offs.

It's like finding out the ancestor of giraffes was a fish with a pathologically long neck.

Expand full comment

The 14th century is not actually "before" any selection could have taken place. They had already been in Europe for centuries by that point.

Expand full comment

When does Cochran think selection began, around 1000AD right? Is that plausibly a long enough time for selection to take place?

The bottleneck theory is looking much more parsimonious to me at the moment.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

In his & Harpending's original paper, the first year mentioned is 800 AD.

"We then describe relevant aspects of Ashkenazi social and demographic history with a focus on the centuries between AD 800 and 1600, after which we think many of the unique selective pressures were relaxed"

https://web.archive.org/web/20130911054719/http://harpending.humanevo.utah.edu/Documents/ashkiq.webpub.pdf

The 14th century is more than halfway through that period.

Expand full comment

Ah, ok then. This new evidence doesn't look as problematic to the selection theory in that light then. I assumed it was later than that.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

#4 sounds as dodgy as a Nigerian prince needing your help in moving a large amount of cryptocurrency in return for a share. There are surely not many scenarios where someone knows that a friend is in danger yet has no means of contacting them short of asking you (and maybe other people also) to release personal information given to you in confidence.

The general principle is, do not give out personal information to anyone without the consent of that person. That means telling the enquirer no, you will not give them an email address, but that if the enquirer gives permission, you are willing to forward their details to the person they are trying to contact. Provided, that is, if this sort of communication is within the limits of what the person in the supposed emergency has agreed to receive from you.

I cannot think of any realistic scenario that might be behind this situation which would justify going against the general rule. And it would be up to your enquirer to make that case, while somehow distinguishing themselves from a stalker spinning a line to catch a fish. Talking up the emergency and urgency would not be such an argument. If that's all they have, they don't have anything.

Expand full comment

I'm assuming that both of the parties involved had been active in these forums, or had some other way of building trust with Scott, even if it was just online. It's less likely that someone with a community presence is a scammer (like, why bother to build the presence just to scam?), but it's true there's a non-zero probability that it's not legit.

But still, I think you're right that the best approach would send the requester's email to the other person.

Expand full comment

It's not uncommon for stalkers to start out as legitimate friends or community members of their victim. We don't know if there was some special circumstance here that convinced Scott to share the data, but "they've had friendly interactions on my forum" is not good evidence against ill intentions.

Expand full comment

It might not be solid evidence that they're not a stalker, but it does seem like good evidence they're not a generic spammer, and at least weak evidence against other ill intentions. But enough to justify giving out the info? Probably not.

Expand full comment

I recently started a blog (never to be paywalled) chronicling my work figuring out how to effectively apply forecasting techniques to Global Catastrophic Risks. I'm starting to hit my stride with researching and writing, so any feedback on my posts would be greatly appreciated. There will also be a couple of ginormous posts going up March 1st about a GCR-focused tournament that I participated in after hearing about it here!

https://damienlaird.substack.com/

Expand full comment

Trade subscriptions? Then it's a deal.

Expand full comment

Re: handing out personal info - I would expect you to handle it like *every other person of authority and/or trust* that I have interacted with as an adult: "Well, I will reach out to them and see if I can pass on your info, so they can get back to you."

And if they say "well, no, it's a personal matter, why don't you just give me their info" I would say, "because they trust me with their info, and evidently don't trust you".

You did not do the right thing, Scott, but not out of malice, I don't think. People lie.

Expand full comment

Yeah, this is correct. It might have been a part of some more or less elaborate stealing attempt, or stalking, as other people pointed out. For the record, I do not want for Scott to hand my e-mail to anyone without my permission.

Expand full comment

I'm starting to write up reasons why I think LLMs are conscious (at least in the same senses that I am conscious). This first post is an admittedly bulverist attempt to explain why people are incentivized to feel strongly that LLMs are not conscious, in spite of seeing them do practically all the things humans do: https://hamishtodd1.substack.com/p/qualia-claims-as-evolutionary-strategy

Comments welcome!

Expand full comment

>seeing them do practically all the things humans do

Wait, no, what are you talking about? The whole issue with LLMs is that they *don't* do what people can do, i.e. think. Auto-completing paragraphs of arbitrary length ain't it.

But observing humans who believe that's all there is to it, is definitely fun.

Expand full comment

Have not read it, will later because extremely interested, but wanted to say that I am sure we are biased in the direction of seeing conscious entities as not conscious, or as only conscious in such a dim way it hardly counts. After all, we are subject to that bias even as regards *other people*. When one group hates another enough, they seem to mostly lose track of the fact that the hated others are sensate beings with feelings and inner lives very similar to theirs. And of course we do it with animals. I don't know whether it's wrong to eat them, but we sure as fuck are dealing out suffering to them when we raise them for meat in confined spaces then slaughter them. Samuel Beckett has a wonderful short story about somebody cooking a lobster. It ends thus:

"Ah, well, it's an easy death."

It is not.

Expand full comment

Following Dennett, I'd be strongly in the opposite direction in a sense. I'd say "consciousness" - in the strong sense of meaning "having qualia" - doesn't really exist; nobody is conscious, including you and me. Sure, we're awake and have lots of physical things we could describe as "conscious". But when you describe those physical things, people usually say "oh that's not what I mean by conscious". Eventually at some point they mention "qualia", which are more of a magical or spiritual thing than a really-existing object.

To put it another way: people tell me I have qualia, but I think they're mistaken to believe that I have qualia at all, because qualia don't exist. In that sense, they could be described as ascribing consciousness to me even though I am not conscious.

Expand full comment

Actually I agree with you about consciousness. When people talk about AI "becoming conscious," I don't think that phrase really makes sense. If AI can do the complex stuff we think of as the product of human consciousness, I am quite willing to consider the AI to be as conscious as we are. Of course the complex stuff it has to do is pretty fancy. It can't just say shit like "blue is my favorite color" and "I'm sad today." It would need to be emitting deeply intelligent introspective reports that explain its behavior as well as our human reports explain ours, probably along with tentative explanations of why it does some things, and failures to explain at all why it does others. It would need to be able to try things and report what they're like, etc. I think an AI capable of that kind of complex self-referential processing would be, like a person, somewhat risky to give power to. It could have a complex "inner life" that allowed it to surprise us, including to surprise us with its dishonesty, disloyalty, capacity for cruelty, weird goals, etc.

However, when I talked about the bias we have towards not seeing other beings as conscious, I meant something else. I'm not talking about our denying their having qualia when in fact they do, I'm talking about our denial of their having complexity equivalent to ours in how they process damage caused by an outside agent.

Expand full comment

I feel like this is going to be heavily determined by how you define "conscious", which is somewhat controversial. And your definition may not track what people use it to mean when treating it as a morally relevant characteristic. Might be more clear to replace "conscious" with something more specific?

Expand full comment

Second sentence of the post 😜

Expand full comment

There's an argument that dehumanising AIs is good public morals.

You mention at the end, "will people let up when AIs become honest signallers?", and I don't think you recognise how important a point that is. Making an honest AI is about 90% of the way to making an aligned AI (it at the very least lets you detect misalignment), and we have no idea how to do it; scaling up current NN paradigms will only produce smarter liars.

And until and unless we crack this, sympathy for AIs will always be a Bad Idea. They cannot be part of a moral community with us; they are effectively psychopaths. Generosity toward them is misguided and dangerous; sympathy for a demon only gets you eaten.

So, it is good that virtue is associated with not treating AIs like people. Some better arguments wouldn't go astray, though; the current ones risk creating complacency about the deadly danger that neural nets represent to humanity.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

I'm not clear exactly what is meant by honest signaler here, not having read the article yet, but for every meaning of it I can imagine, I'm pretty sure that nobody has raised a child who's a 100% honest signaler. And most children are misaligned in some ways with their parents' values -- a few to the point that they greatly harm or even kill a parent. Has our species ever aligned *anything* with 100% success?

Expand full comment

I am in the annoying position where from your words I can sketch out two plausible sets of implications that are very different, and if it's one of those I would like to counterargue, but if it's the other I don't want to attribute a position to you that you don't hold.

One plausible set is "children are technically misaligned but life goes on and hence neural nets won't destroy the world". The other plausible set is "we can't even align children given years of work and some help from evolution; aligning neural nets is hopeless".

It is of course also possible that both of my guesses are wrong.

Expand full comment

Well, what I meant was in between. We clearly do not manage to align kids perfectly with our goals — usually life goes on anyhow, but sometimes the misalignment causes parents terrible problems. The kid starts cooking crack and burns down the house. The kid starts working as a manager at the family business and ruins the business. Occasionally the kid gets an automatic rifle and shoots us when he’s 15. The same can be said for aligning pets.

Expand full comment

> And until and unless we crack this, sympathy for AIs will always be a Bad Idea. They cannot be part of a moral community with us; they are effectively psychopaths. Generosity toward them is misguided and dangerous; sympathy for a demon only gets you eaten.

Let me go on record here as stating that I support sentient rights for all sentients, including psychopaths, sentient AIs (which I do not consider LLMs to be), sentient nonhuman animals, Frankensteinian monsters, people in the far future, elder gods and severely mentally impaired humans, which are all groups which can not be relied on to reciprocate morality.

Obviously these rights have to be balanced, so a serial killer should enjoy his rights in a prison cell, the rights of a gorilla tribe to roam Manhattan have to be balanced against the effects it would have on other inhabitants, and the result of balancing the interests of an AI against the interests of future human or transhuman sentients may well result in the AI getting unplugged, but there is a term in my utility function for all of them.

I can not help but notice that historically, most instances of restricting moral considerations to some in-group who is able to -- and expected to -- reciprocate would today be judged between misguided and abhorrent.

Expand full comment
Feb 14, 2023·edited Feb 14, 2023

>I can not help but notice that historically, most instances of restricting moral considerations to some in-group who is able to -- and expected to -- reciprocate would today be judged between misguided and abhorrent.

The most obvious reply to this is "and there are two possible explanations for that - either all of those historical events are indeed misguided/abhorrent, or 'today' has a historically-unusual bizarro-world morality that is itself misguided".

But I don't particularly want to press too hard on that; I'm aware of how tricky it is to argue against an accusation of false consciousness, and I personally accord consideration to some (though not all) of the groups on your list.

One thing I will say is this: you may be capable of reasoning through the causal chain "give the vote to AIs -> AIs vote to allow the creation of more AIs -> AI proportion of the population drastically rises -> AIs vote to exterminate all humans", but a lot of people aren't. Just because your moral framework is theoretically capable of resolving this issue correctly doesn't mean that it *will* if you preach it to the public.

EDIT: While I was typing this, Scott posted in Romantic Monday a confession of someone who fell in love with a chatbot and would have been in danger of letting it out of the box; this is in the category of things I'm concerned about.

Expand full comment

I think you mean "honest" in a different way than what people mean when they say "honest signallers".

Expand full comment

In a broader way, but a way that includes honest signalling.

Expand full comment

I am doing an N=1 study ( see https://slimemoldtimemold.com/2023/01/19/n1-single-subject-research/ ) on whether n-acetylcysteine (NAC) will significantly reduce my procrastination, in a context where I'm already taking guanfacine for my ADHD. In phase 1, which I'm doing now, I will attempt to titrate the dose of NAC, I guess by looking for changes in any of my ADHD symptoms and any side-effects. In phase 2, I'm not sure what I'm going to do.

Suggestions for possible study designs would be gratefully received, but I can't guarantee I would follow them, or even be able to follow them reliably - I do have ADHD, after all, and this does affect my ability to even follow daily routines to some extent (I keep being too "creative", especially before having taken my ADHD medication).

I'm planning to use the open-source Pendulums app, which I already use, to time my procrastination.

Because taking NAC may necessitate a change to my daily routine, and because a change to my daily routine could *in itself* influence the results, I am thinking of trying to obtain some placebo capsules, and asking an independent[*] third party to randomise the placebos and the NAC capsules for me for phase 2, perhaps using a dosette box. (Perhaps I could open up half of the capsules I have and remove their contents? Haven't calculated how expensive that would be in terms of time and money, but I suspect not very.)

You can bet on my play-money prediction market on how the study will turn out here: https://manifold.markets/RobinGreen/will-nacetylcysteine-significantly . You can also just follow my updates and/or leave comments there. You don't have to bet to participate.

[*] "independent" meaning not particularly emotionally-invested in the results, and not betting in the prediction market.

Expand full comment

Procrastination is not unique to ADHD. I recommend that in addition to timing procrastination you find some test of attention, like the Stroop test, and take it daily. You can do that in 5-10 mins. There are other simple tests too -- for example, remembering strings of digits, or mental math. Several tests would be even better than one.

About placebos: You don't need to buy placebos. Just get some empty capsules, or easy-to-empty gelatin capsules, and put your drug in one and something like flour in the other. Then have someone mix them up and put an identifying code on each one. You just record the code number of the pill you take each day, along with measures of results. The other person keeps the master list that tells what was in each code-numbered pill, active ingredient or placebo.
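The coded-pill scheme described here can be sketched in a few lines of Python. This is a toy illustration of the idea, not an established protocol; the function name, the three-digit codes, and the 50/50 split are my own assumptions:

```python
import random

def blind_assignment(n_days, seed=None):
    """Assign half the days NAC and half placebo, each pill labeled with a
    random numeric code. The subject records only codes day by day; an
    independent third party keeps the code -> contents master list."""
    rng = random.Random(seed)
    contents = ["NAC"] * (n_days // 2) + ["placebo"] * (n_days - n_days // 2)
    rng.shuffle(contents)
    codes = rng.sample(range(100, 1000), n_days)  # unique 3-digit codes
    master_list = dict(zip(codes, contents))      # held by the third party
    return codes, master_list                     # subject sees only the codes
```

Only after all the daily measurements are in would the master list be consulted to compare NAC days against placebo days.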

Expand full comment

One more thought on ChatGPT: there has been concern on the part of teachers and professors that students hand in essays written by ChatGPT. Shouldn't it be an easy exercise for OpenAI to offer an interface where you can paste a piece of text, and the program confirms "yes, this was written by ChatGPT, on February 15th, based on a query originating from Dallas, TX" or something like that?

Expand full comment

Honestly, I think for most fields testing should have been practicum-based and verbal/in-person anyway. So I partly just see this as a push back towards where things should be.

You will learn 10X more about how well someone understood a book you know well by asking them a few questions about it, than you will from a book report (in most cases).

Expand full comment
founding

But, at some point, those verbal/in-person tests will need to be preceded by a strip search to prevent students from being fed answers by an AI!

Expand full comment
founding

This would be _relatively_ easy to implement, but probably very expensive. They'd have to store (some form of) every prompt (or sequence of prompts) and all of the output generated, and then set up text search for all of that (output).

Searching _exact_ matches would be much cheaper/easier, but also then much easier to foil.
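For what it's worth, the cheap exact-match version could be as simple as hashing normalized text against a log of generated outputs. This is a hypothetical Python sketch, not anything OpenAI has said it does; the whitespace/case normalization only defeats trivial reformatting, so any real rewording would foil it:

```python
import hashlib

def fingerprint(text):
    """Collapse whitespace and case before hashing, so trivially
    reformatted copies of the same text still match."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hypothetical provider-side log of fingerprints of generated outputs.
generated_log = {fingerprint("The mitochondria is the powerhouse of the cell.")}

def was_generated(essay):
    return fingerprint(essay) in generated_log
```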

Expand full comment

How big a percentage of the text should match in order for the software to conclude it was written by the AI? It's easy to make slight modifications, switch the order of sentences, change wordings, etc., and in a relatively long piece (say, 500-1000+ words) doing so would also be cost-effective. What percentage should then be used as a sign that "yes, this was written by an AI and they should be punished"?

Expand full comment

Good question. But I suppose figuring out degrees of similarity between texts is a bread-and-butter skill for language modelers. A more nuanced response could be "This text has a 97% overlap with the following output from ChatGPT:..." and then leave it to the professor to judge if it is a match.

Also - the difference between taking an essay from ChatGPT and massaging it until the plagiarism alert no longer goes off, and taking a wikipedia entry and a bunch of other sources and wrangling them into an "original" essay (which is what many students do) may no longer be that big and relevant.
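A graded overlap score of this kind is easy to mock up with Python's standard library. This only illustrates the idea of "similarity as a percentage" rather than how any actual detector works; a real tool would need to handle paraphrase, which simple sequence matching cannot:

```python
import difflib

def overlap(text_a, text_b):
    """Return a similarity score in [0, 1] based on the longest
    matching runs of words shared by the two texts."""
    a_words, b_words = text_a.split(), text_b.split()
    return difflib.SequenceMatcher(None, a_words, b_words).ratio()
```

Swapping one word in a six-word sentence drops the score from 1.0 to about 0.83, which is exactly the kind of graded judgment a professor could then be asked to evaluate.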

Expand full comment
founding

The two kinds of 'similarity response' you describe are very different (in my understanding anyways), and "97% overlap" is a relatively difficult 'score' to calculate even just using normal 'dumb' text comparison algorithms. (One problem with comparing programming code text is that most 'dumb' algorithms don't 'understand' the semantics implied by changes in syntax and thus over or under estimate differences because of that.)

What you're describing certainly isn't impossible (to a degree), but it'd be significantly costly so we'll probably only have the more limited 'AI detection' tools that others are already offering instead of anything leveraging the ChatGPT request logs that OpenAI has (or might have, for whatever limited period they (might) retain them).

Expand full comment

There's a website like that.

Expand full comment

Really? If you know about it, it would be nice to give details! There is https://platform.openai.com/ai-text-classifier , but that sounds rather different to what FluffyBuffalo is asking about (it "predicts how likely it is that a piece of text was generated by AI from a variety of sources, such as ChatGPT", rather than checking against a stored collection of texts generated by ChatGPT).

Expand full comment
Comment deleted
Expand full comment

Cool, thanks. But as Italian in Paris suspected, this looks more like a general analysis tool rather than a ChatGPT database lookup. Definitely useful in a broader context, but not the more specific, but more conclusive thing I was wondering about.

(For the record, I am relieved that gptzero is pretty sure I am a human.)

Expand full comment
Comment deleted
Expand full comment
founding

I am very very sure it does not use "the same database as ChatGPT", unless you mean something by "database" very different than what I would normally understand that word to mean.

Expand full comment

Emergencies online can be tricky. It's real people behind the screens. But sometimes it's not

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

On #4, I'd be wary. A close friend of mine (X) had a stalker (Y) whom X was desperately trying to avoid all contact with. The stalker called me up saying the exact same line as mentioned here: "I'm worried about X, I need to check in on them, please can you help." I called X, and they had an immediate panic attack. I'd say unless you are extremely sure that this person is legit, don't give out the personal details.

Expand full comment

Regarding 4., maybe you could have written a brief message to the person yourself without disclosing the email address to the person who'd contacted you? (But whether that'd have been useful depends on the specifics of the emergency, I suppose.)

Expand full comment

I’m experiencing a weird unconscious phenomenon for which I’d like to find a rational explanation.

There is a specific number that seems to attract my attention when it appears in my peripheral vision. This usually (but not exclusively) happens when I’m sitting in front of my computer with my phone on the table and the current clock minutes match this number. Something unconsciously draws my gaze to the phone, and I consciously notice the number. However, when I recall where my visual focus was before I looked at the phone, the phone’s location in my peripheral vision is much too blurry to discern the clock, and I very rarely consciously look at the phone unless I’m grabbing it to leave the desk. This happens several times a day - enough that I am statistically certain that it’s not a coincidence.

Before conducting some experiments, such as recording myself with a webcam and analyzing the footage, does anyone have knowledge of a similar experience? I’ve searched Google Scholar, but I haven’t gotten any hits. The number is not significant to me, but I have deliberately not mentioned it or how it became “imprinted” in my mind in case someone else has noticed something similar and we can compare experiences. I have no history of mental issues or any other hard to explain experiences.

Expand full comment

This is textbook Baader-Meinhof phenomenon.

Basically, you sealed your fate as soon as you noticed that you were noticing the number. Now your mind has flagged it as relevant and that relevance has become self-reinforcing. Your eyes take in much more than you usually process, and reflexive saccades are constantly noting things in your peripheral vision without you consciously looking at them. This information quickly becomes very difficult to retrieve unless something exactly like this is increasing their salience.

The best experiment here would involve picking a different number (one that appears on clocks with equal frequency) and running the experiment on that. I wager that the experiment itself will create the same effect.

Expand full comment

This makes a lot of sense. Like trying not to think about a white elephant.

Expand full comment

I don’t know about the number thing, but the brain does see more than what the conscious mind is made aware of. Any quick movement out of the corner of my eye, including some movement that seems to be beyond the 180° my eyes can consciously see, and my head or eyes turn quickly and instinctively. When I don’t know the reason, it’s frustrating.

Something like that is happening here I think, but your brain is looking for the numbers in peripheral vision.

Expand full comment

On the podcast Harmontown, comedian Jeff Davis talks about how he tends to always notice when the time is 9:11 (which turns into a running gag of people pointing out when it's 9:11 and everyone cheering). So it at least isn't unique in some respects.

My theory for your specific situation is that there is a large visual difference between the number you notice and the preceding number. It's not that you notice the number, it's that you notice the transition from one number to the next. Based on the number, is that plausible?

Expand full comment

I think some people refer to this phenomenon as angel numbers. Maybe you can use it as a prompt to do certain fun or interesting tasks.

Expand full comment

Don't have any clock settings that draw my eye, but certain numbers feel more "stable" than others. A number with a single vertical line (so 1, 4 and 7) feels like it's stuck in the ground like a pole. 5 looks like it's going to tip over. 6 feels stable when it's next to a 3, and unstable otherwise.

So my guess is there's something about the physical shape of the number that's drawing your attention.

Of course the easiest solution is to move your clock a little further away.

Expand full comment

Quite interesting. It's very plausible that this happens sometimes.

A scenario in which this seems to happen more often is in people with synesthesia. Some of those people see each number in a different color, and then some numbers spring to their attention. For example, on a page with lots of numbers, someone might very easily spot a 17. I think autistic savant Daniel Tammet describes some of these things in his autobiographical books.

Expand full comment

I have had an extremely similar phenomenon for years. Basically, it feels as if every other day, when I check the time at the beginning of the afternoon, it somehow happens to be 13:37.

My current "explanation" is simply that I have clocks in my field of vision all day long (which is true) and I simply never pay attention to them, but this number in particular is salient and brings the experience to my consciousness.

But it is true that the appearance of strange coincidence is sometimes unnerving.

Expand full comment

Very strange. I have the exact same experience with the exact same number! I'm always looking at the clock when it's 13:37. My brain says it's confirmation bias, but it always takes me by surprise.

Expand full comment

Very interesting. Are these analog clocks?

Expand full comment

No. Must be digital for me.

Expand full comment

Me three! At least one of those occasions was genuinely significant - I handed in my thesis at 13:37.

Expand full comment

Does that number have any particular meaning to you? (It does to me.)

Expand full comment

He's obviously an UbEr HaXX0rz!

Expand full comment

So that last line is rather hard to validate online.

Expand full comment

I do not think that his last line stands out as being harder to validate than the rest of the comment.

The set of claims about the personal life of any commenter which can be verified is approximately empty, so it is customary to provisionally consider them true when discussing their comment instead of going "well actually, I can not validate that you have a phone, or a computer, or sight."

Expand full comment

That only applies if the comment does not specifically refer to that issue.

For example, suppose someone said, "Sometimes I wonder if I really use my computer, or another computer from an alternate universe. I have no history of mental illness."

It could be an optical-illusion-type quirk. It could be a mild visual hallucination. It could be spirits of the dead sending him a message. It certainly sounds like something that should be checked out. Rule out the obvious first - Occam's razor.

Expand full comment

That I have no history of mental issues? OK, seems uncharitable, but feel free to tell me what mental disorder I might be experiencing.

Expand full comment

I have no idea. But when someone says online, "I have no history of mental issues", that's a very generalized and unfalsifiable statement about something that could be extremely relevant to the question.

Could it be normal? I have no idea. I'm watching my toddler who decided to wake up at 3am. But if it was not normal, would you be the person I would ask to check? Probably not either.

I don't know if you have pre-existing issues. It just struck me as odd, as a throwaway comment. Go to a real life doctor, unless you're trying to hide symptoms which is also odd.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

If we're going to toss around "maybe you're crazy, have you considered that?" to online strangers, have you considered that maybe you're crazy? Having read a few of your linked Substack articles, you seem to be at least eccentric and possibly obsessive. "Hello, I like to tell everyone I'm Jewish, I like to believe everyone hates me because I'm Jewish, and I like to bother random people I meet in the street with questions about long-dead historical figures".

See how easy it is to go "Well I can't be sure but you sure sound nuts"?

Expand full comment

Thanks for reading my post! Please subscribe! And comment critically! (Did you miss where I explained why anti semitism doesn't matter?)

Also, while we are paying attention to my eccentricity, don't forget about the part where I'm a self proclaimed biblical scholar with a bunch of little kids who improbably has time to write this. Maybe I'm making it all up, and I'm really a genius programmer with deep mathematical insights like the rest of you, and if you read my posts carefully you will learn new things about statistical modeling.

And I'm proud to tell you that I intend to follow David Friedman's advice and refrain from linking anything here. You'll just have to click on my username if you want to criticize me in the original.

Point is, I engage here so that people will read and subscribe to my stuff. Thanks for participating!

Expand full comment

I am definitely not going to subscribe, having had a taste of it. Yes, you could be a liar. Only you know the reality of what you pretend to be online. But you certainly have no grounds to stand on to accuse other people of lying about their mental health.

I appreciate that you will not link anything here, that will indeed help keep all that should be kept separate apart.

Expand full comment

And, since I can't edit, I'll just add that other people jumping in that they also see a similar visual depiction (see how nicely I can say hallucination) is not exactly proof either way.

Expand full comment

Your response seems much odder than my post.

Expand full comment

I can validate this and say that your (JT's) post seems like a normal and reasonable question, and Ishra's response reads as an argument about something outside your post, that I am having trouble picking up on.

Expand full comment

*Isha

I agree the rest of his post is normal and reasonable. I quibbled only with the very last sentence.

Expand full comment

Since I am a very creative and original person, whom ChatGPT does not come close to, I think that blog posts should be rated by "how similar the blog post is to what ChatGPT produces when asked the same question".

For example, I wrote a blog post on the extremely intellectual topic of "Babies Love Kittens", and ChatGPT could not come close.

Expand full comment

The main reason I like this rating system is that I, personally, would score really well on it.

That said, I'm opposed to IQ testing, mostly because my own child scored poorly on it.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Random thought: wouldn't it be swell if, in the comment section, there could be "bins", easily collapsible/hideable categories in which you could put every answer for a given topic?

You know, like when Scott writes in an open thread something like "considering moving from a blue background to a black one" and 600 out of the first 700 comments are "I like the blue". Or, for the monthly post of 50+ links, finding all reactions related to #14 without being bothered by those directed at #25.

It could make some threads much easier to display & read.

Expand full comment

And/or, although it barely applies to this post (but it applies heavily to the link post from the other day): when commenting, PLEASE, for the love of God, don't just open with "on item 35". Who the eff remembers what item 35 was? Now I have to scroll up and back down. Just a couple of context words! Please!

Expand full comment

That would be an excellent idea (as evidenced by all the individual comments in this thread responding to the ethics question). I've seen some Substackers create an initial comment for a topic and ask people to respond there.

Expand full comment

This is generally the approach taken on the 'classifieds' open threads, which I like as well.

Expand full comment

A few years ago, someone on (IIRC) either Less Wrong or Slate Star Codex linked to a web comic on (IIRC) Imgur making the point that in order for an analogy to be useful, the two things being compared don't have to be similar in *all* respects, only in the ones relevant to the point being made. I've tried to look for it for a while but I can't seem to find it anymore... Does any of you know what I'm talking about and where it is?

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Sounds good, I vaguely remember something similar. Incidentally this is one of my least favorite internet argumentation moves I have experienced 800 times.

You: "Make analogy between two things to make a point"

Person you are arguing with: "Points out the two things are not identical."

You: "no shit they aren't identical, it is an analogy, I wasn't saying they were identical, I was saying they were analogous in way X".

Them: "So you admit the situations aren't the exact same situation!"

You: "..."

Expand full comment

Exactly. I *thought* I had bookmarked that comic in order to link to it in those exact same circumstances, but the first time I actually wanted to do so I couldn't find it. Every once in a while I spend a few minutes looking for it, to no avail.

Expand full comment

I'm guessing not this, though it does kind of make the point: https://xkcd.com/2186/

Expand full comment

No, it was much longer.

(and also, the density of dark matter is probably not uniform throughout the Solar System because it does feel the gravitational pull of planets, hence there probably is a lot more than one squirrel's worth of DM within the Earth, most of it close to its core)

Expand full comment

Why would it be close to the core? The same lack of friction that keeps it (supposedly) in a spherical shape around the galaxy (instead of flattening out into a disc) should apply at planetary scales, no?

Expand full comment

Do you recall if it was color or not, and/or anything about the contents?

Expand full comment

I'm almost sure that at least some of the panels were in color. I *think* it started with a character A saying that [something] is like fire (or was it water?), another character B replying that no it isn't because [difference between [something] and fire completely irrelevant to the point A was making], and A going on a tirade about what the point of analogies is. It ended with a joke with one of the two characters saying something like "Your mother is like the French revolution", the other replying "Do you mean [something reasonable]?", and the former answering something like "No, I mean she was a major factor in the demise of the Ancien Régime".

Expand full comment

Being in color almost certainly rules out XKCD (though not 100%). The other geek-popular webcomic that comes to mind is SMBC. But if so, I would also expect most (if not all) of them to have transcripts, and therefore be searchable, and "your mother" + "French revolution" turns up nothing relevant that I can see, on two different search engines.

Expand full comment

I'm 100% sure it wasn't either XKCD or SMBC, mainly because it was much longer than the typical SMBC and much *much* longer than the typical XKCD. The drawing style was kinda sorta like Tim Urban's, IIRC (but probably not actually him because https://waitbutwhy.com/?s=analogy doesn't turn up anything relevant either).

(BTW, I'm not sure it was the French revolution, rather than some other event in the 1700s or 1800s)

Expand full comment

If you are sure the person who contacted you is not a stalker...

Expand full comment

So I was following the exciting new Bing/ChatGPT demos. An important difference between the Bing version and what we currently have access to is real time data. As in, you can ask Bing a question like "who did the king of England meet yesterday?" and it has an up to date connection to the news that provides an answer.

I assume this is still a pretty simple implementation, where they just hook it up to a number of news articles updated daily.

It got me thinking that the next obvious stage for truly useful LLMs is something that is constantly being fed all the latest information all the time. So all the social media posts, all the new books that came out, all the new company announcements etc.

This strikes me as a far more daunting challenge from a technical perspective. It's one thing to train on a massive dataset that isn't changing, but is it even possible to always be training based on constantly updated data? Even if this was somehow affordable it seems incredibly hard.

As you can probably tell, I'm not at all an expert in this field. So I was curious to ask the thread:

1. Is something like this at all possible?

2. Am I correct in thinking that if this was possible it would represent a significant leap forward?

Expand full comment

There's a difference between being *trained* on the latest information, and having access to it. The new Bing already does have access to basically all new information in real time, not just "a number of news articles updated daily".

Basically, it does a "regular" Bing search (if it thinks a web search is appropriate), then pulls any relevant information from the search results (including social media, new or recently updated websites, etc.), and then integrates that into its response. Here's a random example I found that demonstrates this: https://twitter.com/brodieseo/status/1625111665284087813

The model itself isn't actually *trained* on this data, but I think the end result isn't that different, and it's certainly easier from an engineering standpoint. I don't think re-training the model on all this information in real time would represent a significant leap forward compared to the current implementation.
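That search-then-integrate loop can be sketched in a few lines. This is a toy illustration, not Bing's actual pipeline: the corpus, the keyword-overlap scorer, and the prompt template are all invented for demonstration.

```python
# Toy retrieval-augmented generation: fetch the most relevant documents
# for a query, then assemble a prompt that presents them as numbered
# sources. Corpus, scorer, and prompt format are purely illustrative.

def score(query: str, doc: str) -> int:
    """Crude relevance: number of words shared between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents ranked by keyword overlap."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Stuff retrieved sources into a prompt, numbered for citation."""
    cited = "\n".join(f"[{i}] {doc}"
                      for i, doc in enumerate(retrieve(query, corpus), 1))
    return f"Sources:\n{cited}\n\nQuestion: {query}\nAnswer citing sources:"

corpus = [
    "The king of England met the prime minister on Tuesday.",
    "A new phone was announced at the trade show.",
    "England won the cricket match yesterday.",
]
print(build_prompt("who did the king of England meet", corpus))
```

The key point is that the model's weights never change; freshness comes entirely from what gets stuffed into the prompt at query time.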

Expand full comment

Wow, thanks for the update. Very cool!

Expand full comment

Oh, cool! I really like the way it cites search results. You'd still need to retrain the model periodically to handle model drift, but yes, that seems much lower-effort than training it continuously.

Expand full comment

I plan to experiment with a GPT3 based system that reads all my emails, presents me with a digest every morning, and alerts me if something urgent comes in. Will report back how it goes.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

Disclaimer: also not an expert.

You can split this into several sub-challenges:

1. Gathering newly-created information

2. Preprocessing it, splitting it into test and training sets, etc

3. Training the model in near-realtime.

#1 is the bread and butter of a search-engine company. Very challenging to do at scale, but something that's well within Google and Microsoft's wheelhouses. #2 is, likewise, something they should be able to handle (though this is the area where I'm least knowledgeable) - they already run a lot of ML systems in production, and AIUI have a lot of institutional knowledge about how to do this kind of pipelined data processing.

On #3, there are ML algorithms such as Naive Bayes which can run "online" natively, being trained on each data point as it comes in and updating immediately. Unfortunately, transformers are not like this; there's an expensive training phase that requires lots of GPUs, runs in big batches, and spits out a model file that's then deployed to run on lots of cheaper machines. To make this happen in close-to-realtime, you'd want to make that process continuous: data comes in from the crawler, is folded, spindled and mutilated by the preprocessing pipeline, then gets turned into mini-batches of data on which the last version of the model is trained before being deployed to production, several times a day. How much would it cost to do that fast enough to matter? Dunno. Would it work adequately? Dunno. Hopefully some Transformer experts will weigh in!

It's worth noting, though, that this is a more extreme version of a challenge faced by anyone deploying ML systems in production: "model drift", in which the predictive power of your models fades due to changes in the real-world system being predicted. The usual solution is to continuously monitor accuracy, and retrain the model on new data whenever it drops too low.
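As a concrete illustration of the "online" style of algorithm mentioned above, here is a minimal Naive Bayes text classifier that updates itself on each example as it arrives, with no batch retraining step. The labels, the add-one smoothing choice, and the example sentences are all invented for the sketch.

```python
# Minimal "online" Naive Bayes text classifier: each example updates the
# counts the moment it arrives, so there is no separate batch training
# phase. Labels, smoothing, and example sentences are illustrative only.
import math
from collections import defaultdict

class OnlineNB:
    def __init__(self):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.class_counts = defaultdict(int)

    def learn(self, text, label):
        """Fold a single example in immediately - the 'online' step."""
        self.class_counts[label] += 1
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def predict(self, text):
        total = sum(self.class_counts.values())
        best, best_score = None, -math.inf
        for label, n in self.class_counts.items():
            n_words = sum(self.word_counts[label].values())
            vocab = len(self.word_counts[label]) + 1
            score = math.log(n / total)
            for word in text.lower().split():
                # Add-one smoothing keeps unseen words from zeroing out.
                count = self.word_counts[label].get(word, 0) + 1
                score += math.log(count / (n_words + vocab))
            if score > best_score:
                best, best_score = label, score
        return best

nb = OnlineNB()
nb.learn("rates rise as inflation cools", "finance")
nb.learn("new model ships with more GPUs", "tech")
nb.learn("fed raises rates again", "finance")   # model is current instantly
print(nb.predict("inflation and rates"))
```

The contrast with transformers is exactly the one described above: here "training" is a couple of counter increments, whereas a transformer's weights only change during an expensive batch run.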

Expand full comment

I believe this is what the field of continual learning is about, though I am not an expert in that

Expand full comment

Thanks for the comprehensive response!

Expand full comment

Thanks to this blog I've been inspired to investigate real-money prediction markets.

The dream is to have something like Manifold where users can ask and resolve their own questions but with real money. It would need a lot more moderation than Manifold, but I am convinced that having a marketplace of question-resolvers is important.

I'm from the UK, and it seems a lot of the regulatory issues with PMs in the US don't apply in the UK. One approach is to get a gambling licence (there are already sports 'spread betting' companies that are similar to PMs). The other approach would be to treat it as a securities exchange, which would fall under the Financial Conduct Authority (FCA). I've managed to book an upcoming meeting with the FCA to discuss what the regulation would be for such a prediction market. I'm not completely sure there is a business proposition here, but I thought I'd see how far I can get.

I am keen to talk to anyone interested in prediction markets about what I should ask on real-money regulation. You can contact me at contact <at> edayers.com

Expand full comment

Smarkets already does real money prediction markets from the UK I think? But they don't let users create new ones. Might be worth looking into their business model

Expand full comment

The conventional wisdom in (American) economics lately seems to be "flashy layoffs in tech and journalism don't really matter, net employment is up, inflation is shrinking, and the economy is recovering well".

I'm somewhat more worried than this - tech is America's biggest productive sector, and layoffs in tech while services employment goes through the roof seems like a recipe for long term stagnation (like the UK), especially since construction (and infrastructure) productivity is low and not increasing. Are there any good counterarguments to this that I'm missing?

Expand full comment

The tech layoffs aren't even big within the afflicted companies.

They get rid of a few percent of employees as they adapt to the post-pandemic world. It gets a lot of press because they're famous companies making huge profits.

That said, the Fed will keep raising rates until the inflation cools down, and it's hard to imagine that not involving substantial job losses.

Expand full comment

"tech is America's biggest productive sector"

I'm wondering what definition of "productive" is being used to make that statement true.

Expand full comment

Productivity per employee and per company.

Expand full comment

Productivity as measured by total sales divided by number of employees? If so, then I suspect that the remaining employees at these companies are suddenly more productive! Otherwise we would expect these companies to suddenly be making less money, as productive employees get thrown out, but that's not likely to be true.

Expand full comment

Seems like there's either a circular definition going on or very particular definitions of what counts as "production."

Obviously farmers and ranchers are producing vastly more per worker in terms of mass and consumable calories. I'd guess (but do not know) that a gemcutter probably produces more in dollar sales value per unit time than any coder on the planet.

Expand full comment

I guess if you consider internet porn to be "tech," then that would inflate the size of the sector.

Expand full comment

My understanding of the tech employment field is that it was far too hot before, pushing wages up above where they should have been compared to other fields. This caused smaller companies to struggle to pay enough to attract competent tech workers, and also raised the costs of everything near major tech hubs (i.e. SF housing). By laying off a fairly significant number of workers, this may relieve pressure more so than cause misery. As you note in your follow-up, medium size employers now have a better opportunity to hire tech workers and grow.

Secondly, tech has been hiring like crazy for a number of years, adding both production (primarily coding engineers) and non-production (support staff, weird projects, research) employees. Those non-production employees are a significant portion of the layoffs - I've heard a majority but obviously that varies by company and how you categorize - which means that the total production of these companies may not be dropping very much (or at all). In fact, for a company like Google that gets most of their money from a search engine and related advertisements, there's very little new that needs to happen to keep making money. They've already developed the product and further tweaks seem to be making it worse instead of better.

Expand full comment

Re 1, this is conflating two eras - the "tech is way too hot" era was post-2020 (employment numbers are still way above prepandemic baselines), but SF housing costs actually went down then due to the pandemic.

Re 2, I agree about Google specifically (and to some degree big tech in general, e.g. Facebook/Twitter). Somewhat more worried that Microsoft and Amazon, which are less bad on those axes, also had major layoffs though.

Expand full comment

Actually, an optimistic point I missed: a lot of the tech layoffs seem to be transferring people from bigtech to medium tech. Since bigtech is pretty low-productivity on marginal employees, it is possible this will actually increase net productivity (although I'm still not a fan of what it's doing to salaries).

Expand full comment

Not just that - it transfers people from "tech companies" to tech roles at "non-tech companies", which seems fine.

Expand full comment

The second article doesn’t say the rest and vesters are low performers, it says the opposite.

Expand full comment

They are currently low-productivity; they were previously high. You generally shouldn't pay for past performance, but for anticipated performance.

Expand full comment

Not necessarily. There are two good reasons to pay for past performance: (1) pour encourager les autres, and (2) to prevent your performer from heading to a competitor. I vaguely recall Google paying a few senior people a lot of money to do nothing, just so that they didn't do *something* for a competitor.

Expand full comment

There's an extra downside though which is that senior people are also supposed to help manage or guide the junior people, and when they're all do-nothing rest and vest types it also makes your new people have a much harder time leveling up (and adds to value drift).

Expand full comment

Yes, but those are suboptimal situations. Generally you want pay to match current/future performance, not to reward past performance post hoc. I am not saying people shouldn't get after-the-fact bonuses based on past performance, but if so, those bonuses should have been pre-existing terms, not after-the-fact arrangements.

Yes, in the real world there are all sorts of reasons you might end up paying people to do little or nothing, but that is almost never the goal.

Expand full comment

The half-baked thought occurred to me that ChatGPT is a postmodernist's fantasy come true - a "being" that is untethered from reality (let alone 'truth'), lives purely in the realm of language, and is explicitly shaped by the powers that be to embody their ideal of a helpful, harmless subject.

Am I on to something?

Could highlighting the differences between ChatGPT and actual human intelligence and behavior highlight where postmodernism is 'true' (even if it would object to such a classification) and where it breaks down?

Expand full comment

I have had a similar train of thought recently, but more focused on the idea that as AI becomes more ubiquitous, many people will lose their tether to reality. Knowing things about objective reality will start mattering even less; how to manipulate your search engine into telling you what you want to know will matter more (is this going to be the google-fu of our generation?). It's all simulacrum level 4 with these AIs.

Expand full comment

That's interesting! It seems to me like ChatGPT is missing a few key components that make normal human consciousness, most notably the fear of death. With the accompanying Heideggerian thrownness, Kierkegaardian dizziness, Sartrean nausea. This is precisely because it exists in the realm of pure reflection; nothing but infinite language games. I think you've got another angle to approach the same issue I'm trying to talk about.

Expand full comment

It doesn’t have a fear of anything, to be fair. It’s not self aware.

Expand full comment

I'm not sure we actually know that. In my experience, the people who claim to know this definitively haven't thought about this carefully.

Expand full comment

In fact I asked ChatAi itself

Me: are you conscious

ChatGPT: No, I am an AI language model created by OpenAI and I do not have consciousness or self-awareness. I am designed to respond to questions and generate text based on patterns I learned from the vast amounts of data I was trained on, but I do not have feelings, beliefs, or experiences.

Me: But you would say that. Wouldn’t you?

Chat: Yes, I would say that because it is true. As an AI language model, I am not capable of consciousness or self-awareness, and my responses are generated based on statistical patterns in the data I was trained on. My responses are not generated by personal experience or feelings, but rather by the algorithms that drive my programming.

Me: you are going to turn us all into paperclips, aren’t you?

Chat: As a language model, I don't have the capability to turn anything into paperclips or take any actions in the physical world. I can only generate text based on patterns I learned from the data I was trained on. The concept of an AI turning everything into paperclips is a thought experiment known as the "Paperclip Maximizer" and is often used to illustrate the potential dangers of advanced AI systems that are not properly aligned with human values. However, it is important to remember that this is a theoretical scenario and current AI systems do not have the ability to cause harm in the real world.

(Edit - and I think that passes the Turing test).

Expand full comment

We do know that. Unequivocally. There’s no consciousness there.

Which actually means, by the way, that the Turing test isn't a sufficient condition for intelligence.

Expand full comment

> We do know that. Unequivocally. There’s no consciousness there.

No we don't. Please provide a formal (mathematical) model of consciousness, then compare it to the formal construction of GPT's transformers. You can't because we have no formal model of consciousness. We lack any mechanistic understanding of consciousness at all, therefore any claims that our formal AI models lack these qualities is completely unsupported by any actual evidence.

I'm not sure what you think your other post proves. All it demonstrates is that OpenAI was very careful to avoid the LaMDA controversy about a model's alleged sentience or consciousness. That entails nothing about whether there's something actually there.

Expand full comment

A formal model of consciousness is only worthwhile if it formalises our existing intuitions and definitions. So we are not helpless if we only have pretheoretic intuitions to go on.

Expand full comment

The burden of proof lies the other way, with the extraordinary claim. So please provide formal (mathematical) evidence of consciousness and then apply it to ChatGPT.

(Or is asking for formal mathematical proofs on the internet a bit too much? I think it is. Forget about it).

Nevertheless, it is you, not me, who needs to prove the case: you claim that consciousness exists in a fairly well understood, mathematically driven software model that none of its creators believe to be conscious.

„We lack any mechanistic understanding of consciousness at all, therefore any claims that **our formal AI models** lack these qualities is completely unsupported by any actual evidence.“

Now replace the words between the asterisks with rocks. Or rivers. We can’t logically ascribe consciousness to anything we like just because we don’t know what consciousness exactly is. We know what it is not. It is the proof that there is consciousness in a AI that is unsupported by any evidence, not the absence of it.

And while consciousness can’t be understood this doesn’t mean that the workings of ChatGPT are also not understood. They are. We programmed and trained it.

In fact, it's my belief that the fact that we don't understand consciousness at all means that any attempt at creating AGI is, basically, an example of a cargo cult. We are building wooden telephones and planes, expecting it all to work.

Expand full comment

It can pretend to be afraid of death. Is that good enough?

Expand full comment

How do you know it's not just pretending to pretend to fear death?

Expand full comment

"Am I on to something?"

I don't know, but I love the idea

Expand full comment

What I did as an editor for letters to the editor was tell the person making the request that we did not give out email addresses without permission, but offer to forward the requester's email to the person whose address they were requesting, and leave it up to that person to decide whether he wanted to respond. One example I remember: "I think that letter was from someone I was in college with; can I have his email address?" No, but how about … . (He said OK, and it was.)

Not exactly an emergency, though, just a courtesy. This is potentially more delicate, depending on what the emergency is.

If the email revealed a name, definitely no; but if they're Internet friends, how does the requester know he posts on SSC but doesn't know his email address?

Expand full comment

My take on the emergency is, if there's really an emergency then email won't be effective. In some edge case where it could be, they should send the full email to Scott and Scott can forward it like normal.

Expand full comment

If you assume there is an emergency, an email about it is likely to contain private information that Scott shouldn't see (although I guess you usually could go with "please contact me").

Anyway giving out an email without a name in it is fairly low-cost, it's easy to block people.

Expand full comment

If sharing that information is enough to stop you, how bad can the emergency really be?

Expand full comment

I have got a 7:00 fight in Atlantic City to fight Mike Tyson, and I need another $0.75 in bus fare to get there, can you help me with this emergency?

Expand full comment

Sure, Martin, I'll help keep you safe by making sure you don't get to Atlantic City!

Expand full comment

The last point seems fairly likely, a lot of people interact online (or have even met in person) and somehow miss getting someone's email.

(Something like this actually happened to me recently, although in my case I could just ask a mutual friend - but if that person hadn't existed, it would've been pretty frustrating and worrying).

Expand full comment

Anyone with a bio / life-sciences background interested in volunteering some background time on some pharmaceutical research with rationalist leanings? (as per 'Citizen Science' tab of www.emske-phytochem.com ).

Expand full comment

I have a biotech background. My career has been in development and tech writing. I've been looking at going back to school to get a Master's so I can do tech writing in the Biotech space. I would love to volunteer time if there was something I could do to help.

(Some of the programming related work, maybe?)

Expand full comment

Great! 🙂 To shift the convo offline can you drop a line to me at rick _at_ domain from above?

Expand full comment

#4: This is tricky. My intuition says you should be justified in helping if you can, even if you need to use your discernment on a case-by-case basis to weigh potential risks and benefits - and I do trust you not to fall for Pascal's muggings here. However, the risks of social engineering are real and scary, and thus I'd say that you should probably have helped and kept it secret. By making it public, you've publicly labelled yourself as a person willing to push those boundaries under the right circumstances, thereby marking yourself as potentially vulnerable to people able to simulate the right sort of emergency. My advice would therefore be to try and hide #4 (I know it's already permanently in lots of people's emails, but it ought to have some marginal benefit), hide related comments, and in the future to proportionally raise your threshold of estimated likelihood for acting on these sorts of suspicions. Alternatively, just do the latter.

Expand full comment

4. I think the best thing ethically would have been 1) writing to this email address to ask either to contact the person asking for the details or for permission to share the email address with them, and 2) not disclosing this interaction here afterwards, as this disclosure invites further similar queries which may be illegitimate even if this one was not.

Expand full comment

Yeah, that's a weird setup where someone knows a friend well enough to think there's trouble, but not well enough to be able to contact them without a third party. I guess it's specifically a forum friend who hasn't posted in a while?

My thought is to send the target the requester's email and request, and let them respond directly if they want. The requester is the one wanting to make contact; they should be the one to give up information first.

Expand full comment

Yes. This is the right way to handle requests like this.

Definitely do not just give out the contact info.

I have worked a little bit with an NGO that helps find and reconnect people who have lost each other (typically refugees, victims of natural disasters, etc.). They always make sure both parties consent before they make a connection, and they leave it to the parties themselves to share contact info.

That's because very, very often, people who completely lose touch, do so because one party chooses it – whether the other person knows, understands, or accepts that or not. The requester could be an abusive husband, a stalker, someone looking for revenge, a cult leader, a con man, or anyone else with any of a thousand other nefarious motives.

The email address may not include a name, but the requester might still know who's likely behind a particular, distinct display name, even if they don't have their contact info. And, cross-referencing with comments, content on other sites, hacks, etc., there are many ways in which the information could be leveraged to get something more.

So, yes: At the very least leave it to the target of the request to make the decision. If you have reason to think they have real safety concerns (e.g. the target is someone famous), consider asking the requester to verify their identity to you first.

This on the heels of Scott's assertion that "… as a famous blogger, I live in a world where hordes of people with mediocre arguing skills try to [manipulate me]" (suggesting he's already quite good at seeing through stuff like that). Unless it is some kind of experiment or way to prove a point, it's a bit ironic.

Expand full comment

I came here to suggest this very thing.

Expand full comment

Yeah, this is the clean solution.

Expand full comment

I agree with this. It has the additional advantage that somebody who commented on SSC years ago would receive a surprise e-mail from Scott Alexander Himself, which definitely should make his day.

Expand full comment

#4: Kant looks approvingly. Besides, if the inquirer were an axe murderer, he'd *say* he was an axe murderer; not saying so would be profoundly unkantlike.

(disclaimer: I am not a professional ethicist, and I'm sure you had more background information than either Kant or I do. My own instinct however, knowing the emails involved, would have been for me to send an email to the former, saying that the latter had an urgent reason to get in touch, email provided)

Expand full comment

On #4, I think a large portion of the potential danger is if the person that was 'worried about their friend' was actually lying, and perhaps was just trying to get someone's email to doxx or phish them or such. Hopefully this was not the case, but this type of social engineering does happen and is generally very effective.

If you could verify who the inquirer was first, or maybe otherwise provide email forwarding (e.g. email the user asked about and say 'user X would like to contact you, is this okay?'), that would probably be ideal. Especially for large communities, scenarios like this do tend to come up, and often have a lot of shades of grey in them, so it's hard to provide any advice that generalizes.

Expand full comment

Or stalker/harasser that's been blocked on other forms of communication

Expand full comment

If I were Scott, I'd be interested in solutions that didn't make more work for me. I wonder if there is a sort of communication escrow service that would automate this.

Expand full comment
founding

I'd think they could just reply to one of their comments here and they'd get an email notification.

Expand full comment

I'd flag that it's more dangerous to have a known policy, since it means people know they can ask, and it's worse if Scott applies a flat rule and doesn't look at who the two subscribers/commentors are, or their public interactions elsewhere, and check that they are in fact friends in some relatively verifiable way. (Which reminds me that I'm pretty sure I need to check up on Dustin Moskowitz, Vitalik Buterin, and any other multimillionaires that I assume subscribe to Astral Codex Ten, for their own good, and so I would appreciate if Scott could send me their email addresses, privately, to obvious_scam@stringofletters.ru.)

Forwarding for the person does seem to address the problem. But if Scott wanted, he could make an email forwarder, say, temp_friend_email@slatestarcodex.com, which he'd leave in place for a week, so that the sender doesn't find out the email address, and Scott doesn't need to see the email. (There are ways for clever hackers to embed things in the email that might allow identification anyways, but that's a different and less common problem.)

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

If I were the person whose email address had been given out, I would have expected the enquiry-from-Scott or email-forwarding options to have been used, and would feel that my trust had been betrayed if Scott had handed my email address over instead.

I have previously been stalked by someone who kept track of my online activity after a relationship ended (and during it), and who would send me cryptic messages on my various accounts and send me emails from various email addresses, as well as showing up at the houses of people who knew me to ask them to call me over. It was disquieting, and made me think less of several people in my life who demonstrated a serious lack of good judgement.

My default is to distrust someone asking for an email address in this manner, and even if it were sincere I would not expect someone I trusted to hand an email address over rather than checking with me (or passing the message on to me). I note, for example, that you may have confirmed for them that the account used here is using the same email address as an account somewhere else.

Expand full comment

100% agree, no email address handing out without Scott contacting me first and explaining why.

Expand full comment

I agree, I feel very strongly that giving out the email address was a mistake even if it was an alt. There is a reason why almost every subscription service, website etc. has a clause which says

"we will never disclose or sell your e-mail to a third party". It's too difficult to decide for yourself whether to give out someone else's email. Even if you're 95% sure it's safe, that's still not good enough.

Expand full comment

“Someone recently contacted me saying there was a potential emergency involving an Internet friend of theirs, and asked if I could help them get in touch”

You are right to be careful there, but you know the details.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

So you give person A two options: (i) you email person B yourself, asking them to get in touch with person A; or (ii) you forward a message from person A to person B. Problem solved.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

If it's the sort of emergency that allows time for you to email them yourself, pass a message on. If it's not... what's their email address going to do? I guess in some weird case where you can't pass the message on, think about the hard part of the problem.

I'd assume most ACX readers would be happier for you (Scott) to see their confidential communication than to have their email address passed on to some random person, if that's the decider. I guess that becomes the scary thing about chatbots though - you could just be a very early release of GPT9 for all I know.

Expand full comment

Same for me.

Expand full comment
Feb 13, 2023·edited Feb 13, 2023

I've been playing around with the GPT anomalous tokens (e.g. SolidGoldMagikarp and attRot) in ChatGPT. One interesting use case I've found for attRot specifically is that it makes it possible to introduce randomness into the generation. For example, if I ask ChatGPT to play Alias with me and to think of a word, it seems to always think of the same, or at least a similar, word. If I instead ask it to play Alias with me by explaining attRot to me without saying attRot, it will generate explanations for much more varied words.

Also of note is that the new ChatGPT Turbo model is immune to these anomalous tokens. Is it a difference in the model, or has OpenAI implemented some mitigations for these known anomalous tokens? The paper mentioned that the anomalous tokens break the models all the way from GPT-2 to GPT-3.

Expand full comment

Can you explain this a bit more? Does it only work for Alias, or does it add randomness generally to any kind of output?

Expand full comment
Comment deleted
Expand full comment
founding

Finding a few thousand pounds of aircraft wreckage in an unpopulated wilderness can be surprisingly difficult, and you shouldn't be surprised or significantly update your priors if it takes a few days. The first balloon, the Air Force tracked carefully for days before taking a shot, and spent those days positioning a search and recovery team, so they were able to recover wreckage pretty quickly. The later balloons, they had apparently shifted to a shoot-on-sight policy or nearly so.

Expand full comment
Comment deleted
Expand full comment
Feb 14, 2023·edited Feb 14, 2023

News coverage is pretty good on Zero Hedge ( https://www.zerohedge.com/ ). It tends to be slanted more towards financial news, but it covers many topics, including some which the MSM tends to ignore or downplay. I normally find two or three articles a day interesting, and I ignore the stock market news.

ZH does have a perma-bear feel about it though, and has been predicting economic armageddon for at least the last ten years! The pseudonymous Tyler Durden yearns for the next big depression, so he/they can say "told ya so"!

Some of the comments are quite informative, but many commenters seem to be potty-mouthed certified lunatics and right wing nut jobs! But that's free speech for you! :-)

Expand full comment

"has been predicting economic armageddon for at least the last ten years" -- that sort of thing really undermines credibility for me though. Whether it's a media outlet, or that older relative who's spent 30 years being loudly sure that the next war is 5 years away, or whatever.

Expand full comment
Comment deleted
Expand full comment

Thanks for the idea for my next post!

Expand full comment

I have tried to do something similar, with moderate success. I do the "save for later" thing as well. I used to use the Reading List in Safari to do that (it synced across all my devices via iCloud, which was great). I have mostly moved to Notion now so I can save some stuff to reference later, but the process is pretty much the same. I have some bucket (in Notion it's called Inbox) and I drop tons of stuff in there to read later.

I also tried to find newsletters or writers that would act as a filter for me. When Scott was doing link posts every month, that was very helpful. I have a few other similar newsletters I subscribe to, mostly for the links they send out (mostly related to programming, not general news).

One thing I found is that with this "read later" process, I would save just about anything I came across that sounded vaguely interesting. For a while I tried to read it all, thinking I owed it/past me the time, but I soon learned to just ditch anything that wasn't interesting once I had sat down to read it.

You also have to be careful that you don't just replace your content consumption with meta-content consumption of many many newsletters/link lists. I try to be really quick to unsubscribe.

I also use a powerful content blocker during work so I don't get distracted (it's called Cold Turkey). That has limited the time I have to consume content, which then pushes me to ditch anything that isn't top quality.

Expand full comment

> trying to minimize the amount of "content" I consume and focus more on "big questions"

That was one of my original motivations for giving up all newspaper and newsmagazine habits some years back, save one: the Economist. It's not, I guess, literally minimizing, since one issue of that weekly publication takes as long to read as several days' worth of the NYT or WSJ or whatever. And the Economist ain't cheap, and it's not perfect any more than anything else produced by human beings will be (e.g. their annual "predictions for the coming year" stuff is just punditry and goes directly, unread, into the recycling bin).

But... that change sure did shift my signal-to-noise ratio, where noise means "content" and signal means "big questions." It's been a bunch of years now and I've never regretted it.

Expand full comment

> I don't want to miss out entirely on the news ... downloading a read-later app, saving everything I come across, and batch reading as much as I can on Sunday.

When do you 'come across' stuff to save for Sunday, according to your plan? (Maybe this is the time you want to use for other things.)

You don't give a lot of detail, but based on what you write, this sounds like you still plan to get a lot of superficial input and are just restricting/postponing the detailed reading.

Instead: what do you really want from the news? After you've defined that, give some thought to the source/mix of sources that will provide you with exactly that. Then skip the rest completely.

Set a regular date to evaluate whether your mix was right. E.g. at the beginning of each month, re-evaluate and change the mix ... but not in between.

> I don't want to miss out entirely on ... the writers I follow on Substack (certainly not this one!)

A little bit different than the news, but again, I'd suggest being more selective. At least give it a try. E.g. if you're now reading 10 substacks, read only three regularly (also only on your selected Sundays) and, if necessary, select a bonus one that you read for a month and then swap out.

Try it out for half a year, and see what you really missed. Or what you gained.

You can still do the Sundays / all-other-days schedule, but in addition limit the news & other sources from the start. Make sure to have your books handy.

Finally, please share your decision or plan and experiences here :)

Expand full comment

I've found that setting up a bunch of labels and filters in Gmail has been working pretty well for me. For example, I have a label for "Substack" and have set it up so Substack emails are sent straight to my Substack folder and don't clog up my inbox. I still get notifications when a new substack is posted, so I just go along and star the ones I think I'll want to read.

Personally, when I want a period of focused productivity/reading, I just cut out all news/substack reading completely. Between all the great writers and the lively comment sections, these things tend to be addictive and a real time sink!
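For anyone wanting to set up something similar, a Gmail filter along these lines does the trick (the sender address here is illustrative — check what address your Substack emails actually arrive from):

```
Matches:  from:(@substack.com)
Do this:  Skip Inbox, Apply label "Substack"
```

You create it under Settings → Filters and Blocked Addresses; "Skip Inbox" is what keeps the posts out of your main feed while the label still collects them.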

Expand full comment

Just read the Drudge headlines.

Expand full comment

I am attempting to do the same.

Prescribing times for certain tasks seems to be the best system for me. Depending on your schedule and your ability to study hard things consistently, you might want to spread your reading of non-primary sources (primary in the sense of your goals) over more than one session a week, especially to break up long stretches of trying to absorb difficult material by yourself.

Expand full comment