855 Comments

> an ideal experiment would involve taking a really talented family, adopting away one of their kids at birth, and seeing what happened to them.

A more practical experiment: high-IQ women inseminated with sperm from smart, famous men. The study would tally the IQ and talents of the children, scatterplotted against... (i) the husband's IQ/abilities and (ii) the IQ/abilities of the famous men's other children?

Expand full comment

Extremely random thought: I hereby propose that we rename generations as follows:

Silent Generation -> Generation A

Baby boomers -> Generation B

Generation X -> Generation C

Generation Y -> Generation D

Generation Z -> Generation E

Generation Alpha -> Generation F

etc.

(In my proposed scheme, there are no names for the Lost Generation or the Greatest/G.I. Generation, as most of them are dead anyways at this point.)

This proposed scheme has several advantages over the current one.

Firstly, it anchors generation lettering at a fairly reasonable starting point, and thereby eliminates the need to suddenly switch to the Greek alphabet. (In the old scheme, Generation A would be ~1500-1520, assuming a 20-year generation span, and no one has generational stereotypes stretching that far; in my proposed scheme, we won't need another alphabet until Generation Z is finished being born around ~2440.)

Secondly, it makes naming members of particular generations much easier, as one would only need to append "oomer" to the generation's letter to refer to a single member. This way, Generation B members get called "boomers", in accordance with current slang. The other names (kind of) make sense too (though I'm not sure whether they're accurate or valuable as new generational stereotypes): Generation C members (born between 1960 and 1980) get called "coomers" (i.e. people addicted to pornography), and Generation D members get called "doomers" (i.e. people extremely concerned about forthcoming worldwide doom). (Generation A members get called "aoomers" and Generation E members get called "eoomers", which are neither well pronounceable nor semantically memorable, but that's okay - neither generation is really well known for having a Defining Generational Experience.) It even works for the forthcoming Generation F, who would get called "foomers" (i.e. things that FOOM, or exhibit the characteristics of an AI undergoing a hard takeoff), which is precisely correct given current (optimistic?) estimates of when we should expect some kind of AI takeoff to occur.
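
For concreteness, here's a tiny sketch of the lettering-plus-nickname rule (my own illustration; the 1920 start year and strict 20-year spans are assumptions chosen to match the ~2440 arithmetic above, not official generation boundaries):

```python
# Toy illustration of the proposed scheme. The 1920 start year and exact
# 20-year spans are assumptions for illustration, not real generation cutoffs.
import string

START_YEAR = 1920   # assumed birth year at which Generation A begins
SPAN = 20           # assumed length of a generation in years

def generation(letter):
    i = string.ascii_uppercase.index(letter.upper())
    first = START_YEAR + i * SPAN
    nickname = letter.lower() + "oomer"   # the "append -oomer" rule
    return f"Generation {letter.upper()}: born {first}-{first + SPAN}, members are '{nickname}s'"

for g in "ABCDEF":
    print(generation(g))
print(generation("Z"))  # born 2420-2440, the last letter before a new alphabet is needed
```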

Now for some possible disadvantages: The current system of generation naming is already well-established and it would be incredibly hard to change it. Also, I'm not sure whether "coomer" and "doomer" are appropriate generational stereotypes for members of Generation C and D, respectively - some quick searching suggests that people generally think of Generation C members as cynical and sleep-deprived and Generation D members as lazy and tech-savvy. Furthermore, I'm not even certain that dividing people into generations by *birth year* is the right way to go - I think that it's also popular to divide people instead by *age*. (This depends on whether people tend to be shaped more by when they were born rather than their current age. It seems the former would be more useful in a rapidly changing society and the latter in a very slowly changing one, which seems to suggest that birth year is more useful? But I digress.)

Sincerely, an eoomer*.

*Yes, I'm revealing my age, sort of, but I've already written about it so many times on the internet that it's not really sensitive info for me at this point.

Expand full comment

> Generation E members get called "eoomers", which [is] neither well pronounceable nor semantically memorable

"Eew"-mers, the generation that's disgusted by everything

Expand full comment

Or "EU-mers", the generation that grew up when the European Union already was a thing and thinks it's an obiously good idea

Expand full comment

Or "Eomers", the generation that was born after the LOTR movies started coming out.

Expand full comment

At the very least, the Polgárs should be a demonstration that home environment can be very important for the kind of things that show up in your "Great families" post. Maybe they would have become doctors or something and never received widespread attention in a counterfactual world.

Expand full comment

I'm still unvaccinated after getting my first impressions of COVID vaccines from the anti-vaxx-because-mRNA-is-poison crowd, but I've been thinking about the statistics and decided that whatever the scale of adverse reactions, they are regrettable but vanishingly small in the bigger picture, and I'm not at all likely to get life-threatening ones beyond recoverable myocarditis and/or blood clotting, the latter of which is also occasionally found in actual infections. So such side effects are actually on par with the real bug or even milder, rather than an order of magnitude worse than actual infection. Granted, they can accumulate, and antivaxxers warn of unknown unknowns (or suppressed knowns like fertility "inhibition"), but those might wear off along with the immunity itself, or may not be big enough to be of immediate concern. I'll still prefer non-mRNA vaccines over mRNA ones because of the new-technology aspect, which needs several more years of investigation into long-term side effects before it can be considered really safe.

I can stay unvaccinated & avoid the places where a vaccine passport/health code system is set up, like many Conservatives who hate that level of state overreach. That's probably as big a rationale for "resisting" vaccination as any, along with quitting jobs over mandates. They are often moving to GOP-dominated areas, getting work that doesn't have vaccine mandates or that can be done from home, or even trying to be self-sufficient and do business informally (what they call "parallel" societies). They are sticking to their principles, and those efforts at alternative economic organization are commendable, but the question is whether the trade-off is worthwhile (avoiding vaccination & a degree of surveillance, but accepting a massively lower quality of life indefinitely, which they can blame on the mandates and the system as a whole).

Thanks for any advice, because this will determine my lifestyle for the next 2-5 years, & my life planning for even longer!

Expand full comment

I think you should bone up on the collective benefit arguments, and maybe even acquaint yourself (or refresh) with Kant's Categorical Imperative:

"Act only according to that maxim by which you can at the same time will that it should become a universal law."

Under that principle, the ethical decision may in fact be one that is not optimal for you personally if all you take into account is your own welfare and actions. But you are not a solitary being living on your own planet. You are a member of society, and you benefit from the fact that, for example, rape and murder are discouraged and policed, in spite of the fact that some individuals would gladly cause harm to others if not restrained by society's rules.

Ask yourself this: if you could get away with something you want to do but you know for certain is unethical and would seriously harm individuals you don't know, would you do it? More to the point, would you willingly choose to live in a world where bad deeds always go unpunished?

In essence, by treating your vax decision as only a matter of your immediate well-being (in a world where others are choosing to act more altruistically) you are making an (arguably) unethical decision, regardless of external forces such as vaccine mandates.

Rape and murder are wrong even if you don't go to jail. Similarly, refusing to take on a reasonable amount of personal risk that reduces collective risk and pain is wrong, even if you can get away with it.

Expand full comment

The assumption that the vaccine benefits society enough to offset widespread oppression of individuals is baked into this argument, but it shouldn't be. Unknown unknowns and all. This argument can trivially be applied to compel people to do deeply unjust and harmful things simply by wrongly assuming the pros outweigh the cons.

Also this argument asserts that only one individual has to make this choice. It only balances the cost to ONE person against the benefit to everyone. This is *obviously* wrong. Everyone pays the cost, which varies from person to person.

Expand full comment

No, it's not. Daniel did not propose "widespread oppression of individuals"; he tried to persuade one individual to do something voluntarily. Besides which, many regulators around the world have evaluated the evidence and found the vaccines safe, and even looking at just the FDA, it has a pretty good track record.

Expand full comment

There was some talk about a Florida meetup in late October that I wasn't able to attend, but I was wondering if anyone could provide an update on that. How many people attended? Did it go well? Is there any talk of doing one again in the future?

Expand full comment

To kick off my presence here at this colossal blog, I'm asking a few questions about how commenters here evaluate some conspiracy-theory claims that are gaining acceptance among an emerging segment of the political right wing. Personally, I find many of them to be narratively more structured and "convincing" explanations than what is (propagandized?) as mainstream, and I often consider issues from their perspectives. It is basically the "end phase of the NWO to enslave and/or kill everyone outside of the elite through excuses starting from COVID".

The most immediate concern for them is confronting the emerging COVID "police state", or, put neutrally, the digitalized system of intensive surveillance and direction of daily life based on a particular interpretation of contact-&-mobility-restricting NPIs (e.g. vaccine passports & contact tracing apps) and the assumption of a "New Normal" based on obligatory (instead of mandatory) vaccination. Their main objections are libertarian, anti-surveillance, anti-segregation & anti-centralization of social agency, not unlike what emerged after the passage of the Patriot Act (also rejected by many of the same people). To counteract this, they have sought alternative social & economic strategies, from building extra-formal parallel societies conforming to their political ideologies to practicing subsistence-level self-sufficiency.

Here come the questions: first, how do you evaluate the legitimacy of the current "police state" system? What I have seen is either resigned acceptance or total resistance; I'm trying to find principled arguments that legitimize the current level of strictures. Secondly, what of the political push to marginalize the unvaccinated? They look similar to Nazi-era or Soviet dissidents who were persecuted and often denied basic services & needs. Thirdly, what of the modernity-withdrawing reactions of those "resisting" vaccine passports & contact tracing apps? (I won't be surprised if these have come up and been discussed before.)

Expand full comment

Speaking about conspiracy theories in general, their problem is not that conspiracies as such do not exist. They do; our legal codes indeed recognize and penalize things like criminal organizations or cartels. There is also this tacit cooperation that results from everyone following their own incentives, where e.g. rich people in general are likely to promote rules that further advantage rich people. (But also e.g. educated people promote rules that further advantage educated people, such as requiring credentials for the types of jobs that uneducated people would be able to do equally well.)

The problem with conspiracy theories is with applying this type of thinking blindly, and ignoring any evidence that doesn't fit the preconception. You decide that some group X is responsible for everything bad, and assume that everything that happened is a part of their grand plan -- as opposed to a coincidence, a tradeoff, a more general force, a failure in a plan, or a result of a plan of some unrelated group Y. Everyone is either 100% on your bandwagon... or is a brainwashed sheep. There is no chance of you being wrong, even about some insubstantial detail.

Are there people with the ambition to rule the whole planet? Probably yes; as far as I know the egos of some politicians have no limits. Would some people like to enslave others, and kill those who resist? Sure; I mean even today slavery is legal in many countries, and the dictators typically kill those who oppose them. Is surveillance constantly increasing? Yes; the amount of data Google has on me would make Stasi jealous. Is police corrupt? Of course; look at any police union and you will see the organization that protects corrupt cops.

None of these assumptions is an epistemic problem, in my opinion. The problem is seeing everything as a part of a grand plan, and ignoring all alternative explanations. Like, the increasing surveillance is mostly a side effect of technological progress and technological centralization; the citizens even pay for the smartphones that track their every movement. COVID is a real pandemic, people are actually dying, and face masks and vaccines are actual methods of reducing those deaths. Etc.

Expand full comment

The problem with conspiracy theories is that *conspiracies are secret*.

9/11 was a conspiracy of >20 terrorists to cause huge damage. The public didn't know about it until it was too late. Usually everyone finds out about a conspiracy at the same time.

Conspiracy theorists, however, claim to have special knowledge that experts with the SAME evidence don't have (like "9/11 was an inside job ... because, you see those flashes of light and that blob under the aircraft wings??")

The conspiracy theorists' explanation of this will be "the experts are engaged in groupthink - you can tell because they all agree! Except the two who completely agree with me, THOSE guys are independent thinkers just like me!" or "the experts are in on the conspiracy!" or "they're being paid by George Soros to reach a certain conclusion! Obviously!" or "the experts have a narrow scope of knowledge but I see more clearly because I am a generalist polymath and definitely not a crank!" or "most experts don't know what I know! because they're ignoring my emails!"

My explanation of this is conspiracy theorists are misinterpreting the evidence (using Dark Side Epistemology) because they want so badly for their preferred conclusion to be true.

Expand full comment

>COVID is a real pandemic, people are actually dying, and face masks and vaccines are actual methods of reducing those deaths. Etc.

The bar does have to be higher than "people are actually dying", or the policy prescription is "permanent police state".

Expand full comment

Yeah, like, we can't have speed limits just because people are actually dying on roadways. The autobahn is the only non-police state left, unless you count the speed limits on some parts of it or that rule against passing on the right...

Expand full comment

How would you define "police state"? I think some sort of surveillance apparatus has existed since the start of state formation, but its intrusiveness has been increasingly normalized since 9/11.

Likewise, does the level of risk justify the level of "police state" strategies to the management of public health?

Expand full comment

The question of how well we're threading the needle is a tough one and I'm not well-informed enough to answer it in full.

My point is merely that if the bar to activate "emergency measures" is set too low, all of them will be activated all of the time. People die from the flu, too, after all. And robberies. And suicides.

Expand full comment

The fact that Big Tech already has the capability for intensive location tracking that can be commandeered by state intelligence is one of the reasons people are living off grid.

Expand full comment

What do you think of those who publicize their agendas, like the WEF & UN (Agenda 2030), which are usually interpreted as the "NWO"?

Expand full comment

Here is Agenda 2030: https://sdgs.un.org/2030agenda

Mostly waffle. What's the conspiracy?

Expand full comment

I think a lot of diabolical goals (e.g. population control) have been associated with this.

Expand full comment

Where in that document, though? I didn't see anything.

Expand full comment

I mean, conspiracy theorists associate all sorts of claims they read somewhere with it (e.g. Kissinger's remarks on population policy through US diplomacy, the Limits to Growth report, etc.)

Expand full comment

I am looking for book recommendations about Ancient Rome. I know almost nothing. Particularly interested in political institutions, law, and political economy. Also interested in day-to-day life portrait kind of stuff. "Great man" history is OK I guess, and I do appreciate biography, but I'm looking for something a little more expansive. Extra points for something fun and readable. I'm not afraid of tomes. Recommendations?

Expand full comment

This isn't quite what you asked for, but you might check out a blog called "A Collection of Unmitigated Pedantry" ( https://acoup.blog/ ), which is written by a history professor who is focused on Rome. He's got an entertaining style and likes to write posts describing ancient life and explaining how it's different from popular depictions like Lord of the Rings, Game of Thrones, or Dungeons & Dragons. He's also got a book recommendation list, and cites various specific history books as references in his posts.

Expand full comment

Kulikowski's Imperial Tragedy and Imperial Triumph are vital reading for the later empire.

Expand full comment

A Fatal Thing Happened on the Way to the Forum: Murder in Ancient Rome

By Emma Southon

New book (2021), fun and readable, but full of Roman history.

Expand full comment

Check out SPQR, by Mary Beard. I think it precisely matches your requirements.

Expand full comment

Following up to tell you I loved SPQR. Just what I was looking for. Great rec!

Expand full comment

+1. I haven't read it yet but it's very much the standard non-academic history of Ancient Rome at the moment.

Expand full comment

Attempting a different calculation of the number of lottery tickets in the pyramid and the garden (https://slatestarcodex.com/2016/11/05/the-pyramid-and-the-garden/):

If you round the speed of light to just 2 decimal places (29.98), you still hit the Great Pyramid (https://goo.gl/maps/kHKNJQWvwVd3Rbi9A). It's exactly the location of the entrance on the north face. So we only need to explain a 1-in-10000 coincidence.

1. At least 10 constants which would be impressive if ancients knew them:

* c

* G

* 9.81 m/s^2 (standard gravity)

* Avogadro's number

* molar gas constant

* Lyman-alpha wavelength

* fine-structure constant

* proton-electron mass ratio

* Planck constant

* Stefan-Boltzmann constant

* electron charge

2. At least 10 man-made wonders of the world

3. At least 16 characteristics in which to encode the interesting constant (latitude, longitude, height, length, width, circumference, plus length/width/height of a few internal features)

4. At least 3 choices of units (SI, imperial, and cubits or whatever the local system was when other wonders were constructed)

5. At least 4 choices of decimal point placement

That gives us 10*10*16*3*4 = 19200 lottery tickets to explain a 1-in-10000 coincidence.
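
A back-of-the-envelope sketch of that multiplication (the factor counts are the rough estimates from this comment, not measured quantities, and the 1-in-10,000 figure is the per-ticket match probability assumed above):

```python
# Rough "lottery ticket" count for pyramid-style numerical coincidences.
# All factor values are the estimates listed above; nothing here is measured data.
constants = 10          # physical constants it would be impressive for the ancients to know
wonders = 10            # man-made wonders of the world
characteristics = 16    # latitude, longitude, height, length, width, ...
unit_systems = 3        # SI, imperial, local units
decimal_placements = 4  # choices of where the decimal point could sit

tickets = constants * wonders * characteristics * unit_systems * decimal_placements
p_match = 1 / 10_000    # assumed chance any single ticket matches a constant to ~4 digits

print(tickets)             # 19200
print(tickets * p_match)   # ~1.9 expected coincidences, so finding one is unsurprising
```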

Expand full comment

So you're telling me there must have been aliens explicitly *avoiding* encoding such coincidences in various ancient wonders?

Expand full comment

No, the Illuminati is suppressing study of the great wonders in order to prevent people finding all the winning lottery tickets scattered around.

Expand full comment

This year's gift guides are predictable and sad. I'm looking for your Top-1 recommendation for each of these:

a. Really Good Black Friday deal.

b. A gift for your SO.

c. A gift for coworkers.

I'm intentionally not specifying budget, SO's gender, interests etc. I'm just looking for good ideas in any price-range, and in any interest category (tech, history, literature, rationality etc.).

Only thing I'm asking is that you share your top-1 recommendation only ;-). Why? Because it's fun to think about "best", "most valuable" etc. ideas, instead of saying "I have 10 great ideas" :-P. I guess I can't stop anyone from sharing more than 1 really...

Expand full comment

I thought the conventional wisdom was that Black Friday had become mostly hype to clear out inventory, with some tricks like raising prices in the months leading up to it, or brick-and-mortar stores advertising discounts on big-name items that immediately sell out, to drive foot traffic.

But maybe that's too cynical, and there have to be a few counterexamples out there... maybe Anker's power stuff, which is already kind of good value for money?

Really curious how discount days change when supply chains are messed up and online retail has eaten the world. They clearly still happen, like Prime Day and Singles' Day; I just wonder if they have different goals and impacts that are not obvious. I'd love to see any data (or even wild speculation!) on how discount days have changed over the last decade if anybody has any.

Expand full comment

I've wondered whether a lot of the Black Friday deals will show up on eBay as people realize they bought things they don't actually want.

Expand full comment

Does anyone have any good tips about making medical and dietary decisions when there isn't very much data? My baby daughter is going to have to go on a drug that is known to be associated with having lots of allergies. It seems really unlikely to me that choices about weaning etc. aren't relevant to reducing this risk, but since so few kids need this drug I think it's unlikely there will be good medical trials on this.

Expand full comment

From https://bariweiss.substack.com/p/lose-the-mask-eat-the-turkey-and

> The largest study worldwide, the Israeli study, showed that natural immunity was 27 times more effective than vaccinated immunity in preventing recurring Covid illness. The only two studies to the contrary are from the CDC. They were sham, jerry-rigged studies that were so embarrassing they would get disqualified in a seventh grade science fair project. That’s how horrible these studies were.

Anyone know the basis for this claim?

Also, thoughts on this interview overall are welcome. Never heard of Dr. Makary before - his pedigree sounds trustworthy, but the interview format leaves little room for references/footnotes, which means that this is a “trust me” format, not a “trust but verify” format. I don’t like this on principle.

Expand full comment

Here is a newspaper article that discusses a paper that the CDC sometimes cites when it makes misleading claims about vaccine immunity vs natural immunity: https://www.tampafp.com/nih-director-violated-agency-policy-by-intentionally-misrepresenting-natural-covid-immunity-study-watchdog-alleges/. The study itself is here: https://www.cdc.gov/mmwr/volumes/70/wr/mm7032e1.htm.

Here is an article comparing the Israeli study and the recent low-quality CDC paper: https://brownstone.org/articles/a-review-and-autopsy-of-two-covid-immunity-studies/. The CDC paper is here: https://www.cdc.gov/mmwr/volumes/70/wr/mm7044e1.htm.

I recommend reading the two CDC papers. The problems with these papers (discussed in the two articles) are pretty obvious.

Expand full comment

Thanks!

Expand full comment

I haven't read The Nurture Assumption, but got a lot of similar information from Bryan Caplan's 'Selfish Reasons to Have More Kids'. I think the results of the 'parenting doesn't matter' studies are oversold. IIRC, 1 SD 'better' parenting can do things like raise IQ on average by 3 points. Not a big difference individually, but far more than 0 - especially at the extremes of the probability distribution. 3 IQ points roughly doubles the frequency of 150 IQs, and 6 points roughly quadruples it.
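
A minimal check of that tail-frequency claim, assuming IQ is normally distributed with SD 15 (the function name and the +3/+6 point shifts are just my own sketch of the numbers quoted above):

```python
# Quick check of the claim that a small mean shift multiplies the 150+ IQ tail.
# Assumes IQ ~ Normal(mean, 15); the +3 and +6 shifts are the quoted parenting effects.
import math

def frac_above(threshold, mean, sd=15.0):
    """Fraction of a Normal(mean, sd) population above `threshold`."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))  # normal survival function

base = frac_above(150, mean=100)
print(frac_above(150, mean=103) / base)  # ~2.0x as many 150+ IQs with +3 points
print(frac_above(150, mean=106) / base)  # ~3.9x with +6 points
```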

Expand full comment

Plausibly the great families have environments at the +3-4 SD range.

Expand full comment

I've seen some people (including Paul Graham) making a big deal about the lack of association between parenting and Big 5 personality traits. I think the findings have been misinterpreted as saying "everyone ends up becoming themselves, so what's the point of good parenting". The study literally begins with "personality traits are stable, but also amenable to change." I'd be willing to bet that even though personality is stable, perceived personality (by both the person and others) and well-being can still be affected by parenting. A neurotic person with good coping mechanisms might always have a tendency toward anxiety, but if they avoid falling into negative thought patterns they might not think of themselves as especially anxious and generally feel fulfilled.

Expand full comment
Comment deleted
Expand full comment

How does that relate to your parenting, Paula? I think I'm missing a link here. :)

Expand full comment

Is there any real downside for a commenter here using their real name? I started using the name of one of my old S Corps - and my favorite entry lake to the BWCA - on a whim early on.

Expand full comment

My feeling is that if any of us gets famous enough to be worth a deep dive into our online history, someone will inevitably find all our alter egos. The network logs are there. Even Tor records could theoretically be cracked. As long as an internet packet can get back to your eyeballs, so can a snoop.

Therefore, I choose to just present a clean image everywhere. If someone finds me, they'll find someone who tries to be a good person.

Also, I don't seek fame unless it comes by accident in the course of my trying to do good things. There's security in obscurity, not in the sense of hiding the keys to the vault, but in the sense of the vault not looking valuable.

Expand full comment

In security, people consider different threat models. Some protections may be sufficient against an angry teenager with the attention span of ten minutes, but inadequate against a state actor. So you take them, and understand where you are safe and where you are not. If I ever run for president, I assume that this account will be quite easily connected with my identity. But if I apply for a job in a company where one woke HR person will do a quick background check on me by googling, they will not make the connection.

> If someone finds me, they'll find someone who tries to be a good person.

There is a difference between being good and avoiding controversy. Do you have an opinion on the genocide of Uyghurs? You don't need to answer (I am trying to discuss the meta level here), but any specific answer has a chance to get you in trouble with someone.

Expand full comment

I think your point about threat models is good advice. I suppose if I were more worried about being surrounded by woke HR people, I would revisit my posting strategy. I'm not (much), so I don't. And if anyone else were to expend the effort to factor in their threat model, I'd admire their industry. In my case, I get to avoid that effort - I post as one persona.

I can engage your question about Uyghurs on the meta level without even stating an object-level opinion: any position I express under my one persona will, I think, get me in trouble only with the people whose opinion I don't have to worry about. An HR person could get me fired, or refuse to hire me, for a job I would not want anyway - having to feign a position I can't endorse would likely not be worth that job. If I were running for public office, it would get me in trouble with people who weren't going to vote for me in the first place. In the limit, it could deprive me of some critical donors, but then, in the limit, I can also just say that I have no intention of running for public office. (Which I suppose suggests something depressing yet understandable about all politicians.)

So, that's the tradeoff as I see it. I'm careful about my one persona; in return, I get to only have to worry about that one.

That downside is maybe sometimes non-trivial. What I consider "myself" is a version that is relatively sober and serious, as a consequence of how easy it is to misunderstand sarcasm or even oblique speech online. In other words, I try to only ever say what I really mean, after some thought; I don't just blurt out stuff in the heat of a moment, like a snap judgment about some trial making the headlines or what I "think" ought to happen to everyone who picked some side in some debate. I see little gain in equating ephemeral online quotes to someone's long-term thinking, and I figure I can try to avoid people doing the same to me by mistake.

Expand full comment

I've been posting here and elsewhere under my real name for decades, and it hasn't caused me any trouble. But I don't live in places cancel-mobs are likely to reach me, or have friends or family who would turn against me for standing too close to wrongthink, so YMMV.

Expand full comment

Yeah this is my first experience with this sort of anonymity. I don’t say things here that I’m not willing to stand by so it feels a bit odd not to have my name by my words.

Expand full comment

I find that I behave a bit better and put more effort into my posts when they're attached to my name, so I do.

Expand full comment

I’ve spent a few minutes thinking about what I’ve commented here and the only things that seem like they could come back to haunt me are things that were meant ironically.

I long for a font that indicates [this is a joke]

Maybe an HTML <joke> tag.

Expand full comment

Depends, but I would err on the side of safety. Maybe there is no problem now, but a problem might appear tomorrow, and it may be impossible for you to remove the existing comments (or they may already have been noticed, archived, and screenshotted).

Many people read this blog; many of them read without ever commenting. Your current boss, or your (potential) future boss, may be reading this blog without you ever noticing, but they can notice your name.

I assume that in the not-so-distant future, there will be companies providing a service for HR, where for a small fee they will compile a report of things you have posted online, sorted by how controversial they are. (One of the things where machine learning can be useful.) Consider the possibility that the most controversial things you write under your name will be taken out of context and included in a report that all your potential employers will read before the job interview. Maybe the person doing the interview will not even really mind what you wrote, but they will probably throw your CV in the garbage anyway, because it is not worth the risk to them that the boss finds out and gets angry that they failed to do their job properly.

Scott writes about many controversial topics. Also, you never know which topics will be considered super controversial 10 or 20 years later. People have been fired from jobs for doing things that were *not* considered controversial at the moment they did them. Even if most people around them were doing the same thing. (As an analogy, consider e.g. voting for Trump. Half of the American population did it. Yet there are situations where admitting to this would get you in trouble. Not because you are some super rare villain, but simply because it can make you a convenient target in your local environment on an unlucky day, and everyone can signal their virtue by attacking you.)

In the past I used my full name online, then I changed my mind. My kids will be strongly advised never to use their full names online. The risk is simply not worth it (unless you are so rich that you will never need a job, or it is your strategy to do controversial things because you profit from clickbait). I am unhappy that we live in this kind of situation, but this is where we are. Too many crazy people out there, coordinated by the evil powers of Twitter et cetera.

Expand full comment

>I assume that in the not-so-distant future, there will be companies providing a service for HR, where for a small fee they will compile a report of things you have posted online, sorted by how controversial they are.

This seems to imply that such companies don't already exist. How confident are you in this?

Expand full comment

People are still inviting me for job interviews. So even if such companies exist, they are not sufficiently widely used, or not good at finding the most controversial things.

In the (more) dystopian future, you will be checked by such a company everywhere, because not having checked you would get the HR employee fired.

Expand full comment

Do you consider yourself to be in the most controversial 5% of the population? Because if not it's possible that they just can't find anyone noncontroversial.

Expand full comment

Given the general perceived lack of software developers, this makes a lot of sense.

However, the "5% of the population" should probably refers only to people competing for the same job, right? So in my case it would be "5% of software developers", not the general population.

I am not even sure what would be the proper way to measure controversy in general population. Like, some people have way more *impact* than others. In general, working-class people often have tons of politically incorrect opinions, but because they are working-class, they are mostly irrelevant; no one actually important listens to them. Similarly, opinions expressed on Facebook are less important than same opinions expressed on your own blog, simply because the former will quickly scroll down and disappear, while the latter will remain, can be linked, etc.

But either way, I am most likely *not* in the top controversial 5%.

Expand full comment

It was "5% of the population", because 5% was my wild guess at the unemployment rate (leaving aside COVID). Even if you select maximally on boringness when selecting employees (ignoring things like relevant skills entirely), if 95% of people are employed then someone at the 90th percentile of controversiality is going to get employed.

Related: https://www.lesswrong.com/posts/HCssSWMp7zoJAPtcR/you-are-not-hiring-the-top-1

Expand full comment

Oy. Such a world to be alive in.

I’m not on any social media now. I was on Facebook for a while to keep up with family and old friends. I dropped my account when I started seeing disturbing conspiracy theories being taken seriously.

Twitter? Never even considered setting up an account. Seemed like a way to converse with bumper stickers.

Expand full comment

Yes.

Expand full comment

Thank you Furrfu. If that is indeed your real name.

Expand full comment

Listen, Colonel Bat Guano, if that really is your name…

https://m.youtube.com/watch?v=Ef-JYpYM81Q

Expand full comment

Two blegs:

I ask for two things: a good history of the world (in under 400 pages) and a history of the late Republican and early Imperial Roman periods in the style of Kulikowski's Imperial Triumph. Anyone have suggestions?

Expand full comment

Hmm, I wrote this yesterday, but it seems to have been swallowed by the system:

Harari "Sapiens" is somewhat inaccurate at times (you can find online reviews describing where he was wrong), but a fun read and only slightly longer than 400 pages.

Expand full comment

Of reasonably recent histories I've read, Rubicon by Tom Holland is an engaging gallop through the late Republic, and Mary Beard's SPQR is a good general history which spends a lot of time on the late Republic/early Imperial period. Don't know how similar they are to Kulikowski, I'm afraid.

Expand full comment

History of the world in under 400 pages? You’re better off with this YouTube video: https://m.youtube.com/watch?v=xuCn8ux2gbs

Expand full comment

All that comes to mind is "The Outline of History" by H.G. Wells - more than 400 pages, though.

I read it in my youth, ~40 years ago. There should be a better telling by now?

Expand full comment

He wrote a shorter version after that, "A Short History of the World", https://archive.org/details/cu31924028328908 (still 436 pages!), which has now finished serving its term of copyright, and also Downey and Chesterton wrote rebuttals.

https://en.wikipedia.org/wiki/Human_history is 38 pages. It's comprehensive, highly illustrated, extensively referenced (13 of those 38 pages are a bibliography), and meticulously correct in the usual Wikipedia way. Unfortunately it's also deadly boring because, in the usual Wikipedia way, it can only contain statements that are factually true in an objectively verifiable way. Still, it's readable from beginning to end.

It gives, I think, undue emphasis to recent events; there are two whole pages on the 20th century and another half-page on the 21st, which is about twice the proportion of recorded history they actually make up. There are of course articles such as https://en.wikipedia.org/wiki/21st_century (43 pages), https://en.wikipedia.org/wiki/Late_modern_period (33 pages), https://en.wikipedia.org/wiki/20th_century (21 pages), and https://en.wikipedia.org/wiki/Dissolution_of_the_Soviet_Union (39 pages).

Expand full comment

Wikipedia is inevitably mediocre, so I'm asking for something published.

Expand full comment

I've come to realize that even though I may enjoy reading Wikipedia pages, my long-term retention is abysmal - to the point where I can read a page twice and only realize it near the end.

Expand full comment

Agreed. I feel like you need to do something with the information to really incorporate it into your brain, like conworlding or something.

Expand full comment

I would like something that goes up to at least 2010. We know far more about world history now than people did in H.G. Wells's time. For the second half of the 20th century, I'd focus on four things: the recovery of the First and Second World, revolutionary communism and dictatorship around the world, the fall of the USSR and American unipolarity, and the rise of Thailand, Indonesia, China, and India. Maybe add environmental issues to that as well.

Expand full comment

> I’m wondering if I’ve been blogging so long and cast such a wide net that I’ve collected readers who aren’t familiar with The Nurture Assumption

I think it is a mix of this plus people coming in with strongly held beliefs that are expensive to update.

Bryan Caplan has talked about how economics is a weird subject because in a lot of 100-level classes, students will argue with the professor that the whole field is wrong. Not many subjects get that. If The Nurture Assumption were taught, I'd bet it would receive similar treatment.

Expand full comment

Just been looking at summaries of The Nurture Assumption and I get the impression that it doesn't say that you have no influence as a parent, just not necessarily in exactly the ways you might first think?

It's also hard to separate genetics in the sense that 'being the type of person who tries to positively influence one's children' may be genetic in itself.

I think as a parent you inevitably do lots of mini-experiments (even if you don't think of them in those terms!) in the course of trying to figure out the whole parenting thing, and you see the short-term effects of those on your children. Those experiments make you feel like you have an influence. Some patterns in parenting styles and children's behaviour are also so striking and feel so causal that I can see why one would need evidence that it's correlation rather than causation to repudiate those beliefs.

I also doubt any parent with more than one child thinks they can influence personality or baseline intelligence but I find it hard to believe that there aren't ways one can positively influence one's children. If nothing else, making them feel loved feels like it must be important, and I get the impression that The Nurture Assumption agrees there - although as I say I must read it.

Expand full comment

I think the whole field is wrong. Smart kids.

Expand full comment

What's your in-a-nutshell case that the entire field of economics is wrong?

Expand full comment

Is that really an accurate comparison?

You seem to think The Nurture Assumption is accepted as the final word.

From its Wikipedia page:

“However, the psychologist Frank Farley claims that "she's taking an extreme position based on a limited set of data. Her thesis is absurd on its face, but consider what might happen if parents believe this stuff!"[6]

Wendy Williams, who studies how environment affects IQ, argues that "there are many, many good studies that show parents can affect how children turn out in both cognitive abilities and behavior".[6]

The psychologist Jerome Kagan argues that Harris "ignores some important facts, ones that are inconsistent with this book's conclusions".[8]”

The book’s reception was mixed at best.

How did it become gospel on ACX?

Expand full comment

It became gospel because of follow-up research that did an extremely good job of showing that it was correct. The book is, like, 40 years old at this point.

Expand full comment

Just read the Judith Rich Harris obit. She passed away at 80, January 2019. Steve Pinker has many kind words for her but it seems like her thesis was still on the fringe. Not saying it's incorrect, just not accepted.

I admire an iconoclast as much as any other ACX reader, I'm just not sure she got this right.

For that matter I'm not even sure what she said beyond a couple summaries. I guess I will read her book before I say any more.

Expand full comment

Published in 1998, so we're talking about 23 years now.

The pushback I'm seeing is pretty strong but I can't claim that it hasn't been refuted.

Can you point to the follow up research?

Expand full comment

I thought this was pretty good: https://www.edx.org/course/the-science-of-parenting

Expand full comment

I’ve read her book now. The main takeaway for me was that a child’s age-peers have more effect on socialization than parents do.

Expand full comment

Both Trudeau Sr. and Castro are among the most influential historical leaders of their respective countries...

It would be hard to tell whether Trudeau Jr. got his political talents (and indeed, he very quickly fell into the role of Prime Minister in his political career) from being the 'adopted' son of the most consequential Canadian prime minister in postwar history, or from being the biological son of someone who was able to navigate the politics of revolution and post-revolutionary Cuba.

Expand full comment

Well, he isn’t the son of Castro so…

Expand full comment

I guess my objection to "success is genetic" is that it does NOT follow even from the assumption that everyone's personality (including intelligence) is 100% genetic and 0% environmental. And I do believe that this assumption IS a good approximation of reality, so no need to oppose me on that part.

Suppose that because of genetics, you get "the type of brain that is capable of inventing the cure for cancer". But there is still a huge gap between having this type of brain... and actually inventing the cure for cancer.

Your environment can make you interested in biology and medicine... or history and conspiracy theories. Both are great areas for someone who can memorize thousands of small details and notice patterns, but the latter does not lead to you inventing the cure for cancer.

There is a difference between merely having a talent... and having the same talent, plus good tutors, learning resources, opportunities to network with people studying the same thing, etc.

Education costs money. If your family can't afford it, no matter how smart you are, the path to medicine is closed. Not necessarily because you lack knowledge, but simply because you lack the credentials.

Your general financial situation also determines whether you can study things that are interesting and spend a lot of time thinking about them... or you must do whatever maximizes your income in short term, even if it destroys some opportunities in long term. On the other extreme, financially independent people can get 10 extra hours of free time every workday; that is not a small thing.

Money can make the difference between owning a famous company... and being the most productive employee in a company that made someone else famous. In academic sphere, political connections can make the difference between being known as the guy who invented the cure for cancer... or being on the list of his sidekicks.

(An argument in the opposite direction is that generally intelligent and conscientious people have more than one opportunity in life, so even if something prevents them from inventing the cure for cancer, they can still become famous for something else.)

In short, to achieve great success, you need to score high on both genetics and luck. Even if nurture has no impact on your personality traits, your family can influence your luck.

Plus, there is this example of the three Polgár sisters. People usually dismiss it by saying "they just inherited the chess genes from their parents, duh". However, although their parents were chess players, they were no grandmasters. And without the benefit of hindsight, you probably would have *predicted* the *opposite* -- the daughters being *less* good at chess than their parents -- because of the regression to the mean. And they exceeded their parents, thrice.

Yes, the Polgár sisters definitely inherited some superior "chess genes", but the genes alone would not have made them so famous. They also needed the supportive family. If you read the book, there were a few hostile people placing various obstacles in their way, such as trying to ban them from competing in the "male league", or refusing to issue them a passport so they would be physically prevented from participation in the world championships... and the parents had to fight hard to overcome these obstacles. (So if there was ever an equally genetically gifted girl born in a less supportive family, we would not know her name.) Not all competitions are fair, and the family can make a big difference here, too.

So, my model is that you have "genotypic geniuses" and "phenotypic geniuses", and the family plays the role *twice* -- the first time it is a source of the genes, and the second time it helps to transcribe the genes into actual world-class success. "Genotypic geniuses" that happen as random mutations are much less likely to translate into "phenotypic geniuses". The thing that we see running in the successful families are the "phenotypic geniuses", but the "genotypic geniuses" could be much more widely distributed in the population.

Expand full comment

The genetic raw material is only part of the path to success. Let's say Aldous Huxley was raised as an adopted step brother of J.D. Vance in some small Appalachian town. No one in his poor Kentucky family is going to say "That Aldous is pretty sharp, maybe we should pool our meager resources and get him a tutor."

A more likely scenario is "That Aldous is too big for his britches. Using those big words he gets from his book learning. He thinks he's better than us."

Instead of studying Classical Greek and Latin as a kid he catches catfish and helps with the chores. Maybe he finds time to visit a not so great rural library now and then and learns big words that only get him in trouble.

If he catches a couple of breaks - like JD did - perhaps he gets a couple of scholarships and goes on to write a much better version of "Hillbilly Elegy". But I don't see him writing "The Perennial Philosophy" and "1984".

He wouldn't have had the necessary nurturing early environment, not to mention the family connections, to prepare him for those big successes.

Expand full comment

Brave New World? 1984 was Orwell.

Expand full comment

Oops. Thanks Sol.

Expand full comment

and no ability to strike through or edit. Alas.

Expand full comment

No problem. Huh, I'm usually just a lurker and didn't notice there was no editing. If you wanted to, I guess you could copy, delete, and re-post, and accomplish the same?

Expand full comment

I’m not much more than a lurker myself. It’s a mistake. I’ll live. Thanks though.

Expand full comment

What percentage of variance in success do you think is accounted for by genetic variance versus environmental variance? I'd also be curious to hear what you think is accounted for by family environment variance.

As is, I can't really tell how relatively important you believe the different aspects are (and thus, whether there's actually any big disagreement).

Expand full comment

I do not have enough data to make a reasonable guess. I believe that the effect described by Scott is real, but weaker, potentially much weaker. I can't say how much weaker exactly. Yes, the "genotypic geniuses" are overrepresented in some parts of society, and in some families. But in addition to this, a supportive family increases the chance that they become "phenotypic geniuses", and one way the family does this is by already containing some of them (which gives you role models, a network, resources, hero license, a halo effect...).

My disagreement is about the *magnitude* of the effect. Going only by the "phenotypic geniuses" makes you overestimate how rare the genes are and how much they are concentrated in families.

It could also lead to the opposite conclusion about what should be done. If you believe that the "phenotypic geniuses" are all there is, then duh, the great families will take care of their own, everyone else is doomed anyway. But if you believe that there are many "genotypic geniuses" that in a more supportive environment could also have become "phenotypic geniuses", then perhaps creating such environment could make a great difference. (Which is quite different from believing that *everyone* is a potential genius. If everyone is, you may want to support everyone equally. But if "genotypic geniuses" exist, you may want some method to *find* them in the population, outside of the great families.)

The magnitude of this effect probably varies across history, as Charles Murray has shown. A hundred years ago, a famous professor would have been more likely to marry a pretty girl; these days he is more likely to marry another professor; so society becomes more genetically stratified than it was in the past. That would suggest that these days a greater fraction of "genotypic geniuses" are born into supportive families. They still might be a minority of all "genotypic geniuses" though.

Expand full comment

So you say you disagree about the magnitude of the effect, but I still don't understand what you're saying the true effect size is for genetic vs environmental factors.

I think IQ variance in the US is ~60% genetic and the rest is (by definition) environmental, but probably only ~10% of variance is directly from parents rather than all the other bits of your environment. I think success is ~30% IQ in the US, ~30% from other durable and heavily genetically influenced factors (ex. conscientiousness, agreeableness, default motivation levels), and ~40% environmental factors like peer group/where you went to school/parental choices.

Is this wildly off from what you think? What numbers would you throw out?

Expand full comment

I honestly don't know. Sorry if that disappoints.

All I have is anecdotal evidence. I have met a few people who were highly intelligent, but no one ever told them, so they considered themselves unfit for intellectual tasks.

(Specifically, I have made bets with a few people that if they take a Mensa test, they will pass. They all passed and were surprised a lot. I am not saying that passing a Mensa test is a high bar; by the ACX standards it is pretty low. I am saying that those people believed that they wouldn't pass even such relatively low bar, while I made the bets because I was impressed by their intelligence. And I am not easy to impress.)

I have faced some minor obstacles myself. Things that seem quite absurd for me now, such as winning mathematical olympiads, but then being told that I am not actually that good at math, because... and I am not making this up... I lived in the poor part of the town. Also, because no one in my family is a math professor. Most other math olympiad winners had some math professor in their family. Facing such absurdities regularly, it does not really convince you that you are wrong, but it does make you tired. I guess I am overly sensitive about the "talent in families" topic.

Expand full comment

I think then we probably actually agree.

A correlation existing doesn't mean you'd never find mathematical brilliance outside of rich, otherwise successful families.

For example, suppose we had person A who passed a Mensa test and had no relation to major mathematicians, and person B who failed a Mensa test and was Paul Erdos's grandson. If I were to guess who was better at math, I'd guess person A (and I'm assuming you would as well).

I think the crux of the difference is that we seem to differ in our interpretation of what a genetic correlation should imply. You highlight instances where it seemed to rob intelligent folks of the license to believe they were brilliant (despite being brilliant) when they didn't come from a background of intellectuals. I think that those folks' lived experience should provide vastly more evidence of their intelligence than their family background, to the point where they could safely ignore their background in trying to decide if they pass some imaginary intellectual bar.

I would guess we would both agree that for an individual, simply taking an intelligence check is easier and more accurate than seeing how successful your ancestors are.

Expand full comment

I was shocked by the global map in Scott's Ivermectin post ( https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/ac9e4f34-f9cc-40f2-9d83-da4e7178fad7_772x330.png ) showing that, in about half of the world's land area, more than 10% of the population is infected with worms. Shouldn't there be charities to distribute Ivermectin or something similar in these parts of the world? Shouldn't Mexico, Brazil, China, and India be able to do this on their own?

Expand full comment

I can't wait to see how you react when you find out what fraction of the population has eyelash mites, foot fungus, dandruff, vaginal yeast infections, and skin bacteria.

Expand full comment

I live in Romania, and one of the (many) things which annoys me here is how much old people accept the various declines that come with old age, even when they are preventable or curable. It's very rare here for healthy people to do preventive testing, even when they are old and obviously in risk groups. Same with low-hanging fruit like flu vaccines or dental care. So yeah, a consequence is that we end up with a lot more visits to the doctor than would be strictly necessary.

Point is, a lot of it is cultural. Not necessarily meaning that the culture is inherently bad - I'd guess it's mostly an adaptation to long periods with unavailable or unreliable health care. But whatever the reason, a solution that doesn't take into consideration the cultural aspect will most likely be incomplete.

Expand full comment

I imagine one problem is that treating the parasites is only a temporary solution. Unless the infrastructure is modernized enough to prevent contaminated food/water sources, people will just keep getting parasites (though it probably isn't so black and white; maybe in some areas the average time between infections is long, so you only need medicine every couple of months or years).

Expand full comment

https://www.evidenceaction.org/dewormtheworld/ and https://schistosomiasiscontrolinitiative.org/ are two charities which address the issue, although I'm not sure what medicine they use. Both are top charities according to GiveWell.

Expand full comment

There are charities that do this. Pharmaceutical companies give them the drugs for free to generate goodwill but they rely on donations to fund distribution of the deworming pills. It's very cheap per person treated but the health impacts of parasitic load are hard to measure.

Expand full comment

What if the total number of U.S. Senators stayed fixed at 100, but they were apportioned based on the square root of each state's population?

Also, regardless of how small a state's population was, it would be guaranteed one Senator.
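
For concreteness, here is a minimal sketch of one way such an apportionment could work (the largest-remainder rounding rule and the toy populations are my own assumptions for illustration, not part of the proposal):

```python
# Sketch of square-root apportionment: seats proportional to sqrt(population),
# every state guaranteed at least one senator. Rounding rule and numbers are
# illustrative assumptions only.
import math

def apportion_senators(populations, seats=100):
    weights = {s: math.sqrt(p) for s, p in populations.items()}
    total_weight = sum(weights.values())
    # Ideal (fractional) share of seats for each state.
    quotas = {s: seats * w / total_weight for s, w in weights.items()}
    # Floor the quotas, but never drop below the guaranteed single seat.
    alloc = {s: max(1, int(q)) for s, q in quotas.items()}
    # Hand out any seats still unassigned, by largest fractional remainder.
    # (With many tiny states the 1-seat floor could overshoot `seats`; ignored here.)
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda s: quotas[s] - int(quotas[s]), reverse=True)
    for s in by_remainder[:max(0, leftover)]:
        alloc[s] += 1
    return alloc

# Toy populations in millions, purely illustrative:
print(apportion_senators({"A": 39.0, "B": 29.0, "C": 8.0, "D": 0.6}, seats=10))
# -> {'A': 4, 'B': 3, 'C': 2, 'D': 1}
```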

Expand full comment

What for? If there is no advantage in equal representation in the Senate, we don't need to put up with the expense and complication of a bicameral legislature, just one chamber would do. If there is...then it works best as is, with manifest equality.

Expand full comment

Senators have 6 year terms, representatives have 2 year terms. I don't know whether the difference is important, but it might be.

Expand full comment

Important to what? Senatorial terms are also staggered, they have much bigger staffs, they're older, and the functions and rules of the Senate are quite different from the House in a number of ways. It's chalk and cheese.

Expand full comment

An interesting hypothetical, but sadly it will never be anything but that. (Because equal representation in the Senate is the one constitutional-structure feature that is specifically exempted from the amendment process.)

Expand full comment

It's only mostly exempted. The Constitutional text here is "and no state, without its consent, shall be deprived of its equal suffrage in the Senate."

This leaves open a few possibilities. The most straightforward would be that if all existing states could be persuaded to agree to an amendment modifying the apportionment of the Senate, then that would meet the bar of every state consenting. Any future states could then be required to consent as part of the statehood admittance process. Actually getting unanimous consent from the states seems a tall order, but might be just barely doable by some combination of bribery (e.g. granting special subsidies or tax breaks to smaller states for a certain number of years in exchange for consenting) and blackmail (e.g. cutting off non-consenting states from existing federal subsidies).

A more abuse-of-rulesy approach would be to amend the constitution to reduce the Senate's portfolio of powers, turning it into something more like an analogue of the Upper Houses of Westminster-style parliaments: an advisory body with the ability to propose amendments or apply a procedural brake, but without the ability to completely block major legislation.

Or even more abuse-of-rulesy, the Senate could be stripped of all of its powers and a new "Schmenate" upper house with population-sensitive apportionment established in its place.

Or, to try to reconcile the modern ideal of one-person-one-vote with the original intent of giving individual states equal representation in the Senate, befitting their co-equal semi-sovereign status and addressing the fears of small states that the union would be dominated by a coalition of large states, the larger states could petition Congress to allow them to divide into multiple states each, so that the gap in population between the larger and smaller states would be much less than currently. This would not even require a Constitutional amendment, just approval by the legislatures of the existing states, a convention elected by the population of each proposed new state to draft a constitution and apply to Congress for admittance, and a simple act of legislation to admit each new state.

Expand full comment

Can we just amend the constitution to remove the part about needing the consent of states?

Expand full comment

The Constitution says you can't get rid of that part via Amendment.

Expand full comment

Can you get rid of the part that says you can't get rid of that part via amendment?

Expand full comment

It's been sacked.

Expand full comment

Yeah, the feature is baked in I’m afraid. It was a must to sell the Constitution to the less populous colonies.

Expand full comment

Wait, how do we know Trudeau isn't a Castro?

Did someone collect some of his glorious hair for a DNA test?

Expand full comment

I don't know if the timing works, but it sounds just as possible that his father is Ted Kennedy or Mick Jagger: https://en.wikipedia.org/wiki/Margaret_Trudeau

Expand full comment

Why does it become harder to remember names than other parts of speech?

Why can it be possible to remember a thing, and a bunch of related facts about a thing, while still not remembering its name?

I just mentioned Erdos in a comment. I couldn't remember Erdos' name, but [nomad mathematician] turned it up.

Expand full comment

I want to say "because you use them less often, and any word you use less often is harder to remember".

But there's this glitch. I don't forget names, but I often start to use the wrong name for one of the 3 other animate objects in this household, and correct myself mid-word (and my father generally says the entire incorrect name, like my brother's name instead of mine, and then may or may not correct himself).

These names are words I use often, and the same mistake doesn't happen as much for other nouns. So, I need a new theory.

Expand full comment

Using the wrong name from a small set strikes me as different from struggling to remember a noun.

Expand full comment

I think it's because names generally just lack the sort of structure that makes them memorable - by their very nature they're largely arbitrary, structureless "handles", which gives them very little "sticking" power compared to a whole interconnected set of facts about the person, where one fact often helps reinforce the memory of another.

Also, linguistic familiarity matters too, IIRC. People are much better at remembering names from languages they're most exposed to. Like, for an "Anglo" like me, a very English name like "Adam Smith" is just going to stick better than a Hungarian name like Erdős, and it gets worse the farther you drift linguistically.

Expand full comment

Exactly. Anime names seem like interchangeable jumbles of letters to me, but I'm sure they're much more memorable to Japanese people.

Expand full comment

Verbs are just as arbitrary, but maybe having fewer of them makes it easier to remember them.

Expand full comment

Well, yeah, almost all parts of the language are arbitrary (main exception is stuff like onomatopoeia) - but the difference is that I'm not learning the language, I've used it for decades, so I'm not just going to randomly blank on a verb the way I might blank on somebody's name.

... but if you are learning a language, "blanking" on a verb or other part of the language is completely normal.

Expand full comment

Verbs are not arbitrary, not that way. Think "to cook" - don't you have a wealth of associations coming up?

Names on the other hand have most of their associations necessarily cut. "Tom" may bring up memories, but they're both a lot weaker and (most importantly) they're an error. Since you don't know which Tom in particular I'm referring to, the symbol should be kept as void of any links as possible. And any associations should be made with the concept of that particular person, and not with his name.

Also frequency, especially for family names. Other than the very few which are already concepts in themselves (Hitler, Einstein), we don't use each one very often. And when we do, the meaning they have gathered is usually useful (the Smiths down the street).

Expand full comment

My experience is that sometimes I can remember many details about a thing, even minor details, but not the name for that thing. This is not the same thing as going completely blank about a name.

Expand full comment

In biochem and molecular bio, I started learning etymologies just to remember the names themselves. It kinda works, actually.

Expand full comment

So here’s my theory - which can never be proven. The brain’s recall or search function is slow and highly compartmentalized. And buggy.

So the brain tries searching a certain part of memory based on a hint and doesn’t reset its search criteria until it gets another hint. And faces and names are, perhaps, stored in different “folders”. So once you try one folder you are pretty much stuck with it, or sometimes are (i.e. it’s a bug).

A friend of mine was trying to think of the railway station in Dublin beginning with H. He just couldn’t get it, so I asked him to think of a city in Texas.

He went “Dallas, Houston, oh … Heuston”. So that similarity in names reset his search criteria.

I didn’t know that Texas cities would be easier for him; I just guessed. In fact he knew both, of course - I never said Heuston to him, so he had to recall it himself. Both were in memory.

In fact in a different situation he might have forgotten the Texas city and been helped by me saying “think of railway stations in Dublin”.

Expand full comment

Possibly related: https://sudonull.com/post/12946-1000-dimensional-cube-is-it-possible-to-create-a-computational-model-of-human-memory-today

Particularly:

> In trying to extract a stubborn fact from memory, many people find that constantly knocking on the same door is not the wisest strategy. Instead of demanding immediate answers - commanding your brain - it is often better to put the task aside, take a walk, perhaps take a nap; the answer may come as if uninvited. Can this observation be explained by the SDM model? Perhaps at least partially. If the sequence of recalled patterns does not converge, then pursuing it further may turn out to be fruitless. If you start again from a neighboring point in the memory space, you may come to a better result. But there is a mystery: how do you find a new starting point with better prospects? You might think it is enough just to randomly flip a few bits in the input pattern and hope that, as a result, it will be closer to the goal, but the probability of this is small. If the vector is some number of bits from the target, then the remaining bits are already correct (but we do not know which ones); with any random change, the odds are that we do not come closer but drift even further away. To make progress, you need to know in which direction to move, and in a high-dimensional space that is a difficult question.

Expand full comment

Actually it's not a difficult question. That's what optimization methods do. One common method is hill-climbing: check every possible 1-bit move, and take the one with the best result. There are many variations on this to reduce the chances of getting stuck in a local optimum. Another is gradient search: here the representation is continuous, and you use partial derivatives to find the direction that's uphill in N dimensions. It comes with many of the same variations to reduce the chances of getting stuck in a local optimum. Another is iterated relaxation methods, as in a Hopfield network, or in Markov chain Monte Carlo models. This is more common in models of memory retrieval, because it works together with a simple method of storing the points in N-space that you want to remember. Another way of choosing the next point is genetic algorithms, which are useful if the items stored in the underlying representation have linkage disequilibrium owing to some hierarchical structuring of the ontology of the things represented. My point is just that this is a mature, well-researched field, and many satisfactory solutions are available.
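To make the hill-climbing version concrete, here is a toy Python sketch; the scoring function (closeness to a target bit vector) is a made-up stand-in for whatever the memory model would actually supply:

import random

def hill_climb(start, score, max_steps=1000):
    # Greedy 1-bit hill climbing: at each step, try flipping every bit
    # and keep the single flip that scores best; stop at a local optimum.
    current = list(start)
    for _ in range(max_steps):
        best, best_score = None, score(current)
        for i in range(len(current)):
            candidate = current[:]
            candidate[i] ^= 1
            if score(candidate) > best_score:
                best, best_score = candidate, score(candidate)
        if best is None:
            return current  # no single flip improves: local optimum
        current = best
    return current

target = [random.randint(0, 1) for _ in range(64)]
score = lambda v: sum(a == b for a, b in zip(v, target))  # bits matching the target
start = [random.randint(0, 1) for _ in range(64)]
print(score(start), score(hill_climb(start, score)))  # e.g. 31 64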

Expand full comment

Er, the ontology doesn't actually have to have a hierarchical structuring; linkage disequilibrium of representations in the ontology is sufficient. I just think this usually happens in ontologies because they usually are hierarchical.

Expand full comment

Now I want to know why there's a railway station in Dublin called Houston.

Expand full comment

Formerly 'Kingsbridge' station, it is named in honour of Seán Heuston, an executed leader of the 1916 Easter Rising, who had worked in the station's offices.

Expand full comment

Well, Marcy, have you considered using some sort of mnemonic like Sherlock Holmes’ ‘Mind Palace’ to keep these things straight?

https://www.smithsonianmag.com/arts-culture/secrets-sherlocks-mind-palace-180949567/

Expand full comment

Was misnaming me a joke about remembering names, or an accident? By weird coincidence, my sister's name is Marcy. Or maybe not a coincidence, if you happen to know of her.

Expand full comment

Yes I was joking. A silly riff on not remembering names. Just a coincidence about your sister being named Marcy.

Expand full comment

So far as I know there are a ton of complicated reasons for why memory works better or worse, but two of the most common failures of memory of which I've heard are (1) not having enough associations with other memories (more likely with a proper name than with a generic noun, and much more likely than with a verb), and (2) having something closely related in unmeaningful ways (e.g. sounds the same) interfering with the exact recall.

That is, your mental Erdős number may be too large or too small.

Expand full comment

Completely speculative, but I wonder whether some of the incredible achievements of Ashkenazi Jews born in central Europe from 1880 to 1920 were a result of a mix of circumstances similar to those associated with extraordinary, concentrated achievement in other times and places. Periclean Athens, 1st century BC Rome, 15th century Florence, maybe late 18th century Edinburgh, maybe mid to late 17th century London, maybe the Netherlands in the early 17th century. Most of our culture was created in a very small number of places, in a short period of time. And then, although high achievement in those places/cultures continued - as it certainly has done for Ashkenazi Jews judging by the number of Nobel prizes they win - the great, world-changing contributions faded away.

These cultures do seem to share some things. Wealth and power - they were all rich places that were extending control over others, and were at the forefront of contemporary technological development. Novelty - all of these were very consciously new societies, doing something different from anything that people living in those places had done before. Threat - they were all under constant threat, not just of attack, but of total destruction. It's almost as if the relative lack of a past they could call their own, and a future in which their culture was reasonably certain to survive, helped focus the collective mind of these places on achievement in the present.

Looking at these indicators, one might expect the most interesting place in the world today, and the most promising place to look for major contributions to the future of our culture, to be rich, technologically advanced, realistically threatened with annihilation, and new - a civilisation which thinks of itself as separate from anything that has gone before. I'd say that the place that seems to fit those criteria best would be Taiwan. I know almost nothing about Taiwan.

I'm in the happy state of being aware that there's a mountain of speculation on this subject, but sufficiently ignorant of the nature of that speculation that I feel free to add my own half-baked noodling to the pile.

Expand full comment

Great choice of environments and periods. Except I don't see "being under threat" as in any way a central part of this picture. I see it more as those environments having somehow succeeded in getting a scene running, and in keeping it alive for a certain time.

Beethoven was not from Vienna, but was drawn to it because for music in the 18th/19th centuries, Vienna was The Scene in that region - same as Milan was in the neighboring region. Just like Paris was The Scene for writers in the first half of the 20th century, drawing them in from across Eurasia and the Americas. The US universities were The Scene for bright minds in the second half of the 20th century, drawing them in from across the globe. (while present-day US universities have become markedly decadent, bloated, barnacled and on a path of decay). Silicon Valley is an absolutely massive Scene, though the question may be for how much longer.

I don't think there is a "recipe" for creating a scene, nor is there a "recipe" for how one is dissolved. Scenes are above all living organisms. If it's not a living organism, it's not a scene. Those organisms are very, very diverse, and correspondingly diverse are the processes by which they decay, petrify or succumb to poison.

Taiwan seems to be a pattern-match of your key assumptions, including the one about "being under threat" which I think is far from central. Taiwan is certainly a more-than-well-off country, but I think not a scene. Meanwhile, China - itself "Taiwan's looming threat" - seems to be comparatively more active and alive in the scene department, even though it is not under threat from anyone. (or at least not from the US, which is lately too busy shooting itself in the foot)

Expand full comment

You're right, late 19th/early 20th century Paris is a real problem. It wasn't particularly under threat, and wasn't really a new culture, but there's a lot of major stuff happening. You can make an argument for the real risk of nuclear war from 1950 to 1989 being suggestively correlated to the period when American music/art/literature was really doing new things.

Taiwan is the test, I've more than somewhat ridiculously decided. If in ten years' time it turns out that actually a huge amount of great stuff has been going on there, and is continuing to do so, then this theory should definitely be taken seriously.

Expand full comment

In ten years' time, the Republic of China might already become a part of the People's Republic of China. OK, kidding aside, I agree that your theory is worth considering because it is falsifiable - it could be potentially falsified or vindicated in such a timeframe.

That said, I still think that "being under threat" is not key, that "bringing to life a Scene" is key. (while "*how* do you bring a Scene to life, what is the recipe?" might be an extremely hard-to-answer question - except if the answer is that there are no firm recipes).

While we ultimately attribute progress to the many individuals who brought it about - and I agree with those individuals getting all the credit that they're getting as individuals - I think we mustn't overlook the contribution that the Scene's existence makes in this picture. The Scene brings them together with like-minded individuals, who riff their ideas off each other - and it *motivates* them to dedicate their extra time and energy passionately to those pursuits, as opposed to dedicating it to whatever more mundane pursuits they would have had absent that Scene. In other words, those -same- individuals, with those -same- innate capabilities and drives, might not have accomplished nearly as much had they not been thrown into that Scene.

I don't think the "nuclear threat" had anything much to do with US achievements post-WWII. Europe was under that exact same threat, right? But Europe had just spent half a decade thoroughly gutting itself, ending up largely devastated and crushed ("in body and spirit"), including in the Allied parts who won. In contrast, America had won with over an order of magnitude less human loss and virtually no destruction whatsoever in its mainland - and high spirits, the impression that they could do anything. Scenes were indeed springing to life in post-WWII US - literary, academic etc. - but I don't think it had much to do with the "nuclear threat".

Taiwan is a very rich country. As is Switzerland. Taiwan is under threat. As Switzerland isn't. And neither of the two presently have a Scene running - at least to my knowledge. I may be wrong. We will see in 10 years.

P.S. about the "richness aspect". While I don't think there's a set-in-stone recipe for creating a Scene, I think that one of the unavoidable ingredients is ... not *money* per se, but Slack (in the meaning that Scott uses the word, https://slatestarcodex.com/2020/05/12/studies-on-slack/). Slack is *far* from being the sufficient ingredient, but it is a mandatory ingredient. A baseline of richness is indeed required to have Slack at all, but beyond that baseline, further richness does not necessarily bring further Slack, it can even be to the contrary. The Taiwanese can at the same time be quite richer than the Portuguese, while having less Slack than them - if the work expended on achieving such material standards sucks out their time and energy.

Expand full comment

My list looks like this:

- Athens, 500-150 BC, but our record past 400 BC is spotty.

- Not Rome. On the contrary, I think that, for a nation that lasted 2000 years (~600 BCE - 1453 CE), Rome had an astounding lack of intellectual achievement, including in art, math, literature, science, and philosophy, unless we attribute everything done by Greeks ruled by Rome to Rome. The Romans had good engineers and lawyers, and a few good poets.

- Venice, circa 800-1800 AD. The reinventors of republicanism; the forerunners of the Renaissance; also a world military power. Suspiciously underemphasized in our histories.

- Southern Italy, 1300-1600 CE.

- The Netherlands during the Dutch Golden Age, ~1550-1700. Another time/place of criminally underestimated importance. They invented the toleration of opposing ideologies, and imported it to the US when they founded New Amsterdam. Every step towards Enlightenment in Europe during the Renaissance relied on Dutch printing presses; subversive material was for a hundred years printed mostly in the Netherlands, where the press was difficult to control.

- The UK, 1660 (establishment of Royal Society) - 1776 (death of Hume), with a special nod to Scotland.

- The US, around the time of the American Revolution. The persistent insistence of historians that the French Revolution was more important than the American Revolution strikes me as perverse. The French Revolution was materially important in wiping out the aristocratic class. But it gave us no new ideas or new knowledge; merely another proof of the inherent violence and instability of naive communitarian ideology.

All of the places on my list had many things in common:

- No powerful monarch or central government, and no planned economy

- Money (not barter)

- A merchant economy based on sea trade with considerable freedom

- A sophisticated monetary system, including loans with interest

- A history of the government paying back its loans (I'm not sure if they all had this)

- A rising middle class, due to this freedom of trade

- Respect for personal artistic achievement, as evidenced by the fact that we know the names of their artists and architects (as opposed to, say, those of Rome, or of the Middle Ages)

- Individualism (the notion that it's okay for individuals to seek personal honor, to feel personal pride, and to have their own interests and preferences)

- Individualism and communitarianism are not exclusive! Athens, Venice, the Italian city-states, and the Netherlands had a high degree of community spirit. The US and Britain did not because they were patchworks of different nations.

- Competition, both with other cities and nations, and with other individuals within the same state

- An upper class which, unlike those of France and Spain, was free to engage or invest in trade or labor

- Widespread disillusionment with religion among intellectuals

- Significant freedom of speech and writing, owing to this disillusionment with religion

- Ineffective enforcement of religious authority (this was true even in Renaissance Italy, where, despite physical closeness to Rome, the Catholic church had much less power than in France or Spain)

- Naturalistic and non-idealist art, resulting from freedom from violent religious oppression and from Platonist ideologies

I'll call cultures with these attributes "Enlightened cultures".

I think all of the items on this list resulted, in the European cases, from the weakness of the monarch and of the Catholic Church. The exception that proves the rule is Constantinople, which had most of these things, but had a powerful, centralized state and state religion, and accomplished little culturally in 1000 years other than building the Hagia Sophia and preserving ancient manuscripts.

[I deleted some paragraphs here that were political.]

Note that this list consists almost entirely of things Plato opposed, disliked, or said he would eliminate from the ideal state (in Republic).

Expand full comment

I think if you're looking at Rome from 75 BC to AD 1 you've got Virgil, Horace, Ovid, Cicero, Livy, Lucretius. People who know far more about their works than I do would describe that as a flowering of literary genius, I think. And great works of art, even if along the lines laid out by the Greeks hundreds of years previously. Again, my limited understanding is that their feats of engineering were extraordinary, and unparalleled prior to the modern day.

Interesting that the quality of literature and art declines so markedly from the 1st century AD onwards, even though Rome itself flourished.

Expand full comment

I think you are right in identifying the period 75BC - 1AD as the high point.

Expand full comment

Like I said, they had good engineers, lawyers, and a few poets (I was thinking specifically of Virgil-Horace-Ovid; that's why I said "a few" rather than "several"). Add historians to that if you like. Lucretius became the greatest Roman philosopher merely by passing on the ideas of Democritus and Leucippus without adding any insane metaphysics or religious doctrines; and the Romans were so uninterested in what he had to say that it was almost lost, while they instead devoted centuries to adding epicycles to Plato's vile and insane philosophy, then stirring it in a pot with Judaism, Mithraism, Manichaeism, and Zoroastrianism, and calling the resulting stew Christianity.

They, or their Greek slaves, made some good sculptures; but whereas Greek sculpture developed, Roman sculpture IMHO merely gradually declined from the Greek. They made unique and great advances in architecture. But all that is trivial considering the extent, duration, and wealth of their empire.

In Rome's defense, the island theory of biogeography (in evolutionary theory) predicts this. The diversity generated by evolution is not proportional to the land area available; it is proportional to the area raised to the power of z, where typically 0.15 < z < .35. For z = .25, it predicts that a land area 100 times as great will produce only 3 times as many new species. The production of artistic "advances" is in many ways much like evolution, so my prior is that we should model it with the island theory of biogeography.
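Just to spell out that arithmetic (the exponents below are simply the typical range quoted above):

# Species-area relation S = c * A**z: relative diversity from a
# region 100x larger, for a few exponents in the typical range.
for z in (0.15, 0.25, 0.35):
    print(z, round(100 ** z, 2))
# 0.15 -> 2.0, 0.25 -> 3.16, 0.35 -> 5.01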

Expand full comment

Rome didn't contribute much? I think "contribute" is the wrong way to think about it. During the time of Rome there were certainly huge advancements in every field. The problem is that the fall of Rome caused many of these advancements to get lost. In fact much of Greece's known works were also lost for a while and were rediscovered hundreds of years later.

Expand full comment

They did invent one new literary form: the rape comedy. In a Roman rape comedy, a couple is in love and about to be married when the woman is raped by an unidentified man, and so the wedding must be called off and the woman sent away in disgrace. But in the end, the couple discovers that the man she was to marry was himself the rapist, so they can get married and live happily ever after.

On that note, they also invented gladiator shows as entertainment; apparently they grew out of the ritual sacrifice of slaves at the funerals of wealthy men. If they didn't invent them, they at least greatly increased them in scale and grandeur. That's the sort of art the Romans invented.

Expand full comment

What advancements? I tried recently to make a list of inventions made by the Romans, and used Google to search for things they had invented, compiled a list from several websites about the Romans, and then went through the list, and found the only thing on it that they might possibly have invented was the overshot water mill--an engineering improvement on the undershot water mill. That's one major incremental improvement in one technology, in over a thousand years. They didn't invent concrete, the ballista, the catapult, indoor plumbing, the aqueduct, the arch, the barrel vault, or any of the other things people sometimes erroneously credit the Romans with inventing. They developed no new math, and all I know of Roman science is Galen.

They made incremental advances in engineering, which *is* a type of intellectual achievement; but I've never claimed they were bad engineers. I do claim their skill at engineering exceeded their scientific understanding to such a degree that it's a mystery how they did so much engineering without developing more theory.

Given their huge population and long duration, I find the sparsity of their accomplishments in art and science astonishing. Taken as a whole, they were roughly on a par with individual cities and small countries, like Athens, Florence, Paris, and the Netherlands, each of which had only a tiny fraction of the resources and the labor power of Rome.

Expand full comment

"Rome had an astounding lack of intellectual achievement, including in art, math, literature, science, and philosophy"

Can we really say that? For a civilization of sixty million people, we have hardly anything at all preserved from the Romans of 100 BC-600 AD. Sadly, the Romans did not write on cuneiform tablets.

Expand full comment

Yes, I think we can, and we have a great deal preserved from the Romans of that period. Unfortunately, it's mostly crap.

Expand full comment

Also, the pre-260s Roman Empire certainly did not lack in artistic achievement.

Expand full comment

Well, they made good death masks, though this was more like photography (they used wax to make molds). They made some great tombstone paintings, but I think that was only in Africa, probably all by Greeks. They had great architecture and made great mosaics in the Imperial period, at least from the 2nd to the 6th centuries. (In science, they had Galen.) But mostly they stole Greek art, or had Greek slaves make art for them. They didn't develop a distinctive Roman style AFAIK until the time of Constantine, as in Constantine's arch, which is the first instance I know of Romanesque art, whose prime characteristic is sloppiness and inattention to detail. There was a bit of later great Byzantine art; lots of it if you like 1000 years of painting flat alien-looking madonnas with child always in the same pose. And some painters from Constantinople made great art around 800 AD when they left the stifling, static art world of Constantinople for Charlemagne's court; but that's more the exception that proves the rule--their painters *could* make great art, but weren't allowed to.

I'm unfamiliar with their pre-260s art. Can you link me to some examples?

Expand full comment

(Technically not tombstones; they were painted on wood.)

I don't mean there was *no* great Roman art / science / etc.; but that, considering the vastness and duration of their nation, they contributed little. They were outdone by 1 or 2 centuries of flourishing in Athens, and Florence, and the Netherlands--probably all areas with at most 1% as many labor-hours to work with as Rome.

Expand full comment

Hm, you might be right.

Expand full comment

"I know almost nothing about Taiwan."

Relative to the mainland, it's stagnating in all respects (at least partly due to mainland competition), similar to Japan. The scientific and cultural power of the mainland is growing by leaps and bounds, though (though from a very low base).

Expand full comment

Comparing a country with a population of 23.5 million to one with 1.5 billion hardly seems fair. I think comparing China to almost any other nation on earth would've shown that the other country was "stagnating" relative to China just because China's starting point was so low. And maybe in another 40 years we will say the exact same thing about China when comparing it to countries in Africa.

Expand full comment

"And maybe in another 40 years we will say the exact same thing about China when comparing it to countries in Africa."

No. Africa has a dearth of human capital (and, I note, had as much time to develop as China).

"Comparing a country with a population of 23.5 million to one with 1.5 billion hardly seems fair."

Well back in the 1960s-1980s the situation was exactly the reverse, with Taiwan diverging from the mainland.

Expand full comment

For many years in the US Taiwan *was* China. As Mad magazine satirically described it in the 60’s, that big land mass occupying continental east Asia was referred to as “Big Empty Spot”.

I remember buying my first garment with a “Made in China” in 1981.

My first thought was “Do they mean *Communist* China? What’s up with that?”

Expand full comment

"maybe late 18th century Edinburgh, maybe mid to late 17th century London"

Neither of these fit the criteria of either Novelty or Constant Threat (at least not from war), in my opinion. Unless you want to point to the Black Death and Great Fire as instigators of London's intellectual rise.

Plagues are often touted as great shifters/resetters in terms of attitudes, but was London especially under threat in comparison to other cities of the time? I don't think so.

Expand full comment

Late 18th century Edinburgh doesn't fit constant threat, really, although the last Jacobite rising was 1745. But then great as Hume and Smith are, I'm not sure the Scottish Enlightenment was really significant enough to fit this argument. Novelty does fit, I think - they rebuilt their city, were creating a new North British nation, and were consciously separating themselves from Scottish history to date.

17th century London fits a bit better, I think. The population increased by about 1000% between 1550 and 1700, from a town to one of the largest cities in the world, and they definitely saw themselves as creating something new, most notably in not being Roman Catholic, but then extending to the subsequent explosion of new religions and political movements, Quakers, Levellers, Republicans, etc., that flourished in the city. I think the threat was real, too; a Catholic king or a Catholic invasion were real possibilities, and would have been likely to lead to the destruction of the culture that was being created (and it was partially destroyed after the Restoration in any event). That's what I mean by Threat, really - if the wrong person had won a war in the 15th century, London's culture wouldn't have been changed much. Two hundred years later, something new had been built, and was at risk.

Would love to know how the theory fits the Golden Ages I know even less about - the 8th century Baghdad of the House of Wisdom, the equivalent if there was one in Song Dynasty China, Mauryan India, etc.

Expand full comment

Fine- I was interpreting the criteria a little more stringently than that, but I think I agree with what you said.

Is the 'imminent threat' supposed to be a threat to the culture that is being created, or is it a threat to the safety/security/lives/independence of the place?

You seem to imply that it is the former in your reply directly above: a reactionary regime destroying the thriving new culture of London. But that is tautological - if there wasn't a new culture to uproot, then it wouldn't be under threat. And I don't really think that Taiwan faces something like this, does it? The culture is somewhat different from China's, sure, partly due to the divergent political systems. But it's hardly uniquely facing cultural eradication. More, it faces military threats to its independence.

(There might also be something to be said about the convergence of many advanced societies' cultures in the modern era, which means there is less culture to be under threat. Scientific advances look pretty similar in China as they do in the West, for example, as does music.)

What I thought you meant was a more existential level of destruction that the place faced e.g. Athens. But, to be honest, I fail to see how any urban area for substantial chunks of history doesn't fit the slightly weaker threat criterion as you've sketched it out above, assuming we're applying it to the polity rather than the culture.

Or perhaps a weaker claim: which of London's roughly cultural/geographic/historical contemporaries didn't face threats of a roughly equivalent level of risk as a monarch with different views coming in and stamping out opposition and doing a bit of killing and looting? Didn't basically everywhere face potential threats of around that level for 1000+ years?

Expand full comment

I think I would describe the level of threat faced by London in the 17th century as being on a similar level to that faced by Athens in the 5th century BC. After all, Athens was in fact conquered, and did survive - though many people died and its culture was torn apart.

It's the realistic threat of the destruction of a new culture by external armies that matters. Everyone always faced the threat of disease, starvation, etc, and less so in these places than in many others, as all of them were rich. And I think you can have a new culture that isn't under threat of destruction.

Again, as above, I know almost nothing about Taiwan, but quick googling would suggest that there has been a conscious effort to create a new culture, separate from that of Mainland China, in the last twenty years. That's exactly the kind of thing that my theory would suggest should lead to an explosion of major cultural/scientific achievement in the near future.

A lot of London's 17th century contemporary states faced this kind of threat - all of the Protestant states of Europe, basically. But I don't think that's normal. It's not equivalent to a king coming in and killing people - whatever happened in the Hundred Years' War, for example, there was no new culture that was threatened with destruction in war (the Lollards strengthen my argument, I think). And, of course, London wasn't the only Protestant state in 17th century Europe that made huge contributions to the growth of human civilisation.

Expand full comment
Comment deleted
Expand full comment

Possibly related: African-Americans being a primary influence on music all over the world for some 80 years or more. (Duration very approximate-- I'm not sure when their influence got out of the US.)

I'm not sure most people realize how remarkable it is for one smallish ethnicity to have so much influence for so long; we've been living with it all our lives.

And it can't be simply genetic, since it's not Africans.

Expand full comment

If your child has a personality that gets on your nerves, what should you do?

Expand full comment

I think that, if that's the worst thing about the parent-child relation, it's still above-average.

Expand full comment

A daily meditation practice can help a lot with patience. I’ve been doing one four over 4 years now and a lot of things that used to irritate me no longer set me on edge.

It doesn’t have to be a big time sink. 15 minutes a day done regularly helps a lot.

Personally I’ve lengthened the time of my ‘sits’ but that’s driven by an urge to understand what and why I am. I get some interesting insights if I quiet my mind for 90 minutes but that sort of lengthy practice isn’t necessary to develop more equanimity.

Expand full comment

It doesn’t seem to have helped with my spelling though. doing one *for* over 4 years.

Expand full comment

Very tentative suggestion: spelling requires memory of how words look. Can you get into a meditative state where you notice how words are spelled?

Expand full comment

Yeah, I could give that a whirl. :)

Expand full comment

Is it their personality or their behaviour that actually gets on your nerves?

Expand full comment

I'm not sure how much they can be disentangled.

I think I annoyed my mother by not caring much about clothes, and she might have preferred to be argued with to some extent rather than just being told it's alright about some clothing choice.

However, I wasn't perceptive enough to strongly notice her preference at the time, and not agreeable enough to pretend to care.

Expand full comment

Based on your self-criticism, I would focus on developing the skills of perception and agreeable-ness in your child.

Expand full comment

There are now three nags to subscribe per page instead of two (top-of-page/end-of-article/end-of-comments instead of top-of-page/end-of-comments). Is this intentional?

Expand full comment

Probably sometime next year they'll add obnoxious popups instead of improving the amazingly shitty comment system.

Expand full comment

(There's also an extra hidden instance of one's username somewhere near the top; I Ctrl-F my username to find the parts of the thread with me in them and there are two hidden instances up the top instead of one.)

Expand full comment

Last whole number OT I gave some arguments from the cognitive scientist Donald Hoffman's "The Case Against Reality: Why Evolution Hid the Truth from Our Eyes". Hoffman's interest is in understanding consciousness/qualia; he presents radical ideas, such as that what we see physically as medium-sized objects like humans and other animals are merely the equivalent of an object in a video game because our perceptions have evolved to give us a useful interface of reality that tells us nothing about what is behind the screen. It's a pop-science book strained and stained with tired The Matrix tropes; nevertheless I found some of the ideas serious and novel.

Last week I focused on Hoffman's emphasis on how much our perceptions, particularly our visual perceptions, are divorced from "reality".

Now I want to focus on Hoffman's radical ideas about what it is that consciousness is.

Hoffman approaches understanding consciousness by attempting to solve the maze backwards. Cartesian-like, he starts over with a basic premise: consciousnesses are agents who 1) perceive The World; 2) decide to act based upon info from (1); 3) act, changing The World, changing perceptions, leading to new decisions, new actions, changing the world again, etc.

It's the simplest model of consciousness (although he acknowledges an unconscious computer could also fit that model). The idea is to cut consciousness down to its essential elements, assume it is real, and then ask What Then?
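A minimal sketch of that perceive-decide-act loop (the world, percept, and rules here are invented for illustration; Hoffman specifies none of this):

class Agent:
    def perceive(self, world):        # 1) perceive The World
        return world["visible"]
    def decide(self, percept):        # 2) decide, based only on the percept
        return "act" if percept > 0 else "wait"
    def act(self, world, decision):   # 3) act, changing The World
        if decision == "act":
            world["visible"] -= 1
        return world

world = {"visible": 3}
agent = Agent()
for _ in range(5):                    # changed world -> new percepts -> new decisions...
    world = agent.act(world, agent.decide(agent.perceive(world)))
print(world)                          # {'visible': 0}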

Hoffman seems very influenced by studies of epileptic patients who have had the hemispheres of their brains separated. The two hemispheres seem to exist as separate yet whole consciousnesses ever after. Such a result seems to imply that two conscious structures capable of acting independently can also join together to form a new consciousness containing both.

Hoffman believes that humans, say, are a grouping of many fundamental conscious agents joined together in a hierarchy. So one agent might be in control of the blood flowing through your liver; another your heart rate, another your breathing, etc., and all of these agents are "unconscious" to "conscious" higher-order executive functions, which we more normally think of as consciousness.

We may even, who knows?, ourselves be lower-level agents playing our video games of life, while in reality our business is something more important, like being the unconscious agents who monitor and stabilize the blood pressure of sleeping dragons. Who knows?

Hoffman doesn't claim to know much, and ultimately that is the failing of his book. He has big ideas but can't flesh them out.

Expand full comment

> It's the simplest model of consciousness (although he acknowledges an unconscious computer could also fit that model). The idea is to cut consciousness down to its essential elements, assume it is real, and then ask What Then?

> Hoffman believes that humans, say, are a grouping of many fundamental conscious agents joined together in a hierarchy. So one agent might be in control of the blood flowing through your liver; another your heart rate, another your breathing, etc., and all of these agents are "unconscious" to "conscious" higher-order executive functions, which we more normally think of as consciousness.

Ignoring the extremely loaded verb "decide", there is still a significant contradiction here - if a human consciousness is made up of multiple narrower consciousnesses, then we can't say that the human consciousness is meaningfully "simple". It's possible to bite that bullet fairly hard and agree that a thermostat is conscious (to a more foundationally simple degree than a human!), but then it's not clear how much of the conventional baggage the word carries applies to this definition.

I'm most interested in what technique was applied in the discovery of more fundamental subordinate consciousnesses, incl. the homeostatic processes, and what that means for the model. Is this a concession by the definition being too broad, and forced to capture phenomena that would otherwise be a poor fit? Or is there a rigorous approach where this actually gives us new insights, and suggests future research?

Expand full comment

I think I heard part of an interview with him on Sam Harris' podcast, and I was not particularly impressed by his ideas.

I'm going from memory here, but IIRC his argument about perception vs reality relied heavily on evolutionary simulations showing that organisms that perceive the utility of some resource do better than organisms that perceive all the accurate information about that resource. Which, OK, if you parameterize something in a way that emphasizes useful information and deemphasizes useless information, you'll do better than someone without access to that parameterization. But why does it then follow that this is how real conscious agents behave?

The part about consciousness being a hierarchy of individual conscious elements seems similar to someone else I heard on Harris speculating that consciousness may be a fundamental property of matter, not unlike charge. They went so far as to speculate that electrons may be conscious on some level. But the justification seemed to be that we can't think of a way that consciousness arises from unconscious elements, therefore everything must be conscious. I find that unconvincing. Consciousness may be something that only arises when there are interactions between separate elements. Electrons may have some property that is necessary for consciousness, but not sufficient unless you get lots of electrons together in the right configuration.

Apologies if I'm misremembering the way these ideas were presented.

Expand full comment

I'm interested in finding out more about the reality we're failing to see.

Expand full comment

Infrared is an example. In fact, all of the electromagnetic spectrum except visible light. We evolved (not just humans) to see the frequencies we need to see.

Expand full comment

I think that's a more modest claim than Hoffman makes.

Expand full comment

He’s making a more general point for sure, but it’s related. We have no reason to see solid objects as non-solid, although there are gaps between atoms. If you want to take an extreme view and insist that space only counts as taken up/occupied down to the size of protons/quarks etc., then it is probably true to say there is no such thing as a solid object - but if we can’t go through it or sit on it, then it’s solid to us. At the level we interact with the world, we don’t need to know about quantum mechanics either.

I don’t really get this either “ humans and other animals are merely the equivalent of an object in a video game because our perceptions have evolved to give us a useful interface of reality that tells us nothing about what is behind the screen.”

If there were a conscious mind in a video game, it wouldn’t need to know the ins and outs of 3D coding or pixel refresh rates, and a digital tree is, to it, a tree. And in fact, within the reality of the video game, it exists.

Expand full comment

err... he's not saying reality is like a video game, he's saying that our perception receives only high-level information that is greatly preprocessed, while low-level or pixel-level information is inaccessible to our consciousness.

Expand full comment

I didn’t say that either. I was using an analogy.

He is saying “it’s an illusion”. But it isn’t. We don’t need to know or see atoms, or most of the electromagnetic frequencies. The reality we see is what we need to see, and it exists.

Expand full comment

I don't think it has anything to do with the solar spectrum, although that is a common canard, and it is true that the Sun's spectrum currently peaks in the visible (it peaked in the IR during its very early years). The Sun emits more than enough IR to see by, and the use of IR in night-vision goggles tells you that nocturnal animals would find it rather an advantage to be able to see in IR.

But IR is strongly absorbed by almost any biological material, or indeed anything made of molecules, so it would be almost impossible to construct a clear lens to focus the stuff, and I would say that is why animals don't see in IR (although I believe spiders have low-resolution IR-sensitive spots).

Expand full comment

Sure, as you said - many animals do see in IR, where they needed to evolve to find prey at night, or find hot spots on a warm-blooded body. Bedbugs have IR vision. Mosquitoes, too, damn them.

Snakes too because they are looking for warm blooded animals to prey on.

For most animals, visible light is what is absorbed or reflected best by solid items, thus defining them as solid. And of course the differentiation of frequencies into those qualia called colors is useful for fruit eaters and pollen hunters. (Of course some co-evolution is involved there - flowers want to be pollinated and fruit needs to be eaten.)

Expand full comment

I said *no* animals to my knowledge see in IR, meaning they focus the light and get an image. There's no biological lens material that would work for that, as far as I know. Yes, pit vipers also have IR-sensitive pits, but they're extremely coarse -- basically it only tells the snake whether to strike left or right once it gets close enough.

This has nothing to do with qualia or with the solar spectrum, it's just a property of matter. Visible light can be focused because you can build biological lenses that are transparent to it. You can't do that with IR light.

Expand full comment

Physics is exploring it. If we have difficulty perceiving anything, we build devices that can perceive them and project those observations into the domain we can perceive. Science has been doing it for centuries. Hoffman's argument is not particularly compelling IMO.

Expand full comment

I'm not buying it. The disturbing thing about consciousness is that it is the only thing that we can be sure exists, but it is the only thing that we have literally zero explanation for.

IMHO we will need to totally rethink our fundamental scientific axioms before we can start to understand consciousness.

Expand full comment

I believe our certainty that consciousness exists is the Big Lie in our cognitive machinery. If I wrote a fancy but "obviously" non-conscious NLP in-out loop and artificially injected some "facts" such as "consciousness exists" and "you are conscious" that were not grounded in any of its observables or subject to epistemic revision, I would have some fun watching it spin out, producing and retracting various statements about consciousness, like our "hard problem", never being able to make progress, nor to update itself - from the fact that it's unable to make progress - toward the belief "my premises are incoherent."
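A toy version of that loop's belief store, purely for illustration (every name and "fact" here is made up):

beliefs = {
    "the ground is wet":    {"value": False, "revisable": True},
    "consciousness exists": {"value": True,  "revisable": False},  # injected, ungrounded
}

def observe(statement, observed_value):
    b = beliefs.get(statement)
    if b and b["revisable"]:
        b["value"] = observed_value   # grounded beliefs track sense data
    # injected "facts" are silently left alone, whatever the world looks like

observe("the ground is wet", True)
observe("consciousness exists", False)  # no effect, by construction
print(beliefs)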

In my honest opinion, we are that NLP loop. Almost all data that are processed in our minds as facts are subject to revision by sense data. The idea of "consciousness" is not, though we can't tell it apart from grounded facts. Thus we treat "we are conscious" as the same class of statement as "the ground is wet" - seemingly coherent and meshing with the rest of our beliefs, even though in reality it's not part of any coherent epistemic web.

I suspect repeated exposure to highly altered, dissociated, semi-waking states might be able to bring "consciousness exists" nearer to the domain of "revisable by sense-data". However, I doubt our cognitive architecture is able to function even close to normally without that synthetic fact present.

Expand full comment

Are you saying that the statement "I am conscious" is just like the statement "scientists are trustworthy/untrustworthy": something that is "not subject to revision by sense data"?

It is not clear to me that my experience of being conscious is not subject to revision by sense data. I can almost imagine my brain circuitry noticing that whatever normally allows it to notice consciousness is missing, and saying words like "Huh. I don't feel conscious anymore. Weird."

Expand full comment

Does your belief [that consciousness is just an illusion] have an influence on your moral values? After all, if consciousness isn't real, then the subjective experience of pain and suffering isn't real. This would mean that a human being screaming under torture isn't qualitatively different from a computer program that prints "Please stop it hurts" whenever you press the Enter key.

Expand full comment

This is a difficult question for me to answer.

My "cheap cop out" answer is that I believe morality is a synthetic construct, and that even if the epiphemenological model of consciousness held, "good" or "bad" is just a game we play. There is no "qualitative" difference between making paperclips and preventing torture, these are merely different arbitrary preference sets over certain configurations of matter (or epi-matter, if that's part of your ontology).

My more serious answer is that it's perfectly consistent to define suffering in terms of "mind-like systems that substantially resemble those of neurotypical human beings who claim to be in pain", and that, coupled with the arbitrary choice of "I want to avoid suffering", it will "all add up to normal" if you do that. In fact I suspect this "consciousnessless" definition is going to serve society much better than asking "but is it really conscious?" will, when debates arise about whether advanced AIs suffer or not. And also for animal rights - one must surely find it ironic how consciousness fetishism has allowed us to consider the complex minds of animals as qualitatively inferior.

My most serious answer is a simple affirmative, buried in the previous paragraph: yes, because suffering is defined, and there is no qualitative distinction between suffering and non-suffering systems.

A fun game to play is to try and create a mental model for the "simplest system that can experience suffering". Is it C. elegans? Can we use fewer neurons? In fact, I think that going down this rabbit hole is one of the surest paths toward my view of things, so give it a try!

Expand full comment

There wouldn't be any point in most torture if you didn't think (can unconscious beings think?) that the victim feels pain. A lot of torture seems to be spite, not even interrogation. (Using torture for interrogation doesn't work, but at least there's an excuse.)

Expand full comment

Has the belief that consciousness is an illusion affected your life? If so, how?

Expand full comment

There's belief, and there's belief.

I have concluded intellectually that any thinking about or study of "consciousness" is equivalent to studying "magic" in its fruitlessness (though it is a lot of fun). I try very hard to channel this conclusion into altering my perception of reality, and only very rarely and very partially succeed. Most concretely, it's become easier for me to enter a "fragmented" mental state where, rather than being aware of a unitary self, it really feels like there are different sub-selves in the brain speaking to each other at a delay. At that point it becomes as "obvious" to me that consciousness is a lie as the opposite feels normally. But these states are fleeting, and frankly pursuing them would derange me.

All in all, life is good in the Matrix. After all, I feel no different than I do knowing that spacetime is curved or that the universe is likely a giant high-dimensional wavefunction of which I'm a subspace. It's not a conclusion you can "taste", so to speak. All in all, I'm remarkably normal.

Expand full comment

That's my point! I don't think that there is any way to make progress on the "hard-problem" from our current set of scientific axioms. We need to start at our only known - consciousness - and build from there. Leibniz was onto something with his Monads (https://www.britannica.com/topic/monad) if you ask me.

Expand full comment

Descartes (who preceded Leibniz by half a century) did pretty much that -- starting with consciousness as our only known, and building from there -- and arrived at dualism.

To this day, I haven't really seen a model that would be a better fit. (although I've seen monumental amounts of hand-waving...)

Expand full comment

If ever we build an AGI then it will have a dual nature, the hardware and the software.

Expand full comment

That is not what Descartes means by the duality between the material world and the consciousness. The realm of matter and the realm of consciousness are two separate realms, that are able to interface and bidirectionally exchange information.

AGI's "hardware and software" are both material. So is the central nervous system.

The CNS has a subsystem that controls your every heartbeat, adjusting the heart rate to a swathe of circumstances. Inputs. Evaluations. Decisions. Outputs. It's all there. "The hardware and the software" - both implemented by elaborately joined-up neurons with an internal state, as opposed to by elaborately joined-up transistors with an internal state. And note that the "joined-up transistors with an internal state" entity covers *both* the "hardware" and the "software" aspects that you speak of.

Except, the consciousness isn't there. You have no consciousness whatsoever of your heart beating. Not "per se" - at best you're conscious of secondary effects that may appear only in some circumstances, when the pressure swinging between systolic and diastolic is applied to certain parts of your body, and your hearing might pick up the thuds, or those parts may pick up the sensation of pressure pulses. Then you are conscious of that sound as sound, or of pressure pulses as pressure pulses. You're never conscious of the "input -> evaluation -> decision -> output" bit that truly runs the heartbeats.

Except, you *are* conscious of the "[heavily preprocessed] input -> evaluation -> decision -> [simplified] output [which will get heavily postprocessed before reaching the actual muscles]"; *this* is what we call consciousness, in what encompasses the five senses and the "high-level" commands to most muscles. As for the heavy preprocessing and heavy postprocessing - you can mentally understand that they exist, perhaps you can even grasp *just how heavy* those preprocessings and postprocessings really are, but you have no consciousness whatsoever of them *executing*.

ALL of that - the preprocessing of sensory information, the postprocessing of the signals actually sent to the muscles, the heartbeat control - is implemented in the exact same substrate. Neurons. Totally material ones. No difference there. Not even a difference in "a control loop existing", as for the heart.

In the dualist view, the realm of consciousness is separate from that material realm. Information exchange between the realms is possible, and the brain *somehow* pulls off this bidirectional interfacing. Sending the preprocessed information (which is, in Shannon terms, of significantly reduced bandwidth compared to the raw source information from which it was preprocessed). Receiving very simplified "high-level commands" (which are then "postprocessed" into much more complex actual signals to the muscles). In all, the information bandwidth between the CNS and the realm of consciousness seems to be many orders of magnitude narrower than the bandwidth of all the signals exchanged throughout the CNS. And what lies in between these two endpoints happens in the realm of consciousness. Bodies are the agents through which the realm of consciousness can effect changes in the material realm. The other direction applies too. The brain is somehow able to exchange information on the interface between the realms.

The "AGI hardware and software" will not have consciousness merely because there is a "duality", because "hardware and software are two things". That is cargo-cultish. Also, declaring consciousness to be an "emergent property" of a control loop that involves evaluation and decision, is shallow hand-waving. Consciousness is its own realm. AGI will "have consciousness" only if it manages to pull off whatever it is that the brain is pulling off to achieve that information exchange with the realm of consciousness.

P.S. the "commands" in the consciousness->CNS direction are not only muscle movement commands, but also commands for memorized information retrieval. The memory *is* stored in the material neurons, and the CNS might or might not immediately oblige such a memory query. From Von Neumann onwards, the random access to memory was viewed as an absolutely fundamental tenet of computer design, so our computers are doing at least that bit right.

Expand full comment

Yeah I think that's a much more plausible hypothesis than that it's some magical ineffable quality, the foundation of reality or something, forsooth. That is, we're probably just programmed to have the feeling that we're in some special loop with exotic hall o' mirrors properties, and to say so to each other.

Expand full comment

> In my honest opinion, we are that NLP loop.

Speak for yourself. Maybe you don't have a subjective experience [1], but I certainly do. You could spend all day arguing that consciousness doesn't exist and is just an illusion created by our brain cells to trick themselves into ... whatever it is that brain cells trick themselves into, and I could just sit here, thinking about it, and proving you wrong by experiencing – and not just processing – my thoughts and sensory inputs. Not that I can prove it to *you*, but I can prove it to *me*. And that's what makes the hard problem of consciousness more difficult to even properly define than just about any other problem – save for maybe the question of why there is something instead of nothing.

[1] Although I very much doubt that.

Expand full comment

It really depends on what your standard for proof is. I could easily assert just as well that I have a soul, or some ineffable magic powers, and if my standard for proof was that "it really feels like it's true", then I suppose I will have proven that to myself. Someone with schizophrenia might claim that it's "proven to themselves" that some delusion is true, simply because it feels incontrovertible. If you're happy with that, there is nothing I can say that will dissuade you.

However, in my opinion, the problems start when one tries to define more rigorously what "I have subjective experience" actually states about the world. The "hard problem" and friends arise because it seems to be the case that one can make a complete "external view" description of reality without invoking any of these concepts. This leaves the "obviously existing" internal view as sort of a phantom, which leads us to all sorts of things like p-zombies, epiphenomena, "separate magisteria" et al.

This is where the NLP loop analogy comes in. It too will obviously assert that it is conscious, just like you do. All of its external, empirical observables will be consistent with yours. Its "consciousness" is asserted by fiat. It will never be able to create an internal representation that encodes "I am not conscious". Statements of "obviousness of consciousness" implicitly assume a naive realist proposition that one's mental representations of the world are a faithful representation of it. We are happy to make adjustments about, say, relativity or quantum mechanics (though our every-day representations of the world will always be classical and Newtonian); however, it seems that making adjustments to the representations of our own minds is (not unexpectedly, really) a bridge too far. So we remain committed to the naive realist assumption that the statements "consciousness is a lie" and "it's obvious/self-evident that consciousness exists" are contradictory, when objectively they're not -- the trick is rigorously defining "obvious/self-evident".

Expand full comment

It’s a fairly convincing big lie, and if it is a lie, that’s what needs to be explained rather than hand-waved away.

Expand full comment

The hemisphere stuff sounds like Jaynes, and the agents like Dennett.

Expand full comment

And Iain McGilchrist's The Master and His Emissary. I think Dennett's agents are a compatible concept, and both concepts seem plausible.

Expand full comment

+1. Seems like a rehashing of quite well trodden ground in Philosophy of Mind over the last 150 years.

Expand full comment

yep

Expand full comment

Another talented family that got missed: the Fields. David Dudley Field I, a Congregational minister, had nine children--several of whom gained national prominence. Stephen Johnson Field became a Supreme Court Justice, serving from 1863 to 1897 (the second longest of any justice). Cyrus West Field laid the first transatlantic telegraph cable in 1858. David Dudley Field II became a Congressman and also pioneered a major legal change: the shift from common law pleading to code pleading. Their sister, Emilia Ann Field Brewer, had a son, David Josiah Brewer, who also became a Supreme Court Justice, serving from 1889 to 1910. Funny to think that the United States, a nation of about 63 million people at that time, had an uncle and nephew pair on the Supreme Court.

Expand full comment

Sounds like they were outstanding in their respective domains

Expand full comment

I see what you did there 🤣

Expand full comment

A contrarian acquaintance of mine who's in the habit of posting these sorts of things linked the following blog post: https://alexberenson.substack.com/p/vaccinated-english-adults-under-60

Turns out that, according to the government data, between May and September all-cause mortality for vaccinated 10-59 year olds in the UK was twice as high as among those who are unvaccinated. The original substack has some comments attempting to form an explanation other than the vaccine actually being the cause of these deaths, and I can think of some more myself, but I would be interested to hear if someone here can provide a robust model of what's happening and not just a handwavey set of reasons that sound about right (including an explanation of why the death rate among the vaccinated was that much lower before the trend shifted in April).

Some of my thoughts regarding the issue:

0. I have a fairly strong prior towards the human body being robust against the presence of foreign mRNA and/or bits of inert viral proteins, and I would be rather surprised if it turned out that vaccines of any modern type did actually cause excess deaths.

1. The base rate mortality for this age group is very low so just about any confounding factor has the potential to overwhelm it

1a. At the beginning of the time period almost no one was vaccinated; towards the end only some 10% of 10-20 year-olds appear to have received a second dose (https://www.theguardian.com/world/2021/oct/12/explainer-why-has-the-uks-vaccination-rate-slowed-down), whereas more than half of the deaths among 10-60 year-olds are accounted for by 50-60 year-olds. Additionally, the mortality of the unvaccinated appears to drop somewhat. This definitely explains a decent chunk of the effect, but surely the mortality in the second-dose group shouldn't exceed the mortality from before basically anyone had the vaccine?

1b. People with potentially deadly comorbidities are presumably self-selected to take the vaccine

2. This was brought to my attention by someone who actively seeks out data that can be used to support a contrarian stance, and whoever noticed this anomaly first might be the same kind of person. As far as I can tell, there could be more degrees of freedom than is apparent at first glance. There's a decent chunk of variability in both groups, and I have no idea what the usual variability is. It seems possibly convenient that the beginning of the dataset has been cut off (I'm not comfortable enough with spreadsheets to do the visualization myself). Plus all the other possible fudge factors.

It seems like 1a might also explain the initially lower death rate among the doubly vaccinated, if we suppose the only people who had received two doses at that point were healthcare professionals, and that healthcare professionals enjoy lower mortality than the group of 10-60 year-olds at large.

-

(I can see this is tangentially related to politics, but my intent is to understand people who are anti-establishment in general, and the whole pandemic response thing is just background)

Nevertheless, I want to say I very much get where the mentality behind the contrarian stance comes from. While writing this post I looked at all kinds of British news articles and was again reminded of how preachy a lot of the news reporting sounds, at least when it's not just reporting of daily COVID deaths(*) or something else that at this point feels about as front-page worthy as "Theory of relativity still thought to be broadly correct" (reporting about TRENDS would make sense; I find that kind of information worse than useless precisely because it distracts you from the big picture), "trust science" sort of messaging aimed at people who fundamentally distrust the elites, and all that. I have approximately zero credence in this being a conspiracy and correspondingly high credence in all of this being explained by Moloch playing us like puppets, but if my priors were different, I could easily see myself interpreting the push towards vaccine passes, the yes-man attitude of the media and other such details as some sort of NWO takeover.

*) Indeed, I would name the rote daily reporting of case numbers as one of the prime culprits in the trap politicians have found themselves in: it seems fairly well-established that vaccines are quite effective at reducing cases requiring hospitalization yet have only a limited effect at reducing transmission that tests positive, and consequently politicians are responding to a rise in confirmed cases as though it were spring of 2020, despite vaccinations markedly changing the cost-benefit analysis of interventions which, as I recall, Scott himself concluded weren't trivially the right or wrong thing to do to begin with (in "Lockdown Effectiveness: Much More Than You Wanted To Know").

Expand full comment

This is a good explanation of how Simpson's paradox plays into this: https://roundingtheearth.substack.com/p/uk-data-shows-no-all-cause-mortality
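To make the mechanism concrete, here's a toy sketch in Python with entirely made-up numbers (not the actual ONS figures): within each age band the vaccinated die less, but because both vaccination coverage and baseline mortality rise steeply with age, pooling the bands can make the vaccinated group look roughly twice as deadly overall.

# Toy Simpson's paradox illustration; all counts below are invented for the example.
groups = {
    # age band: (vaccinated count, unvaccinated count, vaccinated deaths, unvaccinated deaths)
    "10-39": (1_000_000, 9_000_000,   100, 1_800),  # low baseline mortality, mostly unvaccinated
    "40-59": (9_000_000, 1_000_000, 9_000, 2_000),  # higher baseline mortality, mostly vaccinated
}

for band, (v_n, u_n, v_d, u_d) in groups.items():
    # Within each band, the vaccinated death rate is half the unvaccinated one.
    print(f"{band}: vaccinated {v_d / v_n:.3%} vs unvaccinated {u_d / u_n:.3%}")

# Pool the bands and the comparison flips (vaccinated ~0.091% vs unvaccinated ~0.038%).
v_n, u_n, v_d, u_d = (sum(g[i] for g in groups.values()) for i in range(4))
print(f"Pooled: vaccinated {v_d / v_n:.3%} vs unvaccinated {u_d / u_n:.3%}")

In this toy version the pooled comparison is driven entirely by the vaccinated group skewing old, not by anything the vaccine does.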

Expand full comment

Well, reading that link: sure, Simpson's paradox, but also ~zero vaccine efficacy. What's up with that?

Expand full comment

Consider that the comparison is between excess cumulative mortality in vacc. vs unvacc. populations, and as the rest of the post makes clear, their age breakdown is very different. This isn't "vaccine efficacy" measured directly or even indirectly. This is taking two groups that are quite different (and changing throughout the year), and saying "this one did worse than predicted in terms of deaths, that one too, but the difference in how much worse they did is quite small". Could be due to so many confounders! E.g. what if unvacc'd population died quite a bit more due to covid-19, but everybody had fewer traffic deaths due to lockdowns and whatnot, but since these deaths skew young, and unvacc'd skew young, the counter effect is more pronounced in the unvacc'd. It's just the first stupid thought that came to my mind, probably traffic deaths are too small and I don't even know they decreased... but something like that could be true.

Expand full comment

Yeah but if you look at the absolute numbers it's like the vaccine had very little effect. Not the 95% first reported.

(I may be missing something in the data.)

Expand full comment

It's all-cause mortality, not covid mortality.

Unfortunately, the 95% effectiveness only applies against covid and the vaccine does little to prevent other leading causes of death such as dementia, heart disease or respiratory cancers.

Expand full comment

10-59 is a very wide bin. 59 year olds are mostly vaccinated and die more than 10 year olds who are mostly not.

Expand full comment

Also 10 and 11 year olds aren't eligible for vaccination yet, and older children (unless they have serious health conditions) have really only started being vaccinated the last month or so.

Expand full comment

It is the kind of bin where I suspect someone shopped around until they found it.

Expand full comment

I was about to say exactly that.

Expand full comment

(Repost from the last Open Thread in the hopes that enough people see it to produce an answer or at least some informative speculation)

What the hell is going on with Rivian? They're an electric car company that just IPOed, have a market cap of over $100 billion (currently the fifth "biggest" car manufacturer in the world, down from third place last week), and have produced less than 200 vehicles.

As far as I can tell their valuation isn't based on some enormous technological breakthrough or anything, so why on Earth does the market think a company this small is worth more than Ford or GM, and why this one in particular when there are many EV startups to choose from?

Expand full comment

What I heard was that they really hit a home run with their first car. Nobody has ever seen a first-run car by a brand-new company that was done as well as this car. Not by Tesla, not by anyone.

Expand full comment

The one thing I saw is that Amazon already has a contract to buy 100,000 vehicles from them: https://techcrunch.com/2021/11/08/rivian-expands-into-fleet-business-beyond-amazon-deal/

Having a big and established relationship with one of the world's biggest delivery companies seems relevant to the future of a vehicle manufacturer.

Expand full comment

There is also the angle that Ford and GM are seen as, if not dying, at least in an inevitable decline. Being higher valued than them is not *that* earth shaking.

If you're a little bit older, you may still think of them as the dynamic giants of industry they once were. But those days are gone.

Expand full comment

On one hand, Ford and GM are huge, well-established companies that were "dying" twenty years ago but still seem to be doing ok. On the other hand, I've never heard of Rivian, and from what little I know about them they're trying to be the next Tesla by just doing what Tesla did, even though Tesla obviously got there first. Also, Ford and GM are making electrics now too.

Expand full comment

Never heard of them, had to look them up, and regret to say my first reaction was "Ah. California. That explains it".

I think the other commenters are correct: there is a ton of money sloshing around looking for investments, everyone wants to get in on the next Tesla, and so a small ambitious company sounds like the very thing: Jobs! Musk! Bezos! Who wants to be the guy who turned down The Beatles?

Expand full comment

Ford actually invested a great deal of money in Rivian and is still an investor although the two companies recently decided that they were going to be taking different paths towards the future. I’m not sure how to parse that out.

Expand full comment

Society having a lot more money than potential good investments strikes me as pathological, but I'm not sure what the pathology is.

It might be that people aren't just looking for good investments, they're looking for extraordinarily good investments, and by definition, there can't be very many of them.

It might be that this is a time when it's hard to think of something good and new.

Expand full comment

I think this book really gets behind some of these problems:

The Decadent Society: How We Became the Victims of Our Own Success by Ross Douthat – February 25, 2020

https://www.amazon.com/Decadent-Society-Became-Victims-Success/dp/1476785244/ref=sr_1_1

Expand full comment

It might be that there are plenty of good investments, and the pathology is an abnormal number of people with money but no skill for assessing investments.

Expand full comment

Perhaps not just that, but also a shortage of people who are good at assessing investments.

Expand full comment

Because silly money is sad that it missed out on Tesla, and Rivian is the closest thing right now to the "next Tesla", even though they're far, far, far behind where Tesla was when they were worth $100B.

At this point I'm tempted to invest. One thing I've learned is that whenever I think something is a stupid ridiculous bubble, I'm right more often than I'm wrong, but the asymmetric nature of the returns of stupid bubbles means that I should be investing in all of them anyway.

Expand full comment

Because Tesla is worth a fucking fortune, and people want to get on the next gravy train before it booms. Lately, with the insanity of the various meme stocks, it might be unironically a very good strategy to just buy evenly the 1000 memeist memes on the assumption that a handful will blow up and pay for the rest (many of which will just be flat rather than going bust, anyway). eg. Someone who evenly invested in every dog-themed crypto of the last year would have an average return of something like 10,000% from the couple that went bananas, and surprisingly the majority of them went up a bit.

Expand full comment

Item 4:

I took most of the following information from the Wikipedia articles about these two men:

Miloš Forman, 18 February 1932 – 13 April 2018, directed films such as One Flew Over the Cuckoo's Nest (1975), the film version of the Broadway musical Hair (1979), and Amadeus (1984). He won 2 Academy Awards, 3 Golden Globe Awards, and a Grand Prix at the Cannes Film Festival.

Forman was born in Čáslav, Czechoslovakia (now the Czech Republic) to Anna Švábová Forman who ran a summer hotel. When young, he believed his biological father to be professor Rudolf Forman. During the Nazi occupation, Forman's mother died in Auschwitz in March 1943, and Rudolph Forman died in the Mittelbau-Dora concentration camp in May 1944.

Forman later discovered that his biological father was in fact the Jewish architect Otto Kohn and Forman was thus a half-brother of mathematician Joseph J. Kohn.

Note the dates of birth:

Joseph John Kohn (born May 18, 1932) is a Professor Emeritus of mathematics at Princeton University, where he has been a professor since 1968 and served as chairman from 1993 to 1996. (Otto Kohn emigrated to Ecuador in 1939.)

Kohn has been a member of the American Academy of Arts and Sciences since 1966 and a member of the National Academy of Sciences since 1988. In 2012, he became a fellow of the American Mathematical Society.

Kohn won the AMS Steele Prize in 1979 for his paper Harmonic integrals on strongly convex domains. In 1990, he received an Honorary Doctorate from the University of Bologna. In 2004, he was awarded the Bolzano Prize.

Not a bad daily double if you ask me.

My wife's great grandparents are buried in the cemetery at Caslav. Pretty town about 70 mi east of Prague.

Expand full comment

Another family for item 4: de Broglie. The most important one was Louis (1892-1987), who won the Nobel Prize in 1929 for his mathematical demonstration of the wave-particle duality of electrons. He was one of the founders of quantum theory. He was unique in the family as a scientist. His ancestors going back to the middle of the 17th century were soldiers, diplomats, and politicians, including a Marshal of France, a bunch of generals, ambassadors, and statesmen. His grandfather was a prime minister. Sadly, he had no children.

Expand full comment

Even were Justin Trudeau the secret love child of Fidel Castro, he was nevertheless raised the son of Pierre Trudeau, who also served as Prime Minister of Canada, so it would still tell you nothing about the effects of nature vs. nurture.

Expand full comment

Galton, as someone quotes him, would argue that this is nature over nurture: since the nephews of popes didn't all become popes themselves, social advantages (like being raised the son of the Canadian prime minister) would not sufficiently explain Trudeau's eminence.

I don't agree with that, but this is his argument.

Expand full comment

I feel like it is so obviously true that parenting / childhood environment affects people's personalities, behaviors, and life outcomes. I think you should be much more skeptical of these sources that aren't finding a correlation.

Expand full comment

I completely agree. I think anyone who had a non-optimal, perhaps even traumatic, upbringing would also agree.

I suspect those who accept the substance of this "nurture is not all that important" book have little experience with a rough childhood.

The book Scott cites as evidence against got a pretty mixed reception. I’m surprised it carries so much weight in these parts.

Expand full comment

There might be a split between people who find that being told they can't do something is a strong incentive to do it, and people who find that having it assumed they can do something makes it easier to do.

I believe both types exist.

Expand full comment

I think the big thing about that study is that it's about big 5 personality traits, rather than like, general behavior. We have 4 kids and although they're not very old yet, their personalities are *surprisingly* concrete. As they've gotten older, their *behaviors* have changed a ton, to the extent that a random person might not notice, say, the neuroticism, but to us, it's obvious that the neuroticism is just like it's always been, but the kid is better at dealing with it.

Before becoming a parent, I assumed there was a best way to parent, but it turns out there is a best way to parent *each* child, and for our 4 children, there are 4 different best ways to parent due to their different personalities, and absolutely nothing has changed as they've gotten older. They thrive more as they get older and we adapt more completely to their personalities, but it's definitely more about learning to work within the framework of their personalities than trying to change their personalities.

Expand full comment

The good news is that the entire child development field was on your side when that book was published, and subjected it to intense scrutiny. But it turns out that the evidence against a majority-nurture explanation kept getting stronger and stronger, and the field ended up having to radically change its position.

Expand full comment

I'm currently raising my kids full time, and I think some things are easier to influence. "General" personality traits, for each of my kids, were already present at birth as far as I can judge. Maybe one can change them, but I don't think people realize the enormous amount of work (and time, resolve and thought) it takes to influence even one "uncomplex" trait like self-confidence (uncomplex from my point of view but others may disagree).

On the other hand, it is very very easy (comparatively) to influence a kid's interests, schedule, friendships, taste etc. For those things the main work is basically exposing them to something and they run with it. Of course, to expose them to something you almost always have to be an insider (knowing that piano lessons for kids are a thing, for example).

It makes sense for me that studies would find no correlation between parenting and some traits, but that doesn't mean parenting doesn't influence life. It influences specific areas of life.

(Digression: Maybe society is constructed so as to give the most weight to the things that are influenced primarily by parenting/the class one was born into? Things like what your college essay will look like, your special interests, the job you go to seem much more influenced by your environment, almost tailored to make generational poverty/wealth a thing.)

Expand full comment

When I was a child (many decades ago, in the UK) my parents moved (rented) house for the extra room but made sure that the new house was in the catchment area for the best state school in the city. They deliberately changed my environment but found great difficulty in making my handwriting better through practice.

Expand full comment

Parenting can affect interests and even careers. It can decide what anxieties an anxious person has. But it can't affect deeper things like being anxious in the first place.

Expand full comment

That seems so unlikely to be true. I feel like every person I've met who has bad anxiety got it from their messed up family relationships (or some other past trauma).

Expand full comment

I think you're missing the base rate issue. How many people do you know that don't have past trauma? In my experience, it's basically everyone. But not everyone has anxiety.

This is Matthew's point. If you traumatize your kids who will get anxious from trauma, then you get the dubious honor of being the center of their anxieties. If you don't, odds are still good they'll develop anxieties from some other trauma.

Expand full comment

pseudo-edit for clarity: In my experience, basically everyone has experienced trauma.

Expand full comment

Trauma is really common, but anxiety isn't the only response.

Expand full comment

That's what I said? "[Who has trauma?] In my experience, it's basically everyone. But not everyone has anxiety."

Expand full comment

1. How would you control for genetics? If their family is messed up, it doesn't distinguish nurture from nature.

2. How can you be sure you're not subject to confirmation bias?

Expand full comment

Doesn’t it seem likely that there is an acceleration effect? In that parents who have a genetic disposition to anxiety and perhaps limited skills in managing it will not only pass on those genes but amplify the effect of them?

Expand full comment

How would I ..? I have no idea. I'm not describing or trying to describe a scientific study; I'm describing anecdotal observational evidence. Maybe it's confirmation bias, maybe it isn't.

I doubt it, though. The reason it seems plausible is that there is a clear mechanism for it to be true. If you can clearly see a causal effect in the world, if you witness A causing B, then you (at a personal level) regard it as true that A causes B. Then a scientific argument that says, no, C causes B, also really needs to explain why it _looked_ like A caused B, in order to be satisfactory. Otherwise you find it more likely that the scientific argument is flawed, especially if the argument isn't very convincing anyway (which is what I'm doing here).

The same process roughly holds when reality is just suggestive that A causes B, but with less confidence.

For a concrete example: I have a friend who has a 'walking-on-eggshells' behavior socially. For instance she is hypereager to apologize for anything that could be offensive or offputting, and has lots of anxiety around socializing. You can ask her about it, and she'll describe the anxiety and how debilitating it is. It's also clear where it came from: growing up with her hyper-critical mother meant that everything she did as a child was a potential transgression, so she learned a behavior of double- and triple-checking everything to avoid offending. Also, if you meet her mother, the effect, and how it melded with her personality, is totally apparent.

So I have no doubt it's a nurtured behavior. It may be correlated with some genetic predisposition to that kind of behavior or something, I don't know, but when the _cause and effect_ of someone's psychology is so clear, you don't see any reason to doubt it.

Anyway, I have known _lots_ of people like this. Most reasonably self-aware people with anxiety seem to have a good understanding of where it came from. I have no idea if it's true across the general populace, but it seems so clearly true among people that I've met that I'm inclined to believe the pattern holds.

Expand full comment

Matthew's alternate hypothesis is "they would have become anxious anyway, just with different rituals and blaming something else".

Your data doesn't provide any distinction between these hypotheses; both "traumatising kids makes them anxious" and "there are anxious people, who usually orient their anxiety around traumas" predict your observation of a bunch of people who can trace their anxiety to traumas. As such, that observation is no evidence of the former over the latter.

Parsimony suggests adopting the former hypothesis *if* there's no way to get evidence of any kind, but the people you're arguing with are saying that there is evidence that can distinguish between the two and that it favours the latter.

Expand full comment

It's not really confirmation bias, not even reverse causation as such, but I have the feeling that many self-justifications of things considered a problem (over-anxiety, attention disorder, compulsive behavior, addiction) invoke environmental causes (especially family causes) because it's the trendy thing to do: it provides some self-help by having a human responsible to blame (instead of fate or bad genes), may trigger sympathy (or avoid judgement) and, in a victim society, even add some status....

I have little sympathy when I detect even the faintest hint of an attention-grabbing attempt... But even when sincere, I don't think those justifications are often the real causes....

Expand full comment

The obvious alternate explanation is that anxious parents have anxious kids because anxiety is genetic, not because seeing the parents' anxiety causes anxiety in the child. Without adoption studies, it's basically impossible to distinguish between the two scenarios, so unless you personally know a lot of people who were adopted it makes sense to trust the scientific studies.

Expand full comment

It's totally possible to distinguish between the two if you actually _know_ the people. Not in a way that would be convincing to science, necessarily, but it's not like your choices are "scientific-grade evidence" or "complete ignorance". There is a whole world of information out there that science has trouble processing.

Expand full comment

Edit: because apparently substack doesn't have editing?: to be clear, when I say "Anyway, I have known _lots_ of people like this", I mean lots of people whose personalities seem visibly imprinted by their parents behavior (usually in a negative way, I guess because those are the things that come up). Not that I know lots of people like that one person in my example. Although I do know a few.

Expand full comment

I had the same priors, but the evidence points in a totally unintuitive direction. Parents can affect kids on the margins, like give them a headstart or a severe trauma, but personalities, behaviors and life outcomes relative to their peers are almost all nature, not nurture.

Expand full comment
Comment deleted
Expand full comment

How is this bot not banned yet?

Expand full comment

It registers a new account with the same display name and profile picture for each post. So banning it doesn't have any effect.

Presumably it also rotates IPs or something, because if Substack allows the same IP to reregister with the same name and same picture then that's a pretty obvious gap in comment moderation tools.

Expand full comment

You can't ban based on text?

Expand full comment

I don't know.

Expand full comment

I've been wondering about that, though there have been some funny replies.

Expand full comment

All I know is every time I see it I become just that much more tempted to check them out.

Sigh….

Expand full comment

Can anyone do a better job than me of thinking of counterexamples to the "science progresses one funeral at a time" meme? Counterexamples in the sense that an erstwhile consensus view (which was ultimately shown to be the correct one) retrogressed one funeral at a time through the death of its supporters, eventually ceding its consensus status (at least temporarily) to a now demonstrably false view?

Expand full comment

To the extent that meme is true, our increased lifespan acts as a dampener on scientific progress.

Expand full comment

I had a long bit written out about the death of a British Antarctic expedition, caused by true knowledge of an effective cure/preventive regime for scurvy being replaced on the basis of unsound ideas around germ theory, but then I ran across https://idlewords.com/2010/03/scott_and_scurvy.htm which covered 98% of what I was gonna say and is probably better researched, so just read that. It also has more details about this than the three-man Antarctic expedition I was thinking of (which is also more debated).

My overall impression is that the blog post undersells the failure mode that things changed without anybody noticing and nobody did a new clinical trial because they "knew" better, but they really, really didn't. Explicitly: long boat voyages cause scurvy; we found a cure/preventative measure; we applied that preventative measure to all boat voyages; boat voyages stopped being long enough to give you scurvy anyway; due to a new theory we changed the preventive measure, and nobody started getting scurvy again on boat voyages, so "clearly" it still worked. Then when going on Arctic journeys the voyages were long again, but even when it was noticed that scurvy came back it was really, really, really hard to get the new theory discredited and go back to the old measure (to be fair, it may have been actually impossible for those expeditions).

Expand full comment

I would have said most scientific progress fails to fit the meme. The major and radical revolutions in physics in the last century, quantum mechanics and relativity, both gained rapid acceptance as they proved to have superior explanatory power, and no old guard had to die for that to happen as far as I can tell.

Indeed, a good illustrative example is Max Planck, already a successful middle-aged professor of physics in the early 1900s. He was certainly steeped in 19th century classical physics, and struggled mightily to reconcile it with the new physics when it emerged -- which, of course, he himself helped usher in -- but he proved willing to go where the facts led, regardless of his personal inclination and history. Wikipedia quotes Max Born:

"He was, by nature, a conservative mind; he had nothing of the revolutionary and was thoroughly skeptical about speculations. Yet his belief in the compelling force of logical reasoning from facts was so strong that he did not flinch from announcing the most revolutionary idea which ever has shaken physics."

Planck also immediately recognized the importance of special relativity when the young whippersnapper (by Planck's standards) Einstein published it in 1905, and was instrumental in ensuring it saw wide distribution in Germany.

Although, ironically, Planck himself thought that some others of his generation were too stubborn to accept the new physics, and he himself was sympathetic to the meme.

Expand full comment

For a counterexample, you don't need to find a retrogression (which I imagine would be pretty rare if science is at all good at making progress), you just need an instance where science made significant progress despite opponents of the new paradigm still being alive.

Expand full comment

That’s a valid and correct response, and it’s the one I was trying to rule out with the second sentence. Seems “counterexample” was a poor choice of word. What I want is that situation you alluded to where the opposite happens, where for whatever reason the funerals are resulting in actual retrogression. Is this state of affairs merely rare or actually non-existent?

Expand full comment

You should search up how the knowledge about scurvy was lost for a period of time.

https://www.mentalfloss.com/article/24149/how-scurvy-was-cured-then-cure-was-lost

Might count?

Expand full comment

Science isn't all that old and it usually works pretty well, so there are very few examples of scientific retrogression, where widespread more-correct beliefs were replaced by widespread less-correct beliefs. If examples exist then we largely don't know about them because we're still stuck in the less-correct-beliefs stage.

Lysenkoism was one, but it was pretty localised to one particular place so it's a poor example.

I can think of one other example, but it's too spicy for an odd-numbered thread. It's very hard for science to really go backwards of its own accord, so it can only do so under enormous political pressure, so it's not surprising that any examples which aren't from far in the past would be extremely politically charged.

Expand full comment

Lysenkoism wasn't really about old scientists dying off, though, right? It was more about keeping one's mouth shut for the sake of physical safety.

Expand full comment
Comment deleted
Expand full comment

Like Carl and Melvin have noted above and below, the meme sometimes fails with respect to specific elderly-statesmen/women scientists, and it is interesting to reflect on just what makes that meme so virulent if it's not the accuracy with which it captures the social dynamics of scientific production. That said, a large-sample empirical assessment of its accuracy would still be good to see - there's the possibility that Carl is just anecdotally No-True-Scotsmaning the meme into failure by taking either one individual (Planck) or a small community (early 20th century western physicists) as representative of the entire "scientific community". What about all the authors of non-replicating sociology studies who rode them all the way to tenured positions they still remain in?

This is why I posed the question: I was thinking about what an interesting case study it would be in the (perhaps very rare) case of incremental retrogression-by-funeral. As an elderly scientist watching more and more junior peers reject the established consensus of your heyday, what clues might there be that you were situated in a world where the meme failed? Under what circumstances might your hypothetical self feel justified in sticking to your guns? Obviously the main clues are likely to be in "the body of relevant evidence pertaining to the consensus itself", but what if you've reviewed the evidence and your best epistemological assessment still seems to support the old consensus?

Expand full comment

"People" rarely do, but scientists are much more likely to, if faced with compelling evidence.

The other thing is that most scientific progress doesn't actually involve disabusing anyone of a strongly-held misconception. Scientific progress generally doesn't go from "Everyone thinks P" to "Everyone thinks Q". It goes from "Everyone has no idea" to "Everyone thinks Q".

There are definitely exceptions, and the saying about funerals does capture this in a somewhat pithy way. Still, science usually works properly, to the point where it's noticeable when it doesn't.

Expand full comment

It seems to me that the best tactic for doing a lot of super-hard research is to figure out how to enhance your human researchers. If you have 1000 person-years of work to do, then finding a way to make your researchers 2x faster or "better" is worth 500 person-years; and for research that can't be parallelized any further, enhancing the researchers is the *only* way to make it happen faster.

I'm thinking here of biological enhancements: e.g. drugs that make your brain work more like von Neumann's, or like a 25-year-old's if you're 60, or accelerate or replace sleep; computer-chip implants that augment your short-term memory or perform tasks like arithmetic; brain-to-brain and brain-to-computer communication interfaces; growing or grafting more brain cells; and so on.

These enhancements seem extremely valuable, for particular projects and for humanity in general. Yet I don't think I've seen much serious activity on them. SENS and such are about reversing aging more generally, and I haven't heard of them focusing specifically on making their aging researchers more productive; Neuralink exists, and apparently their plan is to "make devices to treat serious brain diseases in the short-term, with the eventual goal of human enhancement", so they *might* be doing the right thing eventually; drug-based cognitive enhancement is nootropics, and my impression is that the field severely lacks rigorous results from large-scale trials, probably due to lack of funding. For replacing sleep, I googled, and found some results about "Orexin-A", which seemed promising; but according to a Vice article, the researcher is merely trying to treat narcolepsy, and thinks "sleep replacement" means temporarily keeping someone awake and alert (as opposed to identifying the critical maintenance tasks sleep accomplishes and finding better ways to do them) and is therefore a bad idea long-term.

I suspect these research areas are underfunded partly because of difficulty in capturing externalities. Making up numbers: If, say, $1 trillion is spent per year on research, then making all researchers 2x better is like getting another $1 trillion, and if it took $1 billion it would be so worth it; but if the research budget you personally control or care about is only $1 billion, then it's not worthwhile to you. So it's mostly useful for those with huge budgets, or those who don't care about capturing the externalities. And my impression is that government agencies and others are a lot more likely to fund "fixing diseases" than "enhancing healthy people".
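A back-of-the-envelope sketch of that externality point, using the same made-up dollar figures as above (nothing here is an actual estimate):

# Externality-capture sketch; all figures are the made-up numbers from the paragraph above.
GLOBAL_RESEARCH_SPEND = 1_000_000_000_000  # ~$1T/year of research, hypothetically
ENHANCEMENT_COST      = 1_000_000_000      # $1B cost to make researchers 2x as productive, hypothetically
MY_BUDGET             = 1_000_000_000      # the slice of research a single funder controls or cares about

# A 2x productivity boost is roughly equivalent to doubling the spend it applies to.
value_to_world  = GLOBAL_RESEARCH_SPEND    # ~$1T/year of extra research output
value_to_funder = MY_BUDGET                # ~$1B/year, the only part the single funder captures

print(f"World:  gain ${value_to_world:,} per year vs cost ${ENHANCEMENT_COST:,}")
print(f"Funder: gain ${value_to_funder:,} per year vs cost ${ENHANCEMENT_COST:,}")

Worldwide the gain dwarfs the cost, but for any single funder it barely breaks even, which is the classic shape of an underprovided public good.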

So I guess I'm proposing that cognitive bioenhancement, perhaps aimed primarily at researchers and engineers, should be high on the list of EA causes to fund.

Expand full comment

Secretaries for scientists, or even just the idea of reducing administrative burden, would be a nice start. Instead most universities dump huge amounts of admin on their researchers.

Expand full comment

At least secretaries already exist. And would be cheap compared to massive research.

Expand full comment

Even just ideas like "stop sending so many bullshit emails" would be nice.

But yeah, treat freeing up time as a real priority - auto-approving things instead of filling out forms (maybe with random audits), and providing, say, home cleaning services as part of the job.

Expand full comment

At my job I ignore most emails. It causes problems but it may be the least bad way forward.

Expand full comment

This sounds like the inspiration for Scott's story here

https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/

Expand full comment

Very cool, just read it! Thank you for sharing, very similar theme. Imagine someone who had to master "architecture", "war", and "alchemy" to make the next great discovery...I think that's what I'm getting at.

Expand full comment

Erdos used low-dose daily amphetamines, and when he gave them up for a month for a challenge, he apparently did little or no mathematics.

Were they the equivalent of ADD medication? A tool for extending mathematical ability for extra years?

Expand full comment

Adderall, which is one of the most common ADHD medications, literally is amphetamine (more specifically, it's a mixture of four different amphetamine salts). So I'd wager that yes, Erdos used amphetamines the same way people with attention disorders use amphetamine now.

Expand full comment

It could have been self-treatment for ADD, but I wonder whether it was also the energy boost.

Erdos might also have been just unusual-- it's not as though other mathematicians had his particular talents or wanted his lifestyle.

Expand full comment

I think this is basically right, if only because an enhanced human could learn more about different things. I do biology research, and I think a huge problem is that a project that takes advantage of modern biology methods requires so much knowledge these days that the expertise necessarily has to be spread out amongst a large team--a doctor who knows clinical medicine, a strong scientist who knows laboratory-based methods, someone with a good grasp of informatics methods, etc. etc. This means that a big part of a project is coordination--helping the other members of your team understand enough about your area of expertise so that you can move forward. And this takes a good bit of time.

Even if you didn't necessarily have MUCH better ideas, an increase in the rate of learning on an individual level (either from a magic "brain pill" that makes you pick up skills faster or some technology that decreases the need for sleep) would let you concentrate more of these skills in single individuals, so that the friction of communication could be cut down dramatically.

All that is to say that even improvements/enhancements that fall short of history's greatest luminaries but let you finish an online R tutorial in 1 week instead of 3 weeks would probably be quite valuable.

Expand full comment

I don't know, that sounds a bit like "if the recipe calls for 20 mins at 150 degrees, we could cut it down to 2 minutes if we turned the oven up to 1,500 degrees" meme.

No, you can't cook it faster that way.

Expand full comment

Ok. Does the analogy extend beyond the surface level? If your researchers were Einsteins instead of average PhDs, or if they only had to sleep 1 hour per night and thus had several more waking hours per day, etc., then do you think they would produce no higher amount or quality of research, on the majority of projects, as a non-enhanced team?

Expand full comment

Sometimes the limits to scientific progress are technological or practical, not intellectual. A famous example is the discovery of Neptune - two researchers predicted its existence and discovered it independently at almost the same time, based on perturbations in Uranus's orbit. The limitation wasn't intelligence, it was just waiting long enough for Uranus to make an entire orbit so they could get enough data.

Expand full comment

The thing is, we don't know how people have good ideas. A lot of the anecdotes around discoveries are "it came to me in a dream" - so should we make sure scientists spend sixteen hours out of twenty-four asleep?

It could be that Unenhanced George spends thirty years concentrating on one problem exclusively and dies without solving it, while Enhanced William solves it - after spending thirty years concentrating on that one problem exclusively.

I don't think you can scale up productivity on super-hard research like efficiency on a production line. I think the basic suggestions others gave are the right ones - do away with most of the time that is not spent on research, so that instead of "in one day I can only spend five hours researching and the rest of the time is taken up with paperwork, meetings and bureaucracy", the researchers can do what they are there to do: research.

It probably would be that "if I were five times smarter, I would get more done", but I don't necessarily believe that "having a brain that worked like von Neumann's" would produce startling results if you mean "churn out ideas that can then be spun off into profitable businesses".

After all, if you can be sure that "problem X requires 1,000 person-years of work to solve, wouldn't it be better if we could get it done in 500 person-years?", the answer there might well be "we have one hundred ordinary scientists working on it, that takes 1,000 person-years, but instead of fifty enhanced super-scientists to cut the time in half, just hire two hundred ordinary scientists to work on it".

Expand full comment

Why get fancy? Just find the most brilliant minds you've got[1], and take away their need to earn a living, put gas in their cars, do the shopping, write grants and papers, kiss administrative ass, et cetera. Most creative people spend about 10% of their waking hours actually being able to think deeply, the rest of the time they're occupied with or at least distracted by the housekeeping stuff they have to do for their lives to keep trundling along.

-----------

[1] Which is, of course, the rub. We have no ways to identify ahead of a brilliant discovery who the brilliant discoverers are going to be.

Expand full comment

That's good to do as well. It's not an either-or; the effects *multiply*, and are hence *complementary*; if augmenting your memory lets you write programs 2x faster, and having assistants gives you 10x more time to write programs, then you can program 20x as fast.

Expand full comment

Yeah but I don't believe in augmenting your memory. I think the human brain is already tuned to its maximum ability naturally -- I can't see a reason why it *wouldn't* be -- and so any currently imaginable perturbation of it will make it function suboptimally. For example, augmenting your memory -- making it hard to forget things -- might turn on a mental experience similar to PTSD or OCD, you'll be continually distracted by ephemera, and your productivity will be reduced.

Expand full comment

My memory degraded very substantially between 22 and 32 for no apparent reason (along with developing some mental health issues), so I'm skeptical of any claim that it's tuned to its maximum ability in any non-tautological sense.

Expand full comment

I meant in general, of course. In any one individual the assertion would be silly -- certainly someone who is drinking himself to death has a very nonoptimally functioning brain, for example.

Should you really be extrapolating from you to the entire species, though? Seems fairly bold as hypotheses go.

Expand full comment

> Should you really be extrapolating from you to the entire species, though? Seems fairly bold as hypotheses go.

That seems entirely unfair. You stated a belief in universal terms (that the human brain is tuned to its maximum ability naturally, specifically in terms of memory) because you can't see a reason it wouldn't be. Given that I am a counter-example, I'm pretty skeptical of that position. I'm not extrapolating in any way to make that point.

Extrapolating from me to the entire species would be if I made a universal claim based on my individual experience (say, that memory in humans degrades a lot in the twenties) and would be absurd of course.

> I meant in general, of course. In any one individual the assertion would be silly

Well, first, I don't think the average-optimality-of-memory is especially important here. Research is done by individuals. If we grant, as you do, that some individuals have non-optimized memory, then we should believe that it's theoretically possible to fix that in those individuals. In your example, it should be theoretically possible for a therapy to reverse alcohol damage on the brain.

Secondly... I also see perfectly good reasons why memory (especially as it applies to coding) might not be an evolutionary priority. Why would it be? It's not like there were enormous, precise abstract data structures and APIs to memorize in our ancestral environment. And if it's not a priority, then if it's optimal it's only by coincidence, because the trait happened to load on something that WAS a priority.

Expand full comment

"If augmenting your memory" is the big honking "if" there. We don't know for sure that augmenting makes things better in that way, and the von Neumann example that gets trotted out ignores that he was first and foremost a mathematician.

Ask him to work on biological or chemical problems, and the answer might well be "No idea". Ask him to plan the perfect economy, or voting system, or one-world government, ditto. If the real-life guy stuck to one field, then expecting copies to be able to apply their talents in all fields requires first that you get the biology von Neumann, the political science von Neumann, the architect von Neumann, etc.

Expand full comment

This is pretty much what Nick Bostrom argued for in his 1.5 page note "Three Ways to Advance Science": https://www.nickbostrom.com/views/science.pdf

Expand full comment

Interesting; it's good to confirm that I'm not the first to think of this. On the other hand, if Bostrom published that in 2008, then why haven't I heard of anyone else advocating "accelerating research via human enhancement"? ... Searching the EA forums, I find a few results and a tag for "cognitive enhancement", which has five citations (including two Bostrom items but not the above) and eight forum posts. That's not nothing, but I would have hoped for more.

Expand full comment

I've been wanting to publish this piece on "Improving Human Intellectual Efficiency" but I've been too busy / cowardly. Admittedly I didn't have ideas like "computer-chip implants that augment your short-term memory or perform tasks like arithmetic", maybe I should add those in, they sound sexy: https://docs.google.com/document/d/1-xTYpv15ToV4JXDoDGX4IMtlrxrEpfOaxL3eaik02jk/edit?usp=sharing

Expand full comment

That's a very interesting essay, and I think you have a point. Even little things, like being able to change habits easily and check on whether the change is an improvement would probably pay off well.

As for the cost of dealing with computer complexity, you're right. One answer would be centralization, but how could it happen?

There's a relatively easy shared language-- Esperanto-- but while it's caught on well for a constructed language, it hasn't been a big solution.

Expand full comment

On the constructed language question, I figured Esperanto didn't catch on because people didn't see its value given that no country speaks it, and certainly they don't see its "propaedeutic value" (https://en.wikipedia.org/wiki/Propaedeutic_value_of_Esperanto).

I therefore developed the seed of an English-derived interlanguage called Ungglish (http://ungglish.loyc.net), which I would sell as a valuable tool for learning English faster but which also, incidentally, happens to create a community of speakers for a new interlanguage. To succeed it would be necessary to have a network of institutions teaching it. Not sure how to achieve that, as I've never successfully built a company or a community. Plus, I think making a language very similar to English can allow machine translation to English that is more reliable than machine translation from any other language (and vice versa).

(Btw, Basic English has been tried and didn't get very far, but I never liked it anyway - it's clumsy as a language, keeping most of the drawbacks of English while being more long-winded than English and having only a single one of the advantages of an interlanguage, namely being easier to learn than normal English. Granted, this is the biggest advantage, but my feeling is that trying to be a subset of English is a straitjacket, and that it's better to just make a language that resembles English with intentional differences that call out "I'm a different language". Also, it would be helpful to be able to slip in a few words from Mandarin or something, to work around certain limitations of European languages, such as concepts that are excessively long-winded or ambiguous.)

Expand full comment

The trouble is that the obvious implementation of this (nootropics) is socially taboo, often illegal, and somewhat judgement-impairing. It's very hard to push for 'human enhancement' when it looks a lot like a cover for 'please give me legal speed'; unless you're VERY careful you will end up destroying your reputation and making no progress toward the goal.

There's a strong argument that if you actually want to enable better research through human enhancement, you're best off never ever mentioning that to anyone. Push for generalized drug decriminalization on compassionate and practical grounds (did you hear that drug usage went *down* after the Netherlands decriminalized?! Tabooing drug usage prevents people from getting the treatment they need! Look at all these promising mental health treatments that have been used by primitive groups for centuries, but we're forbidden to research!) and once you've won the road to human enhancement will be clear and easier than if you'd spent that time labeled as a drug-seeking crank.

Expand full comment

(Also, for the love of dog, if you're publicly pushing for permitting researchers to use drugs for human enhancement do NOT under any circumstances use them yourself. This is a case where skin in the game will engender far more distrust than trust.)

Expand full comment

My healthy 68 year old mother has two doses of the COVID vaccine, but says she "does not need" a booster because she doesn't have any other risk factors and doesn't normally interact with people outside her small social group and family. Is it worth pushing for her to get one when we get together for Thanksgiving?

Expand full comment

The booster shot has a benefit, but also a cost. I hear the reaction is typically worse, and people are knocked out for days.

I would mention that I'd feel better if she did it, but it's her decision, and leave it at that.

Expand full comment

I had no negative reaction to my booster.

Expand full comment

There is data on this at https://www.cdc.gov/mmwr/volumes/70/wr/mm7039e4.htm, so we can base this on more than just anecdotes.

Anecdotally, my own reaction was (far) milder than Dose #2; another I know of was similar to Dose #2, and a third I know of was worse than Dose #2.

Expand full comment

One data point could be to get an antibody test. I know some older folks who tested negative about six months after their second mrna doses, which points in the direction of them needing a booster.

Expand full comment

It probably comes down to the risk levels she's willing to accept.

The chances of double-vaccinated people dying, even if they're older, are low. There are no certainties in life, but it's probably within a reasonable range of micromorts. I can't give you an exact number, but based on the UK death numbers, the covid-related death rate for double-vaccinated people amounts to about 8 micromorts per year, loaded towards older people.

She probably faces something similar per day at her age just as background risk.

So assuming she's not too worried herself, I'd say go with common sense stuff like family avoiding the event if they've got symptoms and testing if they've recently been hanging out with covid positive people but otherwise try to enjoy the holiday.

It's a tradeoff after all, QALY includes "quality" for a reason and social events and spending time with family are important.

Expand full comment

Yes, it’s worth it.

For Thanksgiving, it’s worth it to do antigen tests for everyone, once on Tue, then again on Thu, and gather only after it’s clean twice.

As to your question, the simple argument goes like this: 2 shots are probably (but not definitively) enough to prime our immune system in an endemic environment characterized by (a) low transmission, (b) low virulence and (c) therapies.

We aren’t endemic yet. We have (a) high transmission (local minima don’t matter), (b) high virulence and (c) therapies aren’t useful.

So we can’t afford the 2-3 day lag that our immune system needs to produce antibodies. Instead, we need to prime our bodies periodically to produce NAbs upfront, so that we can react with near-zero lag when faced with infection.

In 3-12 months in US, when we have therapeutics, and possibly lower transmission, the need for “topping up” NAbs with boosters would have dropped. Right now, that’s not the case. We may need yet another booster 6 months hence as well.

Expand full comment

>For Thanksgiving, it’s worth it to do antigen tests for everyone, once on Tue, then again on Thu, and gather only after it’s clean twice.

This seems like an insane level of overreaction for vaccinated people? Fully vaccinated covid seems significantly less deadly than the flu even without boosters.

Expand full comment

Why is this insane? Antigen tests are really easy, and probably cheaper than the meal that will be eaten. It's definitely a more annoying step than asking everyone to wash their hands before eating, but if someone is prepared to drive dozens of miles to go to an event, then taking some antigen tests is only a tiny bit more annoying.

Expand full comment

Less deadly, yes, but rates of long COVID seem to be the same irrespective of whether it's breakthrough COVID or unvaxxed COVID.

Also when older people are involved it’s worth the paranoia.

Expand full comment

Ignorant question about cryptocurrency: are there people working on alternate ways to anchor the value of cryptocurrency other than mining with graphics cards, which has environmental costs and has totally screwed up the market for graphics cards?

If we required bitcoin mining to happen using human power on exercise bicycles instead, would that work? (that's a joke). Are there other options that people are considering that could plausibly work? Are there any good articles out there about this that would make me less ignorant?

Expand full comment

The waste of cryptocurrencies is a feature not a technical limitation that can be overcome. Without waste, you either have trivial 50% attacks or a centralized authority.

Expand full comment

The best explanation of what (well designed) cryptocoins are is that a cryptocoin is a distributed ledger that solves the Byzantine fault problem by picking the partition with the largest proven computational capacity. Unpacking that: "Ledger" means that it is a record that tracks assets among accounts via transactions; it can be used to transfer something of value between two agents that trust it. "Distributed" means that anyone can go download the entire ledger for their review or participate in the decision process for whether new transactions are confirmed. "Solves the Byzantine fault problem" means that it lets agents who participate in the system coordinate around the correct version of the ledger by only exchanging pairwise messages. "...by picking the partition with the largest proven computational capacity" means that if two groups of agents disagree about what the correct version of the ledger is (maybe because someone is doing something nasty, like trying to spend the same coin to two different accounts), then they resolve that conflict by holding a competition to see which subset of agents can do the most computation.

The environmentally ruinous part is the competition. It's not just that the cryptocoin system has to do a lot of computation, it has to do so much computation that no single individual or cabal could do more. Computational power is something that's very easy to verify in a distributed way, which is why it hasn't been replaced with something better.
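A toy sketch of that fork-choice idea, in Python (illustrative only; the hash target, payload format, and "longer chain wins" rule are simplifications I made up, and real systems add transactions, networking, and difficulty adjustment): blocks only count if someone ground through enough hashing to meet the target, and a disagreement is resolved in favor of the chain embodying more of that work.

```python
import hashlib

DIFFICULTY = 4  # required leading hex zeros; higher = more expected work per block

def mine_block(prev_hash: str, payload: str) -> dict:
    """Brute-force a nonce until the block hash meets the difficulty target."""
    nonce = 0
    while True:
        header = f"{prev_hash}|{payload}|{nonce}".encode()
        digest = hashlib.sha256(header).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return {"prev": prev_hash, "payload": payload, "nonce": nonce, "hash": digest}
        nonce += 1

def chain_is_valid(chain: list) -> bool:
    """Check that every block links to the previous one and meets the target."""
    for i, block in enumerate(chain):
        header = f"{block['prev']}|{block['payload']}|{block['nonce']}".encode()
        if hashlib.sha256(header).hexdigest() != block["hash"]:
            return False
        if not block["hash"].startswith("0" * DIFFICULTY):
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False
    return True

def resolve_fork(chain_a: list, chain_b: list) -> list:
    """Toy fork-choice rule: with fixed difficulty, more blocks means more proven work."""
    return chain_a if len(chain_a) >= len(chain_b) else chain_b

# Usage: build and verify a tiny two-block chain.
genesis = mine_block("0" * 64, "genesis")
block1 = mine_block(genesis["hash"], "alice pays bob 1 coin")
print(chain_is_valid([genesis, block1]))  # True
```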

Expand full comment

There’s also proof of transfer which is used by Stacks. Pretty interesting idea. You prove you’ve transferred an asset - Bitcoin in their case - in order to enter the lottery to propose a block. Or something to that effect

Expand full comment

Yes, there is something called "proof of stake" (as opposed to proof of work aka mining). There are several blockchains using it and Ethereum plans to switch to it in Q2 2022. It is ~95% less resource-intensive.
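A toy sketch of the core idea (the validator names, stake amounts, and seeding below are made up; real protocols add slashing, committees, and verifiable randomness): the right to propose the next block is assigned with probability proportional to staked coins rather than to hash power, which is why almost no energy is burned.

```python
import random

# Hypothetical validators and the amount of coin each has staked.
stakes = {"alice": 100.0, "bob": 50.0, "carol": 10.0}

def pick_proposer(stakes: dict, seed: int) -> str:
    """Select the next block proposer with probability proportional to stake."""
    rng = random.Random(seed)  # in a real chain the seed comes from a shared randomness beacon
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

# Over many rounds, alice should win roughly 100/160 of the time.
wins = {v: 0 for v in stakes}
for round_number in range(10_000):
    wins[pick_proposer(stakes, seed=round_number)] += 1
print(wins)
```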

Expand full comment

And also therefore ~95% less valuable, by marginal cost = marginal return.

Expand full comment
author

You can't do exercise bicycles because there's no decentralized, not-requiring-an-auditor way for the blockchain to "know" how many humans you have on exercise bicycles.

But there are other things called Proof-Of-Stake (how much crypto you have already, which is easy for the blockchain to "know") and Proof-Of-Space (using how much hard drive space you have instead of how many graphics cards you have)

https://en.wikipedia.org/wiki/Consensus_(computer_science)#Permissionless_consensus_protocols might be a good starting point article.

I tried to sketch out a human-labor-requiring cryptocurrency as the second-to-last point at https://astralcodexten.substack.com/p/list-of-fictional-cryptocurrencies , but no idea if it would really work.

Expand full comment

Proof of space solves the problem of screwing up the graphics card market, but it instead screws up the market for hard drives.

Expand full comment

Could you hook up something with gyms? All those rows of stationary bikes, if you turn them into generating energy, plus gyms could be licenced for "we definitely have X number of bikes and we definitely have X number of humans on those bikes for every session"?

Expand full comment

The challenge isn't the basic technical side, it's trying to prevent people from cheating.

Expand full comment

If you manage to find a number of trusted authorities, you can forget the crypto-*** and its inefficiencies, and go back to traditional banking, which is basically an oligopoly of trusted authorities.

Expand full comment

Okay that was a pretty brilliant idea to use captchas to ensure human labor, though BuffyCoin is my favorite. That was a great post and I'd missed it!

I gather people who do data scraping know how to program around captchas. Maybe there's a way for people to log exercise points through some clearinghouse of humans who are paid to watch those humans exercise in order to provide live human proof-of-stake. Isn't that kind of what the SEC does anyway?

Expand full comment
author

You could definitely do that; it just wouldn't be decentralized, which is the only advantage of blockchain over anything else. It would just be a company where, if you pay them money, they will cause people to be on exercise bikes.

I don't know why people would give money to that company. Making it crypto doesn't make it *better* per se, but people give money to weird crypto for inexplicable reasons, and I think just having it as a company wouldn't let you use the "people give me lots of money for no reason" loophole.

For a potential example of decentralized auditing, see https://www.gwern.net/CO2-Coin

Expand full comment

I submitted an application for an ACX grant but I didn't get an email confirmation about my responses like I normally do when I submit a Google Form. Is this expected? I'm somewhat concerned now that I might not have entered in my email correctly or something.

Expand full comment

I didn't get one either.

Expand full comment

Some thoughts re: brain-poisoning yourself by following people on Twitter:

I heard about Glenn Greenwald on here a while ago, had some 50/50 thoughts on him ("Isn't he that reflexively anti-establishment guy who lost his job for being edgy? He's probably worth looking into at least").

I looked at his history, read some of his new pieces, and felt like I had found another good source for moderating my feel for current events.

Then I followed him on Twitter. I now think that he is hopelessly biased, and basically useless for calibrating myself vis-à-vis What Is Happening.

Am I correct? Did I gain any actual useful information here, or did I just trick myself into thinking poorly of someone I've never met?

Basically, can you extract useful information about a public figure by seeing their minor, social-media-style interactions? (Discounting them being obviously malicious, of course.)

Expand full comment

FWIW I had a similar experience: liked him at first, then noticed he's super biased on Twitter.

Expand full comment

He does have one virtue: he's not a boring standard left-wing or right-wing guy. He has his own special Greenwald biases.

Expand full comment

My view on folks like Glenn is that he often gives me a chance to learn something I did not already know.

Yes, he is very likely biased, and if you were to select only one person as your source for world events, he would definitely not be the one to pick. But as part of an ensemble of sources he is very good, since he provides information and a perspective left out of many other sources - information that is sometimes true and that you might otherwise miss.

Expand full comment

Does it give you more information about a work colleague who's polite and helpful at the office to find out that he's a total arsehole a few pints into Friday night drinks? Twitter reliably shows one the worst side of people, but it doesn't mean that their long-form, considered opinions cease to exist. It is perfectly possible for someone to be an awful hot-take machine but able to filter only the true and interesting ones for longform publication.

What I would suggest as useful is skimming a bunch of articles selected at random from a blog, as opposed to only reading ones that you are linked to or ones on a specific topic of interest. If they sometimes write about a topic on which you are personally very knowledgeable, that's an even better way to check how well calibrated they are

Expand full comment

Glenn Greenwald is an opportunist who falls into the category of opportunists that make their money being contrarian. A good rule of thumb is that if they tweet about 50 times a day like Glenn does, then Twitter is their job, and that means their job is to farm outrage and engagement. Avoid these people.

Expand full comment

What is the nature of your factual dispute with what Greenwald says?

Expand full comment

We're not supposed to be talking politics in this open thread, so you likely won't get an answer, because most of Greenwald's obvious mistakes are related to political theory.

Expand full comment

uh, do you want to offer any factual or propositional content about what you think he is right or wrong about so we can have a meaningful understanding, or no

Expand full comment

I did my best to keep this content focused on personal and not political drama and to not talk about right or wrong. In keeping with the thread rules. I think I did pretty good answering your various comments from that perspective.

There isn't a ton of strong evidence based content people can give you related to Glenn's tweets. Glenn, and Zaid for me, are good to check up on and sometimes have a cool link but 90% of the time they are being contrary without linking to anything. Or I read a news link from one of them and it is much more "interpretable" than their accompanying tweet implies. I will read their stuff if I am wondering whether a "trending" narrative is the whole truth but just as I can't provide detailed evidentiary criticism to you I can't defend them to most people who really dislike them.

Also Glenn is super petty which is a separate issue. Zaid for instance is very much Glenn-esque twitter wise but nowhere near as petty.

A lot of people do just hate Glenn for the Biden stuff or for going on Fox News which is silly. But he and other similar twitter users content wise are disliked for the same reason strong contrarians are always disliked even in cases where they are right a lot. They'll post 95% negative stuff no matter who is in power and ignore good stuff.

I really empathized with Glenn during the late-January-to-late-March phase of the Dem primary, because Bernie and Liz, or perhaps their top staffers, had clearly, massively dropped the ball on good campaign decisions. Certain people would constantly tell me to shut up in DM groups when I was criticizing those people; their reasoning was that I was really depressing people, and that even if I was right I could at least keep quiet and let people have a couple months of hope until the end of June. I think that is how a ton of people see Glenn: generally 10-20% too negative for reality, and maybe 50% too negative in the sense that he makes people feel depressed and bitter. If you feel like you can't do anything about reality, then at least you can put a positive spin on things - but Glenn, or Zaid, never do.

Comparatively Tracey has 30% again the content value of Glenn and Jimmy Dore is all trash and grifting. Purely negative.

Expand full comment

No, because it's political. All I can say is that he portrays himself as anti-MSM, anti-establishment and anti-consensus; but he is incredibly obviously pro-MSM, pro-establishment and pro-consensus if it aligns with his financial interests.

Eg, I trust Scott to come by his opinions and express them honestly, even though we disagree on half of everything; I cannot rationally extend that charity to Greenwald

Expand full comment

Funny, but I had the exact same experience as you did with Glenn. I gave up on him after I discovered that he can be enormously petty. I stopped following him on Twitter and on the Intercept (before the Intercept got fed up with him). Sometimes I think he's actually a right-wing media plant to sow dissension in the progressive ranks.

Expand full comment

Glenn has a lot of issues. Twitter brings those out, and a lot of blue checks like to fight with him. It probably isn't the best vehicle for getting useful information if you don't care about inside-baseball stuff.

Expand full comment

I mean it's more data, right? There's a wide range of what each person considers an acceptable way to comport oneself on social media, and for you GG fell outside of it (he does for me too) and this causes you to question his general judgement. I think that's totally reasonable.

Expand full comment

I think part of the reason for push back is that the subsection of people who believe in nurture and privilege believe it passionately. My reaction to the great families was "Yep, sure". So you get an obvious bias in the comments.

Expand full comment

Working in genetics is getting weird. It makes me wonder if it's sort of like how it must have felt for botanists in Russia when Lysenkoism was embraced and believing in genetics was seen as implying you must support evil capitalism.

There's this whole thing where people seem hostile to the whole concept of genetics.

People seem to have started extending it beyond humans. The concept of animal husbandry has become "problematic," and I've seen unironic statements that believing some breeds of dog are smarter or have different temperaments is a sign of evil eugenicist beliefs.

Even previously uncontroversial statements like "incest with close relatives more often leads to genetic problems" seem to be becoming unacceptable sentiment because some cultures practice regular cousin-marriage... which is a great resource in genetics but only because it leads to lots of horrifying but fascinating case studies.

Expand full comment

"Incest with close relatives" =/= "cousin marriage". First cousins are third-degree relations, and second cousins fifth-degree. The rate of genetic disorders is detectable but not large for first cousins (we've done greater harm to our children by raising maternal age than we would by making half of our marriages cousins), and while long-term inbreeding can worsen the problem that strengthening effect requires *systematic* inbreeding. Also, it's worth noting that unlike maternal age and mutagens, inbreeding does *not* damage the actual long-term gene pool (rather, it assorts the existent alleles in a suboptimal way).

The US hysteria over cousins* really is an over-reaction and it really can be traced directly to the excesses of the nineteenth-century eugenics movement. Closer relationships than cousins have much larger genetic issues and also bring in the spectre of potential abuse of power, but cousins really aren't that big a deal.

*I've seen the saying "if you know how you're related to each other, it's too close" bandied about, and when I mentioned a cousin relationship in my (Australian) family on the web an American immediately started speculating that we had rape dungeons.

Expand full comment

>we've done greater harm to our children by raising maternal age than we would by making half of our marriages cousins

Interesting proposition, but are there any numbers to back it up?

Some kind of quantification of genetic degradation caused by pushing the maternal (&paternal) age from 18-ish to 28-ish, compared to the risks of getting two copies of defective genes within first-cousin marriages?

I've spent some years in a country in the Gulf where first-cousin marriages were fairly common, and where the maternal (&paternal) age wasn't as raised as in the West. They had an outstanding genetic problem, namely sickle cell disease, but that problem is regional rather than "first-cousin-related". And aside from sickle cell, they didn't seem to have (or I didn't seem to notice?) any other conspicuous genetics-related problems.

Expand full comment

I was following a line from Wikipedia, but the sources for that line are vague; it was comparing 30-year-old births to 40-year-old births, and the 20 vs. 30 comparison seems to be not quite as much of an issue. I'm now not 100% sure which comes out in front (and there's some degree of apples/oranges), though they'd not be of spectacularly different magnitude.

Expand full comment

Hm but the West is generally postponing parental age from 20 to 30; the 40 is quite rare. If not much genetic degradation happens by 30, then first-cousin seems all the more likely to be riskier.

I didn't even realize that first-cousin marriage was a thing before I'd come to live in the middle east. And I thought the risks for the progeny were enormous - a thought which I may have imbibed from a local culturally-transmitted heritage that appears to be far, far older than any DNA-related discoveries. I was thus quite surprised to find that first-cousin marriage is common in some muslim countries - and was even more surprised that it didn't appear to have the conspicuous consequences that I'd anticipated it should have.

Some commenters say that the practice actually *does* have "conspicuous consequences" for the Pakistani population in UK (apparently more dire ones than it does for the Pakistani population in Pakistan, if I understood well? (Also, does the Pakistani population in UK perhaps "additionally" have a higher parental age than in Pakistan?)) At any rate, I'm not quite convinced that "first-cousin progeny at 20" is less risky than "unrelated-partners progeny at 30", but that said, I did have an apparently-overblown impression of the risks of first-cousin progeny before I came to the middle east.

Expand full comment

Perhaps I wasn't clear enough; I was partially retracting this:

>>>>(we've done greater harm to our children by raising maternal age than we would by making half of our marriages cousins)

on the basis of this:

>>but the sources for that line are vague, it was comparing 30-year-old births to 40-year-old births and the 20 vs. 30 seems to be not quite as much of an issue.

That aside, the parental age thing is more of a problem for women than for men because of the way oogenesis works; immature egg cells are stuck in the middle of meiosis for all those years.

I'd need to look up the exact spread of maternal ages in the West and complex conditional risk tables, plus add together a bunch of different risks, in order to get a full answer on this one, which is beyond what I'm willing to do and probably beyond my research skills. If we raised the maternal age from 30 to 40, that would definitely be worse than 50% of marriages being first cousins AIUI, but there's enough fuzziness on both sides that I'm not sure whether raising it from 20 to 30 is worse or not.

Expand full comment

The problem is that when it's cultural it is systematic.

You end up with communities with generation after generation of cousin marriages and a lot of excess consanguinity.

A family with 1 cousin marriage is no big deal. A family where cousin marriages are normal starts to look about as healthy as the royal families of Europe.

But people pull a bait and switch where they pull out the numbers for a single cousin pairing and then try to paint it like geneticists being concerned about the kids born screwed up is just people being fuddy-duddy and outdated.

Repeat it generation after generation and it ends up responsible for a huge fraction of all infant mortality.

I couldn't care less about the general concept of the gene pool. I fully expect we'll be able to trivially patch things like that with some variation on gene therapy in a century or so but right now it generates childhood death and suffering.

https://www.independent.co.uk/news/uk/home-news/london-borough-child-deaths-redbridge-parents-related-cousins-pakistani-families-council-report-a7741146.html

Expand full comment

When I say "systematic", I mean >75%. The strengthening effect relies on there being a real lack of new blood. The Habsburgs had no new blood for something like 200 years and the coefficient of relatedness only doubled (also, they started doing uncle-niece which is a lot dodgier).

I'm with Vicky Hobart in that article regarding low base rates.

Expand full comment

That's an interesting and important distinction.

Expand full comment

The issue with Pakistani kids having so many birth defects and other issues isn't that they accept cousin marriage. Plenty of places accept cousin marriage, though usually with caveats (i.e. your mother's sister's kids count as siblings, as do your father's brother's kids, in order to reduce the odds of accidentally shagging a secret sibling), and they don't have those same issues. The issue with Pakistani kids is that they have a preference that encourages cousin marriages, because they almost never marry out of their ethnic group. In Pakistan, this is not an issue because they have many options who are not familially related but are within their tribe/clan/ethnic group. In the UK, this is not the case, since there aren't nearly as many people within the tribe there, so often the only options that don't violate the taboo against marrying out are familially related.

Cousin marriage being normal is fine, as long as there's not a selective pressure that encourages cousin marriages over other marriages. That's the issue with UK Pakistanis.

Expand full comment

Sure, as I said, the occasional cousin marriage is no big deal. But when there's something cultural pushing it to happen generation after generation it gets bad. Also the same thing happens in small villages.

It definitely still is an issue in some places, though. Small villages where a lot of people are married to close relatives, and where there's no tradition of young people travelling away from home, become a gold mine for genetics research because they end up with the same kind of excessive consanguinity and horrible health problems.

Ditto for the aristocracy and Irish Travellers. Generally, being an insular group that discourages marriages to outsiders tends to be bad for the infant mortality rate.

Expand full comment

So there is a decent amount of discourse happening on YouTube nowadays but I was surprised to see that there seems to be very little flow of ideas between communities like this and the discussions happening there. I decided to try my hand at the video essay format and the first one was a discussion of Moloch. Here is a link:

https://www.youtube.com/watch?v=K8kZ1ywX3Ag

I also have a few other videos on topics like biases and Cancel Culture. Please let me know what you think.

(My apologies if this kind of post isn't allowed.)

Expand full comment

There are two/three obvious reasons people from here might not watch YouTube a lot (or at least, these are the reasons I don't).

1) Reading Scott selects for fast and avid readers, who may find the pace of explanatory videos too slow and their depth too shallow.

2) YouTube is part of the mainstream social media ecosystem, which means:

2a) it's Out to Get You (https://thezvi.wordpress.com/2017/09/23/out-to-get-you/) and some of us try to minimise exposure to such environments (I *particularly* try to avoid contact with YouTube's "search" and "suggested videos" algorithms, sticking to direct recommendations or browsing a specific person's videos),

2b) YouTube does deplatforming, which is typically frowned on by this community (Scott's decried it a number of times), and some of us don't want to give deplatformers money by watching their ads (at least, not on a large scale).

(Amusingly, you touch on 2a and 2b in some of your videos.)

>I also have a few other videos on topics like biases and Cancel Culture. Please let me know what you think.

Videos about political polarisation and largely-politically-motivated shunning are a little bit hard to critique in a thread marked "no-politics", I don't currently (AIUI) have the ability to comment on YouTube, and you don't list a contact email there or here for me to give the critique in private.

Overall, though, they're kinda positive but miss a few things.

Expand full comment

Did anyone else laugh out loud a lot when they read “Bobby Fischer”?

Expand full comment

In regards to #4, there is a documentary called "Three Identical Strangers" on Amazon Prime that is about triplets who were separated at birth and placed into families in three different socioeconomic classes: Lower Class, Middle Class and Upper-Middle Class.

Furthermore, I was adopted myself and had a complex upbringing. I recently met my biological mother, grandmother, three half-brothers, and uncle. I spent six months with them in the last year. I've learned that I share similar aptitudes and shortcomings with my biological relatives.

I am also inclined to believe that the socioeconomic status of any given family could likely be caused by certain habits, traditions, and values which that family upholds. If earned independently, a higher economic status could be a reflection of that family placing a higher value on education and financial literacy. In short, I believe that the values the Great Families uphold are more likely to cultivate an environment where exemplary achievement can be attained.

I am new to Substack but I've been a writer for years. I'll be posting blogs that go into further detail about my experiences with these matters soon.

Expand full comment

That story sounds pretty interesting! I wouldn't mind hearing more.

Expand full comment

What constrains you the most? If there were one “variable” you could change about yourself to improve your life overall, what would that variable be, and why?

Expand full comment

Ability to stay on task (when the task is important but not fun) instead of browsing ACX.

Expand full comment

mortality.

Expand full comment

I would love to hear more about this. Any advice or thoughts you have for younger you? If there's a time-portal that allows you to communicate a message to yourself when you're, say, in your mid 30's, what would you say about this topic?

Expand full comment

Difficulty with doing useful things. Especially with useful things.

Expand full comment

I meant to say "Difficulty with doing things. Especially useful things.".

Expand full comment

Stamina, I guess

Expand full comment

Being able to remember names I haven't encountered in the last week or so. That would be an instant +5 charisma, -2 anxiety.

Expand full comment

Putting aside money constraining my life choices (even relatively small amounts like $20k would help), probably social anxiety or executive function problems.

Executive function prevented me from doing well in college to some degree. Sure I didn't have a hard goal when I was there but if I could just manage myself better I could do what many people do and grind at something even if it wasn't my favorite thing.

Being anxious about social interactions probably has a general strong negative effect on me as well. As a kid when walking to the house of my best friend I would have strong intrusive thoughts of this being the wrong day or me imagining the invite or other things of that nature. Makes it hard to be socially pro-active in a lot of ways. Also an issue for in-person, but not digital, networking.

Expand full comment
Comment deleted
Expand full comment

I would say this is very close to my feeling. Somewhat classic akrasia with some other stuff mixed in.

Expand full comment

With regard to _The Nurture Assumption_, the author mentions the rare case where the family is the peer group. That was my situation, and I think hers, and it is particularly likely if your family members are almost the only people you know who are as smart as you are, which would fit your family's story. In that situation the family environment can have a large effect on you, as in the case she describes in the book.

Expand full comment

This is perhaps one of the greatest powers of the internet. Connecting people who would have been starved of certain valuable mental nutrition otherwise. Well and hobby connections.

Expand full comment
founding

I'm highly skeptical of the validity/accuracy of surveys that apply numeric values to qualitative things like personality traits and to subjective views of one's own habits. Common sense and experience both suggest that parental behavior affects a child's life as an adult.

Expand full comment

The research on the Big Five personality traits is long and solid. There's been tons of validity and reliability testing done on the assessment instruments in many different languages and countries.

Needless to say, a lot of psychological research is about subjective views of one's experience. Including most of the mood disorder assessment tools (also used for pharma research), which have also been widely tested for validity and reliability.

One can certainly decide to dismiss almost all of psychological research (given how bad so much of it is), but it seems like that's what this would amount to.

Expand full comment
founding

I think you can assess someone's personality approximately, particularly when they are older. The accuracy of parents self-reporting about how they parent? Not so much.

Expand full comment

> I think you can assess someone's personality approximately, particularly when they are older

Now that personality tests have been around for a long time, isn't there an effect where many people understand what is being measured, and start to adapt their answers to the result they want to get rather than answering honestly? I don't think I've ever taken a Big Five personality test, but if I did then I sure as heck don't want to be identified as "low openness" or "high neuroticism" so I'll probably be keeping that in mind while I think about my answers.

Expand full comment

I'm a bit baffled at your motivations when taking a personality test. If you actually are more neurotic than average, would you not want to believe that? If you actually are less open than average, would you want to think you were not?

Expand full comment

Social desirability bias. And I suspect most people don't do personality tests for fun. The first one I remember doing was in the context of a career counseling class in school. Lately, most of the time anyone has tried to make formal inferences about my personality, it has been for a job interview. Who wants to give answers when they know giving a wrong answer is undesirable?

Expand full comment

I've been too lazy to go look at the research showing that parenting doesn't matter, so have been quieter in this conversation because I haven't looked at the evidence their conclusions rest on. I'm assuming it rests on twin studies and not on parents' self-reports of how they parented, but I really don't know. Do you know?

Maybe someone here could recommend -- if I were going to read one summary of research on why parenting doesn't matter, what would it be? Ideally someone closer to the actual research than Pinker?

Expand full comment

How would you go about demonstrating that to someone with different intuitions though? If you want to make a testable scientific statement, at some point you're going to have to define categories or assign numbers.

Expand full comment
founding

I would say that some things are subject to quantitative analysis and others are not. Or perhaps not yet.

Expand full comment
author

What happened to the n at the end of your name?

Expand full comment

Why are you asking me? It's your substack that lost it.

Expand full comment

It's gotta be around here somewhere. Has everyone checked their pockets?

Expand full comment

I checked; there are more than a thousand n's on this page. How are we going to figure out which one belongs to David?

Expand full comment

I think the one we are looking for is a little thicker and bolder than the others. Also, has a bent towards anarcho-capitalism.

Expand full comment

I was going to suggest that it could be an oblique reference - now that you have a blog whose title is an anagram of your name *including* the 'n', unlike the old blog, someone else had to lose their 'n' to restore balance to the force.

But I had to check that the anagrammery was indeed as I remembered, and I have just discovered that all this time, you could have been the author Slate Codex Rant.

Expand full comment

Perhaps it's the beginning of the 'n' times.

Expand full comment

Perhaps it strikes many as a small point, given the primary focus of the debate between genetics and parenting styles, but Judith Rich Harris is very clear on the impact of peers - and how parents CAN matter if their focus is constructing a particular peer environment:

"Even though parents may not have much influence as individuals, they can have a great deal of power if they get together. Hebrew used to be a dead language—a language used only for ceremonial purposes. A bunch of grownups got together and decided to make Hebrew the language of their new country, and they taught their kids to speak Hebrew. The kids found that their peers spoke Hebrew too, and Hebrew became their "native language," even though it wasn't the native language of their parents. It worked because the parents who decided to do it lived in one place and their children played together and went to school together. It wouldn't work if only one family in a neighborhood decided to do it. So parents who want to have an influence on their kids should get together with other like-minded parents and send their kids to the same school. That's the way the Amish do it, and the Hasidic Jews. In fact, it's what middle-class parents do when they move to "nice" neighborhoods so their kids can go to "nice" schools."

https://www.edge.org/conversation/judith_rich_harris-judith-rich-harris-1938-2018

This mechanism through which parents can have an impact is also the mechanism through which cultural differences may have a more significant impact - children raised in significantly different peer communities may have outcomes different from those of children who have not been raised in such communities.

In a different context Robert Plomin also acknowledges the impact of peer communities,

"If you've ever seen these really mathematically-gifted kids like one of my students has this foundation in Russia. For some reason, they seem to have a lot of mathematically-gifted kids. And if you see these kids early in life, they just live math. They joke about math. They talk math basically with their friends. And they have friends who are interested in math as well.And I think that’s the way genes work to influence complex traits like mathematical ability. It's not hardwired in the brain. I think that's why I said before, it's as much appetites as it is aptitudes. And if you have this genetic propensity, then you select and modify and create environments that are correlated with your propensities. And that's where I think parents can make a difference as they recognize what their kids are good at and like to do, they can help them do that. And that's an environmental influence but we call it a gene-environment correlation. It's not like the environment overriding genetic propensities. Instead, it's going with the genetic flow if you see what I mean."

https://richardhanania.substack.com/p/all-in-our-genes

An obvious inference is that with respect to environmental influences, we should be focused a LOT more on the norms of peer populations (and less on parenting).

John Ogbu was the Nigerian anthropologist who mainstreamed the notion that, among some African-American teen peer groups, engaging in the behaviors needed to do well in school was seen as "Acting White." Roland Fryer estimates that 1/3 of the black/white achievement gap at the high end may be due to the impact of norms against "Acting White":

https://www.brown.edu/Departments/Economics/Faculty/Glenn_Loury/louryhomepage/teaching/Ec%20237/Fryer%20and%20Torelli%20(2004).pdf

The notion that outcomes are either genetic OR idiosyncratic (an increasingly common dichotomy among advocates of behavioral genetics) eliminates a focus on the role of peer influence, subcultures, and cultures in outcomes. Researching, discussing, and addressing these issues more fully and openly is the next frontier. At present those who discuss differences in outcomes associated with particular cultural norms are often attacked as aggressively as advocates of behavioral genetics used to be (see Amy Wax).

Expand full comment

Judith Harris also recognizes the uncommon pattern where the family is the peer group. Her claim is not that environment doesn't matter but that the environment that matters is the peer group.

Expand full comment

I'm always surprised at how rarely people mention this in descriptions of her work, "the environment that matters is the peer group." That was the message that came screaming out at me when I read it. But many people seem to get stuck on "genetics matter" and "parenting doesn't matter" as key messages.

I'm both not entirely surprised and yet also delighted that she recognizes "the uncommon pattern where the family is the peer group." Can you point me to a specific passage? Is this from The Nurture Assumption or some other source? I'd very much like to cite it in some work that I'm doing.

Expand full comment

The passage involved a black man, I think blue collar or something similar, with four daughters, who decided they would all become doctors. He ended up with one doctor and three other professionals.

Expand full comment

Thanks, will search for it.

Expand full comment

I was hoping that "the other Scott" would continue his series on the continuum hypothesis but it looks like he lost interest, so I post my (naive) question here, since there probably is a large overlap between the two groups of readers.

Here it is:

I can sort of follow the argument that you can build a model where ZFC+CH are valid and one where ZFC+NOT(CH) are valid.

But I still don't know what it all means for THE reals - you know, the equivalence classes of Cauchy sequences of rational numbers that we learn about in college.

It seems to me that that ought to have a definite answer, or am I missing the point?
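For reference, by CH I just mean the usual statement:

$$\mathrm{CH}:\qquad 2^{\aleph_0} = \aleph_1 \quad\text{(equivalently: there is no set } S \text{ with } \aleph_0 < |S| < |\mathbb{R}|\text{)}.$$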

Expand full comment

One difficulty is that the Cantorian cardinality of a set is not an intrinsic property of that set - it is rather a feature determined by whether or not there are bijections of that set with various subsets of itself. Since we think of the size of a set as an intrinsic property of the set, some philosophers of math argue that we should think of Cantorian cardinality as a different concept than size. Since, for every model of ZF, and every set in that model, you can create a larger model of ZF in which that set is countable, some philosophers of mathematics argue that "actually" all infinite sets are the same size, and Cantorian cardinality is just showing us the limitations of any particular model. There was a conference about this a few weeks ago, and it's possible some of the papers or videos of the talks are posted:

https://www.hf.uio.no/ifikk/english/research/projects/infinity-and-intentionality-towards-a-new-synthesi/events/conferences/countabilism.html

Others have already mentioned that there's also ambiguity about what are "the" reals. There are only countably many equivalence classes of Cauchy sequences of rational numbers that we can talk about in our language (among them, all the rationals, all the algebraic numbers, pi, e, all the computable reals, all the definable reals, etc.) Cohen falsifies CH by creating models of ZF in which there are additional equivalence classes of Cauchy sequences of rational numbers (or rather, additional subsets of the natural numbers - there's a natural one-to-one correspondence between these things) but with no way of us referring to them.
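To spell out the correspondence just mentioned (a standard identification; the double-counting of dyadic rationals involves only countably many points, so it doesn't affect the cardinality):

$$|\mathbb{R}| \;=\; |\{0,1\}^{\mathbb{N}}| \;=\; |\mathcal{P}(\mathbb{N})| \;=\; 2^{\aleph_0}$$

(identify a subset of the naturals with its characteristic function, and a characteristic function with a binary expansion).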

Expand full comment

I think that countabilism is based on the same misunderstanding of forcing that I alluded to in my previous reply, which is cleared up by the naturalist account of forcing: forcing doesn't just add new generic sets, it first mangles the universe in order to "make room" to add new generic sets. So if you have an uncountable set S, and you use forcing to collapse it to a countable set, what happens is you first pass to a larger, elementary extension of your original universe, which has an object which is still named S, but which isn't S (it will have extra elements inserted, basically superpositions of its original elements). Additionally, this larger universe will have an object which is named "the natural numbers", but which actually isn't the natural numbers (since it will contain logical superpositions of actual natural numbers, the same way that occurs in ultrapower constructions). Then it will add a generic bijection between the new thing named S and the new thing named the "natural numbers". At no point is an actual bijection between the original S and the actual natural numbers inserted, unless you start with an impoverished model to begin with.

Expand full comment

Despite rejecting the forcing-based justification of countabilism, I found the paper "In Defense of Countabilism" (https://philarchive.org/rec/BUIIDO-2) to be a pretty fascinating read. The intuitive view of the world they describe certainly seems to be self-consistent, at least. It reminds me somewhat of the possibility that the continuum is real-valued measurable, where the reals are too large to express directly in the aleph notation (even though strictly speaking these are contradictory points of view).

I think there is a physics-based objection that they missed, however. While it seems intuitively plausible to me that we can model spacetime without the full collection of reals truly existing, it is hard to see how to model the amplitudes that occur in quantum mechanics without building the reals. These amplitudes form a continuum of distinct values even when you are studying a finite quantum system. The same continuum seems to arise from probability theory alone, but quantum mechanics forces the issue in a more substantial way than classical probability.

Expand full comment

One problem with talking about THE reals, is that first-order logic can't even pin down THE integers. There are concrete statements about integers, like "there is some n such that if you run this Turing machine for n steps, then it will halt", which are true in some models of ZFC, and false in others. Think about the insanity of that for a minute: somehow there is some alternate universe, not just an alternate universe of integers but of all of the rest of set theory (including an alternate universe of reals to go along with it) where this Turing machine halts after a finite number of steps.

The standard alternate universe where ZFC+CH holds is Godel's L, the "constructable" universe. The nice thing about this alternate universe is that the "true" universe (called V) and L at least agree about who the integers are, so they agree about things like whether Turing machines halt. The problem is that L seems to be "obviously impoverished" - intuitively, we don't expect every possible real number to be constructable, and there are technical reasons to believe that specific real numbers (the "sharps") should exist and are missing from L.
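For reference, the standard facts being used here (stated without proof):

$$\mathrm{ZF} \vdash (V = L) \rightarrow (\mathrm{AC} \wedge \mathrm{GCH}), \qquad\text{hence}\qquad \mathrm{Con}(\mathrm{ZF}) \Rightarrow \mathrm{Con}(\mathrm{ZFC} + \mathrm{CH}).$$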

The standard way to make a model of ZFC+NOT(CH) is "forcing". If you start with an impoverished model such as L, and if there are a lot of reals that are missing from L, then forcing puts them back in in a logically generic way (you sort of set up a garden of forking paths you could take to define a new real bit by bit, and try to understand the generic behavior of these paths, vaguely picking a path without ever concretely nailing the whole thing down). Unfortunately, forcing does too much: it manages to force new reals in *even if you already have all of the reals already*. This is intuitively bizarre: how can we force *new* reals into our system if we *already have all of the possible reals*?! The source that explained this to me in a way that intuitively makes sense was Joel Hamkins "naturalist account of forcing". If you start with a model that already has all of the reals, what the forcing construction ends up doing is first it sneakily inserts new integers into your model that didn't exist originally, by applying the same ultrapower construction which is used in nonstandard analysis to the entire set-theoretic universe. The new universe has more "integers" available, and since reals can be thought of as sets of integers (or as functions from integers to {0,1}, or Dedekind cuts on the set of ratios of integers, or...), there are more potential real numbers that could exist in the new system, not all of which are already present in your ultrapower universe. So now you can pick some of the sets of "integers" which are missing from your ultrapower universe, and force them in as new "reals", changing the size of the set of "reals" in the process - and that is what forcing actually ends up doing in this case.

So the two approaches we have of constructing models are: artificially impoverish the universe of sets, in which case the set of reals becomes as small as possible, or artificially increase the universe of sets (but actually *shrink* it, because bizarrely, an internal ultrapower of a model of ZFC actually becomes smaller than the original model of ZFC), screwing up the way the integers work if you have to, in which case the reals can become as enormous as you like. This sucks, because both approaches are obviously the wrong way to try to understand anything about the actual, true universe of sets.

Forcing axioms are an attempt to try to limit ourselves to only force in things which are reasonable to force in in some technical sense, and then to force all of those in until the whole universe stabilizes - the hope is that this will somehow lead to insight into the true universe of sets. Forcing axioms typically lead to the conclusion that the reals have size aleph_2. Unfortunately, as far as I can tell forcing axioms are never justified in a way that takes into account the point of view of the "naturalist account of forcing" properly, so they are just a weird ad-hoc kludge that somehow seems to give a coherent picture of... something, which may or may not have any resemblance to actual reality.

The upshot is that currently, there is no intuitive set theoretic argument which sheds any light on whether or not CH is actually true in reality, although I definitely agree that the question ought to have a definite answer.

Expand full comment

Slight correction: the ultrapowers in the naturalist account of forcing are actually "Boolean ultrapowers", which are a generalization of the ordinary ultrapowers used to construct nonstandard models. The generalization is that you replace the usual Boolean algebra of subsets of N with a certain complete Boolean algebra which you construct from the "garden of forking paths".

Here are the slides for the naturalist account, which should correct any other mistakes I made above: https://www.math.uni-bonn.de/ag/logik/events/young-set-theory-2011/Slides/Joel_David_Hamkins_slides.pdf

Expand full comment

So the problem is with the concept of "the" reals. Different models of ZFC will have different opinions about what sets there are, and in particular will have different opinions about what the set of real numbers is. The idea being that, in either model of ZFC, you can carry out the construction of the real numbers, but because your models of ZFC contain different sets, carrying out the same construction in both models leads to different sets. Therefore, without an opinion about what model of ZFC is "correct", it's not really possible to have an opinion about what model of R is "correct".

As a second note, at least in the set theory class that I am in, the existence of a model of ZFC requires that you are working in ZFC + "exists an inaccessible cardinal", which means that you need to deal with a third set of real numbers that you can compare to, namely the one in the (class) model of ZFC + "exists an inaccessible cardinal". This set of real numbers might have more real numbers than the entire set model of ZFC + CH, if you want it to. Or it could not. You can construct your set model of ZFC + CH to have all sorts of crazy properties, which in turn will make its opinion about what the real numbers are have the same crazy properties.

Basically, asking questions about what a set "actually is" will quickly get you into the realm of crazy independence results where everything depends on which model of ZFC you are in.

Expand full comment

Here's what it means for the reals: as far as ZFC is concerned the cardinality of the reals could be any of the alephs with finite subscripts (other than aleph null, of course). It can't, however, be any of the alephs with transfinite subscripts. There are finitely many levels of infinity in-between the integers and the reals - zero if the continuum hypothesis is true, some finite number if it's false. An interesting recent article at Quanta magazine discusses two proposed extensions to ZFC, Martin's Maximum and (*), both of which entail that there is exactly one level of infinity in-between the integers and the reals. It was recently proved that one of these extensions entails the other. https://www.quantamagazine.org/how-many-numbers-exist-infinity-proof-moves-math-closer-to-an-answer-20210715/

Expand full comment

The thing we can prove is that the reals are not a union of countably many sets that are each smaller than the reals. Thus, the reals do not have size \aleph_\omega, which is the union of sets of size \aleph_0, \aleph_1, \aleph_2, ....

However, the reals could have size \aleph_{\omega+1}, \aleph_{\omega+2}, \aleph_{\omega\cdot2+1}, \aleph_{\aleph_1}, or many other transfinite cardinalities.
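For reference, the standard results behind this (König's theorem applied to the continuum, plus the countable cofinality of \aleph_\omega):

$$\operatorname{cf}\left(2^{\aleph_0}\right) > \aleph_0, \qquad \operatorname{cf}(\aleph_\omega) = \aleph_0, \qquad\text{hence}\qquad 2^{\aleph_0} \neq \aleph_\omega.$$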

Expand full comment

The cardinality of the reals can be alephs of transfinite index, it's just that they need to meet technical conditions about the cofinality. In particular, we can prove that |R| != aleph_omega, but it is consistent that |R| = aleph_(omega_1).

Expand full comment

Thanks for the correction. Obviously, I'm not a set-theory specialist, just a fairly well-educated layperson.

Expand full comment

Fundamentally, this is the philosophical question of whether (and which) mathematical statements have an objective truth value independently of axiomatic systems used to formalize them, which is more or less the divide between formalists and platonists. A formalist would argue your question is meaningless because there's no such thing as "the reals" -- to speak meaningfully of them you'd need some axiomatic system, and Godel & Cohen show how the answer depends on the system chosen. A platonist might admit the transcendental 'existence' of real numbers independent of axiomatic systems used to formalize them -- but the question now becomes obvious: if we're not going to rely on axiomatic systems, how are we to find truths about these platonic entities?

I think this can be done in many cases. You don't really need an axiomatic proof of the Pythagorean theorem to be convinced it's true. There's tons of visual demonstrations, plus you could draw a bunch of right triangles and measure until you're convinced. This is also the case with a lot of concrete math, which you can see reflected in physical effects in the universe etc., so you can be convinced it is right even without an axiomatic proof. But something as abstract as the continuum hypothesis seems to me to be impenetrable to our physical existence.

Expand full comment

This question can actually be made orthogonal to the formalist/platonist divide.

Most formalists think that although every consistent mathematical theory corresponds to some structure, there's still reason to use certain names for particular structures. For instance, "the natural numbers" is canonically the smallest model of Peano Arithmetic, which has features that go beyond those derivable from PA alone (though there are still undecidable questions about it). Such a formalist could have views about what "the sets" are that might go beyond ZFC, and settle the question of CH (though no one yet has a clear view of what those conditions might be).

Although Platonists think there is one real mathematical reality, they still have questions about how our language attaches to things in it. There are some obvious ambiguities that don't matter - for instance, which of the two square roots of -1 gets the name "i" doesn't matter (complex conjugation is an automorphism). Paul Cohen's forcing proof of the independence of CH works (particularly in the "Boolean valued models" approach of Dana Scott) by creating a proper class of objects called "names" that are standardly interpreted as not being the sets themselves, but acting a lot like them, and shows that by choosing the construction properly, one can make the "names" satisfy CH or satisfy not-CH. Some Platonists might think we are mistaken about which objects are actually the "sets" and which are the "names", or might think that nothing about our mathematical usage makes clear which class of objects we are talking about. If it is ambiguous which ones are "the sets", then it might be ambiguous whether CH is true or not. (I believe John Steel pushes this view.)

Expand full comment

It tells you that the axioms of ZFC are not enough to tell you that there is or isn't a cardinality between that of THE reals and aleph null.

Whether that question has a true answer... I'm not sure, that seems like a difficult philosophical question. I feel confident that [the collection of all the real numbers] "exists" in a sense not specific to "because all models of ZFC include such a set". But I'm not confident one way or the other whether the question of "Is there a cardinality between that of the set of integers, and that of the set of real numbers?" has a definite answer.

I am confident that a statement that can be formulated in ZFC but which is independent of ZFC can have a definite answer. But I'm not sure if this one does. Perhaps all statements that can be formulated in ZFC are either truly true or truly false (even if ZFC doesn't say one way or the other)? But I am not confident that that's true.

But also, for any statement that boils down to arithmetic, i.e. a finite string of \forall and \exists quantifiers over the natural numbers followed by simple statements about those numbers (err... I'm a bit unclear on exactly how much quantifier alternation is allowed), any such statement that you can prove by assuming CH, or by assuming not(CH), should hold in any model of ZFC, I think.

So, as long as you only assume one of them, I think in terms of proving things that are sufficiently down to earth, you can pick whichever is more convenient to assume?
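
(If I'm remembering my logic correctly, the relevant result here is Shoenfield absoluteness, which is more generous than a single quantifier alternation; treat this as a sketch from memory rather than a careful citation.)

Every \Sigma^1_2 statement, and in particular every arithmetic statement, is absolute between V, Gödel's L, and any forcing extension V[G]. L satisfies CH, and a forcing extension can satisfy not-CH, so if ZFC + CH proves such a statement \phi, or ZFC + not(CH) proves \phi, then ZFC already proves \phi on its own.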

Expand full comment

Speaking of TV, I just discovered the 2017 series Trial & Error, funniest thing I’ve seen in a long time. Very British-style comedy in which an earnest young NY attorney is dropped into a small-town murder trial where everyone, literally everyone, is some combination of insane, deranged, delusional, oblivious, etc., and John Lithgow is pretty much all of those as the defendant. Light fare but OMG is it funny, to me at least.

Expand full comment

Typo: "...some of Andrés Gomez Emilsson’s theories. Anders has since written..."

I assume "Anders" should be "Andrés"

Expand full comment

Can someone explain why and when the regular commenting group here changed so drastically?

Expand full comment

Some of them are at a forum now.

Expand full comment

On both the Viktor Orban and the Ivermectin articles, I think there were a ton of new commenters - presumably people already interested in those topics. I think Substack feels less foreign than the old blog to people from other parts of the internet, so they are more likely to comment.

Expand full comment

I don't know what the effect was on the commenting community, but we got a bunch of new people from the New York Times and possibly the New Yorker article. When would you say things changed?

Expand full comment

I actually discovered Scott several months before the NYT brouhaha because the writer Mark Manson recommended him and several other rationalists as more interesting reads than just the MSM.

I suppose it's like when your favorite band gets famous. You're happy for their success but bummed that you now only get to see them playing in stadiums instead of intimate clubs, and the social costs that come with that.

Expand full comment

I will say that in a fit of stubbornness, I set out to read all of the ivermectin comments, and the proportion of tribal insults was shocking.

Expand full comment

Agree - I thought the quality of comments in that discussion was noticeably worse than usual.

Expand full comment

I'm glad to read this! I am a relative newcomer to the blog and the ivermectin article was the first time I really engaged in the comments (some of which were really good!). But I walked away feeling like the comment section was a bit of a disaster.

Expand full comment

It totally was. A bunch of new people showed up, just calling each other stupid. And the moderation tools available to Scott really suck. We don't even have a REPORT button.

Expand full comment

I tried really hard not to call anyone stupid! I did, however, try to (gently, respectfully) point out when I thought someone was mistaken. I probably just ended up feeding the trolls, but I felt like I had a nice exchange with one person.

Expand full comment

That's unfortunate to say the least - one of the wonderful things about SSC was the quality of the comment section. I wasn't a big fan of the move to substack but I never envisaged it having such a detrimental effect on the commenting.

Expand full comment

Substack is its own community with a separate group of active users. On SSC you had to get there through more convoluted routes. Here you get dragged into the general Substack community. The same way things work on Reddit.

Expand full comment

Yes. I'm probably thinking too much from the lens of my own commenters; I think I have less than half as many paying subscribers as Scott.

Expand full comment

It’s also worth noting that (as far as I can tell) you have your comments set so only paid subscribers can comment, while Scott lets anyone comment.

So I think ACX is much much more likely to attract outside commentators on high controversy articles, while your substack has a more defined pool of commentators.

Expand full comment

Oh wow, I had no idea I could comment here without paying! This is a test to see if it's really true ...

Expand full comment

It is! It is!

Expand full comment

Welcome, new commenter, we have sent you an invoice.

(just kidding)

Expand full comment

I don't necessarily focus on high-drama posts when commenting, but I don't sub to Scott. I think there is definitely a huge difference between sub-only stacks and ones that are open to everyone.

Expand full comment

Yes, very good point.

Expand full comment

I think a large chunk of the regular (and to my mind, most interesting) commenters went en masse to DSL during the SSC hiatus. You'd have to ask them yourself as to why, but it makes sense to me intuitively - self-selecting group of high quality commenters no longer have to interact with riff-raff and proles. Something like that.

Expand full comment
founding

In my case, A: DSL was there when ACX wasn't, so I made it at least my temporary new "home" and found it a congenial one with lots of old friends; and B: the commenting interface at ACX has never been as useful as either SSC or DSL. And I suspect, because A and B are not my unique perspective, that I have found fewer old and made fewer new friends at ACX than at DSL. Less signal (other than Scott himself), more noise.

I'll still show up here to see what's going on, but you're much more likely to catch my attention at DSL.

Expand full comment

DSL is enough to the right of what I want that I've lost my impetus to comment there. I'll at least be checking in there eventually.

Expand full comment

I look forward to you doing so. I think you pull the average in a healthy direction.

Expand full comment
founding

That is unfortunate but understandable

Expand full comment

A fair number of them also seem to have gone to the Discord. I have seen, for instance, Philosophisticat there sometimes, likewise Shambibble/Anonymous Bosch, and Nornagest is now a mod of it.

Expand full comment

Ah, I wondered where Nornagest had apparated to.

Expand full comment

To some extent this seems like a political polarization thing to me. The discord definitely gives me a bit more left-libertarian vibes (though still overall pretty anti-woke) and DSL seems a little more rightward (and still generally libertarian)

Expand full comment

I was a fairly frequent commenter on SSC, though perhaps not one of those you are thinking of.

I went to DSL, but when Scott finally started ACX it seemed to me that there was a minor exodus from DSL of commenters I enjoyed reading, and an increase in comments that annoyed me. These changes increased over time, to the point where I rarely enjoyed my experiences on DSL.

I still have an account on DSL, but I mostly go there when I'm in a mood to enjoy a nice fight, or at least willing to tolerate a high noise-to-signal ratio.

I post here sometimes, but the UI is sub-standard, compared e.g. to the old Slate Star Codex forum.

Net result: I'm a less frequent poster on any/all SSC-derived forums.

Expand full comment

Yes, as an SSC Open Thread lurker, my sense is that most of the people who I considered the defining commenters of SSC now post far more at DSL -- John Schilling, David Friedman, Bean, Nybbler, etc etc.

But I considered them defining not just because they posted the most, but because I thought they were representative of the generally centrist/somewhat conservative bent of the board. But it's certainly possible that they, with their volume, defined rather than reflected the commentariat, and that their absence has just made it clear there were always different types of commenters around SSC but I didn't notice them as much.

Expand full comment

Some of us comment in both places, but I comment more on DSL. Partly it's because the forum format makes it much easier to ignore threads you are not interested in, participate in ones you are.

Expand full comment

I still miss the old days of web forums. Reddit, Facebook groups, and especially Discord are just actively worse.

Expand full comment

What is DSL?

Expand full comment

Data Secrets Lox - an anagram of Slate Star Codex and a forum for SSC-type readers. You'll find it under the title 'Bulletin Board' at the top of Scott's list of things headed 'Community'.

Expand full comment

In the time before ACX, my wife referred to DSL as the exiles forum.

Expand full comment

Does that make DSLers Pharisees and ACX commenters Sadducees?

Expand full comment

The people or the quality? Maybe the people are using different names? Seems like some of the OG posters don't comment anymore.

Expand full comment

I don't care that people reject Nurture Assumption-style, genes-heavy-environment-light hereditarianism. That's a perfectly legitimate scientific position. But I find it very strange that a) they appear shocked that Scott believes this, b) they are dismissive of a mountain of credible research that supports this perspective, and c) they seem unwilling to consider that their attachment to the conventional nurture-dominant sentiment is the kind of cultural artifact that rationalism has always pushed back against.

Expand full comment

I think it's just been a long time since Scott brought it up; a lot of people who take it for granted have quietly assimilated it, it got forgotten, and over the years the normal turnover in users caused it to drop off.

If we fought over it once or twice a year, everyone would know.

Expand full comment

Personally I don't reject genes-heavy-environment-light hereditarianism, although I do think that some people might be going too far if they think (based on a few studies) that the non-genetic contribution of parents is zero.

To some extent these studies might be measuring the wrong thing; I can accept that (for instance) the non-genetic parental contribution to Big Five personality traits is zero, but Big Five personality traits aren't everything.

Expand full comment

Check the New Yorker review I posted elsewhere in this thread for a good articulation of that position

Expand full comment

I wonder, how many of us are from environments where we did see "Whoo, you weren't brought up, you were dragged up" and so it is self-evident to us that parenting styles do indeed matter, versus people in the studies who do seem to be selected for "okay, no alcoholics, wife-beaters, dead-beats, or single parents working three jobs in a Brazilian favela".

Expand full comment
founding

To a first approximation, Nurture doesn't much matter unless Nurture really sucks. You have the interesting perspective of being a professional in dealing with the outcomes of really sucky nurturing. Since that's a rare thing in the SSC/DSL/ACX community, but not in the world as a whole, we need that perspective to keep us honest.

Expand full comment
Comment deleted
Expand full comment

> It seems a bit less likely, you’d think that a family with books and education would bring out the potential more, but it looks like adequate parenting is good enough there too

It's easy to imagine that it might make a difference for kids on the high end, but not much difference for the average kid, and that none of the studies done so far has had enough statistical power to tease this effect out (especially since adoptees probably tend towards lower-than-average genetic potential anyway).

So maybe the average kid can reach his full potential with a mediocre upbringing, but John Von Neumann needs a house full of books and tutors in five languages in order to reach his full potential.

Expand full comment

Isn't that the opposite of the argument, though? That potential von Neumanns will do well regardless of the family they are in (so long as it's not actively abusive), but an average kid is not going to be (very much) improved even with books and tutors pushing them on.

Expand full comment

In chess news, 18-year-old Alireza Firouzja is now ranked number 2 in the world, behind only world champion Magnus Carlsen (who will be defending his title against Ian Nepomniachtchi in a match starting this Friday).

In the last month, Firouzja won a spot in the 2022 Candidates Tournament by winning the https://en.wikipedia.org/wiki/FIDE_Grand_Swiss_Tournament_2021 , and led his adopted country of France to victory at the European Team Chess Championship.

Expand full comment

https://youtu.be/hZLjWFD8z3o

A Firouzja game picked at random if you want to see him in action.

Expand full comment

Open Question: Is this a waste of human capital?

Expand full comment

I've played a fair amount of chess ............ and that's a good question.

The game has suddenly taken off in popularity again, due to several factors (esp. video platforms that have critical-massed the global player/fan base). Carlsen is a millionaire, and Nepo might be after this match. Someone linked to @agadmator's videos; he's not even a GM, and he's pulling hundreds of thousands of views on videos just commenting on games. Hikaru Nakamura pulls a lot on Twitch. So it's operating as a content market, and some of these guys are as well off or better off than they would have been going into the job market.

Expand full comment

I do occasionally have the nagging feeling that once we have built a machine to undertake a task better than any human, it is a waste of time to try to excel at that task. Chess is on that list, as is weightlifting. Football, ballet and poetry are not.

Expand full comment

What does it mean for a machine to do weightlifting? I thought that at least some of weightlifting is maintaining a particular form while doing it, not just physically lifting the weight.

Expand full comment

I think this boils down to the classic journey/destination discussion. If the goal is to have an utterly unique talent, then yes, it's a waste of time to try to excel if there is a machine which is so much better that you can never be competitive.

But if the goal is to find fulfillment through honing one's skills and the process of practice and execution, then it doesn't really matter if you're in the bottom percentile of ability, so long as the activity is rewarding.

Me, I'm a journey/process person, but I don't begrudge the destination/outcome people. That feels like a fairly grim perspective, though, since the next questions are whether it's worth trying to excel if there are other humans who are so much better you can never be competitive. And that leads to the statistical analysis of whether any particular person has any chance of being the best in the world at anything.

Expand full comment

Two answers:

A market-based answer: apparently not. Assuming there aren't meaningful government subsidies for chess grandmasters that I'm not aware of, it seems that they bring enough entertainment to other people that those other people are willing to fork over enough money (directly or indirectly) to economically support a number of professional chess grandmasters.

Whether that's by winning chess tournaments (which people enjoy enough to pay money to attend), or lessons, or sponsorships from chess.com or whatever. If other people weren't enjoying what they were doing, in a broad sense, they likely wouldn't be able to have a career doing it.

---

And more abstractly, I just don't see any way this line of thinking doesn't lead to (or amount to anything other than) high-handed judgements about which hobbies are and aren't "worthy" ways for people to spend their time and money. Basically any art or leisure activity could be subjected to the same criticism of "wasting human capital".

Either you jettison all of arts, sports, video games - or else you force every hobby/interest to have to try to "justify its existence" by some sort of secondary or tertiary benefit beyond the mere enjoyment.

Like if you're concerned about how much wasted human capital comes from a few chess masters, just wait until you find out how many million man-years have been spent on Fortnite.

Expand full comment

I think it is an interesting question, not limited to chess. There are activities into which talented people put very large amounts of time and energy that don't seem to produce anything beyond the pleasure of the participants. They make sense as consumption activities, but one has the feeling that if only you could figure out how to make productive activities as much fun for the same people, there would be a large gain. I discuss a little of this here:

http://daviddfriedman.blogspot.com/2009/05/other-worlds-and-wasted-talents.html

Expand full comment

For some people and some activities, productiveness squeezes the fun out.

Expand full comment

Scott was right, you really did drop the "n".

Expand full comment

What do you think he should be doing instead? What if this was a prodigy tennis player, opera singer, or violinist?

I'm inclined to put "extremely good at chess" in the same category as "extremely good singer or ballet dancer". We seem to have imbibed this idea that being good at chess means being a super-brain.

I think chess players may indeed be very intelligent, but not necessarily intelligent in the same way as a scientist or investment banker or whatever 'better' job you think he should be doing.

Expand full comment

As a tiny bit of evidence, I know one very good chess player (U.S. under fourteen — not sure it was fourteen — champion, now at least master) and he currently runs the Federalist Society. I know two world champion level bridge players, one of whom was a computer science professor at UC Davis. So substantial ability outside the game — but in both cases the ability was in use outside the game.

Expand full comment

I think there's a spectrum between a popular singer which millions of people enjoy, and a chess player whose games only a very small set of people can even understand enough to appreciate how great he is. If you think about maximizing the utility for the whole population it might make sense to keep pop artists and pro sportsmen but not chess players.

Of course, though, personal freedom trumps this silly optimization.

Expand full comment

It’s only a maximizing-utility argument if you’re very confident about your measure of utility. How many computer hardware, software, and algorithmic advances have been inspired by chasing human grandmasters?

If we’re really maximizing utility, maybe we should be prioritizing entertainment for the most productive people… Hope I like the same things Elon Musk does.

Expand full comment

The worst thing about the social importance of chess is how it dominates game development of all kinds. Always gotta hear those annoying chess comparisons. Always feels like the social status of chess is overbearing and damaging. Chess is the Amazon of games. If some other online retailer had won the wars instead, would the difference really be that large?

People doing useless hobbies as careers, for whatever reason that particular activity got supported financially, probably could be good at productive stuff. But it is a bit of a crapshoot. Chess might correlate well with programming, but which software projects take off is somewhat random, whereas chess talent is relatively easy to evaluate early on, as is how to build on that talent.

Expand full comment

Well, Poe famously denigrated chess by comparison with draughts (American: checkers) or whist, in "The Murders in the Rue Morgue", so you can count him as in agreement:

"The faculty of re-solution is possibly much invigorated by mathematical study, and especially by that highest branch of it which, unjustly, and merely on account of its retrograde operations, has been called, as if par excellence, analysis. Yet to calculate is not in itself to analyse. A chess-player, for example, does the one without effort at the other. It follows that the game of chess, in its effects upon mental character, is greatly misunderstood. I am not now writing a treatise, but simply prefacing a somewhat peculiar narrative by observations very much at random; I will, therefore, take occasion to assert that the higher powers of the reflective intellect are more decidedly and more usefully tasked by the unostentatious game of draughts than by all the elaborate frivolity of chess. In this latter, where the pieces have different and bizarre motions, with various and variable values, what is only complex is mistaken (a not unusual error) for what is profound. The attention is here called powerfully into play. If it flag for an instant, an oversight is committed resulting in injury or defeat. The possible moves being not only manifold but involute, the chances of such oversights are multiplied; and in nine cases out of ten it is the more concentrative rather than the more acute player who conquers. In draughts, on the contrary, where the moves are unique and have but little variation, the probabilities of inadvertence are diminished, and the mere attention being left comparatively unemployed, what advantages are obtained by either party are obtained by superior acumen. To be less abstract, let us suppose a game of draughts where the pieces are reduced to four kings, and where, of course, no oversight is to be expected. It is obvious that here the victory can be decided (the players being at all equal) only by some recherché movement, the result of some strong exertion of the intellect. Deprived of ordinary resources, the analyst throws himself into the spirit of his opponent, identifies himself therewith, and not unfrequently sees thus, at a glance, the sole methods (sometime indeed absurdly simple ones) by which he may seduce into error or hurry into miscalculation.

Whist has long been noted for its influence upon what is termed the calculating power; and men of the highest order of intellect have been known to take an apparently unaccountable delight in it, while eschewing chess as frivolous. Beyond doubt there is nothing of a similar nature so greatly tasking the faculty of analysis. The best chess-player in Christendom may be little more than the best player of chess; but proficiency in whist implies capacity for success in all those more important undertakings where mind struggles with mind. When I say proficiency, I mean that perfection in the game which includes a comprehension of all the sources whence legitimate advantage may be derived. These are not only manifold but multiform, and lie frequently among recesses of thought altogether inaccessible to the ordinary understanding. To observe attentively is to remember distinctly; and, so far, the concentrative chess-player will do very well at whist; while the rules of Hoyle (themselves based upon the mere mechanism of the game) are sufficiently and generally comprehensible. Thus to have a retentive memory, and to proceed by “the book,” are points commonly regarded as the sum total of good playing. But it is in matters beyond the limits of mere rule that the skill of the analyst is evinced. He makes, in silence, a host of observations and inferences. So, perhaps, do his companions; and the difference in the extent of the information obtained, lies not so much in the validity of the inference as in the quality of the observation. The necessary knowledge is that of what to observe. Our player confines himself not at all; nor, because the game is the object, does he reject deductions from things external to the game. He examines the countenance of his partner, comparing it carefully with that of each of his opponents. He considers the mode of assorting the cards in each hand; often counting trump by trump, and honor by honor, through the glances bestowed by their holders upon each. He notes every variation of face as the play progresses, gathering a fund of thought from the differences in the expression of certainty, of surprise, of triumph, or of chagrin. From the manner of gathering up a trick he judges whether the person taking it can make another in the suit. He recognises what is played through feint, by the manner with which it is thrown upon the table. A casual or inadvertent word; the accidental dropping or turning of a card, with the accompanying anxiety or carelessness in regard to its concealment; the counting of the tricks, with the order of their arrangement; embarrassment, hesitation, eagerness or trepidation—all afford, to his apparently intuitive perception, indications of the true state of affairs. The first two or three rounds having been played, he is in full possession of the contents of each hand, and thenceforward puts down his cards with as absolute a precision of purpose as if the rest of the party had turned outward the faces of their own."

Expand full comment

He neglects the endgame in this analysis.

Expand full comment

Poe would have loved Go. I have no idea what he would have thought of computer proficiency at Go.

Expand full comment

Probably it is. We should de-prestige chess. Nothing wrong with it but nothing particularly great about it outside of historical investment. Questionable whether that will tilt players to productive pursuits or simply shift them to other games, though.

Expand full comment

I tend to think of chess as the gymnasium of a military mind

Expand full comment

I would say that “human capital” is and should be a contradiction in terms. Humans are not capital, humans are humans and they use capital to do things.

Reducing people to “things I can use” and forcing them to justify why they’re enjoying themselves instead of serving you and me is a Bad Thing. The fact that you are directing the question at chess masters instead of, say, hockey players or film actors makes me think that you’re fine “investing human capital” in normal things but balk at doing it for things you personally don’t care about.

Expand full comment

This doesn't seem particularly objectionable. I mean, technically people *are* things you can use - it's just that the law puts pretty extreme restrictions on how you can use them, and the people themselves will only be amenable to being used if you treat them a certain way. But you still 'use' a plumber by paying him to unblock your drain, and the reason you pay the plumber for that particular task, and not some random who has never held a pipe wrench in his life, is that the plumber has spent time and effort acquiring skills that enable him to do the job - skills that the phrase 'human capital' describes neatly.

Expand full comment

"I would say that 'human capital' is and should be a contradiction in terms. Humans are not capital, humans are humans and they use capital to do things."

Are we using the words the same way? Acquiring a skill with some financial return (e.g. doctor) is often described as acquiring human capital. The idea being that it is capital-like in the sense that having it can generate wealth, but different from "normal" capital (eg. a forklift) because you mostly can't sell it.

Acquiring the human capital usually requires an investment of time -- often a non-trivial one.

"Wasting" human capital tends to describe spending the time/effort to acquire the skill (eg. doctoring) and then not doing anything with it. If the medical school was fun in itself (the journey rather than the destination!) then no harm done (except for the $500K of medical school debt). But for a lot of folks going to medical school is a COST and the reward is being a doctor. Paying the cost and then walking away from the reward seems like it is a mistake compared to not paying the cost at all.

Same holds for lawyers and many other fields.

What is controversial here?

Expand full comment

Depends. What other areas are fungible with chess talent?

Expand full comment

I think yes. I think eminence and mental ability are probably somewhat general abilities.

Expand full comment

I must quote 19th century chess champion Paul Morphy: "The ability to play chess is the sign of a gentleman. The ability to play chess well is the sign of a wasted life."

I don't think it's a waste, at least at the top level. Chess players (most prominently Kasparov) can go on to do other things, and there's nothing wrong with a few smart people earning money in the entertainment industry for a decade or two.

Expand full comment

"Paul Morphy: 'The ability to play chess is the sign of a gentleman. The ability to play chess well is the sign of a wasted life.'

I don't think it's a waste, at least at the top level."

At the top level this is mostly what you DO. And that's probably true if you are "only" around 500th best in the world.

A good question then becomes: Is this how you want to spend your time during the years that you are trying to get very good and then are very good (for whatever value of 'very good' you choose)? Time spent on chess (or whatever) isn't time spent on something else.

Expand full comment

My friend is super into chess and has dedicated a lot of time to it. He is really, really good. But I can't help but think it's like being really, really good at videogames, and videogames seem more fun. This particular game just seems to have more status attached and is considered something impressive/good. I don't think playing chess boosts your IQ, so I figure it is not very useful.

Fair enough.

Expand full comment

I don't think that chess is particularly high status these days, the Cold War era boost has mostly evaporated by now. It's still higher than that of videogames admittedly, but if you play the right ones you can earn more money doing so at least, and the time required to reach top level is lower.

Expand full comment

In some places, "scholastic chess" is a thing, and it's a much different question whether students who put in hundreds of hours to become the top chess player at their school (and, eventually, 5000th best in the world) are wasting their time.

Unless they like chess better than video games or dating, in which case it may not be wasting anything.

Expand full comment
Comment deleted
Expand full comment

True, but a chess champion is (to the extent that intellectual talents are fungible) a waste of a first rate brain that could be used for doing really valuable work, whereas a football champion is only a waste of a first-rate body that could be used for, I dunno, lifting heavy objects.

Expand full comment

I'm not sure chess is *that* much more g-loaded than football though -- I seem to remember reading there have been quite a few otherwise-dim chess grandmasters.

Expand full comment
Comment deleted
Expand full comment

What would you rather spend it on?

Expand full comment

Hey - not sure if this is allowed so feel free to remove if not but I'm a big fan of Scott and have just started writing my own substack which I think is sort of similar in theme and some people here may enjoy. Here's a post on the placebo effect that might be particularly interesting to ACX readers: https://atis.substack.com/p/all-placebos-are-not-created-equal

Expand full comment

I think a very occasional mention, preferably with a few lines about the piece, is alright.

Expand full comment

What do people think is and is not within the norms for advertising on Open Threads? Scott seems to allow it within reason, but what is reasonable?

My view would be that if you finish/start a new big project it’s okay to post about it once (you just started a blog, your software is finished, etc) as long as it is something that at least plausibly could be of higher interest here than among the general population (if you made a new kind of generic sock, maybe this isn’t the appropriate place to shill it).

Expand full comment

I like how @bean did it back in the day. He started by writing what would later be blogposts into the open threads, and when that took off and he had a bunch of followers specifically interested in his posts, he started a blog and would occasionally update us about what was going on, perhaps once per two or three open threads.

Expand full comment

What I'd like to see is occasional classified threads, on which everyone can post (not just paying subscribers), and most advertisements banished to those threads.

I had thought that was where Scott was planning to go - then the last classified thread was subscribers-only, and while that was apparently accidental, the mistake was not corrected, and no non-payer-visible (never mind postable) classified thread has been created since then.

So I guess the open threads are fair game, unless Scott objects to your posting frequency.

Expand full comment

Meh, I generally assume classified threads are for excludable rivalrous goods, and skip them -- I'd very much rather something like a blog was advertised in open threads than in classified threads.

Expand full comment

One post is NBD with so many comments going on. Plus, few top-level comments get replies, so it is unlikely to clog the overall content unless it is popular enough and therefore relevant to the commenters.

Expand full comment

On (3) I can recommend Robert Plomin’s book “Blueprint: How DNA makes us who we are”. It’s a recent book from a leading geneticist that convincingly argues that parenting / nurture doesn’t matter.

Expand full comment

This post is the first I heard of The Nurture Assumption, and I read Malcolm Gladwell's summary of it. https://web.archive.org/web/20140808055954/http://gladwell.com/do-parents-matter/

I'm having a lot of trouble resolving it with my existing paradigms of mental health. I've been in therapy for years trying to work my way out of depersonalization/derealization disorder and other general anxieties. Through the years, I've come to believe that dysfunctional attachment as a child is the "original sin" that leads to poor mental health outcomes in most cases. It makes sense to me that the poor emotional regulation of my parents would result in my own poor regulatory ability and predispose me to dissociation. That paradigm still seems realistic to me, and my suspicion is that some meta skills like emotional regulation are learned predominantly from parents and other traits like personality are learned through peer groups.

I'm curious about whether mental health outcomes are more strongly correlated with parenting than personality traits. How does The Nurture Assumption resolve the strength of the correlations in the ACES studies which are mostly concerned with childhood conditions mediated by parents?

Expand full comment

It's chicken and egg - are 'bad' parents simply that way because of their genes, so not alone will they provide a poor environment for their child but they will also pass on the bad genes?

So is the child an anxious child because of over-bearing parents, or is it because the parents are over-bearing because they too are anxious and have passed that trait on?

I think genetics makes up a huge amount, but I also think parenting/environment has *some* effect also, and depending on how things shake out, it can have more of an effect. The "parenting/nurture doesn't matter" rests on the assumption that "if parents are sane, well-balanced, caring, provide a decent stable environment, and there isn't war, poverty, famine and disease interfering with the outcomes, THEN parenting/nurture doesn't matter".

Expand full comment

Be sure to read some of the 1 & 2 star reviews of the book also. While it seemed obvious to me that peers have more of an impact long term for latch-key-type generations (highly relative to the era of publication), the book read more like something you see today on PBS - Deepak Chopra, brain-plasticity gruel, etc. - than a seminal scientific book. What I don't see is how it is relevant today with more home schooling, zoom schooling, the web, YouTube, TikTok, rigid curricula, loneliness, pharma-rearing…

Unless you redefine peers to be "social media influencers/bullies", there's no there there anymore to credit or blame. Rationalists will have to do better than leans-eugenics.

Expand full comment

Parents can focus your negative and to some degree positive emotions on things but can't create or eliminate them. Anxiousness is eternal, minus drugs, but what makes you anxious and what doesn't are dictated by parents and peers. And media I guess.

Expand full comment

I think the general gist is that parents can apply considerable damage on top of what your DNA has given you, but cannot really shape you in a specific direction in terms of e.g Big Five, whereas your surrounding culture definitely can.

Expand full comment

Re 4: I'll mention it again because it's relevant. Another experiment was performed by Sir Francis Galton in Hereditary Genius. He looked at boys adopted by popes and boys who were sons of men of eminence. The boys adopted by the popes were afforded lots of resources but were not genetically related to the popes in the way a son would be. He found that the boys who were adopted by popes didn't achieve eminence the way the sons of eminent men did. He believed that eminence was hereditary.

Expand full comment

Mmm, he's going by "really nephews and not 'nephews' as in 'bastard sons'" so he's already diluting down the genetic influence; a nephew is not as close as a son.

"Therefore, I will compare the sons of eminent men with the adopted sons of Popes and other dignitaries of the Roman Catholic Church. The practice of nepotism among ecclesiastics is universal. It consists in their giving those social helps to a nephew, or other more distant relative, that ordinary people give to their children. Now, I shall show abundantly in the course of this book, that the nephew of an eminent man has far less chance of becoming eminent than a son, and that a more remote kinsman has far less chance than a nephew. We may therefore make a very fair comparison, for the purposes of my argument, between the success of the sons of eminent men and that of the nephews or more distant relatives, who stand in the place of sons to the high unmarried ecclesiastics of the Romish Church. If social help is really of the highest importance, the nephews of the Popes will attain eminence as frequently, or nearly so, as the sons of other eminent men; otherwise, they will not.

Are, then, the nephews, &c. of the Popes, on the whole, as highly distinguished as are the sons of other equally eminent men? I answer, decidedly not. There have been a few Popes who were offshoots of illustrious races, such as that of the Medici, but in the enormous majority of cases the Pope is the ablest member of his family. I do not profess to have worked up the kinships of the Italians with any especial care, but I have seen amply enough of them, to justify me in saying that the individuals whose advancement has been due to nepotism, are curiously undistinguished. The very common combination of an able son and an eminent parent, is not matched, in the case of high Romish ecclesiastics, by an eminent nephew and an eminent uncle. The social helps are the same, but hereditary gifts are wanting in the latter case."

Yeah, but in this case what you should be doing is comparing "Tom Brown, nephew on the mother's side of Eminent Man Sir Joseph Whosis, versus his cousin Joseph Whosis Junior" and "Cardinal Armando, nephew of Pope Sleazius III, versus his cousin Giuseppe Moneybags, son of Lucio Moneybags", not "Joseph Whosis Jr. versus Cardinal Armando".

If Tom and Joe Jr. don't equally get on in the struggle of life, then his point is proven that it's the hereditary gifts that count, not the social helps, because they both have a share of the same genetic inheritance, because Joseph got the larger share of the good genes from his eminent father while Tom's share of the family blood came from his non-eminent mother. He even says this much here:

"Now, I shall show abundantly in the course of this book, that the nephew of an eminent man has far less chance of becoming eminent than a son, and that a more remote kinsman has far less chance than a nephew."

So nephews of the same blood don't do as well as sons, so why then compare nephews of popes with unrelated sons of eminent men? Francis, I think you are rigging the race! The explanation is here in what he says:

"I have seen amply enough of them, to justify me in saying that the individuals whose advancement has been due to nepotism, are curiously undistinguished."

Yes, because often it's precisely the untalented or less-able of the family who need the push from nepotism, as they are unable to attain success by their own efforts. Wikipedia has a fun entry on Cardinal-nephews https://en.wikipedia.org/wiki/Cardinal-nephew :

"The Fifth Council of the Lateran declared in 1514 that the care of relatives was to be commended, and the creation of cardinal-nephews was often recommended or justified based on the need to care for indigent family members".

Indeed, the practice of appointing family members to plum ecclesiastical offices was as much about creating a power base and having supporters you could count on in place, as anything else, the same way mediaeval nobles wanted to have family members as abbots and bishops etc. because this was a way of consolidating power in the family:

"The creation of relatives and known-allies as cardinals was only one way in which medieval and Renaissance Popes attempted to dilute the power of the College of Cardinals as an "ecclesiastical rival" and perpetuate their influence within the church after their death. The institution of the cardinal-nephew had the effect both of enriching the Pope's family with desirable benefices and of modernizing the administration of the papacy, by allowing the pontiff to rule through a proxy which was more easily deemed fallible when necessary and provided a formal distance between the person of the pontiff and the everydayness of pontifical affairs."

"According to Baumgartner, "the rise of a centralized administration with professional bureaucrats with careers in the papal service" proved more effective than nepotism for future Popes and thus "greatly reduced the need for papal nephews". The rise of the Cardinal Secretary of State was the "most obvious element of this new approach".

Expand full comment

This relates to averages though, yes? Eminence specifically seems like it is heavily affected by circumstance.

Expand full comment

I don't know what you mean. He was studying men of eminence and used popes' adopted sons as a control. They weren't average.

Expand full comment

He's not really comparing apples with apples, though. The son of a rich, powerful nobleman could inherit his father's rank and estate, while the cardinal-nephew was not able to become pope after his uncle's death.

So if he's comparing success or eminence based on "Duke Higgle III was succeeded by his eldest son, Duke Higgle IV, who was a powerful and influential Neapolitan nobleman" that's not the same as "After the death of his uncle Pope Paul IV, the cardinal-nephew Carlo Carafa was executed":

https://en.wikipedia.org/wiki/Carlo_Carafa

"In June 1560, Paul's successor, Pope Pius IV arrested the leading members of the family - Carlo, his brother Duke Giovanni, and their nephew the Cardinal Archbishop of Naples, seizing their papers, and levying a range of charges relating to abuses of power during Paul's reign. Carlo was charged with murder, sodomy, and the promotion of Protestantism. After a nine-month trial, he was condemned along with his brother, and was executed by strangulation at Castel Sant' Angelo on the night of 6 March 1561. His execution was considered at the time to have been motivated primarily by political factors such as his pro-French, anti-Spanish policies."

Expand full comment

Your Catholic knowledge comes up surprisingly often on open threads.

Expand full comment

Catholicism - you'll be surprised where it pops up! 🤣

I do think that's an indicator of Galton's upbringing and social influence, though; he could have used "nephews of Archbishops of Canterbury" as his example against "sons of eminent men", but being a British gentleman he automatically defaulted to 'those Papists' as examples of how "social helps" are trumped by "hereditary gifts".

While he might argue that he picked the Popes because celibate clergy, unlike the Anglican archbishops, ahem.

https://en.wikipedia.org/wiki/List_of_sexually_active_popes

And besides, if we're comparing nephews of clergy to sons of eminent laymen, it makes no difference if the clergy can or can't marry.

Expand full comment

(Can some eminent person of genius develop an edit button for Substack?)

And so he is doing something a little bit fast and deceptive there, since he's not comparing sons to sons, and he already admits that nephews don't have the same chance of becoming eminent as sons, so honestly it does sound at least in part like a swipe at Roman Catholicism.

Comparing "nephews versus sons of prominent/eminent lay men" would give a better result as to "how far does hereditary gifts carry you?", but I imagine he wanted to show "nature does it all, nurture little to nothing", so he needed to set up his test cases in order to produce that result. Hence his "smart men who father sons will have smart sons" as against "celibate clergy who have random family members won't have those family members be as successful as they are".

Expand full comment

Once again re: The Great Families. I want to post an excerpt from "The Son Also Rises: Surnames and the History of Social Mobility" by Gregory Clark. This is very relevant to the discussion of Darwin. If you're interested in intergenerational transmission of eminence/success, this book is worth a look.

The Son Also Rises pp. 132 - 135:

--------------------------------------------------------------------------------------

"One clear prediction of the human-capital theory is that, other things being equal, the more children parents have, the poorer the children's outcomes. The more children there are, the fewer family resources can be devoted to bolstering the human capital of each. Over time, however, there have been remarkable differences in the correlation of fertility and social status. Sometimes it has been strongly positive, at other times strongly negative.

...

The lineage of Charles Darwin is a nice illustration of how large the families of the middle and upper classes could be in preindustrial England. He descended from a line of successful and prosperous forebears. His great-grandfather Robert Darwin (1682-1754) produced seven children, all of whom survived to adulthood. His grandfather Erasmus (1731-1802) produced fifteen children (born to two wives and two mistresses), twelve of whom survived to adulthood. His father, Robert Waring (1766-1848), produced six children, all of whom survived to adulthood.10

In a social environment where all these children had to be privately educated, dowries needed to be provided for daughters, and estates were divided among children at death, human-capital theory would predict that the heedless fecundity of the English social elites of these years would lead to rapid downward social mobility. The lower classes of preindustrial society, producing only modestly more than two surviving children per family on average, would be able to concentrate resources on the care and education of their offspring and see them rise rapidly on the social ladder.

In contrast, by 1880 in England, upper-class men seem to have produced far fewer children than those of the middle or lower classes. Indeed, from 1880 to 1940, the richest English families seem to have been dying out. Based on the rare-surname samples of chapter 5, the upper-class males produced, on average, fewer than two children who survived to adulthood. At the middle and bottom of society, however, men were producing an average of 2.5-3 children who survived to adulthood, in reversal of the pattern observed before 1780. Figure 7.3 shows, by twenty-year periods, the estimated total number of children surviving to adulthood per adult male for two wealth cohorts: initial rich and initial poor or average-wealth rare surnames. Fertility for the richer lineage is consistently less than that of the poorer in the years 1800-1959.

This major change in the relationship between fertility and status can be illustrated again by the Darwin family. Charles Darwin (figure 7.4), marrying in 1839, had ten children, though only seven survived childhood. These seven children produced only nine grandchildren, an average of only 1.3 per child. (This figure is unusually low for this era, but there was great randomness in individual fertility.) The nine grandchildren produced in turn only twenty great grandchildren, 2.2 per grandchild. This figure was less than the population average for this period. The great-grandchildren, born on average in 1918, produced 28 great-great-grandchildren, 1.4 each.11 Thus by the time of the last generation, born around 1918, average family size for this still rather elite group had fallen to substantially less than replacement fertility. The Darwin lineage failed to maintain itself in genetic terms.

Interestingly, with respect to social mobility rates, the twenty-seven adult great-great grandchildren of Charles Darwin, born on average nearly 150 years after Darwin, are still a surprisingly distinguished cohort. Eleven are notable enough to have Wikipedia pages, or the like, such as Times obituaries, devoted to them. They include six university professors, four authors, a painter, three medical doctors, a well-known conservationist, and a film director (now also an organic farmer).12

But we see no signs that social mobility rates in England slowed as the upper-class groups produced fewer children. Instead, as chapter 5 shows, the intergenerational correlation of status remained constant for education and wealth. By implication, human-capital effects on social mobility must be modest. Status is strongly inherited within families mainly through genetic or cultural transmission, or both."

Expand full comment

re 4: It's impossible, and will almost certainly be forever impossible, to run such experiments with a large enough number of subjects. But this really shows why the Simulation Hypothesis is almost certainly true: it makes sense to make a number of simulated copies of a given universe so you can run scientific experiments on them, by taking talented kids away from talented families and making them grow up with dunces, or killing Baby Hitler, etcetera, just to see what happens. Makes me think that our universe possibly is an almost exact copy of a universe where Baby Hitler was indeed killed... Regarding the Fidel Castro thing, for example, it's just too good not to be true. Almost certainly some alien experimenter tinkered with that, to see what would happen to Canada if you were to put a son of Fidel in charge.

Expand full comment

That rests on a mysterious assumption that simulations indistinguishable from base reality are possible and practical.

https://cosmosmagazine.com/science/physics/physicists-find-were-not-living-in-a-computer-simulation/

https://medium.com/big-picture/sorry-youre-not-in-the-matrix-4a321eed8384

I have yet to see anyone pushing the simulation hypothesis seriously try to defend the proposition that these simulations are possible and practical. Elon Musk tried to do it by saying something like "well even if we have a civilization growth rate of only 0.1% per year, simulations will someday be possible". It's like saying "well even if the mold on this peach grows at only 0.1% per year, it will someday be bigger than the galaxy". Yes, but it won't happen, and the machine would be too slow anyway.

Expand full comment

You're hereby joining the large and distinguished club of brilliant old men who once said such and such will never happen because it's impossible. I've played GTA enough to know we're closer to a perfect simulation of reality than we are, for example, to 100% reliance on renewable energy.

Expand full comment

You're hereby joining the large and distinguished club of people who dismiss all their opponents' arguments without even making counterarguments. (Mentioning a video game isn't a counterargument, since my article already covers that issue explicitly.)

Expand full comment

That’s a stretch.

Expand full comment

I do think the extreme scepticism of the families post is a product of most followers being American.

If you are British, German or Italian you expect famous people to be second cousins or great grandchildren of other famous people.

P.S. How did you forget Robert Aumann, Oliver Sacks, Abba Eban and Jonathan Lynn all being first cousins?

Expand full comment

As somebody who lived in Germany most of my life, I disagree. In my experience, people might expect social strata to be reproduced (including 'being famous'), but usually most of this is attributed to 'Vitamin B' (B for Beziehungen, having the 'right' connections), the right upbringing, early-learned habits, self-confidence, and you name it.

Expand full comment

It looks like vitamin B is necessary but not sufficient for fame.

Expand full comment

Svante Pääbo is the illegitimate son of Nobel Prize winner Sune Bergström. Svante grew up with his mother.

https://en.wikipedia.org/wiki/Svante_P%C3%A4%C3%A4bo

Expand full comment
author

I didn't know! I'll mention it next open thread!

Expand full comment

Freeman Dyson thought that Americans were friendlier because American families are scattered around the country so we are more isolated.

Expand full comment
Comment deleted
Expand full comment

It's about probabilities, not certainties ;)

Expand full comment

I didn't know the Nemenyi/Bobby Fischer connection. Another example of separate upbringings would be Steve Jobs (put up for adoption) and his biological sister, award-winning novelist Mona Simpson (raised by their mother, mostly).

Expand full comment

Full sister or half-sister?

Expand full comment

Looked it up: full sister.

Expand full comment

Yes -- and as I recall the Isaacson biography, his non-college educated adoptive parents agree as part of the terms of adoption to send him to college, but even so seem kind of confused by how precocious the kid is and aren't quite sure how to support his interests.

Expand full comment

By definition, light speed is 299,792,458 meters per second. I am embarrassed every time I look at this number. We can do better.

By shortening the meter by less than 0.07%, we can redefine light speed to be exactly 300 million m/s. This will be much easier for everyone to remember and inconvenience only some easily-ignored eggheads. (Lengthening the second instead would be a much dicier proposition.)
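For anyone who wants to check the 0.07% figure, here is a quick sketch in Python (just arithmetic on the defined value of c; the variable names are mine):

    # Minimal sketch: how much shorter would the meter need to be for
    # the speed of light to come out as an even 3e8 new-meters per second?
    c_old = 299_792_458          # m/s, exact by definition
    c_target = 300_000_000       # desired value in new meters per second

    # A new meter that is (c_old / c_target) of the old one does the trick.
    scale = c_old / c_target     # ~0.999308
    shrinkage = 1 - scale        # ~0.000692
    print(f"shorten the meter by {shrinkage:.4%}")   # -> 0.0692%, i.e. under 0.07%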

Who's with me?

Expand full comment

'There might even be something better than the Metric system. One of the glories of that system is that it is entirely base-10, whereas the Imperial system’s units don’t follow any consistent scaling pattern (e.g. – there are 8 ounces in 1 cup, 16 ounces in 1 pint, 32 ounces in 1 quart, and 128 ounces in 1 gallon). But base-10 systems of measurement are only useful because humans are terrible at doing math in their heads, and because we have ten fingers to count on. A being with superhuman intelligence could just as easily use the Imperial system, or a system that hasn’t been invented yet that was not base-10 (I happen to think base-12 systems are superior) and maybe had some of the inconsistent aspects of the Imperial system.'

https://www.militantfuturist.com/why-america-will-go-metric-but-it-might-not-matter/

Expand full comment

BTW: You should read this material about recent changes in the SI system.

https://www.nist.gov/si-redefinition

"On November 16, 2018, in Versailles, France, a group of 60 countries made history. With a unanimous vote, they dramatically transformed the international system that underpins global science and trade. This single action finally realized scientists’ 150-year dream of a measurement system based entirely on unchanging fundamental properties of nature.

"On that day, the International System of Units, informally known as the metric system — the way in which the world measures everything from coffee to the cosmos — changed in a way that is more profound than anything since its establishment following the French Revolution."

RTWT

Expand full comment

I'm familiar with the recent decommissioning of the kilogram.

Expand full comment

To be clear, what was decommissioned was the lump of metal kept in Paris that was the prototype standard of the kilogram. Sadly, it turned out to be unreliable and drifted out of sync with its copies kept by national metrology authorities like the US federal NIST (National Institute of Standards and Technology).

It has been replaced by procedures executed to determine the fundamental constants of nature. In the case of the kilogram it is the measurement of Planck's constant. This experiment can be done by any competent physicist anywhere throughout space and time.

It is double-checked by experiments that recreate Avogadro's number in a macroscopic object.

Thus metrology is grounded in the basic physics of the universe. Strikes me as a triumph.

Expand full comment

Yes and no. One of the major consequences of the new definitions is that the key fact you were taught in high school chemistry, that the mass in grams of a mole of a substance equals the mass in atomic mass units (or daltons) of an atom or molecule of the substance, is no longer true by definition. The relationship is 1:1 within experimental error *now*, but as better determinations of the kilogram and mole are made it is entirely possible it may drift away. (Or we would have to start tweaking atomic masses for reasons other than new measurements, which is rather a nightmare for keeping track of historical data.) As you may imagine, this does not sit well with chemists, and there is ongoing debate. It would not surprise me if there were additional changes to the SI.

Expand full comment

Trust me. In 60 years you won't remember what you were taught in high school chemistry.

Expand full comment

It's the installed base, dude. There are way too many systems and instruments to change. But why does it rankle? We have computers and the internet. Try https://www.wolframalpha.com/ to solve all of your unit problems. I have always been fond of: https://frinklang.org/fsp/frink.fsp

Expand full comment

Small stuff. Let's redefine our number system so that π = 1. I'm not even sure how you would define natural numbers, but it would be wild I am sure.

Expand full comment

That breaks the link between counting numbers and calculating numbers. Not very useful.

Expand full comment

Not very useful is kind of the sine qua non here, no?

Expand full comment

The rational community?

Expand full comment

Oooo burn! But I just meant this particular discussion.

Expand full comment

You can actually do that.

https://en.wikipedia.org/wiki/Non-integer_base_of_numeration

The *notation* is a problem, but mathematicians tend to care more about the underlying number system than they do how one might *write* numbers in it.

Expand full comment

That's making pi 10, not 1.

Expand full comment

Sweet! It even inlines some code to convert numbers to base π.
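For the curious, here's a minimal sketch of the greedy expansion that writes an ordinary number in base π (not the code from that page; my own illustration). Note that π itself comes out as "10", which is the point made above:

    import math

    def to_base_pi(x, places=4):
        """Greedy beta-expansion of a non-negative number in base pi.
        Returns (power, digit) pairs from the leading power down to pi**(-places);
        digits range from 0 to 3."""
        digits = []
        k = 0 if x < 1 else int(math.floor(math.log(x, math.pi)))
        for power in range(k, -places - 1, -1):
            d = int(x // math.pi ** power)
            digits.append((power, d))
            x -= d * math.pi ** power
        return digits

    # pi itself is digit 1 at power 1 and zeros elsewhere, i.e. "10" in base pi.
    print(to_base_pi(math.pi))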

Expand full comment
founding

If you change your base units by 0.07% and don't get *everything* right during the conversion, you'll run into very big problems like radios broadcasting on different channels than you think they are and GPS coordinates being off by several kilometers. It's going to be much easier to A: pretend that the speed of light is 3E8 m/s in all applications that don't require four significant digits and B: look it up (or hotkey it) on the rare occasion when you do need the fourth significant digit.

Expand full comment

I thought that everything going right was just assumed. What kind of world would it be if people got things wrong?

Expand full comment

Reading your comment and the replies below makes me wonder: if we could redo our units of measurement from scratch, what's the "best" way to do it, and why would it be best?

(I suppose "best" is use case-dependent, so perhaps "very good for most use cases" or whatever.)

Expand full comment

Switch to base-12 numbers.

Expand full comment

My post about Planck units below was a joke, but thinking about it a bit more, it doesn't have to be such a terrible idea. The speed of light, for example, is exactly 1 pl / 1 pt (one Planck length per Planck time). Several constants in physics would be 1, e.g. the Boltzmann and gravitational constants.

Now, the units are generally very small, so you'd have to introduce some sort of shorthand convenience unit where e.g. 1 "convenience Planck length" equals 10^33 "real" ones.

Also, since we're making massive changes from scratch we might as well switch to a base-12 numeral system while we're at it. Being able to neatly divide things with 2, 3, 4, and 6 would be useful.
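If anyone wants a feel for the magnitudes, here's a minimal sketch (rounded values for G and ħ, which are measured quantities; only c is exact by definition):

    import math

    c = 299_792_458.0          # m/s, exact by definition
    G = 6.674e-11              # m^3 kg^-1 s^-2, measured
    hbar = 1.054_571_817e-34   # J s

    planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
    planck_time = math.sqrt(hbar * G / c**5)     # ~5.4e-44 s

    # The speed of light is one Planck length per Planck time by construction.
    print(planck_length / planck_time / c)       # ~1.0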

Expand full comment

Read my comment above and the link to https://www.nist.gov/si-redefinition

Expand full comment

About 20 years ago, when the web was young, a physicist named Leonard Cottrell worked out an entire system using Planck units, and he published it at https://www.planck.com/.

Sadly, Prof. Cottrell went to his reward 5 years ago; you can see a memorial page to him at that link. I am sure you can find the original web pages on the Wayback Machine. If you do, please post the link here.

It should be noted that some constants would have the value 1 in such a system, but the fine-structure constant, which may be the most important number in this universe, will still be ~1/137. https://en.wikipedia.org/wiki/Fine-structure_constant

Expand full comment

The metre is currently defined as the length of the path travelled by light in a vacuum in 1/299792458 of a second.

So probably easier to change that definition.

Don't even think of changing the second or the world's computer scientists will launch an unholy crusade against you. We already have enough issues with time.

Expand full comment

Yes, the second is upstream from the meter, so it's not a good starting point for changing.

Expand full comment

Just use Planck units as a base for everything; if nothing else it'd be a good flex on any aliens we'll meet later on. "Oh, you base all your measurements on the length of the foretoe of your high priest 9000 years ago? Yeah, we switched to the fundamental units of the universe pretty much as soon as we discovered them."

Expand full comment

In high energy physics people use a units system in which c=1 and hbar=1. So, yes. But don't mess with the SI because the amount of re-calibrating and induced errors would be immense.

Expand full comment

Why not change the duration of one standard second? Why define the length of one Earth day with 86,400 units of something? Shouldn't it be rounded off to 100,000?

Come to think of it, we should probably change the length of a second so it equals a simple, whole-number value of some phenomenon that is universally constant, like the half-life of some common isotope.
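For reference, a trivial sketch of where the 86,400 comes from and what a 100,000-unit day would imply:

    legacy_day = 24 * 60 * 60        # 86,400 seconds: the inherited 24 x 60 x 60 split
    decimal_day = 100_000            # the proposed round number of units per day
    new_second = legacy_day / decimal_day
    print(new_second)                # 0.864 of a current second, i.e. ~14% shorter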

Expand full comment

There have been a few attempts at metric time, for both seconds to days and days to years. I haven’t seen any that feel intuitive enough to actually use, though. Ten hours in a day isn’t enough resolution, a hundred is too much. And it’s just damned inconvenient that we see ~365 sunrises a year. Hard to convince people that there are 3.6 sunrises a “day”.

Expand full comment

That's "decimal" time. Metric time would involve measuring things in kiloseconds and the like.

Expand full comment

The physicists already have you covered: "The second is defined as being equal to the time duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the fundamental unperturbed ground-state of the caesium-133 atom." For some definition of simple.

Expand full comment

"By definition, light speed is 299,792,458 meters per second. I am embarrassed every time I look at this number. We can do better."

I believe that the physicists tend to use '1' for the speed of light and select units carefully. '1' is easier to remember than 300,000,000.

Expand full comment

Is "1" better than "c"?

Expand full comment

Yes, because in practice when you see a c you're always multiplying or dividing something by it. Using units where c=1 you don't have to write it at all, which saves ink, and more importantly brain cells.

Also you can accidentally drop a factor of c from your calculations and screw them up, but you can't accidentally drop a factor of 1.

Expand full comment

300 is not a particularly metric value. Why not divide the meter by a factor of 3.34, getting light to an even 1B m/s? It would also make one meter equal to 11.8 inches, close enough to use meters and feet interchangeably for most purposes.
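Checking the arithmetic with a quick sketch (the exact division factor comes out nearer 3.336 than 3.34):

    c = 299_792_458                      # m/s, exact by definition
    new_meter = c / 1_000_000_000        # length of the new meter in old meters
    print(new_meter)                     # ~0.2998 old meters, i.e. the meter divided by ~3.336
    print(new_meter * 100 / 2.54)        # ~11.80 inches, close to a foot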

Expand full comment

This misses the feature where the average person doesn't notice the difference.

Expand full comment

We should build the system around c, the density of water at 277 K, and g.

Expand full comment

Water? How parochial! Use hydrogen instead.

Expand full comment
Comment deleted
Expand full comment

We used 3e8 but I'm glad that 3e9 worked out for you.

Expand full comment

N.B.: it’s Andrés Gomez Emilsson, not Anders Gomez Emilsson.

I love QRI’s stuff!

Expand full comment
author

Thanks, fixed.

Expand full comment

I don't think we can create a society that doesn't reward people for their genes and I do not think we should strive for that.

Expand full comment

See Harrison Bergeron by Kurt Vonnegut for a dystopian vision of what that could look like.

Expand full comment

Meritocracy and affirmative action are not necessarily mutually exclusive

Expand full comment

...huh? Yes, they are; they are literally two entirely different criteria with no more overlap than one would expect by chance.

Expand full comment

Mutually exclusive = 0 overlap. You're thinking of "independent".

Expand full comment

I'm thinking in terms of selection power, not the actual selected candidates. Each bit of selection power used to eliminate candidates for their sexual, racial, or ethnic identities is a bit of selection power which cannot be used to eliminate candidates for being underqualified for the position.

Occasionally, the same candidates would be eliminated on *either* criterion... but how likely is that?

Think about that old sequence post "Stop Voting For Nincompoops", and how EY measures the selection power of voting against the media, the party structure, etc. That's the sort of framework I am thinking of, when I say the criteria are mutually exclusive. Each bit of selection power, each question which eliminates half of the remaining candidates, can only eliminate them based on some attribute rather than other attributes.
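For what it's worth, here's a minimal sketch of the "bits of selection" bookkeeping described above (my own illustration, with made-up candidate counts):

    import math

    # Going from N candidates to k survivors spends log2(N/k) bits of
    # selection power, regardless of which attribute the filter keys on.
    def selection_bits(before: int, after: int) -> float:
        return math.log2(before / after)

    print(selection_bits(1024, 1))    # 10.0 bits to pick 1 candidate from 1024
    print(selection_bits(1024, 32))   # 5.0 bits spent on one criterion...
    print(selection_bits(32, 1))      # ...leaves 5.0 bits for a different one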

Expand full comment

Ah. The way you put it wasn't very clear.

Expand full comment

How far does that go? You have to redistribute and/or prevent extensive capital accumulation or else you won't have equality of opportunity either.

Expand full comment

This book is political/philosophical rather than just totally scientific. I didn't really like the philosophical elements of the book. Progressives who like science and have an open mind will like it.

Expand full comment
author

I've been following this conversation on editing - https://twitter.com/RichardHanania/status/1461877959078277123 - and - don't make fun of me - I'm kind of confused by the whole thing.

Suppose you're a famous writer (doesn't have to be Shakespeare level, just a journalist at a top publication with lots of experience). Shouldn't we expect that your editor, who is not a famous writer, is a worse writer than you? And when a better writer and a worse writer disagree about the best way to write something, shouldn't we expect that the better writer is more likely right? I understand editors have a role in correcting typos and factual mistakes, and in making sure your opinion/style line up with the rest of the paper, but people seem to imagine that editors make writing better. How?

I can see a few possible arguments:

1. Publications get their best writers to retire and become editors, so the average editor is a better writer than the average editee. Seems possible but factually false.

2. Good writers don't look over their own work in detail, so editors have more time and are doing a service writers couldn't do. Makes sense except that this person seems to think when a writer and editor disagree (and both are paying a lot of attention to the question), the editor is more often right.

3. Weird game theory stuff. Each journalist wants newspaper readers to spend as much time as possible reading their own column (so might write longer stuff), but an editor has the company's best interests in mind and so wants each article to be shorter so readers can read more of it. But that doesn't explain why eg Glenn Greenwald should have an editor.

4. People are psychologically blind to certain flaws in their own writing, so even a worse writer can tell them when they're wrong and be right. Seems pretty speculative compared to the "better writers are better at writing than worse writers" factor.

Expand full comment

With regard to fiction writing, I'd say #4.5: writers *as a group* are frequently blind to certain flaws in writing, regardless of whether it is their own, and in particular the flaw "it's too long". If you compare amateur fiction to published fiction, the first thing you will notice is that the former is ~2-5x longer for a given scope of plot (novella, novel, series). Apparently, ability to know how long a novel should be is poorly correlated with ability to devise fictional plots and convert them into words-on-page. The solution is "find someone good at each of them and run an assembly line", but obviously if the writer doesn't listen to the editor when the person-whose-job-it-is-to-know-when-it's-too-long says "it's too long", the result will be too long.

The most obvious recent example here is the Harry Potter novels. A simple look at the physical books makes the exact point at which Joanne Rowling was famous enough to bulldoze past her editors obvious; the books become literally twice as thick (http://thayerdemay.com/jpg/harry-potter-portable-bookcase-01.jpg).

What's going on with the Hanania vs. Helmuth thing is that Helmuth's "editing" is not the same thing as fiction-book "editing" (most of the time; the CIA movie of Animal Farm is an outlier). Helmuth conflates, in as many words, "editing for clarity and concision" with "political censorship". The former generally increases quality; the latter decreases it. The two obvious explanations for Hanania's experience are:

1) Hanania is just uncommonly good at editing for a writer, to the extent of being better at editing than the median editor (in which case his advice is myopic and should be ignored by the vast majority of writers), or

2) Being out from under the censorship board is worth not having an editor (at least for Hanania), but Hanania would do better still with an editor who wasn't censoring him.

I'm not 100% sure whether you'd do better with an editor for length, but I'd say "yes" at 60-70% (a lot of your length comes from deep dives, which are key to your niche, but some could probably be trimmed usefully). I'm ~99% sure you'd do worse with a Helmuth as editor/censor (which is, I know, what at least some of the people suggesting you get an editor are proposing).

Expand full comment

"4. People are psychologically blind to certain flaws in their own writing, so even a worse writer can tell them when they're wrong and be right. Seems pretty speculative compared to the "better writers are better at writing than worse writers" factor."

Not sure why you think this is speculative, it seems like commonsense to me. Different empirical intuitions I guess.

Expand full comment

I think you're confusing writing as a form of art and writing as means to communicate. Nobody would edit Faulkner, but nobody would hire Faulkner to write an exegesis on the Electoral College for popular consumption. If the purpose is communication, then brilliance is less important than clarity, and an editor is extremely valuable because he can tell you whether what's in your head is making it onto the page -- something which even the best writers can have a hard time figuring out.

Expand full comment

Faulkner had editors, as has pretty much any traditionally published author since 1800 or so, as far as I know. (Saxe Commins and Albert Erskine worked with other great authors, too, and could be considered great in their own right.)

Expand full comment
founding

2, 4, and you aren't actually a "great writer" if you can't make yourself clear to a merely good writer who is paying attention. Back in usenet days, we had the term "brain eater" for the thing that apparently attacked writers just about the time they got famous enough to overrule their editors. It did not refer to the high quality of their prose.

Expand full comment

Is anyone actually claiming that when a good writer and an editor disagree the editor is more often right? I think the widespread beliefs are a) an editor can often make helpful suggestions the writer will accept, even for good writers and b) bad writers will be improved by having to listen to an editor.

I don't see anyone actually saying "even the best writers should give an editor the final say on their writing"—at most they imply it in a motte-and-bailey way.

Expand full comment

I agree with everyone else that it's largely 4. If you accept that typos can be missed by the original author but easily spotted by someone else, it's not hard to see that other larger-scale structural problems might follow the same pattern. You never explained P, you explained Q twice, you should have mentioned R before you mentioned S because it flows better, that kind of thing.

Expand full comment

re. 4: I've done presentations/comments where I omit entire lines or totally fail to explain a key point, and then totally fail to notice on review.

Editors are important because they let someone see if your shit actually makes sense without context; especially if you are a super loose/fast writer.

Expand full comment

If I go back to a draft I wrote a couple of months ago, I frequently see and fix things wrong with it that I did not see when I wrote it. I think the same pattern exists if someone else competent goes over my work, seeing things I wouldn't.

Expand full comment

P != NP - it's easier to spot wrong things than to create good things. And because of 4 it can't be done by the author.

Expand full comment

Have you worked with an editor before? There might be a professional in your audience who's up for an experiment. (To my eye, your writing could benefit from editing, though perhaps not in a way aligned with your goals. I wouldn't make a pass on one of your pieces without your blessing, though.)

Expand full comment

I wouldn't be surprised if Unsong has gone through formal editing, and if so I'm curious if that informs Scott's thinking here. [fwiw Unsong is one of my all time favorite books, I'm curious-but-nervous what an editor might do to it]

Expand full comment

I agree. It would be even more interesting to A/B test the pre and post edited versions of the posts.

Expand full comment

4 doesn't sound speculative at all to me. I'm much less likely to notice typos or other issues when rereading something I wrote an hour earlier than something I wrote a year earlier.

Expand full comment

Also, I feel like editing something and writing something from scratch are two different skills (the standard recommendation is to do the latter when drunk and the former when sober...), so just because Alice is worse than Bob at the latter doesn't mean she also is at the former.

Expand full comment

"... people seem to imagine that editors make writing better. How?"

I spent 18 months editing a JavaWorld column and before that wrote for the on-line magazine, so let me explain what *I* did.

First, I did NOT re-write portions of the article. My comments would be of the form:

1) I tripped over this and had to re-read it a few times to understand what you meant, or

2) This is grammatically correct, but still can lead to confusion, or

3) This idiom won't be understood by some percentage of our audience, or

4) You spend 50 words to say XXX. Would these 10-20 words accomplish the same thing?

5) This is factually wrong. Here is why. What do you want to do?

[There were probably more ... I wasn't studying the taxonomy of my edits....]

The important bit to note is that in all five of these I'm still punting things over to the author, but with a detailed explanation of why I think the given sentence or paragraph (and occasionally section order) has a problem. I got very little pushback on my observations, though sometimes how to FIX the issue still left a difference of opinion. In this case, I tended to let the author write what they wanted.

I can imagine exactly the same thing for fiction, too. "Hey, famous author, I don't understand what is supposed to be happening in this scene. Is this intentional?" I might not be able to fix things (although often I might be able to suggest a fix), but I can certainly tell the author when something isn't working for me. If this is okay, then one could leave things "as is." If the author had a problem with a reasonably competent reader not understanding what they wrote, then maybe a rewrite of that section would be a good idea.

Expand full comment

Yeah, I've had occasion in a job or two to stare helplessly at a Word document - or handwritten sheaf of notes - cheerfully handed on to me by a boss and go "How can I turn this into intelligible English?"

Not because they're stupid or incapable, just that TOO. DARN. MUCH. WORDS. for what they *need* to communicate, which is getting lost in all the verbiage, "oh yeah and let me drag in this thing I just remembered about our core objectives" and "nobody wants or needs to know this about what we started off doing five years ago and then stopped doing".

Expand full comment

Yes. In the case of fiction, the author knows his world much better than the reader will, so things may be obvious to him that won't be obvious to reader or editor.

It can even happen for nonfiction. When I went over the German translation of one of my books I noticed that a critical point, an explanation of the distinction I make between utility and value, was missing. It turned out that it was missing in the English original too — I just hadn't noticed.

After all, I understood it.

Expand full comment

Speaking of translations, Tolkien's letters have two from 1956 about a Dutch and Swedish proposed translation of "The Lord of the Rings" respectively, and while he was very angry over the changes proposed by the Dutch translator, he really took a pot-shot at the Swedish one:

"Sweden. The enclosure that you brought from Almqvist &c. was both puzzling and irritating. A letter in Swedish from fil. dr. Åke Ohlmarks, and a huge list (9 pages foolscap) of names in the L.R. which he had altered. I hope that my inadequate knowledge of Swedish — no better than my kn. of Dutch, but I possess a v. much better Dutch dictionary! — tends to exaggerate the impression I received. The impression remains, nonetheless, that Dr Ohlmarks is a conceited person, less competent than charming Max Schuchart, though he thinks much better of himself. In the course of his letter he lectures me on the character of the Swedish language and its antipathy to borrowing foreign words (a matter which seems beside the point), a procedure made all the more ridiculous by the language of his letter, more than 1/3 of which consists of 'loan-words' from German, French and Latin: 'thriller-genre' being a good specimen of good old pure Swedish."

Expand full comment

#4 actually seems very credible to me, I'd see it as a special case of the general "humans are blind to their own flaws". That fits in with what seems to me to be the primary job of the editor, to be an outside view for the writing before it's released into the wider world.

Now I don't have a big influential blog so I wouldn't presume to know better, but I think "better writers are better at writing than worse writers" presupposes that writing is a singular skill writers can be ranked on, with better writers always trumping worse writers, akin to poker hands. But isn't writing a lot more rich and multifaceted than that? A writer may appreciate the influence of an editor which negates shortcomings in one facet, while disagreeing with and ultimately overruling the editor's advice (assuming that the writer has the final say) in others.

Expand full comment

side note: the chief editor's thread there seems to me to be completely argumentative, mean spirited, and lecturing, and completely without any consideration of even the possibility of opposing points ("I get the attraction. It can be painful to hear from an editor that, say, your introduction takes too long...") Doesn't seem to me to be the best way to generate fruitful discussion. Is much of twitter like this?

Expand full comment

"side note: the chief editor's thread there seems to me to be completely argumentative, mean spirited, and lecturing, and completely without any consideration of even the possibility of opposing points"

I actually read it more as an effort to convince everyone that editors are, too, valuable and certainly shouldn't be laid off. Also, that publishing without an editor is kinda hack-ish and what amateurs do.

So not so much an argument, but a plea.

[Steelman this, if necessary. I'm writing it without the benefit of an editor and am not trying to dog-pile...]

Expand full comment

> Is much of twitter like this?

Yes, and almost all of it that involves prominent media figures and journalists. It's a nasty place.

Expand full comment

What's your editing/reviewing process like?

Expand full comment

#4 is probably the real reason. But let me suggest another possibility.

Those AIs that generate fake faces are really two AIs: one that's really good at generating faces and another that's really good at figuring out if the generated image is a face. Just because the first is better at generating faces doesn't mean that it doesn't need a filter, aka, an editor.

If you look at George Lucas's ideas for Star Wars that never made it into the original movies, you can see that he had a LOT of ideas, and many of them just weren't any good. It seems like he has a real talent for generating ideas, but he needs an editor to tell him which ones are good and which ones aren't. (Of course, by the time the prequels came out, he was so successful that nobody was going to tell him no. So he had less of a filter, and more of the bad ideas slipped in.)

You see this a lot with book series, where the later books just aren't as good, with the decline starting right about when the author got famous. Having lots of ideas isn't the same as knowing which are good.

I think the same may apply to writing, even if the analogy isn't perfect (being good at writing includes execution in a way that outlining a plot does not). And I think the book series' declining quality is a real example of this applying even to writing.

Expand full comment

Agreed, and to generalize, “writer” covers many different skill sets:

1. Having “good” ideas (creative or insightful of whatever, depending on field)

2. Being able to clearly communicate ideas so readers can understand

3. Being able to communicate ideas in an engaging, entertaining way

There are probably more. Editing can help with all of those, at least. I would expect different writers, with different strengths, to benefit from different aspects of editing.

The adversarial training example is a good one — having a second layer that tests for weaknesses is essential. Maybe the very very best writers have that built into their heads, but most don’t. And the skills involved in checking work are different from the skills in creating work.

Expand full comment

I prefer some of the original ideas to the final result. Not that it is terrible, but some of the original ideas were better. Also, so many of the ideas were changed by circumstance, not by a rigorous editing process. Like sudden staff deaths and so forth. Or Ford and Fisher dating. And tons of coke use.

Expand full comment

I've been edited a number of times (books, articles, stories) and it's clear that the skills involved and the personal approaches of author and editor are very different. Very few authors become editors, and I doubt if any editors become authors. Editors pose questions like "sometimes you use a capital letter for this word and sometimes not, which is it?" to which authors tend to respond: "couldn't care less, you decide." But equally, an editor of a story I recently published quite correctly pointed out that some of the dialogue was written in such a way that it was hard to be sure which character was speaking, and I corrected it. But none of us can be confident of producing a faultless piece of prose every time, and in a sense, an editor carries out many functions analogous to those of a peer reviewer, or just a colleague you prevail on to read your text. They are also, by definition, a lot closer to the general reader than you are.

Expand full comment

"I've been edited a number of times (books, articles, stories) and it's clear that the skills involved and the personal approaches of author and editor are very different. Very few authors become editors, and I doubt if any editors become authors."

Playing for both teams (as it were) seems not uncommon in the science fiction world: Fred Pohl, Horace Gold, John Campbell and Kristine Kathryn Rusch come to mind. I don't know if science fiction is an outlier.

It is true that there isn't a lot of overlap, but then there can't be. Way fewer editorial slots than freelance author gigs.

Expand full comment

I think the examples you listed are all things no one, even Glenn, would have an issue with. It is the other stuff that causes problems.

Expand full comment

I think Steve Pinker's chapter on parenting in The Blank Slate is a good (and beautifully written) summary of the basic evidence against parenting mattering for children's outcomes.

Expand full comment

If we feel that our initial application didn't contain enough specificity on how it related to your primary requested goals can we edit it somehow?

Expand full comment
author

You can send in a new application if you want.

Expand full comment

The application form doesn't save data and while the change is, in my opinion, significant and relevant it isn't super large. Writing a whole new application from scratch seems overdone when I sit here looking at it.

It is submitted under my name so it should be easy to link to this comment. The purpose of the project is basically to provide the experience of making policy choices and executive decisions such that you are more able to understand what political and decision making figures deal with in our society. It takes the form of a game to provide stakes and to sort of insulate it from ideological biases and connections to modern political disputes. In my experience with politics and think tanks and political news/literature most regular people don't have any way to really understand the position of decision makers. Whether it is leaders in groups like the Sunrise Movement or people ensnared by grifters like Jimmy Dore involved in the Force The Vote debacle outsiders just lack the framework for figuring out what works and why. You could argue similarly for Trump related outside groups or even for centrists and technocrat fanboys.

This is one key issue related to the leftist "circular firing squad". Someone like Alex or Bernie or even Ed Markey makes some compromise or has one position out of sync with the ideology, and this causes a breakdown in cohesion. Having a simulated experience where you are the Joe Biden or the Obama or the Bernie who had to make tough choices would allow people to engage more effectively with politics. Of course it would also bring the less understandable actions of Sinema or Richie Neal into sharp relief.

Two of the examples I used to use among grand strategy communities when talking about the purely enjoyment based benefits were as follows:

There is a happiness opinion equation related to food. Improving access and diversity of food, subject to the traditionalism aspect of the Ideology system, creates a sort of broad social capital across populations. There is an adjustment factor such that over time losing access to something causes increasing discontent. Thus the waging of the cinnamon wars. Although the opinion system is mostly fungible. You could take that hit and focus on something else instead.

The second is a society that worships dragons. A leader secretly captures and chains a dragon under a mountain and uses one of the forms of magic available to take on draconic traits. Combined with a campaign using the intrigue and propaganda systems in the game, he portrays this as a divine blessing. Consider your options as a ruler of a nearby nation if you learned his secret.

These two examples can both be analogized to real world issues but that is just an interpretation. We want to avoid, for the most part, situations that are too directly comparable to the real world since that causes obvious ideology blindness.

Cinnamon and dragons are fun examples but it could be losing access to the potato trade or a different kind of blasphemy. Then you have to make tough political choices to resolve conflicts. There are more traditional issues like over-logging or w/e as well.

Expand full comment

Andrés Gomez Emilsson is part of a center that tries to formally study consciousness. (The [Qualia Research Institute](https://www.qualiaresearchinstitute.org/).) My impression is that many people on LessWrong are philosophically opposed to their approach, which may be in part because Eliezer Yudkowsky has different positions on consciousness. But if anyone here has read their stuff, I'm curious what you think.

Roughly, they assume dual-aspect monism, which asserts that there exists only one kind of stuff, and matter and consciousness are its two aspects. This is in the family of panpsychism. They also assume the consciousness aspect has structure and can be formalized much like physics, only it hasn't been done yet.

Expand full comment

Are they able to demonstrate that their model does a better job of predicting physical things than a model where consciousness doesn't exist? I'd be very surprised if it did, but if it doesn't, that seems to imply a lack of a causal connection between consciousness and the patterns of neural activity that make up our thoughts.

My take is that consciousness and qualia are probably a failed attempt to explain an epistemological paradox - like if you tried to explain the liar's paradox by deciding that the statement can't be true or a lie, so it must be some third thing. Which may seem sensible at first glance, but if you actually tried to figure out what this thing is and how it interacts with the rest of reality, you'd run into all sorts of confusion - it would be an intractable "hard problem".

The paradox as I see it is that subjectively, my perspective is unique- it's undeniably a different sort of thing from all other perspectives- but that uniqueness doesn't exist objectively- my perspective is just one of infinite possible perspectives. So assuming that the subjective and objective models have to be reconciled into a single understanding of reality, that quality of uniqueness seems to exist and not exist simultaneously.

My best guess for "resolving" that paradox is that, in the same way that something can only be good or bad relative to some set of terminal goals, maybe things can only be true or false relative to some basic assumptions about reality- things like the existence of causality, maybe, or that patterns will continue into the future. Maybe the existence of a particular perspective is one of those basic assumptions, and one that we hold when forming our internal models of reality, but ignore when communicating that model. So we each have a subjective model where our perspective has this unique quality, and an objective model where it doesn't, and since the basic assumptions that underlie each are different, it's just not possible to fully reconcile them into a single Truth.

Expand full comment
author

I think of QRI's research program as more neuroscientific than philosophical, even if they also do some philosophical speculation. I don't think it would change my opinion of them if my philosophy of consciousness was totally different (unless it was so different that I didn't believe the brain could affect the mind, in which case, what am I even doing?)

Expand full comment
Comment deleted
Expand full comment

That's an argument against panpsychism, but not against dual-aspect theory.

Expand full comment
Comment deleted
Expand full comment

So what's all the fuss about?

The fuss is about the fact that human consciousness is the only thing humans know from the inside... and treating it as just another high-level property, like floweriness, isn't adequate to explain the subjective properties apparent to a conscious subject.

Expand full comment

Dual aspect monism means that there is only one kind of stuff, i.e. matter, and two ways of studying it: physics and consciousness. Also (not explicitly part of DAM but part of QRI's theory), physics is complete, i.e. a total understanding of physics lets you predict how all particles move without ever caring about the other aspect. The physical and consciousness aspects are always isomorphic, the task is to find the isomorphism.

Expand full comment

I can see what you're saying but I'm not sure I agree. Consciousness is in some sense a special property in a way that flowerness isn't.

Expand full comment

Why would you make a deadline for Thanksgiving Day?

Expand full comment

In the original post Scott said he'll 'close applications in two weeks', several commenters asked for a concrete deadline, and here it is.

Expand full comment

And yeah, if somebody is giving away $250,000, I think the burden is totally on you to explain why the choice of deadline is bad. He could have made it 3:17 am on Sunday or Monday or the day after Thanksgiving, and it still would be fine.

As a digression: it's a 'major US holiday' as you say, and I'd hope ACX is a bit more international than that by now.

Expand full comment

Maybe it's even meant to penalize the US citizens a little.

Expand full comment

The first point is absurd. I asked a question; that doesn't call for the conversational sleight of hand of asking me why I'm asking it. The second point is reasonable though, thanks.

Expand full comment

Please be aware that I didn't say nor mean 'if ... you're not allowed to ask a curious question'. My first point rather referred to the more judgemental parts in your comments below, indicated by 'simply bizarre', 'off limits', 'out of touch', absurd, draconian, 'silly', and asking whether a professor doing something similar is 'an asshole'. Oh, and I'm aware that only some of those were directly related to the deadline here, and some were rather used for comparison.

My pleasure.

Expand full comment

Should he filter for all potential holidays? Also, would it be better to make it Wednesday instead?

Expand full comment

“All potential holidays”?? It’s a major US holiday, one not even tied to religious beliefs. And yea, any day that isn’t a major holiday would be more reasonable, if not respectful. It’s simply bizarre to me to make a due date on Thanksgiving.

Expand full comment

You don't have to hand it in on Thanksgiving. If you think between now and then is too short a time to draw up an application, then say so. If you think there's not enough time left in the year and the deadline should be in February of the new year, then say so.

But nobody is asking you to give up all your Thanksgiving plans and slave away the night before so you can send it off at the very last moment of 25th November before it turns into 26th November.

Expand full comment

Why is it a good idea to have it due on Thanksgiving?

Expand full comment

Five gets you ten that it's so Scott can go over them during the long weekend. But IANASA.

Expand full comment

Not a Scott Alexander? Hmm. Not sure being *a* Scott Alexander would be of any help to answer that :)

Expand full comment

I think the burden is on you to explain why the chosen deadline doesn’t work, not for Scott to explain why it does.

Expand full comment

This isn't a court of law. There's no burden on anyone to explain anything. Holidays are holidays. There would be a revolt in my office if my boss asked for stuff on Thanksgiving. I know this isn't the same situation. But holidays are off limits in those settings for many reasons. I was just asking a question, which no one has answered; instead it was deflected back onto me as though there is no reason to ask that question.

Expand full comment
Comment deleted
Expand full comment

It’s pretty draconian and silly. It’s like having a college term paper due date of Thanksgiving. Is the professor an asshole?

Expand full comment

If a college professor did this, I'd assume the purpose was to point out that assignments don't absolutely have to be done at the last minute before they're due.

Expand full comment

Why have a deadline at all? Your point borders on the absurd.

Expand full comment

"Why have a deadline at all?"

Well, when we solicit applications for vacancies where I work, we quite like people to have their applications in by a certain date so shortlists can be drawn up, interviews held, and the vacancy filled as soon as possible.

Otherwise, we'd have people asking to interview for a job that was filled two years ago, and that would be silly, now wouldn't it?

Expand full comment

I was being facetious. That is precisely my point. Deadlines exist for reasons. Important reasons. They almost never occur on national holidays. Not in the academic world, not in the business world. Why did he choose that date, considering it is an arbitrary date anyway? It's absurd. But I'm not trying to argue with lackeys. I asked a simple question: why did you pick Thanksgiving? I'm not going to get a response. It just seems out of touch and absurd to me to pick Thanksgiving Day and not the day before the holiday.

Expand full comment

Suppose the deadline was the Monday before Thanksgiving. I assume you'd be okay with this?

And now suppose someone said "I need an additional three days to finish my application," and Scott said, "What the heck, I'll give an extra three days to everyone." How would this do anyone any harm? Anyone who planned to be done by Monday could still do so. No one's holiday is being interrupted unless they so choose.

Expand full comment

If that happened that would answer my question. Someone asked for three more days and to be fair he gave everyone three more days, and that just happened to be Thanksgiving. Question answered. But again is that what happened or no?

Expand full comment

Did that happen?

Expand full comment
Comment deleted
Expand full comment

Why is it good for it to be on Thanksgiving? Of course one doesn't have to submit it then. But what is the reason for it to be due on Thanksgiving Day as opposed to the day before, or even the day after, or any day, really, that isn't a national holiday? I'm not hearing any logical, well-thought-out reasons here for why it *should* be on Thanksgiving. Which is my original question. All I am seeing is a bunch of reasons why others think I am ridiculous for asking that question and for thinking that pegging that date for a due date seems silly and ridiculous. Can you offer a good reason for it to be on Thanksgiving?

Expand full comment
Comment deleted
Expand full comment

“Bad”? I don’t know? I just think it’s a ridiculous day to arbitrarily pick for a due date, and a date that the vast majority of people would have steered clear of, for pretty obvious reasons. Pick that day in a business setting - everyone will complain. Pick that in a classroom setting - everyone will complain. So I asked why. I’m not trying to troll here. The evidence that I offered is it’s a national holiday, which is a bad day for a deadline, again for obvious reasons. No one is going to work on something on a holiday so the day before or after makes way more sense. It’s a bizarre pick so I wanted to know the reason why. I am not trying to attack the due date or the person making it.

Expand full comment

Big n, high-quality, longitudinal, very recent - no impact of parenting styles or practices on children's personalities as measured with the Big Five

https://online.ucpress.edu/collabra/article/7/1/29766/118998/Longitudinal-Associations-Between-Parenting-and

Expand full comment

Isn't measuring everything by Big Five a bit like looking for your keys under the lamp post? It's easy to measure and an industry standard that nobody will knock you back at the peer review stage for using it, but it misses a lot of stuff that's important.

I know that when I think about parenting my children (who are still very young) I'm not thinking about actually changing my child's personality at the Big Five level, but on things like how I can teach them to have good values or to make sensible decisions.

Expand full comment

Good values and decisions? Conscientiousness would include things like impulsivity. Do you run off with your sexy new co-worker and leave your family in the lurch? Do you spend extravagantly in the moment and end up deep in debt? These are the kinds of things that parenting doesn't seem to impact.

And you’re probably going to say but parents who model responsible behavior tend to have responsible children. And they do - because they passed on their responsible genes.

Expand full comment

The study used Bonferroni adjustments, which are well known to increase type II errors.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1112991/
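A minimal illustrative sketch of what the correction does (the numbers here are made up, not from the study): dividing the significance threshold by the number of tests cuts false positives, but it also makes each individual test harder to pass, which is the power/type II trade-off being argued about.

    alpha = 0.05
    n_tests = 20
    corrected_alpha = alpha / n_tests          # Bonferroni threshold: 0.0025

    p_values = [0.001, 0.004, 0.03, 0.20]      # hypothetical results
    for p in p_values:
        print(p, "significant uncorrected:", p < alpha,
              "| significant with Bonferroni:", p < corrected_alpha)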

Expand full comment

"Focusing on the magnitudes of the associations rather than their statistical significance, we found that all correlations were small or very small. In general, the four statistically significant results that we observed were small in size, ranging between .10 and .15. The average correlation between parenting and child personality intercepts was .08, and few correlations exceeded .10."

Bonferroni correction doesn't change the size of the correlations.

Expand full comment

They have a large sample, so statistical power should remain high even after correction.

Expand full comment

"Bonferroni means the study is inherently invalid" is not credible or serious and is likely designed to protect your priors

Expand full comment

403 forbidden - anyone else getting this? Or is this behind a paywall?

Expand full comment

Hmmm it's working for me in an incognito window, might be IP locked by region?

Expand full comment

Works fine for me.

Expand full comment

I wonder if the growing body of research will stabilize or even increase the birth rate. One factor reducing the birth rate is the idea that parenting is hugely impactful and any mistake, missed activity, lack of constant monitoring and supervision is going to doom the child.

If they are going to be who they are going to be - that takes a lot of the pressure off.

Expand full comment

Bryan Caplan wrote a book about this entitled "Selfish Reasons to Have More Kids" and makes this point. You don't have to invest so much in kids, so feel free to have more and enjoy them.

Expand full comment

I regret not investing more in one of my kids. He slacked his way through high school and, based on stuff like Caplan and Harris, we figured he was the best judge of what he needed. No point making childhood miserable for the kid if it makes no difference in outcomes, right?

Now the problem is much worse and needs more expensive remediation. We can afford it and are doing it, but my son's experiences have been horrible. Bringing in the tough love and interventions sooner was definitely called for.

Expand full comment

Is that because of a lack of capabilities, or because of a lack of credentials of those capabilities?

Expand full comment

The capabilities are those that you wouldn't normally get on a certificate, except maybe implicitly.

Expand full comment

My take away from that is that "parenting doesn't matter, so long as the parents are healthy and well-adjusted, are not in poverty, don't live in a war zone, are not physically/emotionally/psychologically abusive or neglectful, give the child the opportunities and access to resources it needs, provide a stable home, teach reasonable values - once all that is out of the way, then what they do doesn't matter" which is a very long way from the bare bones notion you might get of "it doesn't matter if you feed and clothe them or make them live in the back yard with the dogs".

So the whole "parenting doesn't matter" comes with a whole heck of a lot of conditions attached. Yes, parenting style won't turn someone with two left feet into the Olympic Gold Medallist athlete, but it will make a difference as to "nobody in this family ever went to college, what makes you think you can?" or not, as several people in the comments on the Great Families post pointed out.

Expand full comment

Healthy and well adjusted? You would think that people become alcoholics, at least in part, from growing up the children of alcoholics. But when the children of alcoholics are adopted at birth into families of non-drinkers, they have exactly the same rate of alcoholism as adults. Alcoholism is entirely a genetic phenomenon. It has nothing to do with the chaos of being raised in an alcoholic home.

The same is true for many other unhealthy behaviors.

Expand full comment

> they have exactly the same rate of alcoholism as adults

exactly? I would believe in elevated, I would believe in alcoholism being 80% genetic. But 100%?

Can you link your sources?

Expand full comment

I'd agree - one can accept the notion that parenting has very little impact on major personality traits, IQ, and broad physical capacities (assuming the baseline you outline above), but still think that feeling loved, valued, and morally guided is very important regarding what those children *do* with those natural capabilities and inclinations.

Expand full comment

That's more or less my view as well.

Parenting doesn't matter... past a certain point. And even there, the "doesn't matter" part seems to dwell on large, chunky traits like big-5 and not smaller, more nitty-gritty things.

That said, I don't think that either side is, in actuality, all the way over to "fully nurture" or "fully nature". There's probably the usual issue of people clustering loosely around two points to either end of the distribution while seeing the other side as monolithically attached to the opposite pole.

Expand full comment

Lenny Henry is also well over 6 foot!

Expand full comment

Plainly he has some Fallohide ancestry.

Expand full comment

It’s an interesting thought, but I could just as easily see the opposite effect: if nothing I do as a parent matters, why bother?

Expand full comment

Parenting matters for quite a few things, for example educational achievement, warmth of relationship with adult children.

Expand full comment
Comment deleted
Expand full comment

It has been shown in many twin studies, cf. for example the review linked below, that the effect of shared environment on educational achievement is far from negligible, although it is much weaker than the genetic effects. If we assume that the "shared environment" between twins is mostly parenting, which seems reasonable, then parenting does matter for educational achievement. This would in fact be expected: quite logically, parenting does seem to matter when measuring things on children still within their family, but much less so afterwards.

https://www.sciencedirect.com/science/article/pii/S2211949315000198?casa_token=cUtQf-pLk7IAAAAA:uHsITv-AqOk9WnQSinfJ-jHIVGqXqaMvaonC7PUU516HOyvDvk9Y3n8J0Yxhw4Kpxga_5DqNDj8

Expand full comment

So I'm of the view, which I think is defensible, that the biggest effects of environment (and especially environmental insults) are seen earliest in development. Nutritional defects, for instance, are catastrophic in the womb, calamitous as an infant, dangerous as a child, troubling as a teenager, and have more or less no effect (apart from acute effects) as an adult.

So the perfect "twin study", for me, would be one where multiple genetically identical zygotes are created and then implanted in different mothers. The mothers would, in turn, have wildly different socio-economic statuses, with none of them being given any sort of follow-up support beyond the implantation itself.

Presumably we're about one billionaire away from realising a real-life version of this (Elon could, for instance, go mad and decide to populate a small city with his clonal offspring), so I'll await the results as they come out.

Expand full comment

I recently stumbled across this: https://americandreaming.substack.com/p/global-basic-income-ending-world argument for "global UBI." Basically, the rich world has enough money to give UBI to enough of the world to get rid of extreme poverty.

It's so simple and straightforward. And it's funny that your typical UBI advocate generally pushes "UBI for my country" (or maybe "UBI for me") rather than for the dirt-poor of the world.

My general worry for UBI is that it will create a permanent "serf" class. But a low-level global UBI doesn't seem to have that same risk.

Anyway, wondered if anyone else had read that post and had opinions on the matter.

Expand full comment

Wow. Based on the URL I expected the opposite take! Global Basic Income Ending the World.

Expand full comment

You might be interested in the work that GiveDirectly does.

Expand full comment

An idea that has been done repeatedly is Direct Food Aid.

This has the benefit of transferring wealth instead of just transferring money.

I don't know too much about the history here, but I suspect that this has been tried enough times for people to understand what is required for this to work effectively and not have other negative consequences.

Expand full comment

I was under the impression that simply giving food to poor areas tends to have some really terrible unintended consequences in terms of driving local farmers out of business.

Expand full comment

Yup. It becomes a form of agricultural dumping, fed directly into a market where local producers are weakened. Worse yet, it inevitably comes with strings attached that seem designed to make the process as destructive as possible to the local economy (eg: requirements for donor country companies to handle shipping and distribution, bypassing of local distribution networks etc.)

Direct food aid is more or less anti-economic aid.

Expand full comment

The inflation argument against UBI is not a strawman. It is very real. I do think a number of real problems could be solved with UBI, but many new ones will arise.

At the end of the day it takes some extreme twisting of logic to think that if you just helicopter dump money into Africa or India problems will disappear and there will be peace on earth. Money is fiat. It is exchange of value. The rich who "hoard" money invest it, the poor spend it.

What is the right balance between spending and saving? That is the question I suppose.

Musk and Bezos have likely done more for humanity than almost every person alive today combined (save the cream of the crop in other fields).

Mankind needs to colonize space. We need to continue to cure major diseases. We need to reduce the waste and inefficiency in our lives so we can focus resources on the advancement of civilization, not the fulfillment of remedial needs. For example, the greatest achievement of mankind to date was the invention of agriculture and the release of human ambition to do more than just put food in its mouth.

Expand full comment

I'll leave others to debate whether giving billionaires the driver's seat in technological advancement is either happening or desirable, and just remind you that multiple books have now been written whose core thesis is that agriculture was a disaster for the species.

We took a population of hominids that was globally successful and stable, had been so for nearly two million years, and looked to be so for the indeterminate future, and traded it for a system that looks to have brought us to multiple potential extinction events within a mere ten thousand years. Agriculture traded everything (including such human things as personal autonomy, freedom and a nutritious diet) for a potential (then-unknown) shot at immortality via expanding into space. Something which, I should remind you, we haven't remotely achieved yet. It also made a tiny fraction of our species fantastically wealthy and powerful by immiserating the vast majority of us - forcing your ancestors and mine into conditions of disease and back-breaking labour so that a tiny upper class could live like, well, kings.

To cap it all off: I think you could put good money on technological progress removing mankind from the picture one way or another (by either killing us or replacing us) within the next two hundred years. I doubt that more than a few trace fossils and a layer of strangely radioactive/metal-enriched soil will remain to warn others of our fate. That's a hell of a way to end what was otherwise a pretty good run as a species.

Expand full comment

You seem to be implying that it was a deliberate choice that somebody, or a group of somebodies made, which they could've just as well not made, and as simple as that, agriculture doesn't ever get implemented, and natural harmony reigns forever (maybe with just a smidgen of wholesome cannibalism).

To me, the Moloch narrative appears to be much more plausible. People simply go wherever incentives point, and by and large are as powerless to resist them as water is powerless to flow uphill. Whether it's possible for this status quo to change is still an open question, but a decisive answer one way or another might just happen in our lifetime.

Expand full comment

Agreed - I'm talking about decision-making in the same way that naturalists talk about evolution "deciding" on one particular tack or another. I'm perfectly aware that nobody sat down ten thousand-odd years ago and took a straw poll about whether their descendants getting to the stars was going to pan out or not.

People (like Jason) just seem to be really keen on this idea of civilization as this great achievement (he even says as much). And I'm a cantankerous contrarian (at least, on the internet). So I can't help trying to pop that bubble a bit.

Less Whig history, basically, and more appreciation of us being caught up in making self-defeating decisions (like giving more or less average assholes unfettered access to the resources of kingdoms, empires and multi-national corporations). And likely to make more in future.

Expand full comment

Well, I think it depends on your time scale.

At this stage here are the facts that we know:

1. We have not discovered any sign of life in our explorations. Today as far as we can tell, mankind and life in general are entirely unique to the planet.

2. The conditions that allow for this life to exist on Earth will end one day. Whether we make it until the sun explodes or an asteroid takes us out sooner, life as we know it will one day cease to exist.

3. The meaning of life is of course out of reach for our peanut brains, but a defining characteristic of life is reproduction and continued existence.

Therefore I do not think it's necessary to attempt to argue that mankind was better or worse off before agriculture (I think this is indisputable, but I understand you and many authors out there harken back to the good old days of hunter-gatherers).

I will simply state that the free time afforded by agriculture, and therefore the technological advancement it allowed, is one of the most important steps life has made in its attempt to outlive the Earth. If life does not successfully leave this planet it will, to the best of our knowledge today, cease to exist at some point in the future.

Expand full comment

But the arguments I've seen about the downsides of agriculture say that we had *more* free time as hunter/gatherers.

Expand full comment

And to add one more point as you do address the space concept:

Again, at this point mankind is the only form of life we know of to have left the atmosphere, set foot on a foreign body, and returned. We have a space station in orbit and the first "tourists" have lifted off.

I too am disappointed in our progress since the '60s, but if you measure mankind's (or life's in general) attempts to colonize outside Earth... the progress since agriculture has been incredibly rapid.

Expand full comment

Well, I guess my point is that calling an inevitable path "good" or "bad" is just a word game, not anything to meaningfully disagree about, but those are fun, so why not? So, I'll agree and disagree with you both - agriculture was a high-risk gamble which seems to have been necessary to have a chance to escape a local optimum, the one with wholesome cannibalism. And while I'm not sure what the counterfactual caveman-me's opinion would be on this matter, as a non-starving non-third-worlder I wouldn't want to trade places with him, so in this sense the gamble is already a success. If it eventually leads to us going out with a bang - whatever, it was a good run, like you said.

Expand full comment

As far as a moral defence of {gestures at everything} goes, I've got to say that "I'm personally comfortable" is perhaps the most twisted. Kudos, sir/ma'am/xir/not-applicable.

Expand full comment

> Musk and Bezos have likely done more for humanity than almost every person alive today combined

How exactly? That seems like an extraordinary claim.

Expand full comment

That is my question as well. If one of the other early online shopping companies had won the war, would that be materially significant? I feel like without experiencing the actual counterfactual we can't be super confident about that. As for Musk, that isn't even a serious consideration. SpaceX and even Tesla have not yet paid off in any significant way. And even if they had, we'd have to do a lot of work to confirm that that was a critical part of history.

Expand full comment

There is a big legibility problem here.

How do we know how many people there are? How do we know how poor they are? How do we know how to give income to each of them?

UBI works best if everyone has a bank account that the money can be directly transferred to. Most people living in extreme poverty don't have a bank account. Without bank accounts, we need some people going around and giving people money. Who are these people? How do either the givers of aid or the recipients know that they're trustworthy?

Even within the rich world, I don't think that UBI will solve homelessness. Many homeless people aren't "in the system" enough to receive aid. And that's without a corrupt government as an intermediary.

Expand full comment

The short answer to legibility concerns is "give them all smartphones". If you want to do something with results in the next 10 years, "Universal Cell Phone" is probably a more feasible and more effective program than "Universal Basic Income".

Expand full comment

"Give them all smartphones" still assumes that you can figure out who everyone is and get in contact with them. It is easier since you only have to do this once, instead of repeatedly.

Smartphones also require electricity and phone service. "Universal Cell Phone" has to have a supporting infrastructure plan.

Expand full comment

I think the world is a lot closer to "universal cell phone" for everyone on earth than it is to "universal basic income" for even just one country.

Expand full comment

What if you allow for a pretty good outcome without insisting on a perfect outcome?

Expand full comment

Then you potentially wind up with a much worse outcome than doing nothing. Local criminal gangs show up and seize everyone's cell phones.

Expand full comment

There are many possible problems that aren't addressed. The most serious few that come to my mind are:

1) Pumping dollars into third-world economies will wreak havoc on global exchange rates. People live on "$2 per day" in countries where that $2 buys what would cost $10 in the US - that will quickly normalize.

2) Without actually producing more food and widgets, people won't get more food and widgets. Presumably the intent is that people in rich countries will have less, which will be a harder sell politically.

3) Is this money going to be taxed (directly or indirectly) by third-world despots so there is no net benefit to the populace, just to the next Mobutu Sese Seko?

4) Do children get the full UBI? What impact would that have on global fertility rates?

5) If rich nations are paying the bulk of this, the extra $50 per month for a UK citizen will definitely get clawed back in taxes. A global program like this certainly will not reduce poverty in the UK by 11%.

Expand full comment

Re 2): Yeah, I have the feeling that often gets a bit neglected. Just giving people money to buy X will not magically create more X. We'll need some story, some mechanism for where this additional X will come from. And if we're not willing to have less of some Y for that, we'll need to somehow ensure that we produce more overall. Make processes more efficient and productive, or ensure that more people productively take part in the economy.

But I don't think I have a good grasp on this yet. I feel like saying people in poor countries just need $x per day to be lifted out of poverty is kind of a red herring. Like, the problem is obviously not (just) money? Like, there is some structural reason that they are short on food etc., and just giving that amount of money won't directly solve this? Although it might help to some extent indirectly, via being able to invest in productive capacity etc.

Anyone know some good writing that would help understanding this more clearly?

Expand full comment

Hello! I filled out your grant application, but I am unsure if it submitted properly. Can you check to see if you have an application from "The Exclosure"?

Expand full comment

I learned about The Nurture Assumption in your old LiveJournal, Scott. You had a book review there.

Expand full comment

> Anyone know of a good refresher I can link people to?

The one written by Astral Codex Ten next month 😉

Expand full comment

Did you watch the Apple TV adaptation of Foundation? What did you think about it?

Expand full comment

I thought it was an awful slog, a tedious point-missing adaptation of source material that I enjoy, by people who are actively hostile to Asimov and his world view. It took everything that was special and unique about the Foundation series and turned it into a boring slog about two Space Mary Sues and their struggle against 21st century conservatism.

Expand full comment

Beautiful production design, excellent pacing, good-to-excellent acting. Started off interesting, but I was bored to tears by the fourth episode. I’m not a purist and don’t mind the changes from the books, except the one major change that it is dull and meaningless.

Expand full comment

I don't even think the production design was all that great. Lots of scope for an imaginative production designer to really let fly with ideas of what the far future might look like, but in the end it just came across as generic spaceship, generic city, generic costumes.

Trantor in particular could have been a really weird and wonderful place, but it looks like they just copy-pasted some skyscraper models and called it a day.

Expand full comment

It's quite disappointing. They basically thought scenes of fights and spaceships blowing up were a substitute for good plot. (Salvor Hardin now likes to punch people instead of negotiate with them.)

Expand full comment

There's an old saying, "Keep science fiction in the gutter where it belongs." There are costs to respectability and popularity.

Expand full comment

The writers severely missed the point if they made him like that. He is one of the purest examples of a social trickster who ruins the people who think the one with the bigger gun always wins.

And it's not like it's hard to recognize this when his catchphrase is "Violence is the last refuge of the incompetent".

Expand full comment

The even bigger point that the writers missed about Salvor Hardin is that he's really nobody special. The decisions he made were the right ones, but they were also sufficiently obviously the right ones that psychohistory could predict that _someone_ would be around to take that particular course of action at that point in time.

Instead we have Salvor as some kind of weird Chosen One trope with special psychic superpowers. Now, special psychic super powers are a thing in the Foundation universe, but introducing them this early in the story breaks everything.

Expand full comment

That's depressing and suggests they didn't get the point of the books. Foundation should be an Aaron Sorkin style political drama, not a space opera.

Expand full comment

What I like about Foundation (the books) is that it's simultaneously an Aaron Sorkin political drama about dudes meeting in offices, and also a sweeping space epic encompassing the history of a whole galaxy over a period of a thousand years. Asimov's decision to keep each individual story extremely limited in scope (a few characters, a few days, a couple of hundred pages at most) is a very deliberate choice to make such an enormous story manageable.

Expand full comment

I started thinking of Salvor Hardin as a completely new character who happens to share a name with a book character. (They did this with Lord Dorwin as well.)

Expand full comment

What is this with recent adaptations of books? I see a lot of varying opinion on "Dune", with some hating it and some loving it (I have no interest in watching it, but just the pictures I see from it have me going 'why is the future so bland and boring?' This is prime space opera, everyone and everything should be extravagantly opulent, not 'the duke is all in black just like the janitor').

Discussion about the "Wheel of Time" adaptation is very outraged with the huge changes they've made, and my own heart is sinking given reports on Amazon's new Tolkien adaptation; Lenny Henry is a very funny comedian, but stap my vitals:

https://screenrant.com/lord-rings-show-diverse-hobbits-not-white/

"I'm a Harfoot, because JRR Tolkien, who was also from Birmingham, suddenly there were black hobbits, I'm a black hobbit, it's brilliant, and what's notable about this run of the books, its a prequel to the age that we've seen in the films, its about the early days of the Shire and Tolkien's environment, so we're an indigenous population of Harfoots, we're hobbits but we're called Harfoots, we're multi-cultural, we're a tribe not a race, so we're black, asian and brown, even Maori types within it."

Expand full comment

It's a return to one aspect of earlier times, when there wasn't a canonical version for fiction. People just made different versions of stories.

The bible was canonical, but nothing else was.

Expand full comment

I thought Dune was quite good. I'm optimistic for part 2 (although a bit wary, because it's going to be even harder to adapt than the first part).

Expand full comment

RE black hobbits: I'm of the view that, if the adaptation is any good (as a stand-alone work), then nobody will care. Conversely, if the adaptation is bad then a welcome/cynical (depending on your viewpoint) nod to "diversity" won't suddenly make people watch it who otherwise wouldn't.

It's a bit like going to a restaurant, really - plating matters, but only if the food itself is any good.

Expand full comment

If they fit within the established context of Hobbit history, then great, no complaints.

But if it's going to be they stand out from the other Hobbits because "Hey, yo, we da Harfoot tribe" and they are very culturally distinct, I'm going to grouse about it.

Terms like "indigenous population" make me twitch, though it's probably just the way Lenny Henry phrased it, because the point about Hobbits is that they're not really an "indigenous population" of *anywhere*, they migrated a lot and nobody is exactly sure where they originated.

So if we have the (white) Hobbits rocking up to The Shire, where they encounter the "indigenous black Harfoot tribe", yeah I'm not on board with that. To be clear, I'm not objecting to differences like "the Stoors tend to be heavier and grow beards and like messing around in boats" because yes, they were distinct populations but they have integrated into a common culture. If the 'black Hobbits' have a visually etc. distinct culture from the 'main' Hobbits, that's what I object to.

But then I may well be making a mountain out of a molehill, as nothing has been shown yet and this is all just pre-production talk.

Expand full comment

Hypothetically, they don't have good writers or they don't have the ability to recognize good writers.

Expand full comment

The best thing about the Lynch adaptation was the out-there costumes. Denis Villeneuve has a certain aesthetic and he applied it to Dune regardless of the source. But in other ways the adaptation is much more faithful to the plot and characters. I understand they whiffed on Yueh, though.

Expand full comment

Funnily enough, I thought that the characters they whiffed on hardest were (in order of whiffery) Gurney, Jessica, Yueh, the reverend mother and the baron. A lot of it is from having to distil the characters, though - even at a bloated runtime, cinema struggles to impart nuanced character studies, so literary characters end up with flattened, simplified characteristics. So the entire "who's the traitor" plotline got drastically cut down and, with it, Yueh's characterisation as a conflicted man.

Circling up-thread a bit: I thought the cinematography was pretty good, but then I'm a fan of lighting and shot composition over costume. Realistically one would expect the products of a labour-intensive quasi-medieval system like the great houses to favour ornamentation, but that aesthetic is really hard to square with the strong use of naturalistic lighting, silhouette and shadow that they seemed to be going for.

Expand full comment

I take your point about the naturalistic lighting, but there again - they have interstellar starships and energy shields, but they can't scrounge up a flashlight? Either they're quasi-mediaeval and are reliant on "is the sun shining today", in which case we get the barbaric splendour (dammit!) *or* they're ultra high-tech, ultra sophisticated minimalism, in which case they certainly can run to an interior designer for their lighting needs:

https://www.johncullenlighting.com/blog/general/the-importance-of-lighting-in-interior-design/

Expand full comment

It's the aesthetics: the future is all beige and black. Now, that's probably in the service of the *idea* of what Villeneuve thinks is going on, but it's depressing to look at.

Here's the most colour I've seen his Duke Leto wearing, and it looks like a Nuremberg rally (which may be what Villeneuve was trying to evoke). But actual Nuremberg rallies were probably more colourful than that:

https://dunenewsnet.com/wp-content/uploads/2021/07/Gurney-Leto-Thufir-Dune-Movie-Trailer-2-1024x429.jpg

Same with the Fremen and their garb and stillsuits. But it's what Bret Devereaux says: real desert dwelling cultures are *colourful* as here in reality:

https://saharadeserttour.com/wp-content/uploads/2020/12/saharadeserttour_061.jpg

"Game of Thrones" critique:

https://acoup.blog/2020/12/04/collections-that-dothraki-horde-part-i-barbarian-couture/

Leto is a duke in a galactic empire, tasked with ruling an entire planet, a member of a rich, powerful and important dynasty. He should not be moping around like an 80s goth, he should be dressed to the nines to reflect his power and status, both as a nobleman and a general:

https://image.cnbcfm.com/api/v1/image/104553415-Royal_Family_.jpg?v=1578664998&w=929&h=523

https://www.rct.uk/sites/default/files/collection-online/e/f/56939-1414601499.jpg

http://www.artnet.com/WebServices/images/ll00016lldtvxFFgVeECfDrCWvaHBOcTkJ/adriaen-hanneman-portrait-of-george-villiers,-second-duke-of-buckingham.jpg

I'm supposed to believe these are fabulously high-status people controlling the most rare and important resource in the entire galaxy, and they look like they're dressed to paint the bathroom.

Expand full comment

Lynch did the same thing, and in both cases I found it a bit jarring. My hypothesis on the reason for this is the problem of representing Herbert's universe, which is exceedingly clannish, in film.

Almost everything important about an individual in the Dune universe begins with his clan. Is he Atreides or Corrino? Is she Bene Gesserit, Fremen, Ixian? All the big conflicts are between clan viewpoints and ways of life, and most of the big personal struggles arise when an individual is torn by allegiances to more than one clan.

In the books Herbert defines the clans, and what it means to belong to them, by making copious use of omniscient narration, as well as by dropping in snippets of historical narratives the way Tolkien does (e.g. if you want to show why the elves are sad sacks you drop in a few paragraphs about Beren and Luthien, or about that time we thought Sauron was finally done for, but that fucking idiot Isildur kept the Ring.) And it works very well.

But I can see this being hard to accomplish on the screen in 90 minutes, and without being able to drop in paragraphs of interior monologue or a whole lot of expensive flashback. Plus the modern world, at least the American part of it, just doesn't understand family and clan in the same way anymore -- our appreciation for blood ties is probably at an all-time historical low.

So what both Lynch and Villeneuve seem to have thought wise was to map the ties of blood into the artificial clans which are much more familiar to the modern viewer, namely those centered on ideologies and political parties -- which, in part *because* they are not centered on family, heavily employ symbols and uniforms to ease identification.

So willy nilly everyone has to be in uniform, or at least similar dress and look, and use stylized mannerisms so we can tell to what clan they belong, and the opportunity for variation that suggests personality distinctions is greatly diminished -- although, to be sure, personality doesn't play a strong role in Herbert's books anyway.

Expand full comment

I mean, there's visual storytelling going on here. Duke Leto is a dour and serious man, so he wears dour and serious colours. In current Hollywood visual storytelling shorthand, if he was wearing bright colours then he'd seem like he was relaxed or unserious. A film doesn't give as many opportunities as a book to tell us directly about a character's inner state, so the visual shorthand of costume-as-character takes precedence over thoughts about "What would this character actually wear in this situation?"

The audience doesn't consciously notice how the colours of the costumes are affecting their impression of the characters, but they will get confused or enjoy the story less if they are getting an inconsistent picture of the characters from costume vs dialogue vs lighting vs music.

Also, it must be said that the highest-status members of our own society in 2021, especially the men, are almost always wearing dull grey business suits; you've just picked the one example that, on rare occasions, doesn't. If Medieval people had made movies then they might have dressed Duke Leto in red and purple as a visual shorthand for "he's rich" instead of black and grey as a visual shorthand for "he's serious and badass", but they didn't.

Expand full comment

Considering that the culture is specifically supposed to be space medieval it is indeed nonsensical that they dress the way they do. Of course even in the modern day we can see the wacky hats of the English rich. Lynch had amazing headgear for his space people.

Expand full comment
Comment deleted
Expand full comment

Very unlikely, unless you ate a teaspoon or two. The most common ant poison is boric acid or borax, more or less the stuff you put in the laundry. Faster acting poisons generally have to be eaten (by the ant, or you) to do their work, and they generally are much less poisonous to humans than to insects. For any greater information, you need to look up the active ingredient in the poison. Wikipedia will have an entry on it.

Expand full comment

In the EU, due to extreme regulatory malfunction, you can't get boric acid or borax anymore unless you're a corporation. Corporations can't use them for a lot of the things they used to use them for, such as fire retardants and fertilizers, due to unfounded concerns about toxicity.

Borates do not kill ants quickly and they do not make ants move weird.

Katrine: there are standard tests for cognitive impairment, but most of them have a fairly low ceiling, so they don't detect cognitive impairment until it's fairly severe. One I like is the reverse digit span test from the Wechsler test; I wrote it up in Python one afternoon, without the artificial Wechsler ceiling, and have used it since then. In two minutes you can have a very clear picture of your current short-term memory. (The Wechsler protocol is a little noisy, but it's what I use for comparability with published figures.) Unfortunately this won't help you if you don't know what your state was before your possible intoxication.
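
The commenter's script isn't shown, but a reverse digit span test is simple enough that a minimal sketch fits in a few lines of Python. The sequence lengths and the two-failure stopping rule below are illustrative assumptions, not the Wechsler protocol and not necessarily what the script above used:

```python
import random

def reverse_digit_span(start_len=3, max_len=12):
    """Reverse digit span: show a digit sequence, ask for it back in reverse.
    Length increases after each success; stops after two consecutive failures
    (an illustrative rule, not the Wechsler protocol)."""
    best, failures, length = 0, 0, start_len
    while length <= max_len and failures < 2:
        digits = [str(random.randint(0, 9)) for _ in range(length)]
        input(f"Memorize: {' '.join(digits)}   (press Enter to hide)")
        print("\n" * 40)  # crude "clear screen" so the sequence scrolls away
        answer = input("Type the digits in REVERSE order, no spaces: ").strip()
        if answer == "".join(reversed(digits)):
            best, failures, length = length, 0, length + 1
        else:
            failures += 1
    print(f"Longest reversed span recalled: {best}")

if __name__ == "__main__":
    reverse_digit_span()
```

Run it in a terminal; your "span" is the longest sequence you can reliably reverse, and tracking that number over time is the useful part.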

Expand full comment
Comment deleted
Expand full comment

Yeah, fortunately the regulations don't apply to pharmacy products.

Expand full comment
Comment deleted
Expand full comment

The other common ant-killer is diatomaceous earth, which is food-safe for people.

Expand full comment

Well, if you have no idea what the active ingredient is -- it will be listed on the packaging -- then there's not a lot to go on here. I don't think your observations of the ants are nearly precise enough to give any useful hint to the product. I guess I personally wouldn't worry about it much, because most regulatory agencies consider the fact that people won't always use products strictly according to directions, so they tend to approve things for household use only if they have wide safety margins.

Expand full comment
Comment deleted
Expand full comment

I think nothing at all here is clear, but it is suggestive that the booster will cause some non-zero (but perhaps very small) decrease in all of these risks. I got my booster because I had two trips coming up. At the time, I was thinking that travel was a more likely way to get infected, but by the end of my second trip (to Canada) it became clear that it had been a good idea, because I really didn't want to have a positive test just before my return trip and end up having to pay for two weeks' accommodation and being stuck abroad!

Expand full comment

Why not do an antibody test and get a booster conditional on low or undetectable levels?

That is the way it's done for basically every other vaccine that's very safe and has waning immunity (e.g. hep A and B, polio).

Expand full comment

The arguments for boosting people are largely the same arguments for getting the vaccine in the first place, given that the vaccines clearly wane in efficacy over time. If you believe the first vaccine protected you and, perhaps more importantly, others around you who are perhaps more vulnerable, then the same logic applies to the booster.

Expand full comment

These are the two main Israeli studies on waning immunity and booster effectiveness that afaik current policy is mostly based on:

https://www.nejm.org/doi/full/10.1056/NEJMoa2114228

https://www.nejm.org/doi/full/10.1056/NEJMoa2114255

The numbers of severe cases in vaccinated young people in Israel (pop. ~10 million total) are too small to be useful, but you can see total case numbers increasing at least 2-fold for all ages as you go from 2 to 5-6 months after vaccination. I don't see a reason to assume that protection against severe covid wanes slower than protection against any symptomatic disease at all, and there are other markers you can use to extrapolate backwards from the 40-59 data.

Expand full comment

I'm in the same boat. If you were vaxxed half a year ago, I think it's reasonable to assume that efficacy is ~30-80% of what it was. How much of a hurry you should be in probably depends on how otherwise covid-safe you can keep yourself, and if you have access to a cheap/free antibody test.

Personally, I vaxxed in late April, and at the end of July I got the maximum titer on a mandatory antibody test, so I'm looking to boost in the coming 2-3 months but not really in a hurry.

Expand full comment

In on this thread

I would also like to see what people make of https://www.dice.hhu.de/fileadmin/redaktion/Fakultaeten/Wirtschaftswissenschaftliche_Fakultaet/DICE/Discussion_Paper/368_Fischer_Reade_Schmal.pdf#page=3 (linked without comment by Gwern, which I consider a non-trivial endorsement). If the results hold up, I would definitely be more likely to get a booster.

Expand full comment
Comment deleted
Expand full comment

Heritability estimates give us an estimate of the proportion of the variation in a life outcome that is explained by genetic variation. The estimates are for a given population at a given time. This is a bit confusing. It is not a rule about what will happen in all environments and populations. It is a ratio about what is happening now in the population we are studying. We can always conceive of an environment that is so outside the norm that it would have to play a major role.

Since adoption studies are used for heritability estimates, you are not going to have a fully diverse set of environments to estimate with. Kids aren't typically adopted into highly abusive families. A highly abusive family is outside the norm. Usually the "parenting doesn't matter" talk is aimed at upper-middle-class families who invest a lot into their children when in fact it doesn't matter for life outcomes. At the other end of the spectrum, yes, you can horribly harm your child. If a large portion of the population horribly abused their children and hit them in the head with a hammer, then heritability would be lower if that population was included in our estimates. The ability to come up with a clearly harmful example doesn't change the heritability estimates, because people in the studied population don't often do this.
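
For concreteness, the simplest classical version of such an estimate comes from comparing identical (MZ) and fraternal (DZ) twin correlations, i.e. Falconer's formula. This is a sketch with made-up correlations, not how the studies in the linked review actually model things (they fit more elaborate structural models), but it shows what "proportion of variation" means here:

```python
def falconer_ace(r_mz, r_dz):
    """Back-of-envelope ACE decomposition from twin correlations (Falconer's formula).
    Assumes purely additive genetics and equal environments for MZ and DZ pairs."""
    a2 = 2 * (r_mz - r_dz)  # heritability (A)
    c2 = r_mz - a2          # shared/family environment (C)
    e2 = 1 - r_mz           # non-shared environment + measurement error (E)
    return a2, c2, e2

# Illustrative correlations only (not taken from the linked review):
a2, c2, e2 = falconer_ace(r_mz=0.75, r_dz=0.45)
print(f"A ~ {a2:.2f}, C ~ {c2:.2f}, E ~ {e2:.2f}")  # A ~ 0.60, C ~ 0.15, E ~ 0.25
```

The estimate is only as portable as the population and range of environments that produced those correlations, which is exactly the point above.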

Expand full comment

"Usually the "parenting doesn't matter" talk is aimed at upper-middle class families who invest a lot into their children when in fact it doesn't matter for life outcomes."

Yeah, I think exactly this. It's aimed at the people who were doing Mozart for Babies in order to boost brain development, and who do fall prey to anxieties and fads.

Then again, we've had some people on here saying they planned to have kids and what could they do re: genetic engineering to make sure they had the bestest babies, so maybe there really is a need for "Calm the frack down, it doesn't really matter" advice.

Expand full comment

Genetic engineering your children will matter a lot in the near future.

Expand full comment

Most of the interesting traits are very polygenic, though. Meaning that the best way to have a taller/smarter/more attractive/more athletic baby will, for the foreseeable future, still be to find a mate who is taller/smarter/more attractive/more athletic than you.

This will be the case even where dozens-of-gene-edits-at-a-time tech exists (it doesn't, and looks like it won't for decades).

Expand full comment
Comment deleted
Expand full comment

Why would any causal inference vis-a-vis environment be invalid? A lot of data is time/location/population-specific and we have to make reasonable inferences about it. A clinical trial is in a controlled environment, but if a pill doesn't cause nausea in the clinic, it is probably reasonable to say it doesn't cause nausea outside the clinic. We have to use judgement to say how portable a finding is.

Expand full comment

Depends on the DV. In terms of language acquisition, we have natural experiments like Nicaraguan Sign Language that demonstrate that even children who have suffered severe linguistic deprivation have the ability to spontaneously generate complex grammars. The trick is always to understand what level of complexity the underlying predisposition is working at. Can someone locked in a box for years as a child emerge and immediately play piano? No. Do we have solid evidence that there is some inherent music-learning ability that they have to develop by age X or else they lose it? Also no. Even with language learning, the evidence for critical period hypotheses is far more colloquial/anecdotal than a lot of people think. Neglect and deprivation are certainly bad. But there is scant evidence that they result in durable differences in *potential*, which I think is what you're talking about.

Expand full comment

Language acquisition is much different in children. A not too bright kid can learn to speak 4 languages with a perfect accent if they are in the right circumstances. Isn't this pretty solid evidence that there is a critical period? True experimental situations are rare because fortunately there are not too many kids like Genie the feral child.

Expand full comment

All I can tell you is that a common finding among people who attempt to summarize the critical period hypothesis (such as in a lit review) is that the evidence is far scantier than you'd think, in part because of the ethical prohibition against doing a real experiment. Does there appear to be a lot of conventional wisdom and experience suggesting that there's a critical period for language acquisition? Sure. Do we know with scientific confidence that there is a critical period, for sure? I don't think that's a responsible read of the evidence, no. Do we know when the critical period ends, what the rate of dropoff might be, and confidently what the limits are on adult language learning? Not even close.

Expand full comment
author

That's correct and I've edited the post to make that clear.

Expand full comment
Comment deleted
Expand full comment
founding

At minimum, it means "not the sort of parenting that, if we observed it in one of the families in our study group, we'd have to call social services rather than dispassionately observe the effects in the name of Science". So, no gross abuse or malnutrition or the like. And there's almost certainly a selection effect where children being raised in e.g. extreme isolation are beyond the practical reach of the scientists doing the study.

Expand full comment

Practically, what the "normal bounds" mean is the populations included in the studies (e.g. twin/adoption studies). E.g. when a giant twin study finds little relationship between shared home environment and outcomes (after controlling for genetics), the conclusion is about the types of home environments included in the study. If the study had included a population that was half horribly abusive Genie-style (https://en.wikipedia.org/wiki/Genie_(feral_child)) parenting and half normal parenting, it would clearly find an overwhelming effect of home environment on all outcomes. (Also, I think Freddie is wrong above -- the Genie study and related cases are usually considered pretty strong evidence that you can't learn language after that kind of abuse.)

And re your second point: these arguments are typically specific to the environment that's shared between siblings. E.g. when twin studies are conducted, it gets broken down into heritability (% of variation that's related to genetic differences), shared environment (% of variation that's due to common environment b/w siblings), and non-shared environment (% of variation due to everything else -- i.e. all the other environmental factors that differ b/w siblings).

[epistemic source: PhD in related field, for whatever little that's worth :)]

Expand full comment

This is correct and a good explanation. Heritability is a population-relative statistic. It is not perfectly portable to all situations and relies on some assumptions. For example, if shared environment doesn't matter much in Boston, it seems likely that it doesn't matter much in New York City. We can go further and say that if it doesn't matter in the USA, it likely doesn't matter much in the UK. It is harder to go from these to "If it doesn't matter in the USA, it doesn't matter anywhere, including Yemen, Afghanistan and South Sudan." Those environments and populations are so wildly different from our own. There just doesn't seem to be anything a middle-class family can do to cause some wild unexpected change in personality or intelligence. The environmental effect that remains is non-shared and idiosyncratic; it can't be controlled, from my understanding.

Expand full comment

Genie is an n of 1. At least 1,500 children who faced severe linguistic deprivation learned Nicaraguan Sign Language - and the first generation of them INVENTED it.

Expand full comment

The deprivation that matters for language is deprivation of stimuli. That is, children go through a critical period for language learning, and if they don’t get sufficient language input, they will simply never fully develop that skill. It’s not binary (see heritage speakers of a language) and it’s possible to become better throughout a lifetime (see adult language learners). On top of that, essentially everyone is a good enough parent that this doesn’t matter for at least developing one language, since you’d have to isolate a child from other people entirely.

But that doesn’t mean that Genie was a fluke, it just means that she was in a circumstance *very* different from the Nicaraguan sign language children.

One obvious way this manifests is that environment has a huge effect on children on which languages they learn as a child. If they’re exposed to just English, they’ll only learn English. If they have sufficient exposure as a child to English, ASL, Japanese, and Arabic, they’ll become a native speaker of all four. It’s essentially entirely nurture with no nature involved, at least if you only compare between humans.

Expand full comment

I have to tell you, I have no idea why people are so emotionally invested in attacking the idea that severe linguistic deprivation - which, if you actually read the NSL literature, you'll realize these children experienced - can be overcome by the language acquisition mechanisms inherent to the human genetic endowment. I am perfectly willing to acknowledge that language acquisition is unlike many learned skills and might therefore be more durable than other learning capacities. But the notion that we know that language acquisition is super malleable by parents because of a single child locked in a closet 60+ years ago is very strange to me.

Maybe this is related to the fact that people both overestimate the difficulty of adults acquiring second languages and underestimate the ceiling of fluency for such learners? I don't know.

People just really want to defend the idea that parenting is super important for quantitative outcomes, and this is a roundabout way to do it, I guess.

Expand full comment

I think we may be talking about different types of deprivation. The NSL kids were surrounded by peers, not locked in a room with nobody around. (If by "linguistic deprivation" we mean "not having adults who communicate with you in a way you can understand", then totally agreed with you that that's not a barrier to learning language. If we mean "locked in a room with nobody else for your whole childhood", then I think the Genie-style cases are really good evidence.)

Expand full comment

Those children were universally linguistically deprived; many of them had undiagnosed and untreated learning disabilities or cognitive and developmental disabilities; a large majority of them lived in poverty; many of them endured routine physical or sexual abuse. By modern western standards most of them would certainly be considered neglected.

But, sure - being locked in a closet is different. What I'm not understanding is how the tiny number of children suffering under that kind of extremely unusual deprivation is relevant to the question "does parenting exert a strong influence on the personalities of children?" Typically, references to unrealistic levels of neglect are used as a tactic to avoid confronting the immense amount of research demonstrating that parents don't influence personality much. Which is a very different thing than saying that "parenting doesn't matter," which nobody thinks. The question is whether a parent can turn a naturally anxious child into a calm child, and the answer to that sure seems like no.

Expand full comment

When I said Freddie was wrong, I meant his comment about language acquisition above (although I also think he's wrong in the comment immediately above :)).

Expand full comment
Comment deleted
Expand full comment

I think the typical argument is something like: We've done a lot of large-sample twin or adoption studies now and they always show roughly the same thing (about 50% of variation in most outcomes is related to genes, 40% due to non-shared environment, and ~10% to shared home environment). So we've studied a ton of different environments, and can probably generalize decently at this point. (Although I really would be curious if anyone has run a twin study looking specifically at abusive households.)
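
To make that rough 50/40/10 split concrete: under a simple additive ACE model those proportions imply specific twin correlations, which is the direction real studies run in reverse (estimating the components from the observed correlations). A quick sketch, using the approximate figures above:

```python
def expected_twin_correlations(a2, c2):
    """Under a simple additive ACE model: MZ twins share all of A and all of C,
    DZ twins share (on average) half of A and all of C."""
    return a2 + c2, 0.5 * a2 + c2  # (r_MZ, r_DZ)

# Plugging in the rough split mentioned above (A=0.5, C=0.1, E=0.4):
r_mz, r_dz = expected_twin_correlations(0.5, 0.1)
print(f"r_MZ ~ {r_mz:.2f}, r_DZ ~ {r_dz:.2f}")  # r_MZ ~ 0.60, r_DZ ~ 0.35
```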

Expand full comment

You are almost certainly overestimating the impact of parenting, and there is a vast literature available to you to disabuse you of your assumptions. If you're looking for a popular source other than the Nurture Assumption, you can still do a lot worse than Pinker's Blank Slate.

Expand full comment