855 Comments

> an ideal experiment would involve taking a really talented family, adopting away one of their kids at birth, and seeing what happened to them.

More practical experiment: high-IQ women inseminated by sperm of smart famous men. The study tallies IQ and talents of children, scatterplotted against... (i) the husband's IQ/abilities and (ii) famous men's children's IQ/abilities?

Expand full comment

Extremely random thought: I hereby propose that we rename generations as follows:

Silent Generation -> Generation A

Baby boomers -> Generation B

Generation X -> Generation C

Generation Y -> Generation D

Generation Z -> Generation E

Generation Alpha -> Generation F

etc.

(In my proposed scheme, there are no names for the Lost Generation or the Greatest/G.I. Generation, as most of them are dead anyways at this point.)

This proposed scheme has several advantages over the current one.

Firstly, it sets the set-point for generation numbering at a fairly reasonable point, and thereby eliminates our need for suddenly switching to the Greek alphabet. (In the old scheme, Generation A would be ~1500-1520, assuming a 20-year generation span, and no one has generational stereotypes stretching that far; in my proposed scheme, we won't need another alphabet until Generation Z is finished being born around ~2440.)

Secondly, it makes giving names to members of particular generations much easier, as now one would only need to append "oomer" to the generation's letter to refer to a single member. This way, Generation B members get called "boomers", in accordance with current slang.

The other names (kind of) make sense too (though I'm not sure if they're accurate or valuable as new generational stereotypes): Generation C members (born between 1960 and 1980) get called "coomers" (i.e. people addicted to pornography), and Generation D members get called "doomers" (i.e. people extremely concerned about forthcoming worldwide doom). (Generation A members get called "aoomers" and Generation E members get called "eoomers", which are neither well pronounceable nor semantically memorable, but that's okay - neither generation is really well known for having a Defining Generational Experience.)

It even works for the forthcoming Generation F, who would get called "foomers" (i.e. things that FOOM, or exhibit the characteristics of an AI undergoing a hard takeoff), which is precisely correct given current (optimistic?) estimates of when we should expect some kind of AI takeoff to occur.
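For concreteness, the proposed scheme reduces to a couple of lines of code. This sketch assumes uniform 20-year generations anchored so that Generation A starts in 1920 (which makes Generation Z finish being born in 2439, matching the ~2440 estimate above); the anchor year is my own guess for illustration, not part of the proposal:

```python
def generation(birth_year: int) -> str:
    """Map a birth year to its letter under the proposed A-Z scheme.

    Assumes uniform 20-year generations with Generation A starting
    in 1920 (so Generation B covers 1940-1959, a rough fit to the
    usual Baby Boomer years).
    """
    index = (birth_year - 1920) // 20  # 0 for Gen A, 1 for Gen B, ...
    if not 0 <= index < 26:
        raise ValueError("outside the A-Z range (1920-2439)")
    return chr(ord("A") + index)

def member_name(birth_year: int) -> str:
    """Append 'oomer' to the generation letter, per the proposal."""
    return generation(birth_year).lower() + "oomer"
```

So `member_name(1950)` gives "boomer" and `member_name(1975)` gives "coomer", as intended.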

Now for some possible disadvantages: The current system of generation naming is already well-established and it would be incredibly hard to change it. Also, I'm not sure whether "coomer" and "doomer" are appropriate generational stereotypes for members of Generation C and D, respectively - some quick searching suggests that people generally think of Generation C members as cynical and sleep-deprived and Generation D members as lazy and tech-savvy. Furthermore, I'm not even certain that dividing people into generations by *birth year* is the right way to go - I think that it's also popular to divide people instead by *age*. (This depends on whether people tend to be shaped more by when they were born rather than their current age. It seems the former would be more useful in a rapidly changing society and the latter in a very slowly changing one, which seems to suggest that birth year is more useful? But I digress.)

Sincerely, an eoomer*.

*Yes, I'm revealing my age, sort of, but I've already written about it so many times on the internet that it's not really sensitive info for me at this point.

Expand full comment

> Generation E members get called "eoomers", which [is] neither well pronounceable nor semantically memorable

"Eew"-mers, the generation that's disgusted by everything

Expand full comment

Or "EU-mers", the generation that grew up when the European Union already was a thing and thinks it's an obviously good idea

Expand full comment

Or "Eomers", the generation that was born after the LOTR movies started coming out.

Expand full comment

At the very least, the Polgárs should be a demonstration that home environment can be very important for the kind of things that show up in your "Great families" post. Maybe they would have become doctors or something and never received widespread attention in a counterfactual world.

Expand full comment

I'm still unvaccinated after getting my first impressions of COVID vaccines from the anti-vaxx-because-mRNA-is-poison crowd, but I've been thinking about the statistics and decided that whatever the scale of adverse reactions, they are regrettable but vanishingly small in the bigger picture, and I'm not at all likely to get life-threatening ones beyond recoverable myocarditis and/or blood clotting, the latter also occasionally found in actual infections. So such side effects are actually on par with the real bug, or even smaller, rather than a magnitude worse than actual infection. Granted, they can accumulate, and anti-vaxxers warn of unknown unknowns (or suppressed knowns like fertility "inhibition"), but those might wear off with immunity itself, or not be sufficient to be of immediate concern to me. I'll still prefer non-mRNA vaccines over mRNA ones because of the new-technology aspect, which needs several years of investigation into long-term side effects before it can be considered really safe.

I can stay unvaccinated and avoid those places where a vaccine passport/health code system is set up, like many conservatives who hate that level of state overreach. That's probably as big a rationale for "resisting" vaccination as any, along with job-quitting. They are often moving to GOP-dominated areas, getting work that doesn't have vaccine mandates or is WFH, or even trying to be self-sufficient and do business informally (what they call "parallel" societies). They are sticking to their principles, and those efforts at alternative economic organization are laudable, but the question is: is the trade-off worthwhile (no vaccination and a degree less surveillance, but a massively lower quality of life indefinitely, which they can blame on the mandates and the system as a whole)?

Thanks for the advice, because it will determine my lifestyle for the next 2-5 years, and my life planning for even longer!

Expand full comment

I think you should bone up on the collective benefit arguments, and maybe even acquaint yourself (or refresh) with Kant's Categorical Imperative:

"Act only according to that maxim by which you can at the same time will that it should become a universal law."

Under that principle, the ethical decision may in fact be one that is not optimal for you personally if all you take into account is your own welfare and actions. But you are not a solitary being living on your own planet. You are a member of society, and you benefit from the fact that, for example, rape and murder are discouraged and policed, in spite of the fact that some individuals would gladly cause harm to others if not restrained by society's rules.

Ask yourself this: if you could get away with something you want to do but you know for certain is unethical and would seriously harm individuals you don't know, would you do it? More to the point, would you willingly choose to live in a world where bad deeds always go unpunished?

In essence, by treating your vax decision as only a matter of your immediate well-being (in a world where others are choosing to act more altruistically) you are making an (arguably) unethical decision, regardless of external forces such as vaccine mandates.

Rape and murder are wrong even if you don't go to jail. Similarly, refusing to take on a reasonable amount of personal risk that reduces collective risk and pain is wrong, even if you can get away with it.

Expand full comment

The assumption that the vaccine benefits society enough to offset widespread oppression of individuals is baked into this argument, but it shouldn't be. Unknown unknowns and all. This argument can trivially be applied to compel people to do deeply unjust and harmful things simply by wrongly assuming the pros outweigh the cons.

Also this argument asserts that only one individual has to make this choice. It only balances the cost to ONE person against the benefit to everyone. This is *obviously* wrong. Everyone pays the cost, which varies from person to person.

Expand full comment

No it's not. Daniel did not propose "widespread oppression of individuals", he tried to persuade one individual to do something voluntarily. Besides which, many regulators around the world have evaluated the evidence and found the vaccines safe, and even looking at just the FDA, it has a pretty good track record.

Expand full comment

There was some talk about a Florida meetup in late October that I wasn't able to attend, but I was wondering if anyone could provide an update to that. How many people attended? Did it go well? is there any talk of doing one again in the future?

Expand full comment

To kick off my presence here at this colossal blog, I'm asking a few questions about how commenters here evaluate some conspiracy theory claims that are gaining acceptance among an emerging segment of the political right wing. Personally I find a lot of them to be narratively more structured and "convincing" explanations than what is (propagandized?) to be mainstream, and I often consider issues from their perspectives. It is basically the "end phase of the NWO to enslave and/or kill everyone outside of the elite through excuses starting from COVID".

The most immediate concern for them is to confront the emerging COVID "police state" or, neutrally put, the digitalized system of intensive surveillance and direction of daily lives based on a particular interpretation of contact-and-mobility-restricting NPIs (e.g. vaccine passports and contact tracing apps) and the assumption of a "New Normal" based on obligatory (instead of mandatory) vaccination. Their main objections are libertarian, anti-surveillance, anti-segregation, and anti-centralization of social agency, not unlike what emerged after the passage of the Patriot Act (also rejected by much of the same crowd). To counteract it they have sought alternative social and economic strategies, from building extra-formal parallel societies conforming to their political ideologies to practicing subsistence-level self-sufficiency.

Here comes the question: how do you evaluate the legitimacy of the current "police state" system? What I have seen is either resigned acceptance or total resistance; I'm trying to find principled arguments that legitimize the current level of strictures. Secondly, what of the political push to marginalize the unvaccinated? They appear similar to Nazi-era or Soviet dissidents who were persecuted and often denied basic services and needs. Thirdly, what of the modernity-withdrawing reactions of those "resisting" vaccine passports and contact tracing apps? (I won't be surprised if these have come up and been discussed before.)

Expand full comment

Speaking about conspiracy theories in general, their problem is not that conspiracies as such do not exist. They do; our legal codes indeed recognize and penalize things like criminal organizations or cartels. There is also this tacit cooperation that results from everyone following their own incentives, where e.g. rich people in general are likely to promote rules that further advantage rich people. (But also e.g. educated people promote rules that further advantage educated people, such as requiring credentials for the types of jobs that uneducated people would be able to do equally well.)

The problem with conspiracy theories is with applying this type of thinking blindly, and ignoring any evidence that doesn't fit the preconception. You decide that some group X is responsible for everything bad, and assume that everything that happened is a part of their grand plan -- as opposed to a coincidence, a tradeoff, a more general force, a failure in a plan, or a result of a plan of some unrelated group Y. Everyone is either 100% on your bandwagon... or is a brainwashed sheep. There is no chance of you being wrong, even about some insubstantial detail.

Are there people with the ambition to rule the whole planet? Probably yes; as far as I know the egos of some politicians have no limits. Would some people like to enslave others, and kill those who resist? Sure; I mean even today slavery is legal in many countries, and the dictators typically kill those who oppose them. Is surveillance constantly increasing? Yes; the amount of data Google has on me would make Stasi jealous. Is police corrupt? Of course; look at any police union and you will see the organization that protects corrupt cops.

None of these assumptions is an epistemic problem, in my opinion. The problem is seeing everything as a part of a grand plan, and ignoring all alternative explanations. Like, the increasing surveillance is mostly a side effect of technological progress and technological centralization; the citizens even pay for the smartphones that track their every movement. COVID is a real pandemic, people are actually dying, and face masks and vaccines are actual methods of reducing those deaths. Etc.

Expand full comment

The problem with conspiracy theories is that *conspiracies are secret*.

9/11 was a conspiracy of >20 terrorists to cause huge damage. The public didn't know about it until it was too late. Usually everyone finds out about a conspiracy at the same time.

Conspiracy theorists, however, claim to have special knowledge that experts with the SAME evidence don't have (like "9/11 was an inside job ... because, you see those flashes of light and that blob under the aircraft wings??")

The conspiracy theorists' explanation of this will be "the experts are engaged in groupthink - you can tell because they all agree! Except the two who completely agree with me, THOSE guys are independent thinkers just like me!" or "the experts are in on the conspiracy!" or "they're being paid by George Soros to reach a certain conclusion! Obviously!" or "the experts have a narrow scope of knowledge but I see more clearly because I am a generalist polymath and definitely not a crank!" or "most experts don't know what I know! because they're ignoring my emails!"

My explanation of this is conspiracy theorists are misinterpreting the evidence (using Dark Side Epistemology) because they want so badly for their preferred conclusion to be true.

Expand full comment

>COVID is a real pandemic, people are actually dying, and face masks and vaccines are actual methods of reducing those deaths. Etc.

The bar does have to be higher than "people are actually dying", or the policy prescription is "permanent police state".

Expand full comment

Yeah, like, we can't have speed limits just because people are actually dying on roadways. The autobahn is the only non-police state left, unless you count the speed limits on some parts of it or that rule against passing on the right...

Expand full comment

How would you define "police state"? I think some sort of surveillance apparatus has always existed since the start of state formation, but its intrusiveness is being normalized after 9/11.

Likewise, does the level of risk justify the level of "police state" strategies to the management of public health?

Expand full comment

The question of how well we're threading the needle is a tough one and I'm not well-informed enough to answer it in full.

My point is merely that if the bar to activate "emergency measures" is set too low, all of them will be activated all of the time. People die from the flu, too, after all. And robberies. And suicides.

Expand full comment

The fact that Big Tech already has the capabilities of intensive location tracing that can be commandeered by state intelligence is one of the reasons people are living off grid.

Expand full comment

What do you think of those who publicize their agendas, like the WEF & UN (Agenda 2030), which is usually interpreted as the "NWO"?

Expand full comment

Here is agenda 2030. https://sdgs.un.org/2030agenda

Mostly waffle. What’s the conspiracy?

Expand full comment

I think a lot of diabolical goals (e.g. population control) have been associated with this.

Expand full comment

Where in that document though. I didn’t see anything.

Expand full comment

I mean, conspiracy theorists associate it with all sorts of claims they read somewhere (e.g. Kissinger's remarks on population policy through US diplomacy, the Limits to Growth report, etc.)

Expand full comment

I am looking for book recommendation about Ancient Rome. I know almost nothing. Particularly interested in political institutions, law, and political economy. Also interested in day-to-day life portrait kind of stuff. “Great man” history is ok I guess, and I do appreciate biography, but I’m looking for something a little more expansive. Extra points for something fun and readable. I’m not afraid of tomes. Recommendation?

Expand full comment

This isn't quite what you asked for, but you might check out a blog called "A Collection of Unmitigated Pedantry" ( https://acoup.blog/ ), which is written by a history professor who is focused on Rome. He's got an entertaining style and likes to write posts describing ancient life and explaining how it's different from popular depictions like Lord of the Rings, Game of Thrones, or Dungeons & Dragons. He's also got a book recommendation list, and cites various specific history books as references in his posts.

Expand full comment

Kulikowski's Imperial Tragedy and Imperial Triumph are vital reading for the later empire.

Expand full comment

A Fatal Thing Happened on the Way to the Forum: Murder in Ancient Rome

By Emma Southon

New book (2021), fun and readable, but full of Roman history.

Expand full comment

Check out SPQR, by Mary Beard. I think it precisely matches your requirements.

Expand full comment

Following up to tell you I loved SPQR. Just what I was looking for. Great rec!

Expand full comment

+1. I haven't read it yet but it's very much the standard non-academic history of Ancient Rome at the moment.

Expand full comment

Attempting a different calculation of the number of lottery tickets in the pyramid and the garden (https://slatestarcodex.com/2016/11/05/the-pyramid-and-the-garden/):

If you round the speed of light to just 2 decimal places (29.98), you still hit the great pyramid (https://goo.gl/maps/kHKNJQWvwVd3Rbi9A). It's exactly the location of the entrance on the north face. So we only need to explain a 1-in-10000 coincidence.

1. At least 10 constants which would be impressive if ancients knew them:

* c

* G

* g (9.81 m/s^2)

* Avogadro's number

* molar gas constant

* Lyman-alpha wavelength

* fine-structure constant

* proton-electron mass ratio

* Planck constant

* Stefan-Boltzmann constant

* electron charge

2. At least 10 man-made wonders of the world

3. At least 16 characteristics in which to encode the interesting constant (latitude, longitude, height, length, width, circumference, plus length/width/height of a few internal features)

4. At least 3 choices of units (SI, imperial, and cubits or whatever the local system was when other wonders were constructed)

5. At least 4 choices of decimal point placement

That gives us 10*10*16*3*4 = 19200 lottery tickets to explain a 1-in-10000 coincidence.
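A quick sketch to double-check the multiplication and to see what that many tickets buys at 1-in-10,000 odds (treating the tickets as independent draws, which is a simplifying assumption):

```python
# Factor counts taken from the list above.
constants = 10        # c, G, g, Avogadro's number, ...
wonders = 10          # man-made wonders to check
characteristics = 16  # latitude, longitude, height, length, ...
unit_systems = 3      # SI, imperial, local/cubits
decimal_points = 4    # choices of decimal point placement

tickets = constants * wonders * characteristics * unit_systems * decimal_points
print(tickets)  # 19200

# Chance that at least one ticket hits a 1-in-10,000 coincidence.
p_hit = 1 - (1 - 1e-4) ** tickets
print(round(p_hit, 2))  # 0.85
```

Under these (generous) assumptions, finding a 1-in-10,000 coincidence somewhere is more likely than not.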

Expand full comment

So you're telling me there must have been aliens explicitly *avoiding* encoding such coincidences in various ancient wonders?

Expand full comment

No, the Illuminati is suppressing study of the great wonders in order to prevent people finding all the winning lottery tickets scattered around.

Expand full comment

This year's gift guides are predictable and sad. I'm looking for your Top-1 recommendation for each of these:

a. Really Good Black Friday deal.

b. A gift for your SO.

c. A gift for coworkers.

I'm intentionally not specifying budget, SO's gender, interests etc. I'm just looking for good ideas in any price-range, and in any interest category (tech, history, literature, rationality etc.).

Only thing I'm asking is that you share your top-1 recommendation only ;-). Why? Because it's fun to think about "best", "most valuable" etc. ideas, instead of saying "I have 10 great ideas" :-P. I guess I can't stop anyone from sharing more than 1 really...

Expand full comment

I thought the conventional wisdom was that Black Friday had become mostly hype to clear out inventory, with some tricks like raising prices in the months leading up to it, or brick-and-mortar stores advertising discounts on big-name items that immediately sell out, to drive foot traffic.

But maybe that's too cynical, and there have to be a few counterexamples out there... maybe Anker's power stuff, which is already kind of good value for money?

Really curious how discount days change when supply chains are messed up and online retail has eaten the world. They clearly still happen, like Amazon day and Singles day, I just wonder if they have different goals and impacts that are not obvious. I'd love to see any data (or even wild speculation!) on how discount days have changed over the last decade if anybody has any.

Expand full comment

I've wondered whether a lot of the Black Friday deals will show up on eBay as people realize they bought things they don't actually want.

Expand full comment

Does anyone have any good tips about making medical and dietary decisions when there isn't very much data? My baby daughter is going to have to go on a drug that is known to be associated with having lots of allergies. It seems really unlikely to me that choices about weaning etc. aren't relevant to reducing this risk, but since so few kids need this drug I think it's unlikely there will be good medical trials on this.

Expand full comment

From https://bariweiss.substack.com/p/lose-the-mask-eat-the-turkey-and

> The largest study worldwide, the Israeli study, showed that natural immunity was 27 times more effective than vaccinated immunity in preventing recurring Covid illness. The only two studies to the contrary are from the CDC. They were sham, jerry-rigged studies that were so embarrassing they would get disqualified in a seventh grade science fair project. That’s how horrible these studies were.

Anyone know the basis for this claim ?

Also, thoughts on this interview overall are welcome. I'd never heard of Dr. Makary before - his pedigree sounds trustworthy, but the interview format leaves little room for references/footnotes, which makes this a "trust me" format, not a "trust but verify" format. I don't like this on principle.

Expand full comment

Here is a newspaper article that discusses a paper that the CDC sometimes cites when it makes misleading claims about vaccine immunity vs natural immunity: https://www.tampafp.com/nih-director-violated-agency-policy-by-intentionally-misrepresenting-natural-covid-immunity-study-watchdog-alleges/. The study itself is here: https://www.cdc.gov/mmwr/volumes/70/wr/mm7032e1.htm.

Here is an article comparing the Israeli study and the recent low-quality CDC paper: https://brownstone.org/articles/a-review-and-autopsy-of-two-covid-immunity-studies/. The CDC paper is here: https://www.cdc.gov/mmwr/volumes/70/wr/mm7044e1.htm.

I recommend reading the two CDC papers. The problems with these papers (discussed in the two articles) are pretty obvious.

Expand full comment

Thanks!

Expand full comment

I haven't read The Nurture Assumption, but got a lot of similar information from Bryan Caplan's 'Selfish Reasons to Have More Kids'. I think the results of the 'parenting doesn't matter' studies are oversold. IIRC, 1 SD 'better' parenting can do things like raise IQ on average by 3 points. Not a big difference individually, but far more than 0 - especially at the extremes of the probability distribution. 3 IQ points roughly doubles the frequency of 150 IQs, and 6 points roughly quadruples it.
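The tail claim checks out with a back-of-the-envelope normal model; this sketch assumes IQ is normally distributed with SD 15 and that "better parenting" simply shifts the mean:

```python
import math

def iq_tail(threshold: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """P(IQ > threshold) for a normal distribution, via the survival function."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

base = iq_tail(150)              # population with mean 100
plus3 = iq_tail(150, mean=103)   # parenting shifts mean by +3
plus6 = iq_tail(150, mean=106)   # +6

print(plus3 / base)  # ~2.0: roughly doubles the frequency of 150+ IQs
print(plus6 / base)  # ~3.9: roughly quadruples it
```

So a shift that is negligible for the average kid substantially changes how many end up in the far right tail.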

Expand full comment

Plausibly the great families have environments at the +3-4 SD range.

Expand full comment

I've seen some people (including Paul Graham) making a big deal about the lack of association between parenting and Big 5 personality traits. I think the findings have been misinterpreted as saying "everyone ends up becoming themselves, so what's the point of good parenting". The study literally begins with "personality traits are stable, but also amenable to change." I'd be willing to bet that even though personality is stable, perceived personality (by both the person and others) and well-being are still affected by parenting. A neurotic person with good coping mechanisms might always have a tendency towards anxiety, but if they avoid falling into negative thought patterns they might not think of themselves as especially anxious and generally feel fulfilled.

Expand full comment
Comment deleted
Expand full comment

How does that relate to your parenting, Paula? I think I'm missing a link here. :)

Expand full comment

Is there any real downside for a commenter here using their real name? I started using the name of one of my old S Corps - and my favorite entry lake to the BWCA - on a whim early on.

Expand full comment

My feeling is that if any of us gets famous enough to be worth a deep dive into our online history, someone will inevitably find all our alter egos. The network logs are there. Even Tor records could theoretically be cracked. As long as an internet packet can get back to your eyeballs, so can a snoop.

Therefore, I choose to just present a clean image everywhere. If someone finds me, they'll find someone who tries to be a good person.

Also, I don't seek fame unless it comes by accident in the course of my trying to do good things. There's security in obscurity, not in the sense of hiding the keys to the vault, but in the sense of the vault looking nonvaluable.

Expand full comment

In security, people consider different threat models. Some protections may be sufficient against an angry teenager with the attention span of ten minutes, but inadequate against a state actor. So you take them, and understand where you are safe and where you are not. If I ever run for president, I assume that this account will be quite easily connected with my identity. But if I apply for a job in a company where one woke HR person will do a quick background check on me by googling, they will not make the connection.

> If someone finds me, they'll find someone who tries to be a good person.

There is a difference between being good and avoiding controversy. Do you have an opinion on the genocide of Uyghurs? You don't need to answer (I am trying to discuss the meta level here), but any specific answer has a chance to get you in trouble with someone.

Expand full comment

I think your point about threat models is good advice. I suppose if I were more worried about being surrounded by woke HR people, I would revisit my posting strategy. I'm not (much), so I don't. And if anyone else were to expend the effort to factor in their threat model, I'd admire their industry. In my case, I get to avoid that effort - I post as one persona.

I can engage your question about Uyghurs on the meta level without even stating an object level opinion: any position I express on my one persona will, I think, get me in trouble with only the people whose opinion I don't have to worry about. An HR person could get me fired, or refuse hiring me, on a job I would not want anyway - having to feign a position I can't endorse would likely not be worth that job. If I were running for public office, it would get me in trouble with people who weren't going to vote for me in the first place. In the limit, it could deprive me of some critical donors, but then, in the limit, I can also just say that I have no intention of running for public office. (Which I suppose suggests something depressing yet understandable about all politicians.)

So, that's the tradeoff as I see it. I'm careful about my one persona; in return, I get to only have to worry about that one.

That downside is maybe sometimes non-trivial. What I consider "myself" is a version that is relatively sober and serious, as a consequence of how easy it is to misunderstand sarcasm or even oblique speech online. In other words, I try to only ever say what I really mean, after some thought; I don't just blurt out stuff in the heat of a moment, like a snap judgment about some trial making the headlines or what I "think" ought to happen to everyone who picked some side in some debate. I see little gain in equating ephemeral online quotes to someone's long-term thinking, and I figure I can try to avoid people doing the same to me by mistake.

Expand full comment

I've been posting here and elsewhere under my real name for decades, and it hasn't caused me any trouble. But I don't live in places cancel-mobs are likely to reach me, or have friends or family who would turn against me for standing too close to wrongthink, so YMMV.

Expand full comment

Yeah this is my first experience with this sort of anonymity. I don’t say things here that I’m not willing to stand by so it feels a bit odd not to have my name by my words.

Expand full comment

I find that I behave a bit better and put more effort into my posts when they're attached to my name, so I do.

Expand full comment

I’ve spent a few minutes thinking about what I’ve commented here and the only things that seem like they could come back to haunt me are things that were meant ironically.

I long for a font that indicates [this is a joke]

Maybe an HTML <joke> tag.

Expand full comment

Depends, but I would err on the side of safety. Maybe there is no problem now, but there might appear a problem tomorrow, and it may be impossible for you to remove the existing comments (or they may be already noticed, archived, and screenshot).

Many people read this blog; many of them read without ever commenting. Your current boss, or your (potential) future boss, may be reading this blog without you ever noticing, but they can notice your name.

I assume that in not-so-distant future, there will be companies providing a service for HR, where for a small fee they will compile a report of things you have posted online, sorted by controversial. (One of the things where machine learning can be useful.) Consider the possibility that the most controversial things you write under your name will be taken out of context and included in a report that all your potential employers will read before the job interview. Maybe the person doing the interview will not even really mind what you wrote, but they will probably throw your CV in the garbage anyway, because it is not worth for them risking the possibility that the boss finds out and gets angry that they failed to do their job properly.

Scott writes about many controversial topics. Also, you never know which topics will be considered super controversial 10 or 20 years later. People have been fired from jobs for doing things that were *not* considered controversial at the moment they did them. Even if most people around them were doing the same thing. (As an analogy, consider e.g. voting for Trump. Half of the American population did it. Yet there are situations where admitting to this would get you in trouble. Not because you are some super rare villain, but simply because it can make you a convenient target in your local environment on an unlucky day, and everyone can signal their virtue by attacking you.)

In the past I used my full name online, then I changed my mind. My kids will be strongly advised never to use their full names online. The risk is simply not worth it (unless you are so rich that you will never need a job, or it is your strategy to do controversial things because you profit from clickbait). I am unhappy that we live in this kind of situation, but this is where we are. Too many crazy people out there, coordinated by the evil powers of Twitter et cetera.

Expand full comment

>I assume that in the not-so-distant future, there will be companies providing a service for HR: for a small fee they will compile a report of things you have posted online, sorted by how controversial they are.

This seems to imply that such companies don't already exist. How confident are you in this?

Expand full comment

People are still inviting me for job interviews. So even if such companies exist, they are not sufficiently widely used, or not good at finding the most controversial things.

In the (more) dystopian future, you will be checked by such company everywhere, because not having checked you would get the HR employee fired.

Expand full comment

Do you consider yourself to be in the most controversial 5% of the population? Because if not it's possible that they just can't find anyone noncontroversial.

Expand full comment

Given the general perceived lack of software developers, this makes a lot of sense.

However, the "5% of the population" should probably refers only to people competing for the same job, right? So in my case it would be "5% of software developers", not the general population.

I am not even sure what would be the proper way to measure controversy in the general population. Like, some people have way more *impact* than others. In general, working-class people often have tons of politically incorrect opinions, but because they are working-class, they are mostly irrelevant; no one actually important listens to them. Similarly, opinions expressed on Facebook are less important than the same opinions expressed on your own blog, simply because the former will quickly scroll down and disappear, while the latter will remain, can be linked, etc.

But either way, I am most likely *not* in the top controversial 5%.

Expand full comment

It was "5% of the population", because 5% was my wild guess at the unemployment rate (leaving aside COVID). Even if you select maximally on boringness when selecting employees (ignoring things like relevant skills entirely), if 95% of people are employed then someone at the 90th percentile of controversiality is going to get employed.

Related: https://www.lesswrong.com/posts/HCssSWMp7zoJAPtcR/you-are-not-hiring-the-top-1
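The arithmetic here can be checked in a toy model (all numbers hypothetical): even if employers filtered purely on boringness, with a 95% employment rate someone at the 90th percentile of controversiality still clears the hiring bar.

```python
import random

def still_hired_at_90th_percentile(n_people=100_000, employment_rate=0.95):
    """Toy model: everyone gets a random controversiality score, and
    employers fill jobs purely by boringness, hiring from the least
    controversial person upward until 95% of people are employed."""
    scores = sorted(random.random() for _ in range(n_people))
    hiring_cutoff = scores[int(n_people * employment_rate)]  # 95th percentile
    person = scores[int(n_people * 0.90)]                    # 90th percentile
    return person < hiring_cutoff

print(still_hired_at_90th_percentile())  # True: still below the cutoff
```

Of course real employers don't select maximally on boringness, which only strengthens the point.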

Expand full comment

Oy. Such a world to be alive in.

I’m not on any social media now. I was on Facebook for a while to keep up with family and old friends. I dropped my account when I started seeing disturbing conspiracy theories being taken seriously.

Twitter? Never even considered setting up an account. Seemed like a way to converse with bumper stickers.

Expand full comment

Yes.

Expand full comment

Thank you Furrfu. If that is indeed your real name.

Expand full comment

Listen, Colonel Bat Guano, if that really is your name…

https://m.youtube.com/watch?v=Ef-JYpYM81Q

Expand full comment

Two blegs:

I ask for two things, a good history of the world (in under 400 pages) and a history of the Late Republican and early imperial Roman periods in the style of Kulikowski's Imperial Triumph. Anyone have suggestions?

Expand full comment

Hmm, I wrote this yesterday, but it seems to have been swallowed by the system:

Harari "Sapiens" is somewhat inaccurate at times (you can find online reviews describing where he was wrong), but a fun read and only slightly longer than 400 pages.

Expand full comment

Of the reasonably recent histories I've read, Rubicon by Tom Holland is an engaging gallop through the late Republic, and Mary Beard's SPQR is a good general history which spends a lot of time on the late Republic/early Imperial period. Don't know how similar they are to Kulikowski, I'm afraid.

Expand full comment

History of the world in under 400 pages? You’re better off with this YouTube video: https://m.youtube.com/watch?v=xuCn8ux2gbs

Expand full comment

All that comes to mind is "The Outline of History" by H.G. Wells. More than 400 pages, though.

I read it in my youth...~40 yrs ago. There should be a better telling by now?

Expand full comment

He wrote a shorter version after that, "A Short History of the World", https://archive.org/details/cu31924028328908 (still 436 pages!), which has now finished serving its term of copyright, and also Downey and Chesterton wrote rebuttals.

https://en.wikipedia.org/wiki/Human_history is 38 pages. It's comprehensive, highly illustrated, extensively referenced (13 of those 38 pages are a bibliography), and meticulously correct in the usual Wikipedia way. Unfortunately it's also deadly boring because, in the usual Wikipedia way, it can only contain statements that are factually true in an objectively verifiable way. Still, it's readable from beginning to end.

It gives, I think, undue emphasis to recent events; there are two whole pages on the 20th century and another half-page on the 21st, which is about twice their share of recorded history. There are of course articles such as https://en.wikipedia.org/wiki/21st_century (43 pages), https://en.wikipedia.org/wiki/Late_modern_period (33 pages), https://en.wikipedia.org/wiki/20th_century (21 pages), and https://en.wikipedia.org/wiki/Dissolution_of_the_Soviet_Union (39 pages).

Expand full comment

Wikipedia is inevitably mediocre, so I'm asking for something published.

Expand full comment

I've come to realize that even though I may enjoy reading wikipedia pages, the retention rate long term is abysmal. To the point where I can read a page twice and only realize near the end.

Expand full comment

Agreed. I feel like you need to do something with the information to really incorporate it into your brain, like conworlding or something.

Expand full comment

I would like something up to at least 2010. We know far more about world history than during H.G. Wells's time. For the second half of the 20th century, I'd focus on 4 things: recovery of the first and second world, revolutionary communism and dictatorship around the world, fall of the USSR and American unipolarity, and rise of Thailand, Indonesia, China, and India. Maybe add environmental issues to that, as well.

Expand full comment

> I’m wondering if I’ve been blogging so long and cast such a wide net that I’ve collected readers who aren’t familiar with The Nurture Assumption

I think it is a mix of this plus people coming in with strongly held beliefs that are expensive to update.

Bryan Caplan has talked about how economics is a weird subject because in a lot of 100-level classes, students will argue with the professor that the whole field is wrong. Not many subjects get that. If The Nurture Assumption were taught, I'd bet it would receive similar treatment.

Expand full comment

Just been looking at summaries of The Nurture Assumption and I get the impression that it doesn't say that you have no influence as a parent, just not necessarily in exactly the ways you might first think?

It's also hard to separate genetics in the sense that 'being the type of person who tries to positively influence one's children' may be genetic in itself.

I think as a parent you inevitably do lots of mini-experiments (even if you don't think of them in those terms!) in the course of trying to figure out the whole parenting thing, and see the short-term effects of those on your children. And those experiments make you feel like you have an influence. Some patterns in parenting styles and children's behaviour are also so striking and feel so causal that I can see why one would want to seek evidence of correlation rather than causation to repudiate those beliefs.

I also doubt any parent with more than one child thinks they can influence personality or baseline intelligence but I find it hard to believe that there aren't ways one can positively influence one's children. If nothing else, making them feel loved feels like it must be important, and I get the impression that The Nurture Assumption agrees there - although as I say I must read it.

Expand full comment

I think the whole field is wrong. Smart kids.

Expand full comment

What's your in-a-nutshell case that the entire field of economics is wrong?

Expand full comment

Is that really an accurate comparison?

You seem to think The Nurture Assumption is accepted as the final word.

From its Wikipedia page:

“However, the psychologist Frank Farley claims that "she's taking an extreme position based on a limited set of data. Her thesis is absurd on its face, but consider what might happen if parents believe this stuff!"[6]

Wendy Williams, who studies how environment affects IQ, argues that "there are many, many good studies that show parents can affect how children turn out in both cognitive abilities and behavior".[6]

The psychologist Jerome Kagan argues that Harris "ignores some important facts, ones that are inconsistent with this book's conclusions".[8]”

The book’s reception was mixed at best.

How did it become gospel on ACX?

Expand full comment

It became gospel because of follow-up research that did an extremely good job of showing that it was correct. The book is, like, 40 years old at this point.

Expand full comment

Just read the Judith Rich Harris obit. She passed away at 80, January 2019. Steve Pinker has many kind words for her but it seems like her thesis was still on the fringe. Not saying it's incorrect, just not accepted.

I admire an iconoclast as much as any other ACX reader, I'm just not sure she got this right.

For that matter I'm not even sure what she said beyond a couple summaries. I guess I will read her book before I say any more.

Expand full comment

Published in 1998, so we're talking about 23 years now.

The pushback I'm seeing is pretty strong but I can't claim that it hasn't been refuted.

Can you point to the follow up research?

Expand full comment

I thought this was pretty good: https://www.edx.org/course/the-science-of-parenting

Expand full comment

I’ve read her book now. The main takeaway for me was that a child’s age peers have more effect on socialization than parents do.

Expand full comment

Both Trudeau Sr. and Castro are among the most influential historical leaders of their respective countries...

It would be hard to tell whether Trudeau Jr. got his political talents (and indeed, he very quickly rose to the role of Prime Minister in his political career) from being the 'adopted' son of the most consequential Canadian prime minister in postwar history, or from being the biological son of someone who was able to navigate the politics of revolution and post-revolutionary Cuba.

Expand full comment

Well, he isn’t the son of Castro so…

Expand full comment

I guess my objection to "success is genetic" is that it does NOT follow even from the assumption that everyone's personality (including intelligence) is 100% genetic and 0% environmental. And I do believe that this assumption IS a good approximation of reality, so no need to oppose me on that part.

Suppose that because of genetics, you get "the type of brain that is capable of inventing the cure for cancer". But there is still a huge gap between having this type of brain... and actually inventing the cure for cancer.

Your environment can make you interested in biology and medicine... or history and conspiracy theories. Both are great areas for someone who can memorize thousands of small details and notice patterns, but the latter does not lead to you inventing the cure for cancer.

There is a difference between merely having a talent... and having the same talent, plus good tutors, learning resources, opportunities to network with people studying the same thing, etc.

Education costs money. If your family can't afford it, no matter how smart you are, the path to medicine is closed. Not necessarily because you lack knowledge, but simply because you lack the credentials.

Your general financial situation also determines whether you can study things that are interesting and spend a lot of time thinking about them... or whether you must do whatever maximizes your income in the short term, even if it destroys some opportunities in the long term. At the other extreme, financially independent people can get 10 extra hours of free time every workday; that is not a small thing.

Money can make the difference between owning a famous company... and being the most productive employee in a company that made someone else famous. In academic sphere, political connections can make the difference between being known as the guy who invented the cure for cancer... or being on the list of his sidekicks.

(An argument in the opposite direction is that generally intelligent and conscientious people have more than one opportunity in life, so even if something prevents them from inventing the cure for cancer, they can still become famous for something else.)

In short, to achieve great success, you need to score high on both genetics and luck. Even if nurture has no impact on your personality traits, your family can influence your luck.

Plus, there is this example of the three Polgár sisters. People usually dismiss it by saying "they just inherited the chess genes from their parents, duh". However, although their parents were chess players, they were no grandmasters. And without the benefit of hindsight, you probably would have *predicted* the *opposite* -- the daughters being *less* good at chess than their parents -- because of the regression to the mean. And they exceeded their parents, thrice.

Yes, the Polgár sisters definitely inherited some superior "chess genes", but the genes alone would not have made them so famous. They also needed the supportive family. If you read the book, there were a few hostile people placing various obstacles in their way, such as trying to ban them from competing in the "male league", or refusing to issue them a passport so they would be physically prevented from participation in the world championships... and the parents had to fight hard to overcome these obstacles. (So if there was ever an equally genetically gifted girl born in a less supportive family, we would not know her name.) Not all competitions are fair, and the family can make a big difference here, too.

So, my model is that you have "genotypic geniuses" and "phenotypic geniuses", and the family plays the role *twice* -- the first time it is a source of the genes, and the second time it helps to transcribe the genes into actual world-class success. "Genotypic geniuses" that happen as random mutations are much less likely to translate into "phenotypic geniuses". The thing that we see running in the successful families are the "phenotypic geniuses", but the "genotypic geniuses" could be much more widely distributed in the population.

Expand full comment

The genetic raw material is only part of the path to success. Let's say Aldous Huxley was raised as an adopted step brother of J.D. Vance in some small Appalachian town. No one in his poor Kentucky family is going to say "That Aldous is pretty sharp, maybe we should pool our meager resources and get him a tutor."

A more likely scenario is "That Aldous is too big for his britches. Using those big words he gets from his book learning. He thinks he's better than us."

Instead of studying Classical Greek and Latin as a kid, he catches catfish and helps with the chores. Maybe he finds time to visit a not-so-great rural library now and then and learns big words that only get him in trouble.

If he catches a couple of breaks - like JD did - perhaps he gets a couple of scholarships and goes on to write a much better version of "Hillbilly Elegy". But I don't see him writing "The Perennial Philosophy" and "1984".

He wouldn't have had the necessary nurturing early environment, not to mention the family connections, to prepare him for those big successes.

Expand full comment

Brave New World? 1984 was Orwell.

Expand full comment

Oops. Thanks Sol.

Expand full comment

and no ability to strike through or edit. Alas.

Expand full comment

No problem. Huh, I'm usually just a lurker and didn't notice there was no editing. If you wanted to, I guess you could copy, delete, and re-post, and accomplish the same?

Expand full comment

I’m not much more than a lurker myself. It’s a mistake. I’ll live. Thanks though.

Expand full comment

What percentage of variance in success do you think is accounted for by genetic variance versus environmental variance? I'd also be curious to hear what you think is accounted for by family environment variance.

As is, I can't really tell how relatively important you believe the different aspects are (and thus, whether there's actually any big disagreement).

Expand full comment

I do not have enough data to make a reasonable guess. I believe that the effect described by Scott is real, but weaker, potentially much weaker. Can't say how much weaker exactly. Yes, the "genotypic geniuses" are overrepresented in some parts of society, and in some families. But in addition to this, a supportive family increases the chance that they become "phenotypic geniuses", and one way how the family does it is already having some of them (which gives you role models, a network, resources, hero license, halo effect...).

My disagreement is about the *magnitude* of the effect. Going only by the "phenotypic geniuses" makes you overestimate how rare the genes are and how much they are concentrated in families.

It could also lead to the opposite conclusion about what should be done. If you believe that the "phenotypic geniuses" are all there is, then duh, the great families will take care of their own, everyone else is doomed anyway. But if you believe that there are many "genotypic geniuses" that in a more supportive environment could also have become "phenotypic geniuses", then perhaps creating such environment could make a great difference. (Which is quite different from believing that *everyone* is a potential genius. If everyone is, you may want to support everyone equally. But if "genotypic geniuses" exist, you may want some method to *find* them in the population, outside of the great families.)

The magnitude of this effect probably varies across history, as Charles Murray has shown. A hundred years ago, a famous professor would be more likely to marry a pretty girl; these days he is more likely to marry another professor; so society gets more genetically stratified than it was in the past. That would suggest that these days a greater fraction of "genotypic geniuses" are born into supportive families. They still might be a minority of all "genotypic geniuses" though.

Expand full comment

So you say you disagree about the magnitude of the effect, but I still don't understand what you're saying the true effect size is for genetic vs environmental factors.

I think IQ variance in the US is ~60% genetic and the rest is (by definition) environmental, but probably only ~10% of variance is directly from parents rather than all the other bits of your environment. I think success is ~30% IQ in the US, ~30% from other durable and heavily genetically influenced factors (ex. conscientiousness, agreeableness, default motivation levels), and ~40% environmental factors like peer group/where you went to school/parental choices.

Is this wildly off from what you think? What numbers would you throw out?

Expand full comment

I honestly don't know. Sorry if that disappoints.

All I have is anecdotal evidence. I have met a few people who were highly intelligent, but no one ever told them, so they considered themselves unfit for intellectual tasks.

(Specifically, I have made bets with a few people that if they take a Mensa test, they will pass. They all passed and were surprised a lot. I am not saying that passing a Mensa test is a high bar; by the ACX standards it is pretty low. I am saying that those people believed that they wouldn't pass even such relatively low bar, while I made the bets because I was impressed by their intelligence. And I am not easy to impress.)

I have faced some minor obstacles myself. Things that seem quite absurd for me now, such as winning mathematical olympiads, but then being told that I am not actually that good at math, because... and I am not making this up... I lived in the poor part of the town. Also, because no one in my family is a math professor. Most other math olympiad winners had some math professor in their family. Facing such absurdities regularly, it does not really convince you that you are wrong, but it does make you tired. I guess I am overly sensitive about the "talent in families" topic.

Expand full comment

I think then we probably actually agree.

A correlation existing doesn't mean you'd never find mathematical brilliance outside of rich, otherwise successful families.

For example, suppose we had person A who passed a Mensa test and had no relation to major mathematicians, and person B who failed a Mensa test and was Paul Erdos's grandson. If I were to guess who was better at math, I'd guess person A (and I'm assuming you would as well).

I think the crux of the difference is that we seem to differ in our interpretation of what a genetic correlation should imply. You highlight instances where it seemed to rob intelligent folks of the license to believe they were brilliant (despite being brilliant) when they didn't come from a background of intellectuals. I think that those folks' lived experience should provide vastly more evidence of their intelligence than their family background, to the point where they could safely ignore their background in trying to decide if they pass some imaginary intellectual bar.

I would guess we would both agree that for an individual, simply taking an intelligence check is easier and more accurate than seeing how successful your ancestors are.

Expand full comment

I was shocked by the global map in Scott's Ivermectin post ( https://bucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com/public/images/ac9e4f34-f9cc-40f2-9d83-da4e7178fad7_772x330.png ) showing that, in about half of the world's land area, more than 10% of the population is infected with worms. Shouldn't there be charities to distribute Ivermectin or something similar in these parts of the world? Shouldn't Mexico, Brazil, China, and India be able to do this on their own?

Expand full comment

I can't wait to see how you react when you find out what fraction of the population has eyelash mites, foot fungus, dandruff, vaginal yeast infections, and skin bacteria.

Expand full comment

I live in Romania, and one of the (many) things which annoys me here is how much old people accept the various declines that come with old age, even when they are preventable or curable. It's very rare here for healthy people to do preventive testing, even when they are old and obviously in risk groups. Same with low-hanging fruit like the flu vaccine or dental care. So yeah, a consequence is that we end up with a lot more visits to the doctor than would be strictly necessary.

Point is, a lot of it is cultural. Not necessarily meaning that the culture is inherently bad - I'd guess it's mostly an adaptation to long periods with unavailable or unreliable health care. But whatever the reason, a solution that doesn't take into consideration the cultural aspect will most likely be incomplete.

Expand full comment

I imagine one problem is that treating the parasites is only a temporary solution. Unless the infrastructure is modernized enough to keep food/water sources from being infected, people will just keep getting parasites (though it probably isn't so black and white; maybe in some areas a long time passes between infections on average, so you only need medicine every couple of months or years).

Expand full comment

https://www.evidenceaction.org/dewormtheworld/ and https://schistosomiasiscontrolinitiative.org/ are two charities which address the issue, although I'm not sure what medicine they use. Both are top charities according to GiveWell.

Expand full comment

There are charities that do this. Pharmaceutical companies give them the drugs for free to generate goodwill but they rely on donations to fund distribution of the deworming pills. It's very cheap per person treated but the health impacts of parasitic load are hard to measure.

Expand full comment

What if the total number of U.S. Senators stayed fixed at 100, but they were apportioned based on the square root of each state's population?

Also, regardless of how small a state's population was, it would be guaranteed one Senator.
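A minimal sketch of how such an apportionment might work (the state figures below are illustrative round numbers, not exact census data): seats are handed out in proportion to the square root of population, with a one-seat floor, then nudged until the total comes out exactly right.

```python
import math

def apportion(populations, seats=100):
    """Apportion `seats` among states in proportion to sqrt(population),
    guaranteeing every state at least one seat."""
    roots = {s: math.sqrt(p) for s, p in populations.items()}
    total = sum(roots.values())
    # Initial allocation: round the proportional share, floor of 1 seat.
    alloc = {s: max(1, round(seats * r / total)) for s, r in roots.items()}
    # Nudge the most over-represented (or under-represented) states
    # until the seat total is exact.
    while sum(alloc.values()) > seats:
        s = max((s for s in alloc if alloc[s] > 1), key=lambda s: alloc[s] / roots[s])
        alloc[s] -= 1
    while sum(alloc.values()) < seats:
        s = min(alloc, key=lambda s: alloc[s] / roots[s])
        alloc[s] += 1
    return alloc

# Four states, populations in millions (illustrative), 12 seats to share:
print(apportion({"CA": 39.5, "TX": 29.1, "NY": 20.2, "WY": 0.58}, seats=12))
# → {'CA': 4, 'TX': 4, 'NY': 3, 'WY': 1}
```

The square root compresses the spread: California has roughly 68 times Wyoming's population but gets only about 8 times the root-weight, so small states keep outsized (but no longer equal) representation.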

Expand full comment

What for? If there is no advantage in equal representation in the Senate, we don't need to put up with the expense and complication of a bicameral legislature, just one chamber would do. If there is...then it works best as is, with manifest equality.

Expand full comment

Senators have 6 year terms, representatives have 2 year terms. I don't know whether the difference is important, but it might be.

Expand full comment

Important to what? Senatorial terms are also staggered, they have much bigger staffs, they're older, and the functions and rules of the Senate are quite different from the House in a number of ways. It's chalk and cheese.

Expand full comment

An interesting hypothetical, but sadly it will never be anything but that. (Because equal representation in the Senate is the one constitutional-structure feature that is specifically exempted from the amendment process.)

Expand full comment

It's only mostly exempted. The Constitutional text here is "and no state, without its consent, shall be deprived of its equal suffrage in the Senate."

This leaves open a few possibilities. The most straightforward would be that if all existing states could be persuaded to agree to an amendment modifying the apportionment of the Senate, then that would meet the bar of every state consenting. Any future states could then be required to consent as part of the statehood admittance process. Actually getting unanimous consent from the states seems a tall order, but might be just barely doable by some combination of bribery (e.g. granting special subsidies or tax breaks to smaller states for a certain number of years in exchange for consenting) and blackmail (e.g. cutting states that don't consent off from existing federal subsidies).

A more abuse-of-rulesy approach would be to amend the constitution to reduce the Senate's portfolio of powers, turning it into something more like an analogue of the Upper Houses of Westminster-style parliaments: an advisory body with the ability to propose amendments or apply a procedural brake, but without the ability to completely block major legislation.

Or even more abuse-of-rulesy, the Senate could be stripped of all of its powers and a new "Schmenate" upper house with population-sensitive apportionment established in its place.

Or, to try to reconcile the modern ideal of one-person-one-vote with the original intent of giving individual states equal representation in the Senate befitting their co-equal semi-sovereign status and addressing the fears of small states that the union would be dominated by a coalition of large states, the larger states could petition Congress to allow them to divide into multiple states each, so the gap in population between the larger and smaller states would be much less than it currently is. This would not even require a Constitutional amendment, just approval by the legislatures of the existing states, a convention elected by the population of the proposed new states to draft a constitution and apply to Congress for admittance, and a simple act of legislation to admit each new state.

Expand full comment

Can we just amend the constitution to remove the part about needing the consent of states?

Expand full comment

The Constitution says you can't get rid of that part via Amendment.

Expand full comment

Can you get rid of the part that says you can't get rid of that part via amendment?

Expand full comment

It's been sacked.

Expand full comment

Yeah, the feature is baked in I’m afraid. It was a must to sell the Constitution to the less populous colonies.

Expand full comment

Wait, how do we know Trudeau isn't a Castro?

Did someone collect some of his glorious hair for a DNA test?

Expand full comment

I don't know if the timing works, but it sounds just as possible that his father is Ted Kennedy or Mick Jagger: https://en.wikipedia.org/wiki/Margaret_Trudeau

Expand full comment

Why does it become harder to remember names than other parts of speech?

Why can it be possible to remember a thing, and a bunch of related facts about a thing, while still not remembering its name?

I just mentioned Erdos in a comment. I couldn't remember Erdos' name, but [nomad mathematician] turned it up.

Expand full comment

I want to say "because you use them less often, and any word you use less often is harder to remember".

But there's this glitch. I don't forget names, but I often start to use the wrong name for one of the 3 other animate objects in this household, and correct myself mid-word (and my father generally says the entire incorrect name, like my brother's name instead of mine, and then may or may not correct himself).

These names are words I use often, and the same mistake doesn't happen as much for other nouns. So, I need a new theory.

Expand full comment

Using the wrong name from a small set strikes me as different from struggling to remember a noun.

Expand full comment

I think it's because names generally just lack the sort of structure that makes them memorable - by their very nature they're largely arbitrary structureless "handles" that gives them very little "sticking" power compared to a whole interconnected set of facts about the person, where one fact often helps reinforce the memory of another.

Also, linguistic familiarity matters too, IIRC. People are much better at remembering names from languages they're most exposed to. Like, for an "Anglo" like me, a very English name like "Adam Smith" is just going to stick better than a Hungarian name like Erdős, and it gets worse the farther you drift linguistically.

Expand full comment

Exactly. Anime names seem like interchangeable jumbles of letters to me, but I'm sure they're much more memorable to Japanese people.

Expand full comment

Verbs are just as arbitrary, but maybe having fewer of them makes them easier to remember.

Expand full comment

Well, yeah, almost all parts of the language are arbitrary (main exception is stuff like onomatopoeia) - but the difference is that I'm not learning the language, I've used it for decades, so I'm not just going to randomly blank on a verb the way I might blank on somebody's name.

... but if you are learning a language, "blanking" on a verb or other part of the language is completely normal.

Expand full comment

Verbs are not arbitrary, not in that way. Think of "to cook" - don't you have a wealth of associations coming up?

Names, on the other hand, have most of their associations necessarily cut. "Tom" may bring up memories, but there are a lot fewer of them, and (most importantly) they're an error: since you don't know which Tom in particular I'm referring to, the symbol should be kept as void of any links as possible. And any associations should be made with the concept of that particular person, not with his name.

Also frequency, especially for family names. Other than the very few which are already concepts in themselves (Hitler, Einstein), we don't use each one very often. And when we do, the meaning it has gathered is usually useful (the Smiths down the street).

Expand full comment

My experience is that sometimes I can remember many details about a thing, even minor details, but not the name for that thing. This is not the same thing as going completely blank about a name.

Expand full comment

In biochem and molecular bio, I started learning etymologies just to remember the names themselves. It kinda works, actually.

Expand full comment

So here’s my theory - which can never be proven. The brains recall or search function is slow and highly compartalized. And buggy.

So the brain tries searching a certain part of memory based on a hint and doesn’t reset its search criteria until it gets another hint. And faces and names are, perhaps, stored in different “folders”. So once you try one folder you are pretty much stuck with it - or sometimes are (i.e., it’s a bug).

A friend of mine was trying to think of the railway station in Dublin beginning with H. He just couldn’t get it, so I asked him to think of a city in Texas.

He went “Dallas, Houston … oh, Heuston”. So that similarity in names reset his search criteria.

I didn’t know that Texas cities would be easier for him, I just guessed. In fact he knew both, of course - I never said Heuston to him so he had to recall it himself. Both were in memory.

In fact, in a different situation he might have forgotten the Texas city and been helped by my saying “think of railway stations in Dublin”.

Expand full comment

Possibly related: https://sudonull.com/post/12946-1000-dimensional-cube-is-it-possible-to-create-a-computational-model-of-human-memory-today

Particularly:

> In trying to extract a stubborn fact from memory, many people find that constantly knocking on the same door is not the wisest strategy. Instead of demanding immediate answers - commanding your brain - it is often better to put the task aside, take a walk, perhaps take a nap; the answer may come as if uninvited. Can this observation be explained by the SDM model? Perhaps at least partially. If the sequence of recalled patterns does not converge, then further investigation of it may prove fruitless. If you start again from a neighboring point in the memory space, you may come to a better result. But there is a mystery: how to find a new starting point with better prospects? You might think that it's enough just to randomly flip a few bits in the input pattern and hope that the result will be closer to the goal, but the probability of this is small. If the vector is d bits from the target, then the other N − d bits are already correct (but we do not know which d bits are wrong); flipping a random bit improves matters with probability only d/N, and otherwise moves us even further away. To make progress, you need to know in which direction to move, and in N-dimensional space that is a difficult question.

Expand full comment

Actually it's not a difficult question. That's what optimization methods do. One common method is hill-climbing: check every possible 1-bit move, and take the one with the best result. There are many variations on this to reduce the chances of getting stuck in a local optimum. Another is gradient search: here the representation is continuous, and you use partial derivatives to find the direction that's uphill in n dimensions. It comes with many of the same variations to reduce the chances of getting stuck in a local optimum. Another is iterated relaxation methods, as in a Hopfield network, or in Markov chain Monte Carlo models. This is more common in models of memory retrieval, because it works together with a simple method of storing the points in N-space that you want to remember. Another way of choosing the next point is genetic algorithms, which are useful if the items stored in the underlying representation have linkage disequilibrium owing to some hierarchical structuring of the ontology of the things represented. My point is just that this is a mature, well-researched field, and many satisfactory solutions are available.
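To make the hill-climbing idea concrete, here's a minimal sketch on bit vectors (the names and the toy objective are mine, not from the SDM article): each step checks every 1-bit flip and keeps the best one, stopping at a local optimum.

```python
import random

def hill_climb(start, score, max_steps=1000):
    """Greedy 1-bit hill climbing: repeatedly flip the single bit that
    most improves score(); stop when no flip helps (a local optimum)."""
    current = list(start)
    for _ in range(max_steps):
        best, best_score = None, score(current)
        for i in range(len(current)):
            candidate = current[:]
            candidate[i] ^= 1                 # flip bit i
            if score(candidate) > best_score:
                best, best_score = candidate, score(candidate)
        if best is None:                      # no 1-bit move improves
            return current
        current = best
    return current

# Toy objective: negative Hamming distance to a hidden target pattern.
# This landscape has no local optima, so greedy search always succeeds;
# real retrieval landscapes are bumpier, hence all the variations.
target = [random.randint(0, 1) for _ in range(32)]
score = lambda v: -sum(a != b for a, b in zip(v, target))
start = [random.randint(0, 1) for _ in range(32)]
assert hill_climb(start, score) == target
```

With a bumpier score function the same loop stalls at local optima, which is where random restarts, simulated annealing, or the stochastic methods mentioned above come in.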

Expand full comment

Er, the ontology doesn't actually have to have a hierarchical structuring; linkage disequilibrium of representations in the ontology is sufficient. I just think this usually happens in ontologies because they usually are hierarchical.

Expand full comment

Now I want to know why there's a railway station in Dublin called Houston.

Expand full comment

Formerly 'Kingsbridge' station, it is named in honour of Seán Heuston, an executed leader of the 1916 Easter Rising, who had worked in the station's offices.

Expand full comment

Well, Marcy, have you considered using some sort of mnemonic, like Sherlock Holmes’ ‘Mind Palace’, to keep these things straight?

https://www.smithsonianmag.com/arts-culture/secrets-sherlocks-mind-palace-180949567/

Expand full comment

Was misnaming me a joke about remembering names, or an accident? By weird coincidence, my sister's name is Marcy. Or maybe not a coincidence, if you happen to know of her.

Expand full comment

Yes I was joking. A silly riff on not remembering names. Just a coincidence about your sister being named Marcy.

Expand full comment

So far as I know there are a ton of complicated reasons for why memory works better or worse, but two of the most common failures of memory of which I've heard are (1) not having enough associations with other memories (more likely with a proper name than with a generic noun, and much more likely than with a verb), and (2) having something closely related in unmeaningful ways (e.g. sounds the same) interfering with the exact recall.

That is, your mental Erdős number may be too large or too small.

Expand full comment

Completely speculative, but I wonder whether some of the incredible achievements of Ashkenazi Jews born in central Europe from 1880 to 1920 were the result of a similar mix of circumstances that has also been associated with extraordinary, concentrated achievement in other times and places. Periclean Athens, 1st century BC Rome, 15th century Florence, maybe late 18th century Edinburgh, maybe mid to late 17th century London, maybe the Netherlands in the early 17th century. Most of our culture was created in a very small number of places, in a short period of time. And then, although high achievement in those places/cultures continued - as it certainly has done for Ashkenazi Jews judging by the number of Nobel prizes they win - the great, world-changing contributions faded away.

These cultures do seem to share some things. Wealth and power - they were all rich places that were extending control over others, and were at the forefront of contemporary technological development. Novelty - all of these were very consciously new societies, doing something different from anything that people living in those places had done before. Threat - they were all under constant threat, not just of attack, but of total destruction. It's almost as if the relative lack of a past they could call their own, and a future in which their culture was reasonably certain to survive, helped focus the collective mind of these places on achievement in the present.

Looking at these indicators, one might expect the most interesting place in the world today, and the most promising place to look for major contributions to the future of our culture, to be rich, technologically advanced, realistically threatened with annihilation, and new - a civilisation which thinks of itself as separate from anything that has gone before. I'd say that the place that seems to fit those criteria best would be Taiwan. I know almost nothing about Taiwan.

I'm in the happy state of being aware that there's a mountain of speculation on this subject, but sufficiently ignorant of the nature of that speculation that I feel free to add my own half-baked noodling to the pile.

Expand full comment

Great choice of environments and periods. Except I don't see "being under threat" as an anyhow central part of this picture. I see it more as those environments having somehow succeeded to get a scene running, and to maintain it alive for a certain time.

Beethoven was not from Vienna, but was drawn to it because for music in the 18th/19th centuries, Vienna was The Scene in that region - same as Milan was in the neighboring region. Just like Paris was The Scene for writers in the first half of the 20th century, drawing them in from across Eurasia and the Americas. The US universities were The Scene for bright minds in the second half of the 20th century, drawing them in from across the globe. (while present-day US universities have become markedly decadent, bloated, barnacled and on a path of decay). Silicon Valley is an absolutely massive Scene, though the question may be for how much longer.

I don't think there is a "recipe" for creating a scene, nor is there a "recipe" for how one is dissolved. Scenes are above all living organisms. If it's not a living organism, it's not a scene. Those organisms are very, very diverse, and correspondingly diverse are the processes by which they decay, petrify or succumb to poison.

Taiwan seems to be a pattern-match of your key assumptions, including the one about "being under threat" which I think is far from central. Taiwan is certainly a more-than-well-off country, but I think not a scene. Meanwhile, China - itself "Taiwan's looming threat" - seems to be comparatively more active and alive in the scene department, even though it is not under threat from anyone. (or at least not from the US, which is lately too busy shooting itself in the foot)

Expand full comment

You're right, late 19th/early 20th century Paris is a real problem. It wasn't particularly under threat, and wasn't really a new culture, but there's a lot of major stuff happening. You can make an argument for the real risk of nuclear war from 1950 to 1989 being suggestively correlated to the period when American music/art/literature was really doing new things.

Taiwan is the test, I've more than somewhat ridiculously decided. If in ten years' time it turns out that actually a huge amount of great stuff has been going on there, and is continuing to do so, then this theory should definitely be taken seriously.

Expand full comment

In ten years' time, the Republic of China might already become a part of the People's Republic of China. OK, kidding aside, I agree that your theory is worth considering because it is falsifiable - it could be potentially falsified or vindicated in such a timeframe.

That said, I still think that "being under threat" is not key, that "bringing to life a Scene" is key. (while "*how* do you bring a Scene to life, what is the recipe?" might be an extremely hard-to-answer question - except if the answer is that there are no firm recipes).

While we ultimately attribute progress to the many individuals who brought it about - and I agree with those individuals getting all the credit that they're getting as individuals - I think we mustn't overlook the contribution that the Scene's existence makes in this picture. The Scene brings them together with like-minded individuals, who riff their ideas off each other - and it *motivates* them to dedicate their extra time and energy passionately to those pursuits, as opposed to dedicating it to whatever more mundane pursuits they would have had, absent that Scene. In other words, those -same- individuals, with those -same- innate capabilities and drives, might not have accomplished nearly as much had they not been thrown into that Scene.

I don't think the "nuclear threat" had anything much to do with US achievements post-WWII. Europe was under that exact same threat, right? But Europe had just spent half a decade thoroughly gutting itself, ending up largely devastated and crushed ("in body and spirit"), including in the Allied parts who won. In contrast, America had won with over an order of magnitude less human loss and virtually no destruction whatsoever in its mainland - and high spirits, the impression that they could do anything. Scenes were indeed springing to life in post-WWII US - literary, academic etc. - but I don't think it had much to do with the "nuclear threat".

Taiwan is a very rich country. As is Switzerland. Taiwan is under threat. As Switzerland isn't. And neither of the two presently have a Scene running - at least to my knowledge. I may be wrong. We will see in 10 years.

P.S. about the "richness aspect". While I don't think there's a set-in-stone recipe for creating a Scene, I think that one of the unavoidable ingredients is ... not *money* per se, but Slack (in the meaning that Scott uses the word, https://slatestarcodex.com/2020/05/12/studies-on-slack/). Slack is *far* from being the sufficient ingredient, but it is a mandatory ingredient. A baseline of richness is indeed required to have Slack at all, but beyond that baseline, further richness does not necessarily bring further Slack, it can even be to the contrary. The Taiwanese can at the same time be quite richer than the Portuguese, while having less Slack than them - if the work expended on achieving such material standards sucks out their time and energy.

Expand full comment

My list looks like this:

- Athens, 500-150 BCE, but our record after 400 BCE is spotty.

- Not Rome. On the contrary, I think that, for a nation that lasted 2000 years (~600 BCE - 1453 CE), Rome had an astounding lack of intellectual achievement, including in art, math, literature, science, and philosophy, unless we attribute everything done by Greeks ruled by Rome to Rome. The Romans had good engineers and lawyers, and a few good poets.

- Venice, circa 800-1800 AD. The reinventors of republicanism; the forerunners of the Renaissance; also a world military power. Suspiciously underemphasized in our histories.

- Southern Italy, 1300-1600 CE.

- The Netherlands during the Dutch Golden Age, ~1550-1700. Another time/place of criminally underestimated importance. They invented the toleration of opposing ideologies, and imported it to the US when they founded New Amsterdam. Every step towards Enlightenment in Europe during the Renaissance relied on Dutch printing presses; subversive material was for a hundred years printed mostly in the Netherlands, where the press was difficult to control.

- The UK, 1660 (establishment of Royal Society) - 1776 (death of Hume), with a special nod to Scotland.

- The US, around the time of the American Revolution. The persistent insistence of historians that the French Revolution was more important than the American Revolution strikes me as perverse. The French Revolution was materially important in wiping out the aristocratic class. But it gave us no new ideas or new knowledge; merely another proof of the inherent violence and instability of naive communitarian ideology.

All of the places on my list had many things in common:

- No powerful monarch or central government, and no planned economy

- Money (not barter)

- A merchant economy based on sea trade with considerable freedom

- A sophisticated monetary system, including loans with interest

- A history of the government paying back its loans (I'm not sure if they all had this)

- A rising middle class, due to this freedom of trade

- Respect for personal artistic achievement, as evidenced by the fact that we know the names of their artists and architects (as opposed to, say, those of Rome, or of the Middle Ages)

- Individualism (the notion that it's okay for individuals to seek personal honor, to feel personal pride, and to have their own interests and preferences)

- Individualism and communitarianism are not exclusive! Athens, Venice, the Italian city-states, and the Netherlands had a high degree of community spirit. The US and Britain did not because they were patchworks of different nations.

- Competition, both with other cities and nations, and with other individuals within the same state

- An upper class which, unlike those of France and Spain, was free to engage or invest in trade or labor

- Widespread disillusionment with religion among intellectuals

- Significant freedom of speech and writing, owing to this disillusionment with religion

- Ineffective enforcement of religious authority (this was true even in Renaissance Italy, where, despite physical closeness to Rome, the Catholic church had much less power than in France or Spain)

- Naturalistic and non-idealist art, resulting from freedom from violent religious oppression and from Platonist ideologies

I'll call cultures with these attributes "Enlightened cultures".

I think all of the items on this list resulted, in the European cases, from the weakness of the monarch and of the Catholic Church. The exception that proves the rule is Constantinople, which had most of these things, but had a powerful, centralized state and state religion, and accomplished little culturally in 1000 years other than building the Hagia Sophia and preserving ancient manuscripts.

[I deleted some paragraphs here that were political.]

Note that this list consists almost entirely of things Plato opposed, disliked, or said he would eliminate from the ideal state (in Republic).

Expand full comment

I think if you're looking at Rome from 75 BC to 1 AD you've got Virgil, Horace, Ovid, Cicero, Livy, Lucretius. People who know far more about their works than I do would describe that as a flowering of literary genius, I think. And great works of art, even if along the lines laid out by the Greeks hundreds of years previously. Again, my limited understanding is that their feats of engineering were extraordinary, and unparalleled prior to the modern day.

Interesting that the quality of literature and art declines so markedly from the 1st century AD onwards, even though Rome itself flourished.

Expand full comment

I think you are right in identifying the period 75BC - 1AD as the high point.

Expand full comment

Like I said, they had good engineers, lawyers, and a few poets (I was thinking specifically of Virgil-Horace-Ovid; that's why I said "a few" rather than "several"). Add historians to that if you like. Lucretius became the greatest Roman philosopher merely by passing on the ideas of Democritus and Leucippus without adding any insane metaphysics or religious doctrines; and the Romans were so uninterested in what he had to say that it was almost lost, while they instead devoted centuries to adding epicycles to Plato's vile and insane philosophy, then stirring it in a pot with Judaism, Mithraism, Manichaeism, and Zoroastrianism, and calling the resulting stew Christianity.

They, or their Greek slaves, made some good sculptures; but whereas Greek sculpture developed, Roman sculpture IMHO merely gradually declined from the Greek. They made unique and great advances in architecture. But all that is trivial considering the extent, duration, and wealth of their empire.

In Rome's defense, the theory of island biogeography (in evolutionary theory) predicts this. The diversity generated by evolution is not proportional to the land area available; it is proportional to the area raised to the power of z, where typically 0.15 < z < 0.35. For z = 0.25, it predicts that a land area 100 times as great will produce only about 3 times as many new species. The production of artistic "advances" is in many ways much like evolution, so my prior is that we should model it with the theory of island biogeography.
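The arithmetic behind that claim is just the species–area power law, S ∝ A^z; a quick check (the function name here is mine):

```python
# Species-area relation from island biogeography: S = c * A**z.
# Scaling the area by a factor f scales the species count by f**z.

def species_scale(area_factor, z=0.25):
    """Factor by which species count grows when area grows by area_factor."""
    return area_factor ** z

print(round(species_scale(100), 2))          # 100x the area -> ~3.16x the species
print(round(species_scale(100, z=0.15), 2))  # low end of typical z -> ~2x
print(round(species_scale(100, z=0.35), 2))  # high end of typical z -> ~5x
```

So across the typical range of z, a hundredfold increase in area buys only a two- to five-fold increase in diversity - the sublinearity the comment appeals to.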

Expand full comment

Rome didn't contribute much? I think "contribute" is the wrong way to think about it. During the time of Rome there were certainly huge advancements in every field. The problem is that the fall of Rome caused many of these advancements to get lost. In fact much of Greece's known works were also lost for a while and were rediscovered hundreds of years later.

Expand full comment

They did invent one new literary form: the rape comedy. In a Roman rape comedy, a couple is in love and about to be married when the woman is raped by an unidentified man, and so the wedding must be called off and the woman sent away in disgrace. But in the end, the couple discovers that the man she was to marry was himself the rapist, so they can get married and live happily ever after.

On that note, they also invented gladiator shows as entertainment; apparently they grew out of the ritual sacrifice of slaves at the funerals of wealthy men. If they didn't invent them, they at least greatly increased them in scale and grandeur. That's the sort of art the Romans invented.

Expand full comment

What advancements? I tried recently to make a list of inventions made by the Romans, and used Google to search for things they had invented, compiled a list from several websites about the Romans, and then went through the list, and found the only thing on it that they might possibly have invented was the overshot water mill--an engineering improvement on the undershot water mill. That's one major incremental improvement in one technology, in over a thousand years. They didn't invent concrete, the ballista, the catapult, indoor plumbing, the aqueduct, the arch, the barrel vault, or any of the other things people sometimes erroneously credit the Romans with inventing. They developed no new math, and all I know of Roman science is Galen.

They made incremental advances in engineering, which *is* a type of intellectual achievement; but I've never claimed they were bad engineers. I do claim their skill at engineering excelled their scientific understanding to such a degree that it's a mystery how they did so much engineering without developing more theory.

Given their huge population and long duration, I find the sparsity of their accomplishments in art and science astonishing. Taken as a whole, they were roughly on a par with individual cities and small countries, like Athens, Florence, Paris, and the Netherlands, each of which had only a tiny fraction of the resources and the labor power of Rome.

Expand full comment

"Rome had an astounding lack of intellectual achievement, including in art, math, literature, science, and philosophy"

Can we really say that? For a civilization of sixty million people, we have hardly anything at all preserved from the Romans of 100 BC-600 AD. Sadly, the Romans did not write on cuneiform tablets.

Expand full comment

Yes, I think we can, and we have a great deal preserved from the Romans of that period. Unfortunately, it's mostly crap.

Expand full comment

Also, the pre-260s Roman Empire certainly did not lack in artistic achievement.

Expand full comment

Well, they made good death masks, though this was more like photography (they used wax to make molds). They made some great tombstone paintings, but I think that was only in Africa, probably all by Greeks. They had great architecture and made great mosaics in the Imperial period, at least from the 2nd to the 6th centuries. (In science, they had Galen.) But mostly they stole Greek art, or had Greek slaves make art for them. They didn't develop a distinctive Roman style AFAIK until the time of Constantine, as in Constantine's arch, which is the first instance I know of Romanesque art, whose prime characteristic is sloppiness and inattention to detail. There was a bit of later great Byzantine art; lots of it if you like 1000 years of painting flat alien-looking madonnas with child always in the same pose. And some painters from Constantinople made great art around 800 AD when they left the stifling, static art world of Constantinople for Charlemagne's court; but that's more the exception that proves the rule--their painters *could* make great art, but weren't allowed to.

I'm unfamiliar with their pre-260s art. Can you link me to some examples?

Expand full comment

(Technically not tombstones; they were painted on wood.)

I don't mean there was *no* great Roman art / science / etc.; but that, considering the vastness and duration of their nation, they contributed little. They were outdone by 1 or 2 centuries of flourishing in Athens, and Florence, and the Netherlands--probably all areas with at most 1% as many labor-hours to work with as Rome.

Expand full comment

Hm, you might be right.

Expand full comment

"I know almost nothing about Taiwan."

Relative to the mainland, it's stagnating in all respects (at least partly due to mainland competition), similar to Japan. The scientific and cultural power of the mainland is growing by leaps and bounds, though from a very low base.

Expand full comment

Comparing a country with a population of 23.5 million to one with 1.5 billion hardly seems fair. I think comparing China to almost any other nation on earth would've shown that the other country was "stagnating" relative to China just because China's starting point was so low. And maybe in another 40 years we will say the exact same thing about China when comparing it to countries in Africa.

Expand full comment

"And maybe in another 40 years we will say the exact same thing about China when comparing it to countries in Africa."

No. Africa has a dearth of human capital (and, I note, had as much time to develop as China).

"Comparing a country with a population of 23.5 million to one with 1.5 billion hardly seems fair."

Well back in the 1960s-1980s the situation was exactly the reverse, with Taiwan diverging from the mainland.

Expand full comment

For many years in the US Taiwan *was* China. As Mad magazine satirically described it in the 60’s, that big land mass occupying continental east Asia was referred to as “Big Empty Spot”.

I remember buying my first garment with a “Made in China” in 1981.

My first thought was “Do they mean *Communist* China? What’s up with that?”

Expand full comment

"maybe late 18th century Edinburgh, maybe mid to late 17th century London"

Neither of these fit the criteria of either Novelty or Constant Threat (at least not from war), in my opinion. Unless you want to point to the Black Death and Great Fire as instigators of London's intellectual rise.

Plagues are often touted as great shifters/resetters in terms of attitudes, but was London especially under threat in comparison to other cities of the time? I don't think so.

Expand full comment

Late 18th century Edinburgh doesn't fit constant threat, really, although the last Jacobite rising was 1745. But then great as Hume and Smith are, I'm not sure the Scottish Enlightenment was really significant enough to fit this argument. Novelty does fit, I think - they rebuilt their city, were creating a new North British nation, and were consciously separating themselves from Scottish history to date.

17th century London fits a bit better, I think. The population increased by about 1000% between 1550 and 1700, from a town to one of the largest cities in the world, and they definitely saw themselves as creating something new, most notably in not being Roman Catholic, but then extending to the subsequent explosion of new religions and political movements, Quakers, Levellers, Republicans, etc., that flourished in the city. I think the threat was real, too; a Catholic king or a Catholic invasion were real possibilities, and would have been likely to lead to the destruction of the culture that was being created (and it was partially destroyed after the Restoration in any event). That's what I mean by Threat, really - if the wrong person had won a war in the 15th century, London's culture wouldn't have been changed much. Two hundred years later, something new had been built, and was at risk.

Would love to know how the theory fits the Golden Ages I know even less about - the 8th century Baghdad of the House of Wisdom, the equivalent if there was one in Song Dynasty China, Mauryan India, etc.

Expand full comment

Fine- I was interpreting the criteria a little more stringently than that, but I think I agree with what you said.

Is the 'imminent threat' supposed to be a threat to the culture that is being created, or is it a threat to the safety/security/lives/independence of the place?

You seem to imply that it is the former directly above: a reactionary regime destroying the thriving new culture of London. But that is tautological - if there wasn't a new culture to uproot, then it wouldn't be under threat. And I don't really think that Taiwan faces something like this, does it? The culture is somewhat different to China's, sure, partly due to the divergent political systems. But it's hardly uniquely facing cultural eradication. More, it faces military threats to its independence.

(There might also be something to be said about the convergence of many advanced societies' cultures in the modern era, which means there is less culture to be under threat. Scientific advances look pretty similar in China as they do in the West, for example, as does music.)

What I thought you meant was a more existential level of destruction that the place faced e.g. Athens. But, to be honest, I fail to see how any urban area for substantial chunks of history doesn't fit the slightly weaker threat criterion as you've sketched it out above, assuming we're applying it to the polity rather than the culture.

Or perhaps a weaker claim: which of London's roughly cultural/geographic/historical contemporaries didn't face threats of a roughly equivalent level of risk as a monarch with different views coming in and stamping out opposition and doing a bit of killing and looting? Didn't basically everywhere face potential threats of around that level for 1000+ years?

Expand full comment

I think I would describe the level of threat faced by London in the 17th century as being on a similar level to that faced by Athens in the 5th century BC. After all, Athens was in fact conquered, and did survive - though many people died and its culture was torn apart.

It's the realistic threat of the destruction of a new culture by external armies that matters. Everyone always faced the threat of disease, starvation, etc, and less so in these places than in many others, as all of them were rich. And I think you can have a new culture that isn't under threat of destruction.

Again, as above, I know almost nothing about Taiwan, but quick googling would suggest that there has been a conscious effort to create a new culture, separate from that of Mainland China, in the last twenty years. That's exactly the kind of thing that my theory would suggest should lead to an explosion of major cultural/scientific achievement in the near future.

A lot of London's 17th century contemporary states faced this kind of threat - all of the Protestant states of Europe, basically. But I don't think that's normal. It's not equivalent to a king coming in and killing people - whatever happened in the Hundred Years' War, for example, there was no new culture that was threatened with destruction in war (the Lollards strengthen my argument, I think). And, of course, London wasn't the only Protestant state in 17th century Europe that made huge contributions to the growth of human civilisation.

Expand full comment
Comment deleted
Expand full comment

Possibly related: African-Americans being a primary influence on music all over the world for some 80 years or more. (Duration very approximate-- I'm not sure when their influence got out of the US.)

I'm not sure most people realize how remarkable it is for one smallish ethnicity to have so much influence for so long, we've been living with it all our lives.

And it can't be simply genetic, since it's not Africans.

Expand full comment

If your child has a personality that gets on your nerves, what should you do?

Expand full comment

I think that, if that's the worst thing about the parent-child relation, it's still above-average.

Expand full comment

A daily meditation practice can help a lot with patience. I’ve been doing one four over 4 years now and a lot of things that used to irritate me no longer set me on edge.

It doesn’t have to be a big time sink. 15 minutes a day done regularly helps a lot.

Personally I’ve lengthened the time of my ‘sits’ but that’s driven by an urge to understand what and why I am. I get some interesting insights if I quiet my mind for 90 minutes but that sort of lengthy practice isn’t necessary to develop more equanimity.

Expand full comment

It doesn’t seem to have helped with my spelling though. doing one *for* over 4 years.

Expand full comment

Very tentative suggestion: spelling requires memory of how words look. Can you get into a meditative state where you notice how words are spelled?

Expand full comment

Yeah, I could give that a whirl. :)

Expand full comment

Is it their personality or their behaviour that actually gets on your nerves?

I'm not sure how much they can be disentangled.

I think I annoyed my mother by not caring much about clothes, and she might have preferred to be argued with to some extent rather than just being told it's alright about some clothing choice.

However, I wasn't perceptive enough to strongly notice her preference at the time, and not agreeable enough to pretend to care.

Based on your self-criticism, I would focus on developing the skills of perception and agreeableness in your child.

There are now three nags to subscribe per page instead of two (top-of-page/end-of-article/end-of-comments instead of top-of-page/end-of-comments). Is this intentional?

Probably sometime next year they'll add obnoxious popups instead of improving the amazingly shitty comment system.

(There's also an extra hidden instance of one's username somewhere near the top; I Ctrl-F my username to find the parts of the thread with me in them and there are two hidden instances up the top instead of one.)

Last whole number OT I gave some arguments from the cognitive scientist Donald Hoffman's "The Case Against Reality: Why Evolution Hid the Truth from Our Eyes". Hoffman's interest is in understanding consciousness/qualia; he presents radical ideas, such as that the medium-sized physical objects we see, like humans and other animals, are merely the equivalent of objects in a video game, because our perceptions have evolved to give us a useful interface to reality that tells us nothing about what is behind the screen. It's a pop-science book strained and stained with tired tropes from The Matrix; nevertheless I found some of the ideas serious and novel.

Last week I focused on Hoffman's emphasis on how much our perceptions, particularly our visual perceptions, are divorced from "reality".

Now I want to focus on Hoffman's radical ideas about what it is that consciousness is.

Hoffman approaches understanding consciousness by attempting to solve the maze backwards. Cartesian-like, he starts over with a basic premise: consciousnesses are agents who 1) perceive The World; 2) decide to act based upon info from (1); 3) act, changing The World, which changes perceptions, leading to new decisions and new actions, changing the world again, etc.

It's the simplest model of consciousness (although he acknowledges an unconscious computer could also fit that model). The idea is to cut consciousness down to its essential elements, assume it is real, and then ask What Then?
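
The perceive-decide-act loop is easy to make concrete. Here is a minimal sketch (my own toy, not from Hoffman's book; the thermostat-like world and the numbers are invented purely for illustration):

```python
# Agent loop: perceive -> decide -> act, with the act changing the
# world and therefore the next perception.

def perceive(world):
    # The agent gets only a coarse summary of The World, not its full state.
    return "hot" if world["temp"] > 25 else "cold"

def decide(percept):
    return "cool_down" if percept == "hot" else "warm_up"

def act(world, action):
    world["temp"] += -5 if action == "cool_down" else 5

world = {"temp": 40}
for _ in range(4):
    act(world, decide(perceive(world)))
print(world["temp"])  # 30: the loop hovers around the threshold
```

Note that the schema says nothing about inner experience, which is exactly why an unconscious computer (or a thermostat) fits it just as well.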

Hoffman seems very influenced by studies of epileptic patients who have had the hemispheres of their brains separated. The two hemispheres seem to exist as separate yet whole consciousnesses ever after. Such a result seems to imply that two conscious structures capable of acting independently can also join together to form a new consciousness containing both.

Hoffman believes that humans, say, are a grouping of many fundamental conscious agents joined together in a hierarchy. So one agent might be in control of the blood flowing through your liver; another your heart rate, another your breathing, etc., and all of these agents are "unconscious" to the "conscious" higher-order executive functions, which we more normally think of as consciousness.

We may even, who knows, ourselves be lower-level agents playing our video games of life, while in reality our business is something more important, like being the unconscious agents who monitor and stabilize the blood pressure of sleeping dragons. Who knows?

Hoffman doesn't claim to know much, and ultimately that is the failing of his book. He has big ideas but can't flesh them out.

> It's the simplest model of consciousness (although he acknowledges an unconscious computer could also fit that model). The idea is to cut consciousness down to its essential elements, assume it is real, and then ask What Then?

> Hoffman believes that humans, say, are a grouping of many fundamental conscious agents joined together in a hierarchy. So one agent might be in control of the blood flowing through your liver; another your heart rate, another your breathing, etc., and all of these agents are "unconscious" to the "conscious" higher-order executive functions, which we more normally think of as consciousness.

Ignoring the extremely loaded verb "decide", there is still a significant contradiction here - if a human consciousness is made up of multiple narrower consciousnesses, then we can't say that the human consciousness is meaningfully "simple". It's possible to bite that bullet fairly hard and agree that a thermostat is conscious (to a more foundationally simple degree than a human!), but then it's not clear how much of the conventional baggage the word carries applies to this definition.

I'm most interested in what technique was applied in the discovery of more fundamental subordinate consciousnesses, incl. the homeostatic processes, and what that means for the model. Is this a concession that the definition is too broad, forced to capture phenomena that would otherwise be a poor fit? Or is there a rigorous approach where this actually gives us new insights and suggests future research?

I think I heard part of an interview with him on Sam Harris' podcast, and I was not particularly impressed by his ideas.

I'm going from memory here, but IIRC his argument about perception vs reality relied heavily on evolutionary simulations showing that organisms that perceive the utility of some resource do better than organisms that perceive all the accurate information about that resource. Which, OK, if you parameterize something in a way that emphasizes useful information and deemphasizes useless information, you'll do better than someone without access to that parameterization. But why does it then follow that this is how real conscious agents behave?
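
The setup as I remember it (sometimes summarized as "fitness beats truth") can be sketched as a toy simulation. Everything here is my own invented illustration, quadratic payoff included; the point it makes is your point: an agent that perceives only utility does at least as well as one that perceives the true quantity, because it optimizes the score directly.

```python
import random

random.seed(0)

def utility(quantity):
    # Fitness payoff peaks at a middling quantity (think salt or water:
    # too little and too much are both bad).
    return -(quantity - 50) ** 2

def truth_agent(options):
    # Perceives true quantities and greedily takes the largest.
    return max(options)

def fitness_agent(options):
    # Perceives only the utility of each option and takes the most useful.
    return max(options, key=utility)

truth_score = fitness_score = 0
for _ in range(1000):
    options = [random.randint(0, 100) for _ in range(3)]
    truth_score += utility(truth_agent(options))
    fitness_score += utility(fitness_agent(options))

print(fitness_score >= truth_score)  # True on every run
```

Of course, the toy only shows that the utility parameterization wins by construction, which is exactly the objection: the result is baked into the payoff function, and nothing follows about what real conscious agents do.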

The part about consciousness being a hierarchy of individual conscious elements seems similar to someone else I heard on Harris speculating that consciousness may be a fundamental property of matter, not unlike charge. They went so far as to speculate that electrons may be conscious on some level. But the justification seemed to be that we can't think of a way that consciousness arises from unconscious elements, therefore everything must be conscious. I find that unconvincing. Consciousness may be something that only arises when there are interactions between separate elements. Electrons may have some property that is necessary for consciousness, but not sufficient unless you get lots of electrons together in the right configuration.

Apologies if I'm misremembering the way these ideas were presented.

I'm interested in finding out more about the reality we're failing to see.

Infrared is an example. In fact, all of the electromagnetic spectrum except visible light. We evolved (not just humans) to see the frequencies we need to see.

I think that's a more modest claim than Hoffman makes.

He’s making a more general point for sure, but it’s related. We have no reason to see solid objects as non-solid, although there are gaps between atoms. If you want to take an extreme view and insist that all the space is taken up/occupied down to the size of protons/quarks etc., then it is probably true to say there is no such thing as a solid object, but if we can’t go through it or sit on it then it’s solid to us. We don’t need to know about quantum mechanics at the level we are interacting with the world, either.

I don’t really get this either “ humans and other animals are merely the equivalent of an object in a video game because our perceptions have evolved to give us a useful interface of reality that tells us nothing about what is behind the screen.”

If there was a conscious mind in a video game it wouldn’t need to know the ins and outs of 3D coding or pixel refresh rates and a digital tree is, to it, a tree. And in fact, within the reality of the video game, it exists.

err... he's not saying reality is like a video game, he's saying that our perception receives only high-level information that is greatly preprocessed, while low-level or pixel-level information is inaccessible to our consciousness.

I didn’t say that either. I was using an analogy.

He is saying “it’s an illusion”. But it isn’t. We don’t need to know or see atoms, or most of the electromagnetic spectrum. The reality we see is what we need to see, and it exists.

I don't think it has anything to do with the solar spectrum, although that is a common canard, and it is true that the Sun's spectrum currently peaks in the visible (it peaked in the IR during its very early years). The Sun emits more than enough IR to see by, and the use of IR in night-vision glasses tells you that nocturnal animals would find it rather an advantage to be able to see in IR.

But IR is strongly absorbed by almost any biological material, or indeed anything made of molecules, so it would be almost impossible to construct a clear lens to focus the stuff, and I would say that is why animals don't see in IR (although I believe spiders have low-resolution IR-sensitive spots).

Sure, as you said - many animals do see in IR, where they needed to evolve to find prey at night, or find hot spots on a warm-blooded body. Bedbugs have IR vision. Mosquitoes, too, damn them.

Snakes too because they are looking for warm blooded animals to prey on.

For most animals, visible light is what is absorbed or reflected the best by solid items, thus defining them as solid. And of course differentiation of the frequencies into those qualia called colors is useful for fruit eaters and pollen hunters. (Of course some co-evolution is involved there - flowers want to be pollinated and fruit wants to be eaten.)

I said *no* animals to my knowledge see in IR, meaning they focus the light and get an image. There's no biological lens material that would work for that, as far as I know. Yes, pit vipers also have IR-sensitive pits, but they're extremely coarse -- basically it only tells the snake whether to strike left or right once it gets close enough.

This has nothing to do with qualia or with the solar spectrum, it's just a property of matter. Visible light can be focused because you can build biological lenses that are transparent to it. You can't do that with IR light.

Physics is exploring it. If we have difficulty perceiving anything, we build devices that can perceive them and project those observations into the domain we can perceive. Science has been doing it for centuries. Hoffman's argument is not particularly compelling IMO.

I'm not buying it. The disturbing thing about consciousness is that it is the only thing we can be sure exists, yet also the only thing we have literally zero explanation for.

IMHO we will need to totally rethink our fundamental scientific axioms before we can start to understand consciousness.

I believe our certainty that consciousness exists is the Big Lie in our cognitive machinery. If I wrote a fancy but "obviously" non-conscious NLP in-out loop and artificially injected some "facts" such as "consciousness exists" and "you are conscious" that were not grounded in any of its observables or subject to epistemic revision, I would have some fun watching it spin out, producing and retracting various statements about consciousness, like our "hard problem", never being able to make progress, nor to update itself, on the evidence of that lack of progress, toward the belief "my premises are incoherent."

In my honest opinion, we are that NLP loop. Almost all data that are processed in our minds as facts are subject to revision by sense data. The idea of "consciousness" is not, though we can't tell it apart from grounded facts. Thus we treat "we are conscious" as a similar class of statement as "the ground is wet", seemingly coherent and meshing with the rest of our beliefs, even though in reality it's not part of any coherent epistemic web.

I suspect repeated exposure to highly altered, dissociated, semi-waking states might be able to bring "consciousness exists" nearer to the domain of "revisable by sense-data". However, I doubt our cognitive architecture is able to function even close to normally without that synthetic fact present.
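
The thought experiment above can be sketched as a toy program (entirely my own construction, names invented): a belief store where ordinary beliefs update on sense data, but one injected "synthetic fact" is exempt from revision.

```python
class BeliefLoop:
    def __init__(self):
        # Grounded beliefs, revisable by observation.
        self.beliefs = {"the ground is wet": False}
        # Injected "facts": never grounded, never revised.
        self.axioms = {"I am conscious": True}

    def query(self, claim):
        # From the inside, axioms and grounded beliefs look identical.
        if claim in self.axioms:
            return self.axioms[claim]
        return self.beliefs.get(claim)

    def observe(self, claim, evidence):
        if claim in self.axioms:
            return  # no sense data can touch the synthetic fact
        self.beliefs[claim] = evidence

loop = BeliefLoop()
loop.observe("the ground is wet", True)   # revisable: updates
loop.observe("I am conscious", False)     # silently ignored
print(loop.query("the ground is wet"), loop.query("I am conscious"))
# prints: True True
```

The point of the sketch is `query`: the loop has no way to distinguish the unrevisable axiom from its grounded beliefs, so it reports both with equal confidence.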

Are you saying that the statement "I am conscious" is just like the statement "scientists are trustworthy/untrustworthy": something that is "not subject to revision by sense data"?

It is not clear to me that my experience of being conscious is not subject to revision by sense data. I can almost imagine my brain circuitry noticing that whatever normally allows it to notice consciousness is missing, and saying words like "Huh. I don't feel conscious anymore. Weird."

Does your belief [that consciousness is just an illusion] have an influence on your moral values? After all, if consciousness isn't real, then the subjective experience of pain and suffering isn't real. This would mean that a human being screaming under torture isn't qualitatively different from a computer program that prints "Please stop it hurts" whenever you press the Enter key.

This is a difficult question for me to answer.

My "cheap cop out" answer is that I believe morality is a synthetic construct, and that even if the epiphenomenal model of consciousness held, "good" or "bad" is just a game we play. There is no "qualitative" difference between making paperclips and preventing torture; these are merely different arbitrary preference sets over certain configurations of matter (or epi-matter, if that's part of your ontology).

My more serious answer is that it's perfectly consistent to define suffering as "mind-like systems that substantially resemble those of neurotypical human beings who claim to be in pain", and that, coupled with the arbitrary choice of "I want to avoid suffering", it will "all add up to normal" if you do that. In fact I suspect this "consciousnessless" definition is going to serve society much better than asking "but is it really conscious?" will when debates arise about whether advanced AIs suffer. And also for animal rights -- one must surely find it ironic how consciousness fetishism has allowed us to consider the complex minds of animals as qualitatively inferior.

My most serious answer is a simple affirmative, buried in the previous paragraph: yes, because suffering is defined, and there is no qualitative distinction between suffering and non-suffering systems.

A fun game to play is to try and create a mental model of the "simplest system that can experience suffering". Is it C. elegans? Can we use fewer neurons? In fact, I think that going down this rabbit hole is one of the surest paths toward my view of things, so give it a try!
