
I’m looking for good resources to learn math that use visuals and give the intuitions behind the math. 3B1B is exactly the kind of thing I’m looking for. I’m interested in learning a bunch of areas but I’m currently most interested in cryptography.


Why do business leaders consistently refer to their highly educated workforce as "talent"?

This talk of "attracting talent" and "retaining talent" is everywhere I look. (I first thought it was an artefact of tech/Silicon Valley culture, but I hear it in interviews with CEOs in other areas of business too.)

My impression is that they are often very happy to hire someone who learned their skills through extensive experience, rather than being born with them.

I think it's a convenient shorthand for "people with the skills we need", and I admit I don't know a great alternative shorthand. But I don't think this one is great either, as I believe it sends a very exclusionary message.

Is my impression right? If so why did we pick "talent"? And what could we say instead?


Hello

I am reaching out for help as I am struggling with anxiety and mental illness relating to an Information Hazard / AI thought experiment. I am currently being treated by a psychiatrist but I feel like I need to talk to someone who understands simulation hypothesis arguments and related issues.


Can anyone recommend a good offline variant of github? That is, my company has a git repository that lives on an offline network, and primarily I want the better UI experience that github provides relative to the CLI. I think this + lots of other bells and whistles can be had via Enterprise Github for $21/person/month; I'm wondering if there are cheaper alternatives (with fewer bells and whistles) out there.


GitLab is the big one; however, if you are just looking for a code browser and not issues, pull requests, CI, etc., then try something like cgit: https://git.zx2c4.com/cgit/about/


Thanks! I didn't know about GitLab; it might be the right answer for us. The main thing I really want is a good UI for pull requests. I don't need integrated issue tracking (I can't get it anyway, since the repository is offline, whereas our issue tracking is online so that other people can access it). I could see us getting some use out of CI, but living without it doesn't bother me too much. What's really annoying is having review comments consist of plaintext emails saying "on line 137 of file foobar.cs you do this thing, why do you do it" (instead of having that comment hang out next to line 137 of foobar.cs). For some reason, this doesn't seem to be a common feature set, and I'm puzzled by why that is.


I just suffered a heartbreak, realizing that I missed the Berlin meetup. Hope there will be another one soon!


Burnout.

I recently saw this TED Talk (uh oh, red flag!): https://www.youtube.com/watch?v=PrJAX-iQ-O4 ("Emily Nagoski and Amelia Nagoski: The cure for burnout (hint: it isn't self-care) | TED")

And I'm curious: what do people think about this "stress is a tunnel, it's only a problem if you get stuck in it" idea?


A criticism of space shows like Star Trek is that, even if an alien planet's atmosphere were chemically identical to Earth's, it still wouldn't be safe to breathe since it would contain microorganisms and toxic particles. Humans who visited would need to wear space suits, which they never do in most space shows.

If you used a time machine to visit the Mesozoic era, would there also be a risk of you breathing in microorganisms or biotoxins that would kill you? Do we have evidence for or against this?


You'd probably want to wear an environment suit (but not a full pressure suit) the first time you visited a completely alien world with a biosphere. Or at least a good filter mask. But the audience wants to see the actors' faces, so we make that concession to reality.

Reality is, the human immune system is really, really good, and even microorganisms that have specifically adapted to our fellow mammals usually can't gain a foothold in a human body until they've spent many years evolving in close proximity to humans. An alien microorganism that has the wrong chirality, or needs an amino acid humans don't have, or whatever, is going to just starve in a human body, and probably faces an insurmountable evolutionary hump trying to do otherwise.

So the protective gear the first explorers wear is *probably* unnecessary, but nobody who is serious about space travel takes even 1% risks if they don't have to.


They didn’t take any chances when the Apollo crews returned from the moon.

https://en.m.wikipedia.org/wiki/Mobile_quarantine_facility


An allergic reaction seems like a larger risk than an infection.


Very few bacteria even on the Earth are well-adapted to infecting a given species, such as humans. As a rule, nonspecific immunity and the lack of the organism's usual environmental niche mean most random micro-organisms that end up inside us die or are killed almost immediately. It's only a relative handful of species, which have evolved in close parallel to us, which have developed the necessary mechanisms to survive for any length of time*. So a priori alien micro-organisms are very unlikely to be infectious, let alone dangerous.

-------------

* The most successful of them, e.g. E. coli, can live and even thrive inside us indefinitely, because they have worked out a modus vivendi whereby they do us no harm (and perhaps some good) and our immune system in turn tolerates their presence.


Why would space microbes be adapted to live inside a human? Why would they even eat the same kind of biomolecules as we do?

It's honestly a tossup between the microbes having no idea what to do in a human body and the human body having no idea what to do with them, assuming they lack PAMPs (pathogen associated molecular patterns) the immune system evolved to recognize.

It's interesting that you could still get adaptive immunity to them, at least antibody-mediated - T cells would just get confused.

The Mesozoic era would be about as safe as the Anthropocene.


Since using space suits was clumsy and restrictive for the actors (they did it in the episode "The Tholian Web" https://www.youtube.com/watch?v=pWdtds3v1j8) and expensive for shows, Star Trek at least fudged the whole thing with the transporter filters (i.e. supposedly when beaming you back up from a planet, they checked for any nasty buggies you might have picked up below): https://memory-alpha.fandom.com/wiki/Biofilter


TL;DR: I don't think this is political (though I suppose the ramifications might be), but I'm concerned about how poverty is calculated in the US, and about recent articles that point at statistics generated using a newish monthly methodology based on the also-newish Supplemental Poverty Measure. I'm wondering if anyone knows about this and can clarify whether these measures are an improvement that better captures poverty rates, or just political hay-making.

For fun, I answer Metaculus prediction questions. One I was recently looking at was: https://www.metaculus.com/questions/7963/will-the-large-child-tax-credit-be-extended/

As I was investigating this I read this claim:

"At the center of the American Rescue Plan is a monthly payment structured as a tax credit for the vast majority of families — of $300 per child under 6 years old or $250 per child between ages 6 and 17. The benefit has been well received in polls, and studies say it quickly lifted millions of U.S. kids out of poverty."

Something about it seems wrong to me but I can't put my finger on it. With the rise of inflation across a variety of consumer indices (CPI up 5.3%) as well as quantitative easing (the size of the Fed balance sheet has doubled since last year to over $8 trillion), it's hard for me to imagine anyone was 'lifted' out of poverty.

I've been looking at this study and the associated methodology to try and get a better view into this claim: https://www.povertycenter.columbia.edu/forecasting-monthly-poverty-data

The big change here is that they calculate poverty on a monthly basis instead of an annual basis, stating numerous times that this method is supplemental to annual measurements. The reason they think this is important is that "the average family with children receiving income support in 2018 received more than a third of those transfers in a single month through a one-time tax credit payment." They believe it's important to understand this because month-to-month income volatility is a better representation of how families actually experience poverty. This seems like a reasonable claim.

Still something seems off to me and I'd be glad to hear if anyone else sees flaws in their methodology. The more I read into it, the more it seems like they are making a lot of guesses about what monthly income actually is from data that doesn't provide that level of detail. Furthermore, there seem to be noted issues with the Supplemental Poverty Measure (SPM) that might over-report poverty and fail to account for a number of inputs like home-ownership and health insurance. Do these issues compound if calculated monthly?

Info: https://www.aei.org/research-products/report/addressing-the-shortcomings-of-the-supplemental-poverty-measure/

I don't actually expect anyone to look at this stuff, but it was definitely an interesting and eye-opening look into official poverty statistics and it seems like the kind of thing that was developed specifically for political football. My bias has me feeling strongly suspicious, but I might be very wrong and would love to hear some other opinions.


We don't have tax credits of that kind here, but we do have Child Benefit (formerly Children's Allowance):

https://www.citizensinformation.ie/en/social_welfare/social_welfare_payments/social_welfare_payments_to_families_and_children/child_benefit.html

It's often been controversial since it was introduced, both because anyone is eligible to apply for it regardless of income (back in the 70s, this was defended as allowing women a separate income for the home in the case of abusive/controlling spouses who did not provide enough money to pay bills) and because of accusations of abuse (our equivalent of 'welfare queens').


Some time ago there was news about something good happening in US healthcare, with some transparency rules being created. And now:

> In addition to a number of other changes, the Final Rule repeals the price transparency requirement for hospitals

https://www.natlawreview.com/article/cms-backs-price-transparency-providers-and-plans


(Provided this means what I think it means) I'm honestly shocked that something like this could happen.

But I'm not worried; the invisible hand of the free market will guide the promised people to the best possible outcome for all involved.


Who are some people who look strongly "English"? I visited England several years ago and thought that most of them looked the same as white Americans. However, a minority of them had a strongly "English" appearance that set them apart from white people from other European countries. It was hard to put my finger on what was different, but it was definitely there.

People I'd nominate as strongly "English-looking":

Charles Dance

George Washington

Ken Miles

Bernard Montgomery

I guess they all have big, bony noses and big chins.


From this side of the pond, to me there is a particular 'American look' about the jaw - I think it's because of all that dentistry and orthodontics you lot engage in from a young age; it gives even women prominent, strong jawlines and full cheeks 😀


I would say American whites look like Germans. There’s an English rose type that you don’t really get in the US. And the floppy headed artistic type. Those traits are dying out everywhere maybe.


Growing up in the southeastern U.S., I noticed that a number of rural whites (though not a majority) have eyes that look a little East Asian. I think it's an Irish feature.


If anyone from Paris sees this, please tell the organizer I might be 20+ minutes late.


I'm pretty sure this is satire.

https://twitter.com/FalconryFinance/status/1446873636485287941

"With the right investment grade trained arabian hunting falcon or falcon derivative financial product, things don't have to fall apart. The center *can* hold. Falcons always hear the cry of the falconer, keeping a tight, stable gyre.

This is brilliant. Just keep reading. It gets better and better.


The junk-grade curry-dipped seagull is probably my favorite element.


Iranian data seems to indicate that natural immunity didn't do much to slow subsequent COVID-19 surges...

https://threadreaderapp.com/thread/1446154506216017921.html


Wouldn't a greenhouse be most efficient if its ceiling were only a little higher than the plants it was intended to have inside of it? A lower ceiling means a lower internal volume, which I presume would maximize the humidity inside the greenhouse since the water vapor wouldn't be dispersed throughout a larger interior volume. Also, a low ceiling keeps heated air around the plants. A lower ceiling height also means the greenhouse's construction requires less material, making it cheaper to create.

A typical eggplant is only 24 inches tall, so wouldn't the ideal eggplant greenhouse be 25 inches tall? Assume robot farmers of arbitrarily small size can enter the structure to do work that human gardeners would normally do.


Plants also need CO2; would there be enough in that case?


There are various implementations of this sort of thing, from sticking half of a 2-liter soda bottle over your seedlings to Dutch lights.

I personally like low tunnels because you can replace the plastic with mesh later in the year and it keeps birds/insects out without overheating the plants. My cat likes them because he reckons it's a tent for him to sleep in.

Apart from the fact that we don't have 25" tall farmers, one of the big advantages of a taller greenhouse is that it has more thermal mass and thus the temperature and humidity remain more consistent. You don't want the air around your plants to be too heated on a hot summer's day.

Commercial greenhouses are mostly roof, so the wall height only contributes a small fraction to the amount of glass needed.
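To put rough numbers on that "mostly roof" claim, here is a quick back-of-envelope sketch in Python; all the dimensions and the roof pitch are invented, and the structure is treated as a simple gable house:

```python
import math

# Back-of-envelope: how much does wall height contribute to the total
# glazing area of a commercial-scale gable greenhouse? Dimensions invented.
length, width = 30.0, 10.0        # footprint in metres
pitch = math.radians(25)          # roof slope

def glass_area(wall_height):
    walls = 2 * (length + width) * wall_height
    roof = length * width / math.cos(pitch)                      # sloped roof panels
    gables = 2 * (0.5 * width * (width / 2) * math.tan(pitch))   # triangular ends
    return walls + roof + gables

for h in (2.0, 4.0):
    area = glass_area(h)
    walls_share = 2 * (length + width) * h / area
    print(f"wall height {h:.0f} m: {area:.0f} m^2 glazing "
          f"({walls_share:.0%} of it in the walls)")
```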


Seems like a complex question. I would not be surprised to learn that the specific air circulation patterns matter, as well as the temperature gradient, and both those things would be affected by the headspace.


I build little movable greenhouses out of reclaimed window panes/box windows around plants during the winter, and I find that they seem to perform best with only a couple inches of clearance around the plant, but about 0.3 of the plant's total height of clearance above.

The effect is more observable in herbaceous plants than fruit trees.

No real idea why, except: SoCal, so it could be that even the winter sun is hot enough to cook them if there is not enough air in the enclosure.


Sprayers need to be a certain height above the beds to distribute the water evenly. I bet they could be engineered differently, but there might not be a cost benefit (just speculation).


Today in nominative determinism that somebody should really have seen coming:

> More than 500 cases of Covid-19 have been linked to the TRNSMT music festival

https://www.bbc.co.uk/news/uk-scotland-58847481


I guess this goes with the earlier gag that Scott's European tour details being kept on a 'spreadsheet' was somewhat ominous. :)


Have fun in Berlin and Paris!


Are birds dinosaurs?

Let me rephrase this question: Should birds be considered dinosaurs in the same way that, for example, primates are mammals?

I'm not asking whether birds descended from dinosaurs – the evidence is pretty clear on that. It's rather the claim that "dinosaurs didn't go extinct, because birds are dinosaurs" which seems wrong to me. After all, mammals descended from amphibians, and yet nobody claims that "mammals are amphibians".

I'm neither a biologist nor an archeologist (or whoever is responsible for such matters), and my terminology is probably wrong. Maybe the question doesn't even make sense from a scientific perspective, but I don't know how to formulate it properly.


As far as we can tell, a lot/most of the dinosaurs were feathered, so a Deinonychus would have looked like an ostrich/emu/etc. with more complete front claws and teeth, and a Tyrannosaurus rex would have looked like an oversized ostrich/emu with, again, more complete teeth. Dinosaurs were transitional forms between what we would now classify as reptiles and what we'd now classify as birds (in an anatomical sense) - the four-chambered heart was definitely there (as it's also present in crocodiles, which are the closest living relatives of the dinosaur clade), and a lot of them had become warm-blooded and/or bipedal (theropods in particular, the subset of dinosaurs including the aforementioned Deinonychus and Tyrannosaurus as well as, y'know, birds).

You can absolutely say that large groups of dinosaurs went extinct (e.g. sauropods), as well as the plesiosaurs and pterosaurs (technically not considered dinosaurs). But there's obviously one lineage that survived, and the closer to that lineage you look the more bird-like the dinosaurs look (even back in the Mesozoic) - if, say, Tyrannosaurus had lived to the modern day, it would probably have been considered a bird. Not sure if this answers the question, but it might give some context.


There are some classic LW/SSC posts about this kind of thing:

https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries

https://slatestarcodex.com/2014/11/21/the-categories-were-made-for-man-not-man-for-the-categories/

https://www.lesswrong.com/posts/895quRDaK6gR2rM82/diseased-thinking-dissolving-questions-about-disease

I'd summarize the answer as basically: think about what you really want to know, and ask that instead; the answer will usually be obvious - e.g. "Are birds terrifying megafauna that can stomp me to death? No. Are birds some of the closest living relatives of ancient dinosaurs? Yes." And then, while it may *feel* like there's still a question "are birds dinosaurs" to be resolved, that question no longer points to anything in the real world that hasn't been answered already, so there's no reason to ask it.


"Are birds terrifying megafauna that can stomp me to death?"

You may be interested in the Emu War:

https://en.wikipedia.org/wiki/Emu_War

"The machine-gunners' dreams of point blank fire into serried masses of Emus were soon dissipated. The Emu command had evidently ordered guerrilla tactics, and its unwieldy army soon split up into innumerable small units that made use of the military equipment uneconomic. A crestfallen field force therefore withdrew from the combat area after about a month."


I'd say it comes down to a question of whether birds are significantly more distinct physiologically from other branches of the dinosaur clade than those other branches were from one another, since that seems like the strongest principle on which to split a clade into a paraphyletic group and a crown group.

I don't have a confident answer to that question. Going through differences between birds and their closest living non-dinosaur relatives (crocodiles and alligators), several are known or suspected to be widely shared among many stem dinosaurs (endothermy, erect hind limbs, "bird" hips, bipedalism, body feathers, taloned feet, hollow long bones, "avian" lungs, four-chambered hearts), while others are either unique to true birds or are shared only with closely-related protobirds (hard toothless beaks, avian-style wings, deeply keeled breastbones). And some, like the ZW chromosome mechanism for sex determination, seem to be open research questions.

So is the former list more significant than the latter, and are the latter more significant than similar lists of unique traits among other dinosaur subclades?


Ugh, hit post too soon.

So is the former list more significant than the latter, and are the latter more significant than similar lists of unique traits among other dinosaur subclades? My gut feel is yes and no respectively, both of which argue in favor of birds being dinosaurs, but neither list is comprehensive and both were based on top-of-head recollections supplemented by light googling. I'm prepared to have other commenters who are better versed in the subject jump in and explain why I'm wrong.


Birds are dinosaurs in the same sense that humans are apes and apes are monkeys and tetrapods are bony fish and land plants are green algae and ants are wasps. I'd say that insisting that all terminology be monophyletic is not likely to be conducive to effective communication. But if you want to be precise, you can say "non-avian dinosaurs".


I knew that the first reply would bring up "non-avian dinosaurs"! ;-)

To me that sounds like "non-terrestrial fish" (to exclude reptiles, mammals, etc.) or "non-aquatic ungulates" (to exclude whales): biologically correct, but misleading outside of scientific contexts, because although reptiles descended from fish and whales descended from ungulates, they are sufficiently different that mammals aren't considered to be fish and whales aren't considered to be hooved mammals.


https://www.metafilter.com/192879/What-I-learned-about-my-writing-by-seeing-only-the-punctuation#8157764

There's a program which strips out everything but the punctuation -- it turns out that punctuation patterns are distinctive.
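If you want to try the same trick on your own writing, a few lines of Python reproduce the idea (this is a guess at the approach, not the linked program's actual code):

```python
import re

def punctuation_only(text: str) -> str:
    # Drop letters, digits, underscores, and whitespace; keep everything else.
    return re.sub(r"[\w\s]", "", text)

sample = "Wait -- really? Yes; really. (No, truly!)"
print(punctuation_only(sample))   # prints: --?;.(,!)
```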


SSC/ACX got a shoutout on today's Ben Shapiro show, episode 3150, 26 minutes in. Shapiro likes the _Revolt of the Public_ book review and spends several minutes quoting a big chunk of it. He especially likes the notion of the public wanting experts to press - or criticizing for failing to press - a secret "cause miracle" button.


It is amusing how, despite being a liberal, Scott is loved by conservatives. "Small Men on the Wrong Side of History", an average-size political book, cites him *seven times*.

My pet theory is that in an era of leftist political hegemony, the 'autistic', society-ignoring curiosity of rationalists gets you to right-wing conclusions, because it digs into all the things the (left) powers that be sweep under the rug. 70 years ago, the same people would have been liberals.


Scott is by no means a liberal on the American political spectrum. On the culture war, which is by far the most significant fault line in American politics today, Scott is clearly a conservative.


I don't think that's true. Scott has publicly defended eg the use of preferred pronouns for trans people, believes in climate change, supports COVID vaccines and lockdowns, etc. He differs from both mainline conservative and mainline progressive stances in significant ways, but he's much closer to mainline progressive.


I think the meaning of liberal and conservative is in flux enough that it's hard to know what it means.


That would not have been true 10 years ago.

Is it even worthwhile to tag worldviews by categories that shift over less than a generation, while the worldview itself stays in place?


Yes. Yes it is. A shifting categorization can convey just as much information as a static one.


The idea that being liked by people with different political opinions is somehow controversial is itself evidence of strong polarization of (American) society. What is the "normal"? People only able to appreciate good texts written by people with the same political opinion, regardless of the quality of the text? (Should I feel weird for liking some Chesterton quotes? Or perhaps, should Chesterton feel weird for being liked by me?)

"I have a dream that one day right here on the internet little conservative boys and little conservative girls will be able to join hands with little progressive boys and progressive girls as sisters and brothers. I have a dream that my four little children will one day live in a nation where they will not be judged by the color of their tribe but by the content of their character."


I've seen "centrist" hurled as an insult on nerd hobby discords.

At this point instead of communities filtering out crazy members, the members need to filter out crazy communities.


Have no fear, your dream will come true. Of course, the reason it'll come true is that we'll find another reason to hate each other. It might be one of the good oldies like race, nationality, or wealth, or it might be an entirely new dividing line, like whether you were genetically modified or not.


For your consideration, a link from twitter: https://gidmk.medium.com/is-ivermectin-for-covid-19-based-on-fraudulent-research-part-4-f30eeb30d2ff

Re. a hilariously badly faked data set.

In a further bit of humor, a user on said hellsite pointed out that the data repeats every 22 rows, and 22 rows is the number of rows visible on screen in a default install of Excel at 100% zoom.
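That sort of fixed-period copy-paste is trivial to detect programmatically. A sketch of the check in Python (pandas assumed; the filename and the data it holds are invented):

```python
import pandas as pd

# Load the trial's patient-level table (hypothetical filename).
df = pd.read_csv("trial_data.csv")

def repeats_with_period(df: pd.DataFrame, period: int) -> bool:
    """True if every row equals the row `period` rows above it."""
    head = df.iloc[:-period].reset_index(drop=True)
    tail = df.iloc[period:].reset_index(drop=True)
    return head.equals(tail)

for p in range(2, min(50, len(df) // 2)):
    if repeats_with_period(df, p):
        print(f"Rows repeat with period {p} -- worth a very close look.")
```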


Oh, geez.

Thanks for the link, Gideon's entire series on this is quite good.


It's a good article and pretty shocking. You'd think if someone wanted to make fraudulent data, they'd at least put a little bit of effort into making it look believable. Publishing a Cohen's d effect size of 3.2 in addition to the repeating data...
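For scale: under the usual equal-variance normal assumptions, a Cohen's d of 3.2 means the treatment and control distributions barely overlap at all, which essentially never happens with real clinical endpoints. A quick check:

```python
import math

# Overlap coefficient of two unit normals whose means differ by d
# standard deviations is 2 * Phi(-d / 2).
def normal_cdf(x: float) -> float:
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

d = 3.2
print(f"overlap at d = {d}: {2 * normal_cdf(-d / 2):.1%}")  # ~11%
```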


Why bother, when faking badly works just as well? So few people actually check the data or care.


I was tempted to answer "pride in craftsmanship", but then it occurred to me that in a way the incentives actually cut the other direction: if you fake something too well, then you risk nobody ever appreciating your efforts.

Kinda like the line in Christopher Clark's "Sleepwalkers" about the laughably bad secrecy of the Black Hand organization, e.g. holding "secret" meetings in public coffee houses, which Clark attributed to a sentiment of "What was the point of belonging to a secret society if nobody knew you did?"


If people think you produced good, honest data, isn't that a higher honor than if people know it's a lazy fake?


The thing is, lots of people probably do put effort into making fraudulent data look believable. We only catch the incompetent ones.


Reading Kyle Harper's "The Fate of Rome," I was struck by a reference in a table entitled "All Known Epidemics, 50 BC - AD 165" (on page 89, for those of you who may have the book). The description of an epidemic in 90 AD -- the source being Cassius Dio -- reads:

"People died from being smeared with needles, not only in Rome, but virtually the whole world (This obscure notice has defied understanding, and Dio does not actually claim there was an epidemic.)"

Naturally, this raised the question, "What? What the - what? What?" At least, those were my thoughts.

So I looked up the mention in Dio LXXIII:14:3, describing events during the reign of Commodus. (Heh.)

Cassius Dio LXXIII:14:3

https://penelope.uchicago.edu/Thayer/e/roman/texts/cassius_dio/73*.html

"Moreover, a pestilence occurred, the greatest of any of which I have knowledge; for two thousand persons often died in Rome in a single day. 4 Then, too, many others, not alone in the City, but throughout almost the entire empire, perished at the hands of criminals who smeared some deadly drugs on tiny needles and for pay infected people with the poison by means of these instruments."

(Note that 3 is referencing a disease of some sort; the needle thing comes in at 4.)

That's when I discovered that there had been another such run years before in the reign of Domitian!

Cassius Dio 67:11:6

https://penelope.uchicago.edu/Thayer/e/roman/texts/cassius_dio/67*.html

"During this period some persons made a business of smearing needles with poison and then pricking with them whomsoever they would. Many persons who were thus attacked died without even knowing the cause, but many of the murderers were informed against and punished. And this sort of thing happened not only in Rome but over practically the whole world."

So....what in the world was going on here? Some sort of mass hysteria? Ninja porcupines? Strangely aggressive pine trees? Johan Larson's aliens with blowguns?


If I take it as a straight account, it's very strange. Who would be paying for those killings?

Alternatively, there was no payment, just mass killings as sometimes happens in the modern world. Or maybe it was assassination for inheritances.

And there might be some hyperbole about how common it was.

My tentative theory is that it's the result of erroneous copying of a text which said something else, but I have no idea what, and I realize this is not a good approach for handling weird things in ancient texts.

The modern world will swamp future historians with such a huge amount of material (much of it lies or fiction) that it will be just as hard to figure out what's going on.


In Greek:

Πολλοὶ δὲ καὶ ἄλλως οὐκ ἐν τῷ ἄστει μόνον ἀλλὰ καὶ ἐν ὅλῃ ὡς εἰπεῖν τῇ ἀρχῇ ὑπ´ ἀνδρῶν κακούργων ἀπέθανον· βελόνας γὰρ μικρὰς δηλητηρίοις τισὶ φαρμάκοις ἐγχρίοντες ἐνίεσαν δι´ αὐτῶν ἐς ἑτέρους ἐπὶ μισθῷ τὸ δεινόν· ὅπερ που καὶ ἐπὶ τοῦ Δομιτιανοῦ ἐγεγόνει.

The specific words of interest are βελόνα and φαρμάκοις. βελόνα does mean needle. But it's also the word for syringe. φαρμάκοις means drug, potion, medicine, poison, etc. You'd probably recognize the word phonetically: pharmakon. I'd take this as saying that assassins were injecting people with the disease for pay. Though it could also be an unrelated poison.


I'd put my money on mass hysteria - something like the London Monster: https://en.wikipedia.org/wiki/London_Monster


Was there a known poison in ancient times that could kill with only a needle-prick? I'm a bit rusty on my poisons, but if you don't have a poison dart frog handy I was under the impression that most natural poisons are kind of hit and miss in the effectiveness department. What did the Romans have at their disposal poison wise? Arsenic wouldn't work, or deadly nightshade: you need more of a dose than a needle prick for those to be deadly, or even sicken.


Are the symptoms recorded? Faeces would probably suffice to make people dangerously ill if you get it into the blood, though I don't know how reliably a needle prick gets a coating into the bloodstream.


Yeah, I doubt it was actually a poison. Maybe there were one or a few such cases, but I'd be surprised if most of the accusations weren't due to mass hysteria.


Ricin was known. 100% ricin would definitely be dangerous via unnoticed needle-stick (notoriously, it was used by the KGB to murder Georgi Markov with an umbrella gun), but I'm not sure they'd have been able to purify it. 5% ricin would need on the order of 25 milligrams to reach LD50 (i.e., about 1.25 mg of actual toxin), which is quite a bit for a needle without a syringe behind it.


Cyanide, maybe? I've found mixed answers as to whether or not Roman poisoners made use of cyanide.


Lethal dose for cyanide appears to be ~100 mg, which is a bit much for a needle stick. Unless "needle" means "syringe", but I don't think that's likely.


Classic blaming of natural disasters on a part of the population. See also: the Black Death, Rome's fire, and so on.


That -- and/or the already-mentioned mass hysteria -- was my working theory. I do wonder, though, if there was something going on to give the people of the time any good reason for thinking of it in the first place. Such as maybe an asbestos mine emitting fibers or some such.


It reminds me of another story about Roman "poisoning". Some rich people had a fancy party at someone's house, and the weather was nice so they stayed out in the garden after dark. Then lots of people who had been at the party got sick, and they thought someone had poisoned the food. Their doctors pointed out that their symptoms were just like malaria, that being outdoors after dark is a risk factor for malaria, and that the city was in the middle of a malaria epidemic.


Is it bad that twitter is a strawman factory?

The first paper I had to write in college had a sentence like "Some people argue [POSITION I NEED TO ATTACK]". The paper came back with that underlined, and a comment: "who says this?" Of course, no one said this. And if they did say it, it would probably have enough nuance that their position would require a more complex counterargument. I knew about strawmanning before I wrote that, but now I had a very nice concrete way to check if I was doing it — if I said something like "people think X", an alarm would go off and I'd go do research on what people actually think, and make sure I had an accurate representation of their argument.

But now, if I was writing that, I could FOR SURE find a tweet that says whatever I was strawmanning. I could link the tweet, say look at this, here's why it's wrong. And it would be so tersely phrased that it wouldn't contain any of the couching or caveats that good arguments always have. And it would be phrased in the most controversial (incorrectest) form, because that's what twitter likes.

Maybe there are two notions of strawman:

1. An argument you construct that no one actually holds

2. An argument you construct that no INFORMED person actually holds

I guess maybe it's ok to counter 2? Because if people are tweeting 2, and people are reading 2, I guess it's worth explaining why it's wrong. But like, no one reading your blog will think 2, because they are INFORMED, right?

Anyways, I wonder if banning twitter arguments is an epistemic best practice.


#2 seems to be a defense of nutpicking, and nutpicking is bad. Yes, someone on twitter somewhere said something really stupid and extreme and wholly devoid of nuance. Probably two someones said approximately the same thing. Therefore it is technically true that "some people argue..."

But technically correct is not in fact the best kind of correct, and if "some people" are a tiny irrelevant handful of nuts, then you should say "a tiny irrelevant handful of nuts argue...", and then ask yourself why you are getting involved.

If you can't find anything but tweets, it's probably a handful of nuts.


"Weakman" is the term for a dumb and easily defeated position that someone in your outgroup genuinely and actually holds.

Also, a lot of news media seems to work off the exact "find someone mad about it on Twitter" model.


I've been trying to promote "tinman" as an improved term for "weakman", as the latter term sounds like it could be referring to physical or moral weakness rather than being cherry-picked for making a weakly persuasive argument for a disfavored position.

"Tin" on the other hand connotes something stronger than straw, and superficially appearing more solid still, but nevertheless still weak and artificial (c.f. "tin-plated dictator" or "little tin god"). It's also a natural companion to "strawman", per the Scarecrow and the Tin Woodman from The Wizard of Oz. Which has additional apropos implications, as the Scarecrow was an artificial construct from the beginning, while the Tin Man had started out as a flesh-and-blood human.


That's one of my least favorite trends in "journalism" today, where instead of doing any kind of actual research or interviews, a reporter will just browse Twitter until they find some opinion they can write negatively about.


Yeah, sometimes some "reporter" finds two tweets by people I've never heard of, that might even be troll accounts - and then proceeds to write an article that essentially treats the absurd position stated in the tweets as if it was the rock-solid consensus of the entirety of [the reporter's political outgroup]. But here's the real important question here: When I deplore the fact that people in my own political outgroup tend to go around approvingly citing such articles, am I myself committing a similar kind of quasi-strawman argument?


I came across this paper from Roger Pielke Jr. and thought I'd share. Over the years I've come to respect Pielke's fortitude in being a gadfly to climate modelers. He's been called a denialist, but he's not denying AGW; rather, he's pointing out that models aren't doing a very good job of prediction when compared to the observed data.

https://issues.org/climate-change-scenarios-lost-touch-reality-pielke-ritchie/#.YVxxJDFpEeo.twitter

Please note, I'm not just ranting about climate models here. I'm ranting about models in general! ;-)

I've often heard the platitude about models that they don’t necessarily try to predict what will happen, but they can help us understand possible futures. But it seems to me that a model should logically be seen as a hypothesis. If it can't make an accurate prediction, something is wrong with assumptions being made in the model. Saying a model will "help us understand our possible futures" is a cop out in my view. Assuming that the laws of probability rule the workings of many events, what good does it do us if a model or the models keep predicting the improbable?

Over the past pandemic year-and-a-half I've been tracking the COVID-19 epidemiological models posted on the CDC website. None of the epidemiological models for COVID-19 have been able to predict case-loads beyond two weeks out—and only a minority of the models can do it moderately well within that two-week time horizon! So the best epidemiological models are working at about the level of weather forecasting, which can give us an OK picture of where the case-loads will be in a week to ten days. But the rest might as well be based on astrology. Worse yet, these models don't seem to be improving as we learn more about SARS-CoV-2. And no one seems to be discussing the elephant in the room: what good is knowing the range of all possible future worlds when most are absurdly unlikely?

On the other hand, weather forecasting models have become much more accurate over the past couple of decades. I suspect it's because their results get a lot more attention from a wider audience, and the predictions have such a short time horizon that the modelers can see whether they're off track within a few days.

On the longer term, from the data I've seen, *most* of the long-term climate models have been overpredicting the actual warming we've observed. I've been tracking the global warming narrative/arguments/predictions* since 1985. I was taking a course in Glacial Quaternary Geology to familiarize myself with the climate history surrounding the emergence of genus Homo. The geologist teaching our course introduced us to the theory of climate modeling—and he did so when the tide of climate modeling opinion had flipped from the prediction that we'd soon be entering an ice age to one where we'd soon be entering a hothouse. Personally, I was alarmed enough by some of the predictions (>3°C by 2025) that I continued to follow the research in Science and Nature — and the advent of the World Wide Web gave me access to a wider range of papers. It became clear to me around 2005 that the initial model predictions were wrong. By that time newer models had churned out somewhat more conservative predictions, but I had become less enamored of the usefulness of these models — and of any sort of predictive modeling in general.

I see a model as being a hypothesis that is trying to describe a system in a way that's accurate enough to predict the outputs. If the model outputs (predictions) don't match the observed results, then there's something wrong with the system described in the model. I complained to a Physicist friend that if you run any of the climate models backwards, they can't predict the past states of climate history. He pointed out that all scientific models fail on that count! It took a while for the implications of that idea to sink in!

* it’s difficult to find an agnostic term to describe the debate that isn’t pro or con to the theory


It's totally unsurprising that we can't model Covid-19 more than 2 weeks in advance. Covid-19 cases are part of a dynamical system subject to a not-particularly-good controls system whose goal is to keep the entire system at a point of instability.

Consider an inverted pendulum on a sliding table. There's a controls system that constantly measures the position of the bob of the pendulum and slides the table so that the hinge will be directly underneath when the pendulum loses all its velocity to gravitational potential energy. But sometimes the control system's sensors are wrong about where the pendulum bob is, and these inaccuracies manifest as the pendulum swinging wildly before the controls system saves it (or not). Watching this in action https://www.youtube.com/watch?v=hQK_3C6S4Ak is much better than me describing it.

The mechanics of this system are known perfectly. Without the control system, any deviation away from the unstable fixed point increases (at least initially) exponentially. However, if you're modeling both, you have to model the real system, but also model the control system's model of the real system. And anything you get wrong about either will grow exponentially until it overwhelms whatever you got right. Covid-19 is like that. It's insufficient to model the covid; you also have to model the fear: all 7,000,000,000 instances of it.
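A toy version of that exponential error growth, using the linearized inverted pendulum (x'' = (g/L)·x) and two initial states that differ by one micron; all the numbers here are invented for illustration:

```python
# Linearized inverted pendulum: x'' = (g/L) * x. Near the unstable fixed
# point, any error in the initial state grows roughly like exp(sqrt(g/L)*t).
G_OVER_L = 9.81   # g / L for a 1 m pendulum, in 1/s^2
DT = 0.001        # Euler time step in seconds

def simulate(x0: float, t_end: float) -> float:
    x, v = x0, 0.0
    for _ in range(int(t_end / DT)):
        v += G_OVER_L * x * DT
        x += v * DT
    return x

for t in (1.0, 2.0, 3.0):
    err = abs(simulate(0.010 + 1e-6, t) - simulate(0.010, t))
    print(f"t = {t:.0f} s: a 1e-6 m initial error has grown to {err:.1e} m")
```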

The caution I would have for considering climate models is that if the variable you care about is subject to a controls system, then models might not be able to predict it at all. For a concrete example of this look at the collapse of the Grand Banks fish stocks: https://en.wikipedia.org/wiki/Collapse_of_the_Atlantic_northwest_cod_fishery . The ecosystem looked fine until it totally collapsed. It's not likely that the whole climate is like this, but we should at least consider the possibility that some parts of it that we like are.


I understand most of this except for the part where you say "Covid 19 cases are part of a dynamical system subject to a not-particularly-good controls system whose goal is to keep the entire system at a point of instability." What is this point of instability, & why do we want it?

(I'm also wondering whether you meant to say "if the variable you care about is /not/ subject to a controls system.")

Re. "It's not likely that the whole climate is like this, but we should at least consider the possibility that some parts of it that we like are", that's supported by the IPCC 2015 report, which concluded, "There is little evidence in global climate models of a tipping point or critical threshold in the transition from a perennially ice-covered to a seasonally ice-free Arctic Ocean, beyond which further sea-ice loss is unstoppable and irreversible" (p. 74). It mentions "medium confidence" that there is a Boreal [Arctic tundra (frozen deserts) and boreal forests (shrub pines)] tipping point, but says there are "few adaptation options in the Arctic" (no obvious actions to take); and "low confidence" that there is an Amazon tipping point where "Moist Amazon forests could change abruptly to less-carbon-dense, drought-and-fire-adapted ecosystems". (p. 70)

Marine ecosystems don't get much consideration in the report; the oceans are mostly considered only in terms of how they redistribute heat and CO2. Some mention of coral reefs, acidification, & shellfish; but no mention of tipping points. The oceans haven't been divided up into discrete types of ecosystems to the extent that the land has, and so I suppose aren't as easily analyzed.


Re: Covid-19. Consider the simplest possible model: there are only Susceptible and Infected individuals. This model is quite close to reality for a novel disease at the very beginning, when almost everyone is still Susceptible and hardly anyone has Recovered. In this model, the only variable is the number (or ratio) of Infected. This differs from the pendulum model in that there's no velocity; the pendulum has 2 variables (position and velocity) where this just has 1. This model has 2 stable points: if R_0 is less than 1, the disease dies out; if it's greater than 1, it exponentially increases until everyone is infected. What cannot happen is that the number of infected just kinda stays the same. This is what we saw generally during covid: R_0 would be greater than 1, cases would go up, people would get scared and R_0 would go down, cases would go down, and then people would relax. The aggregate effect of all this stressing and relaxation has the appearance of "wanting" to balance new cases with recoveries. My point in all this is that there is good reason to believe that modeling the dynamics of covid is more akin to modeling the dynamics of the stock market than it is to modeling hurricanes. Because the dynamics of covid are dependent on the covid models of the people who might be infected.
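A bare-bones sketch of that stress-and-relax loop (every parameter below is invented): an SI-style model where the effective reproduction number falls as current infections rise, because people get scared, and rebounds as they relax. Case counts end up hovering near the unstable R = 1 point instead of settling into either of the two stable outcomes:

```python
# SI-style model with behavioral feedback: transmission drops as current
# infections rise (fear), and rebounds as they fall (relaxation).
POP = 1_000_000
R0_BASE = 2.5      # reproduction number when nobody is scared
GAMMA = 0.1        # recovery rate: infections last ~10 days
FEAR = 50_000      # infection level that roughly halves contact rates

infected, recovered = 100.0, 0.0
for day in range(301):
    susceptible = POP - infected - recovered
    r_eff = R0_BASE * (susceptible / POP) / (1 + infected / FEAR)
    new_infections = GAMMA * r_eff * infected
    recovered += GAMMA * infected
    infected += new_infections - GAMMA * infected
    if day % 50 == 0:
        print(f"day {day:3d}: infected ~ {infected:9,.0f}, R_eff ~ {r_eff:.2f}")
```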

For the climate stuff: I can't point to anything which might be a tipping point. I'm just generally worried that they might be in there somewhere and we won't know about them until after we fall off the ledge---despite our best efforts to identify them in advance. Although, if pressed, I would point to a potential shutdown of the Gulf Stream. Would it be a disaster for the whole world? No. Would it significantly harm Britain in particular and Western Europe in general? Maybe. I don't think that there has to be a tipping point for the whole climate for us to get blindsided by one. The thing that keeps me up at night is that we're really interested in a measure of total ecosystem health, but if a few of the numbers that we consider important are, like the abundance of fish in the North Atlantic, subject to a controls system, the ecosystem might look healthy right up to the point it totally collapses.


Ah, so when you mentioned a "controls system whose goal is to keep the entire system at a point of instability", you meant a sort of unconscious collective goal, not the goal of the government. And when you say "subject to a controls system", you don't mean a control system deliberately imposed by humans, but an emergent one.

The sudden abundance of wildfires on the west coast could be just such a "collapse"--an accumulation of dry wood, say, over a period of decades, which passed a critical point just now. But that one is self-correcting. Most tipping points in ecosystems are self-correcting, I suppose because they evolved, and ecosystems that weren't self-correcting crashed and died.

Global temperature is not really an ecosystem nor a living thing, so might not be self-correcting. Yet the climate record shows that it is; global temperature oscillates. Insert anthropomorphic argument here if you like.

The great disasters for life aren't the global heating periods, but the global ice ages, which are an existential risk or close to it, which global warming is not. We don't yet know very well what causes ice ages, and I'd like to see more money put into studying that right now.


Whenever you see someone making a prediction based on a model, ask how the model was validated. If it was, see if you're still in the approximate domain where it was validated. If it wasn't, or if that information is hidden where you can't find it, ignore the prediction, the model, and whoever put forth the model as something you should take seriously.

Every model is a gross oversimplification of reality, based on educated guesses as to what parts of the problem can be ignored or modeled at extremely coarse resolution. For any remotely complex problem, and both climate and COVID are quite complex, nobody will get all those guesses right the first time, just by thinking real hard with their mighty science brains. You've got to test your guesses against experimental or observational data.

And not in hindsight; that usually just means overfitting to produce a "model" that accurately predicts the past because all it is is an obfuscated curve fit to the past. It is sometimes possible to use blinded historical data to validate a model, but that's rarely done well.

And, yeah, most climate modeling and almost all epidemiological modeling is unvalidated crap.
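A toy illustration of the "obfuscated curve fit to the past" failure mode, with synthetic data (noise around a linear trend): the over-flexible model fits history beautifully and then extrapolates into nonsense.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20.0)
y = 0.5 * t + rng.normal(0.0, 1.0, t.size)   # noisy linear trend

past_t, past_y = t[:15], y[:15]              # "history" used for fitting
future_t, future_y = t[15:], y[15:]          # held-out "future"

for degree in (1, 9):
    coeffs = np.polyfit(past_t, past_y, degree)
    hindcast_err = np.abs(np.polyval(coeffs, past_t) - past_y).mean()
    forecast_err = np.abs(np.polyval(coeffs, future_t) - future_y).mean()
    print(f"degree {degree}: fits the past to {hindcast_err:.2f}, "
          f"misses the future by {forecast_err:.2f}")
```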


> On the other hand, weather forecasting models have become much more accurate over the past couple of decades. I suspect it’s because their results get a lot more attention from a wider audience, and the predictions are such a short time horizon, that the modelers can see whether they’re off track within a few days.

Note that there is massive and immediate feedback. For example, predicting the paths of hurricanes, or predicting storms and floods, has immediate effects and can save lives (not only) on a timescale of hours and days.

So there is a massive benefit to good models, there is zero incentive to be biased toward some result, and no one will complain about the results for ideological or political reasons[1].

[1] exceptions apply - but compare to climate models where there is bias of various kinds, extreme politics and so on.


I've built and studied complex models in physical sciences for decades, and your skepticism is, if anything, far more moderate than deserved. Models of physical systems that are any more complicated than an apple falling from a tree should *always* be treated with deep skepticism. (Parenthetically, I think part of the problem is that people confuse their properties with models of *digital* systems, such as models of digital circuits, or computer programs, which don't suffer from some of the crucial limitations of models of what we might call the "analog" systems that constitute the real physical world.)

In general, most complex systems are chaotic, which is both a good and a bad thing. By "chaotic" I mean in the sense that the future values of dynamic variables are exquisitely sensitive to their initial values and to the quality of the evaluation of their propagation. So on the *bad* front, it means almost any practical evaluation of the dynamics will cause your specific dynamic variables to diverge exponentially from what they would be in the real world, so fast that if you are relying on their particular values you will get garbage almost immediately. On the "good" front, it means that if (and this is the big if!) the important result is driven by thermodynamic (or thermodynamic-like) properties of the system, then your model will rapidly evolve to the correct end result even if you pick stupid initial conditions, or your simulation of the dynamics is imperfect (or you're using double precision math in your program instead of infinite precision ha ha).
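The textbook demonstration of that sensitivity is the logistic map; here, two trajectories that agree to twelve decimal places have completely parted company within about fifty steps:

```python
# Logistic map at r = 4, the standard example of chaotic dynamics.
def logistic(x: float, steps: int, r: float = 4.0) -> float:
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

for n in (10, 30, 50):
    a = logistic(0.2, n)
    b = logistic(0.2 + 1e-12, n)   # same start, perturbed in the 12th decimal
    print(f"after {n:2d} steps: {a:.6f} vs {b:.6f}  (diff {abs(a - b):.1e})")
```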

So if you are modeling something where you have good reason to suspect some kind of thermodynamic or thermodynamic-like governing principle controls the outcome, then models are robust and valuable. They are a way of discovering what the governing principle dictates, in situations that are otherwise far too complex to discover by direct evaluation. Roughly speaking dynamic models are methods of approximately solving horribly large sets of coupled differential equations when the solution is known to be stable and constrained by some global condition (that's your thermodynamic condition).

But if you are *not* -- if you are attempting to solve for the actual dynamics in a complex system where you have no idea whether there is some robust global constraint, or even worse it is clear there is not -- then you are going to go wrong so fast that usually an informed guess is better.

That doesn't mean such modeling is worthless, though, even when its specific predictions are. You can use models to explore how sensitive your results are to your initial conditions, to various parameters, to various approximations you make to the governing dynamics, and from the point of view of understanding complex dynamics these are all exceedingly valuable things. Indeed, much of our understanding of how chaotic systems behave has come from modeling studies which we know don't give the correct results -- but how they go wrong, and how fast, is deeply informative.

With respect to climate modeling, there are certainly governing thermodynamic principles up to a point, because the atmosphere is a thermodynamic system of course, but part of the problem is that you're trying to model a rare deviation from equilibrium and the thermodynamic constraints are less helpful because they aren't as strong. It feels to me a lot like trying to model phase transitions in condensed systems (e.g. freezing of liquids), which is well-known to be very, very difficult, in part because the thermodynamic constraints are by definition weaker at this point of instability. But I'm not in this business, so I may have the wrong impression.

With respect to epidemiology, I can't imagine what the thermodynamic constraints are, aside from there are a finite number of people who can get sick. I would expect that any modeling of the course of an epidemic is therefore essentially worthless past the point where you could do pretty much as well by a simple extrapolation from the present. It's an interesting thing to study, and may have some practical use in terms of discovering things like how sensitive are your results to input data like R0 or the distribution of population density, et cetera, but I would never take *the actual predictions* as actionable.


Carl, you wrote:

> Roughly speaking dynamic models are methods of approximately solving horribly large sets of coupled differential equations when the solution is known to be stable and constrained by some global condition (that's your thermodynamic condition).

Can you give me an example of models where thermodynamic or thermodynamic-like governing principle controls the outcome? Not that I'm doubting what you say, I just need an example so I can pin it on to my brain as a type of modeling situation that works.

And that trouble-maker <#snarkasm>, Judith Curry, has just posted some observations about the IPCC AR6. It looks like the IPCC is starting to hedge its modeling bets a bit...

https://judithcurry.com/2021/10/06/ipcc-ar6-breaking-the-hegemony-of-global-climate-models/#more-27876


Sure. Let's say you're trying to model how a mutant S protein from a SARS-CoV-2 virus binds to a human ACE2 receptor. You do this with some pretty physical model, meaning you're representing at least individual domains of the two proteins, if not individual atoms, and you've got some model of the forces that is derived from the actual physical forces -- ionic attractions, van der Waals attractions, some model for hydrogen-bonding and/or the hydrophobic effect (or else you're expensively including discrete water molecules). Your purpose, let us say, is to find out whether the binding of the mutant is stronger or weaker than the wild type.

The governing constraint is that you know[1] the underlying system is going to minimize the free energy. But thermodynamics only technically applies in the limit of infinite numbers of degrees of freedom, and how much of the detail of the degrees of freedom and their nature matters in that limit? Some details, without doubt: we know infinity water molecules behave quite differently than infinity carbon atoms in a diamond lattice. But it's a reasonable hope (and borne out repeatedly empirically) that almost all of the fine details of the interaction and dynamics are washed out and don't matter, and you get the same result whether your model of the carbon atom is a hard sphere or a somewhat squishy one, whether you model your ionic interactions perfectly or fudge it with a dielectric constant or a cut-off distance, and whether you solve Newton's equations perfectly or only with 32-bit precision. So there's a lot of detail you can screw up and your model will still reach an accurate (at the level of interest) representation of the real end point, because the system (both in reality and in your model) is being driven to a particular small class of end points by the thermodynamic constraint[2].

This is not, by the way, some kind of obvious conclusion. When people first started doing computer simulation of complex physical systems in the 1950s (piggybacking off the hardware and algorithms nuclear physicists developed for simulating fusion bombs), this was an unproved hypothesis, and there were plenty of people who were skeptical about it. It took some time before it was generally accepted that when you're looking for something driven by thermodynamics, you can indeed ignore the fact that your simulated dynamics is going to deviate from the real dynamics very quickly. It doesn't matter, because it's all driven to the same thermodynamic endpoint, both in reality and in your not-too-far-from-reality model.

--------------

[1] You actually don't *know* that, you assume that, but the empirically-proved applicability of thermodynamics to even shockingly small systems has been a staple of the biz since Count Rumford. I wouldn't say we really know *why* it's true, but it is.

[2] Experience tells us you *do* need to make sure you handle certain things pretty well, e.g. you had better make sure energy is conserved. It doesn't have to be collision-by-collision, but it better be overall.

Expand full comment

The models I've seen have two major things missing: they don't model transportation or population density. I don't know why; it isn't very hard. You can use general models about the relationship between city size, density, and travel destination frequency to construct a model of both, without needing specific data about real cities. This is a crucial consideration for covid, which seems to spread mainly in urban areas.
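
One standard family of such general models is the gravity model of travel: trip volume between two places scales with the product of their populations and falls off with distance. A minimal sketch, with all city sizes and positions made up for illustration:

```python
# Toy gravity model of inter-city travel: trips between cities i and j are
# proportional to (P_i * P_j) / d_ij^2. All populations and distances below
# are made up for illustration.
import itertools

cities = {          # name: (population, (x, y) position in arbitrary units)
    "Bigtown":   (5_000_000, (0, 0)),
    "Midville":  (1_000_000, (3, 4)),
    "Smallburg": (  100_000, (10, 0)),
}

def distance(a, b):
    (x1, y1), (x2, y2) = a, b
    return ((x1 - x2)**2 + (y1 - y2)**2) ** 0.5

for (n1, (p1, pos1)), (n2, (p2, pos2)) in itertools.combinations(cities.items(), 2):
    flow = p1 * p2 / distance(pos1, pos2)**2
    print(f"{n1} <-> {n2}: relative trip volume {flow:.3g}")
# Large, close pairs dominate the flows -- which is one reason epidemic
# spread concentrates in and between big urban areas.
```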

Covid models have a tough job in any case, because the data collected hasn't been chosen to make models useful. For instance, last I checked, we don't count deaths caused by covid; we count "covid-related deaths", which in practice means people who died and tested positive for covid (in either order). This is like estimating how many people die of smoking from the number of people who die while they're smokers. Also, hospitals and nursing homes are financially incentivized to declare deaths as covid-related.

We count covid cases, and eventually we started counting number of people tested; but AFAIK we've never recorded the reasons these people were tested, and in particular whether or not they were symptomatic. This would be necessary to use the "number tested" data together with the "number of cases detected" data to estimate the frequency of covid in the population, because the availability of covid tests has varied greatly from time to time and place to place, meaning you can't treat the people tested as a random sample of the population.

I don't know if this even matters anymore. I think both covid and climate change have become too politicized for data or models to be trusted. They have the epistemological status of medieval proofs for the existence of God.

Expand full comment

And even that doesn't really matter, since nobody reads the reports. That 2014 IPCC report that was widely reported as saying we're all doomed by 2030 actually said that the economic impact of global warming would be 0.2 to 2.0% of worldwide GNP by 2100 AD--an amount which would be completely eclipsed by a single year of even mediocre economic growth. The "irreversibly doomed by 2030" claim, as best as I can tell, comes from an estimate that, under the very worst-case scenario (in which we inexplicably decide to greatly increase our CO2 emissions right now), using data and parameters from the worst-case scenarios of earlier studies, then there will be enough CO2 in the air by 2033 that, even if all CO2 emissions stop then, the Greenland ice sheet will still eventually melt, raising sea levels by up to 7 feet (worst case) by 4000 AD.
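
A back-of-the-envelope check of that comparison (the 2% annual growth rate is assumed purely for illustration):

```python
# Back-of-the-envelope: a 2% GDP loss in 2100 vs. ordinary compound growth.
# The 2% annual growth rate is an assumption for illustration only.
years = 2100 - 2021
growth_multiple = 1.02 ** years          # ~4.8x GDP by 2100 at 2%/year
with_damage = growth_multiple * 0.98     # apply the worst-case 2% loss
print(f"GDP multiple by 2100: {growth_multiple:.2f}x without damage, "
      f"{with_damage:.2f}x with the worst-case loss")
# The damaged path reaches the undamaged 2100 level about one year later:
# the worst case costs roughly one year of mediocre growth.
```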

Expand full comment

For a while there, the scare headlines were that Greenland was losing the equivalent of Lake Erie's volume of water every year from its ice sheet. Just out of curiosity I looked up the volume of Lake Erie, and the estimated volume of the Greenland ice sheet, to figure out how many Lake Eries it held. Granted, the volume of ice and liquid water are not quite equal, and then I have to worry about the volume of compressed ice, but a quick estimate showed that at the current rate of melting it would take around another 5,000 years. The entire Laurentide ice sheet was larger than the Greenland ice sheet, but it took close to 12K years to melt away. Although the melting started when temperatures were somewhat cooler than today, there was a long period between 10Kya and 8Kya when global temps were between 1.5° and 2.5°C warmer than today. That was the period when the last of the Laurentide Ice Sheet disappeared. Anyway, I don't think rising sea levels will be a major concern for several generations at least. We have tide gauge data from the past 150 years from geologically stable coastlines (where there's no geological subsidence or uplift going on) that shows no increase in the rate of sea level rise in recent decades.
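
For what it's worth, the arithmetic does come out in that ballpark. Using commonly cited approximate volumes (Lake Erie roughly 480 km³ of water, the Greenland ice sheet roughly 2.9 million km³ of ice):

```python
# Rough check of the "how many Lake Eries in the Greenland ice sheet" estimate.
# Volumes are commonly cited figures; treat them as approximate.
lake_erie_km3 = 480          # liquid water volume of Lake Erie
greenland_km3 = 2_900_000    # ice volume of the Greenland ice sheet
ice_to_water  = 0.92         # ice is less dense than liquid water

water_equivalent = greenland_km3 * ice_to_water
lake_eries = water_equivalent / lake_erie_km3
print(f"Greenland holds roughly {lake_eries:,.0f} Lake Eries of water")
# ~5,600 -- so at one Lake Erie per year, on the order of 5,000+ years to melt.
```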

Data from Wismar, Germany, going back to 1848 show a very steady sea-level rise of 1.41 mm/year (+/- 0.1 mm). There has been no acceleration in this rate in ~170 years. As an ironic aside, the Heartland Institute took the raw NOAA data and re-graphed it, and they show the Wismar sea level rising at a slightly higher rate than the NOAA graph (1.43 vs 1.41 mm/year). But both graphs show no acceleration in the trend.

Likewise, San Francisco shows a steady 1.97 mm/year (+/- 0.18 mm) with no acceleration in the trend. (OTOH, cities like Miami in Florida, where aquifer depletion has caused land subsidence, show a rapid sea-level rise over the past 30 years.)

Wismar Germany, average sea levels since 1848:

https://tidesandcurrents.noaa.gov/sltrends/sltrends_global_station.shtml?stnid=120-022

San Francisco average sea-levels since 1850:

https://tidesandcurrents.noaa.gov/sltrends/sltrends_station.shtml?id=9414290

Expand full comment

Even if you ignore the future impacts of climate change entirely, it has already caused immense damage *today*. For example, record-setting wildfires are now an annual occurrence.

Also, observing that there is uncertainty in modeling is not an excuse for inaction - it actually makes the tail risks **more** dangerous, not less.

Expand full comment

I'm not sure you can lay the California wildfires at the feet of climate change. These fires didn't start spontaneously.

1. PG&E was responsible for a large percentage of them by deploying their restarter technology (which was never approved for deployment in fire-prone areas). This was done to save money on sending out technicians to manually reset failures in the lines. Likewise, PG&E decided to save even more money by contracting out tree trimming around the lines—which it turns out often didn't get done by the contractors.

2. The second most common cause of the fires was arson. There's nothing like a forest fire in the news to get an arsonist motivated to set their own.

3. The third cause of fires was lightning strikes.

Moreover, the forests had been unnaturally shielded from fires for over a century by state and federal forest services. So there was a huge biomass that was ready to burn, burn, burn.

In California, the Native Americans set off regular "controlled" burns. Indeed, if we look at old tree ring data, it seems like they were setting off burns quite frequently—so the flammable biomass never accumulated to the levels it has today. Also, there are accounts from the early white settlers in the California interior that tell of forest fires that would burn for months until the rains came.

Yes, the drought no doubt made things extra flammable, but in an area that has had more severe droughts in the past, it's difficult to blame all this on climate change. Rather, I blame it on human stupidity.

Expand full comment
founding

The idea that a large wilderness area will or can be expected to go many years with zero ignition events is not plausible. The only cause that matters is the accumulation of a high areal density of dry fuel. Trying to figure out exactly where the spark came from and saying "that's what caused it!", might be the basis for a nice lawsuit but is otherwise pointless.

Expand full comment

Four of the last five years have burned >1.5 mil acres, something that almost never happened historically. Nearly all of the biggest wildfires in CA history have happened in the last two years.

What do you think changed five years ago? Did nature suddenly invent lightning?

As for forest policies, the policies people like to criticize go back decades. What changed is not policy, but temperature and drought. One of the reasons that people don't do controlled burns any more is because the new climate means there is no such thing. Those innocuous fires of the past would grow out of control and torch the state if they happened today.

It's also hard to square the "forest management" scapegoating with the global nature of the recent wildfires. What do Siberia and Greece have in common with the American West? I'm pretty sure it's not forest management.

Expand full comment

Also, I'll refer you to a US Forest Service document that says: "Today’s forests are more spatially uniform, with higher densities of fire-intolerant species and suppressed trees."

https://www.fs.fed.us/projects/hfi/2003/november/documents/forest-structure-wildfire.pdf

When we think about the natural world we often make the mistake of assuming that what we see today in "wild areas" is like it's always been, and that humans have somehow left these uninhabited landscapes untouched.

For instance, if you were to visit the countryside of southern New England for a fall foliage excursion, you'll see rolling hills covered in deciduous forests of predominantly maple and oak. If you were to go back in time and visit those same places 150 years ago, you would have seen rolling hills used for dairy farming virtually denuded of trees except for small lots set aside for maples ("sugar bushes" for maple syrup) and wood lots for fire wood. But there were very few extant forests left in Connecticut, Massachusetts and Rhode Island during the 19th Century. If you were to visit southern New England 450 years ago, you would see it covered in mostly climax conifer forests, areas denuded by fires set by Native Americans, and other patches of deciduous forest that had grown up in the agricultural spots abandoned by Native Americans. A good book that describes this is Changes in the Land: Indians, Colonists, and the Ecology of New England, by William Cronon...

https://www.amazon.com/gp/product/0809016346/ref=ppx_yo_dt_b_asin_title_o06_s01?ie=UTF8&psc=1

Archaeological evidence now suggests that the Amazon rainforest, before the Europeans arrived, was home to civilizations with an agricultural base that could have supported tens of millions of people. The Amazon rainforest may have only existed in patches 1,000 years ago. But it would be ironic, indeed, if the current Amazon rainforest is an artifact of European colonial expansion, which killed off 80-90 percent of the native inhabitants with measles and, to a lesser extent, smallpox.

Expand full comment

No, but about a decade ago PG&E started deploying their (illegal) restarter units. According to this report, PG&E was responsible for about 1,500 fires between 2012 and 2019.

https://www.businessinsider.com/pge-caused-california-wildfires-safety-measures-2019-10

And the recent Dixie fire, which burned nearly a million acres, *was* caused by PG&E. At least 5 of the 10 most destructive fires in the past 5 years were linked to PG&E equipment, and PG&E equipment and/or policies were implicated in two others. Arson comes up as the next most common cause of wildfires in California.

Expand full comment

I've lived in California for nigh on 25 years, and in almost every year of those I've heard about things that have happened that have never happened historically. More rain than ever recorded! Less rain than ever recorded! Highest temperature ever in X! Lowest temperature ever in Y! Biggest/lowest yield of crop A, worst floods in county Z, and so on.

In part I think that's because there's a whole lot of things that can happen, and in part it's because California's history isn't very long, and the part of it that contains detailed records of everything that can happen is even shorter.

If we were talking about something that hadn't happened in the last 100,000 years, and we had good records to confirm it, I'd be impressed. But fires...? Meh. I would need a lot better argument than "we haven't recorded this in the 60-100 years people have been keeping somewhat decent track." California is famous for having big burn seasons every now and then even in its recorded history, and it's clearly been going on for a very long time -- far longer than humans have been building combustion engines -- for there to be the many noted adaptations of Western pines to forest fires, including cones that don't even open until the forest has been burned.

Expand full comment

I don't mean in that paragraph to say that climate change should be ignored, only to express my despair in noting that it doesn't matter what our models say; politicians will say whatever helps them, the newspapers will repeat whatever quotes they like best, and people will repeat and believe them.

Be careful, though: Record-setting wildfires are always an annual occurrence, in that every year there's a record-setting wildfire somewhere. Checking https://www.oregon.gov/odf/Documents/fire/odf-century-fire-history-chart.pdf and https://en.wikipedia.org/wiki/List_of_California_wildfires , I see that there have been a lot of big fires on the west coast in the 2010s, and also that 2020 was bizarre--it's something like ten standard deviations above the recent average in terms of acres burned, which would be too large and sudden to be accounted for by climate change if we could model acres burned with a normal distribution. Probably it /doesn't/ have a normal distribution, but a power-law distribution.

Your second point doesn't make sense to me. Uncertainty in modeling gives you bigger tails, in which both lower and higher risks are enclosed. It doesn't change the expected value. It sounds to me like you want to base policy on the worst-case scenario (which is, in fact, what's happening) rather than on the expected value.

But in any case I agree we should take action immediately, and build more nuclear power plants.

Expand full comment

Going by that Wikipedia link, four of the last five years have burned >1.5 mil acres, while only two of the preceding 17 years did. There really does seem to have been a major change in the last couple years where cataclysmic wildfires are now a routine event.

Obviously, it's not like the global temperature suddenly jumped in 2017 (and dipped back down in 2019), but there's also no way you can argue that the recent wildfires are within historical parameters. It's clear that the long-term warming trend has led to conditions under which wildfires are much more destructive than before.

Expand full comment

How confident are you that the acreage burned over the next 10 years will be similar to the current rate instead of something closer to the historical average?

A Bloomberg article has this chart for acreage burned in US wildfires: https://assets.bwbx.io/images/users/iqjWHBFdfxIU/iISDydT73c1M/v1/-1x-1.png

So it certainly looks like it's been increasing since around 1995. And it makes intuitive sense that warmer weather would dry things out and cause fires.

Likewise, global mean temperatures have been increasing somewhat steadily since about 1910: https://earthobservatory.nasa.gov/ContentFeature/GlobalWarming/images/giss_temperature.png

Still, if that's the extent of the evidence that global warming *caused* the increase in fires, then the evidence isn't that strong. There looks to be about as much correlation between global temperature and wildfires as there is between obesity rates in the US and wildfires.

Once we've established causation, the next question is how much of the increase in wildfires is due to the increase in temperature. How much land would have burned if we hadn't warmed the Earth? Is all the increase in wildfires due to global warming?

Expand full comment

How do you know it has caused immense damage today? You know that bad things have happened and people who are concerned with climate change blame them on climate change. People who are concerned that preventing small wildfires and restricting timber harvesting produce big wildfires blame them on that. I expect there are other people blaming them on other things. With or without climate change, rare bad things sometimes happen.

Expand full comment

Has there been any notable change in the timber management policies of Siberia in the last couple years?

A "rare bad thing" happening once could be just bad luck. But when it happens year after year and all around the world, you'd have to be willfully obtuse to ignore it.

Expand full comment

https://www.nytimes.com/2021/10/06/magazine/laurie-anderson.html

Laurie Anderson is a spectacularly good and weird artist. She is doing a big show at the Hirshhorn in DC. She's 72 and they wanted her to do a retrospective, but she's not interested in going over her past work, so it's a new show, running September 24 to July 31.

Note that a visit could be combined with Worldcon.

She might be of interest for the evolution of art discussion. Her work is avant garde, but it isn't hostile. She's trying to introduce people to non-obvious things, but she isn't punishing them for not seeing them already.

I saw Anderson's Forty-Nine Days in the Bardo, which was about a pet dog that died.

https://fabricworkshopandmuseum.org/artist/laurie-anderson/?doing_wp_cron=1633701319.6193389892578125000000

I'd have sworn I wrote something about it, but I can't find it.

Thing to learn: if you want a really great remembrance after you're dead, be loved by an excellent artist.

Anyway, here are some things I remember. A little statue of the dog which had the cremated ashes worked into the clay and a description of bringing back the smell of wet dog.

A little white statue (maybe a madonna) with a film clip projected on it, giving an illusion of 3D movement. I'm surprised people haven't done more with that technology.

BIG black and white paintings. The only one I remember is a portrait of the dog's face. I think there were others about the confusion of being between lives.

A song from the exhibit: https://www.youtube.com/watch?v=JG8PPLP3ROM

Not something I remember, and I hope the afterlife isn't that challenging.

Expand full comment

She turned it into a marvelous movie entitled "Heart of a Dog". (Full disclosure: I love Laurie Anderson.)

https://www.youtube.com/watch?v=8PLWVXICQyM

Expand full comment

Yeah, she's something very special.

Expand full comment

For all the math and physics nerds, a clue from today's NYT XWord:

Clue: Obtain a sum via special relativity?

Answer: INHERIT

Not one of their most perversely creative word play clues but it seemed like it might find an audience here.

Expand full comment

One more example from today’s Sunday Times XWord.

No tie in to physics. Just one that tickled my love of word play.

This was a themed puzzle with a series of clues being movie titles. The trick is to think of a - sometimes twisted - alternative meaning for the title. Answers are generally caps locked in the solutions.

Clue: Top Gun

ROT13’d Answer: GFUVEGPNAABA

Without the intersecting letters from the down answers there is no way I could have solved this. If someone can come up with the answer without that support they are either really lucky or perhaps clairvoyant.

If you want to try to get the puzzle experience without the intersecting words, I suppose you could randomly decipher the ROT13'd letters one or two at a time.

Expand full comment

That seems more like misdirection than a genuine pun, but maybe I'm missing some meaning of "inherit" in the physics sense.

Expand full comment

Yeah, it's not a pun. I'd just call it word play. There is no physics meaning in the answer. The way the question is set up sends the solver in that incorrect direction with its 'special relativity' phrasing.

As Edward Scizorhands mentions below, the puzzle creators usually give warning of this sort of misdirection by ending the clue with a '?'

FWIW appreciation of this sort of word play is not universal. When I show my wife one of these she gets pretty annoyed, complete with some choice obscenities.

My comment regarding math and physics nerds possibly enjoying the little word game was the clue set up. It *looks* like they are talking about physics, until you have the rug pulled out from under you with the 'aha moment'.

Even though I do enjoy these sorts of twists - big endorphin payoff when I get it - the words 'those bastards' or something similar can come out of my mouth.

Another example from earlier in the week. Warning: I'm not going to ROT13 the answer because I'm just demonstrating the technique.

Clue: What jelly rolls are filled with? - Note the ? mark

Answer: ELLS - The words 'jelly rolls' have four occurrences of the letter 'L'. They are filled with ELLS.

If that one annoys you as much as it annoys my wife, then the current NYT XWord is probably not your cup of tea.

Expand full comment

No, I enjoy crossword puzzles, and I'm familiar with the clue paradigm. I was just wondering if I was missing something here.

Expand full comment

I start to suspect that I am your wife.

Expand full comment

I guess it would help if I paraphrase what my wife says about the meaning of a '?' at the end of a clue:

"It means they messin' with ya!"

Expand full comment

First of all SPOILERS

Second of all -- only that clue interests the math and physics nerds, not the answer, right?

Expand full comment

Oh no! You haven't done the Times Friday puzzle yet?

Expand full comment

Not sure if you're kidding me. I guess I could have ROT13'd the answer.

Expand full comment

In general, yes. I read the whole thing at once, so I didn't get a chance to avert my eyes before seeing the big answer in all caps.

It's not a worry here, because I only have irregular access to the NYT puzzles, and usually do them very late, so I'll forget this discussion if I ever get to that puzzle.

Expand full comment

Haha sorry to guilt you, I'm being overly sensitive to say "spoilers" -- I very rarely do weekday puzzles anymore, only Saturday and Sunday. So you're fine!

Expand full comment

The Friday puzzle can be pretty good. A lot of times, Sunday seems like a larger puzzle of Thursday difficulty.

Expand full comment

I've always understood "Sunday is like a Thursday but bigger" to be the explicit goal, but I don't remember where I read it.

Expand full comment

Found it.

"In the continuum of puzzle difficulty, from Monday (very easy) to Saturday (very hard), the Sunday Times crossword is pitched at about a Thursday level."

https://www.nytimes.com/2009/07/19/business/media/19askthetimes.html

Expand full comment

But wait! You haven't answered my question -- "inherit" is not a term that's related to special relativity, is it? I didn't know if the answer there has a double meaning, or if you're just pointing out the (very good!) punny clue.

Expand full comment

It's their tricky word play at work. Someone might inherit a sum of money from a - special - deceased relative. In the last decade or so the NYT XWord constructors tend to make the puzzles more difficult by using an unexpected meaning for the clues. When it's a really good one I have to resist the impulse to slap my forehead when I get the meaning. :)

In the 1990's they made them more difficult by using especially arcane bits of trivia. I think the new way is more fun.

If you are a subscriber and solve on your phone or computer you can go to their archives and give some of the older style clueing a try.

Expand full comment

Basically weekdays became pretty straightforward, so I do the Spelling Bee now, which I'm bad at. I still do Saturday crosswords for the difficulty and Sunday for the popularity. I guess I'm basically a glutton for punishment because I am not good at the Spelling Bee and have barely improved.

I'm not even CLOSE to like Dan Feyer-level good, to be clear, but after a couple years of doing it I felt like it got routine and I wanted a new challenge

Expand full comment

I’m noticing it kind of makes you appreciate the 19 letters not in one of their hives. :)

Expand full comment

I just tried Spelling Bee for the first time. Seem like it could be pretty addictive. Thanks. I think. ;-)

Expand full comment

https://www.newyorker.com/news/on-religion/what-american-christians-hear-at-church

Pew does a detailed analysis of American Christian sermons, while admitting that just doing word counts misses a lot.

Expand full comment

Nice link! Thanks for sharing.

When I was studying speech communication for my bachelor's, it struck me as interesting that every Sunday hundreds of thousands of speeches are performed across the nation, but those who hear them don't think of them as speeches and those who aren't in church are oblivious to them. These days we tend to think of oratory as outdated compared to books and movies and the like: you might listen to the president make a speech, but otherwise it doesn't come up much. It's remarkable, then, to reflect on the fact that across the nation millions gather every week to hear somebody give a speech, and that it is preachers (who have practical need of it) who keep alive the craft of oratory that was refined over thousands of years when it was much more culturally salient.

Expand full comment

Thanks for that insight! I never thought about it that way before.

Expand full comment

To what extent do the terms "nice/kind person" and "asshole" map onto "submissive person" and "dominant person"?

Expand full comment

There's a certain kind of "niceness" that amounts to assholery, so I'm not sure the two terms are as opposite as your phrasing suggests.

I'm also not clear whether you mean "dominant" and "submissive" as personality traits, or as a description of power levels, i.e. what a person can reasonably get away with.

That said, there are cultural memes encouraging better, kinder, behaviour from those with the power to get away with behaving badly. And while it's true that power corrupts, many people with some power behave better than they need to, having perhaps taken these memes on board.

There's also the question of whether someone (a) enjoys hurting people or (b) enjoys the reverse. (I'd say "helping people", but there's a kind of asshole who forcefully gives others things those others don't want, and demands that they visibly appreciate the "gift".)

IMNSHO, most assholes are either griefers (other people's pain brings them pleasure, especially if they caused it) or so dedicated to their own goals that they don't notice their externalities. Or don't care about collateral damage.

Those insufficiently powerful to be considered dominant don't generally get the luxury of being oblivious. But there are still ways to get your own way in spite of lacking power/dominance, and some people use them quite effectively.

Expand full comment

'Nice' can mean a lot of different things. It could be a synonym for 'kind' or it could just mean conflict-avoidant. 'Kind' seems unambiguous to me. It is a genuine positive attribute. I wouldn't think it maps to submissive very well. In my experience kind people tend not to be pushovers. Perhaps too tolerant of bad behavior for their own good, but not at all spineless.

As for 'asshole', well there are any number of ways to pull that off. From dominant jerk to passive manipulative jerk, so that mapping seems poor too.

Expand full comment

I've yet to meet a submissive person who I'd call kind.

Kindness requires a certain kind of dominance, or at least assertiveness.

Expand full comment

Nice maps pretty well to someone who isn't confrontational but might not be submissive. Kind (imo) does not map to either particularly well, and I'd say being an asshole requires being a confrontational type of person, but not necessarily dominant.

Most of the assholes I've met fold as soon as they realize they're talking to someone who might do more than listen; I've met very few people who will stand up to vigorous argument (or "vigorous argument") whom I'd classify as assholes.

Maybe there's some subconscious cue that someone will actually take the argument all the way to the mat, which people can pick up on and which makes them think that person isn't an asshole, kind of thing?

Expand full comment

Depends what you mean by "dominant" I think. It sounds like you're using them in the direct one-on-one sense, i.e. what is my particular response to this particular person in a direct conversation. In that case, the mapping is pretty good, although not perfect, because...

If you are talking about the wider social context, then it is much more common for a "dominant" person to be seen as nice/kind by a wide variety and large number of people -- that's *how* they become, and remain, dominant. In this context an "asshole" usually finds a niche as a gadfly or rebel, which is not exactly "submissive" but on the other hand isn't a position with as much broad social power. (Plus such people are often quietly submissive to the dominant paradigm in all areas *other than* their recognized gadfly turf, since you can't survive as a total social outcast.)

I mention this because one of the things that became clear to me as I got more personal experience of direct contact with powerful (in the social sense) people in business is that they are very far removed from the pop-media portrayal of loud-mouthed bully types who yell orders at quaking underlings. There are exceptions of course, some very famous, but in general what allows a person to ascend to positions of great social power is the perception (and often the reality) that almost everybody feels they are aware and concerned and considerate of nearly everybody who works for them. They tend to be superb listeners, people who can grasp the essential nature of someone's complaint or praise unbelievably fast, and integrate many of those into a "temperature of the room" incredibly accurately.

And the reason I say the mapping is not perfect even in the one-on-one sense is because we are all constantly aware of the wider social context, and so if we encounter someone who has the psychological aspect of a very popular leader, we will recognize that fact, even only half consciously, and respond to this person *as if* they were dominant, even though their demeanor may be "nice/kind." It's sort of a question of whether we feel this person is being nice/kind just to us, in a one-on-one sense, or whether we intuit that practically everybody is going to feel this way immediately about this person, and so (we infer) he or she is likely to actually have wide social power.

Expand full comment

Fairly well, although I tend to read it through the lens of Nietzsche's inversion of values.

Nice is often used as a compliment of last resort, employed when one is required by social niceties to provide a compliment but unable to come up with even a single positive trait. To be a nice guy, a good man, a mensch, etc. is to be unobjectionably mediocre. In other words, the meek.

The meek in turn look on anyone above them with ressentiment, seeing the success of others first as an indictment of and later as the cause of their own relative failures. Rather than seeking to emulate or serve better men in order to share in their fortune, they nurse impotent leveling fantasies of bringing the high low. This frustration with "assholes" and desire for an omnipotent God or State to cut the tall poppies down has been the emotional core of European religion for the last 2,000 years and European politics for the last 200.

Expand full comment

Not very well. If you think of "niceness" in terms of weakness, then yes, it seems to fit with submission. But while people who are doormats do exist, kindness does not mean being weak and malleable. The same with being an asshole and being dominant; there are dominant people who do this by virtue of charisma, charm, and ability - you *want* to please them and be on their good side because the warm glow of recognition and praise feels so good.

And there are those who tyrannise by being weak, by donning a mantle of victimhood whereby you must always concede to them or else you're being a -phobe or an -ist. I'm not going to give any examples, because that just leads down the rabbit hole of waging the culture war; instead, let's quote some Lewis about how being 'nice' can be weaponised into aggression:

"The grand problem is that of "unselfishness". Note, once again, the admirable work of our Philological Arm in substituting the negative unselfishness for the Enemy's positive Charity. Thanks to this you can, from the very outset, teach a man to surrender benefits not that others may be happy in having them but that he may be unselfish in forgoing them. That is a great point gained. Another great help, where the parties concerned are male and female, is the divergence of view about Unselfishness which we have built up between the sexes. A woman means by Unselfishness chiefly taking trouble for others; a man means not giving trouble to others. As a result, a woman who is quite far gone in the Enemy's service will make a nuisance of herself on a larger scale than any man except those whom Our Father has dominated completely; and, conversely, a man will live long in the Enemy's camp before he undertakes as much spontaneous work to please others as a quite ordinary woman may do every day. Thus while the woman thinks of doing good offices and the man of respecting other people's rights, each sex, without any obvious unreason, can and does regard the other as radically selfish.

...Later on you can venture on what may be called the Generous Conflict Illusion. This game is best played with more than two players, in a family with grown-up children for example. Something quite trivial, like having tea in the garden, is proposed. One member takes care to make it quite clear (though not in so many words) that he would rather not but is, of course, prepared to do so out of "Unselfishness". The others instantly withdraw their proposal, ostensibly through their "Unselfishness", but really because they don't want to be used as a sort of lay figure on which the first speaker practices petty altruisms. But he is not going to be done out of his debauch of Unselfishness either. He insists on doing "what the others want". They insist on doing what he wants. Passions are roused. Soon someone is saying "Very well then, I won't have any tea at all!", and a real quarrel ensues with bitter resentment on both sides. You see how it is done? If each side had been frankly contending for its own real wish, they would all have kept within the bounds of reason and courtesy; but just because the contention is reversed and each side is fighting the other side's battle, all the bitterness which really flows from thwarted self-righteousness and obstinacy and the accumulated grudges of the last ten years is concealed from them by the nominal or official "Unselfishness" of what they are doing or, at least, held to be excused by it. Each side is, indeed, quite alive to the cheap quality of the adversary's Unselfishness and of the false position into which he is trying to force them; but each manages to feel blameless and ill-used itself, with no more dishonesty than comes natural to a human."

Expand full comment

What is the Lewis citation from?

Expand full comment

The Screwtape Letters

Expand full comment

EDIT: While there is the trope "Good Is Not Nice", it does not follow that the converse, "Not Nice Is Good", is true. This is how people can behave like assholes and still convince themselves they are in the right, or doing it for the other party's good, and so on.

Expand full comment

I enjoy the shill/classifieds thread since it's fun to see what ACX readers are up to, and it's a chance to signal-boost cool things by people without name recognition. It is unfortunate that these threads are just up for a few days. I tend to look at things that someone else has left a positive comment on, and there's just not enough time for most posts to get enough engagement, especially the ones that are advertising a 200,000-word novel.

I would be interested in some way of curating or increasing the engagement time on the classifieds post. One idea would be to do something like what was done for the reader-selected book reviews: compile a giant list of classifieds under different subjects (coding, fiction, history, etc... probably best to exclude personals from this) and have readers rate random postings in domains they are knowledgeable in. Then you can make several posts with a couple at a time of the top-rated classifieds in the body, so people can adequately investigate them.

Expand full comment

I enjoy the classifieds too, tho' it was frustrating that this month only paying subscribers could comment.

Expand full comment
author

What?! Is that true? Definitely unintentional, can someone else confirm?

Expand full comment

I assumed it was on purpose. Paying the bills with classified ads is deeply traditional, as, in some ways, are you.

Expand full comment

Yep, non-subscribers can read but not post. I tried both direct responses (posting my own ad) and second order responses (replying to someone else).

Expand full comment

It's still locked, in case that's of use.

Expand full comment

I'm glad to hear that it was unintentional (tho' that seemed obvious to me). What's surprising is that after three days only a couple of people have mentioned it - on the previous open thread. It must have been a surprise to many hundreds of unsubscribed readers.

Expand full comment

From my cynical POV, I presume the normal response was a combo of "of course it's intentional" and "can't fight city hall".

We've all been well trained by the typical business to expect them to do anything even remotely profitable that they think they can get away with.

Expand full comment

Same.

Expand full comment

I can confirm, I logged in and tried to comment just now, and when I started typing I got an error message saying "Only paying subscribers can comment on this post"

Expand full comment

Can confirm, was subs mode.

Expand full comment

Same results for me.

Expand full comment

Alice, Bob and Carol are chess players who play lots of games against each other. Bob beats Alice in 80% of their games. Carol beats Bob in 80% of their games. What is the percentage of games in which Carol beats Alice?

Does the answer change dramatically if instead we choose Go, Starcraft or Tennis?

Also, suppose we try to continue the chain - we look for a player who beats Carol in 80% of games, and then for another one who beats *that* player in 80% of games, and so on. How many players can we have in the chain before we reach a perfect or almost-perfect player and we can't continue the chain anymore?

Expand full comment

There isn't a general answer. Beating someone in a game isn't always even transitive.

My hobby used to be sword and shield fighting in the SCA. There were three good fighters in the extended Chicago area, of which I was one. On average, I could beat B, B could beat C, and I found C a very hard fight. I don't know to what extent that pattern would apply to chess, but it surely could to some degree to many contests.

Expand full comment

Society for Creative Anachronism? What, they don't have swing dancing or model train enthusiasts where you live? ;-)

Expand full comment

A formal version of this is intransitive dice, where A beats B, B beats C, and C beats A.

https://en.wikipedia.org/wiki/Intransitive_dice
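
A quick illustration with one classic trio of such dice (each value appears on two faces of a six-sided die); every die beats the next one in the cycle with probability 5/9:

```python
# One classic intransitive trio: A beats B, B beats C, and C beats A,
# each with probability 5/9. No ties are possible with these faces.
from itertools import product

A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    # probability that die x rolls higher than die y
    wins = sum(1 for a, b in product(x, y) if a > b)
    return wins / (len(x) * len(y))

print(f"P(A > B) = {p_beats(A, B):.3f}")  # 0.556
print(f"P(B > C) = {p_beats(B, C):.3f}")  # 0.556
print(f"P(C > A) = {p_beats(C, A):.3f}")  # 0.556
```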

Expand full comment

Zeroth-order guess (mathematically equivalent to John Johnson's answer): Bob is four times as strong as Alice, and Carol four times as strong as Bob, hence Carol is sixteen times as strong as Alice, and will win 16/17 = 94.1% of games.

(I'm assuming 80% is actually rate of wins + (rate of draws)/2 -- as demost_ points out, except at the complete beginner level, anything remotely resembling 80% wins + 0% draws + 20% losses is ridiculously unrealistic -- something like 64% wins + 32% draws + 4% losses would be a much more likely outcome for two players 241 Elő points apart.)

Expand full comment

Yeah, I forgot about draws, thanks.

Another factor I've neglected is that players tend to get better the more they play, so suppose that Alice, Bob and Carol are non-learning programs.

Expand full comment

Using this https://www.3dkingdoms.com/chess/elo.htm

We get that an 80% chance of winning corresponds to an Elo difference of 241 (80 wins and 20 losses; I'm assuming no draws for simplicity). This gives an Elo difference of 482 between Carol and Alice (since chess Elo is indeed transitive), which translates into winning 94% of games.
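
For reference, the formula behind that calculator is the standard Elo expected-score curve. A quick sketch reproducing the numbers above, plus the ~700-point human/engine gap mentioned elsewhere in the thread:

```python
# Standard Elo expected-score formula: P(A beats B) = 1 / (1 + 10^(-diff/400)).
import math

def win_prob(diff):
    """Expected score for the higher-rated player, given the rating gap."""
    return 1 / (1 + 10 ** (-diff / 400))

def rating_gap(p):
    """Inverse: the rating gap implied by an expected score p."""
    return 400 * math.log10(p / (1 - p))

print(f"80% score -> gap of {rating_gap(0.80):.0f} points")  # ~241
print(f"482-point gap -> {win_prob(482):.1%}")               # ~94%, Carol vs Alice
print(f"700-point gap -> {win_prob(700):.1%}")               # ~98%, human vs engine
```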

Expand full comment

Elo seems like a useful mathematical simplification, but how much predictive power does it have? I'm not a chess expert, but perhaps there is an advanced strategy that is effective against casuals in particular, so that Carol's chances against Alice get a boost. Could the opposite effect happen? I don't know about chess, but I think that in games with randomness, randomness disproportionately skews games between players with a wide disparity of skill towards the noob.

What I'm looking for is a way to quantify the strategic depth of a 2-player game, i.e. how much a player can improve their skill before hitting the ceiling. If using Elo this way is even approximately correct, then one could get the strategic depth from the top score. And with the proper normalizations, maybe the difference between the top scores of different games tells us of their relative strategic depth.

Expand full comment

Elo's predictive power is quite good over a big enough number of games. Even for the short length of a typical tournament (say 10-14 games), it's quite unlikely that the winner will not be among the top rated players.

In your example, the Elo differences are big enough that any other consideration can be neglected.

The situation is a bit different with players who have really close ratings. Then you can get non-transitive properties - for example Kasparov<Kramnik<Anand<Kasparov over lifetime scores. You also have players with head-to-head scores widely different from what their close Elo ratings would suggest (Kasparov-Shirov, or Carlsen-Nakamura). This often comes down to subtle differences in skill between different compartments of the game, and is very difficult to predict, as it is not a rock-paper-scissors game either.

Expand full comment

Elo ratings are widely used for competitive video games like StarCraft, and are gaining popularity for analysis of regular sports as well. The other big rating system for video games, TrueSkill, looks like it works on similar principles to Elo in that they both fit win probability vs rating difference to a logistic curve.

Expand full comment

I think in chess it's mostly transitive, so playing styles do not play too much of a role. And 80% is a huge difference. So I would say that Carol beats Alice in much more than 80% of the cases.

Continuing the chain does not make so much sense in chess. The better both players are, the more likely draws become. I would guess that a very good human player still has a decent chance of drawing against a top chess engine (much more than 20%), even though engines are way, way stronger than humans.

Expand full comment

Your guess is way off. Top humans never play computers on equal terms in public these days, because they know they will get humiliated; but a year ago, Hiraku Nakamura (currently world no. 19, Elo rating 2736) played an 8-game rapid chess match against Komodo NNUE (not the world's top chess engine) with Komodo giving odds of two whole pawns (!), and Komodo still won the match with five wins, three draws, and no losses. Rapid time control is thought to favour the computer, but this is still humiliation.

World no. 1 human Magnus Carlsen's Elo rating is currently 2855; world no. 1 computer Stockfish has no official Elo rating, but various unofficial sources estimate it at around 3550, for a difference of about 700 Elo points. Without regular games between top humans and top computers, it is impossible to know how accurate this is, but if we take it at face value, it would mean that Stockfish would expect to win a 100-game match by 98-2 (according to the formula on the Wikipedia Elo rating system article).

Expand full comment

3 out of 8 is much more than 20%, isn't it?

Expand full comment

That was with odds of two whole pawns. You are not a chess player?

Expand full comment

*Hikaru Nakamura

Expand full comment

Starcraft is a special case because a player can be a lot stronger in one matchup (race vs race) than another.

Expand full comment

As is tennis. At least in professional tennis, some players are specialised for clay, some are specialised for grass. The surfaces have been made a bit more similar over the last few years, but there have been players who completely dominated the clay season while they wouldn't stand a chance on other surfaces.

Expand full comment

TIL that some people play tennis on grass. Including at Wimbledon!

Expand full comment

I don't know much about chess, but e.g. in Starcraft it is entirely possible that A is perfectly positioned to beat B, B is perfectly positioned to beat C, but C is still strong against A.

Expand full comment

Ooh an even thread. What do people think of the latest Bundestagswahl? Traffic light seems the most likely but the best option for the FDP and Greens to exert leverage would be to keep Jamaika in play as a BATNA.

The compromise between spending public money on welfare, spending public money on the environment and not spending money looks like it will be a sticking point.

Expand full comment

The most surprising result is that the SPD is still a major player. For the last few years, it's been looking like the Greens would displace them.

The Jamaika coalition seems like it has fewer internal contradictions: spend more public money on the environment, but on nothing else. However, the traffic light coalition seems to have more popular support. Both coalitions exist at the regional level.

We often think of the negotiations in terms of policy: How would they put together a platform that they all could support? It might be better to go through the German cabinet and try to give 8 positions to SPD, 4 to Green, and 3 to FDP. Here is the current list:

https://en.wikipedia.org/wiki/Cabinet_of_Germany

Expand full comment

The FDP would have a genuine interest in Jamaika (conservative/green/liberal), but for the Greens, keeping Jamaika in play as a BATNA is what they are actually doing. However, some of their supporters are already unhappy with them 'playing games', so they will not want to overdo that.

On another note, I think with this federal election the game is changing, as it already did when a stable three-party system evolved into four/five parties clearly clustered into two camps. Now for the first time in the modern history of the FRG there'll probably be a government coalition of three parties. Also, for the first time there were justifiably three candidates for the chancellery, and while in the end the third party ended up roughly 10 points behind the first, the results overall show something very different from the Volkspartei / big party vs. small party dichotomy that Germany is used to, at least at the federal level.

I'm very much in favour of the German electoral system; however, I think it works well due to a good combination of formal rules and culture, i.e. a myriad of rather stable informal rules. When after the election a sense of 'everybody is talking to everybody' was conveyed, I was worried for a moment that this might erode some of those stable informal rules, and that medium-term the stability of the political system might decline. In the end, I decided to take a slightly more optimistic view.

I understand this turned out to be a no-politics thread; I still wanted to give a partial answer and I hope this is okay. We might want to postpone longer or especially more controversial discussions to another thread, I guess.

Expand full comment

Inspired by the Noah Smith poll (https://twitter.com/noahpinion/status/1446350023621373958?s=21). Who would win a war—Mongolia or Peru? The war is over control of New Zealand.

Expand full comment

Peru

Population 33 million

Active military personnel 115 000 + 386 000 reserve

GDP 225 billion

Military budget $2 560 million (actually Wikipedia claims it's $ 2 560 000 000 million)

Navy: 6 submarines, 7 guided missile frigates, 7 guided missile corvettes, 7 offshore patrol vessels, 4 amphibious vessels

Mongolia

Population 3.4 million

Active military personnel 35 000 + 135 000 reserve

GDP 14 billion

Military budget $210 million

Navy: a few lake patrol boats

Considering all that, I think Peru could even win a war for control of Malaysia. Bolivia vs Mongolia would have been a fairer fight.

Expand full comment

New Zealand

Expand full comment

To expand a bit more: even if the NZ military were disappeared by aliens, a civilian insurrection would still beat back a Mongolian / Peruvian invasion.

If they were aware of it coming, I think they would beat Mongolia and Peru cooperating.

Disclaimer: I am completely ignorant on this topic.

Expand full comment

I'm curious about the choice of Mongolia. It's landlocked, surrounded by much more powerful neighbors (i.e. not likely to let them send large forces through or over their land), and has no significant military or industrial might. Not that Peru is a powerhouse, but it at least can access New Zealand on its own and is stronger than Mongolia.

Expand full comment

Neither, as per magic9mushroom.

If aliens wipe out New Zealand's population beforehand, Peru, also per magic9mushroom.

Expand full comment

I don't see any obvious advantage for Mongolia here; Peru has ten times the population, seven times the GDP and an actual sea coast.

The real answer, of course, is "neither", because while Peru does have a navy it's not remotely up to the job of invading a country with half its GDP on the other side of the Pacific.

Expand full comment

You're assuming New Zealand is an active actor here. Since it has been auditioning for the hermit kingdom role of late (with international rugby matches added in), we could suggest that a nominally weaker country could assert control without invading, if New Zealand were deliberately isolated. In which case Peru's navy would be useful, although whether Mongolia has an effective air force might be a relevant question.

Also, anyone who managed to secure bases in Australia (at least the east coast) or the various nearby Polynesian territories would have an advantage. So what can Mongolia offer Tahiti?

Expand full comment

(Addendum: I used PPP because it's generally more relevant to militaries, although less so for countries this size. In nominal terms, New Zealand has a bigger GDP than both put together.)

Expand full comment

If you'd said Chile rather than Peru, I'd know the answer (Chile don't tend to lose wars). But in a war for control of New Zealand a navy might be quite important so actually having a coast line might give Peru an advantage here.

Expand full comment

SSRI question for the chat here: I am visiting a psychiatrist for the first time in a long while for anxiety, and they want me to take SSRIs (Zoloft first, if it matters). I am concerned about permanent side effects, even if I desist from the medication, which can make it risky to experiment. The least data around seems to be on permanent sexual dysfunction, which in the SSC guide is considered "rare" but with nothing concrete, and every published article I have found is qualitative or descriptive.

Anyone out there have literature/knowledge on how common a side effect long-term sexual dysfunction is? Is it dose-dependent? And maybe most importantly, since I am experimenting, is it duration-dependent - equally likely after 2 months as after 2 years, or more likely the longer you are on it, etc.? Would be really helpful guidance as I evaluate the right treatments, thank you all.

Expand full comment

Just want to say thank you to everyone who replied to this - while I was hoping for some documentation, if it doesn't exist that is just the way it is, and it helped get some good context and understanding. I do think I will push harder for taking Bupropion first, but if it's not a good result I will be more open to experimenting with SSRIs.

(Also Ace or Depressed had a *great* name, kudos on that one!)

Expand full comment

Can share my personal experience on this:

Over the past 10 years I have taken Lexapro and Wellbutrin at different times. Lexapro definitely reduced my sex drive while taking it the first time (I was around 24/25, when this would be very noticeable). But it was about the same as the reduction in sex drive from being depressed and anxious all the time, so about a wash.

On Wellbutrin I developed horrible jaw clenching that would turn into very bad headaches. This became almost constant and was the reason I went off that drug (and life changes that reduced stress). I would not take Wellbutrin again for this reason. The clenching has continued though only when I am stressed. I speculate that the clenching became a physical outlet for anxiety as a replacement for the psychological ones. Essentially I developed a habit of clenching.

The sexual side effects from the Lexapro did not carry over in any noticeable fashion after I stopped taking it. Any issues were related to anxiety, not the medication.

I am now back on Lexapro after a few years on nothing. The sexual side effects are there, but much less; however, I am now in my mid-30s, so a reduction in libido is not unexpected.

The positive benefits of Lexapro were so great that I would take them even if the sexual side effects were guaranteed. It basically turned my entire life around.

From the reading I have done on both the sexual side effects and the clenching, it seems like most doctors report seeing them in their patients, but studies don't often show them. Just another paradox of psychiatric treatments.

Expand full comment

I think the anecdotal evidence for long-term sexual dysfunction mostly concerns SSRIs given to children, not adults. Although of course it's always possible, I don't even know of any anecdotal reports of that in adults.

Also, N = 1 endorsement: SSRIs/SNRIs have helped my anxiety a lot. Definitely glad I tried them. (Although be aware that different ones can have really different effects! Zoloft & prozac made my anxiety worse, while venlafaxine has helped a lot.)

Expand full comment

SSRIs had sexual side-effects for me: I became un-anxious enough to enter a sexual relationship.

Expand full comment

I've been on and off SSRIs for 25 years, including Zoloft for most of the last 10 years. I've never had any side effects issues when I've gone off.

Expand full comment

I'll add that I've never had any problem with libido per se on Zoloft, but SSRIs make it almost impossible for me to orgasm.

Expand full comment

My understanding is that "rare" in that context means uncommon and anecdotal. I don't think you're going to find a literature on long-term sexual dysfunction linked to SSRIs.

My read on the anecdotes is that some people go on antidepressants when they're young and horny, and the SSRIs dampen their sex drive. Some percentage of folks expect their sex drive to return to baseline when they stop taking the medication, but it doesn't, so they call it a long term side effect.

The problem is, most people aren't as horny at 34 as they are at 20, so we need controlled studies to determine how people would respond having never taken SSRIs. However, given that reports of sexual dysfunction after discontinuation are low, and how expensive running longitudinal RCTs is, I wouldn't hold your breath.

Expand full comment

That makes sense and I bet it's why some people on long-term regimens of almost any drug also tend to claim hair loss as a side effect.

Expand full comment

I haven't heard of it being permanent, and most people revert to normal libido when they stop. But if your issue is depression and you're worried about sexual side effects, ask if you can try Wellbutrin, which has no libido-dampening effect and often does the opposite (I had to stop taking it bc it honestly caused hypersexuality, which was sort of fun but also scary, as well as mild aggression). It definitely doesn't have any of the side effects of SSRIs; the side effects are basically the opposite (weight loss instead of gain, hypersexuality instead of hyposexuality, amped-up irritability instead of sluggishness).

Expand full comment

To add to the anecdata, I also much preferred Wellbutrin to SSRIs and indeed am still on it. The sexual side effects were noticeable to me, and I was glad to be rid of them while still keeping my brain stuff in check.

Expand full comment

"Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed. This one is even-numbered"

193 is an odd number. On the other hand, the URL for the thread has 192. I'm guessing that the thread title was a typo.

Expand full comment

Either way, Scott is now too busy to moderate. :P

Expand full comment

The error has now been corrected.

Expand full comment

It's a trap, be careful. :)

Expand full comment

I've been meaning to comment on this for a while now: why do people so commonly conflate "complex" with "good" when it comes to aesthetics? A lot of judgment about e.g. "declines" in pop music looks at how pop songs have gotten simpler in terms of composition over time, or the "decline of cinema" due to the dominance of MCU movies, and even Scott's recent post on architecture has a lot of allusions to technical decorations and flourishes being seemingly indicative of greater feats of accomplishment in the past. Isn't the ability to distill something more potent and simple out of complexity an equally admirable task? I say this as someone who felt righteous as a teenager being into metal (how could a song be good without an excessively complex guitar solo?!) and then came to appreciate the simpler compositions of general rock and punk music alongside many others.

Expand full comment

The simplest explanation is that for some people complexity is just part of their aesthetic taste. If you like piano Jazz, odds are you're actively looking for and enjoy well-done complexity, and probably view complex genres as superior to very simple ones.

Expand full comment

I think one problem with "simple" is that it often implies a lack of diversity, leading to a point where you've seen it all and get jaded.

This definitely happened to me with music, to an extent: a lot of common chord progressions that sounded awesome when I was young sound cliched to me now.

I think the criticism of "too simple" would be better if substituted with "too unoriginal". I think you're absolutely right that "distill something more potent and simple out of complexity" is a good thing.

Expand full comment

I think when people criticize something as simple, they often mean it doesn't take much talent to produce. That doesn't necessarily mean it isn't enjoyable to people, but these critics may feel it's less worthy of admiration.

For example, the chord progression C, G, Am, F is very common in popular music, usually combined with a simple melody (think No Woman, No Cry or Let It Be). It's a very nice sounding chord progression. Ask a halfway decent musician and they can probably make up a pop song on the fly using these chords that sounds about as good as other pop songs.

Some people are a bit bitter that it doesn't always take years of study and meticulous, intelligent craftsmanship to make a catchy song. Or maybe they just want to justify feeling smugly superior for not liking pop.

Or an example from cooking: I often make desserts. Some of them are difficult to make, but one of my best ones is just peaches in red wine with a bit of sugar, chilled. And while we all love the dessert, I don't expect anyone to be impressed by my ability to slice some peaches into a bowl of wine. The product is delicious, but it takes no talent to make.

Expand full comment

Thanks, I enjoyed this comment.

Expand full comment

In fiction, it is quite common to see authors who learned the lesson "great literature is complex" and then try, more or less successfully, to apply it, piling complexity on top of complexity, and ending up with a story that does not actually hold together. Common pitfalls include over-complicated plot turns stacked on plot turns, characters loaded with quirks and tragic life events, et cetera. The lesson they did not learn is that in great literature the masterful complexity serves some purpose (sometimes that purpose was "the author is paid by the word", but not only that).

Classic example: Raymond Chandler wrote his novels by cannibalizing his short stories together. While he nailed the atmosphere, in the end result there are minor details that are ... lacking: try to work out who drove a certain car off a pier in The Big Sleep, and why. (Spoiler: Chandler had no idea either, when Hollywood was making the story into a movie and somebody called and asked him.)

The most common and easily identifiable offender is the multi-season TV series where the writers start on a complex mystery without any idea where it is going, write themselves into a corner, and can't fix it because the problems are in episodes that aired three seasons ago.

Let me present three different complicated things:

A Rube Goldberg machine is a complicated thing with lots of unnecessary parts, but despite every indication to the contrary, it wondrously achieves something in the end, and the parts may be fun to look at.

An unfinished super-complex perpetual motion engine with all the parts lying around in a garage is also a very complex thing when you first see it, but you are in for a disappointment when you look at the design and see it is obviously never going to break the laws of thermodynamics.

Then, finally, there are things like LNER Class A4 4468 Mallard, which also has lots of moving parts, but they not only all have a purpose; in a combined miracle of physics and engineering, all the parts moving in unison made a functional object, the fastest steam locomotive in existence. On top of that, it also looks aerodynamic on the surface.

Expand full comment

Things that are very simple are often easy to recognize as good or bad. "Clean" (uncluttered, pure, etc.) is sometimes used to describe simple things that are also good. Simple things that are poorly done are obviously so. Complex things are harder to categorize, and can often have both positive and negative attributes, making a singular Good/Bad dichotomy difficult. This allows more room for people to claim that complex things are in fact good, even when it's unclear if that's actually true.

Simple and good things also lack nuance, so repeated iterations become "boring" once the style has been seen enough times before. Boring is typically bad, so that further limits simple aesthetics. Someone trying to ensure that they are making something Good, or at least harder to call Bad, will want to err towards complexity.

Expand full comment

There is "good simple" and "bad simple", and also "good complex" and "bad complex". Some people criticize the "bad simple" for being bad. Some people criticize the "good simple" for being too simple to signal their high-status education.

Expand full comment

I think the MCU is miscategorised here: the MCU has different worlds, character arcs including death, films released in non-chronological order, and about as many characters as there are actors active in Hollywood, so it doesn't really fit the description "simple". Indeed the films have so many unnecessary twiddly bits for various reasons that "baroque" might be a better description.

With that in mind it's worth remembering this is a Disney property, the same company flagged in Scott's architecture post as a source of traditional architecture. I always regard lamenting the supposed predominance of the MCU films as elitist signposting akin to looking down on Disneyland, a conscious rejection of popular taste. So a complaint about MCU films not being simple enough is more of a complaint about eschewing a narrative in favour of popular themes, whilst the equivalent complaint in architecture from your post is that popular features are being disregarded by the elite who control building firms for ideological reasons.

Expand full comment

The complex things that seem good (like architecture with a lot of flourishes) mirror forms found in nature. Your local forest is not minimalist.

Expand full comment

In England, "forest" can still designate an area set aside for hunting at some point in its past. So my local forest when I grew up was an area of moorland heavily grazed by sheep and devoid of trees. It's pretty minimalist...

Expand full comment

It's not that I think MCU movies are simple and dumb. It's that I think they're simple and childish, yet somehow popular with adults.

Nothing wrong with Barney the Dinosaur either, but why would I watch it as an adult?

Expand full comment

1. I think it's a rationalization of their personal tastes.

2. I think that complex things leave a lot of room for interpretation, conversation, and analysis, which can be rewarding, but also I think that people try to signal intellectualism and seriousness by endorsing complex things (in fact, the more difficult to understand, the better signal it is of intellectualism, with the height being things that are completely incomprehensible).

3. Relatedly, I think people like to demean others by presenting their tastes as unsophisticated, and this is easier to do when the tastes are simple. There's also an element of classism here.

Expand full comment

The problem with MCU movies is not simplicity, it is dumbness.

> Isn't the ability to distill something more potent and simple out of complexity an equally admirable task?

Yes, but just removal without replacement (see modern architecture, with concrete hellscapes like https://en.wikipedia.org/wiki/Salk_Institute_for_Biological_Studies where even supporters will admit that the only way to get shade is to hide like a rat along the edge of the plaza - https://astralcodexten.substack.com/p/highlights-from-the-comments-on-modern/comment/3140023 )

Expand full comment

Arghh, it should be "Yes, but just removal without replacement is bad. This happens so often that people equate simplifying with things getting worse."

Expand full comment

I WILL FIGHT ON BEHALF OF MY HOT CUBICAL BOY

But for real, it's fine 9/10 of the year on account of San Diego has the best climate in the world, but during the 12 90+ degree days a year, holy shit is it hot.

You gotta go to the other side (grass instead of concrete) or the glider port if you wanna be cool.

Expand full comment

Interesting. I clearly have some aversion to places like this based on things that are not necessarily applicable there.

When I see something like that I have a massive revulsion that in the end seems based on "during summer it will be horrifically hot, during fall/winter it will be a wind tunnel with no place to hide, wtf is that".

And I had not considered that in some places shade/windbreaks are not so important.

-----

Also, I really, really like trees (to the point of doing actual things to get more of them - this year I successfully requested the planting of 30 trees from my local government[0])

[0] I am aware that most of those would have been planted anyway, just elsewhere. The actual increase is lower, more diffuse, and basically impossible to measure.

Also, I have only gotten an assurance that the trees will be planted; no actual planting has happened so far.

Expand full comment

Local govt. is hit or miss with trees, and if they are a miss and do plant them, they'll probably be fucking plane trees, or some A E S T H E T I C tree that will die in a couple of years when it realizes it is three zones out of its habitability range.

My recommendation to you: just go out and (carefully!) plant native trees in areas they won't be hit with a lawn mower.

Trees have a kind of inertia, once they are big enough they probably won't be cut down. Some trees I planted in random places in school landscaping, water district land and golf course sidings are still there 20 years later.

Expand full comment

1) What is wrong with Platanus?

2) My local government is far from ideal, but at least they are not trying to grow trees impossible to grow here :)

3) In my case I would rather be adding some protection to prevent them from inadvertent injury by lawn mowing; the obsession with grass cutting is extreme in my city (though there is a growing number of places where they sow wildflower seeds and cut once a year).

Expand full comment

Anybody interested in meetups on the Big Island in Hawaii?

Expand full comment

This thread has been mislabeled; it is the second OT #192.

Expand full comment

Scott Alexander won't be there (I don't actually recall if announcing cities up top means he'll be there), but Robin Hanson is supposed to be attending & speaking at this Saturday's Chicago Rationality meetup.

Expand full comment

I forgot to provide sufficient details for an interested person to attend:

Date: Saturday, October 9th at 1pm​

Location: South Loop Strength & Conditioning (upstairs in the mezzanine), 645 S Clark, Chicago IL 60605

Expand full comment

Can someone help me understand the stock market, and the forces that drive stock prices?

In particular, I know that the price is mechanically determined by the price someone is willing to pay. Basically supply and demand - if more people want to buy than sell (at the current price point) then the stock "goes up". But what I've always had a hard time understanding is, WHAT CAUSES people to suddenly want to buy or sell a stock?

I know the answer is generally along the lines of "people buy a company's stock because they think the company is doing well... or WILL do well". And as expected, a company's stock jumps when positive news is released (e.g. strong sales). But since the stock's price directly reflects supply/demand and NOT its performance or the news... the market could all decide to sell the stock after a positive news release. Or a stock could go up a whole percent on any given day when there is little to no good news released to justify it (or vice versa, going down).

So I guess the best way I could succinctly phrase my question is: Why should I buy a stock because I think the company will do well? Shouldn't I buy a stock because I think OTHERS will want to buy the stock in the future (and the price will go up, so that I can sell it for a profit)?

And if the latter is true, then how do we determine which stock to buy? We just said we want to buy a stock not based on the company's performance, but on how we think the stock will perform after we buy. So how do we figure that out?

Yes I know dividends exist...but I don't know any serious investors that trade stocks for profits who buy stocks because they produce dividends, since the dividends are usually so small.

Expand full comment

"It is not a case of choosing those [faces] that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees." (Keynes, General Theory of Employment, Interest and Money, 1936).

https://en.wikipedia.org/wiki/Keynesian_beauty_contest

Expand full comment

My friend got a Ph.D in Finance and recently had a job building computer algorithms for a rich guy who runs a hedge fund (he quit due to workplace abuse), and he told me that, the more he learns, the more of a random roulette wheel he realizes the stock market is.

That said, you can make money in stocks if you invest conservatively, don't make idiotic mistakes, and don't expect big gains in the short term.

Expand full comment

People are mentioning dividends and buybacks. It seems to me that it should also at least sometimes be theoretically worth owning a stock because of the voting power that comes from owning shares. I suppose this is probably mostly relevant in the case where one company wants to take over or merge with another, but could conceivably be relevant if someone sees that a company has a valuable asset that could be used in a different and more profitable way than current management is using it.

Expand full comment

The answer to all your questions is Yes and No. And also Maybe. It is impossible to say with any certainty why the price of a stock is what it is or why it moved the way it did. There are way too many reasons and actors to get any certainty.

It's a three-body problem on steroids. But in an effort to simplify things I would say there are two major forces. One group buys or sells to make money, the other group buys or sells because they have to for some legal or contractual reason.

The first group is the most well known and the one talked about the most. You basically covered all the big reasons someone would be in this group and all the possible reasons they may buy or sell. The price of the shares, the timing, what other people are doing all play into these decisions.

The second group is less talked about because it is boring, but it ultimately has a huge impact on markets and probably accounts for more money movement than any other group. These are actors such as mutual funds that have to buy or sell to rebalance their portfolios. Or an index fund that has to add or drop a company because it now fits or no longer fits the fund's criteria. Then there are institutional investors who may need to take a position to offset the position their clients are taking (so they have neutral risk). There are also funds that must maintain some balance of asset classes or risk profiles and will have to sell or buy things to meet those contracts. If we include futures markets or foreign exchange, there are people who have to use these markets to do everyday business and aren't super sensitive to the price as long as what they buy or sell creates certainty.

Expand full comment

Approximately how much of the market is legally required sales?

Expand full comment

In the short term, the stock market is a voting machine. In the long run it is a weighing machine.

Expand full comment

Investment (buying stocks for the dividends) is the rock on which speculation (buying stocks because you think the value will go up) is very shakily built.

>I don't know any serious investors that trade stocks for profits who buy stocks because they produce dividends, since the dividends are usually so small.

That is because you have defined the investors out of your sample. Investors - people who hold stocks for dividends - hold their stocks for long periods and thus don't spend a great deal of time making trades (they buy in, sit there and collect for years). Speculation requires more frequent trading and therefore forms the majority of the *traded* volume on a given day (as opposed to the majority of *share ownership*); it potentially has quicker returns than investment *if* you are better than most speculators at it, but if you aren't you won't do any better than an investor and will frequently do worse.

Expand full comment

-"Investment (buying stocks for the dividends) is the rock on which speculation (buying stocks because you think the value will go up) is very shakily built."

This can't be true -- some stocks don't even pay dividends.

Expand full comment
founding

Approximately all stocks eventually will pay dividends, be bought out (or back), or become worthless through bankruptcy. The price of the stock today, is the market's collective expectation of those future possibilities.

Expand full comment

Those stocks may start paying dividends in the future. It is that possibility on which the share price rests.

(Holding shares because you believe the company will be bought out is speculation, built on the investment of the one who eventually buys out the company - with 100% of the stock, they can force the company to start paying dividends.)

There is technically also the value of the share's vote, although for this to be separable from the monetary value (i.e. dividend) there need to be rather serious market distortions such that the company's decision whether or not to do X actually controls whether or not X gets done (e.g. buying out all the payment processors would let you shut down crowdfunded Internet porn *if and only if* there is some barrier to somebody starting a new payment processor that allows crowdfunded Internet porn).

Expand full comment

If you read Taleb's blackpaper on BTC, you can use his argument for concluding that BTC is worthless to conclude that any share known to never yield a dividend (including at liquidation of the company) is worthless.

Expand full comment

A) companies also distribute money via share buybacks and B) even if a company doesn't currently pay a dividend/buyback, it may do so in the future.

Expand full comment

To simplify matters, I would just consider buybacks a type of dividend.

Expand full comment

Another unintuitive thing is that "the stock price" is just the price at which the latest share was sold. A price of $100 means that all prospective buyers are offering less than $100, and all sellers are asking for more than $100.

If you go from not selling (asking price = infinity) to offering 10 shares at a price of $90, then maybe the first share will be sold at $99, 5 more at $98, and the last 4 at $97.

So now the share price has dropped to $97. It has always been the case that those were the best offers at hand, but the act of actually selling your shares revealed this information to the world. The share price drops, but no-one except you has changed their estimation of how much the company is worth.

Your sale might trigger a news headline along the lines of "shares PLUMMETED 3%, wiping out $1.2 BILLION of shareholder value!". You can safely disregard this type of news. These people are using a metric called market capitalization (market cap), which is just the share price multiplied by the number of shares. This is a very rough estimate of how large/valuable a company is, and we only use it because we lack better information. No real value was lost. At most, some people had a wake-up call about how much their assets were worth all along, because the order book turned out to be thinner than expected (they wrongly assumed there were hundreds of people clamoring to buy the shares at $99).
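To make that concrete, here's a minimal order-book sketch in Python of the example above. This is not any real exchange's matching engine; the bid sizes and the shares-outstanding figure are made-up numbers purely for illustration.

```python
# A toy matching loop: "the stock price" is just the last trade, and
# "market cap" is that last-trade price times shares outstanding.

def execute_sell(bids, qty, limit):
    """Match a sell order against resting bids, best price first.
    bids: list of [price, size], sorted best-first; mutated in place.
    Returns (fills, last_trade_price)."""
    fills, last_price = [], None
    for level in bids:
        price, size = level
        if qty == 0 or price < limit:
            break
        take = min(qty, size)
        fills.append((price, take))
        level[1] = size - take
        qty -= take
        last_price = price
    return fills, last_price

# Resting buy orders: 1 share bid at $99, 5 at $98, 4 at $97 (as above).
bids = [[99.0, 1], [98.0, 5], [97.0, 4]]
shares_outstanding = 400_000_000  # assumed, purely for the market-cap line

fills, last = execute_sell(bids, qty=10, limit=90.0)
print(fills)                       # [(99.0, 1), (98.0, 5), (97.0, 4)]
print("new 'stock price':", last)  # 97.0 -- down 3% from $100...
print("market cap:", last * shares_outstanding)  # ...yet nobody re-valued the firm
```

The headline's "wiped out" billions are just that last line moving; no cash actually changed hands at the new price except for the ten shares.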

Expand full comment

>A price of $100 means that all prospective buyers are offering less than $100, and all sellers are asking for more than $100.

That doesn't seem correct to me. The latest price could be $100 while there is supply at $80 and demand at $60. In that case, an increase in demand could lead to what would look like a 20% price drop.

Expand full comment

There is a "book" of orders, so you can see[1] the people waiting to sell at $100.01, $100.02, $100.03. There are also people willing to buy at $99.99, $99.98, $99.97.

It's a dynamic equilibrium, the same as you get when you have a solution of salt-water.

[1] There has been a lot of controversy over who can see this book, and how it's manipulated, and a lot of that controversy (but not necessarily all) is from people who don't know what's going on. Just to try and head off any fight about it.

Expand full comment

Sorry, I don't get your point. In my example, the sell orders start at $80 while the buy orders start at $60.

Expand full comment

For serious companies this is extremely rare, from what I understand.

Expand full comment

Those particular numbers would obviously reflect a highly illiquid situation and were selected for simplicity rather than for representativeness (though I do trade in derivatives where a spread that size is common). But the basic situation I was illustrating, of the buy and sell price being on the same side of the latest price, is far from unusual.

Expand full comment

The standard advice (assuming you insist on picking individual stocks rather than just buying index funds) is to trade stocks based on asymmetrical information. That is, pick stocks to invest in based on information you have that fund managers, investment bankers, and other professional stock pickers lack.

The most obvious form of this is insider trading, which is illegal and thus not generally recommended. But there are ways to get asymmetrical information that aren't illegal, such as technical knowledge and personal experience with the company, its products, or its executives.

Expand full comment

It always amazes me the number of people who don't understand that the only way to make big money on stocks is to buy when everyone else is selling (price is low), and sell when everyone else is buying (price is high). Which means you are obliged to bet *against* the market's judgment. If you don't have special information that makes that a good bet, then doing so is sort of naturally silly.

Expand full comment

I like to compare it to sports betting: the way to make money is to be right when everyone else is wrong. Unfortunately, you can't make everyone else be wrong. (Or, if you can, there's a fair chance it's illegal.)

Expand full comment

Well, and even if you can be sure you're right when everyone else is wrong once, the only way to do it over and over again is if you're some kind of Super Genius or the insider information never gets discovered by outsiders. The (very remote) possibility of the first is what keeps authors of financial newsletters in clover, and the second seems essentially impossible unless you're *creating* the insider information in the first place, meaning we really are talking about illegal insider trading.

Expand full comment

I'm not saying this is common or advisable, but I know someone who got profitable insider information by working a suicide hotline.

Expand full comment
founding

Ultimately, every stock is priced by its dividend value, or the buyout or buyback offer if the company goes that route. And in the case of a mature blue-chip stock, that really can be as simple as "the company pays a fairly steady $X/quarter, consistently growing at Y% per year, thus NPV of $Z/share". People do buy stocks on that basis.

The farther a company is from settling down into a steady dividend-paying blue-chip stalwart (or buyout candidate or whatever), the more its valuation shifts from "what the dividends are worth" to "what people think the dividends will be worth" to "what people think other people will think the dividends will be worth", with a side order of what people think other people will be able to afford, because changes elsewhere in the market will have left them with more or less cash available.

And that part can get rather divorced from ultimate reality, or even from rationality. But a large part of both theoretical and practical economics is trying to make human irrationality at least somewhat predictable.
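For the blue-chip case, the "steady $X/quarter, growing Y%/year, thus NPV of $Z/share" arithmetic is the standard Gordon growth formula. A minimal sketch, with every number invented for illustration:

```python
# Gordon growth model: NPV of a dividend stream that grows forever at `growth`,
# discounted at `discount`. Only meaningful when discount > growth.

def gordon_growth_price(annual_dividend, growth, discount):
    assert discount > growth, "the infinite sum diverges otherwise"
    return annual_dividend * (1 + growth) / (discount - growth)

quarterly_dividend = 0.50   # assumed: $0.50/quarter
g, r = 0.04, 0.08           # assumed: 4% dividend growth, 8% required return

price = gordon_growth_price(4 * quarterly_dividend, g, r)
print(f"fair value: ${price:.2f}/share")  # $52.00
```

The closer a company is to that steady state, the more of its observed price this one formula explains.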

Expand full comment

The price-to-earnings ratio is the baseline indicator for this phenomenon. It's simply the ratio between the current price of the stock and the earnings (i.e. net profit) of the underlying company. When it's low, the value of the stock is derived primarily from the demonstrated earning power of the company. When it's high, the value of the stock is derived primarily from speculation on the company's future growth. I like Robert Shiller's adjusted P/E ratio which averages out earnings over a trailing ten-year window.

https://www.multpl.com/shiller-pe
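A minimal sketch of the two ratios, with an invented price and earnings history (note the real Shiller version also adjusts each year's earnings for inflation before averaging):

```python
price = 150.0                 # assumed current share price
eps_last_year = 6.0           # assumed earnings per share, trailing year
eps_last_decade = [4.0, 4.5, 3.0, 5.0, 5.5, 6.0, 2.5, 6.5, 7.0, 6.0]  # assumed

trailing_pe = price / eps_last_year
cape_style = price / (sum(eps_last_decade) / len(eps_last_decade))
print(f"trailing P/E: {trailing_pe:.1f}")   # 25.0
print(f"10yr-avg P/E: {cape_style:.1f}")    # 30.0 -- smooths out the bad years
```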

Expand full comment

Plenty of people buy stocks for the dividends. I do. It's usually a much more reliable stream of income than hoping the stock will be higher by the right amount when you want to sell, and the tax hit comes continuously instead of all at once. Trying to predict that XYZ is going to be the next TSLA or FB and get in on the ground floor is a mug's game, and the best way to make $1 million on Wall Street starting from $10 million.

Expand full comment

>Trying to predict that XYZ is going to be the next TSLA or FB and get in the ground floor is a mug's game,

That also correlates, eventually, to the dividends; the reason the price of shares of such a company will go up is that it will earn larger profits and can pay out bigger dividends.

Expand full comment

Maybe. Apple very famously did not pay dividends for a long time, because Jobs despised doing so, and it still doesn't have much of a payout ratio. People tend to hold AAPL for reasons distinct from, say, MSFT, which is historically one of the dividend queens. There does exist the hypothesis that I will be able to sell this stock at a handsome premium when I need the cash, because the value of the investment will constantly and substantially rise.

And then there's the Greater Fool theory of investment, which drives meme stocks, where you have no idea whether the company is worth anything, or ever will be, but you think someone more enthusiastic than you will buy the stock at a hefty premium after X hours/days/months regardless of any underlying value change. This falls under my heading of a way to make a million in the market starting from 10 million. (Although you can make very good money selling advice on this stuff, or apps -- hello Robinhood -- that enable other people to try their luck at beating the county fair game.)

Expand full comment

>the market could all decide to sell the stock after a positive news release.

There's an old adage: "Buy the rumor, sell the news." It doesn't work, but it's sometimes invoked to explain why a stock dropped on positive news.

>Shouldn't I buy a stock because I think OTHERS will want to buy the stock in the future (and the price will go up, so that I can sell it for a profit)?

You've reinvented Keynes' Beauty Contest: https://en.wikipedia.org/wiki/Keynesian_beauty_contest

Expand full comment

In addition to dividends (which used to be more important and still are for some less huge companies), there are also buybacks, which in effect are a way to return cash to investors. Also, if a company is acquired, shareholders get paid.

Those are the major things tethering stock prices back to some fundamental reality.

There’s also other slightly valuable things you get from owning the stock, like the right to vote on proposals and so on.

I recommend reading “A Random Walk Down Wall Street” as a popular introduction that everyone likes.

Expand full comment

We’re also having an Orlando meetup tomorrow!

Expand full comment

What's the latest thinking on what causes COVID waves to end? We saw many instances over the past 1.5 years where waves just stopped in particular countries/regions even though far fewer than all or even most people had been exposed. Is it just a seasonal thing? If so, what explains the different month-by-month behavior from 2020 to 2021?

Expand full comment

There may not *be* a simple explanation. It seems to me the progress of an epidemic has a lot in common with population dynamics, and in some cases[1] those exhibit chaotic dynamics, where you get abrupt changes in population driven by tiny little changes in underlying parameters; and in the case of something as complex as COVID running amok in a human population, there must be a load of parameters that are shifting at least slightly all the time.

----------------

[1] e.g. https://www.youtube.com/watch?v=icz56724Txs

Expand full comment

I don't know about the latest thinking, but my thinking is that it just burns through a subpopulation until it burns out, and then the embers eventually lead to a flareup in a new subpopulation. Both distancing and immunity would have an effect on what constitutes a subpopulation, as well as random chance.

Expand full comment
founding

A combination of behavioral changes (people shift from crowded indoor socializing to more dispersed outdoor socializing, or get scared of the big scary Covid wave and go hide under the bed), and herd immunity at least among the particular subsets of the population that can't or won't change their behaviors (New Yorkers, spring breakers, meat-packing plant workers, etc).

Last winter's US wave probably ended because the US reached effective herd immunity for the general public as a whole against baseline COVID; now we're matching various groups' behavioral risk levels (including vaccine vs antivaxx) against the more infectious Delta.

Expand full comment

I was trying to imagine a model that might explain waves such as:

1st wave: the very most social people are most likely to get it, then that group develops significant immunity.

2nd wave: the second most social people are now most likely to get it because they have less group immunity than the most social people, and therefore now have the highest Socialness/Immunity ratio.

And so on.

I think this would imply that each group as defined by socialness would have to be a bit clumpy. Not sure how one would test this.

Expand full comment

I've seen research that people who are the most connected in their social networks tend to get infected the earliest. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2939797/)

I've been working on a similar theory but haven't gotten around to running any models yet. I can give you a bunch more links if you'd like.

Of course the problem is that there are so damned many confounders, but I could imagine testing the hypothesis by trying to correlate wave height with the social graph connectedness of given communities (after controlling for stringency index perhaps and maybe measures of personal space in the culture).

Here's an insight I had: if you imagine cases spreading from a few central hubs towards the periphery, then the number of new cases would be essentially an expanding multi-dimensional sphere with radius equal to the number of generations of spread. The surface area of the hypersphere would be the number of new cases per generation. This area would grow exponentially until it hit the "edge" of the social graph and then start dropping off again as it moved through sparser connections. Back-propagation is limited because people self-isolate when symptomatic, but maybe the next wave starts up when the prior wave starts hitting new hubs again.

Also new strains seem to drive new waves, and not just in covid.

Expand full comment

This model would be easy to simulate. Say:

Group A: people who come into contact with 100 people per week, population 1000

Group B: people who come into contact with 50 people per week, population 5000

Group C: people who come into contact with 20 people per week, population 10,000.

Group D: "" 2 people "", population 10,000

Simulate people from each group coming into contact with others within and outside their group with a frequency in proportion to each group's size and socialness. Start by infecting 1 person in Group A. Give some odds of each meeting resulting in a transmission, and give high immunity to those previously infected. (One could also test None of the Above's idea below.)

Does this simulation result in multiple waves over time?
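Here's a minimal deterministic sketch of that simulation, assuming proportionate mixing (the chance that any given contact is infectious scales with each group's share of the total contact volume), a made-up per-contact transmission probability, one week of infectiousness, and full immunity after recovery:

```python
# Groups: (weekly contacts, population), per the list above.
GROUPS = {"A": (100, 1_000), "B": (50, 5_000), "C": (20, 10_000), "D": (2, 10_000)}
P_TRANSMIT = 0.03  # assumed chance one contact with an infectious person infects

# state[g] = [susceptible, infectious, recovered], tracked as expected values;
# patient zero starts in Group A, the most social group.
state = {g: [n - (1 if g == "A" else 0), 1 if g == "A" else 0, 0]
         for g, (_, n) in GROUPS.items()}
total_volume = sum(c * n for c, n in GROUPS.values())

for week in range(1, 53):
    # Share of all "contact slots" currently occupied by infectious people.
    infectious_volume = sum(GROUPS[g][0] * state[g][1] for g in GROUPS)
    p_meet_infectious = infectious_volume / total_volume
    for g, (contacts, _) in GROUPS.items():
        s, i, r = state[g]
        p_infected = 1 - (1 - P_TRANSMIT * p_meet_infectious) ** contacts
        new = s * p_infected
        state[g] = [s - new, new, r + i]   # last week's infectious recover
    if week % 4 == 0:
        print(week, {g: round(state[g][1]) for g in GROUPS})
```

Run as written, the epidemic burns through Group A hardest but produces a single overall wave rather than several; to get repeat waves you seem to need something extra, like the waning immunity None of the Above suggests below, or behavior change.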

Expand full comment

(If not, p-hack until it does.)

Expand full comment

Suppose sterilizing immunity lasts just a few months, and then you just get resistance to severe disease. You could get waves from that--once most of the most connected people in the social graph have been exposed, further transmission tends to die down for a few months...but then that immunity wanes, and covid can spread again.

Expand full comment

No idea about the latest thinking, but at least in some cases (especially before the vaccination campaign) people changed their behavior, either due to restrictions or of their own accord.

Expand full comment

This is reasonable, but how do people on the ground actually know that there are more or fewer cases? The news seems to just always be screaming about outbreaks in cases, and I've tuned a lot of it out.

We also saw the same thing before there was good reporting of cases, like with the flu outbreak, right?

I guess you would know if someone in your network got sick, or had a close call. Is that how it happens?

Expand full comment

There's lots of possible ways, and it's probably a combination of all of them. Some people will know people who get sick. Someone will see the news, which does in fact get louder when there's a spike in cases - compare news coverage in August vs June. Some people will look at the numbers directly. Some people will look at governments imposing new restrictions (which is in turn a delayed reaction to the spike in cases) and take that as a sign that things are serious and they need to be more careful. Some people will have friends or acquaintances that become more cautious due to the above and go off of social proof. And so on and so on.

I think it is pretty clear though that in aggregate, there is in fact a control loop where people become more cautious when things get bad and vice versa.

Expand full comment

In Texas it seems clear that the news got really loud in July 2020 (and even the Governor finally issued a mask mandate). Then there were several months of background noise until the holidays. After that wave calmed down it’s been hard for me to tell how much the delta wave produced loud local discussion, but the wave does seem to have broken.

Expand full comment

People also changed behavior due to Delta, even with vaccinations.

Expand full comment

Claim 1: Tech companies are learning fast how to work with a 100% remote workforce (or nearly). This is already compressing the pay difference between the coasts and the middle of the US.

Claim 2: For most tech companies, it's more cost-effective to do contract SW development with cheaper offshore workforces. The main reasons companies avoid it are (a) IP protection, (b) management difficulty, and (c) regulatory concerns.

Conclusion: As US companies make remote workforces a fully first-class way of working, they will gradually, then all of a sudden, get rid of their expensive (100k/year) US developers and start directly hiring 20k/year developers in India, Eastern Europe, etc. SW development is going to be a bad career in the US starting in the next few years.

Expand full comment

This assumes that the majority of tech workers at any given company are software developers. You're ignoring hardware engineering (development & QA), product marketing, technical marketing, support, sales, biz dev, IT, etc. Many of these fields are not easily outsourced. For instance, India is not a center of hw development; China, Taiwan, and Korea are, but in the case of China we've got trade restrictions to work around. It takes time and money to move hw development offshore, so I don't see a sudden movement to hire EEs in Asia. From what I've seen of my company and of my customers, the majority of the QA teams are overseas, but the labs are in the US, and the management of QA teams happens in the US. Technical marketing and technical sales have to be in the same timezone as the majority of our customers (which is still Silicon Valley). We've got 24/7 support, and the majority of first-level support is in India, but 2nd and 3rd level support is in the same timezone where we have most of our customers.

So I don't see a huge overseas exodus of jobs from tech companies right now. True, a lot *could* be moved overseas, but there's a cost and a risk in executing such a move. Likewise, the overseas salaries of workers have been steadily rising.

For instance, we purchased a small tech company in a certain Eastern European country, call it Country X, in the early '00s. They were just about the only computer-type tech company in their capital city, with a team of less than 50. But because they were now part of a US tech company, we started to get the best graduates from their engineering schools at about 1/3 of what we'd pay US engineers (with the same talent and skill). But other companies began to notice that Country X was turning out some very good engineers, and they began opening offices in Country X. Now Country X's engineering schools are churning out more graduates than ever, but they can't keep up with the demand. So now it's a market that favors employees, and salaries in Country X are rising faster than inflation (2-3x). Our US HR is having trouble keeping up with changes in that job market. But after 15+ years of investing in the development of that site, we now have more than 500 employees in Country X; we can't just wave a magic wand and move it all to India or China.

Expand full comment

If we take your claims as given, then a more nuanced conclusion would be that starting in the next few years, SW dev will be a bad career for people who are below-average SW devs.

But, if the benefit to having cheaper devs is so great, then why are there so few top tech companies located outside of the US?

Expand full comment

We're too early into widespread remote work to judge the overall trend. March 2020 represents a singular break that forced most companies that could do so to go remote, but didn't change the makeup of the workforce itself. Specifically, I mean that good employees who knew their jobs were already doing the work, and could fairly easily transition to doing it in other locations. Relationships between management and employee, peer groups, teams, etc. were already established.

What we have not seen is what happens when key people leave the company in large enough numbers that the pre-established teams are no longer coherent. Hiring someone remotely to work remotely is a difficult process, and many employees do need a lot of mentoring and monitoring. That's especially true for mediocre young employees (the vast majority of people moving into any field are both young and mediocre). Companies working remotely are going to start seeing some massive issues on the time horizon of 5-10 years unless they can find ways to provide the same teambuilding and rigor they can get in person.

Expand full comment

What I remember is that there was an initial boost to productivity, as a lot of "useless crap" stopped interfering with workers doing their jobs remotely.

But a lot of that "useless crap" was things that were essential to team-building, cohesion, and training the new workers.

Remote is a wonderful benefit. I enjoy it. But being in an office has definite benefits.

Expand full comment

> hiring 20k/year developers in India, Eastern Europe, etc

Note that from my understanding that is the average programmer salary in Russia, and it would be higher in other Eastern European countries.

Someone capable of good communication in English and meeting the other requirements for remote work WILL demand more than the average Russian salary. Partially because they are above average, partially because they know what the situation is and can negotiate, partially to offset various risks.

Farther west, into Central Europe, you will see higher incomes.

30k/year (an actual wage cost to the company of about 60k-70k given taxes) is standard for someone locally employed on a stable contract by a USA company in Poland.

And programmers are not dumb; someone accepting work for a company from the USA will ask to be paid far more than for a local job.

Still, this move to remote jobs exists, is happening already, and will accelerate. Programming is an excellent choice of job in poor regions because remote working is viable. And local companies need to compete with remote companies, which shrinks the pool of workers, so even programmers who can only read English will benefit.

Source: working as a programmer in Poland, including remote work, including remote work for a USA company.

> all of a sudden get rid of their expensive (100k/year) US developers

I would rather bet that good developers in India and Eastern Europe will start earning 100k/year. From what I've heard, finding good programmers is hard and getting harder, and competition is increasing.

Note "good developer" part. It is not "every programmer". And for some levels of expertise especially it demand it happened already.

Expand full comment

> I would rather bet that good developers in India and Eastern Europe will start earning 100k/year. From what I've heard, finding good programmers is hard and getting harder, and competition is increasing.

Given the way that software written once can be replicated millions of times, there is a lot of room for developer salaries to grow.

The supermarket cannot double the wages of their workers without serious changes to what they charge.

A company making code that goes onto millions of devices could double the salaries of their software team without changing the end price.

I am NOT saying that they are price-insensitive -- all else equal, they'd rather pay less. But the business still functions as prices swell.

Expand full comment

You need to talk to an accountant if you're earning 30k USD/year and it's costing your employer 60-70k/year. Switching to a B2B contract would reduce the total cost of your wage to around 42k/year (assuming that by 30k USD/year you mean 10k PLN/month net) - and that is the most pessimistic scenario, which is basically unrealistic.

So either for some bizarre reason you've chosen the most ineffective way of taxing your wage, or you don't really understand the Polish tax system at all (e.g. you're treating VAT as a cost to your employer).

Source: due to the nature of my work I know a lot about Polish tax system.

Expand full comment

60-70k was for the case of employment via "umowa o pracę" (a standard employment contract); I know that it is inefficient, but some people are actually employed that way rather than via B2B. Let me know if I overestimated the costs in that case as well.

And I do need to talk to an accountant, though for different reasons.

Expand full comment

In the case of UoP (a labour-law contract), the total cost of a 30k USD/year net salary is around 53-54k USD - again, assuming the worst possible scenario (no spouse and kids or any other tax benefit).

I think the only IT people who are employed via UoP in Poland are foreigners from outside of the EU, who sometimes might need it for visa-related reasons (though even in that case it is usually possible to get a B2B contract), and maybe government employees.

Expand full comment

In this case: relatively old people in a large corporation (one of them used to be on B2B), with retirement within a few years.

> total cost of 30k USD/year net salary is around 53-54k USD

Thanks - just to confirm - is that the total cost paid by the employer, covering the taxes partially invisible to the person being employed (ZUS, NFZ, etc.)?

(The info I have is from someone actually running a company; maybe they included other per-employee costs like office space and I got confused/remembered it badly? Thanks for the correction!)

Expand full comment

Yes, I calculated it based on the total costs to the employer, the so-called "brutto brutto", not just "brutto". I suppose the higher number you were given might include some other benefits, like private medical insurance and so on, though it is unlikely that they would amount to over 10k USD, especially because they are at least partially deductible.

You can check my calculations with https://wynagrodzenia.pl/kalkulator-wynagrodzen/ (assuming you're able to speak some Polish or willing to use Google Translate). 30k USD/year net is roughly 120 000 PLN/year, which is 10k PLN/month net; the yearly cost to the employer is 214 064,77 PLN, which amounts to roughly 55.6k USD.

And just to be clear: salaries are a cost under Polish tax law, so if you pay your workers 100 units of money and sell your product for 200 units, then you're paying income tax on 200-100=100 units of money, which makes the real cost of employment a bit lower (this calculation is obviously really rough, but the point is to show the general rule, not all the nuances).

Expand full comment

Though I predicted something like this (remote work will further encourage remote hiring and move jobs out of the USA), and for obvious reasons I would be happy if it happened on an even larger scale than it has so far.

Expand full comment

From my perspective (within the tech industry), we get what we pay for with software devs. The quality has dropped along with the cost. And I've yet to see real innovation or product leadership moved successfully to a cheaper country.

Expand full comment

Do you work in software? Do you have any knowledge of how much better American developers are than their offshore counterparts? I do, and failing to mention it as a reason companies avoid offshoring is insane.

Expand full comment

There's a decent theory that a lot of software is something like O-ring development; you are trying to get as few errors as possible, and paying twice as much to reduce errors by 10% is a good trade-off.

https://en.wikipedia.org/wiki/O-ring_theory_of_economic_development

I know someone will complain that all their software is crap, and I'm not really interested in debating that.

But Apple is sitting on $200 billion (or was, before the pandemic), so saving $100K on a project may not make any sense.
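A minimal sketch of that O-ring arithmetic, with all the rates and costs invented for illustration:

```python
# O-ring logic: the project only pays off if every task succeeds, so expected
# output is the *product* of per-task success rates; small quality gains compound.

n_tasks = 20                            # assumed: tasks that must all go right
q_cheap, q_expensive = 0.95, 0.99       # assumed per-task success rates
cost_cheap, cost_expensive = 1.0, 2.0   # expensive devs cost twice as much per task
payoff_if_flawless = 100.0              # assumed project payoff, in cost units

def expected_profit(q, cost_per_task):
    return payoff_if_flawless * q ** n_tasks - n_tasks * cost_per_task

print(f"cheap team:     {expected_profit(q_cheap, cost_cheap):.1f}")          # 15.8
print(f"expensive team: {expected_profit(q_expensive, cost_expensive):.1f}")  # 41.8
```

0.95^20 is about 0.36 while 0.99^20 is about 0.82, so paying double here more than doubles the odds the whole thing works; that's the trade-off the theory points at.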

Expand full comment

Also, skilled developers in the rest of the world will do their best to become American developers.

Expand full comment

My experience has been that a substantial fraction (perhaps a majority) of top-tier US-based software developers are immigrants. And that full-time employees of big tech companies based in other countries are generally up to the standards of their US-based counterparts.

Of course, there's likely quite a bit of selection bias going on here, since I'm only seeing the devs who are good enough to get hired by big tech companies.

Expand full comment

One of the first things an Indian developer who is good enough to be a developer in the US does is try to move to America to get the native bonus.

Even Europeans want to move to America for the boost in salary.

I don't live in a top-5 city for software development, so I don't want to totally believe that location is prime, but I also live in America, so I want to believe location does matter a lot. A paradox.

Expand full comment

"Americans" was not a dogwhistle for whites, immigrant developers are better too. I attribute it to a mix of selection effects (naturally the top tier will have an easier time migrating), better English, and something like cultural assimilation to American coding/project-management styles.

Expand full comment

One key factor is that the majority of immigrant tech workers are here on H1B visas. These visas are given out by lottery, and only about 1/3 of entrants get the chance to apply for one. That means that for every newly hired immigrant tech worker, there are two tech workers that US companies want to hire but can't, and I assume they are doing similar work in their home countries.

Expand full comment

But employers don't know which ones out of that 2/3 would have got in, had they been allowed to apply.

Expand full comment

Yes, they do. The potential employer has to file forms naming the employees they want during a single week out of the year, a potential immigrant can't enter the lottery without an employer willing to file for them.

Expand full comment

Typical complaints about remote programmers include time-zone incompatibility and poor English.

USA-based immigrants will have much better English than programmers from Eastern Europe or similar.

I am a remote programmer, and my English is poor and I know it. Recently I was hired to organize remote lectures anyway - presumably because getting a USA-based expert on this topic would cost $100k or more.

Expand full comment

I'm skeptical here.

Firstly, for many companies, the shift to remote work has been grudgingly accepted, and many are doing their best to insist on a return to in-person work.

I think it'll likely remain incredibly difficult to directly hire cohorts of people in foreign countries. Communication problems are huge, as well. This is probably a great time to run a business model of 'find smart people in some foreign country, build a local team out of them, and then sell your team as a package deal to local businesses.'

If this change does happen, I would expect it to first play out as 'many developers relocate elsewhere inside the US'. Only after that's been going on for a few years would I expect people to take the next logical step.

Expand full comment

> This is probably a great time to run a business model of 'find smart people in some foreign country, build a local team out of them, and then sell your team as a package deal to local businesses.'

Note that this is not something new. I know multiple people in Poland who work for something like that - from "all work contracted out from the USA, implemented in Poland", through "owner manages a team of people working in Poland", to "a section of the company was moved to Poland and the USA equivalent was fired".

Expand full comment

The local period of extra warmth and sunshine is coming to a close soon; that is, our Indian summer is almost over. The local news morning chat program took a stab at coming up with a name for it this morning, bringing in the meteorologist.

They proposed things like 'Fallsummer' and carefully avoided looking like they had any knowledge of the traditional term, as if it had been wiped from their memories. It was kind of painful to watch. They couldn't even mention the possibly offensive term in quotes and say something about coming up with a new name that wouldn't have the potential to offend anyone.

I know it’s possible that I’m just not adapting quickly enough but this seemed… I don’t know. Words really fail me here. To my admittedly old mind this seemed to have the same effect as a check written out to help re-elect that guy.

You know the one that made my head explode with four years of cruelty and ignorance.

Does this kind of walking-on-eggshells behavior really make sense?

Expand full comment

Why did they even bring it up? Does it really need a name?

Expand full comment

For what it is worth it has two names in Polish - https://pl.wikipedia.org/wiki/Babie_lato_(meteorologia)

(One that I am unable to translate - "grandmother summer" (??) - and "golden summer", probably due to the color of the leaves, which disappear more quickly in bad weather.)

Expand full comment

Funny, in Germany we call it "Altweibersommer", which seems to be close to the Polish "grandmother summer".

It literally translates to "old wenches' summer", with less of a negative connotation towards the wench.

Expand full comment

Yeah I don't think "babie lato" has a direct English translation, or at least not one that I know (I'm not a native Polish speaker). But "złota jesień" translates to "golden autumn".

Expand full comment

> But "złota jesień" translates to "golden autumn".

Definitely, I got confused here. Thanks for correcting!

Expand full comment

Well, actually, it probably should have a name. The meteorologist went down the path of defining it as a 'microseason'.

Do you live in the northern hemisphere?

Have a look at its Wikipedia page

https://en.m.wikipedia.org/wiki/Indian_summer

We don’t experience it in all locations every year but when it does happen everyone seems to think it’s something special.

Expand full comment

I've lived my whole life in the U.S., and I'm experiencing an Indian summer right now. But I've never had a reason to use the term. Temperatures go up and down all the time. Why should unusually warm weather in early fall have a special name? It's no more special than unusually warm weather at any other time, or unusually cool weather at any time.

Expand full comment

Some locations get it more systematically than others. I seem to recall that Northern California has a pretty strong one, because real summer triggers a wind pattern that cools off coastal areas, and when that is over, the coastal regions keep their local solar warmth. I seem to recall that the warmest days of the year in San Francisco and Berkeley were almost always in September.

Expand full comment

Same. This is my first encounter with the term.

Expand full comment

Possibly some kind of superstition according to which it's something inevitable rather than random chance. (In Italian there is even a phrase for "cold spell at the end of January/beginning of February".)

In reality, once you average out year-to-year fluctuations, basically anywhere in the northern temperate zone temperatures monotonically decrease from a smooth maximum sometime around July or August to a smooth minimum sometime around January or February and monotonically increase back to the maximum, with no indication of narrower peaks or troughs anywhere; in the southern temperate zone, vice versa -- see https://weatherspark.com/y/45062/Average-Weather-in-London-United-Kingdom-Year-Round https://weatherspark.com/y/23912/Average-Weather-in-New-York-City-New-York-United-States-Year-Round https://weatherspark.com/y/1705/Average-Weather-in-Los-Angeles-California-United-States-Year-Round https://weatherspark.com/y/14091/Average-Weather-in-Chicago-Illinois-United-States-Year-Round https://weatherspark.com/y/9247/Average-Weather-in-Houston-Texas-United-States-Year-Round

Expand full comment

I've used the term my entire adult life. Some years we get an Indian Summer and other years we don't, and I comment on it every year.

Indian Summer has a name for the same reason Summer has a name. It's a season, just not one of the four cardinal seasons.

Expand full comment

"It's a season" -- not in any reasonable sense of the term, see [my comment](https://astralcodexten.substack.com/p/open-thread-192-berlinparis-meetups/comment/31720809

Expand full comment

Okay. I find it special because it comes after mid-September, and by then I’m normally checking the condition of my snowblower. We got 7 inches last October.

So when we do get the ‘microseason’ it feels like a special reprieve before the coming winter. An extra week or so to put the top down on the convertible and soak up the last bit of easy living.

I see a lot of happy grins in my neighborhood when we catch this pleasant break.

Your mileage may vary.

Expand full comment

Maybe it's because I'm from the South. I'm near Cleveland now, but still not as cold as where you are.

Expand full comment

If it helps, I'm pretty sure I'd literally never heard that term before, or at least had no memory of it or what it means.

It's possible it's a cultural/generational thing, and they're not aware of the term either, or don't expect their audience to be.

Expand full comment

https://en.m.wikipedia.org/wiki/Indian_summer

It’s a pretty common term in the US. The chat people were close to 50 so I think it was a sensitivity issue.

Expand full comment

TIL that Indian summer was named after Native Americans rather than after India.

Expand full comment

That’s the issue. Minneapolis is pretty sensitive to any racial slur since George Floyd was killed.

Expand full comment

I don't watch teevee, but I suppose my local talking heads were spared embarrassment by the fact we seem to have skipped Indian Summer altogether and cut straight to full-blown Autumn.

Expand full comment

Do you watch *any* long format content? Series on Netflix/Amazon/Apple, or multiple pieces of content by YouTube channels?

Expand full comment

I stopped watching teevee 30 years ago. I am, apparently, one of the seven guys left in the country who still buys and collects DVDs (mostly pre-1960 films), and I watch a lot of machining and woodworking videos on YouTube.

Expand full comment

Huh. And no curiosity at all about Peak TV?

I ask because narrative entertainment, like technology, has evolved in the last 30 years. Not just the special effects, but the general creativity, writing, shooting, pacing, and acting are all spectacularly more competent than what was being done 30 years ago. The same way gymnasts keep nailing more and more difficult tricks, so do modern long-format narrative storytellers.

And the very best writers are all doing "teevee," because the TV business model values and rewards writers, while movies famously don't.

In any given year, there are 5-15 shows that are the equals of the year's greatest movies and books. Maybe their betters, because now that production is cheap and movies and TV look identical, movies are to short stories as shows are to novels.

And while there are of course great short stories, they're never *quite* as great as great novels.

Which is all to say: long format narrative storytelling is producing some of the greatest art of this era, and most critics agree much of it is far superior to television and movies of the past.

I think you're doing yourself a real disservice by assuming modern series are all "teevee."

Expand full comment

I join Carl Pham's dis-preference for long story arc TV shows. I just constitutionally cannot watch longform TV, even when I like it. For example, Jane the Virgin is one of my favorite things ever filmed, but I have never finished even season 3. No matter how much I like a show, I peter out watching it after a few sessions, and then I can never get back into it.

You ask about long-form books lower down. While I don't tend to read book series (I think because I don't prefer the genres that tend to produce series), when I do, I feel much more in control of when and how much I read at a time. I can read a chapter at a time, or stop in the middle. A TV show demands much more regularity of consumption, even when it is on-demand. I also don't feel bad about reading a book in the middle of the day the way I do watching a show.

I don't entirely like this preference. When I was a grad student it meant I couldn't participate in about 50% of grad student conversations because I wasn't watching Breaking Bad or whatever.

Expand full comment

I tend to dislike the long-story-arc TV shows. One of the things I like about spinning up a Futurama or Star Trek (TOS) episode is that I know it will all be wrapped up in 50 minutes, and I *don't* have to tune in next time, or invest another 3-15 x 50 minutes to know how things end up. Since the purpose of watching the tube is pure mindless relaxation, this is good. I'm annoyed if my throwing away 50 minutes on entertainment results in dissatisfaction unless I immediately commit *another* 50 (or 100, or 300) minutes to the same goal. Feels bait-and-switchy.

There's also the point that since these shows often have uncertain lifetimes, the story can in fact end up wandering around, because the writers don't know if they have to wrap it up in 3 shows or they get another year or two. Let us not even contemplate the staggering betrayal that was the latter fraction of the Battlestar Galactica remake. Those people should be taken out and shot.

Expand full comment

Interesting! Do you feel the same way about very long novels and/or novel series? It seems like they would have the same liabilities in terms of a time-sink.

Also, I absolutely feel your pain and betrayal on Battlestar Galactica - I was such a fan I listened to Ronald D Moore's episode commentary podcast! - but that show was more than a decade ago. Since then, there's been a steady trend (partly driven by the disastrous finales of BSG and LOST) towards series having a planned arc with a defined finale.

That's by no means universal (especially not on Netflix), but there are many shows these days that have deliberate endings, with maybe a door cracked for another season (but not open so wide it creates a draft).

There are also shows that started strong, but later seasons weren't as satisfying, and it's pretty easy to find advice on where to stop.

For example, I feel BSG is terrific if you pretend Season 4 Episode 10, "Revelations," is the finale.

Don't watch past season one of Stranger Things.

The first and second seasons of the U.S. House of Cards are electrifying and shouldn't be missed, and while the third season is also pretty good, there are diminishing returns after that point. Etc.

Expand full comment

I keep hearing about this, but frankly, I have a hard time finding these shows.

What exactly are the good 5-15 TV shows of 2020?

Expand full comment

The pandemic resulted in a lot of production delays, so 2020 is more limited than other years.

But looking at narrative fiction shows that released new episodes in 2020, which are both highly rated by critics and hugely enjoyed by me, personally, in no particular order:

What We Do in the Shadows

Better Things

Bojack Horseman

Mrs. America (miniseries)

The Good Place

The Crown

Tales from the Loop

The New Pope

Ted Lasso

Solar Opposites

Never Have I Ever

Rick & Morty

Harley Quinn (animated series)

(Just missed the list)

The Mandalorian (because not as strong as season one)

The Boys (ditto)

Of course, to some degree, personal taste is always going to be a factor. Some might notice that I haven't included a few of the most celebrated series of the year, and that's because I either hated them or wasn't interested in the genre/premise.

That said, don't take my word for it. Finding great media for yourself isn't hard.

Start with the Peabody Awards:

https://en.wikipedia.org/wiki/Peabody_Awards#Peabody_Awards_Archive

Then RottenTomatoes Best of each year: https://editorial.rottentomatoes.com/guide/best-tv-shows-of-2020/ , etc.

And if you are still motivated to keep looking, sites like Vulture, Esquire, and Rolling Stone compile best-of lists, too.

Stay away from Ranker and other lists generated by audience users.

Expand full comment

We had a beautiful patch of weather late September / early October locally this year, so the morning show people had to tiptoe around the nice weather.

I had a short exchange with Deiseach a couple threads ago wondering if this was still acceptable speech.

I guess it’s not anymore.

Expand full comment

Wired published an article entitled "Biohackers Encoded Malware in a Strand of DNA" [https://www.wired.com/story/malware-dna-hack/]

"In new research they plan to present at the USENIX Security conference on Thursday, a group of researchers from the University of Washington has shown for the first time that it’s possible to encode malicious software into physical strands of DNA, so that when a gene sequencer analyzes it the resulting data becomes a program that corrupts gene-sequencing software and takes control of the underlying computer. "

"When the researchers sent their carefully crafted attack to the DNA synthesis service Integrated DNA Technologies in the form of As, Ts, Gs, and Cs, they found that DNA has other physical restrictions too. For their DNA sample to remain stable, they had to maintain a certain ratio of Gs and Cs to As and Ts, because the natural stability of DNA depends on a regular proportion of A-T and G-C pairs. And while a buffer overflow often involves using the same strings of data repeatedly, doing so in this case caused the DNA strand to fold in on itself. All of that meant the group had to repeatedly rewrite their exploit code to find a form that could also survive as actual DNA, which the synthesis service would ultimately send them in a finger-sized plastic vial in the mail."

I'm not sure whether anyone here has the appropriate domain knowledge to answer this, but I'm very interested: could you create an organism with hacker-DNA that was viable? I think the worry would be that someone would create the DNA in a lab to try to hack, but what if they tried to turn it into a living organism? If a company was gene-sequencing all the animals in a zoo, could I go in and replace a zebra with my hacker-DNA zebra? I'm thinking it might not be possible for this hacker-DNA zebra to actually exist, but it would be cool if it did.

Expand full comment

>And when that FASTQ file is compressed with a common compression program known as fqzcomp—FASTQ files are often compressed because they can stretch to gigabytes of text—it hacks that compression software with its buffer overflow exploit

So the flaw is in compression software, not the actual gene sequencing.

Expand full comment

The issue is that the specific software running the DNA analysis has a security flaw.

There's been image-rendering software with security flaws, such that presenting it with certain image files could trigger an exploit.

Even "less" has had issues, so even viewing a file with a (supposedly) simple viewer can be risky against malicious inputs.

Expand full comment

>The issue is that the specific software running the DNA analysis has a security flaw.

The article notes (not in the intro though) that it's a security flaw that the "hackers" intentionally added. Not much of a hack in my opinion.

Expand full comment

You could do it in most organisms. My contribution here: you probably *couldn't* do it with a viable *virus*. Viruses are selected *very* heavily on genome size, to the point where some of them overlap their genes (via reading in opposite directions or ribosomal frameshifts) to get the maximum use out of a given amount of nucleic acid; embedding attack code alongside such tightly packed, high-quality genes is probably asking too much.

Expand full comment

This is like a lost Stephenson novel, I love it.

You could absolutely create an organism with hacker-DNA that was viable, there's plenty of non-coding DNA you can mess with.

Time to take the denial of service dog [https://www.youtube.com/watch?v=DMNSvHswljM] to a next level.

Expand full comment

I mean, it sounds more like a Greg Egan novel to me. In fact, he has written several short stories with almost exactly this premise, whereas I can't really remember anything Stephenson has done in this area.

Expand full comment

Okay, this is really cool in concept, but I have no idea what this would ever be used for in reality. If you've reached the point in the hacking attempt where you're trying to break into a network through the PCR machine, you're at the point where just paying a guy to unplug the firewall is simpler and easier.

Expand full comment

Maybe, but this is *cooler*

Expand full comment

How long before we see a polymorphic retrovirus that goes RNA -> DNA -> .exe -> DNA -> RNA?

Expand full comment

Yes, you could. Scientists have stored data in bacteria before. It's basically "junk DNA" - the organism doesn't use it to make proteins or anything; it just fills space in the DNA strand.

I don't think anyone's tried it on larger organisms with longer genomes, and it's possible there are reasons why that's harder (maybe you need to find the right spot to insert it so it doesn't break the function of other genes, for example), but in principle there's nothing stopping you from adding this exploit to a zebra's DNA.

Expand full comment

What I was thinking was that the hacker part of the DNA might be too long to just fit in as junk DNA. Or would it?

Expand full comment

One minor practical problem is that we don't have a way to edit all the cells of an organism at once, so you would need to edit the zebra when it's still an egg, implant it in a mother, wait for it to grow to adulthood, and then sneak your zebra into the zoo. It doesn't make it *impossible*, but it means you'll have to wait several years to pull off your zoo heist.

Expand full comment

Of course, the entire cunning plan will fall down on the minor point that all zebras have visibly unique markings, so the keepers are likely to spot something. Plus the resident zebras may reject the new one, which is probably going to require a vet rather than DNA testing. This heist is going to need some really good animal behaviourists and make-up artists.

Actually, it might be easier to just add in the extra zebra, so the zoo has to do a DNA check to see if they can find out where it came from... That would reduce the risks involved in trying to remove an animal with teeth and hooves from its home as well.

Still, it looks like we've got a plot for the next Ocean's # film. Or the start of a really cunning criminal enterprise...

Expand full comment

We can edit all the cells of a living organism (or a significant portion, at least) via viral vectors.

Expand full comment

Right. Right. Good point!

Expand full comment

For depression there’s dysthymia and for mania there’s hypomania. Is there an equivalent concept for psychosis? A lower, less severe grade of psychotic symptoms that might not even impair someone.

I’m aware of the concept of “prodrome” but that assumes people will progress to a full blown episode and also may have its own distinct features.

Expand full comment

Non-impairing psychotic symptoms are usually studied in the context of people who also have impairing psychotic symptoms; you see a symptomatic spectrum in psychosis, similar to anything else really, in these populations. Psychotic symptoms can exist acutely or chronically at a level that isn't impairing, both for folks who've had full-blown psychosis and for folks who haven't and who are psychologically well. "Sub-clinical", "sub-threshold", or "low-grade" psychosis is how I've seen psychosis that doesn't meet diagnostic thresholds discussed. There's awareness of the phenomenon and plenty of studies to read, but it isn't a diagnostic category or a singular shared term.

Expand full comment

https://en.wikipedia.org/wiki/Schizotypal_personality_disorder

"Schizotypal personality disorder is widely understood to be a "schizophrenia spectrum" disorder. Rates of schizotypal personality disorder are much higher in relatives of individuals with schizophrenia than in the relatives of people with other mental illnesses or in people without mental illness. Technically speaking, schizotypal personality disorder may also be considered an "extended phenotype" that helps geneticists track the familial or genetic transmission of the genes that are implicated in schizophrenia pathogenesis.[6] But there is also a genetic connection of STPD to mood disorders and depression in particular.[7]"

Expand full comment

Robert Sapolsky contends that the schizotypal personality is selected for at its observed frequency in human populations because schizotypal people are the founders & reformers of religious traditions.

Brief summary: https://www.youtube.com/watch?v=1YfWngyZ8HE

Full lecture: https://www.youtube.com/watch?v=4WwAQqWUkpI

Expand full comment

I have a linguistics/neuroscience question. Could a baby naturally assimilate a conlang like Ithkuil (https://www.youtube.com/watch?v=x_x_PQ85_0k), provided the parent was a fluent speaker? I’m interested because the language seems unlike any natural human language in the way it packages concepts and information. Has anyone ever tried or tested this?

Expand full comment

I've noticed that conlang constructors rarely consider how the baby-talk version of their language would work.

Expand full comment

Is this kind of what you're looking for? https://en.wikipedia.org/wiki/Native_Esperanto_speakers

Expand full comment

No. Esperanto grammar is kind of a simplified version of those of natural European languages, so the existence of native Esperanto speakers doesn't tell us anything that the existence of creoles doesn't already. Ithkuil is ... quite something else. I'd be surprised if there even could be native Lojban speakers, let alone Ithkuil.

Expand full comment

...then again, Arabic has hundreds of millions of native speakers and whenever I read anything about its grammar I'm like "you gotta be kidding me" so maybe my intuitions on which languages are or aren't learnable aren't that well-calibrated and had better be taken with a grain of salt.

Expand full comment

As a native Arabic speaker living in the US, I gotta say one thing: it's hard when you are not a native.

Expand full comment

when you are not a *child

Expand full comment

Lojban has an unusual conceptual syntax, sure, given all the predicate-logic stuff - but in terms of pure learnability it's quite simple. There are regular rules and regular phonetics and conjugations etc. The lexicon is pretty tiny too.

Expand full comment

Probably the closest natural experiments would be pidgins - cases where two different languages get mashed together. What is fascinating is that pidgins are created by adults trying to communicate across language boundaries, and they generally don't rise to the level of a "full" language. But the children of pidgin-speakers seem to regularly turn the pidgin into a real, full-blown language with its own distinct grammar and vocabulary. https://en.wikipedia.org/wiki/Pidgin

Another interesting example is Nicaraguan Sign Language https://en.wikipedia.org/wiki/Nicaraguan_Sign_Language. It's a language that spontaneously developed at a school for the deaf in Nicaragua where none of the students had ever been taught any pre-existing sign language. It very rapidly developed a full vocabulary and grammar, despite the deaf children having essentially no pre-existing language skills.

My guess, based on the above examples, is that if you tried to teach a conlang to a group of children, and didn't expose them to other languages, the children would transform the conlang as they nativized it. Some of the vocabulary and grammar of the original conlang would stick, but the parts that didn't "vibe" with the subtly specific ways real languages work would be jettisoned and replaced with something more natural.

Expand full comment

"if you tried to teach a conlang to a group of children, and didn't expose them to other languages, the children would transform the conlang as they nativized it"

This seems right to me. There's plenty of linguistic research (see Penny Eckert for example) on children as language innovators to back up the idea.

Expand full comment

Tangent, has anyone ever created a written form of a sign language? I assume they must have but I'm curious how well it went and what obstacles they ran into.

Expand full comment

I don't have any personal knowledge, but googling turned up this interesting article: https://disabilityinkidlit.com/2017/05/19/asl-writing-a-visual-language/

It sounds like there isn't a specific written form for sign language, and it's pretty common for sign language speakers to translate to English when writing. I did find some experimental scripts designed to directly express hand signs, but they seemed more like prototypes than things that people are actually using day-to-day.

Expand full comment

There was a linguist who taught his son Klingon. http://www.todayifoundout.com/index.php/2012/08/a-man-once-tried-to-raise-his-son-as-a-native-speaker-in-klingon/ It didn't go super well. The son figured out that no one else spoke Klingon, so he stopped speaking it himself by age 3 or so. So it seems it's not sufficient for just one parent to be a speaker.

Expand full comment

But I think the point of the question is: can an "artificial" language like Klingon be learned by a child? What you report seems to point to yes. I'd also point out that although "artificial", everything about Klingon grammar has been filched from some actual child-learnable language.

Expand full comment

I wouldn't endorse the claim that the son learned Klingon. I would say that he stopped learning Klingon the moment he actually had to commit real resources to it. Groups of kids have learned languages their parents didn't really speak all that well - most famously, Modern Hebrew. But you would need a whole village to even make the attempt.

Expand full comment

It's more specific than that, though. Artificial languages like Klingon can be learned, because they're about as complicated as natural languages. The language in the question is a ridiculously complicated one, more complicated than Klingon or Mandarin or anything else. No one, including the guy who invented it, speaks it fluently. That's why I doubt it can be learned.

Expand full comment

Maybe, but we should assume that a child can learn things that an adult cannot. From the description, it is still made up of the same categories of grammar features, just more of them. Certainly there are limits even to a child's mind, so it is conceivable that {I} is too complicated to be learned. Could we hypothesize that a computer programmed with the grammar, which "spoke" {I} by translating from some natural language into {I}, speaks it fluently? Then we could test whether a child exposed to the computer speaking could learn {I} - if we could decide what would count as the child's mastery. What would that mean? That the child could eventually translate from the natural language as well as the computer? That the computer would agree with the child's translations?

Expand full comment

I think you might be begging the question here by assuming a parent could be fluent. As I understand it, no one is fluent in the language, even though people have studied it as much as Klingon or whatever. So I doubt a kid could learn it, even though I don't see how you could test it.

Expand full comment

Yeah, that was kind of what I was going for. I wonder however whether that’s the result of people just not spending enough time on Ithkuil (i.e no other fluent speakers), or because there’s something intrinsic to that language that makes it unlearnable? If so, what is it?

Expand full comment

Look up "linguistic universal". While I don't buy the strongest versions of Chomsky's claims about universal grammar, I think a language flouting too many linguistic universals would end up much much harder to learn.

Expand full comment

It's just ridiculously complicated in a way that natural languages and most made-up languages aren't. I think this is the one Arika Okrent talked about in her book on made-up languages (though if I'm misremembering, then, uh, ignore the rest of this post). A sentence like "John and Sam moved the piano" isn't allowed because it's ambiguous. Did they move it together, or did John move the piano, and then Sam moved it? You have to specify grammatically. Every other language just accepts some ambiguities, but here you have to clarify every possible miscommunication. It's too complicated even for the guy who made it to speak fluently. Either we're not smart enough, or generative grammar doesn't work this precisely (depending on your views of language), but I doubt it's possible to get a fluent speaker.

Expand full comment

Generally in human language I would say excess precision is a waste of time. You almost never deliver some communication as a one-time-message. Normally you engage in a conversation, and what information you need to convey varies as the conversation progresses. Things you thought you needed to specify turn out to be unnecessary or pointless, things you thought would've been obvious aren't, and so on. You learn as the conversation goes on where you need to add specificity and you revise accordingly.

But doing it *ahead of time*, which a language like this insists upon, is a waste of brain processing power. It'd be like insisting on specifying the exact shade of paint on the bathroom door of the house you're saving up to buy in 10 years.

Expand full comment

It's hard to imagine languages more complex than Archi or even Estonian. It seems like it's accomplished by tying social rather than linguistic rules to the language. Something like "you have to grammatically specify more information than necessary" is a social rule, not a linguistic one. If a language started like that, I suspect it would quickly change to allow for ambiguity, at least in informal settings.

Expand full comment

If an adult could speak it, a child certainly could.

Expand full comment

Here is a really nitpicky comment/suggestion: for some reason, my eyes glaze over every time I see the sentence "Odd-numbered open threads will be no-politics, even-numbered threads will be politics-allowed."

I would find it much simpler to digest if each open thread just opened with the statement for that thread, instead of the general rule:

"This is an even-numbered thread, so politics is allowed."

or

"This is an odd-numbered thread, so no politics."

Expand full comment

But then Scott can't copy-paste the same line everywhere, which increases the risk of error and confusion. Maybe do the work yourself and write a simple browser plugin that fixes this for you?

Expand full comment

He already changes it to indicate which type of thread. My suggestion just simplifies it.

Expand full comment

100% agree.

Expand full comment

I'm a traditional horary astrologer looking to practice his art; drop me a line at FlexOnMaterialists@protonmail.com and I'll use the tools of traditional horary astrology to answer your question. This can be as simple as a yes/no query or as complex as untangling a difficult situation. I am as discreet as a courtesan.

Expand full comment

What happens if Elon's next kid gets born when Earth is in Aquarius?

Expand full comment

Note that it would fit https://astralcodexten.substack.com/p/classifieds-thread-102021 better

Also, this may be a poor place to look for people that gullible. At least I hope so.

Expand full comment

I'm not a subscriber, so can't post in classified threads.

Re: gullibility, I agree. But I've seen SV/startup types posting here--even attempting to recruit--and someone's got to stand up for the little guy.

Expand full comment

Out of curiosity, do you find a lot of overlap between this community and people interested in astrology? I would have predicted that the intersection is quite small, so I'd like to be corrected if I'm wrong.

Expand full comment

Not naturally, perhaps; what I do expect is a high degree of intellectual honesty, such that anyone who does decide to take me up on my offer will be forced to perform a serious update to their Bayesian priors when my divinations are proved true. (UpdateBayesianPriorsOnMaterialists being a PITA to type, I settled on a more pithy email--though I will note that kindness prevents me from doing any actual flexing in consultations, even on atheists or those otherwise deserving of being flexed on).

Expand full comment

That seems like sort of a weird heads-I-win-tails-you-lose proposition, though. Say I'm a skeptic, and I come to you for prognostication. If you are wrong, my priors stay intact, and your reputation (among my presumably equally skeptical community) suffers no (additional) damage. But if you are right, even of course accidentally, then indeed my priors require updating and your reputation among my community *improves*.

Hence the net expected benefit to your reputation is always positive, whether you are right for reasons or by accident. That would argue the best possible community in which to market Service X, if your primary motivation is improvement of reputation, is that which is a priori completely skeptical of the value of X.

Expand full comment

A fair point, if my concern was reputational in nature. But see above re: discretion. Pulling Ws one at a time (since I emphatically do not discuss these readings outside of the person who requests them) seems an inefficient way of gaining rep. It's also possible I am the first traditional (read: legitimate) astrologer many of you will have ever encountered; I know that's not saying much.

There is, of course, one obvious way you could test what I assume to be your own skeptical hypothesis...

Expand full comment

Well, it would be a curious entrepreneur indeed who had *no* concern for reputation, i.e. market appeal. But please don't take my comment as specifically critical, I was just struck by the thought that perhaps marketing to a highly skeptical community is in fact actually pretty smart, contrary to the naive hypothesis.

Expand full comment

Why thank you!

Expand full comment

Why can't you publicly post some predictions?

Expand full comment

That's my intent, once I find a suitable astrological election for launching a blog.

Expand full comment

Some thoughts on Cain and Abel and why Peter Thiel's Zero to One is a book of theology more than it is a book of business advice:

I.

The story of Cain and Abel in Genesis 4 is a story about competition leading to envy and violence. It’s a story about scarcity, be it real or perceived, leading to enmity. Yet the story might have gone a different way. God tells Cain—whose sacrifice is rejected:

“Why are you distressed,

And why is your face fallen?

Surely, if you do right,

There is uplift.

But if you do not do right

Sin couches at the door;

Its urge is toward you,

Yet you can be its master.”

Peter Thiel argues in Zero to One that the most successful people and companies are those who find a way to do something that nobody else can imitate, something with which nobody else can remotely compete. This imperative goes by different names: “Pursue a blue ocean strategy,” “Be a category king,” “Create a personal moat,” etc.

Thiel, influenced by literary theorist René Girard, offers what amounts to an injunction to transcend the Cain-and-Abel dynamic of fraternal competition. If you become so wholly you that nobody can imitate you, you won’t even be enviable. While the economic risk of competition in business (as in life) is that you will be forced to shrink your margins, the existential risk is deeper. People hate those who are similar to them, but not those who are incomparable.

At a strategic level, Cain and Abel are both condemned, so long as they are competing for divine love on the same axis. One is condemned to death, the other to murder.

But in God’s cryptic admonition to Cain, I hear a call for Cain not to compete, a call to walk away from the tournament for divine affection. It is a test that Cain fails, but one that we can hope, reading his cautionary tale, to pass in our own lives.

II.

To read Thiel’s Zero to One as a business self-help book is to make a category error, to treat it as one book amongst others, a strategy book that competes with other strategy books.

III.

Thiel says that monopolies pretend to be competitive while competitive companies pretend to be unique. The same is true of the book itself. It pretends to be another business book, but is actually a work of theology. Thiel is secularizing the Biblical insight that the human being is created in the divine image, that is, created to be a unique being. Cain fails to affirm his uniqueness and so looks to compare himself with Abel for validation. This basic sense of insecurity ensures a violent world. Many people and businesses can succeed in a narrow sense through imitation, but they fail to meet the human calling to be differentiated.

The subversive reading of Zero to One is that it’s not necessarily going to make you a good business founder, but it’s going to make you a more fulfilled person, win or lose, by taking you out of a tournament mindset.

IV.

Why do so many prodigies burn out? They can’t take the stress of competition. They don’t want to be Cain or Abel. What is the solution to the conundrum? To transform a desire to beat a competitor into a desire to find beauty in the game itself. Aesthetics moves us from asking “How can I win?” to “How can I appreciate?”

V.

The desire to win and to compete isn’t going away any time soon. Neither is the desire for validation through comparison. But the extent to which we can be singular is the extent to which we can free ourselves from the stress of the game. To the extent that our doing this inspires others to do so, we are elevating the human condition.

VI.

In one reading of Thiel’s book, the appeal of the author as successful entrepreneur and investor is just cover, just marketing for an argument that should not depend upon a popular, cultural conception of success. If it did, the book would be a self-contradiction. In fact, the loss is ours that we need the message “Be unique” to come from the mouth of a wealthy celebrity.

VII.

Pirkei Avot teaches that one can pursue a noble goal for the wrong motivation and eventually come to correct one’s initial motivation. Perhaps this is the undertow of Zero to One. The moral is not that you, too, can build the next Google or become the next Lady Gaga (Thiel’s examples). You can’t. But that in reading a book that pretends to tell you to become the next Google or Gaga, you can free yourself from needing to be anything other than what you want to be.

Paradoxically, this rejection of consensus may be the best chance you have of being the next Google. But morally and spiritually speaking, that’s beside the point. Come for the start-up advice, leave for the Kingdom of Heaven.

article link here for further reference: https://whatiscalledthinking.substack.com/p/peter-thiel-on-cain-and-abel

Expand full comment

I would assume a priori the point of Peter Thiel's book is to make Peter Thiel a crapton of money.

Expand full comment

That is... really thought-provoking, actually.

Expand full comment

I loved this. Thank you for sharing!

Expand full comment

I have a question.

We all know the principle behind bicycles is the gyroscopic force that keeps the machine and rider upright while the wheels are turning. However, sometimes when you are riding slowly the wheels are barely turning at all (when starting or stopping or fooling around at signals). You can still keep the bicycle upright in the same way you can balance on one foot: by making small instinctive shifts of your weight to counteract tipping forces.

So what I've been wondering is to what extent the bicycle is really being held up by gyroscopic forces and how much of it is the unconscious balancing of the rider? Has this been studied?

As far as I can tell, most of the time a unicycle's wheel is barely moving at all; that seems to be almost 100% rider's balance.

Expand full comment

It isn't gyroscopic forces, but IIRC there was a run of a few decades in which the scientific consensus was that it was gyroscopic forces (at least to the extent that there were scientists who cared about such things - of which there were about a half-dozen or so). That consensus eventually got supplanted by the good old-fashioned march of scientific progress.

Expand full comment

I can roll a coin down a hallway a pretty far distance. It stays up much longer than it ought to while it's spinning fast.

Expand full comment

It seems to play some part, but not much. Test bicycles have been built with counter-rotating wheels to counteract any gyroscopic forces, but they can still stay upright.

https://www.youtube.com/watch?v=oZAc5t2lkvo

Expand full comment

You bet there's a literature on this! https://physicstoday.scitation.org/doi/10.1063/1.2364246

Expand full comment

There's also this paper (http://bicycle.tudelft.nl/stablebicycle/StableBicyclev34Revised.pdf), which does experimental and mathematical work to test various hypotheses. Major findings:

1. Experimentally, neither the gyroscopic effects nor the castor effects are necessary.

2. They develop a mathematical model of a bicycle and prove a necessary (but not necessarily sufficient) condition: "To hold a self-stable bicycle in a right steady turn requires a left torque on the handlebars"
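
For reference, the linearized model used in that literature (the "benchmark bicycle" of Meijaard, Papadopoulos, Ruina, and Schwab; I'm quoting the form from memory, so treat the details as an assumption) couples the lean angle \varphi and steer angle \delta:

\mathbf{M}\ddot{q} + v\,\mathbf{C}_1\dot{q} + \left(g\,\mathbf{K}_0 + v^2\,\mathbf{K}_2\right)q = f, \qquad q = (\varphi, \delta)^{\mathsf{T}}

Self-stability at a forward speed v means every eigenvalue of this system has a negative real part; gyroscopic and trail (caster) effects enter only through the coefficient matrices, which is why any single effect can be cancelled without necessarily destroying stability.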

Expand full comment

The best parts of the literature (or at least the most fun parts) revolve around actually building bicycles that cancel out certain proposed stabilizing mechanisms (e.g. one with a counter-rotating wheel to eliminate gyroscopic forces) and then seeing how easy or hard they are to ride.

Expand full comment

Can a tandem counter-rotating gyroscope really be turned as easily as a non-rotating one? You still need to apply large torques to each individual wheel to produce a change-of-plane.

Expand full comment

As far as I know, gyroscopic force plays a negligible role (e.g., it doesn't make much of a difference how massive the wheels are; and why would the force prevent you from falling over but not hinder steering, etc.?)

Expand full comment

The gyroscopic force both hinders steering and makes a huge difference in stabilization as velocity increases. Just ride a bike real fast and check it empirically.
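
For a rough sense of that scaling (this is the standard rigid-body precession result; applying it to the front wheel in isolation is a simplification), the gyroscopic torque for perpendicular spin and lean axes is

\tau = I\,\omega\,\Omega, \qquad \omega = v/r

where I is the wheel's spin moment of inertia, \omega its spin rate, and \Omega the lean rate, so the steer torque induced by a given lean rate grows linearly with forward speed v.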

Expand full comment

(But the balancing is not done by shifting body weight, as on a stationary bike, which is much harder; it's done by steering.)

Expand full comment
Comment deleted
Expand full comment

This is the "caster angle" theory and experimental work shows that you can make a self-stable bike that has opposite caster. Neither gyroscopic forces nor caster angle alone are necessary for self-stability. http://bicycle.tudelft.nl/stablebicycle/StableBicyclev34Revised.pdf

Expand full comment
Comment removed
Expand full comment

Dr Dominion sounds like a supervillain name.

Expand full comment

Looks like Doc Abramelin has a rival, for when he sets up his horary astrology site!

Expand full comment

Hey Christy, there’s an old actor named Tom Selleck who gets paid to shill for some company that provides reverse mortgages to vulnerable old people. It’s a really shady operation.

I think he would be an ideal person for you to tell about this great opportunity of yours. Try to get him to give you a whole bunch of his money to pursue your swell opportunity. I’ll thank you for it, my friend.

Not sure exactly where he lives but you might be able to find a map to b-list stars that could point you in the right direction.

Remember now, that’s Tom Selleck: tall guy, good mustache, used to be kind of good-looking in a television-actor way.

Expand full comment

How can we report spammers here?

Expand full comment

And there I was thinking Dr Dominion was a rationalist who had devised a way to win the lottery, a la this: https://www.theatlantic.com/business/archive/2016/02/how-mit-students-gamed-the-lottery/470349/

Expand full comment
Comment deleted
Expand full comment

Huh.

I've actually walked right by her building in Salerno on the harbor, and it felt so much like a boat my brain just sent me a "Checks out captain, move along" signal, and I didn't notice that it looks like a fake sci-fi building.

Expand full comment
Comment deleted
Expand full comment

Are you the EA person who reached out recently? Or on their list at least? If not, I can forward their email to you (if you give me your email, that is).

Expand full comment
Comment deleted
Expand full comment

EA is Effective Altruism; sorry, I forget the acronym isn't standard knowledge. (There's a lot of overlap between the communities, and in particular they were reaching out to a list of people who went to an SSC meetup two years ago.) I just forwarded you the email.

Expand full comment
Comment deleted
Expand full comment

One of the studies that tracked this, I think, had people download an app that sent them alerts at various times asking what they were thinking right then, and it concluded that people who reported higher happiness/satisfaction were the people whose minds were more often on the thing they were in the midst of in that moment rather than somewhere else. You could roughly replicate this by setting a watch or phone to chime every hour and asking yourself where your thoughts are in the moment it goes off. Are they with the thing you're doing, or somewhere in the past or future? You could also see whether, over some days of doing this, the number of instances in which you're present increases.
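
If you'd rather log answers than keep a mental tally, here is a minimal Python sketch of that hourly self-sampling, run in a terminal; the one-hour interval, file name, and yes/no format are arbitrary choices of mine, not the study's protocol.

# Minimal experience-sampling logger: prompt once an hour, append the answer
# to a CSV, and tally the fraction of "present" answers so far.
import csv
import time
from datetime import datetime

LOG = "presence_log.csv"
INTERVAL_S = 60 * 60  # one hour between chimes

answers = []
while True:
    reply = input("Chime! Are your thoughts on what you're doing right now? [y/n] ")
    present = reply.strip().lower().startswith("y")
    answers.append(present)
    with open(LOG, "a", newline="") as f:
        csv.writer(f).writerow([datetime.now().isoformat(), present])
    print(f"Present {sum(answers)}/{len(answers)} times so far.")
    time.sleep(INTERVAL_S)

Over a few weeks you can compare the daily fraction of "present" answers to see whether it trends upward.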

Expand full comment

Another way to do this is not by the hour but by tagging short mindfulness practices to things you do routinely throughout the day -- waking, showering, brushing teeth, getting mail, walking the dog, eating lunch, making coffee, etc. So this is less passive testing and more proactively practicing mindfulness in those moments, then noticing whether, after a few weeks of doing that steadily, you see a shift in your outlook and degree of awareness generally throughout the day.

Expand full comment

Another way is to practice mindfulness in all the transition moments -- to more actively take pauses between one activity and another. This can be done as half a minute or a minute of sitting still before going on to the next thing. So after getting off a phone call, after finishing a meal, after completing a task of any substance. There are various structured forms one can use to drop into mindfulness in those moments.

If you sustain any of these practices over some weeks, the dozen or so individual short practices start to connect into an overall shift in continuous experience across the day and right into waking the next day. There's a teacher, Sayadaw U Tejaniya, who has written some about these more everyday forms of mindfulness, and his books might be of interest to you.

Expand full comment