No complaints about this particular post (I've not read it), but I always prefer that ACX (and most other blogs) not do guest posts. This applies to the book reviews too, which I never read. These are just spam to me. I'd rather a blog just link to posts it thinks readers might be interested in (in a links post or open thread, say) if it wants to share them.
Not directly, but they give me money for publishing a blog, I try to publish ~2-3 posts/week to justify this, and if someone else gives me one then I only have to write 1-2.
This feels like a good time to mention that I have a paid subscription because it's something like "investing in a world where Scott Alexander is doing well", mostly as gratitude for your old stuff; I don't perceive this as anything like buying a service from you. Of course not everyone is like this, but I thought I'd mention that some of us are.
If you just want to reward Scott for his past contributions and are not terribly interested in the ongoing service he provides by continuing this blog, wouldn’t it be better just to make a direct contribution to him and bypass the whole issue?
Your form of subscription is paying him to write, but with a time delay. If he wants more subscribers like you, Scott should write more, and then wait for them to be grateful.
But (as a paid subscriber), if you model us as just immediately archiving posts not written by you, the computation of whether it's worth it to continue subscribing is downstream of that deletion. So the guest posts don't bring any free money, but they might dupe you into thinking you're delivering a product of the quality you've pre-decided maximizes your audience retention utility.
There are too many “yous” in this comment for me to be completely clear about what you’re getting at (in the first instance, I think it refers to Scott; in the second, I think it refers to his subscribers), but either way Scott is relying on his credibility, so I don’t think it moves the needle much. As a further note, I think the OP was kind of needling Scott to defend his credibility, and I think he dodged that nicely.
> There are too many “yous” in this comment for me to be completely clear about what you’re getting at (in the first instance, I think it refers to Scott; in the second, I think it refers to his subscribers)
They all refer to Scott. The subscribers are referred to as "us". Subscribers obviously do not have a model of audience retention.
The post originally started with a paragraph explaining that the first part was mostly to cast appropriate doubt on my finding, and was unnecessary "if you want the prize without its price." Scott replaced that with his own disclaimer which is clearer/better, but didn't include that.
I'm not sure whether you're being serious here, but FWIW, I don't consider guest posts to be part of what I'm paying for. I'm perfectly happy that the content you produce is good value for what I pay, but it's only that content I'm paying for.
For what it's worth (likely not much), I have explicitly avoided buying a paid subscription because you (and guest posters) write too much for me to keep up with everything.
I try to read all of it, because every once in a while there's a post which ends up permanently forming a fundamental piece of my understanding of something important. (The Nietzsche posts are the most recent example of this.) But when you say "nothing too important is ever going to be behind a paywall" I choose to happily take your word for it.
(Case in point: I have not yet read the post I am currently commenting on. I was scrolling to the comments to see whether this was worth reading in full. I'll probably get to it at some point.)
As others have mentioned - I would not unsubscribe or anything with fewer guest posts/book reviews. I would be perfectly happy with 12 Scott posts a year.
I too wish posts like this were limited to a roundup or separate feed. Like - the book reviews are a lot, but I’m here for when you talk culture wars and politics and medicine and FDA and parenting, and this post and many book reviews… well, they aren’t written by you! There’s no micro humor or charts or research. Just like Reddit and lesswrong posts where I feel like it’s too neurotic and there isn’t a grounding in feelings, emotions, and what normies actually do/think. Scott - you somehow manage to be both technical and to relate things back to practical real life.
I would rather you post works from Lorien as you write them - the posts about ADHD, depression, etc. would have great discussion.
And I know it’s a bit parasocial of me, but I’d love low content posts as subscriber-only stuff. What did you do this week. What toys have your kids been enjoying. What’s on the radio. What car are you thinking about buying.
Everything its opposite! I'm a cat person and have no interest in what Jonah Goldberg thinks, but I used to enjoy encountering his twitter feed just for the relentless ten-dog-things-to-one-politics ratio (with the occasional cat, e.g. taking the dog's spot on the couch: "Be the better quadruped", hee hee). Plus he seemed to live in a cool autumnal landscape.
I haven't been able to view twitter in a long time though.
Yes!! I liked that! Overall your writing was interesting and I read it all. I didn’t like the bits about sequences/thinking because that’s not why I’m here, but I’m sure a ton of people did!
I’ve tried to write like Scott in a few long Reddit posts about sunscreen/skincare/etc, but it’s just hard to get the cadence and everything so incredibly legible. There’s a reason Scott makes the big bucks.
I notice that you toned down your disrecommendation at the top considerably in the current version compared to the email you sent out. I didn't read the post based on that.
But, I came here to comment that I generally like the book reviews, and I'm open to the idea of guest posts, but #guest_posts_are_not_a_suicide_pact; I think that if readers have things to say that aren't of high quality and considerable interest to you, then maybe you should just encourage them to have a substack of their own, and then highlight it in links or open threads?
I know this is partly a joke, and I’m sure you agree with this next part based on other stuff you’ve written, but for scrollers: you are one guy in one particular place, and society overall thrives better if there’s some fostering of community and mentorship.
Edit, as I forgot to add the point: I am happy to see guest posts, as I see it as pro-social, and you do it the appropriate amount from my perspective.
I mean, someone who I assume is a friend of the blog or the rationalist community is going through a tough time, so Scott's boosting his writing. They don't have to be totally rational, they can be human and do something nice for a friend in trouble too.
I explicitly do enjoy the book reviews and this guest post, though I usually skip the open threads these days. To each their own. Now, if only there were an RSS feed for each tag (guest post, thread, regular post), we would all be happy.
I disagree. I like the guest posts and book reviews. Especially when Scott is distracted / busy and so quality slips. If you don’t like them just don’t read them 🤷
You're missing out, and I'd recommend at least reading the past book review contest winners. The Georgism (Progress and Poverty) and Egan (Educated Mind) book reviews have stuck with me ever since I read them, and are basically introductions to a particular scholar's entire theory of economics/taxation and education respectively.
Out of this year's book reviews, I'd recommend the one on prions (The Family that Couldn't Sleep). Of the ones I read so far, it's the one that would have gotten my vote if I'd managed to actually vote in time.
You can filter out emails with a certain keyword very easily. There are also ways to do this for RSS feeds (https://siftrss.com/ is one I'm using). Just remove any post with "guest post" in it and boom, you're done! I use this for various "current thing" topics like "Gaza".
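If you'd rather roll your own filter than rely on siftrss, a minimal sketch in Python using the feedparser library could look like the following; the feed URL and keyword list are placeholders I made up, not a tested recipe for this particular blog.

```python
# Minimal sketch of keyword-filtering an RSS feed yourself (instead of siftrss),
# using the third-party feedparser library. Feed URL and keywords are placeholders.
import feedparser

FEED_URL = "https://www.astralcodexten.com/feed"  # placeholder; substitute your feed
SKIP_KEYWORDS = ("guest post", "open thread")     # hypothetical keywords to drop

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    title = entry.get("title", "")
    if any(kw in title.lower() for kw in SKIP_KEYWORDS):
        continue  # skip posts whose title matches a filtered keyword
    print(title, entry.get("link", ""))
```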
I found this really interesting and with useful insights on a problem I’ve thought about a lot in the past. Thank you for taking a chance and posting it
I agree completely with this, and correspondingly disagree with Fujimura's comment above -- I often read the book reviews (you can almost tell after the first couple of sentences whether you want to go on, which I often do!), and am alerted to things I would otherwise never have considered. Anyway, bravo to Böttger for this post; it's fantastic!
My heart goes out to Böttger - his experience with cancer was worse than mine and mine was pretty bad - but he's missing the last third of his story. He isn't "out of it yet" so he can't see his experience from the outside and judge whether the frame he put around it was a good one. As of yet, it's still a howl at the void. Recover, Daniel. Get well.
Glad to hear it! You told us your ex-father-in-law’s health status by the time of the writing of your essay, but not your own. There’s a lot of space between “fully recovered” and “capable of pressing ‘send’ on an email.”
I'm in radiotherapy, chemotherapy will come next, because that's what you do with stage 3 astrocytoma. Mental functioning is intermittent (pain-dependent), physical functioning is weak but stable. Full recovery in 2024 would be (falsely) considered a miracle.
Thank you for your responses. The fact that you’re alive to type them is, if not a miracle, then a desirable outcome of low probability. If you wished for full recovery in 2024, you’d be foolish and impatient, but you have more than just two and a half months. It’s very likely that you’ll be better in January 2025 than you are now, and even better in October 2025. At some point, hard as it may be to imagine it now, you will have recovered. You can do it.
Let's face it, you're only "out of it" when you're dead. Treatment is only a temporary relief; our bodies continue to rot until they inevitably decay into nothingness. There's no light at the end of the tunnel.
You say 'rot' like it's a bad thing. As someone who is outdoors a lot, I see rot as being a teeming - seething - universe of its own, busy and rich with life of a different kind. Decay does not result in nothingness, but in rebirth. (Just not the rebirth of the original organism!)
Why not? It's interesting, and (as I said before) quite dynamic; there's lots to observe and think about. Is it ideal? - probably not. But it's better than the alternative, and that's got substantial value.
It looks like the metaphor of space-efficient vs. time-efficient could be extended to high-bias vs. high-variance models in machine learning. If you constrain a model a lot (for instance via regularization or even simply by keeping it small) it won’t overfit, meaning it will be more reliable. This can make it reliably wrong (biased) sometimes, but that’s a price you may be ready to pay given the use case. In medical use cases, the work of people like Cynthia Rudin showed that indeed a very simple and interpretable model such as a scoring system can be learned optimally and can save lives. The nurses are like that. To value complexity you need higher capacity (in a way this is thinking fast and slow all over again) and, crucially, trust.
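To make that concrete, here is a minimal sketch of the kind of integer point-based scoring system that line of work advocates; the features, point values, and threshold below are invented for illustration, not taken from any real clinical model or from Rudin's actual learning method.

```python
# Illustrative sketch only: a hand-set, Rudin-style integer scoring system with
# made-up features and points -- NOT a real clinical tool.
SCORECARD = {               # feature -> points if present
    "age_over_75": 2,
    "prior_stroke": 3,
    "irregular_heartbeat": 2,
    "on_blood_thinners": -1,
}
THRESHOLD = 4               # hypothetical cut-off: escalate if score >= 4

def risk_score(patient: dict) -> int:
    """Sum integer points for each feature the patient has."""
    return sum(pts for feat, pts in SCORECARD.items() if patient.get(feat))

patient = {"age_over_75": True, "irregular_heartbeat": True}
score = risk_score(patient)
print(score, "escalate" if score >= THRESHOLD else "routine")
```

The point is that the whole model fits on an index card: it is high-bias and will sometimes be reliably wrong, but anyone can audit it and apply it under pressure.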
Nurses are the example here, but this applies to bureaucracies creaking under the load of clients, resorting to form-filling because it would be bad to spend too much time on one "real" conversation while another client with a possibly more important problem is waiting.
I think you may be missing the simpler explanation: bureaucracies don’t care about people. They’re optimizing for efficiency because it can be measured on a spreadsheet and requires zero courage or willingness, on the part of the institution, to make a sacrifice.
The alternative explanations I thought of are that doctors and nurses (or any expert) will disregard any non-expert suggestions, information and opinions, for several other reasons beyond optimizing for survival:
1) Even though the suggestion might be good, it takes too much time and effort to decide whether it's good. The system runs on trust - trust that other experts know what they are doing - without having to verify each instance. This means false positives (accepting expert suggestions that are bad) and misses (rejecting non-expert suggestions that are good). This is not necessarily because of optimizing for space over time (a complicated and lengthy suggestion by another doctor is more likely to be taken seriously than a concise and simple one by a patient), just efficiency in general.
2) Experts take offense at being second-guessed by non-experts. Protecting their status takes precedence over helping the patient. They might not be fully aware of this. Patients are so low on the pecking order they aren't even on it - they are objects, not subjects or God-forbid peers.
I have seen this attitude in some of the surgeons that I've worked with, but among private-practice physicians with a focus on patient care over surgery, those who act this way are the rare exception in my experience rather than the norm.
I'm editing to specifically underline the word "some" above. Plenty of the surgeons that I have worked with exhibit more focus on the patient.
I mostly have experience with public healthcare in my country. The attitude describes 5/6 doctors I've had. I've also tried a private clinic in another country, where the doctor did listen to suggestions, but seemed to veer towards the other extreme of doing whatever it took to make me a satisfied customer rather than giving a good diagnosis and treatment.
In general it seems to me that healthcare is focused on treating the 20% of easily solvable ailments that account for 80% of the cases, and doesn't really do much about rare, severe or chronic diseases. Hopefully in the near future we'll look back at current times and wonder at how primitive it all was.
The thing about US care is that we're business oriented thanks to our private ins system. Docs of all stripes are incentivized to at least resemble a caring physician. Our two systems are likely more aligned in attitude at the hospital v. hospital level.
It also means that the 80% of the caseload that is common problems is easily managed en masse. At least in my field, ophthalmology, that includes an array of long-term illness, especially glaucoma and dry eye syndrome, as well as more niche but still common enough concerns like high-risk medications (e.g., hydroxychloroquine).
So, we refer out at the primary eye doc level when someone needs more advanced care, especially severe glaucomas and low vision services.
When doctors on duty talk to each other, they're VERY terse, in my experience. When they have a lot of information to pass along, they hand each other printouts.
I was not second-guessing them so I don't think your explanation applies to my experience.
Right, I meant more that any suggestions tend to be interpreted as second-guessing by some doctors, at least in my experience. Anyway, interesting read! I've never been through anything like this, but still felt like I could relate. Wish you all the best!
I have a brain tumour too and my neurosurgeon seemed like he just wanted to get his job done and move on but everyone else I have dealt with (oncologists, neurologists, nurses and MRI operators — even the bureaucracy of the NHS) have treated me as though I am the most important person in the world. I feel like they are all my friends and, in different circumstances, would certainly share a beer with me at the pub. They always respected my opinions and wishes.
Agreed. MS patient in Switzerland (which has a system with a highly standardized insurance mandate but some competition in care provision that in practice ends up far closer to the US than to the NHS), and while I have a lot of beef with how long it took people to catch it, after the diagnosis everyone has been like that.
Even the health insurance, which covers a shitload of costs every year (with no obvious path aside from it increasing further), is nice and helpful!
Is *everyone* perfect? Of course not; on one occasion I had an occupational therapist whom I neither liked nor thought useful in the least, and that was quickly resolved (by nuking the undertaking, which was exactly what I thought should happen).
Breast cancer treatment here in the US and I say the same. My surgeon was easily worn out by conversation, but every other provider -- oncologist, radiation oncologists and technicians, radiologists, and almost every single last nurse have treated me with the utmost attention and care, down to the very last detail. This at a small fairly rural non-fancy hospital with a tiny cancer care center. And despite Covid pressures and burnout and serious understaffing.
The gratitude I genuinely felt and continue to feel was an incredible existential analgesic for me all the way through. Their care and my gratitude for it felt like direct pain medication for the suffering my fear caused.
I can imagine all kinds of ways I could have produced friction for them and me. To step into that level of urgent, highly coordinated medical care is to travel to another country where you don't know the rules, don't speak the language, and your norms and ideas aren't theirs.
It helped me quite a lot to surrender to all of that and trust that they were going to carry me where I needed to go. I spoke up here and there about a few things, but otherwise the collaboration I had in mind was to participate in their system and receive its benefits. It's not the kind of collaboration I'd seek with a single other person, like a therapist or a partner or friend. I think it helped me to see what a stellar job they were all doing under such terrible circumstances. And it probably helped that I'm also a healthcare provider so I have a lot of compassion for the demands of their jobs.
I think some people do get terrible care. There are loads of people who are traumatized by their care. I was to some extent, but not through any fault of the providers in my case. I think some people have unrealistic expectations about what's possible in that space. And of course loads of people are just out of their minds in pain or terror and can't be expected to be in it any other kind of way.
That's simpler but wrong. Bureaucracies are the jobs of other people, and they're no exception to the fact that most people are honestly trying to do the right thing, under circumstances that you're failing to imagine.
This reminded me of reading Sam Kriss, in that I didn't understand it at all but get the feeling there must be something brilliant behind the words...probably? And there's a third tragedy in there despite only two being explicated, I think? Honestly just sort of baffled. (And worrying about my own "benign" tumor...cancer sucks.)
The various doctors I spoke to about my large tumour told me that the only way to deal with it mentally is to completely ignore it. They even strongly recommended against regular scans of it (both NHS and Private doctors so different incentives) as it would only cause worry from minor growth and any actual issue is going to be noticed first through symptoms anyway. I can't imagine the horror of finding out your benign tumour was actually malignant though.
Yeah, that's where I'm at..."well, we could schedule you with a neurologist" left hanging as an open invitation, and then subsequently worrying about random unexplained one-sided headaches in the supposed area, plus other desiderata. Maybe coincidence, maybe not...would I even notice such anomalies without the priming? Obviously I'd prefer to have a better wakeup call than crashing a car or something similarly dramatic. But all those Bayes lessons about the classic mammogram problem are very much an EY-changes-your-thinking thing. Some avoid doctors cause they're hypochondriacs, some do it cause they don't wanna wrestle with thorny statistics problems...having had a few relatives die from various cancers isn't reassuring either, even if they were often long-lived. Probably best to just not think of the pink elephant in the brain, which isn't steerable anyway.
Is this a UK thing? I feel like in America you would get an audience with a neurologist if you had a large tumor; but I don't use the medical system so may be confused about what a neurologist is/does.
In the US it often depends to an almost ridiculous degree on who your doctor is and who your insurer is. This is true regardless of how clear or unclear it is that scans and doctor visits are helpful vs watching for symptoms.
I also have a tumour (also NHS). I researched the hell out of mine. Three hours a night for six months. I've had maybe 10 MRIs. I appreciated every snippet of information and they gave me comfort — even when the news was bad. My oncologists and nurses encouraged me in my research and answered all my questions.
Before posting the comment I tweaked some wording and accidentally appear to have removed the word benign (as I was replying to a comment about a benign tumour it mustn't have tripped my final sanity check). But yes, the NHS were very fast and helpful at getting the scans done but once it got confirmed benign they shifted into (probably rightfully) telling me to stop worrying and ignore it unless symptoms show up.
I don't know. I thought it was an interesting exploration of religious questions in a very atheistic, rationalist way, sort of a 'what if a very religious person by nature didn't believe in God'. Though as I've said above, I don't hold the same view. But that's because I'm not the same person.
Sam Kriss...well, let's say he's very good at what he does, but I find him an irritating representative of the literati class with their typical prejudices ("haha let's mock working-class and middle-class old people because they voted for Trump"), and I'll leave it at my personal statement of disgust, which only asserts how I personally feel about something. :)
King's New Clothes syndrome. Always reminds me of attempting to read Wittgenstein's Tractatus Logico-Philosophicus many years ago. Now that we have Wikipedia and the interweb I must have another go.
Interesting that you mention Sam Kriss. Daniel's post is so honest and open. I feel that I'm reading the work of an adult. Sam's work, although brilliant, does not leave me with that impression.
Love this - thank you. As a scholar of religion (and having skin in the game in religious spheres), this tension is a fantastic predictor and pattern-matching tool. The more you value your system and feel it's uniquely beautiful and transcendent, the more you have a 'survive' mindset. This is linked to orthodoxy and traditionalist approaches, which I am favorable towards. "I want to keep this from collapsing because it's so great, and the way it is right now is the way it should be".
But as you value it, you also can tend towards thrive and take up a totally different approach. "But we can do this better!" "But this could help EVEN MORE PEOPLE!" "But let's update for the times, or culture, etc.!" The thrivers have a similarly good intention and can come to a totally different approach/conclusion with an equally valiant wish. And it's hard to empathize or understand sometimes, as it can feel 'competitive' or 'misaligned'.
So anyway- I just wanted to thank you for the essay. It helped me understand something more clearly that I have been processing for a bit and now is seemingly obvious. Wishing you the best :)
Oh, very cool. Yes I agree thrive/survive does work on the reformer/orthodox quarrels that are now ongoing in many religions because they're all suffering the huge onslaught of the Internet and sanity/atheism. There are attempts to explain this with political left/right but thrive/survive imports less baggage. I've done scholarly work and a few minor papers in the study of religion. (Main result: https://sevensecularsermons.org/why-atheists-need-ecstasy/ ) If your pursuit of thrive/survive into contemporary religious dynamics leads to a publication or something, I would love to be told.
I already had some fleeting not-quite-thoughts about this, and my gut reaction is that this is awesome. It feels like someone flipped the switch on, and I can see the hypothesis I was trying to test for. It is directly relevant for my work and I will be testing it abundantly. Thanks.
Likewise. I think about reciprocity - tit for tat - a lot, and the space-time tradeoff seems to shed light on the behavioral choices as much as the thought/reflex split. Gonna go read the Wikipedia article and then do some coding.
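For anyone else tempted to go straight to the coding part, a minimal tit-for-tat sketch might look like this; the payoff values are the usual textbook prisoner's-dilemma numbers and the opponent strategy is just an illustrative stand-in.

```python
# A quick sketch of tit-for-tat in an iterated prisoner's dilemma,
# with the standard textbook payoff matrix (values are illustrative).
PAYOFF = {  # (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []   # each entry: (my move, their move)
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(history_a), strategy_b(history_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append((a, b))
        history_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, always_defect))   # tit-for-tat only loses the first round
print(play(tit_for_tat, tit_for_tat))     # mutual cooperation throughout
```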
"My father-in-law was the front seat passenger. Same story with him: put into the CAT scanner to look for fractures, and although he never had seizures they found a brain tumor in him as well..."
This reminds me of the "hitchhiker you picked up" joke. He asked, “How do you know I’m not a serial killer?"....
This is a great concept! I love the application of computer science. I agree that survival oriented processes are fighting a doomed battle, but I don’t think you take this far enough: They’re trying to push the probability of death to zero, when, as any student of Eliezer knows, zero isn’t a probability.
The only solution I see is that the survival-oriented processes have to trust the thriving-oriented processes to run the show most of the time, except in immediate (i.e. short-duration) crises. If a “crisis” or “emergency” goes on for long, it will kill thriving. Yes, each process has to trust the other, but ultimately the thriving-oriented process has to be the captain of the ship. If the survival process doesn’t trust the thriving process to ultimately keep it safe, no amount of risk-mitigation will suffice.
From that point of view you can view the early stages of human history (first hundred thousand years or two) as being more in survival mode, from which we then gradually though unevenly have been emerging in the past couple of millennia, esp the last couple centuries. The thrivers are in charge, sort of, here and there, but are still pulled down by the overwhelming flywheel effect of the survivors, who still dominate. Who knows where we'll end up, but Böttger's theory could sure help in keeping us on the trajectory of the past couple centuries.
If thrivers were actually in charge, we’d have nuclear power plants everywhere and very little regulation. We wouldn’t have masked toddlers in response to Covid, or said people have to give up bodily autonomy to ward off the small threat posed by Covid. There wouldn’t be major politicians pushing for digital censorship. We’d be traveling the stars, not fighting over nonsense.
Thrivers ARE in charge locally, here and there; they run a lot of companies and other organizations, in some rich countries, though not usually the political systems of those countries. (The current president of Argentina could be classified as a thriver.) But as soon as we get into this discussion the question of classification becomes pretty vague. Take Elon Musk, often considered a paradigmatic thriver. But when he ventured to support the Ukrainians with Starlink satellites, the Chinese were pissed off and said look here, you've got a huge amount invested here and you're dependent on us, so behave, i.e. support our friend Vladimir, or we kick you out. So Musk changed his tune and started parroting Putin rhetoric, e.g. idiotic stuff about holding "referenda" in Donetsk and other occupied territories to decide whether they should join Russia. (And in American politics it's driven him all the way into his current hyperbolic rhetoric . . .) Of course -- he wants to keep what he has; he sees the democrats as a threat not only because of taxes (that's a very small part) but because he doesn't want to get kicked out of China. So is that survival or is it thriving? It's clear that this person was a high-profile thriver up to a certain point, and it seems pretty obvious that he's now turned into more of a survivor -- but one who still needs the reckless swashbuckling thriver image he'd cultivated so consistently before 2022. Hard to classify, and I think you run into the same problems of classification with many, many people. I agree with the commenter above who thought that the survivor-thriver antithesis is too one-dimensional and needs to be seen in a wider and more multidimensional context.
I think the whole distinction is a bit silly. You can be as thriver as you like at 9am, but at 9:05am something can happen that puts you (temporarily or permanently) into survivor mode.
Nobody is an always-thriver. If there's a distinction to be made it's between the sometimes-thrivers and the never-thrivers.
I try hard to walk through the world smiling, friendly, laughing, generous, happy. Given a moment between stimulus and response, I can usually be that guy. Under duress, not so good. I agree the distinction is granular; not silly, though.
We had this system, generally, during the beginning of the industrial revolution. It had horrible labor conditions and massive pollution problems. It may have led to faster "thriving" in some sense, a faster advancement, but it had tremendous downsides as well. We've perhaps gone too far in the other direction at this point, but to pretend putting "thrivers" in charge would lead to some harmonious existence is not supported by history. Many people would get stomped on underfoot to advance their goals.
The last 200 years have seen a monotonic increase in American life expectancy (possibly excepting wars, which some American data may not be granular enough to show). While England essentially forced its rural populations into cities, in America people went voluntarily into cities and factories. Every indication is that, at least for the span of the early Industrial Revolution, the benefits of rapid industrialization far outweighed the costs. Pollution was just not as bad as starvation, material deprivation, or losing the war. And farms of the Industrial Revolution, frankly, had terrible labor conditions as well.
That we can have a cleaner environment despite our oversized population is a direct benefit of our developed industrial capacity. If modern Americans lived 'off the land' in a 1700s manner we would quickly denude the landscape beyond the point of repair.
Perhaps we're at the point where marginal improvements are not as valuable so we can afford to rein in the investors. Or perhaps not. But during the industrial revolution improvement in the economic sense was absolutely a strong net positive, with long term residual benefits. I'm not sure if the investors should be credited as strivers or survivalists, granted. But whatever happened during that period improved society in the short and long term.
It might be too late if you wait for the emergency to arrive. For example, many European countries switched to full thrive mode just before the rise of Nazism, and so did not have time to prepare themselves when attacked.
Optimally you would need to keep some places and some people on constant survival mode, so there will be some help ready for an emergency.
There's a venerable German state theorist whose name escapes me, who says the true sovereign is the one who can declare a state of emergency. Maybe that has to be a survival-oriented person/institution.
I see it exactly reversed: Most processes value thriving at the expense of survival to a suicidal degree.
People will burn every tree on the island to smelt copper, then freeze to death in the winter 100% of the time unless the few survival oriented perspectives fight a brutal Battle-of-the-Somme trench crawl against the optimists forever, every day, because there is a limit to how good something can get on a set time scale but there is no limit to how bad something can get. Goodness/badness is a multiplier on a variable, and the multiplier for death is 0.
I think the OP dichotomy is between short-term survival and long[er]-term thriving. The issue, as you point out, is that both the post and most of society do not pay enough attention to long-term survival.
I was thinking more metaphorically, but thanks for the link. That bit always struck me as odd: It's easy to imagine overhunting to extinction, but you can get on a high place and use your eyes to see how close you are to tree = 0.
Surely there was at least one survival maxer to throw a fit around.
A riveting piece! The perennial conflict between Surviving and Thriving is both new to me and intuitively correct, and also something to incorporate in my worldbuilding.
However, Urgent communications are always terse, whether Surviving ("Hold the line!") or Thriving ("Charge the flank!"). So the first rule has to be: *Recognise when the situation is Urgent to the other parties.* That's not easy when medical professionals are projecting calm, and maybe impossible when you're doped on meds and distracted by pain.
I'm intrigued by Daniel's observation that Surviving conversations are terse even when Non-Urgent.
Rather than "space efficiency", I think this is about Survival being a complex but solved problem (except when it's suddenly not) - basically Chesterton's Fence.
If so, then people responsible for Survival are terse because the conversation is too big for the time and energy available (you can almost hear the mental sigh just thinking about it), and because they feel an instinctive need to maintain authority.
Spatial efficiency usually comes at the cost of temporal inefficiency. And there’s no way I’d describe survival oriented persons as being “patient, willing to spend long amounts of time”.
I think the (unstated) assumption is that the time-inefficient component of survival happens before the crisis starts. Then, in crisis, it relies on short, cached phrases.
In other words, doctors and nurses have years of medical training so they can communicate efficiently in the moment using standardized language.
Yes. So if you said to them, "Hey why don't you pack an extra beach towel?", their first response would be terse and unenthusiastic. They can't even remember *why* they've packed what they've packed, but they know it's optimum and really don't want to revisit it, and if they do it will be a whole process to decide to make room for the towel and then make that work.
Hmmm. I like the idea of understanding the difference between the survive and thrive mindsets, and the idea of understanding where others are coming from and being charitable to them as a way of fostering better cooperation. I'm not quite sure I'm convinced of the difference between survive and thrive mindsets mapping easily onto the difference between space-efficient and time-efficient algorithms, though. From my reading that seems pretty specific to your experience at the hospital, and really only one specific part of the hospital experience (their communication).
I feel like there are a lot of other things that go into the difference between the two mindsets (e.g. level of risk aversion, or in this case, different incentives: the hospital mostly doesn't want you to die, but cares very little about whether you're having a good time) that to me cannot be explained away as a difference of algorithms. I don't think that e/acc's issue with AI doomers is that doomers are communicating too curtly, and I don't see how the third heuristic will help the issue. I do, on the other hand, see how the second and fourth heuristics could be valuable, but those seem more about just being charitable than having anything to do with running different algorithms. Maybe I'm missing something here, though, and would be appreciative if anyone can explain what it is.
I think that e/acc's issue with AI doomers is EXACTLY that doomers are communicating too curtly. "Orthogonality", "instrumental convergence on power-grabbing" etc. get used as if they meant anything to people who have failed to Read the Sequences.
But they're all IT people, they understand Algorithms 101, this should help them get to the bottom of their failures to collaborate.
I'm not sure you've convinced me. I can see why using jargon/abbreviations/terseness would be frustrating for that, but I don't think translating that jargon into a message that's more time-efficient and less space-efficient would make any more than just a marginal difference. And it's not clear to me that survive mindset groups are any more likely to communicate tersely than thrive mindset groups (I've seen a lot of very very long blog posts from AI doomers), with the notable exception of a situation like a hospital where things are happening very urgently on a timescale of minutes or even seconds.
It still feels to me like the main thing separating these two groups is risk-aversion, priors about how likely AI is to kill us all, priors about how great AI will be if it goes well, etc. (This is my attempt to be charitable. There is part of me that thinks a lot of (but not all) e/accs are just trolls or people who don't understand tail risk or survivorship bias very well.)
Yes. I don't use "collaboration" and "communication" synonymously. Reading each other's long blog post isn't usually collaboration. Collaboration is when you try to solve a problem together, and that's where the communication styles diverge.
Thinking about this more: I don’t believe survival processes could be optimizing for spatial efficiency at the cost of temporal efficiency. There’s no way “take longer to do the processing” mitigates the risk of “bit flipping” from taking up more space, because in an emergency time is scarce, and reaction time can be life or death.
I think we should expect survival-oriented processes to just be computationally simpler, period. We should expect them to be error-prone on the side of over-estimating threats. Seeing a threat where it doesn’t exist inhibits thriving, but not seeing a threat could kill you.
The personal interaction rules you give are a good description of “how to deal with an emotional person”, as well as “how to interact with a computational process with very limited capacity”: send short messages, as few as possible, and be patient.
I think you underestimate the greediness of such a choice. Even if _total_ time is shorter, each individual _step_ of a fancy algorithm is often longer and/or more complex, hence more prone to disruption. (And yes, what counts as an individual step is fractal, yadda-yadda-yadda.) MergeSort's individual steps are efficient but more complex than "compare A to B" (rough toy comparison below).
The prone-to-overestimating part is, of course, true.
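A rough toy comparison of that point (a sketch, not a benchmark): bubble sort does more comparisons in total, but each step only touches two neighbours in place, while merge sort does fewer comparisons at the cost of allocating and copying sublists at every step.

```python
# Rough illustration: bubble sort's individual steps are trivial (compare two
# neighbours, maybe swap, O(1) extra space), while merge sort does less total
# work but each step is heavier (allocating and merging whole runs).
def bubble_sort(a):
    a = list(a)
    comparisons = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # only ever touches two neighbours
    return a, comparisons

def merge_sort(a):
    comparisons = 0
    def sort(xs):
        nonlocal comparisons
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])  # each step copies sublists
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            comparisons += 1
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        return merged + left[i:] + right[j:]
    return sort(list(a)), comparisons

data = [9, 3, 7, 1, 8, 2, 6, 4, 5, 0]
print(bubble_sort(data))  # more comparisons, but each step is tiny and in-place
print(merge_sort(data))   # fewer comparisons, but steps allocate extra memory
```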
I sympathize with his attempt to crystallize what must have been a phenomenologically intense experience, pain notwithstanding; like, it's *that type of experience* that's notoriously difficult to communicate.
Given that certain substances like LSD or ayahuasca can scramble our sense of salience (i.e. what feels important or significant), I usually take shifts in perceived importance with a grain of salt, especially from people who have a drastically different epistemic standard. But some of the readers here, including me, would be predisposed to extend extra trust to Böttger, given his background and past work.
For what it's worth, IMO the post managed to communicate a very real and very important thing, in particular the seven heuristics that probably will get quoted a lot in the future. It needs more elaboration/response by other people (or by himself) in order to gain a stable position in people's metaethics, but as far as memetics go, it could spread far and wide.
Some quotes that I like:
"Whenever these people think there is a minute for idle chat, that proceeds flawlessly. But the more urgent collaboration is, the more frequently it appears to fail"
Reminds me of the distinction between no-slack work vs. work that has slack built-in e.g. film industry.
"The brevity of their communication will feel hostile; interpret it charitably as an expression of urgency of concern."
This resonated a lot, although gotta say: My experience living in a non-western country, where scarcity is more common, is that brevity isn't just about urgency and concern but also reflects a fear of losing out, so it's also a kind of a power play—though that might sound cynical.
A very interesting point, and one I will try to use when analyzing situations in the future. A couple of other places where it seems relevant:
Teaching. The teacher is on survive mode, knowing how little time is left to cover the entire material, the students want an explanation of why they have to learn it for every single new algorithm.
Relationships, when one partner has more experience with relationships breaking up / divorces and so tries to optimize for survival, while the other partner is more afraid of the relationship going stale and so optimizes for thriving.
Probably will think of others soon. Also I for one really enjoy having guest posts here, I usually find new things to read mainly through links from blogs I already read. Will try to look into your sermons when I have time (also, a movie recommendation: https://www.imdb.com/title/tt0418455/ )
Having a brain tumor is a really horrible stroke of luck, and I'm really sorry for you. I'm hoping for a miracle.
Ironically, I get more of an Epicurean conclusion from this.
What this makes me think is that I'm glad I never really spent that much time on philosophy, and I feel better about all the time I spent trying to get laid (and...adjacent activities). You engage in all these intricate mental computations and constructions, ponder the meaning of life, and it turns out to be just a tumor.
(God, that SUCKS.)
Of course people enjoyed them and found them meaningful, so I guess that's something. But to me it further lowers my estimation of the probability of the Divine, or any kind of transcendent meaning to anything. That feeling you get? Yeah, it's just some brain circuitry firing a certain way or, as in this awful case, a tumor.
I just never *got* religion. I had zero interest in Zen or ego death or anything mystical. I understand the desire to save your soul from an eternity of torment in the afterlife, but if that isn't true...what's the point? It seems like it serves other people more than you. I think I just don't have whatever the temporal lobe wiring is for it. Or something.
Eat, sleep, work to pay your bills, f***k if you can (unless you're ace), save for retirement, raise kids as so many pronatalists want to so you have something left after you. (I whiffed on the last one, though I admit I was never really all that interested and shocked people in high school by admitting it to them.) The rest is commentary.
You're just a monkey with a bigger cortex. I am. We all are. Enjoy your bananas, the end is coming for you sooner or later.
But: having a brain tumor is a really horrible stroke of luck, and I'm really sorry for you. I'm hoping for a miracle. I'd pray for you, but if He's up there, I doubt He'd listen to me.
I came to the opposite conclusion: I have to meditate enough so that I can withstand the tsunami of suffering which life will at some point dump on me. I didn't focus on the shortness of life, but on the long suffering at the end.
Also, maybe I shouldn't have kids, so I always have an exit hatch. Although, to be honest, I would not blame my dad at all for taking his life under these circumstances (not sure of the exact details of OP's situation, sounds like the mother is no longer around which does make it worse).
Of course whatever works for someone right? We don't all need to be walking the same path.
Do you know My Stroke of Insight by Jill Bolte Taylor? A neuroscientist reflecting on her altered perceptions from a massive stroke and the insights it led her to that were life-changing for her.
To me a brain tumor or a stroke or psychedelics providing a dramatic shift of perspective doesn't undermine the credibility of the perspective. It says to me that the normal well defined circuits of our ordinary thinking lead to X kinds of awareness/knowledge and exceptional experiences can produce extremely interesting Y kinds of awareness/knowledge. Like the difference between research studies and poetry. Are they not both valuable?
He isn't up there. (Yet. God growth mindset!) Save your hope for likelier things.
I do continue to think that in a sense, we're also all the same universe that's wearing our faces as its masks, using many brains and networking them together through language to figure itself out. That's just philosophy, didn't go away when the seizures went away.
People often ask how the methods of rationality can help in our daily life. The first part of this essay was an eye-opening insight into how rationality can help, when it is all you have left.
Very much so. My fellow patients were busy with painful thoughts like "this is because I'm so stressed" or "I knew it would all go wrong if I did not quit my job" at exactly the worst possible time for such shit. The Methods of Rationality seemed indispensable for navigating this.
Like some commenters here, upon reading the point about the survival/thriving tension, it intuitively felt correct to me. It is a simple but insightful theory. Thanks for sharing, and I’m sorry you had to go through such pain to discover it. I am amazed you were able to break through the pain and share it.
The theory sheds some light on a tension I’m experiencing between two groups in my life. I am part of a local YIMBY movement and have had several long conversations with local NIMBYs. Often it feels like we are speaking different languages. I could not understand how some NIMBY concerns, like tall buildings decreasing the sun on their gardens + simply not liking tall buildings, could be in any way comparable to the huge exodus of people (many of them my friends) from my city because of the lack of housing.
Instead of just concluding these NIMBYs are garbage, Böttger’s theory gives a much kinder, much more actionable explanation for what’s going on. The YIMBYs are motivated by survival (i.e. people able to live in my city) whereas the NIMBYs have their housing and have now progressed to focusing on thriving.
Pattern Matching!
I don’t think the time-space part of the theory is correct, though. In my example the tension seems to be between (YIMBYs) the existence of housing and how to get to a minimum where people can even think of living in my city vs (NIMBYs) improving the quality of housing for those who have it. The idea of space efficiency doesn’t really fit in with ensuring increased housing supply, at least not to me.
What makes more sense to me is not a space-efficiency tradeoff, but rather a time vs. solution-quality tradeoff. In more computer sciency terms, it’s the tradeoff between the faster but less optimal approach of approximation algorithms vs. the slower but provably-optimal approach of the algorithms we learned in Algorithms 101. I’ve seen this dynamic most clearly in my upper-level computer science classes on NP-Complete approximation algorithms, which you can read about here: https://www.khoury.northeastern.edu/home/rraj/Courses/7880/F09/Lectures/ApproxAlgs.pdf (a small sketch of the tradeoff follows at the end of this comment).
Finally, I think Böttger’s suggestions at the end still apply regardless of which of the theories proposed here you choose. I would just add one imperative to Team Survival’s checklist: try to convince Team Thriving that this is indeed a survival situation, because survival is more important than thriving. If the situation is indeed a survival situation, Team Thriving should be able to temporarily set aside their goals and join forces with Team Survival. Maybe we call the synergy of the two Team Flourishing?
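Here is the small sketch promised above: the classic greedy 2-approximation for minimum vertex cover next to a brute-force exact search, as a toy illustration of the fast-but-approximate vs. slow-but-provably-optimal tradeoff. The example graph is arbitrary.

```python
# Sketch of the fast-vs-optimal tradeoff on minimum vertex cover: the greedy
# 2-approximation runs quickly, while the exact answer below is found by brute
# force over all vertex subsets (exponential time).
from itertools import combinations

def approx_vertex_cover(edges):
    """Greedy 2-approximation: repeatedly take both endpoints of an uncovered edge."""
    cover, remaining = set(), list(edges)
    while remaining:
        u, v = remaining.pop()
        cover |= {u, v}
        remaining = [(a, b) for (a, b) in remaining
                     if a not in cover and b not in cover]
    return cover

def exact_vertex_cover(edges):
    """Exact minimum cover by brute force -- optimal, but exponential in graph size."""
    nodes = {n for e in edges for n in e}
    for k in range(len(nodes) + 1):
        for subset in combinations(nodes, k):
            s = set(subset)
            if all(u in s or v in s for u, v in edges):
                return s

edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5)]
print(approx_vertex_cover(edges))  # guaranteed at most twice the optimal size
print(exact_vertex_cover(edges))   # a true minimum cover, e.g. {1, 4}
```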
The YIMBY and NIMBY labeling is surprising! My first intuition is (YIMBYs) want to optimize the current system to improve the lives of mostly strangers that can be doing much better vs (NIMBYs) want to keep everything exactly the way it is, we are doing fine and doing anything different might hurt, thank you very much.
I doubt it. Despite the label I think that "YIMBYs" are rarely people who actually own a backyard, they're either people who are desperate to move into the neighbourhood (and don't care if it gets slightly worse in the process) or property developers (who probably live in a different place anyway).
You could also reverse the survive-thrive classification here just as easily. NIMBYs are "survive" types, they just want to maintain what they already have. YIMBYs are "thrive" types who are trying to move up in the world, either from tenant to owner-occupier or from resident of a less popular place to resident of a more popular place.
Alternately, NIMBYs who worry about the wrong sort of people moving in and ruining the neighborhood could be said to have a "survive" mentality, whereas people who expect a neighborhood to improve with more development have a "thrive" mentality.
Id versus Superego. Some people are more driven by one, some by the other. It maps onto this theory and explains phenomena like the masochistic guilt shown by some groups to others (overpowered oppressive superego they identify with) and the sadistic glee by which those others concur with that guilt (overpowered wild Id with rejected superego.) There are many other terminologies for this core individual human conflict and how it might be projected out in various ways, but currently I find this one most elegant.
Is this not just short term vs long term goals, tactics vs strategy? Survive is tactics, thrive is strategy.
Every specialism has its jargon, primarily to avoid long-winded descriptions. This appears terse to the outsider but it's just efficiency. Knowing the lingo creates an in-crowd, especially if they're working together every day. Plus a medic needs to distance themselves to remain objective and do their job. These can combine to make the patient feel excluded. When you speak it's as if you're a distraction from their private business and it makes you feel like a child interrupting a grown-up conversation.
[I realised after posting this that doctor-patient is classic parent-child in TA]
re sorting books: surely you just repeatedly start from the beginning, swapping adjacent books if they're in the wrong order? ;)
I know too little about military things, haven't even read Clausewitz. Do they have a standard resolution mechanism for differences of opinion between tactics and strategy? I can imagine armies where strategy always wins via chain of command, but I can also imagine armies where the folks in the trenches decide what elements of the strategy are actually doable. War gives pretty intense feedback though, so over the millennia, centuries and decades I would expect some kind of convergence.
In military terms (and I know less than you do) I presume strategy sets your objectives and tactics is how you achieve them. But "No Plan Survives First Contact With the Enemy".
re BubbleSort: if you were in a wheelchair and all your library was on a single floor, that'd probably be the most efficient. My tangential point was that algorithmic efficiency isn't fixed in stone: it's about having a large library of possibilities to choose from.
Having read the theory, I feel like the author's belief that it will contribute to saving the world is a grandiose delusion. In light of the line "Although I can’t speak for its world-historical importance", I infer that Scott also thinks this, although he is surely too polite to express his doubt in those particular words. This causes me to suspect that Scott has published the essay primarily out of sympathy for a very ill man who describes himself as adjacent to suicide (no one says "I can’t do that to the kids" unless they've thought about it real seriously).
I know it's very cruel to say, but I totally agree with this. To be fair to the OP, maybe he meant "contribute to saving the world" in the same way that donating $100 for mosquito nets contributes to saving the world--it's a small contribution, but it is a contribution nonetheless. That said, I don't believe this post contributes as much to saving the world as donating $100 to charity.
I'm torn on whether I agree with this. I share your view that the idea that this is world saving is delusion. I'd imagine some heightened sense of salience helps with the pattern matching but also tends to over-assign a sense of importance to those patterns once identified.
In principle, I'm against pity-posting, but I do think this is an excellent essay. I found it an interesting theory written about in a clear and engaging way. I just don't think it's a world saving theory, or anything close.
If his premises were correct, perhaps it would. The delusion, alas, is a common one: that this 'thriving' as described exists or could exist. It doesn't matter if one carries this delusion forward into 'fully automated luxury space communism', the error infects everything.
'thrivers' aren't mistaken that we 'survivors' are hostile. It's not a communication problem, but a reality problem.
Whether you agree that the piece is interesting or useful or not, it seems unnecessarily harsh to say Scott published it out of pity. The OP is a decent writer telling a powerful story and offering up a theory he took from it. It doesn't need to save the world to be an interesting offering. Way less interesting things have been posted in this space.
This is exactly why I wrote the first part of the post. You should doubt this theory, because I was definitely not thinking straight. That's why I repeatedly emphasized that in the text.
Still, at the same time, it is also an idea that might make sense, might be trivial even, and maybe the only reason it has not been described before is that nobody had the same weird combination of circumstances (intensive care, Algorithms 101, nurse mother, having read Scott's thrive/survive, compulsive theorizing etc.) and time to write about it.
It was not pity. We've exchanged businesslike emails over the previous guest post. I've met Scott for all of five minutes, years ago, and he said he "kind of hated" the theory I had back then.
If I were striving for accuracy in describing delusions, I would say that "delusional" is a property that applies to people's thought processes rather than belief content per se. And delusional thought processes exist on a continuum (or share the space of, I don't want to imply unidimensionality) with non-delusional insight-generating processes. Delusional trains of thought may pass by reasonable and interesting stations before hopping the tracks. Ideas that have been kicking around your head start getting integrated, and sometimes you hit on novel solutions to actual problems. So I don't think you can exactly prohibit the publication of delusions, and I don't think you should prohibit the publication of those who are delusional.
But I agree with others that the author's assessment of the importance of this piece (I hesitate to say "idea" because I'm having trouble isolating one core insight--sorry Daniel, I think you're throwing off lots of sparks and heat but I'm concerned that it's secondary to the collapsing structure of your conceptual space) is way off base, and some features of the writing are very familiar to me as an occasional rider of runaway trains. The ideas expressed are cool tools but not paradigm shifters, and the way they are strung together in this piece makes me worry for the author (sorry Daniel, it's not pity, just worry, and I'm glad you have good German doctors and all but you're exactly the kind of person who would excel at concealing a burgeoning psychosis). But I can think of good reasons to publish in spite of this, and by my lights it doesn't take much...affirmative action for the sanity-challenged to bring it up to par, if that was even a consideration. I wonder if Scott's motivation wasn't in part to help them both (thinking of the parable of Sally the psychiatrist here: https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) test their expectations against a larger group of people.
"Saving the world" is a grand and high-variance claim. If you're not completely on board with such a claim, but still think there is value in the perspective, it makes sense to hedge againt the high variance part. It looks like Scott did that with the intro.
I found this essay extremely interesting and insightful. Powerful even. I read this one all the way through; it was gripping. I sent it to family. Will it contribute to saving the world? Maybe! Most things don't, so the prior is low, but many big ideas that eventually work start out as low probability ideas.
This... seems very explanatory. A simple but powerful theory, with enough secondary correlations ("Moore's Law will just keep working, so we can be cavalier with memory") to be convincing. And very ACX-y.
This is an alternative framing of the old problem of decision making given certain risks and certain opportunities. Be overly cautious and you'll be "survival" biased. Be focused on chasing the shiny opportunities and you'll be "thriving" biased.
It's clear we don't frame our policies, societies and behaviours following this dichotomy, but just determine case by case what to do - especially considering that assessing risks and opportunities is far from an objective endeavor.
So I don't think this framing is particularly interesting or useful.
If you think this is just a framing, you don't understand Algorithms 101. This is an objectively solvable math problem. In fact much of it is already solved. I said there are methods for integrating algorithms of these different types and I meant it.
If you want to use this, don't focus on the meta-levels of societies and policies, go to the interfaces of collaboration and do math.
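To make "Algorithms 101" concrete for readers who haven't taken it, here's a minimal sketch of the classic space-time tradeoff; the sine-table example is only an illustration, not anything from the post:

```python
import math

# Time-efficient variant: spend memory up front on a lookup table,
# then every later call is a cheap array index.
SIN_TABLE = [math.sin(math.radians(d)) for d in range(360)]  # 360 floats held forever

def sin_fast(degrees: int) -> float:
    return SIN_TABLE[degrees % 360]

# Space-efficient variant: keep no standing state, redo the trigonometry
# from scratch on every call.
def sin_small(degrees: int) -> float:
    return math.sin(math.radians(degrees))

assert abs(sin_fast(45) - sin_small(45)) < 1e-12
```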
I happen to be a math guy. You got me curious: what kind of math problem is this, except for the one I already saw: a game-theoretic, utility-optimization, subjective one?
PS: "framing" here doesn't have the specific meaning you seem to imply. A framing is just the kind of representation we give to an issue.
I have to say, with all the talk about tumors and communication, my main takeaway is that a vehicle with five occupants hit a tree at 60 mph, and EVERYONE walked away. There can't be a better advertisement for whatever vehicle that was.
That's an advertisement for the Survive mentality that drives vehicle testing regulations. If the regulations had been the product of Thrive mentality he'd have hit the tree at 120 and nobody would have survived.
If we had an actual survive mentality, we would ban cars because of the absurd number of casualties they cause. What we have now is an optimum. The increased efficiency more than justifies the deaths it brings.
Obviously it doesn't matter what dead people think, but living people are far more useful than dead people. That still doesn't mean the optimal number of preventable deaths is zero.
"dead people" who didn't actually die would be living people so your utility argument doesn't wash. What I'm saying is that there's nobody to present their point of view, not first hand anyway (and the rest is hearsay).
The same happens with war: you only ever meet the survivors so you inevitably get a one-sided view, and even that is highly selected for optimism.
Honda Civic. Yes belts and child safety seats and airbags are awesome. First responders too, and the Good Samaritan who called them and pulled me out. It's not all just the car.
I am preregistering my attempt to solve a personal, unsolved-for-twelve-years problem using the thankful theory. Will report results. If no results are reported for 7 days, I probably forgot to do it and the lack of a follow-up should not be taken as evidence for or against the theory.
The other party and I agreed that the theory describes us surprisingly accurately, but results would probably take months of better communication to become noticeable. We are optimistic though! Might report later if I have anything unusual to say.
I'm not a doctor but, I'm not sure there's a good way to do this. If you don't have any symptoms (or some other risk factor, like age/family history), a doctor probably won't prescribe a scan either for specific body parts/diseases, let alone your whole body. As far as I know, such scans would show lots of random, weird-looking anomalies, most of which mean nothing at all. The evidence that you have cancer would be very weak compared to the prior that you don't, unless you happen to already have something that's progressed a bunch. So doing this scan would be pointless, and possibly even counter-productive (i.e. you might end up trying to get dangerous surgery to remove a harmless cyst).
(I could be wrong about the above, but it's the impression I have.)
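To put rough numbers on the "weak evidence vs. strong prior" point, here is a quick Bayes calculation with invented round figures (1% prevalence, 90% sensitivity, 9% false-positive rate), not statistics for any real scan:

```python
prevalence = 0.01            # assumed base rate of the disease
sensitivity = 0.90           # assumed P(positive scan | disease)
false_positive_rate = 0.09   # assumed P(positive scan | no disease)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive scan) = {p_disease_given_positive:.1%}")
# Roughly 9%: with these made-up numbers, most positives from an
# unprompted scan would be false alarms.
```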
I'm pretty sure some YouTuber went into great detail about the horrifying consequences of testing for too many things and then trying to treat findings that aren't really a problem.
But this was not the video I am thinking of. In the video I'm thinking of, he mentions a woman who has a tumour in her breast, gets a mastectomy (or something), has complications, and it's awful, and she really should not have bothered.
Basically... when you get tested, even if the test is mostly inconclusive, if there's something that COULD be bad, no doctor will recommend against further testing and procedures, since that might lead to their being removed from the field, whereas suggesting extra (in the end harmful and/or useless) testing and procedures will never result in the same loss of status or job.
So. Bottom line: don't get tested unless there are symptoms or family history.
Thanks for the link. The statistics are something I already knew, but it's good to know that at least one doctor thinks about this question in the way I would expect.
Doctor Vladimir Alipov (Dysphorra YouTube channel) described the same on his streams. Your description is very similar to what I remember from them.
I also have a glioma in my temporal lobe and I also have the same history of many (10? 20?) years of ignoring symptoms and having doctors send me away saying nothing to worry about. My tumour is a rare form called gliomatosis cerebri which grows to multiple lobes. I am up to six lobes and two hemispheres now.
I was officially diagnosed 2½ years ago. My neurosurgeon wanted to do a craniotomy but I declined because of the risk of probable defects and the certainty that he would not get all of the tumour. Up until this year, my symptoms were only minor and annoying but I have recently started to have massive seizures and memory problems. I am lucky that I have had no pain.
I'm not sure I frame my experience in terms of thriving and surviving. I decided very early on that survival was not important to me. I needed to survive long enough to make sure my wife and family are ready to manage without me. In all this time, out of a couple of hundred friends and acquaintances, I have only had two who insisted that I should be focused on trying to survive. The rest accepted and respected my decision.
Thriving has not been at the top of my priorities either. I am starting to lose my memory (which sucks) and I know that I have the loss of all my faculties ahead of me, but I am OK with all that too. My wife likes to pretend that everything will be OK but — even there — she respects my wishes and communication has not been a problem.
FWIW I am also a software engineer and I don't think of my experience in terms of time- versus space- efficiency. It never occurred to me and, even now, I don't think it applies. I am not optimising for anything. I'm living my life the best that I can and one day it will end. I am good with that.
Back in the paleocomputer age it was a daily consideration. Save a few bytes at the expense of more instructions in rarely executed code, spend a few bytes to save a few instructions in frequently executed code. That was many systems programmers' daily life.
I think you are right! I do this in my life and in the software I build too.
I always want to build the "best" system that I can. Time-efficiency or space-efficiency only rise to the top when they become concerns. It's usually a tiny percent of code that causes efficiency problems and we can fix them when they become a problem.
I was also there near the end of the paleocomputer age. I think a lot of systems programmers bring along those paleo ideas when they are not necessary.
"I think a lot of systems programmers bring along those paleo ideas when they are not necessary."
I'm sure you're right. There's personal pride in writing tight code.
The more important target is maintainability and future-proofing (and nowadays freedom from malware). So many programmers just grab the nearest module off the web to save "reinventing the wheel" and then two years later, when it disappears, everything goes belly up, or you find all your data has appeared on the dark web. Long live NIH! ;)
I strive for maintainability and user-friendliness rather than fast or small. Of course, I like my code to follow the basic rules of efficiency, but optimisation can be put off until it is needed. It usually isn't.
I started my IT studies in 1998. There were many more computational resources than before, but the institutional/professorial memory of needing to mind your memory and execution time was still there.
There are still many areas today where these considerations matter, for professionals and hobbyists alike. Computer sciences. Data engineering. Video games. Server software. Trading software. Granted, time efficiency is usually more important than space-efficiency nowadays, but there are examples where the latter still matters, such as the demoscene.
Hadn't come across "demoscene". I see it's a thing (and not just the name of some forgotten geological epoch! "Demos" means people, so it would be an alternative to the Anthropocene).
It's the demo-scene: a scene of people who show each other impressive technical demos, especially ones developed under tight constraints, such as less than xx bytes of code, or running on very restrictive legacy hardware.
Unless you are dealing with huge datasets like Google or Amazon, writing basic, efficient code is sufficient for most applications. You can optimise it later if it needs it. It usually doesn't.
As an algorithms person, I found your space vs. time analogy very muddled. In a basic model of computation, algorithms aren't at risk of having their memory overwritten. There's no fundamental reason why robustness should equal space efficiency.
I have a similar objection to the survival / thriving analogy. Yes, there are some parallels that can be ported over, but it doesn't really match the immediate vs intellectual modes of problem solving or communication.
That said, your model does make sense. I know viscerally the clusterfuck it can be when trying to wrangle with urgent / high-stakes situations using intellectual communication norms. Effective people need to learn to switch between these modes as the situation calls for.
> There's no fundamental reason why robustness should equal space efficiency.
I disagree. Random damage - minefields, artillery bombardments, suppression / saturation fire from a machine gun, cosmic rays corrupting a storage medium one bit at a time - can be expressed in terms of average spacing between hits, that is, how large a contiguous uncorrupted area can reasonably be expected to remain.
If a soldier lying prone in the tall grass has two square feet of cross-sectional area in which he'd prefer not to be shot, and that whole field has been hit with one bullet or fragment per square foot, average soldier in said field will have been hit twice. If there were a large number of soldiers, some may have survived (probably about as many as were hit four or more times), but not enough for the overall group to remain effective. Quadcopter drones with less than one square foot of vitals each might see a far higher survival percentage, thus more overall functionality remaining after the shooting stops, under those same conditions.
Similarly, if a given algorithm needs two megabytes of RAM per iteration (that is, before it has enough progress for some sort of quick parity check to provide a useful answer about whether it succeeded, or failed and needs to be redone), and the RAM it's trying to use has an average of one unrecognized corrupt sector per megabyte, that algorithm will probably be disrupted far too often to accomplish anything. Something more space-efficient, capable of making provable progress within less than a megabyte, could still function under those conditions.
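A back-of-the-envelope sketch of that last paragraph, assuming (my assumption, not the commenter's) that corruption arrives as a Poisson process with a given density per megabyte:

```python
import math

def p_clean_run(footprint_mb: float, corrupt_per_mb: float) -> float:
    """Chance a working set of this size is untouched between checkpoints,
    under the Poisson-corruption assumption stated above."""
    return math.exp(-corrupt_per_mb * footprint_mb)

# One unrecognized corrupt sector per megabyte, as in the comment's example:
for mb in (0.25, 0.5, 1.0, 2.0):
    print(f"{mb} MB working set -> {p_clean_run(mb, 1.0):.0%} chance of a clean run")
# Prints roughly 78%, 61%, 37% and 14%: the smaller the footprint,
# the more often the algorithm makes provable progress between hits.
```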
Digital systems, yes, but I doubt brains are that sensitive to a few faulty neurons (or LLMs to a few faulty links), though I guess it might come down to how the fault manifests itself. The brain must have ways to deal with neurons that get stuck in firing mode - I guess the recipient neurons just get desensitised.
I've had a couple crises of my own (different, and probably less excruciating than Daniel's), and this space- vs. time-efficiency model fits some of what I've observed in them.
In the future, I'll have to explicitly optimize for space-optimized strategies when in them. That might help quite a bit; the feeling of my empty bookshelves shrinking randomly when pain hits is familiar.
I think this is a spectacular post. I need to think about whether space-efficient vs. time-efficient is a good dichotomy, but it's at least promising.
For what it's worth, I didn't find the set-up to be too slow, and I didn't anticipate the conclusion. I don't have a background in computer algorithms.
I am glad to have read this post. It ties in very well with the Principle of Charity that SSC was originally based around. I don’t think it was necessarily groundbreaking, but perhaps that is because I was already practicing its lessons but others were unaware.
This is a "galactic algorithm"(pejorative term) concept. You can find good programmers who dismiss the concept entirely and I bet you could dismiss it as highly improbable that the brain is optimized on the space frontier, given that good compression seems to be serial and very fragile, your evidence is the brain recovering data and the brain seem fractal and highly parallel.
> Space-efficient communication can’t cache a long message to be communicated, or a long message received to be understood. Therefore space-efficient communication has to rely on short, atomic messages, which in order to be informative have to be pre-agreed.
I'm pretty sure this is just flatly false. The best compression uses sub-bit entropy; any missed bit causes a cascading misalignment of plausible data.
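A quick way to see the cascading-misalignment point for yourself; zlib is nowhere near the entropy-optimal frontier, so this only gestures at the fragility rather than proving anything:

```python
import zlib

original = b"space-efficient " * 200               # a highly compressible message
packed = bytearray(zlib.compress(original, 9))

packed[len(packed) // 2] ^= 0x01                   # flip a single bit mid-stream

try:
    recovered = zlib.decompress(bytes(packed))
    damage = sum(a != b for a, b in zip(original, recovered))
    print(f"decompressed anyway, but {damage} bytes differ from the original")
except zlib.error as err:
    print(f"stream unrecoverable after one bit flip: {err}")
```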
> the wetware it implements usually one, sometimes two, phonological loops
My reading of such things is that *every* neuron is a loop, and that the structure of the brain is very loopy. I don't know where you're getting the claim that there are two loops, given that there are at least feedback loops from chemical systems; to be *extremely crude*, your sleep cycles depend on a daily cycle of hormones that float around the brain, and I don't know what the monthly cycle for women is based on, but it seems to be ... monthly, and a cycle.
It would take me hours to refind the sources, but I vaguely remember someone attempting to simulate one neuron with a neural network and finding it takes 34 neurons in a feedback loop to replicate the behavior with some degree of accuracy; and there was someone mapping out a monkey's visual cortex as if it were an electrical diagram, and it was a giant mess.
> Don't think compression.
Why? It's the space-efficiency frontier; why wouldn't the data be compressed?
If you disagree with my first argument, how about a second: it's unlikely that brains are space-efficient because compression is energy-inefficient, and running on physics should provide cheap shortcuts of computational value. Consider ant pheromones and ant pathfinding: they use physics to outsource decentralized pathfinding to a massively parallel diffusion "matrix"/air that does a lot of free computation for the ant, but not for our machines. Why shouldn't brains use hormones as lazy, slow, probabilistic computation systems, much like ants do with the air?
Yes, the brain does lots of loopy things, that was literally my previous guest post. But not every neuron is a loop, only a few "pacemaker neurons" that do essential stuff like the heartbeat. The phonological loop is different though. It takes a lot of neurons, leaving little for another. You can hear a podcast while singing a song that you know by heart, but it's effortful.
No complaints about this particular post (I've not read it), but I always prefer that ACX (and most other blogs) not do guest posts. This applies to the book reviews too, which I never read. These are just spam to me. I'd rather a blog just link to posts which they think readers might be interested in, in one of their other posts (e.g. link post or open thread post), if they want to share them.
I respect your opinion, but I also like getting free money without having to do any work.
Wait, how does it bring you money? Do people get paid subscriptions specifically because of guest posts and book reviews?
Not directly, but they give me money for publishing a blog, I try to publish ~2-3 posts/week to justify this, and if someone else gives me one then I only have to write 1-2.
This feels like a good time to mention that I have a paid subscription because it's something like "investing a world where Scott Alexander is doing well", mostly as gratitude for your old stuff; I don't perceive this as anything like buying a service from you. Of course not everyone is like this, but I thought I'd mention that some of us are.
This has the look and feel of a thrive-survive communication discrepancy.
If you just want to reward Scott for his past contributions and are not terribly interested in the ongoing service he provides by continuing this blog, wouldn’t it be better just to make a direct contribution to him and bypass the whole issue?
Your form of subscription is paying him to write, but with a time delay. If he wants more subscribers like you, Scott should write more, and then wait for them to be grateful.
But (as a paid subscriber), if you model us as just immediately archiving posts not written by you, the computation of whether it's worth it to continue subscribing is downstream of that deletion. So the guest posts don't bring any free money, but they might dupe you into thinking you're delivering a product of the quality you've pre-decided maximizes your audience retention utility.
There are too many “yous” in this comment for me to be completely clear about what you’re getting at, (in the first instance, I think it refers to Scott; in the second, I think it refers to his subscribers) but either way Scott is relying on his credibility so I don’t think it move the needle much. As a further note, I think the OP was kind of needling Scott to defend his credibility, and I think he dodged that nicely.
> There are too many “yous” in this comment for me to be completely clear about what you’re getting at, (in the first instance, I think it refers to Scott; in the second, I think it refers to his subscribers)
They all refer to Scott. The subscribers are referred to as "us". Subscribers obviously do not have a model of audience retention.
I owe you an apology for my somewhat pedantic and superior reply to your post. I did not live up to my pen name. I am sorry.
This post was pretty moving/meaningful.
It is a little sad that the very top of the comments discussion went right into talking about money.
I'm assuming it was just random chance, whoever posts first sets the tone.
Since I like Scott's posts, I'm also inclined to trust his opinion on what guest posts to put up.
I assume Scott's money post is to some degree facetious.
This guest post is, in my opinion, a quintessential systems-thinking, Scott-like post. I liked it.
The criticism could be:
It was like a recipe. The recipe was at the bottom. There was a lot of preamble.
The preamble was great, though, and I am very happy with this post.
Thanks for curating.
Yeah; my request to such people in conversation is usually roughly, "Please answer, then explain." It works some of the time.
The post originally started with a paragraph explaining that the first part was mostly to cast appropriate doubt on my finding, and was unnecessary "if you want the prize without its price." Scott replaced that with his own disclaimer which is clearer/better, but didn't include that.
I'm not sure whether you're being serious here, but FWIW, I don't consider guest posts to be part of what I'm paying for. I'm perfectly happy that the content you produce is good value for what I pay, but it's only that content I'm paying for.
How about putting up a Hidden Open Thread? Seems like quite a long time since we've had one.
For what it's worth (likely not much), I have explicitly avoided buying a paid subscription because you (and guest posters) write too much for me to keep up with everything.
I try to read all of it, because every once in a while there's a post which ends up permanently forming a fundamental piece of my understanding of something important. (The Nietzsche posts are the most recent example of this.) But when you say "nothing too important is ever going to be behind a paywall" I choose to happily take your word for it.
(Case in point: I have not yet read the post I am currently commenting on. I was scrolling to the comments to see whether this was worth reading in full. I'll probably get to it at some point.)
However, we benefit from you curating the guest posts, and from the regular community in the comments.
(Yes, and this is me delurking and joining in for the first time.)
As others have mentioned - I would not unsubscribe or anything with fewer guest posts/book reviews. I would be perfectly happy with 12 Scott posts a year.
I too wish posts like this were limited to a roundup or separate feed. Like - the book reviews are a lot, but I'm here for when you talk culture wars and politics and medicine and FDA and parenting, and this post and many book reviews… well, they aren't written by you! There's no micro humor or charts or research. Just like Reddit and LessWrong posts, where I feel like it's too neurotic and there isn't a grounding in feelings, emotions, and what normies actually do/think. Scott - you somehow manage to be both technical and also relate things back to practical real life.
I would rather you post works from Lorien as you write them - the posts about ADHD, depression, etc. would have great discussion.
And I know it’s a bit parasocial relationship of me, but I’d love low content posts as subscriber only stuff. What did you do this week. What toys have your kids been enjoying. What’s on the radio. What car are you thinking about buying.
Oh, I most definitely would not like that at all. I love reading Jonah Goldberg, for instance, but I’m really not interested in his dogs.
Everything its opposite! I'm a cat person and have no interest in what Jonah Goldberg thinks, but I used to enjoy encountering his Twitter feed just for the relentless ten-dog-things-to-one-politics-thing ratio (with the occasional cat, e.g. taking the dog's spot on the couch: "Be the better quadruped", hee hee). Plus he seemed to live in a cool autumnal landscape.
I haven't been able to view twitter in a long time though.
There ya go…to each their own.
"Tumor growth mindset" was an attempt at micro humor. Did it work?
Yes!! I liked that! Overall your writing was interesting and I read it all. I didn’t like the bits about sequences/thinking because that’s not why I’m here, but I’m sure a ton of people did!
I’ve tried to write like Scott in a few long Reddit posts about sunscreen/skincare/etc, but it’s just hard to get the cadence and everything so incredibly legible. There’s a reason Scott makes the big bucks.
I notice that you toned down your disrecommendation at the top considerably in the current version compared to the email you sent out. I didn't read the post based on that.
But, I came here to comment that I generally like the book reviews, and I'm open to the idea of guest posts, but #guest_posts_are_not_a_suicide_pact; I think that if readers have things to say that aren't of high quality and considerable interest to you, then maybe you should just encourage them to have a substack of their own, and then highlight it in links or open threads?
I know this is partly a joke, and I'm sure you agree with this next part based on other stuff you've written, but for scrollers: you are one guy in one particular place, and society overall thrives better if there's some fostering of community and mentorship.
Edit, as I forgot to add the point: I am happy to see guest posts, as I see them as pro-social, and you do it the appropriate amount from my perspective.
Agree completely.
FWIW I very much enjoyed this guest post.
I mean, someone who I assume is a friend of the blog or the rationalist community is going through a tough time, so Scott's boosting his writing. They don't have to be totally rational, they can be human and do something nice for a friend in trouble too.
I do not think this was posted out of charity.
I don't think so either. Scott didn't respond to my offer to pay him for posting this, but I guess that's just sound policy, not charity.
I explicitly do enjoy the book reviews and this guest post, though I usually skip the open threads these days. To each their own. Now, if only there were an RSS feed for each tag (guest post, thread, regular post), we would all be happy.
I disagree. I like the guest posts and book reviews. Especially when Scott is distracted / busy and so quality slips. If you don’t like them just don’t read them 🤷
Hard disagree on the book reviews. I don't read all of them (who has the time?) but I generally really enjoy the ones I do.
The book reviews are often great, though!
It's this or Scott posts more Tom Swifties.
You're missing out, and I'd recommend at least reading the past book review contest winners. The Georgism (Progress and Poverty) and Egan (Educated Mind) book reviews have stuck with me ever since I read them, and are basically introductions to a particular scholar's entire theory of economics/taxation and education respectively.
Out of this year's book reviews, I'd recommend the one on prions (The Family that Couldn't Sleep). Of the ones I read so far, it's the one that would have gotten my vote if I'd managed to actually vote in time.
I like the book reviews. The guest posts are usually not something I'm super into, but I don't mind them existing.
You can filter out emails with a certain keyword very easily. There are also ways to do this for RSS feeds (https://siftrss.com/ is one I'm using). Just remove any post with "guest post" in it and boom, you're done! I use this for various "current thing" topics like "Gaza".
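For anyone who would rather run the filter locally than through siftrss, a rough sketch along the same lines; the feed URL is a guess and the feedparser package (pip install feedparser) is assumed:

```python
import feedparser  # third-party package, assumed installed

FEED_URL = "https://www.astralcodexten.com/feed"   # assumed feed location
SKIP_KEYWORDS = ("guest post", "open thread")

feed = feedparser.parse(FEED_URL)
for entry in feed.entries:
    title = entry.title.lower()
    # Keep only the posts whose titles don't match any skip keyword.
    if not any(keyword in title for keyword in SKIP_KEYWORDS):
        print(entry.title, "->", entry.link)
```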
I found this really interesting, with useful insights on a problem I've thought about a lot in the past. Thank you for taking a chance and posting it.
I agree completely with this, and correspondingly disagree with Fujimura's comment above -- I often read the book reviews (you can almost tell after the first couple of sentences whether you want to go on, which I often do!), and am alerted to things I would otherwise never have considered. Anyway, bravo to Böttger for this post; it's fantastic!
I agree.
My heart goes out to Böttger - his experience with cancer was worse than mine and mine was pretty bad - but he's missing the last third of his story. He isn't "out of it yet" so he can't see his experience from the outside and judge whether the frame he put around it was a good one. As of yet, it's still a howl at the void. Recover, Daniel. Get well.
I agree. More data is incoming.
Glad to hear it! You told us your ex-father-in-law’s health status by the time of the writing of your essay, but not your own. There’s a lot of space between “fully recovered” and “capable of pressing ‘send’ on an email.”
I'm in radiotherapy, chemotherapy will come next, because that's what you do with stage 3 astrocytoma. Mental functioning is intermittent (pain-dependent), physical functioning is weak but stable. Full recovery in 2024 would be (falsely) considered a miracle.
Thank you for your responses. The fact that you're alive to type them is, if not a miracle, then a desirable outcome of low probability. If you wished for full recovery in 2024, you'd be foolish and impatient, but you have more than just two and a half months. It's very likely that you'll be better in January 2025 than you are now, and even better in October 2025. At some point, hard as it may be to imagine now, you will have recovered. You can do it.
And I most likely will. I'm just really against wishful thinking, purposely assuming false probabilities, and dropping the "most likely".
Wishful thinking can be good to keep motivation, but I'm motivated without it.
Let's face it, you're only "out of it" when you're dead. Treatment is only a temporary relief; our bodies continue to rot until they inevitably decay into nothingness. There's no light at the end of the tunnel.
You say 'rot' like it's a bad thing. As someone who is outdoors a lot, I see rot as being a teeming - seething - universe of its own, busy and rich with life of a different kind. Decay does not result in nothingness, but in rebirth. (Just not the rebirth of the original organism!)
Why not? It's interesting, and (as I said before) quite dynamic; there's lots to observe and think about. Is it ideal? - probably not. But it's better than the alternative, and that's got substantial value.
Nihilism is tedious.
"It" doesn't mean life. "It" is the teller's story.
There's no light outside the tunnel, only what we spark inside it. Life is a strange game; the only way to lose is not to play.
This is very nicely expressed.
There's never been a light outside
the tunnel we are in.
There's only what we spark inside
whenever we begin.
It looks like the metaphor of space-efficient VS time-efficient could be extended to high-bias VS high-variance models in machine learning. If you constrain a model a lot (for instance via regularization or even simply by keeping it small) it won’t overfit, meaning it will be more reliable. This can make it reliably wrong (biased) sometimes, but that’s a price you may be ready to pay given the use case. In medical use cases, the work of people like Cynthia Rudin showed that indeed a very simple and interpretable model such as a scoring system can be learned optimally and can save lives. The nurses are like that. To value complexity you need higher capacity (in a way this is thinking fast and slow all over again) and, crucially, trust.
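To make the Rudin-style point above concrete, here is a toy scoring system in that spirit; the features, thresholds and points are invented for illustration, not taken from any real triage score:

```python
def risk_points(age: int, systolic_bp: int, confused: bool) -> int:
    """Add one point per red flag. Crude and biased, but robust,
    auditable, and usable under pressure (all values invented)."""
    points = 0
    points += 1 if age >= 65 else 0
    points += 1 if systolic_bp < 90 else 0
    points += 1 if confused else 0
    return points

def triage(points: int) -> str:
    return "escalate" if points >= 2 else "routine"

print(triage(risk_points(age=72, systolic_bp=85, confused=False)))  # "escalate"
```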
It's not a metaphor.
Nurses are the example here, but this applies to bureaucracies creaking under the load of clients, resorting to form-filling because it would be bad to spend too much time on one "real" conversation while another client with a possibly more important problem is waiting.
I think you may be missing the simpler explanation: bureaucracies don’t care about people. They’re optimizing for efficiency because it can be measured on a spreadsheet and requires zero courage or willingness, on the part of the institution, to make a sacrifice.
The alternative explanations I thought of are that doctors and nurses (or any expert) will disregard any non-expert suggestions, information and opinions, for several other reasons beyond optimizing for survival:
1) Even though the suggestion might be good, it takes too much time and effort to decide whether it's good. The system runs on trust - trust that other experts know what they are doing - without having to verify each instance. This means false positives (accepting expert suggestions that are bad) and misses (rejecting non-expert suggestions that are good). This is not necessarily because of optimizing for space over time (a complicated and lengthy suggestion by another doctor is more likely to be taken seriously than a concise and simple one by a patient), just efficiency in general.
2) Experts take offense at being second-guessed by non-experts. Protecting their status takes precedence over helping the patient. They might not be fully aware of this. Patients are so low on the pecking order they aren't even on it - they are objects, not subjects or God-forbid peers.
I have seen this attitude in some of the surgeons that I've worked with, but private practice physicians with a focus on patient care over surgery that act this way are the rare exception in my experience versus being a norm.
I'm editing to specifically underline the word "some" above. Plenty of the surgeons that I have worked with exhibit more focus on the patient.
I mostly have experience with public healthcare in my country. The attitude describes 5/6 doctors I've had. I've also tried a private clinic in another country, where the doctor did listen to suggestions, but seemed to veer towards the other extreme of doing whatever it took to make me a satisfied customer rather than giving a good diagnosis and treatment.
In general it seems to me that healthcare is focused on treating the 20% of easily solvable ailments that account for 80% of the cases, and doesn't really do much about rare, severe or chronic diseases. Hopefully in the near future we'll look back at current times and wonder at how primitive it all was.
I'm making an assumption, but are you European?
The thing about US care is that we're business oriented thanks to our private ins system. Docs of all stripes are incentivized to at least resemble a caring physician. Our two systems are likely more aligned in attitude at the hospital v. hospital level.
It also means that the 80% of the caseload that is common problems is easily managed en masse. At least in my field, ophthalmology, that includes an array of long-term illnesses, especially glaucoma and dry eye syndrome, as well as more niche but still common enough concerns like high-risk medications (e.g., hydroxychloroquine).
So, we refer out at the primary eye doc level when someone needs more advanced care, especially severe glaucomas and low vision services.
When doctors on duty talk to each other, they're VERY terse, in my experience. When they have a lot of information to pass along, they hand each other printouts.
I was not second-guessing them so I don't think your explanation applies to my experience.
Right, I meant more than any suggestions tend to be interpreted as second-guessing by some doctors, at least in my experience. Anyway, interesting read! I've never been through anything like this, but still felt like I could relate. Wish you all the best!
Explanation number 1 still does, right?
I disagree entirely with this.
I have a brain tumour too and my neurosurgeon seemed like he just wanted to get his job done and move on but everyone else I have dealt with (oncologists, neurologists, nurses and MRI operators — even the bureaucracy of the NHS) have treated me as though I am the most important person in the world. I feel like they are all my friends and, in different circumstances, would certainly share a beer with me at the pub. They always respected my opinions and wishes.
Agreed. MS patient in Switzerland (which has a system with a highly standardized insurance mandate but some competition in care provision that in practice ends up far closer to the US than to the NHS), and while I have a lot of beef about how long it took people to catch it, after the diagnosis everyone has been like that.
Even the health insurance which covers a shit load of costs every year (with no obvious path aside of it increasing further) is nice and helpful!
Is *everyone* perfect? Of course not. On one occasion I had an occupational therapist whom I neither liked nor thought useful in the least, and that was quickly resolved (by nuking the undertaking, which was exactly what I thought should happen).
Breast cancer treatment here in the US and I say the same. My surgeon was easily worn out by conversation, but every other provider -- oncologist, radiation oncologists and technicians, radiologists, and almost every single last nurse have treated me with the utmost attention and care, down to the very last detail. This at a small fairly rural non-fancy hospital with a tiny cancer care center. And despite Covid pressures and burnout and serious understaffing.
The gratitude I genuinely felt and continue to feel was an incredible existential analgesic for me all the way through. Their care and my gratitude for it felt like direct pain medication for the suffering my fear caused.
I can imagine all kinds of ways I could have produced friction for them and me. To step into that level of urgent, highly coordinated medical care is to travel to another country where you don't know the rules, don't speak the language, and your norms and ideas aren't theirs.
It helped me quite a lot to surrender to all of that and trust that they were going to carry me where I needed to go. I spoke up here and there about a few things, but otherwise the collaboration I had in mind was to participate in their system and receive its benefits. It's not the kind of collaboration I'd seek with a single other person, like a therapist or a partner or friend. I think it helped me to see what a stellar job they were all doing under such terrible circumstances. And it probably helped that I'm also a healthcare provider so I have a lot of compassion for the demands of their jobs.
I think some people do get terrible care. There are loads of people who are traumatized by their care. I was to some extent but not the fault of the providers in my case. I think some people have unrealistic expectations about what's possible in that space. And of course loads of people are just out of their minds in pain or terror and can't be expected to be in it any other kind of way.
Having said that, I want to say I also think Daniel is onto some pretty interesting ideas.
That's simpler but wrong. Bureaucracies are the jobs of other people, and they're no exception to the fact that most people are honestly trying to do the right thing, under circumstances that you're failing to imagine.
This reminded me of reading Sam Kriss, in that I didn't understand it at all but get the feeling there must be something brilliant behind the words...probably? And there's a third tragedy in there despite only two being explicated, I think? Honestly just sort of baffled. (And worrying about my own "benign" tumor...cancer sucks.)
The various doctors I spoke to about my large tumour told me that the only way to deal with it mentally is to completely ignore it. They even strongly recommended against regular scans of it (both NHS and Private doctors so different incentives) as it would only cause worry from minor growth and any actual issue is going to be noticed first through symptoms anyway. I can't imagine the horror of finding out your benign tumour was actually malignant though.
Yeah, that's where I'm at..."well, we could schedule you with a neurologist" left hanging as an open invitation, and then subsequently worrying about random unexplained one-sided headaches in the supposed area, plus other desiderata. Maybe coincidence, maybe not...would I even notice such anomalies without the priming? Obviously I'd prefer to have a better wakeup call than crashing a car or something similarly dramatic. But all those Bayes lessons about the classic mammogram problem are very much an EY-changes-your-thinking thing. Some avoid doctors cause they're hypochondriacs, some do it cause they don't wanna wrestle with thorny statistics problems...having had a few relatives die from various cancers isn't reassuring either, even if they were often long-lived. Probably best to just not think of the pink elephant in the brain, which isn't steerable anyway.
Is this a UK thing? I feel like in America you would get an audience with a neurologist if you had a large tumor; but I don't use the medical system so may be confused about what a neurologist is/does.
In the US it often depends to an almost ridiculous degree on who your doctor is and who your insurer is. This is true regardless of how clear or unclear it is that scans and doctor visits are helpful vs watching for symptoms.
This has not been my experience.
I also have a tumour (also NHS). I researched the hell out of mine. Three hours a night for six months. I've had maybe 10 MRIs. I appreciated every snippet of information and they gave me comfort — even when the news was bad. My oncologists and nurses encouraged me in my research and answered all my questions.
Before posting the comment I tweaked some wording and accidentally appear to have removed the word benign (as I was replying to a comment about a benign tumour it mustn't have tripped my final sanity check). But yes, the NHS were very fast and helpful at getting the scans done but once it got confirmed benign they shifted into (probably rightfully) telling me to stop worrying and ignore it unless symptoms show up.
I'm glad your tumour was benign, James. I hope everything works out for you and your tumour.
I don't know. I thought it was an interesting exploration of religious questions in a very atheistic, rationalist way, sort of a 'what if a very religious person by nature didn't believe in God'. Though as I've said above, I don't hold the same view. But that's because I'm not the same person.
Sam Kriss...well, let's say he's very good at what he does, but I find him an irritating representative of the literati class with their typical prejudices ("haha let's mock working-class and middle-class old people because they voted for Trump"), and I'll leave it at my personal statement of disgust, which only asserts how I personally feel about something. :)
King's New Clothes syndrome. Always reminds me of attempting to read Wittgenstein's Tractatus Logico-Philosophicus many years ago. Now that we have Wikipedia and the interweb, I must have another go.
Interesting that you mention Sam Kriss. Daniel's post is so honest and open. I feel that I'm reading the work of an adult. Sam's work, although brilliant, does not leave me with that impression.
Love this - thank you. As a scholar of religion (and having skin in the game in religious spheres), this tension is a fantastic predictor and pattern-matching tool. The more you value your system and feel it's uniquely beautiful and transcendent, the more you have a 'survive' mindset. This is linked to orthodoxy and traditionalist approaches, which I am favorable towards. "I want to keep this from collapse because it's so great, and the way it is right now is the way it should be".
But as you value it, you also can tend towards thrive and take up a totally different approach. "But we can do this better!" "But this could help EVEN MORE PEOPLE!" "But let's update for the times, or culture, etc.!" The thrivers have a similarly good intention and can come to a totally different approach/conclusion with an equally valiant wish. And it's hard to empathize or understand sometimes, as it can feel 'competitive' or 'misaligned'.
So anyway- I just wanted to thank you for the essay. It helped me understand something more clearly that I have been processing for a bit and now is seemingly obvious. Wishing you the best :)
Oh, very cool. Yes I agree thrive/survive does work on the reformer/orthodox quarrels that are now ongoing in many religions because they're all suffering the huge onslaught of the Internet and sanity/atheism. There are attempts to explain this with political left/right but thrive/survive imports less baggage. I've done scholarly work and a few minor papers in the study of religion. (Main result: https://sevensecularsermons.org/why-atheists-need-ecstasy/ ) If your pursuit of thrive/survive into contemporary religious dynamics leads to a publication or something, I would love to be told.
I already had some fleeting not-quite-thoughts about this, and my gut reaction is that this is awesome. It feels like someone flipped the switch on, and I can see the hypothesis I was trying to test for. It is directly relevant for my work and I will be testing it abundantly. Thanks.
Likewise. I think about reciprocity - tit for tat - a lot, and the space-time tradeoff seems to shed light on the behavioral choices as much as the thought/reflex split. Gonna go read the Wikipedia article and then do some coding.
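In case it saves someone the same trip, a minimal iterated prisoner's dilemma round with tit for tat; payoffs are the standard Axelrod values, everything else is just an illustrative sketch:

```python
# Standard payoff matrix: (my score, their score) for cooperate/defect.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    return "C" if not their_history else their_history[-1]  # copy their last move

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (9, 14): one betrayal, then mutual defection
```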
"My father-in-law was the front seat passenger. Same story with him: put into the CAT scanner to look for fractures, and although he never had seizures they found a brain tumor in him as well..."
This reminds me of the "hitchhiker you picked up" joke. He asked, “How do you know I’m not a serial killer?"....
This is a great concept! I love the application of computer science. I agree that survival-oriented processes are fighting a doomed battle, but I don't think you take this far enough: They're trying to push the probability of death to zero, when, as any student of Eliezer knows, zero isn't a probability.
The only solution I see is that the survival-oriented processes have to trust the thriving-oriented processes to run the show most of the time, except in immediate (i.e. short-duration) crises. If a "crisis" or "emergency" goes on for long, it will kill thriving. Yes, each process has to trust the other, but ultimately the thriving-oriented process has to be the captain of the ship. If the survival process doesn't trust the thriving process to ultimately keep it safe, no amount of risk-mitigation will suffice.
From that point of view you can view the early stages of human history (the first hundred thousand years or two) as being more in survival mode, from which we have then gradually though unevenly been emerging in the past couple of millennia, especially the last couple of centuries. The thrivers are in charge, sort of, here and there, but are still pulled down by the overwhelming flywheel effect of the survivors, who still dominate. Who knows where we'll end up, but Böttger's theory could sure help in keeping us on the trajectory of the past couple of centuries.
If thrivers were actually in charge, we’d have nuclear power plants everywhere and very little regulation. We wouldn’t have masked toddlers in response to Covid, or said people have to give up bodily autonomy to ward off the small threat posed by Covid. There wouldn’t be major politicians pushing for digital censorship. We’d be traveling the stars, not fighting over nonsense.
We'd have to be travelling the stars cos Earth would be uninhabitable....
Thrivers ARE in charge locally, here and there; they run a lot of companies and other organizations, in some rich countries, though not usually the political systems of those countries. (The current president of Argentina could be classified as a thriver.) But as soon as we get into this discussion the question of classification becomes pretty vague. Take Elon Musk, often considered a paradigmatic thriver. But when he ventured to support the Ukrainians with Starlink satellites, the Chinese were pissed off and said look here, you've got a huge amount invested here and you're dependent on us, so behave, i.e. support our friend Vladimir, or we kick you out. So Musk changed his tune and started parroting Putin rhetoric, e.g. idiotic stuff about holding "referenda" in Donetsk and other occupied territories to decide whether they should join Russia. (And in American politics it's driven him all the way into his current hyperbolic rhetoric . . .) Of course -- he wants to keep what he has; he sees the Democrats as a threat not only because of taxes (that's a very small part) but because he doesn't want to get kicked out of China. So is that survival or is it thriving? It's clear that this person was a high-profile thriver up to a certain point, and it seems pretty obvious that he's now turned into more of a survivor -- but one who still needs the reckless swashbuckling thriver image he'd cultivated so consistently before 2022. Hard to classify, and I think you run into the same problems of classification with many, many people. I agree with the commenter above who thought that the survivor-thriver antithesis is too one-dimensional and needs to be seen in a wider and more multidimensional context.
I think the whole distinction is a bit silly. You can be as much of a thriver as you like at 9am, but at 9:05am something can happen that puts you (temporarily or permanently) into survivor mode.
Nobody is an always-thriver. If there's a distinction to be made it's between the sometimes-thrivers and the never-thrivers.
I try hard to walk through the world smiling, friendly, laughing, generous, happy. Given a moment between stimulus and response, I can usually be that guy. Under duress, not so good. I agree the distinction is granular; not silly, though.
We had this system, generally, during the beginning of the industrial revolution. It had horrible labor conditions and massive pollution problems. It may have led to faster "thriving" in some sense, a faster advancement, but it had tremendous downsides as well. We've perhaps gone too far in the other direction at this point, but to pretend putting "thrivers" in charge would lead to some harmonious existence is not supported by history. Many people would get stomped on underfoot to advance their goals.
The last 200 years has seen a monotonic increase in American life expectancy (possibly excepting wars, which some American data may not be granular enough to show.) While England essentially forced its rural populations into cities, in America people went voluntarily into cities and factories. Every indication is that, at least for the span of the early Industrial Revolution, the benefits of rapid industrialization far outweighed the costs. Pollution was just not as bad as starvation, material deprivation, or losing the war. And farms of the Industrial Revolution, frankly, had terrible labor conditions as well.
That we can have a cleaner environment despite our oversized population is a direct benefit of our developed industrial capacity. If modern Americans lived 'off the land' in a 1700s manner we would quickly denude the landscape beyond the point of repair.
Perhaps we're at the point where marginal improvements are not as valuable, so we can afford to rein in the investors. Or perhaps not. But during the industrial revolution, improvement in the economic sense was absolutely a strong net positive, with long-term residual benefits. I'm not sure if the investors should be credited as strivers or survivalists, granted. But whatever happened during that period improved society in the short and long term.
It might be too late if you wait for the emergency to arrive. For example, many European countries switched to full thrive mode just before the rise of Nazism, and so did not have time to prepare themselves when attacked.
Optimally you would need to keep some places and some people on constant survival mode, so there will be some help ready for an emergency.
There's a venerable German state theorist whose name escapes me, who says the true sovereign is the one who can declare a state of emergency. Maybe that has to be a survival-oriented person/institution.
I see it exactly reversed: Most processes value thriving at the expense of survival to a suicidal degree.
People will burn every tree on the island to smelt copper, then freeze to death in the winter 100% of the time, unless the few survival-oriented perspectives fight a brutal Battle-of-the-Somme trench crawl against the optimists forever, every day, because there is a limit to how good something can get on a set time scale but there is no limit to how bad something can get. Goodness/badness is a multiplier on a variable, and the multiplier for death is 0.
I think the OP dichotomy is between short-term survival and long[er]-term thriving. The issue as you point out that both the post and most of the society do not pay enough attention to long-term survival.
I know Jared Diamond thought Easter Islanders cut down all their trees out of myopia, but that doesn't appear to have actually been the case. https://entitledtoanopinion.wordpress.com/2024/06/24/update-on-jared-diamond-being-wrong-about-easter-island/
I was thinking more metaphorically, but thanks for the link. That bit always struck me as odd: It's easy to imagine overhunting to extinction, but you can get on a high place and use your eyes to see how close you are to tree = 0.
Surely there was at least one survival maxer to throw a fit around.
A riveting piece! The perennial conflict between Surviving and Thriving is both new to me and intuitively correct, and also something to incorporate in my worldbuilding.
However, Urgent communications are always terse, whether Surviving ("Hold the line!") or Thriving ("Charge the flank!"). So the first rule has to be: *Recognise when the situation is Urgent to the other parties.* That's not easy when medical professionals are projecting calm, and maybe impossible when you're doped on meds and distracted by pain.
I'm intrigued by Daniel's observation that Surviving conversations are terse even when Non-Urgent.
Rather than "space efficiency", I think this is about Survival being a complex but solved problem (except when it's suddenly not) - basically Chesterton's Fence.
If so, then people responsible for Survival are terse because the conversation is too big for the time and energy available (you can almost hear the mental sigh just thinking about it), and because they feel an instinctive need to maintain authority.
"Why...?"
"BECAUSE!"
Spatial efficiency usually comes at the cost of temporal inefficiency. And there’s no way I’d describe survival oriented persons as being “patient, willing to spend long amounts of time”.
I think the (unstated) assumption is that the time-inefficient component of survival happens before the crisis starts. Then, in crisis, it relies on short, cached phrases.
In other words, doctors and nurses have years of medical training so they can communicate efficiently in the moment using standardized language.
Ah, that makes sense. Like a really tightly packed suitcase.
But one where the packer and their travel companions know *exactly* where each item fits and what it's called.
Yes. So if you said to them, "Hey why don't you pack an extra beach towel?", their first response would be terse and unenthusiastic. They can't even remember *why* they've packed what they've packed, but they know it's optimum and really don't want to revisit it, and if they do it will be a whole process to decide to make room for the towel and then make that work.
Exactly.
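In code terms, the packed-suitcase idea from this exchange might look something like the sketch below; the codes and their expansions are invented, not real hospital protocol:

```python
# The expensive, slow step happens long before any emergency: building a
# shared codebook (think years of training rather than microseconds here).
CODEBOOK = {
    "code blue": "cardiac arrest, start resuscitation now",
    "stat ct": "get this patient to the scanner immediately",
}

def interpret(message: str) -> str:
    # During the crisis, each terse, pre-agreed token expands instantly;
    # anything outside the codebook forces slow, explicit conversation.
    return CODEBOOK.get(message.lower(), "unrecognized - fall back to slow, explicit talk")

print(interpret("STAT CT"))
```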
Hmmm. I like the idea of understanding the difference between the survive and thrive mindsets, and the idea of understanding where others are coming from and being charitable to them as a way of fostering better cooperation. I'm not quite sure I'm convinced of the difference between survive and thrive mindsets mapping easily onto the difference between space-efficient and time-efficient algorithms, though. From my reading that seems pretty specific to your experience at the hospital, and really only one specific part of the hospital experience (their communication).
I feel like there are a lot of other things that go into the difference between the two mindsets (e.g. level of risk aversion, or in this case, different incentives: the hospital mostly doesn't want you to die, but cares very little about whether you're having a good time) that to me cannot be explained away as a difference of algorithms. I don't think that e/acc's issue with AI doomers is that doomers are communicating too curtly, and I don't see how the third heuristic will help the issue. I do, on the other hand, see how the second and fourth heuristics could be valuable, but those seem more about just being charitable than having anything to do with running different algorithms. Maybe I'm missing something here, though, and would be appreciative if anyone can explain what it is.
I had a similar reaction. This seems like useful advice for how to communicate in emergency situations, but I don't see how it generalizes.
If anything, I just take it as a reminder that prolonged artificial urgency kills productivity.
I think that e/acc's issue with AI doomers is EXACTLY that doomers are communicating too curtly. "Orthogonality", "instrumental convergence on power-grabbing" etc. get used as if they meant anything to people who have failed to Read the Sequences.
But they're all IT people, they understand Algorithms 101, this should help them get to the bottom of their failures to collaborate.
I'm not sure you've convinced me. I can see why using jargon/abbreviations/terseness would be frustrating for that, but I don't think translating that jargon into a message that's more time-efficient and less space-efficient would make any more than just a marginal difference. And it's not clear to me that survive mindset groups are any more likely to communicate tersely than thrive mindset groups (I've seen a lot of very very long blog posts from AI doomers), with the notable exception of a situation like a hospital where things are happening very urgently on a timescale of minutes or even seconds.
It still feels to me like the main thing separating these two groups is risk-aversion, priors about how likely AI is to kill us all, priors about how great AI will be if it goes well, etc. (This is my attempt to be charitable. There is part of me that thinks a lot of (but not all) e/accs are just trolls or people who don't understand tail risk or survivorship bias very well.)
Yes. I don't use "collaboration" and "communication" synonymously. Reading each other's long blog post isn't usually collaboration. Collaboration is when you try to solve a problem together, and that's where the communication styles diverge.
Are you saying that e/acc-ish people don't use jargon of comparable terseness or use it much less?
I think this is great.
Thinking about this more: I don’t believe survival processes could be optimizing for spatial efficiency at the cost of temporal efficiency. There’s no way that "take longer to do the processing" mitigates the bit-flipping risk that comes from taking up more space, because in an emergency time is scarce, and reaction time can be life or death.
I think we should expect survival-oriented processes to just be computationally simpler, period. We should expect them to be error-prone on the side of over-estimating threats. Seeing a threat where it doesn’t exist inhibits thriving, but not seeing a threat could kill you.
The personal interaction rules you give are a good description of “how to deal with an emotional person”, as well as “how to interact with a computational process with very limited capacity”: send short messages, as few as possible, and be patient.
I think you underestimate the greediness of such a choice. Even if _total_ time is shorter, each individual _step_ of a fancy algorithm is often longer and/or more complex, hence more prone to disruption. (And yes, what counts as an individual step is fractal, yadda-yadda-yadda.) MergeSort's individual steps are efficient but more complex than "compare A to B".
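(To make that concrete with a minimal Python sketch, my illustration rather than anything from the post: a single adjacent compare-and-swap carries almost no state, while a single merge step of MergeSort has to keep an output buffer as large as both runs alive until the step finishes, so each step presents a bigger target for disruption.)

```python
def compare_and_swap(shelf: list, i: int) -> None:
    """The 'simple' step: touches only two adjacent items, no extra state to protect."""
    if shelf[i] > shelf[i + 1]:
        shelf[i], shelf[i + 1] = shelf[i + 1], shelf[i]


def merge_step(left: list, right: list) -> list:
    """The 'fancy' step from MergeSort: efficient in aggregate, but the output
    buffer (as large as both runs combined) must survive intact until it finishes."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]


print(merge_step([1, 4, 7], [2, 3, 9]))  # [1, 2, 3, 4, 7, 9]
```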
The prone-to-overestimating part is, of course, true.
differentiated by the results of mistakes …
Yes.
I sympathize with his attempt to crystallize what must have been a phenomenologically intense experience, pain notwithstanding; like, it's *that type of experience* that's notoriously difficult to communicate.
Given that certain substances like LSD or ayahuasca can scramble our sense of salience (i.e. what feels important or significant), I usually take shifts in perceived importance with a grain of salt, especially from people who have a drastically different epistemic standard. But some of the readers here, including me, would be predisposed to extend extra trust to Böttger, given his background and past work.
For what it's worth, IMO the post managed to communicate a very real and very important thing, in particular the seven heuristics that probably will get quoted a lot in the future. It needs more elaboration/response by other people (or by himself) in order to gain a stable position in people's metaethics, but as far as memetics go, it could spread far and wide.
Some quotes that I like:
"Whenever these people think there is a minute for idle chat, that proceeds flawlessly. But the more urgent collaboration is, the more frequently it appears to fail"
Reminds me of the distinction between no-slack work vs. work that has slack built-in e.g. film industry.
"The brevity of their communication will feel hostile; interpret it charitably as an expression of urgency of concern."
This resonated a lot, although I gotta say: my experience living in a non-western country, where scarcity is more common, is that brevity isn't just about urgency and concern but also reflects a fear of losing out, so it's also a kind of power play, though that might sound cynical.
A very interesting point, and one I will try to use when analyzing situations in the future. A couple of other places where it seems relevant:
Teaching. The teacher is in survive mode, knowing how little time is left to cover all the material, while the students want an explanation of why they have to learn it for every single new algorithm.
Relationships, when one partner has more experience with relationships breaking down / divorces, and so tries to optimize for survival, while the other partner is more afraid of the relationship going stale and so optimizes for thriving.
Probably will think of others soon. Also I for one really enjoy having guest posts here, I usually find new things to read mainly through links from blogs I already read. Will try to look into your sermons when I have time (also, a movie recommendation: https://www.imdb.com/title/tt0418455/ )
Thank you! I've seen the movie and second the recommendation.
Fascinating. I admire anyone thoughtful enough to try to pull enduring truths out of personal suffering and generous enough to share those lessons.
Having a brain tumor is a really horrible stroke of luck, and I'm really sorry for you. I'm hoping for a miracle.
Ironically, I get more of an Epicurean conclusion from this.
What this makes me think is that I'm glad I never really spent that much time on philosophy, and I feel better about all the time I spent trying to get laid (and...adjacent activities). You engage in all these intricate mental computations and constructions, ponder the meaning of life, and it turns out to be just a tumor.
(God, that SUCKS.)
Of course people enjoyed them and found them meaningful, so I guess that's something. But to me it further lowers my estimation of the probability of the Divine, or any kind of transcendent meaning to anything. That feeling you get? Yeah, it's just some brain circuitry firing a certain way or, as in this awful case, a tumor.
I just never *got* religion. I had zero interest in Zen or ego death or anything mystical. I understand the desire to save your soul from an eternity of torment in the afterlife, but if that isn't true...what's the point? It seems like it serves other people more than you. I think I just don't have whatever the temporal lobe wiring is for it. Or something.
Eat, sleep, work to pay your bills, f***k if you can (unless you're ace), save for retirement, raise kids as so many pronatalists want to so you have something left after you. (I whiffed on the last one, though I admit I was never really all that interested and shocked people in high school by admitting it to them.) The rest is commentary.
You're just a monkey with a bigger cortex. I am. We all are. Enjoy your bananas, the end is coming for you sooner or later.
But: having a brain tumor is a really horrible stroke of luck, and I'm really sorry for you. I'm hoping for a miracle. I'd pray for you, but if He's up there, I doubt He'd listen to me.
I came to the opposite conclusion: I have to meditate enough so that I can withstand the tsunami of suffering which life will at some point dump on me. I didn't focus on the shortness of life, but on the long suffering at the end.
Also, maybe I shouldn't have kids, so I always have an exit hatch. Although, to be honest, I would not blame my dad at all for taking his life under these circumstances (not sure of the exact details of OP's situation, sounds like the mother is no longer around which does make it worse).
Of course whatever works for someone right? We don't all need to be walking the same path.
Do you know My Stroke of Insight by Jill Bolte Taylor? A neuroscientist reflecting on her altered perceptions from a massive stroke and the insights it led her to that were life-changing for her.
To me a brain tumor or a stroke or psychedelics providing a dramatic shift of perspective doesn't undermine the credibility of the perspective. It says to me that the normal well defined circuits of our ordinary thinking lead to X kinds of awareness/knowledge and exceptional experiences can produce extremely interesting Y kinds of awareness/knowledge. Like the difference between research studies and poetry. Are they not both valuable?
Agreed. The same things don't work for everyone. I was saying this had the opposite effect for me, not that it shouldn't work for anyone.
He isn't up there. (Yet. God growth mindset!) Save your hope for likelier things.
I do continue to think that in a sense, we're also all the same universe that's wearing our faces as its masks, using many brains and networking them together through language to figure itself out. That's just philosophy, didn't go away when the seizures went away.
People often ask how the methods of rationality can help in our daily life. The first part of this essay was an eye-opening insight into how rationality can help, when it is all you have left.
Very much so. My fellow patients were busy with painful thoughts like "this is because I'm so stressed" or "I knew it would all go wrong if I did not quit my job" at exactly the worst possible time for such shit. The Methods of Rationality seemed indispensable for navigating this.
Like some commenters here, upon reading the point about the survival/thriving tension, it intuitively felt correct to me. It is a simple but insightful theory. Thanks for sharing, and I’m sorry you had to go through such pain to discover it. I am amazed you were able to break through the pain and share it.
The theory sheds some light on a tension I’m experiencing between two groups in my life. I am part of a local YIMBY movement and have had several long conversations with local NIMBYs. Often it feels like we are speaking different languages. I could not understand how some NIMBY concerns, like tall buildings reducing the sun on their gardens or simply not liking tall buildings, could be in any way comparable to the huge exodus of people (many of them my friends) from my city because of the lack of housing.
Instead of just concluding these NIMBYs are garbage, Böttger’s theory gives a much kinder, much more actionable explanation for what’s going on. The YIMBYs are motivated by survival (i.e. people able to live in my city) whereas the NIMBYs have their housing and have now progressed to focusing on thriving.
Pattern Matching!
I don’t think the time-space part of the theory is correct, though. In my example the tension seems to be between (YIMBYs) the existence of housing and how to get to a minimum where people can even think of living in my city vs (NIMBYs) improving the quality of housing for those who have it. The idea of space efficiency doesn’t really fit in with ensuring increased housing supply, at least not to me.
What makes more sense to me is not a space-efficiency tradeoff, but rather a time vs. solution-quality tradeoff. In more computer sciency terms, it’s the tradeoff between the faster but less optimal approach of approximation algorithms vs. the slower but provably-optimal approach of the algorithms we learned in Algorithms 101. I’ve seen this dynamic most clearly in my upper computer science classes on NP-Complete approximation algorithms, which you can read about here: https://www.khoury.northeastern.edu/home/rraj/Courses/7880/F09/Lectures/ApproxAlgs.pdf.
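(A toy illustration of that tradeoff, my example rather than anything from the linked notes: minimum vertex cover solved exactly by brute force, which takes exponential time, versus the classic greedy 2-approximation, which is fast but may return up to twice as many vertices as necessary.)

```python
from itertools import combinations


def exact_vertex_cover(vertices, edges):
    """Smallest set of vertices touching every edge -- exhaustive, exponential-time search."""
    for size in range(len(vertices) + 1):
        for candidate in combinations(vertices, size):
            cover = set(candidate)
            if all(u in cover or v in cover for u, v in edges):
                return cover
    return set(vertices)


def approx_vertex_cover(edges):
    """Greedy 2-approximation: repeatedly take both endpoints of any uncovered edge."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return cover


edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
print(exact_vertex_cover(range(5), edges))  # optimal: {0, 3}
print(approx_vertex_cover(edges))           # fast, at most 2x optimal: {0, 1, 2, 3}
```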
Finally, I think Böttger’s suggestions at the end still apply regardless of which of the theories proposed here you prefer. I would just add one imperative to Team Survival’s checklist: try to convince Team Thriving that this is indeed a survival situation, because survival is more important than thriving. If the situation really is a survival situation, Team Thriving should be able to temporarily set aside their goals and join forces with Team Survival. Maybe we call the synergy of the two Team Flourishing?
Thanks again for the excellent theory Daniel!
The YIMBY and NIMBY labeling is surprising! My first intuition is (YIMBYs) want to optimize the current system to improve the lives of mostly strangers that can be doing much better vs (NIMBYs) want to keep everything exactly the way it is, we are doing fine and doing anything different might hurt, thank you very much.
I doubt it. Despite the label I think that "YIMBYs" are rarely people who actually own a backyard, they're either people who are desperate to move into the neighbourhood (and don't care if it gets slightly worse in the process) or property developers (who probably live in a different place anyway).
You could also reverse the survive-thrive classification here just as easily. NIMBYs are "survive" types, they just want to maintain what they already have. YIMBYs are "thrive" types who are trying to move up in the world, either from tenant to owner-occupier or from resident of a less popular place to resident of a more popular place.
Alternately, NIMBYs who worry that the wrong sort of people moving in and ruining the neighborhood could be said to have a "survive" mentality, whereas people who expect a neighborhood to improve with more development have a "thrive" mentality.
Id versus Superego. Some people are more driven by one, some by the other. It maps onto this theory and explains phenomena like the masochistic guilt shown by some groups to others (overpowered oppressive superego they identify with) and the sadistic glee by which those others concur with that guilt (overpowered wild Id with rejected superego.) There are many other terminologies for this core individual human conflict and how it might be projected out in various ways, but currently I find this one most elegant.
Is this not just short term vs long term goals, tactics vs strategy? Survive is tactics, thrive is strategy.
Every specialism has its jargon, primarily to avoid long-winded descriptions. This appears terse to the outsider but it's just efficiency. Knowing the lingo creates an in-crowd, especially if they're working together every day. Plus a medic needs to distance themselves to remain objective and do their job. These can combine to make the patient feel excluded. When you speak it's as if you're a distraction from their private business and it makes you feel like a child interrupting a grown-up conversation.
[I realised after posting this that doctor-patient is classic parent-child in TA]
re sorting books: surely you just repeatedly start from the beginning, swapping adjacent books if they're in the wrong order? ;)
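(Spelled out as a sketch, for anyone who wants to run the joke: that is BubbleSort, quadratically many comparisons but never more than two books in hand at once.)

```python
def bubble_sort_books(shelf: list) -> None:
    """Repeatedly walk the shelf from the start, swapping adjacent out-of-order
    books, until a full pass needs no swaps."""
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(shelf) - 1):
            if shelf[i] > shelf[i + 1]:
                shelf[i], shelf[i + 1] = shelf[i + 1], shelf[i]
                swapped = True


books = ["Ovid", "Borges", "Clausewitz", "Aristotle"]
bubble_sort_books(books)
print(books)  # ['Aristotle', 'Borges', 'Clausewitz', 'Ovid']
```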
I know too little about military things, haven't even read Clausewitz. Do they have a standard resolution mechanism for differences of opinion between tactics and strategy? I can imagine armies where strategy always wins via chain of command, but I can also imagine armies where the folks in the trenches decide what elements of the strategy are actually doable. War gives pretty intense feedback though, so over the millennia, centuries and decades I would expect some kind of convergence.
Not gonna get dragged into BubbleSort. :-)
In military terms (and I know less than you do) I presume strategy sets your objectives and tactics is how you achieve them. But "No Plan Survives First Contact With the Enemy".
re BubbleSort: if you were in a wheelchair and all your library was on a single floor, that'd probably be the most efficient. My tangential point was that algorithmic efficiency isn't set in stone: it's about having a large library of possibilities to choose from.
Agree on all points.
I agree that the main distinction here is between long term goals and short term goals.
Wish there was an upvote button so I could signal agreement more concisely.
Having read the theory, I feel like the author's belief that it will contribute to saving the world is a grandiose delusion. In light of the line "Although I can’t speak for its world-historical importance", I infer that Scott also thinks this, although he is surely too polite to express his doubt in those particular words. This causes me to suspect that Scott has published the essay primarily out of sympathy for a very ill man who describes himself as adjacent to suicide (no one says "I can’t do that to the kids" unless they've thought about it real seriously).
I would prefer Scott not pity-publish delusions.
I know it's very cruel to say, but I totally agree with this. To be fair to the OP, maybe he meant "contribute to saving the world" in the same way that donating $100 for mosquito nets contributes to saving the world--it's a small contribution, but it is a contribution nonetheless. That said, I don't believe this post contributes as much to saving the world as donating $100 to charity.
> in the same way that donating $100 for mosquito nets contributes to saving the world--it's a small contribution
How does saving societies that are doomed regardless contribute in any way to saving the world?
I'm torn on whether I agree with this. I share your view that the idea that this is world-saving is a delusion. I'd imagine some heightened sense of salience helps with the pattern matching but also tends to over-assign a sense of importance to those patterns once identified.
In principle, I'm against pity-posting, but I do think this is an excellent essay. I found it an interesting theory written about in a clear and engaging way. I just don't think it's a world saving theory, or anything close.
If his premises were correct, perhaps it would. The delusion, alas, is a common one: that this 'thriving' as described exists or could exist. It doesn't matter if one carries this delusion forward into 'fully automated luxury space communism', the error infects everything.
'thrivers' aren't mistaken that we 'survivors' are hostile. It's not a communication problem, but a reality problem.
Whether you agree that the piece is interesting or useful or not, it seems unnecessarily harsh to say Scott published it out of pity. The OP is a decent writer telling a powerful story and offering up a theory he took from it. It doesn't need to save the world to be an interesting offering. Way less interesting things have been posted in this space.
This is exactly why I wrote the first part of the post. You should doubt this theory, because I was definitely not thinking straight. That's why I repeatedly emphasized that in the text.
Still, at the same time, it is also an idea that might make sense, might be trivial even, and maybe the only reason it has not been described before is that nobody had the same weird combination of circumstances (intensive care, Algorithms 101, nurse mother, having read Scott's thrive/survive, compulsive theorizing etc.) and time to write about it.
It was not pity. We've exchanged businesslike emails over the previous guest post. I've met Scott for all of five minutes, years ago, and he said he "kind of hated" the theory I had back then.
Does a morsel of food save your life? Maybe, maybe not. But food is necessary for survival.
If I were striving for accuracy in describing delusions, I would say that "delusional" is a property that applies to people's thought processes rather than belief content per se. And delusional thought processes exist on a continuum (or at least share a space; I don't want to imply unidimensionality) with non-delusional insight-generating processes. Delusional trains of thought may pass by reasonable and interesting stations before hopping the tracks. Ideas that have been kicking around your head start getting integrated, and sometimes you hit on novel solutions to actual problems. So I don't think you can exactly prohibit the publication of delusions, and I don't think you should prohibit the publication of those who are delusional.
But I agree with others that the author's assessment of the importance of this piece (I hesitate to say "idea" because I'm having trouble isolating one core insight--sorry Daniel, I think you're throwing off lots of sparks and heat but I'm concerned that it's secondary to the collapsing structure of your conceptual space) is way off base, and some features of the writing are very familiar to me as an occasional rider of runaway trains. The ideas expressed are cool tools but not paradigm shifters, and the way they are strung together in this piece makes me worry for the author (sorry Daniel, it's not pity, just worry, and I'm glad you have good German doctors and all but you're exactly the kind of person who would excel at concealing a burgeoning psychosis). But I can think of good reasons to publish in spite of this, and by my lights it doesn't take much...affirmative action for the sanity-challenged to bring it up to par, if that was even a consideration. I wonder if Scott's motivation wasn't in part to help them both (thinking of the parable of Sally the psychiatrist here: https://slatestarcodex.com/2017/03/24/guided-by-the-beauty-of-our-weapons/) test their expectations against a larger group of people.
"Saving the world" is a grand and high-variance claim. If you're not completely on board with such a claim, but still think there is value in the perspective, it makes sense to hedge againt the high variance part. It looks like Scott did that with the intro.
I found this essay extremely interesting and insightful. Powerful even. I read this one all the way through; it was gripping. I sent it to family. Will it contribute to saving the world? Maybe! Most things don't, so the prior is low, but many big ideas that eventually work start out as low probability ideas.
This... seems very explanatory. A simple but powerful theory, with enough secondary correlations ("the Moore Law will just keep working, so we can be cavalier with memory") to be convincing. And very ACX-y.
This is an alternative framing of the old problem of decision making given certain risks and certain opportunities. Be overly cautious and you'll be "survival" biased. Be focused on chasing the shiny opportunities and you'll be "thriving" biased.
It's clear we don't frame our policies, societies and behaviours following this dichotomy, but just determine case by case what to do - especially considering that assessing risks and opportunities is far from an objective endeavor.
So I don't think this framing is particularly interesting or useful.
If you think this is just a framing, you don't understand Algorithms 101. This is an objectively solvable math problem. In fact much of it is already solved. I said there are methods for integrating algorithms of these different types and I meant it.
If you want to use this, don't focus on the meta-levels of societies and policies, go to the interfaces of collaboration and do math.
I happen to be a math guy. You got me curious: what kind of math problem is this - except for the one I already saw: a game-theoretical, utility-optimization, subjective one?
PS: framing here doesn't have any specific meaning as you seem to imply. Framing is the "kind of representation" we give to an issue.
I have to say, with all the talk about tumors and communication, my main takeaway is that a vehicle with five occupants hit a tree at 60 mph, and EVERYONE walked away. There can't be a better advertisement for whatever vehicle that was.
That's an advertisement for the Survive mentality that drives vehicle testing regulations. If the regulations had been the product of Thrive mentality he'd have hit the tree at 120 and nobody would have survived.
If we had an actual survive mentality, we would ban cars because of the absurd number of casualties they cause. What we have now is an optimum. The increased efficiency more than justifies the deaths it brings.
Yes, cos dead men don't argue, ie survivor bias.
Obviously it doesn't matter what dead people think, but living people are far more useful than dead people. That still doesn't mean the optimal number of preventable deaths is zero.
"dead people" who didn't actually die would be living people so your utility argument doesn't wash. What I'm saying is that there's nobody to present their point of view, not first hand anyway (and the rest is hearsay).
The same happens with war: you only ever meet the survivors so you inevitably get a one-sided view, and even that is highly selected for optimism.
Honda Civic. Yes belts and child safety seats and airbags are awesome. First responders too, and the Good Samaritan who called them and pulled me out. It's not all just the car.
I am preregistering my attempt to solve a personal, unsolved-for-twelve-years problem using the thankful theory. Will report results. If no results are reported for 7 days, I probably forgot to do it and the lack of a follow-up should not be taken as evidence for or against the theory.
Me and the other party agreed that the theory describes us surprisingly accurately, but results would probably take months of better communication to become noticeable. We are optimistic though! Might report later if I have anything unusual to say.
How do I sign up to get scanned to make sure I don't have cancer in my body?
I'm not a doctor, but I'm not sure there's a good way to do this. If you don't have any symptoms (or some other risk factor, like age/family history), a doctor probably won't prescribe a scan for specific body parts/diseases, let alone your whole body. As far as I know, such scans would show lots of random, weird-looking anomalies, most of which mean nothing at all. The evidence that you have cancer would be very weak compared to the prior that you don't, unless you happen to already have something that's progressed a bunch. So doing this scan would be pointless, and possibly even counter-productive (e.g. you might end up trying to get dangerous surgery to remove a harmless cyst).
(I could be wrong about the above, but it's the impression I have.)
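(To put rough, entirely made-up numbers on the base-rate point above: even a fairly accurate scan for a rare cancer produces mostly false alarms.)

```python
# Hypothetical figures for illustration only: a cancer with 0.1% prevalence and a
# scan that flags 90% of real cases but also 5% of healthy people.
prevalence = 0.001
sensitivity = 0.90           # P(positive scan | cancer)
false_positive_rate = 0.05   # P(positive scan | no cancer)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive
print(f"P(cancer | positive scan) ≈ {p_cancer_given_positive:.1%}")  # ≈ 1.8%
```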
I'm pretty sure some Youtuber went into great detail about the horrifying consequences of testing for too many things, and then trying to treat problems that get found but aren't really a problem.
I think it was this guy: https://www.youtube.com/watch?v=7kQk9-KLPfU
But this was not the video I am thinking of. In the video I'm thinking of, he mentions a woman who has a tumour in her breast, and gets a mastectomy (or something) and has complications, and it's awful, and she really should not have bothered.
Basically... when you get tested, even if the test is mostly inconclusive, if there's something that COULD be bad, no doctor will recommend against further testing and procedures, since that might lead to their being removed from the field, whereas suggesting extra (in the end harmful and/or useless) testing and procedures will never result in the same loss of status or job.
So. Bottom line: don't get tested unless there are symptoms or family history.
Thanks for the link. The statistics are something I already knew, but it's good to know that at least one doctor thinks about this question in the way I would expect.
Doctor Vladimir Alipov (Dysphorra YouTube channel) described the same on his streams. Your description is very similar to what I remember from his streams.
I also have a glioma in my temporal lobe and I also have the same history of many (10? 20?) years of ignoring symptoms and having doctors send me away saying nothing to worry about. My tumour is a rare form called gliomatosis cerebri which grows to multiple lobes. I am up to six lobes and two hemispheres now.
I was officially diagnosed 2½ years ago. My neurosurgeon wanted to do a craniotomy but I declined because of the risk of probable defects and the certainty that he would not get all of the tumour. Up until this year, my symptoms were only minor and annoying but I have recently started to have massive seizures and memory problems. I am lucky that I have had no pain.
I'm not sure I frame my experience in terms of thriving and surviving. I decided very early on that survival was not important to me. I needed to survive long enough to make sure my wife and family are ready to manage without me. In all this time, out of a couple of hundred friends and acquaintances, I have only had two who insisted that I should be focused on trying to survive. The rest accepted and respected my decision.
Thriving has not been at the top of my priorities either. I am starting to lose my memory (which sucks) and I know that I have loss of all my faculties ahead of me but I am OK with all that too. My wife likes to pretend that everything will be OK but — even there — she respects my wishes and communication has not been a problem.
I wrote about my fun with gliomas here:
https://www.raggedclown.com/2024/09/20/story-so-far/
And a little about dying here:
https://www.raggedclown.com/2024/03/23/how-to-die/
Good luck to us both!
FWIW I am also a software engineer and I don't think of my experience in terms of time- versus space- efficiency. It never occurred to me and, even now, I don't think it applies. I am not optimising for anything. I'm living my life the best that I can and one day it will end. I am good with that.
Back in the paleocomputer age it was a daily consideration. Save a few bytes at the expense of more instructions in rarely executed code, spend a few bytes to save a few instructions in frequently executed code. That was many a systems programmer's daily life.
And you're optimising for "bestness"! :)
I think you are right! I do this in my life and in the software I build too.
I always want to build the "best" system that I can. Time-efficiency or space-efficiency only rise to the top when they become concerns. It's usually a tiny percent of code that causes efficiency problems and we can fix them when they become a problem.
I was also there near the end of the paleocomputer age. I think a lot of systems programmers bring along those paleo ideas when they are not necessary.
"I think a lot system programmers bring along those paleo- ideas when they are not necessary."
I'm sure you're right. There's personal pride in writing tight code.
The more important target is maintainability and future-proofing (and nowadays freedom from malware). So many programmers just grab the nearest module off the web to save "reinventing the wheel" and then two years later when it disappears everything goes belly up, or you find all your data has appeared on the dark web. Long live NIH! ;)
Right! I'm a big fan of NIH.
I strive for maintainability and user-friendliness rather than speed or size. Of course, I like my code to follow the basic rules of efficiency, but optimisation can be put off until it is needed. It usually isn't.
Profilers ftw.
90% of optimisation is design not coding, so it's easier if you get it right first time (but that's trivially true for everything!).
I started my IT studies in 1998. There were many more computational resources than before, but the institutional/professorial memory of needing to mind your memory and execution time was still there.
There are still many areas today where these considerations matter, for professionals and hobbyists alike. Computer sciences. Data engineering. Video games. Server software. Trading software. Granted, time efficiency is usually more important than space-efficiency nowadays, but there are examples where the latter still matters, such as the demoscene.
Hadn't come across "demoscene". I see it's a thing (and not just the name of some forgotten geological epoch! Demos means people, so it would be an alternative to the Anthropocene).
It's the demo-scene. A scene of people who show each other impressive technical demos, especially ones developed under tight constraints, such as less than xx bytes of code, or running on very restrictive legacy hardware.
Thanks yes, I Googled it.
Unless you are dealing with huge datasets like Google or Amazon, writing basic, efficient code is sufficient for most applications. You can optimise it later if it needs it. It usually doesn't.
Lucid insight into insanity is rare and precious.
As an algorithms person, I found your space vs. time analogy very muddled. In a basic model of computation, algorithms aren't at risk of having their memory overwritten. There's no fundamental reason why robustness should equal space efficiency.
I have a similar objection to the survival / thriving analogy. Yes, there are some parallels that can be ported over, but it doesn't really match the immediate vs intellectual modes of problem solving or communication.
That said, your model does make sense. I know viscerally the clusterfuck it can be when trying to wrangle with urgent / high-stakes situations using intellectual communication norms. Effective people need to learn to switch between these modes as the situation calls for.
> There's no fundamental reason why robustness should equal space efficiency.
I disagree. Random damage - minefields, artillery bombardments, suppression / saturation fire from a machine gun, cosmic rays corrupting a storage medium one bit at a time - can be expressed in terms of average spacing between hits, that is, how large a contiguous uncorrupted area can reasonably be expected to remain.
If a soldier lying prone in the tall grass has two square feet of cross-sectional area in which he'd prefer not to be shot, and that whole field has been hit with one bullet or fragment per square foot, average soldier in said field will have been hit twice. If there were a large number of soldiers, some may have survived (probably about as many as were hit four or more times), but not enough for the overall group to remain effective. Quadcopter drones with less than one square foot of vitals each might see a far higher survival percentage, thus more overall functionality remaining after the shooting stops, under those same conditions.
Similarly, if a given algorithm needs two megabytes of RAM per iteration (that is, before it has enough progress for some sort of quick parity check to provide a useful answer about whether it succeeded, or failed and needs to be redone), and the RAM it's trying to use has an average of one unrecognized corrupt sector per megabyte, that algorithm will probably be disrupted far too often to accomplish anything. Something more space-efficient, capable of making provable progress within less than a megabyte, could still function under those conditions.
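(A back-of-the-envelope sketch of that point with made-up numbers: if corruption lands at random, the chance that an algorithm's whole working set gets through one iteration untouched falls off exponentially with the size of that working set.)

```python
def p_iteration_survives(footprint_kb: float, hits_per_mb: float) -> float:
    """Probability that a contiguous working set takes zero random hits during
    one iteration, treating hits as independent per KB. Illustrative, not measured."""
    p_hit_per_kb = hits_per_mb / 1024.0
    return (1.0 - p_hit_per_kb) ** footprint_kb


for footprint_kb in (16, 256, 2048):  # small vs. large working sets, at 1 hit per MB
    p = p_iteration_survives(footprint_kb, hits_per_mb=1.0)
    print(f"{footprint_kb:>5} KB working set: {p:.1%} chance of an uncorrupted iteration")
```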
Plus it's easier to debug small programs (obviously). There are more bugs in Word than there were in WordStar.
Yes but I think the main reason isn't the amount of memory, it's the unreliability of every bit or neuron.
Digital systems yes, but I doubt brains are that sensitive to a few faulty neurons (or LLMs to a few faulty links) tho I guess it might come down to how the fault manifests itself. The brain must have ways to deal with neurons that get stuck in firing mode - I guess the recipient neurons just get desensitised.
Roger.
I've had a couple crises of my own (different, and probably less excruciating than Daniel's), and this space- vs. time-efficiency model fits some of what I've observed in them.
In the future, I'll have to explicitly switch to space-optimized strategies when I'm in them. That might help quite a bit; the feeling of my empty bookshelves shrinking randomly when pain hits is familiar.
I think this is a spectacular post. I need to think about whether space-efficient vs. time-efficient is a good dichotomy, but it's at least promising.
For what it's worth, I didn't find the set-up to be too slow, and I didn't anticipate the conclusion. I don't have a background in computer algorithms.
I am glad to have read this post. It ties in very well with the Principle of Charity that SSC was originally based around. I don’t think it was necessarily groundbreaking, but perhaps that is because I was already practicing its lessons but others were unaware.
> space-efficient rather than time-efficent
This is a "galactic algorithm" (pejorative term) kind of concept. You can find good programmers who dismiss the concept entirely, and I bet you could dismiss it as highly improbable that the brain is optimized on the space frontier: good compression seems to be serial and very fragile, while your evidence is the brain recovering data, and the brain seems fractal and highly parallel.
> Space-efficient communication can’t cache a long message to be communicated, or a long message received to be understood. Therefore space-efficient communication has to rely on short, atomic messages, which in order to be informative have to be pre-agreed.
I'm pretty sure this is just flatly false. The best compression uses sub-bit entropy; any missed bit causes a cascading misalignment of plausible data.
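(Both failure modes are easy to see side by side with a toy sketch, which is my illustration and not a claim about how brains work: flip one bit in a zlib-compressed stream and the whole message is typically unrecoverable, while flipping one bit in a pre-agreed one-byte-per-token codebook costs exactly one token. The example codes are hypothetical.)

```python
import random
import zlib

random.seed(1)


def flip_one_bit(data: bytes) -> bytes:
    """Flip a single randomly chosen bit in a byte string."""
    i = random.randrange(len(data))
    return data[:i] + bytes([data[i] ^ (1 << random.randrange(8))]) + data[i + 1:]


# Dense coding: DEFLATE via zlib. One flipped bit usually makes the rest of the
# stream undecodable or fails the checksum, so the whole message is lost.
message = ("oxygen low; " * 40).encode()
try:
    recovered = zlib.decompress(flip_one_bit(zlib.compress(message)))
    intact = sum(a == b for a, b in zip(recovered, message))
    print(f"zlib after a 1-bit flip: {intact}/{len(message)} bytes still correct")
except zlib.error as exc:
    print(f"zlib after a 1-bit flip: message unrecoverable ({exc})")

# Pre-agreed codebook: one fixed byte per short atomic message (hypothetical codes).
# A flipped bit corrupts exactly one token; everything else still decodes.
codebook = {7: "oxygen low", 8: "BP dropping", 9: "stable"}
tokens = bytes([7, 8, 9] * 40)
damaged = flip_one_bit(tokens)
decoded = [codebook.get(b, "<garbled>") for b in damaged]
bad = sum(1 for orig, got in zip(tokens, decoded) if codebook[orig] != got)
print(f"codebook after a 1-bit flip: {bad} of {len(tokens)} tokens corrupted")
```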
Don't think compression. Human communication is more like packet switching on a lossy network.
The brain is parallel, but on the wetware it implements usually one, sometimes two, phonological loops. There's another guest post on this. :-)
> the wetware it implements usually one, sometimes two, phonological loops
My reading of such things is that *every* neuron is a loop, and that the structure of the brain is very loopy. I don't know where you're getting the claim that there are two loops, given that there are at least feedback loops from chemical systems; to be *extremely crude*, your sleep cycles depend on a daily cycle of hormones that float around the brain, and I don't know what the monthly cycle for women is based on, but it seems to be ... monthly, and a cycle.
It would take me hours to re-find the sources, but I vaguely remember someone attempting to simulate one neuron with a neural net and finding it takes 34 artificial neurons in a feedback loop to replicate the behavior with some degree of accuracy; and there was someone mapping out a monkey's visual cortex as if it were an electrical diagram, and it was a giant mess.
> Don't think compression.
Why? It's the space-efficiency frontier; why wouldn't the data be compressed?
If you disagree with my first argument, how about a second: it's unlikely that brains are space-efficient, because compression is energy-inefficient, and running on physics should provide cheap shortcuts of computational value. Consider ant pheromones and ant pathfinding: they use physics to outsource decentralized pathfinding to a massively parallel diffusion "matrix" (the air) that does a lot of free computation for the ant, but not for our machines. Why shouldn't brains use hormones as lazy, slow, probabilistic computation systems, much like ants do with the air?
Yes, the brain does lots of loopy things; that was literally my previous guest post. But not every neuron is a loop, only a few "pacemaker neurons" that do essential stuff like the heartbeat. The phonological loop is different, though. It takes a lot of neurons, leaving little room for another. You can hear a podcast while singing a song that you know by heart, but it's effortful.