I saw the article mentioned elsewhere and couldn't believe it. "Why do people make such a fuss about addiction? There's no such thing! I'm not addicted, I've just been taking heroin every night for five years!"
The test is stopping it, which he seems to have done when he wanted to. Similarly, I'm not addicted to caffeine, although I consume a lot of it, and the evidence is that I can go without diet coke for two weeks of Pennsic.
I don't know if he did stop, though. What he seems to have done was increase his dosage, then stop taking that increase, and claim to have no bad effects. He doesn't say he completely stopped taking it, and he's using the "no addiction, I proved it" argument to bolster "so me taking it every night is not being addicted" - plus he seems to like throwing some other hard drugs into the mix every now and again. I think I'd like to hear from his wife and family about their opinions of his state, not self-reporting.
Temporarily stopping completely doesn't prove it's not addictive/not addiction either; there are many smokers who stop for a while but then take up smoking again because they can't get on without nicotine.
Same here, I enjoy using podcasts to fill the "empty time" when I'm commuting or working on something non-semantic like cleaning or photo editing. Some that would be interesting to people here are Lex Fridman (interviews with tech experts), 80,000 Hours and Julia Galef's Rationally Speaking (interviews on rat-adjacent topics), Bayesian Conspiracy (ratsphere chitchat + sequences discussion), and Science & Futurism with Isaac Arthur (review of future technology and SF topics).
(I was originally going to comment "Where?", but then I realized that this is probably something that could be found with a quick search, so I did a quick search and found it.)
Podcasts are made by people who are too lazy to write, for people who are too lazy to read. They're utterly inferior to text in every single way. You can't ctrl-F your way through a podcast. You can't skim a podcast. You can't cite or look up citations on a podcast. You can't load a podcast on a lousy connection. You can't listen to a podcast in a crowded or noisy space. You're limited to the rate at which people speak, which is much slower than the rate at which you read. If a podcast host names a thing or a person or a place you don't know, you can't look it up. If a podcast host has an accent you don't understand or the sound quality is noisy, you just suck it up.
The only exceptions are when the audio medium is the *point* of the podcast, such as sleepcasts, meditation, DJ sets, or language learning.
This x 100. I hate the shift from writing/reading to podcasting-YouTubing/listening-watching. It’s the worst thing that has happened to online communication since 1985 (when I started participating). The comment above covers the general problems with it. I have the additional personal issue of some cognitive issues that make auditory processing much harder than visual. I’ve had to stop various online activities over the years as things shifted from text to voice.
Wow, totally disagree. I think that audio is a much better means of communication than text, not just because the voice is more versatile than the written word [in terms of pitch, tension, pauses/spacing, etc.], but also because I think the "improvisational" style of a conversation gives much more insight into the nature of the speaker than a polished piece does for an author. There's also the fact that nearly everyone has had more practice speaking than writing.
Do you still think they're utterly inferior to text in every single way?
> the voice is more versatile than the written word [in terms of pitch, tension, pauses/spacing, etc]
Depends on the podcast. Obviously if you're talking about art performances, recitations, singing, that sort of thing... non-verbal information is crucial. As I said, whenever the point of the podcast is the medium itself, obviously audio is better. For everything else, where the podcaster is trying to communicate some piece of information, opinion, etc., why would the non-verbal info matter? I guess it might bring some kind of stylistic point, but that's largely outweighed by all the practical drawbacks (and you can also convey stylistic points in writing).
> I think that audio is a much better means of communication than text, not just because the voice is more versatile than the written word [in terms of pitch, tension, pauses/spacing, etc.], but also because I think the "improvisational" style of a conversation gives much more insight into the nature of the speaker than a polished piece does for an author.
Why would you care about the personal information of a speaker you don't know and who doesn't know you? It's not a conversation, it's a podcast. You're not having a social interaction.
>Do you still think they're utterly inferior to text in every single way?
Yes. I never listen to any podcast - instead, I read the transcript if available, and if it isn't, I shrug and move on.
There are many different types of podcasts, and not all are pure info dumps. Some are just people having interesting conversations, in a way that doesn't work with text. And a lot of the appeal for others is the way the hosts interact, despite your assertion that it doesn't matter. It's not a matter of laziness, it's something you fundamentally can't have in text. You might not be interested in those, but then it sounds like you're just saying things you don't like are inferior.
And podcasts do have some inherent advantages, like the fact that you can listen to them while doing things like chores or commuting (I don't know why you think you can't listen to them in crowded or noisy places, headphones are a thing).
1. Non-verbal communication is important even if you only care about informational exchange and not about the aesthetic quality of the piece. This is because vocal realization allows for a lot more subtle emphasis than I can do with writing. This is far more than a stylistic point -- it's often essential for understanding (perhaps this is why our voices evolved to be so versatile in the first place). This is why it's valuable e.g., to listen to an audiobook of an author reading their own work, even when you're listening purely for informational content: you might learn something!
[strictly I think this point alone should prove that speech is not utterly inferior to text in every single way. Speech is higher-fi than text, and this is a big reason why you have to consume it more slowly. This comes with advantages and drawbacks].
2. I care about the personal information of the speaker because I think it's relevant to what they're saying. How does the speaker treat themselves and others? Do they really believe what they are saying? Are they present with/trying to understand their interlocutor or are they reciting points from memory and not really listening? These and others will affect how I take their point.
3. Speech, as you acknowledge, is better than text for conversation (and anyone who's ever texted recognizes this). I realize this doesn't bear on your point about broadcasts, but it does bear something on the utter inferiority of the medium. I.e. if you want to consume a conversation as opposed to a treatise or essay, speech will be a better medium.
I think we're talking past each other because we have different ideas of what a podcast is about, and until we get into specifics we're probably not going to find common ground. Your points are probably valid for the kind of podcast you listen to, where the aesthetic value is visibly linked to the audio medium - such as audiobook readings. Again, I'm not disputing that there's a place for such podcasts, whose very point lies in the medium.
>I think this point alone should prove that speech is not utterly inferior to text in every single way.
I didn't say speech was inferior to text, I said podcasts are inferior to their written equivalent. Obviously, like millions of people at this moment, I'd rather have a live conversation with friends than text them.
>if you want to consume a conversation as opposed to a treatise or essay, speech will be a better medium.
I think here lies the crux of our differences - I don't see the point of listening to a conversation I'm not a part of. The veneer of interactivity and casualness is fake, since I cannot in fact interact with any of the participants. If someone had something interesting for me to hear, I'd be better served if they said it in an elaborated format that's easy to get through, skim and re-use (i.e. text). But again, I think we have to get into specifics if we want this discussion to get anywhere.
>Your points are probably valid for the kind of podcast you listen to, where the aesthetic value is visibly linked to the audio medium - such as audiobook readings.
I agree that we're probably talking past each other (hard to communicate over text, I suppose ;)), because for this point I explicitly brought up the nonfiction audiobook as an example where the audio format was not necessary to the message, but still aided "purely informational" understanding (e.g. by including richer phrasing, emphasis, and expression).
A slightly stronger emphasis, e.g., doesn't just sound pretty, it can change the whole meaning of a sentence.
[Perhaps some writing system could seamlessly include these features, e.g. varied spacing between words, subtle levels of emphasis, slight dialectical/accent differences, etc. to the fidelity that speech does but none does even though doing so would clear up ambiguity.]
>I didn't say speech was inferior to text, I said podcasts are inferior to their written equivalent.
Interaction is information, and interaction is done worse over text. Pycea's comment is relevant here: " a lot of the appeal for others is the way the hosts interact, despite your assertion that it doesn't matter. It's not a matter of laziness, it's something you fundamentally can't have in text."
9 of them host a show in conversation or interview format. Insofar as speech is a better format for conversation than text is, and many popular podcasts are conversation-based, many popular podcasts are not better off as writeups.
>I think here lies the crux of our differences - I don't see the point of listening to a conversation I'm not a part of. The veneer of interactivity and casualness is fake, since I cannot in fact interact with any of the participants.
That's just, like, your opinion, man.
More seriously: Listening is a social interaction (as is reading fwiw). It is interaction because it requires active effort from both parties -- it is as much work to listen well as it is to speak well.
I also think that this rebuttal misses the (central) point stated above that social interactions contain important information about the content of the argument, and so insofar as conversations are higher-fi wrt interaction specifically, they trump text in this regard.
Do these responses fairly represent and adequately address your concerns?
Insight into the nature of the speaker can be a negative as well as a positive. Writers like The Last Psychiatrist specifically hid as many details about themselves as they could, so that the arguments would speak for themselves and there would be no insight into the nature of the author. It wouldn't surprise me if Scott preferred being evaluated for his ideas rather than his personal idiosyncrasies.
imo nobody can avoid telling who he is. E.g. The Last Psychiatrist, by deliberately hiding personal details, tells me a lot about what he is like [private, maybe a bit neurotic, straining to be objective], and what to expect from his blog!
Hard disagree. My brain is very well evolved to follow conversation, and much less so a lengthy text. While I prefer text for study and review of dense material, a podcast is a much easier way to absorb adequate introductory information about a topic—which is often all that's required.
If a crowded environment or slow speaker is your issue: buy noise canceling headphones and speed up playback rate to 2x or more.
If they say something you don't know about: feel free to pause and look it up.
If the quality is so poor you can't follow it: don't listen to that podcast. If a text is so badly written that I can't make sense of it, I don't read it. Why would I not follow the same principle in audio?
>My brain is very well evolved to follow conversation, and much less so a lengthy text.
No, it's not. You learn to speak just like you learn to read.
>If a crowded environment or slow speaker is your issue: buy noise canceling headphones and speed up playback rate to 2x or more.
Yes, these are clumsy solutions to a problem that shouldn't exist. Kind of like Microsoft praising the quality of its defragmenter when Linux filesystems don't fragment at all.
>If the quality is so poor you can't follow it: don't listen to that podcast.
The point is that someone could have something interesting to say but just be bad at setting up audio (which is a separate skill and even a job for some people). If they had stuck to writing text this wouldn't have come up.
This is purely anecdotal, but when it comes to complex pieces I can follow the thread much easier if I'm getting the information via audio vs. text. If a book contains numerous digressions and side-beats that branch off from a main topic I find myself having to go back and reread prior sections pretty often to stay on track whereas if I'm listening to it I can follow it effortlessly.
I agree that if I'm getting a focused info-dump on a topic I'd rather have text than a podcast, but for more digressive works - audio is better.
>No, it's not. You learn to speak just like you learn to read.
We learn to speak via osmosis as soon as we are born and have to learn to read by explicit instruction. We have spoken for maybe 1 million years and been writing for maybe 10,000.
> The point is that someone could have something interesting to say but just be bad at setting up audio (which is a separate skill and even a job for some people).
So is writing interesting yet easy-to-follow blog posts. People like Scott, for whom that skill comes naturally, are quite rare.
You CAN, however, listen to a podcast while doing things which otherwise occupy your attention, such as jogging, cleaning, or driving. They are very handy at those times.
"Podcasts are made by people who are too lazy to write, for people who are too lazy to read. They're utterly inferior to text in every single way"
There are reasons to hate podcasts and reasons to like them. I suggest that you're assuming that other people are so much like you, fundamentally, that there's no legitimate reason for anyone to make podcasts or listen to them.
I think the problem is people mixing up their reasons for listening. For establishing a parasocial relationship, or otherwise providing a simulation of the social experience, podcasts are superior. For the conveyance of information, text beats audio in almost every way. But in the case of a lot of popular podcasts (i.e. Rogan, Harris, Fridman, etc.) people think they're doing the latter when they're really doing the former (and are barely aware of it), and are therefore enjoying the experience more than that of reading a book, but actually learning and retaining a comparatively small amount of information.
I challenge anyone to listen to one of these three hour podcasts and actually measure what was retained an hour later, and then do the same with a book.
You're only addressing the 'demand side' of podcast production (and I mostly agree with you).
But the _supply side_ is different. I don't like listening to (audio-only) podcasts but I can watch some of them and I used to really like Joe Rogan when he was posting his shows to YouTube. It's probably _much_ easier to get him and his guests to _talk_ than it would be to have them write, e.g. a blog post.
I worry doing that and waiting for responses would be enough of a trivial inconvenience that I would stop doing those threads. I might try it once or twice, but I'm not optimistic.
Perhaps we could have a norm that community members put "Please don't quote me" in their bios? (1) Click the commenter's avatar, (2) Check whether it has the request, (3) Proceed with confidence.
That'd be smooth enough. There are only 5-8 comments per highlight post, so not too much workload for Scott, and it wouldn't bog down the comments section with a bunch of disclaimers, or create some meta-disincentive to post because you don't want to highlight the fact that you don't want your post highlighted.
Why can't people just prepend their comments with "don't quote" or "quote anonymously"?
It's a little extra work for them but it's less work for Scott and they're commenting on a public forum and then not wanting to be quoted.
I understand someone making a comment or two that is maybe a little more revealing than they'd like so they prepend it with request not to quote.
But if your personal policy is you're so scared of being quoted you don't want any of your comments highlighted, maybe commenting online isn't for you, because there are plenty of other people who won't respect your wishes.
I think it's similar to having a blog and not wanting anyone to link to it. Sure I can understand that for the occasional blog post, but if your policy is ask everyone to never link to your blog maybe you should make your blog private.
Most quoted posts seem like "effort posts" that the person wanted to share: either correcting misinformation or just sharing a big infodump. It doesn't seem like there's much risk that a casual comment like the one I'm currently writing will end up there, and I don't think it's arrogant to assume that a correction or infodump might be worth sharing.
I don't understand how someone, even mistaken, about whether someone else's comment is quote-worthy, could be arrogant. What's arrogant about quoting someone else?
I think you didn't understand me. I would be reluctant to label my own comment as "do not quote" because it seems to assume that what I say is quote-worthy. Even if it is, it seems arrogant for me to *assume* it is.
But kaminiwa's observation makes the whole topic sort of moot.
I'd just like to throw into the conversation as someone who's recently been quoted in a Comments Highlights post that it absolutely made my day. Of course there are people who feel negative about it and that's worth considering, but I don't want positive experiences to be lost in the conversation.
Same, I was thrilled to get quoted. Given that the comments are, after all, public, I think we ought to lean towards quoting-by-default, with the expectation that if you don't want to be quoted you preface accordingly or put it in your profile.
This might be a bit clunky, but there's also the option to stick a disclaimer at the end of posts like "If you don't want your comment highlighted in a followup thread, please mention that". Though if you don't know ahead of time which posts you'll highlight comments for, putting it at the end of every one may be weird.
If Substack is anything like news websites, people use fake emails to sign up most of the time, and most people won't be notified, losing Scott a significant percentage of comments on average.
Maybe only do this for particularly significant comments, like insider information, people risking their jobs?
Why not just reply in a public comment asking if they're willing to be quoted? The commenter could always email Scott directly if they don't want to answer in public.
Scott would have to wait until they respond or set some kind of opt out time threshold, which probably amounts to as much if not more of a trivial inconvenience than just pinging the people in question (which Scott indicated was likely already enough of a threshold that it would make him stop doing these posts altogether).
I'm a bit confused about the warm-bloodedness thing. I was trying to look this up actually because I've just been kind of confused generally as to what the distinction is supposed to be between endotherms and homeotherms-but-not-endotherms. (I am not any sort of biologist, if that wasn't clear.) Like, OK, poikilothermy seems obviously distinct, but once you're within homeotherms, I was like, I don't really see an obvious distinction between these different methods of maintaining temperature that should distinguish some of them as "endo"? Especially considering that like a main method of thermogenesis is shivering, which -- as a muscle-based mechanism -- seems to be getting pretty close to a behavioral mechanism, you know?
It's the shivering thing that kicked off me looking this up, really. Because I keep reading that non-shivering thermogenesis, which is based on this uncoupling, happens only in brown fat cells, and that adults don't have much of these?? So they get all their heat from shivering?? And that's just like... that can't be right.
IDK, I am basically clicking around on Wikipedia here, so some parts contradict other parts. Like, oh, maybe all cells have UCP1, just brown fat cells have *more* of it. But other parts say no it's only brown fat cells. Or maybe adults have more brown fat cells than thought. I am confused!
Because like yeah generating heat from uncoupling, that's pretty distinct, much more so than shivering! And it also, y'know, matches everyday experience, where you don't start shivering the instant you're colder than is comfortable. But it's really confusing to keep reading that adult humans don't have much in the way of non-shivering thermogenesis going on. Like, huh? What's up with that statement? Where does that come from? Or is there some way I'm missing that it could actually be true??
I've been very confused about this as well. My guess has been that when they told me as a kid that there were cold-blooded and warm-blooded animals, they just didn't understand the full continuum of tunas and dinosaurs and self-warming plants and so on.
Adults have various other thermo-regulatory mechanisms short of shivering, like increasing/decreasing surface blood flow, changing body position to reduce surface area, or putting on a jacket.
From the wiki: "Such internally generated heat is mainly an incidental product of the animal's routine metabolism, but under conditions of excessive cold or low activity an endotherm might apply special mechanisms adapted specifically to heat production."
The idea is that every cell generates heat constantly, as a byproduct of its primary function. Only for shivering and brown fat cells is generating heat the primary function.
The commentary from Cerastes seems to support this. They point out that cold-blooded animals can usually slow down their metabolisms for long periods of time. That would explain why homeotherms need to have their metabolism running on "high" constantly in order to generate enough heat to maintain body temperature.
As a general rule, if you are confused because "these biological categories don't really seem so distinct the more I look into them", well, you probably aren't actually confused. The world is extremely fuzzy, and terminology like this is typically more useful for organizing lectures to undergrads than it is for mapping onto the world. I'm an organismal biologist and I had to look up the terms you mentioned to make sure there wasn't some important nuance I had forgotten about, but I don't really think there is.
TL;DR: "warm-bloodedness" is a matter of degree, not kind, which probably explains your confusion.
My understanding is that there are two main distinctions to make. (I'm not familiar with the English terminology, so sorry if the wording seems off - these are my own translations from Swedish.)
1) Organisms that can generate warmth, and those that cannot. (Endothermal - can create warmth through internal processes - and ectothermal - need to rely on external sources such as sunlight.)
2) Organisms that keep a constant temperature regardless of their surroundings, and those that keep the same temperature as the surroundings. (Homeothermal - maintain a constant body temperature - and poikilothermal - same temperature as surroundings.)
(Endothermal organisms usually have a faster growth rate, but are less energy-efficient.)
But of course these are categories made by man for man to make predictions. Since biology famously is messy, we need to fill in the gap with the term "mesothermal", referring to intermediate states between endo- and ectothermal - where the body temperature is allowed to vary within some interval.
Also, big ectothermal organisms are "gigantothermal" through sheer size. This means that their body temperature does not fluctuate quite as fast as smaller animals'. A bigger mass takes longer to cool down or warm up, so a big animal - for example a big, ectothermal dinosaur - is like a coastal climate; kept closer to the mean temperature. (Weird analogy perhaps, but you get my point hopefully.)
And oh, bats and bears (for example) are heterothermal - they can vary their body temperature, i.e. have some body parts colder than others. This is related to dormancy, which makes them more energy-efficient. Especially bats.
One more thing I thought of: Some animals, like frogs and snakes, can survive freezing temperatures due to glucose that protects their cells from actually freezing. I guess this can be considered a form of "endothermal" protection for poikilothermal organisms...
I would have thought that all organisms, including the ones we classify as cold-blooded, can generate warmth through internal processes. How can you convert food into motion without generating some heat? Wouldn't the more appropriate distinction be between organisms whose biology is designed to generate heat when doing so is useful and those whose biology only generates heat as a side effect of doing other useful things, such as moving?
You’re probably right. And even the distinction I was going for should have been between animals relying on internal processes vs those relying on external sources, to regulate body temperature.
Also, the bats-and-bears example is more like "temporal heterothermy" - variation over time. Others, like great white sharks, are regionally heterothermal, i.e. variation throughout the body...
In theory, making friends online should be easy. Instead of luck and circumstance of the physical world, the virtual world should give us access to the few most compatible friend-candidates out of billions.
And yet, I still default to the physical world for finding new friends.
Question 1: where, online, have you found "true" friendship and how?
Question 2: I know that some have tried (and failed) to create a social network for the non-masses. Do you think there is opportunity for a social network for people with long attention spans that rewards the building of deep relationships? If yes, do you think it should be an open network (like Reddit), or more akin to a dating/matching app that filters the billions down to the most compatible? Ex. If love of Nietzsche is non-negotiable, would be easier to filter by that first.
Maybe friendship is more about somewhat non-compatible people finding a connection, perhaps after being thrown together against their will? So searching online for the perfectly compatible person could be exactly the wrong way to find friends.
For most people, searching online is certainly the best way to find people to talk to about quantum theory, muppet porn, or whatever their niche interest is. But their best friend-to-be might well be an annoying neighbour. (If love of Nietzsche is really non-negotiable, that might change things, I suppose...)
If "similar but not too similar" in love is due to the selection for likeness - passing on as much of the genome as possible - versus the selection for avoidance of incest... then this should not apply to friendship at all.
Reading Henrich's The Weirdest People in the World right now.
Could kin-intensive cultures promote friendship between kin/those most alike, since trust will be mediated by kinship? And conversely, WEIRD cultures promote friendship through more reciprocal-altruism-styled mechanisms? Like recognition of certain norms concerning "neutrality" &c.
...for WEIRD: perhaps the opportunity to cooperate and engage in any activity is the base of friendship? Mutual gain?
And also, I would connect WEIRD culture to thymos and prestige-based hierarchies. WEIRD people might be looking for "valuable" friends, with useful skills? While kin-intensive cultures would be more prone to rely on dominance hierarchies, but - I guess - mainly within the kin network.
I would say that this is the romantic definition of friendship which I have subscribed to and practiced for most of my life. But is it optimal?
The definition of friendship compatibility doesn't preclude a variety of personality types. For example, one can seek a chess-playing philosopher who loves Rick and Morty. The results for such a query may still include an annoying neighbor (just not your annoying neighbor).
Old friendships are often defined by a history of shared experience. Maybe that's why new friendships have such high barriers to entry. Compatibility matching could provide a functional substitute for shared experience.
I realize that clinical terms like "compatibility matching" sound antithetical to the magic of friendships. But that can be fixed with some marketing.
IMO, Friends are people who stick with you through adversity (willingly or by coincidence).
Adversity makes people emotionally vulnerable, revealing more of them than they'd like. People who hang around after that can usually be assumed to like the 'real you', warts and all. My fastest-progressing friendships are all traceable to times when I and some strangers had to band together through a sudden and difficult situation.
It's difficult for online acquaintances to end up in such a situation.
Exception that proves the rule: MMORPG friends I made during middle/high school felt quite real. But that's because it was a place where I could be myself, during a particularly bad time in school. The adversity made it real.
1. I haven't found that online. I have people whose posts I like to read. People I feel fondly towards. People who I banter with, but no friends. None I would reach out to if I needed something (financial, emotional).
My experience is that friendship is best formed through cooperative activity towards a shared goal. If I wanted to create an online friendship generator, it would probably be more like a game than a social network. Or even more powerful would be something to connect people with compatible skills to work on real problems matched to their interests.
This seems right. The only online friends I ever became close enough that I genuinely thought of them as friends, asked some for favors, and even met one in person, came from my time working as a volunteer on a cooperative online project. (It was the Open Directory Project--a volunteer-edited online directory of websites that was useful back when search engines kind of sucked, but became obsolete once Google got good enough.) I still feel wistful about those days--I don't know if I've ever felt so much a part of a community in my life.
"Something to connect people with compatible skills to work on real problems matched to their interests"--that sounds like it would be great for several reasons, even if it didn't succeed at creating deep friendships.
It also sounds like it would be similar to an employment agency. I haven't done a deep dive on employment agencies but my current impression is that people are spending a lot of money trying to do a good job of matching employers and employees and the results are pretty disheartening. So I suspect your thing would be hard to do well.
Similar experience - my only real friendships were with the fellow members of a mod team I was on. It took both mutual interests in the topic (why we were mods there in the first place) and a forced structure/duty that kept us around at regular hours and forced near-constant discussions to push it into the friendship zone.
Exactly! IRL it's easiest to make friends by pursuing a hobby, playing a sport, or joining a group with common interests. The internet is no different - if you start sharing your interests and ideas with the world, friends (and/or potential love interests) will come to you.
Dunno about “true” friendship but I very much enjoy the Thursday Zoom happy hours on persuasion.community. I don’t have a ton of natural affinity with the people in my industry, especially wrt politics, but Persuasion is dedicated to open conversations and the group self-polices well.
Have found. Not willing to really explain. In all cases, I met people in RL later, repeatedly. In one case, transcontinental flights were needed for that.
Small communities work best. I think live chats (IRC, or others like it) help with building connections with people. You see them often (however often you get on the chat), and there is a small enough user base that you can spend more time (even unconsciously) getting to know those people. The one I use most is probably about 20 people, though not all are active all the time. This worked relatively well because of an underlying shared interest (programming), but I think forming small communities can be harder if you focus too hard on a single topic. Forming a community around Nietzsche might be fun, but as people grow bored or want to talk about other subjects they either make the community not solely about Nietzsche or leave to go join communities about... Hume or something.
Larger communities like many Discord servers can let you build up friendships, but it is way harder since everyone is interacting with a lot of people on there. While they may remember you and be friendly, they generally become at most "nice person to talk to".
I don't really have good ideas for a social media platform that encourages this. As mentioned in my first paragraph, I think focusing too narrowly on topics can harm this. Being able to talk about a wide range really helps get a greater view of people.
I have a suspicion that we don’t really know what to filter by in order to find highly compatible friends. There are some obvious life experience and intellectual interest candidates that maybe take you 80% of the way but a lot of the compatibility potential is in the last 20% and it’s murky. This makes serendipity a lot more important than intentionality.
I mostly haven’t, but to start out, you need a small enough community so that the same people keep showing up and you actually remember them. And the problem with that is there’s often not as much going on.
Also, when people are serious about finding someone (dating sites), the first thing they do is filter by geography. So it seems like online communities that are centered on a community or region would have an edge, even if they don’t strictly limit by geography? But there needs to be a common interest as well or you get NextDoor.
Our family has one friend who we got to know on WoW who is close enough so that my wife and our two adult kids flew up to his wedding — I had a previous commitment or would have gone.
Conjecture: making friends involves too much going on "under the hood" to be explicitly modeled in that way. In addition to the obvious surface-level exchange there's communication happening that we're largely unaware of, e.g. body language, intonation, dynamic word selection ("I saw it coming", "oh, I see what you mean"), the effect of location and ambience, even possibly pheromones. Plus the synergistic effect of all of those things together. When I think of my closest friends, yes there are shared interests and whatnot but something just clicked with them in a way that it didn't with lots of less close friends who are on paper arguably "better" friends. I suspect that trying to filter too much on conscious-level stuff like "must love Friedrich Wilhelm" is actually putting the cart before the horse.
Friendship requires trust, and most of us, I think, are biologically wired to trust people we've met IRL. There have been a couple of articles floating around about decreased trust between members of remote teams: there's more blaming and reporting and less spontaneous helping between people who haven't met IRL. Maybe it's something about microexpressions and emotional mimicry, or maybe it's something about the primordial fear of physical retribution for wrongdoing, but it seems that most people tend to be more ethical and trusting towards real-life contacts. My experience is that this is less salient for people who are further along on the autistic spectrum.
#1: On the IRC roleplaying network Darkmyst. I even met 2/3rds of my polycule there and the relationships are still going strong. As others have said, the key seems to be to have shared activities with people. (In the case of roleplaying, I find the bonds build out of the vulnerability of revealing what's going on in your imagination.)
#2: I think I haven't found one. I use and can recommend schlaugh.com for getting a social media fix, but I'm not yet sure about whether that's generating deep relationships. I do think it's generating better ones than I'd be seeing on Twitter or Facebook, but given the constraints of schlaugh.com there's also a lack of immediacy which I feel may be needed to create proper deep bonds. That said, I *would* say I've made friends there, quite strongly so.
Mysterious. I haven't really found anything I would call "friendship" online; at most I would call the users I am most familiar with "longtime acquaintances" (or something like that), and based on my experience I feel pretty dubious about a social network like that working. My current intuitive guess is (something like) that the investment required to create a "friendship" for any pair is large and there's not enough in internet communications for a-pair-of-users-who-met-on-the-internet to meet the requirement because of asynchronicity etc., but I feel pretty unsure about this.
Also, related gwern writing: "Face-to-face meetings, even brief ones, appear to cement personal connections of trust and liking to an extent not achieved by even years of more mediated contact like phone calls or Internet text discussions / emails / chat;...." (at https://www.gwern.net/Questions#sociology)
Re: comments. Just do what you're already doing, Scott. I think they add something special to the Substack.
The addition of "don't highlight" is more than enough to guarantee anyone who doesn't want their comments seen (on a public forum!) won't be surprised. You could ask Substack to put a small text under "Discussion" with a disclaimer, if you want to make absolutely sure.
The difference is that if someone has a friend who sometimes reads ACX but probably doesn't read the comments, posting a story about them buried deep is probably okay. But if it's highlighted, then there's a much higher risk of it being seen (which was the objection at least one person had). I do think the disclaimer idea is workable though.
You can't edit the "Discussion" text, either? Something like "Discussion. Your comments may be republished, check About page for details."? If you can't do that, man, Substack really has a ways to go. Having to remember to manually add a disclaimer to every post will get old fast.
What may slow them down is the fact that they seem to use the exact same code for every substack. They've hardcoded astralcodexten in numerous places for stuff like expanding comments, so maybe they're trying to find a better way to do it. On the other hand, they've hardcoded astralcodexten in numerous places, so doing it one more time shouldn't stop them.
If you open up the developer console on your browser, they actually have a recruiting message in there. But like Aapje, I wouldn't touch it with a ten foot pole.
I agree in principle; however, they're probably optimizing for getting features out to their client as fast as possible. Maybe they're planning to pay down the tech debt later, in a seamless way. (Whether they get around to it is another story!)
Yes, that's always the promise: 'later, we'll fix things'. In practice, either they keep growing and there's money, but facilitating that growth still takes precedence; or they stagnate, leaving time to make things right, but then the focus shifts to saving money. So this theory of seamlessly fixing the technical debt almost never pans out.
In reality, what tends to happen is that things become such a mess that adding features takes longer and longer & you get more and more bugs. So the only solution is to scrap things and start over, and then migrate to the new code, which is not going to be seamless.
The more crap the company accepts, the sooner the software needs to be replaced. However, you also create a company culture that accepts crap, so it's hard to change course and make the new software more robust.
++Test to see whether posting comments works again++ (Can we all start developing for Substack at once? Incremental changes won't work. We have to be the Napoleonic France of frontend development, sweeping l'ancien régime from before us with an iron fist, or an iron broom or something.)
OK, most likely not doable and a dumb idea. But could you heart your own comment as an indication that you don't mind it being re-posted? This would mean that you (Scott) have to be able to see who put the hearts on a comment. If it's a workable idea, then lemon hearts turn into lemonade. (And where is the post where you talked about hearts and not liking them?)
At risk of sounding incredibly naive: isn't it usually possible to tell when somebody might not want their comment signal-boosted? I imagine >90% of cases of people not wanting their comments broadcast are culture-war adjacent; so wouldn't it be sufficient to ask in those few cases, and presume consent for the rest?
I feel like I've missed some underlying context for why this is a concern. Of course we're all feeling a bit sensitive about the privacy issue due to recent events, but is there some "typical" scenario where somebody would post a non-CW comment, yet have a problem with it being signal-boosted on a "best of" post --- without it being fairly obvious that the poster might have a problem?
Welp, that's an example I would not have guessed, and I now understand the discussion. Thanks for the pointer!
(Personally, I think that retroactive removal, as happened in that case, is perfectly acceptable --- but this is clearly something where reasonable people can disagree, and I'm gonna keep my d.f. mouth shut and defer to those who comment more than I do.)
I agree with this, and it was my post that triggered this conversation... I consider it a failure on my part to express my want for obscurity clearly. Keep doing highlights posts, Scott!
I've seen a few people saying that they see subscriber only posts in the rss feed, even though they aren't subscribed. While it seems that substack doesn't provide a non-subscriber feed, I've taken a shot at filling that void here: https://pycea.tk/acxfeed . It's a direct clone of the official feed, just with sub only posts removed.
I read this through an RSS feed (Feedly, in case it's relevant, which I gather is the most popular one), and the subscriber-only posts do get sent, but the content is just null, so you see that they exist, and their titles, but not the actual text.
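If anyone's curious how little code a mirror like that needs: here's a rough sketch, assuming (as described above) that paywalled items arrive with a null/empty body. The actual pycea.tk/acxfeed implementation may work differently; this is just the general idea.

```python
# Sketch of a "subscriber-only posts removed" RSS mirror.
# Assumption (from the comments above): paywalled items keep their
# <title> but arrive with a null/empty <description>.
import xml.etree.ElementTree as ET

def strip_paywalled(rss_xml: str) -> str:
    """Return a copy of an RSS document with empty-bodied items removed."""
    root = ET.fromstring(rss_xml)
    channel = root.find("channel")
    for item in list(channel.findall("item")):
        body = item.findtext("description", default="")
        if not body.strip():  # title present, content null -> drop it
            channel.remove(item)
    return ET.tostring(root, encoding="unicode")
```

Fetch the official feed, run it through this, and serve the result; readers subscribing to the filtered URL never see the empty stubs.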
On comments, this seems like a rare social problem with a technical solution.
You've already asked Substack for a lot of changes to make these comments more like WordPress. It seems to me this is another feature you'd like: either a checkbox to mark a comment as highlightable / non-highlightable, or a checkbox at site-level per user, whichever's easier. Then on the admin side, comments can get different background colors depending on whether they should be highlighted or not (or something).
Might take a while to implement depending on how many other higher priorities there are, but it would save you from having to manually maintain lists of people who want comments highlighted.
I came here to suggest the same thing. I think a user/site-level opt-in plus some sort of flair (either public or visible only to admin) would solve the problem nicely.
Maybe it's just me, but the simulation hypothesis (that we're essentially living in a simulation run by some superior intelligence) seems completely ludicrous, seeing how our universe very convincingly appears to show behavior spanning 30 orders of magnitude in time, 30 orders of magnitude in space, and a sheer degree of size, scale, and detail that is completely absurd if you were trying to study something specific in a simulation. Am I missing something?
You're assuming the hypothetical universe simulating us is anything like ours. If we simulated an entire universe, we might fuzz some of the details, make it smaller, say fully 2D - extrapolating from that, the "outside universe" might be incredibly complex, and simulating us wouldn't be that taxing. To them, the scale of our universe might seem quaint.
The most convincing aspect of the simulation hypothesis, to me, is that a universe's sentient population will likely make a vast number of simulations during its lifespan, making it more likely that we're inside one of those than in a "real" universe. That's making a number of assumptions, however.
Well, yes. The hypothetical universe is similar enough to ours to support intelligent creatures that are capable and willing to set up simulations, otherwise it wouldn't work. And creating a simulation always involves overhead, unless you simplify the physics, and that doesn't seem to be happening to an appreciable degree. So, no matter how potent the physics in the simulating universe may be, it seems odd that they don't have anything better to do with >> 10^80 degrees of freedom.
How do you know that our parent universe doesn't have physics many orders of magnitude more complicated than ours, making our physics seem trivially simple and trivial to simulate?
Degrees of freedom stay degrees of freedom. Even if you could pin down each particle in our universe with a single integer instead of all the quantum-fields-in-a-curved-space-time-continuum crap we're looking at from the inside, it's still... a HUGE effort, and, for the most part, redundant to an absurd degree.
Who's to say their computers work anything like ours? Maybe by their nature, simulating this type of universe actually makes a lot more sense for them. The concept of "pinning down each particle in our universe with a single integer number" could be very foreign or radically inefficient. Maybe their computers are more like our quantum computers. Maybe rather than relying on Boolean algebra like the transistors in our computers do, they run on set theory or some other branch of mathematics. (I believe quantum computers use quantum logic, not Boolean logic.)
I really don't feel like you're addressing the central objection, which is that number size is relative. Just because a number is large relative to everything you're familiar with doesn't mean it isn't insignificant compared to much larger numbers. It feels stupid to even say this out loud; I'm not sure how I can make the point without sounding obnoxious.
Because there's no plausible explanation for one being there, unlike hypothetical beings simulating us. These observability arguments always felt like a bit of a cop-out to me. Just because a proposition isn't directly observable doesn't mean it doesn't have a truth value.
Sure there is. The simulators put it there, because why not? Maybe the whole point of the simulation is to see how long it takes us to find it. They have the power to do that. They have the power to do anything.
“Plausible” in this case is kind of in the eye of the beholder. Others have already stated many reasons why “the universe is a simulation” is not particularly plausible without unproven (or unprovable) epicycles like “well the simulators live in a much more complex / higher dimension universe”.
But the point of Russell’s Teapot is not really just observability - it’s that the burden of proof lies on the side making unfalsifiable claims. I.e. “you can’t prove we aren’t in a simulation” doesn’t cut it, especially when the “simulation” claim can just add on more epicycles to make our simulated existence even less falsifiable.
Your implied argument seems to be that we can place some reasonable upper bound on the number of "degrees of freedom" that would ever be spent on something frivolous, and 10^80 is higher than that bound. But I don't see how you'd derive such a bound. (If you'd asked people from 1970 what bound seemed intuitively reasonable to them, I suspect most of them would have picked a number that is lower than what's used in modern computer games.)
If we're a simulation, there's no obvious reason the people simulating us couldn't have 3^^^3 times our resources. And I don't see any obvious reason someone with 3^^^3 resources couldn't spend 10^80 on a video game.
Our universe couldn't actually create a high fidelity simulation of the universe. Indeed, it is questionable whether we could even simulate a single brain in real time.
There is no reason to believe that a universe would actually create a bunch of high fidelity replica universes full of intelligent creatures. The computational resources are implausibly high.
(00_00)(MONOGURUI: UEGHURUOMO_UEDA)(ii))(episode 303)(00)(00) (00_00)(00) ueghuruomo_ueda(ii)(00_00)((00_00)(00_00))(ii): "(entities which exist within larger entity are intended to fill them up .. to push past their boundaries: ultimately any inner vehicle is, at its limit point, intended to implode the larger vehicle which it is contained within: the point at which macropolitics becomes micropolitical, .. vice versa, forms a penetration-resistance with reverb tendencies: (00_00) managed to blow out (00_00) from the inside (usually represented by a gersgorin radius) of a floating deictic point (usually represented by a highly variable eigenvalue within a given more or less fixed range): as part of this process of modeling the theorists involved took the untraditional step of flipping around the typical analytic-synthetic structure: hä guessed that udoh(ii)UNYIMEABASHI's invitation now was the consequence of the previous day's scandal, .. that as a local liberal hä was delighted at the scandal, genuinely believing that that was the proper way to treat stewards at the club, .. that it was very well done: yagaoMEGHIRU(ii) smiled .. promised to come: instead of assuming a 0 point .. working up through iteration they took as fundamental the assumption of a given set of countless but unspecified arbitrary points .. then worked their way down from there to given points that could be specified, one of which, importantly, was assigned, again arbitrarily, as the 0 point for the given set from which relations were then generated outward, .. it was this set as modeled that formed the I:0 range for their probability field: this choice, or perspective, helped impose a certain discipline in the area of oscillatory blow-ups .. non-convergence: there were multiple computational models developed .. put into use: the primary of these implemented by YAMAUEREDA was a package called RAMPANT(ii)."
(00_00)(MONOGURUI: UEGHURUOMO_UEDA)(ii))(episode 500)(000 (___)00. (00_00)(MONOGURUI) (0000).(00_00): "(hebephrenia-bodies were kept in chambers at cold temperatures to prevent growth .. covered in tightly strapped down cotton coverings, sometimes with floral patterns suggestive of some future uncovering or release, or, at other times, in acrylic fabrics or even clear rubber similar to goryo-suit material, or patterned semi-opaque lace/nylon hybrids: the imaginary, conceived of as a product of the absence of verification, played an important role in the derivation of u_mappings: the u_space in these constructions was populated densely by both realized .. unrealized functions): (each corresponding to an actual or potential goryo: derivative positions were conceived of always as the convergence of more than one probabilistic entity into a given space: likewise u_agents attributed much of the vagueness of notions of interpulse to conceptual hesitations regarding its foundations: the question, for them, of the degree of conceptual overlap or consistency between interpulse in thermodynamics, statistical mechanics, information theory, …., was moot: it's all thermodynamics they would confide: the rest is just different words to mean the same thing, (0000): pushing out against the white flesh is a small metal stunt, positioned to just prevent the muscles touching, with a diamond fastened on each end): (one of the fundamental concepts of the u_agents was that attempts to measure m directly would invariably fail, as it was too amorphous .. mysterious in its movements to quantifiably observe: instead they emphasized the method of measuring m(0)mat or its other variances as one would a large invisible creature in one's midst, via the indirect observations one could make regarding the movements of those things it was pushing)."
I just simulated a universe. It wasn't a very interesting one, but maybe our universe is particularly uninteresting compared to the much larger one we're embedded in. Why couldn't we easily fit in a much larger universe? You say the computational costs are high, and you mean they're high relative to anything we have available in our universe. Well, yes, obviously, you'd need a larger one to fit this one. I don't see how that's at all relevant.
I've never understood why I should care if I'm living in a simulation or not. What are the practical implications of this theory and how is it decidable one way or the other?
> What are the practical implications of this theory
If we're living in a simulation it suddenly becomes *really* important to figure out a way to convince the people on the outside to not turn the simulation off.
I had the same confusion. I work in mechanical engineering with a lot of connection to physics simulation, and the simulations we can run are... horribly slow. For combustion engines, simulations intended to cover effects up to 5-10 kHz usually take more than 1 h per second of simulated time on very powerful hardware... and progress is not scaling with Moore's law (generally, most methods do not scale linearly but at least quadratically. And even with linear scaling, the amount of progress we'd need to get beyond simulating "a few molecules" is insane - if you don't believe me, I think the distributed computing efforts for protein folding for COVID might be a good reference).
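To give a feel for the "at least quadratic" scaling: a naive particle simulation evaluates every pairwise interaction per timestep, so 10x more particles means roughly 100x more work. (A toy illustration of the counting, not any particular engine's actual method.)

```python
# Toy illustration of quadratic scaling: a naive simulation step
# evaluates every pair of particles, so work grows as n*(n-1)/2.
def pairwise_interactions(n: int) -> int:
    return n * (n - 1) // 2

for n in (1_000, 10_000, 100_000):
    print(n, pairwise_interactions(n))
# Each 10x jump in particle count costs ~100x more interactions per timestep.
```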
I think part of the confusion is the question of what you're simulating. Everything I wrote refers to simulating a complete universe in a physics-sandbox environment. The other option would be to simulate only one consciousness (me/you) and its perception of the universe around it... which would be much closer to what videogames do (e.g. not rendering the parts of the world you're currently not looking at). This is still way beyond our current capabilities (Christof Koch likes to point out that we currently cannot even simulate a simple worm), but it sounds at least doable given a large amount of progress over a few centuries...
But when you're simulating a single consciousness (for whatever purpose), you can put it in any environment you want - you don't have to apologize for the physics. Why even pretend to put the consciousness in a universe that seems to follow general relativity, quantum chromodynamics and all that other crap? Why bother with creating a credible semblance of cosmic background radiation and neutrinos?
This smells way too much like metaphysics to me - "we are just a dream in the mind of God", quite literally, and with a completely geeked-out God, too.
Indeed. And while you might have to take some things into account while simulating a modern physicist, why would you simulate someone who wastes his time trying to solve puzzles you understand perfectly? Simulating a medieval peasant would be much more interesting, and they would be more tolerant of tiny discrepancies in the simulation.
That said, the most recent simulation I saw (yesterday) was "1 Million Spartans against 2000 Full Auto Shotguns", so maybe I have it wrong about the sort of thing a super-powered civilisation might find it interesting to simulate. (Hope I didn't just invent a new basilisk that creates increasingly extravagant carnage in order to test the power of its simulation equipment...)
During last year's lockdown I watched a bunch of Universe Sandbox videos. It's an astronomy simulation program where you can do things like hurl Saturn at the Earth at 90% the speed of light, replace the Sun with a giant black hole, and duplicate the Moon millions of times. So maybe that's what the basilisk will want to do!
There are testable hypotheses for a simulation universe like this one, but they're all falsified.
Thus, the simulation hypothesis is basically indistinguishable from the FSM, in that no universe that resembles the rules of this one could simulate this universe. Thus, any argument about it is just the FSM waving their noodly appendage and arbitrarily claiming it is so.
The reality is that it is basically a lot of navel-gazing. There's absolutely no reason to believe in it whatsoever, any more than there is the Flying Spaghetti Monster, or the Invisible Pink Unicorn.
Same reason the singularity is, more or less: it seems outwardly plausible but doesn't actually make any sense once you have a deeper understanding of the physics of the situation.
The fact that some people think certain people are experts about these things when they actually aren't, and thus trust their judgement rather than spending time thinking about it in depth, was/is a major contributing factor.
Speaking of ways the simulation could be testable, I like the argument Janelle Shane makes in her book You Look Like a Thing and I Love You: if we were living in a simulation, some life-form would have evolved to exploit the glitches, e.g. getting free energy from rounding errors.
The idea I mentioned above about life forms evolving to exploit glitches is an example, or more generally the idea that we'd observe anomalies stemming from glitches, but there are a couple of possible objections (neither of which I find that plausible):
-We do indeed observe glitches in reality, e.g. the Mandela Effect, perhaps also paranormal phenomena.
-We don't notice glitches because the simulators are observing everything meticulously and pausing or rewinding whenever an error creeps in, or the simulation is entirely error-free in the first place ("It's a very good illusion" http://wondermark.com/904/). Of course this gets into Last Thursday territory.
The problem is that the universe doing the simulation couldn't have laws of physics that are the same as those of our universe.
The problem is the laws of physics and information theory.
To simulate and store information about the Universe at the atomic level, the most efficient possible computer would be... an exact replica of the Universe. So any computer would, by necessity, be less efficient than this.
So obviously that's right out.
But even replicating the Earth 1:1 would require at least that much matter.
You'd *have* to cheat. But we see no evidence of such cheating.
And even if you DID cheat, it STILL wouldn't solve your problem. Even if you only had to simulate the surface of the Earth, and you could reduce the complexity by a million times, you'd *still* need a computer that was a few hundred km across.
So on top of these problems, there are physical issues with the speed of light - information has to be transmitted within that computer, at best at light speed, while right now we have the ability to take audiovisual images anywhere on the planet and transmit them anywhere else near-instantly. A computer far larger than the planet it simulates has correspondingly longer internal signal delays, so you can't actually run the computer in real time.
It's even worse than that, though.
You'd not only need to generate enough energy to power this computer, but you'd also need some way to *dissipate* that much heat - and frankly, you'd need to radiate that heat out into space, because you're talking about something at a scale that would cover the entire surface of the earth in a sheet of transistors several centimeters thick. But this makes the speed of light problem even worse, because it means you need to make your computer even larger for heat dissipation. This would make your computer ridiculously frail - you can't actually build a solid Dyson sphere; it would be destroyed under its own internal stresses.
Remember also that the reason why we have 2D chips is heat dissipation - running such chips at a high frequency would quickly cause the whole thing to overheat.
But you can't actually slow down the computational speed, because then you start running into time constraints. Running things at 1/500,000,000th speed, an 80-year-long human lifespan would take 40 billion years - far, far, far too long. Even if you speed that up by 10 times, that's approximately the age of the Sun. Speed it up by 100x, and you probably start running into serious heat dissipation problems - and you're still taking hundreds of millions of years to simulate one lifespan.
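Making the slowdown arithmetic explicit (pure back-of-envelope, nothing more):

```python
# Back-of-envelope check of the slowdown numbers above.
lifespan_years = 80
slowdown = 500_000_000  # simulation runs at 1/500,000,000th of real time

wall_clock_years = lifespan_years * slowdown
print(wall_clock_years)        # 40,000,000,000 -> 40 billion years
print(wall_clock_years // 10)  # 4 billion years, roughly the Sun's age
print(wall_clock_years // 100) # still 400 million years per lifetime
```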
Moreover, the further out into space we look, the less able they are to cheat on the fidelity elsewhere, which makes the problem even worse. We appear to be able to arbitrarily focus our telescopes anywhere and see stuff, and arbitrarily put things under microscopes and see stuff, and as we build better telescopes and microscopes, this requires ever larger amounts of storage to make sure that the skies and small-scale matter stay consistent and also evolve properly over time.
Even simulating a single person and the people around them is a huge problem because we encounter new people all the time, and those people at least pretend to know other people "off-screen", so there's still an inordinate amount of calculations necessary to simulate even one intelligence's point of view, because you have to keep on simulating things outside of that, more and more, to create a plausible simulated reality for them, and errors will reveal the whole thing to be fake.
There's no way to do any of this in a computationally efficient manner. And the higher the degree of fidelity required, the worse the problem becomes. If you have to actually simulate individual atoms, the most efficient way of doing it is to actually just create a replica Earth - any other solution will be *less* efficient, so require even more material, because of how information has to be stored.
So basically, you need to posit that the "real" universe has different laws of physics - but at that point, you're just arbitrarily assigning non-falsifiable properties to the external universe, which is just a different way of saying that the Flying Spaghetti Monster used his noodly appendage to change things, and that whenever he would be revealed, he changes things so he isn't.
If you assume that the containing universe can have arbitrary physics, then it just becomes a Flying Spaghetti Monster argument - completely unfalsifiable, because the Flying Spaghetti Monster can just wave his noodly tendrils and change things to be consistent.
It renders the argument unfalsifiable and therefore worthless, because an unfalsifiable argument makes no useful predictions.
Of course there's more reason. We've literally done it ourselves, one level down from our own level. We have precedent for this sort of thing happening, so we know it's possible in principle.
Consider a TV show. Very realistic for the part that the viewers see, but the show isn't bothering to "simulate" anything not shown to viewers. In the extreme, if you are the only conscious entity in the simulation, how much detail is really needed to keep the truth from being obvious to you?
Right, but in our case, the "viewers" have complete freedom to investigate the props. In a simulated world, there's nothing that would keep the simulation from withholding the deeper history. It could return "it's a piece of wood", and that's it. In our world, you can then use an electron microscope to probe the wood's detailed structure. You can do Carbon-14 dating to determine its age, use other isotopes to figure out where the piece of wood came from, do a DNA analysis to track the mutations in the plant's genome, count the tree rings to reconstruct a history of the world's climate that is consistent with the tree rings in other pieces of wood... constructing a simulation that gets all this right and consistent on the fly would be seriously impressive, to say the least.
You don't need to get it right on the fly--you can pause the simulation until you've written whatever code or run whatever real-world experiments--but you do need to notice "ah, they're about to find a bug" on the fly.
(I currently think we should update heavily against the simulation hypothesis based on the apparent size of our universe, in both directions. Most simulations will probably look more like Minecraft to the people inside them--but I note that a Minecraft villager could argue against simulation on the grounds of the massive size of their world, surely too large to fit inside any practical computer.)
You seem to be taking the opposite moral from the Minecraft comments to the one I am. For me, they're a pretty good illustration of why simulation *is* plausible. What argument have you made here that could not also have been made by a Minecraft villager explaining why she must not be in a simulation/game? E.g. the villager could, like you, point to the vast scale of time and space, the large number of degrees of freedom specifying her world. Yes, the actual numbers she would quote would be much smaller than the ones you quoted, but they would serve the same purpose in the argument - numbers that to her are so big that she can't imagine anyone else having bigger numbers to work with, or wasting so much computronium (redstone) on running the simulation.
You don't need to notice "ah, they're about to find a bug" on the fly. If they ever do find it, you just have to notice then and REWIND it, then prevent them from noticing.
Because your mind would also be simulated, another option is simply to make your mind incapable of noticing (or remembering) the hacks, shortcuts and inconsistencies. It might be a bit like the dreaming mind, which - in my dreams, at least - accepts at face value the craziest sequences of experiences.
This kind of nerfed mind seems less interesting to simulate than an unrestricted one. It would be relatively easy to notice the problems as they come up and fix them by rewinding when needed.
I don't think this is at all compelling. This is both 1) a natural consequence of complex-valued wavefunctions and 2) many, many orders of magnitude smaller than you are.
Imagine you didn't know about the uncertainty principle and a big part of the reason you thought this isn't all a simulation was because of how much detail we observe. If you were to learn of the uncertainty principle, by how much would you increase your estimate that this is all a simulation?
Isn't it the case that any non-omniscient mind will of necessity have certain limitations re: how deeply it can investigate without interfering with the investigation?
Isn't it the case that insofar as I don't know what you're thinking/hearing/seeing right now (and vice-versa), my mind is not omniscient?
The simulators might be rich enough that all that wasteful stuff isn’t a big deal. Today there are lots of times when people “waste” a lot of computer time to save a small amount of human time.
1. The simulators are our own descendants (or something roughly like us were their ancestors) - ancestor simulations
2. The simulating universe is so different than ours that the simulation is analogous to simulating Flatland, Minecraft, or the Game of Life - game simulations
These monikers are just labels, by the way: an ancestor simulation might be run for the purpose of a game or therapy or some other purpose that isn't testing a hypothesis (except in the sense that everything is, but that's another kind of argument).
For ancestor simulations:
A specific run of the simulation need not last long. If you want to investigate a certain decision, or situation, you might run some five minutes or some hour a billion times. In this scenario, everything up until recently could be loaded as a common save state, and that save state can take as given things like QM and cosmology, unless you yourself are studying it *right now*. Similarly, even if the only way to get to a given state is to simulate its history, you only need to do that once for any history you don't directly want to deal with. (cheek: As others have mentioned, this raises some questions about measure of simulated beings and provides a strategy for increasing your measure: try to avoid doing things that are boring to hypothetical externals. )
Even a longer-running simulation doesn't need to simulate QM, except during the period when a physicist is running some experiment the outcome of which depends on it. For the vast majority of the world, approximations are fine as long as they don't change what's perceptible to your sims. All you know is what you've been able to take in via relatively low-bandwidth senses, and the simulation can just adjust your mind so that you remember everything being fine and holding together logically as long as you're not actively experimenting right now.
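The save-state idea can be sketched as a toy illustration (all names and numbers here are mine, purely illustrative, not anyone's actual proposal): compute the expensive shared history once, then fork many cheap branches from it instead of re-simulating from the beginning each time.

```python
import copy
import random

def simulate(state, steps, seed):
    """Advance a toy 'world' (just a dict of numbers) deterministically from a seed."""
    rng = random.Random(seed)
    for _ in range(steps):
        for k in state:
            state[k] += rng.choice([-1, 1])
    return state

# Expensive part: compute the shared history once, up to the moment of interest.
save_state = simulate({"alice": 0, "bob": 0}, steps=10_000, seed=42)

# Cheap part: fork a billion (here, a hundred) short branches from the save state.
branches = [simulate(copy.deepcopy(save_state), steps=5, seed=s) for s in range(100)]
```

The `deepcopy` is what keeps the save state pristine: each branch diverges from the same frozen history, and only the last five "minutes" are ever recomputed.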
For game simulations:
As someone else has mentioned cross-thread, Minecraft inhabitants, were they thinking, could point out that no conceivable computer (built in Minecraft) could simulate what they see around them. This is less of a slam dunk case than they might imagine. It could well be that our universe is much, much simpler than the simulating universe, even without the sort of simplification of detail implied in some of the ancestor simulation argument.
Some parts of our world resemble compromises an engineer might introduce to more easily compute local state: locality, uncertainty, the hard problem of consciousness, etc. (cheek: Even the tendency for interactions with the physical world to become automatic is fishy -- after you've mastered bike riding, paying too much conscious attention to exactly what you are doing can seem to remove the ability... almost as if learning to ride a bike is a simulated experience, but "Bob was riding a bike" is just a background statement... )
The biggest issue with the simulation hypothesis is philosophical, rather than evidentiary: like God and everything-is-a-dream, it's unfalsifiable. Literally anything you could experience is simulable, replayable, etc.
In the context of falsifiability, it's prefaced with "assuming we're being simulated right now, ...". There's nothing we could learn that we could trust to disprove the possibility that we're being simulated. Which is kind of a problem. :)
You can't prove that we're not being simulated, but it might be easy to prove that we are. If some crazy alien popped into existence in front of me and broke a bunch of physical laws by way of demonstration, I would find that quite convincing. No guarantees we'll find the root password laying around, but worth spending at least a little effort to look.
The last few years have convinced me that it's not so much a simulation we are living in as a video game, played by adolescents or bored students in another dimension. Forty years ago, nearly, I remember being on a training course for young government officials in the earliest days of computing, and we were allowed to play with a program which simulated efforts to improve criminal courts by tweaking different inputs. Over a rainy lunch-hour it occurred to us that, by turning the dials the wrong way (we had to change some BASIC commands) we could cause the system to crash, which we did. I assume much the same is going on here: a couple of interns have decided to test the system to destruction. How else would you explain Trump, Brexit, PMC hysteria, Russia! Russia! and the Virus except as a deliberate attempt to crash the system? And what kind of sick adolescent humour decides that the answer to all these problems is compulsory White Guilt struggle sessions? I'm waiting for the grown-ups to come back from lunch, or whatever it is out there.
Genuine question - why should we have any priors at all over how much computing power seems like "a reasonable amount to spend on a simulation" to aliens we've never met, in a universe we've never seen, governed by laws of physics we don't know?
Because if we apply our notion of a simulation at all, why not apply our knowledge of conceivable motivations for doing it as well? Otherwise, you're firmly in the realm of theology. God moves in mysterious ways, and all that.
Yeah, as I hinted at elsewhere, claiming that the simulation is from a more complex universe with computing powers far beyond our own, or the like, is pushing the argument into Last Thursdayland, making it unfalsifiable.
The original simulation argument disjunction (almost no human-level civilizations become posthuman OR almost no posthuman civilizations are interested in ancestor simulations OR almost all human-like observers are in simulations) depends on "the immense computing power of posthuman civilizations" compared to what is needed to simulate a human-like civilization.
That is, it depends on our expectation that a civilization like ours, in the future, will be able to simulate many civilizations like ours (with appropriate simplifications). If we could only be in a simulation if the simulators have much more access to computation than is possible in our universe, then it's not an ancestor-like simulation, and the argument doesn't work.
That is an excellent point; however, notice that the argument doesn't demand *recursive* simulations. That is, it requires that the original humans gain enough computing power to simulate their ancestors, but it doesn't require that those simulations are able to perform simulations of their own.
Suppose you only wanted to simulate our ancestors up until the year 1000 C.E. There's a lot of computationally-expensive physics that you could probably get rid of without seriously affecting the results. You probably don't need quantum, or relativity. The stars could probably be some kind of pre-recorded movie projected onto a celestial TV screen. If you ran the simulation long enough, these glaring omissions would eventually distort the results compared to what really-originally-happened, but for ancient human history it seems unlikely to matter much.
Our distant descendants could one day disassemble all the stars in our universe to make computronium, and then use that computronium to simulate our distant ancestors in a world where the sky is a TV screen. (They couldn't simulate US that way, because we've sent probes out there, and triangulated celestial distances from opposite ends of Earth's orbit. But that's pretty recent in human history.)
If this had already happened, and we were IN the simplified universe that no longer has enough resources to simulate itself, would we know? What if quarks are "supposed" to be made up of even-smaller particles that could somehow (in the far future) be used for more-efficient computation? What if all that "dark energy" we can't seem to find is actually a kludge to make up for the fact that they removed most of the universe's mass so that they wouldn't have to simulate it? (Though really, if they took something out, it would probably be something that wasn't discovered until after the time period they were simulating, so we couldn't seriously hope to guess what it was.)
Or more simply, what if the parts of our universe that are really far away are just using much lower-resolution simulations than the things that are nearby? In the original universe, humans eventually colonized those places, claimed those resources, and spent some of them to make simulations that stopped before their simulated subjects got that far. In the meantime, the stars in the Andromeda galaxy are being simulated as point-masses or something.
Yeah, I would think Step 1 for ancestor simulations is to limit yourself to the immediate vicinity of the people you're focusing on (which might be everyone) and only simulate at the level of detail those people will be able to perceive. Your skin doesn't need to be made up of cells until you're putting it under a microscope. Even brains can probably be simplified a lot without noticeably changing their functionality.
There could be some future discovery about physics that makes computation dramatically easier, but I don't think it's necessary at all. We can probably be simulated pretty well with the technology we can already see coming. If you were trying to simulate a simplified physics in full detail, you might want to leave out quantum--people have made a lot of fuss recently about the difficulty of simulating quantum physics with classical computers. But there's no need to simplify the physics if you model details on an as-needed basis.
There is probably a thermodynamic argument. Shannon showed that each bit carries entropy, and Landauer's principle puts a minimum energy cost on erasing one. But if we posit multiple higher dimensions... maybe anything is possible?
If we are simulated by aliens from a different universe, who knows. Except, maybe some universal prior that smaller universes are more likely? But that is somewhat dubious; the actual complexity that matters is the number and complexity of laws of physics, not the diameter of the universe in meters.
But one popular version of simulation hypothesis is that our (post-Singularity?) descendants will run simulations of their history, i.e. us... either as historical movies, or as "what if" experiments. In that case, we know the laws of physics of their universe, because it is our universe.
I think with respect to "laws of physics we don't know" it's worth mentioning that there is a respectable belief that the laws of physics are unique -- that is, that it is *logically impossible* to construct different laws. Were you to try, you would sooner or later find that your different laws were logically inconsistent -- could not all simultaneously be true. It would be like trying to construct a "new" geometry in which Euclid's 5th postulate was both true and untrue.
I don't mean different in a trivial way, like the fine structure constant has a different value, but in a meaningfully different way, like the speed of light is infinite so instant action at a distance is possible.
I raise the point because it seems to me those who suggest "different physics" is a plausible answer to proposed difficulties of a simulation hypothesis ought to be asked to say why they think different physics is even in principle possible. We do not, after all, lightly assume different *mathematics* is possible in some other universe -- it's not clear assuming different physics is possible is any less logically dubious.
Heavens, those are no more universes than a globe is equivalent to the planet Earth. I don't doubt you can pick out some small subset of physical law and make it consistent, but we're talking about *all* physical law. So far, we have not even been able to come up with even one fully-consistent set of physical laws, e.g. the gross incompatibility between quantum mechanics and general relativity, the fact that QED has an ultraviolet catastrophe that has to be renormalized away in an arbitrary way (that is inconsistent with GR of course), and so on.
What's the standard for what constitutes a "set of laws of physics"? A cellular automaton could be construed as a kind of universe. You can't just summarily reject mathematical structures that you don't think are complicated enough.
The standard is "must describe things that actually exist" where "exist" has the usual definition -- occupies space, has kinematic mass, continues to exist when we shut the power off or look the other way. Metaphorical "universes" like a game of Conway's Life running on a PC don't qualify.
I'm not rejecting mathematical structures that aren't complicated enough, I'm rejecting those that (1) do not describe any measurable reality or (2) those which are logically inconsistent. That *does* put me in the position of rejecting the Standard Model in case (2), but I reserve the right to soften that to mere skepticism and the belief that it isn't the final answer.
I see two possible interpretations of the simulation hypothesis.
1) the universe running the simulation has nothing in common with our own universe. This is unfalsifiable and has zero implications. If interpreted broadly enough, it's both probably true and assumed true by most physicists, but it doesn't imply any of the interesting thought experiments like "what if the aliens decide to turn us off"
2) the universe running the simulation is similar to our own universe. This is ludicrous, and while technically difficult to disprove and it could have implications, it is staggeringly unlikely. Any implications it did have would be counterbalanced by the possibility of a dissimilar simulating universe having the opposite implication for unknowable reasons
Well, when human beings simulate things, we simulate only what is absolutely necessary to extract the data we want, because simulation is expensive (or if you prefer because if you cut out the unnecessary bits you can run your simulation longer for the same cost).
So while a <i>perfect</i> simulation hypothesis is unfalsifiable (and for that matter I don't see how a perfect simulation could be meaningfully distinguished from reality per se), if the kind of simulation being done is like those we do, you should see gaps and missing pieces all over the place -- practically everything should be a facade, with things that are of no importance to the simulators left out or blank. For example, if the interest is in how humans interact socially, say, or any other external manifestation, then why go to the trouble of simulating a complete digestive system for everybody? Nobody is <i>aware</i> of lipids being hydrolyzed in the small intestine, why bother doing it? Just move the organism from hungry to sated, and disappear the lipids, and call it a day.
Obviously a choice to directly set the simulation (or parts of it) to a given state, instead of letting it "naturally evolve," in order to save costs, would break the laws of physics within the simulation -- but who cares? We do that all the time in our own simulation, for convenience, e.g. "periodic boundary conditions" in a fluid simulation. So long as we're confident we're not compromising our final results, it's an excellent trade-off.
Which means that an "in simulation" marker of being in a simulation, at least the kind we do, would be the regular appearance of miracles -- things happening that <i>cannot</i> happen according to our "in simulation" laws of physics -- because the simulators are cutting costs wherever they can.
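For what it's worth, the "periodic boundary conditions" trick mentioned above is tiny to write down. A minimal sketch (my own toy numbers): a particle that exits one side of the box is wrapped back in on the other.

```python
def wrap(position, box_length):
    """Periodic boundary condition: leaving one side re-enters the opposite side."""
    return position % box_length  # Python's % is non-negative for a positive box_length

# A particle at x = 9.5 in a box of length 10.0, moving +1.0 per step:
x = wrap(9.5 + 1.0, 10.0)   # → 0.5: it crosses the boundary and reappears
y = wrap(-0.5, 10.0)        # → 9.5: same thing in the other direction
```

From inside the box, the wrap looks like teleportation -- a law-breaking shortcut -- but it's invisible as long as nothing in the simulation tracks the boundary, which is exactly the kind of cost-cutting being described.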
> an "in simulation" marker of being in a simulation, at least the kind we do, would be the regular appearance of miracles -- things happening that <i>cannot</i> happen according to our "in simulation" laws of physics
I would have taken that in a different direction, given the first half of your post: if the universe is under-simulated, with complex features taken out, then "miraculous" things might happen naturally, without miraculous cause—and faithfully-simulated humans would ascribe them to miracle *anyway.*
(But then that might be me reading "miracle" as "something produced by a magic system", not a shortcut that skips steps altogether.)
I think that's what I said, although now I'm wondering if I expressed it poorly. What I meant is that a simulation has no requirement to be consistent with the simulated laws. If I simulate a 2D universe according to some 2D laws I make up, my simulation isn't limited by those laws, because it's not actually taking place in the imaginary universe. I can have my simulation do things that totally violate those laws -- indeed, it would often be efficient to do so, because I might know the endpoint I want, and the endpoint might be fully consistent with the simulated laws, but I might not want to waste the computing resources it takes to let the endpoint happen naturally -- I'll just jump to it in one step, kaboom.
But that would seem to us (in the simulation) as a miracle. If someone beats cancer via a year-long arduous process involving powerful drugs and radiation, we don't consider that a miracle. But if he just went to sleep one night and woke up the next in the very same cured state, we would. So jumping over process to reach even a consistent end would seem like a miracle to us. But it's exactly the kind of thing *we* do when we simulate complex systems and we know certain things will happen, but don't want to waste the computing resources on getting them to happen the slow way.
The best answer to the simulation hypothesis is that it does not matter. If you think that our universe can be simulated, that means that it can be described by having a state space and some sort of rules that act on the state space in a way that can be computed. So when we think of something happening, it actually just follows deterministically from the rules and the state space (if the rules are indeterministic, there are ways to make indeterministic systems deterministic in simulations). So you can run the simulation one time, or a hundred times in parallel, or not at all - what happens is still conserved in the rules, and then our universe does not rely on being simulated in some kind of universe.
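The state-space-plus-rules point can be made concrete with a toy sketch (everything below is my own illustration, not anyone's actual model): a "universe" is just a state and an update rule, and even a "random" rule becomes deterministic once its pseudo-random source is seeded -- so every run, whether executed once or a hundred times, traces the identical history.

```python
import random

def step(state, rng):
    """One application of the 'laws': here, a toy rule on a list of binary cells."""
    return [(c + rng.randint(0, 1)) % 2 for c in state]

def run(initial, steps, seed):
    # Seeding the PRNG is the standard trick for making an 'indeterministic'
    # rule deterministic: same seed, same entire history.
    rng = random.Random(seed)
    state = list(initial)
    history = [state]
    for _ in range(steps):
        state = step(state, rng)
        history.append(state)
    return history

# Running the 'simulation' twice (or a hundred times) yields the same trajectory:
assert run([0, 1, 0, 1], 50, seed=7) == run([0, 1, 0, 1], 50, seed=7)
```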
Hard agree. The universe exists and does the same things irrespective of whether it is simulated or "base reality", so there are essentially *no* testable predictions we could make of it. This is why I don't have an opinion about the simulation hypothesis at all.
Well, for starters, there's no guarantee that those 30 orders of magnitude actually exist anywhere we're not currently looking. Video games don't render the entire game world constantly all the time, they just fill in whatever the player looks at, at whatever level of detail they happen to be looking.
Second, nothing has to be synchronous. The simulation could take 500 processor-years to render 1 second of our universe.
Third, those numbers seem big to us given the physical parameters of our universe and our current level of technology; they may be extremely small from the perspective of whatever is running the simulation.
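The first point -- render only what the player looks at -- is essentially lazy evaluation with memoization. A minimal sketch (my own toy example; the hash is a stand-in for expensive detailed content):

```python
import functools

@functools.lru_cache(maxsize=None)
def chunk(x, y):
    """Generate a region's detail only when it is first observed.
    Deterministic in its coordinates, so revisits are always consistent."""
    return hash((x, y)) % 100  # stand-in for expensive detailed content

chunk(3, 4)                 # first look: the detail is computed now
chunk(3, 4)                 # second look: served from cache, nothing recomputed
assert chunk.cache_info().misses == 1  # only one region was ever generated
```

Nothing exists until observed, and because generation is a pure function of the coordinates, an observer can never catch the world being inconsistent between visits.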
The 1999 movie "The Thirteenth Floor" also addresses the issue, with part of the movie being set in a very carefully simulated Los Angeles and the rest of that universe being simulated only to the extent that the sim-Angelenos are likely to notice it.
One way of ”solving” the simulation argument is to assume that all advanced civilizations thought of it, and all independently one-boxed a cross-civilizational simulation-taboo. Just to avoid setting up the probabilities in that direction in the first place.
This is compatible with the argument, and only explains why civilizations don't run simulations even if they don't kill themselves. And the solution is that they all use Newcomb's problem to avoid being a simulation.
Is there maybe a more general phenomenon where conspiracy theorism, or extreme politics in general, comes from a kind of frustrated religious instinct?
I can't say I have any thoughts about the underlying mechanism. But "Trump will sweep aside the evil people and create the new world order" has a similar structure to a day-of-judgement prophecy.
From the numbers in the article, QAnon believers are somewhat more likely to be evangelicals than the general public, and evangelicals are somewhat more likely to be Republicans. I'm not sure there's actually anything to see here.
Marxists have all kinds of insane conspiracy theories despite being "antireligious". Indeed, Marxism itself springs from antisemitic conspiracy theories, the idea that the rich jews are hoarding all the money.
For his antisemitism and the attendant conspiracy theories, ever read Marx's "On the Jewish Question"? Or "The Russian Loan"? Or his ranting about Lassalle?
Heck, it is reflected throughout socialist thought, the idea of US vs THEM, that there is some sort of CLASS CONFLICT, that this is THE DEFINING THING, but when challenged, it... just falls apart. The idea that the capitalists are just out to enslave everyone and rob everyone blind is just not how reality works, and indeed, the fact that the standard of living has been skyrocketing for the last several centuries in capitalist countries kind of makes how much of a lie it is obvious.
I suspect part of this sort of thing is because of his apocalyptic mindset. "Forced Emigration" is an example of such. He believed that things were rushing forward and "The classes and the races, too weak to master the new conditions of life, must give way."
This sort of false sense of urgency is very common amongst conspiracy theories - the idea that the aliens are coming, that the Illuminati have control and are about to enslave us all, that the Storm is coming, that all police are secretly racist, that we're going to run out of food and all starve.
The idea underlying all of this is that the way society is now will collapse and no longer be viable, that it is all about to slip away or be destroyed, and that those in charge are lying about it or covering it up. It's the same sort of apocalyptic thinking you see in religious thought, but in non-religious contexts as well.
Instead of Satan, it's The Capitalists, or The Government.
Doubly a problem because the universe selects hard against those who were not urgent enough soon enough, but gives more of a bye to those who jump the gun a tad.
That may have to do with the "frustrated religious impulses" issue. If you approach economics or politics in a religious way, but have a lot invested in believing that you aren't religious, the impulses can sort of sneak up on you.
I think that there's some sort of cognitive defect which historically frequently manifested itself in religious cults, but actually has nothing to do with religion.
There have been "environmentalists" who believed that everyone was going to starve to death and die in the near future since Malthus. The rationale for this belief is ever-changing and related to the issues of the day, but it is always "THE END IS 10-20 YEARS OFF!"
The Population Bomb and Future Shock are good examples of this from about 1970.
The present climate change nuttery is the same thing.
The thing is, it's not that overpopulation or pollution weren't real issues, no less than global warming isn't a real issue - these are all real things. But the apocalyptic minded people took these real issues and turned them into EVIDENCE THAT THE END IS NIGH.
I saw people saying that millions of people would die if Bernie Sanders wasn't elected, so people should empty their life savings into donating for his campaign.
The "woke" people holding mass protests in the heart of the coronavirus pandemic is yet another example of this. Was there any reason why that couldn't wait for another year? Not really, but their brains told them that the walls were closing in, so they had to go act now now now now now.
There's this sense of false urgency, that the walls are closing in, that the world is going to end if you don't do something now.
It's signs and portents, but in a non-religious context.
When there’s competition between various factions within apocalypto-armageddonology, the urgency obviously increases - some of the extinction rebellion people are certain that the whole of western civilisation must be dismantled by next Tuesday afternoon.
"There have been "environmentalists" who believed that everyone was going to starve to death and die in the near future since Malthus."
That was not, however, Malthus' view. His was that the real income of the mass of the population could never be much higher, because if it were population would increase, pushing it back down. That seems to be a pretty good description of most of human history — ending at about the point when Malthus was writing.
It was already wrong when he came up with it in the place where he wrote it.
Indeed, pretty much the entire history of civilization was a result of increases in *per capita* productivity. These increases mean that as your population goes up, your total productivity goes up.
This is precisely why human population had expanded since the dawn of agriculture.
He was wrong in the same sense that Newton was wrong: his theory is a very good approximation of what he could observe. To see that he was wrong, you have to look at a very long timescale, or a time of rapid technological progress.
<I>I saw people saying that millions of people would die if Bernie Sanders wasn't elected, so people should empty their life savings into donating for his campaign.</I>
The ACX Tweaks browser extension supports limited markdown. You can enclose words in *asterisks* for italics. Obviously, though, it won't work for any readers not using the extension.
Some of them were seriously proposing, after Biden won, that he'd been face-swapped with Trump, so actually Trump was still President. If QAnon is still around when Trump kicks the bucket, they'll probably come up with a similarly implausible claim.
I consider myself pretty well connected with some moderately to far right individuals (family and co workers), and have not seen anything this nutty. Where are you finding this information?
I take the opposite approach actually, and basically use my real name everywhere. The goal is to never let myself be confused about what the future will know about me.
Ditto. I try not to say what I would not like to have repeated. I do guard my tone less in a private written communication, but even there I like to test myself now and then: "Would you be embarrassed to sound like this in front of the people who disagree with you, or the people you've strayed into gossiping about?"
My trick is to be so unimportant that not only would nobody bother doxxing me, but said doxxing would be totally ineffective because nobody cares enough about what proles think to retaliate against them.
Emmanuel Cafferty is/was a prole, but got fired because someone falsely accused him of making a politically symbolic gesture. This was in meatspace rather than online though.
I don't use Facebook or Twitter, and generally use different pseudonyms across different sites. Basically I keep "real life" separate from my online life, and each community separate from the others.
Use pseudonyms (including different pseuds on different sites).
Selectively alter details about your real life if you must discuss it, but endeavor to be vague regardless.
Never share photos, especially selfies. Never livestream.
Do not mix real life contacts and "internet" contacts. If you must have eg a Twitter account under your real name, use it only for the blandest of networking, and save the polemics for an anonymous account. Do not give real life contacts your "internet" handles.
Do not get into "beefs". Do not engage with angry people. Do not sling shit.
(I was a child who adored technology and computers, and my parents wanted to support me but were also terrified that internet predators would somehow snatch me. This is all probably overkill, but by now it's habit)
I simply refrain from using social media of any kind beyond very basic private conversations with people I know in real life, and use a throwaway pseudonym for literally every other online interaction.
Nothing extra - I'm unlikely to be prominent enough to become a target as an individual, so I don't have to run faster than the bear - just faster than the slowest person it's chasing ;-(
What I don't want is merely:
- someone reading my resume, googling my name, and getting results that encourage them to discard the resume
- twitter mobsters picking me as the next "evil monster" to cancel - and being able to cancel more than my current posting alias, and my presence wherever they've targeted
FWIW, I regard the latter as unlikely in any case. They are mostly too lazy to go beyond their own preferred social media looking for victims. And I have nothing I want to say to the general public within the twitter character limit.
This is a rabbit hole which you could fall very far into depending on how obscure you want to be. VPNs, Tor, unique and multiple identities across sites tied to unique email addresses, virtual credit cards for payments (e.g., Privacy.com), etc. There's always going to be a trade-off between convenience and anonymity, though—none of this is much fun to maintain.
I have a protonmail and don't share much possibly-personally-identifying information online. I don't join privacy-annihilating sites like those of Mark Zuckerberg. That's literally it.
On DNP: I feel like the survey missed an "It made me consider DNP but it did not make it look worth the risk" option. After all, watching video of BASE jumpers, part of my brain wants to try that, but that does not mean I consider it a good idea.
I think the BASE jumper risk calculations are heavily skewed by all the jumpers that want to fly/glide VERY close to big, hard objects. I think even the 'flying squirrel suits' aren't that much more risky than parachuting, and that's pretty safe – unless you want to fly _right_ over the top of some rocky ledge or alongside a mountain cliff or just over the tops of some trees ...
My prior for anything blockchain is that if they can't explain it to me in a way that I understand (and I have a CS degree) then it's probably bulls--t, a scam, or both.
That said, it's almost a Pascal's wager to buy maybe $1 of anything in the blockchain space that looks like it could make it big, because if you did that back when a bitcoin was $0.00.. then you'd be a millionaire now.
The space is unfortunately full of "solutions" looking for problems. Meanwhile the kind of thing where a blockchain would actually be a really good answer, like Certificate Transparency for TLS, seems to manage just fine without.
I feel similarly about the Pascal's wager element. But with NFTs that seems harder because you can't just buy 1 of whatever new currency is coming out.
I don’t understand the appeal either. People compare the NBA Top Shots phenomenon favorably to baseball cards. Sure you can print out a cardboard image of any play in baseball history, but it will be worthless. Call it a baseball card though and produce it in limited quantities and it might be worth a lot of money (especially if you did this 60+ years ago and it’s of a famous player). That being said, baseball card collecting reached its peak in the early 90s (I think—I was a kid then and that was my impression; I haven’t kept up). My Ken Griffey Jr. rookie card that was probably worth tens of dollars or maybe more back then is probably worth $5 or less today. I had assumed its value would go the opposite direction. However, I think the truly rare cards that were produced before people started collecting cards to resell them have continued to appreciate. Assuming NFTs stick around, my guess is we’ll see something similar. Most that are going for hundreds or thousands of dollars now will be worthless (or close to it) in a few years, but a few will be worth millions.
One area NFTs could be interesting is tickets to concerts and sporting events. It gets rid of counterfeits and more importantly allows the original seller to reap some of the benefits of the resale market. My understanding of NFTs is that every time the token is sold the creator gets a cut. So say Bruce Springsteen or the Lakers sell a ticket to the Staples Center for $100. If the buyer turns around and sells that ticket for $500, some of that money goes to Springsteen or the basketball team.
But if you lose the “ticket” it’s gone. Whereas now you can probably get it re-issued. I believe most “middlemen” exist for a reason...it’s only when you start cutting them out you realize why. I like knowing I can call my credit card to stop payment in the event of a dispute... no such mechanism in a token world.
My impression is that most collectibles top out their prices when the people imprinted on them have their peak wealth-- it's "buying back their childhood".
There are exceptions where there's a larger consensus that the thing is valuable.
Right. In comics, it used to be the rule of 5/30 (that second number might be wrong). At least before digital comics became common, the idea was that people wanted comics from the last 5 years because they were filling in stuff that they were reading that they wanted to complete a run of. After that, the demand for comics drops off until they're about 30 years old, and then people collect their childhoods. Children's books, in general, are also notoriously cyclical like that -- parents buy the children their own favorites.
>My understanding of NFTs is that every time the token is sold the creator gets a cut. So say Bruce Springsteen or the Lakers sell a ticket to the Staples Center for $100. If the buyer turns around and sells that ticket for $500, some of that money goes to Springsteen or the basketball team.
This is not the case out of the box. NFTs (non-fungible tokens) are just tokens with their own IDs. You can make them work that way if you build them to, but you shouldn't assume that they do.
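To make the "not out of the box" point concrete, here is a toy sketch (my own illustration in plain Python, not any real NFT standard's API — names like `TokenRegistry` and `royalty_fraction` are invented for the example): a bare token registry just maps IDs to owners, and a creator cut on resale only happens if whoever wrote the contract added that hook.

```python
# Hypothetical sketch: a bare token registry with an OPTIONAL royalty hook.
# A plain token records ownership only; the creator cut must be built in.
class TokenRegistry:
    def __init__(self):
        self.owner = {}      # token_id -> current owner
        self.royalty = {}    # token_id -> (creator, fraction), only if built in

    def mint(self, token_id, creator, royalty_fraction=None):
        self.owner[token_id] = creator
        if royalty_fraction is not None:
            self.royalty[token_id] = (creator, royalty_fraction)

    def sell(self, token_id, buyer, price):
        """Transfer the token; pay the creator only if a royalty was built in."""
        creator_cut = 0.0
        if token_id in self.royalty:
            creator, fraction = self.royalty[token_id]
            creator_cut = price * fraction
        self.owner[token_id] = buyer
        return {"seller_gets": price - creator_cut, "creator_gets": creator_cut}

# A plain token: the original seller gets nothing on resale.
reg = TokenRegistry()
reg.mint("ticket-1", "springsteen")
print(reg.sell("ticket-1", "scalper", 100))   # creator_gets: 0.0

# The same idea minted WITH a 10% royalty hook, as in the $500 resale example.
reg.mint("ticket-2", "springsteen", royalty_fraction=0.10)
reg.sell("ticket-2", "fan", 100)
print(reg.sell("ticket-2", "scalper", 500))   # creator_gets: 50.0
```

The only difference between the two tokens is whether the mint included the royalty rule — which is exactly why you can't assume every NFT pays its creator on resale.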
My general feeling is that there is a lot of "value" in crypto currencies right now with no place to go. The owners of that value are trying really hard to build a market in *anything* so they can eventually cash out. So I imagine there are a lot of crypto millionaires overpaying each other for NFTs so they can convince people this market makes sense.
I don't think it does make sense. But I'm happy watching some struggling artists get some of this cash. I can't wait to read a book on this subject (and this time) in 5 to 10 years.
It seems like a solution looking for a problem, but you might think about why there are titles for land and cars. Having the title to a car doesn’t mean you have the car. It might be useful for proving you own the car, but first you need consensus that we care about who has the title to the car, or it’s just a piece of paper. It seems like having the *key* to the car is pretty good proof of ownership, except that it would be inconvenient because sometimes you want to let people use the car without selling it to them.
So it seems like these tokens could be useful, but first you need to get agreement that the token is how we prove ownership of the thing, and most people aren’t all that willing to grant ownership to any sketchy anonymous person who shows up with an Internet token. It might work as a second factor to make ownership transfer inconvenient but possible. (Inconvenient compared to loaning someone a key.)
Much of the appeal seems to be that you could transfer ownership in the same transaction that transfers cryptocurrency in the opposite direction. This seems like the only reason for it to be on a blockchain at all. So this is proof that you paid for it, I guess?
>much of the appeal seems to be that you could transfer ownership in the same transaction that transfers cryptocurrency in the opposite direction. This seems like the only reason for it to be on a blockchain at all.
Not really. The primary reason to want ownership on a blockchain is decentralization: you don't have to rely on some central authority to manage your ownership, and it can be independently verified by anyone with internet access, without relying on some other service to host the current ownership information.
Some combination of speculation and conspicuous consumption? I don't fully understand it either.
What does it mean to "own" something which isn't at all under your control? If I own a painting, I can put it in my house and only in my house and it's most definitely not in your house. If I own an NFT, anyone on the Internet can download the same damn jpg my token points to for nothing and it's 100% identical to "my" art. You don't even get copyright over it!
The only value I can see is that you're buying/selling a story: "This is the token that supported the artist who made Artwork X." And there's some value there—after all, perfect forgeries of masterpieces are themselves worthless. But still, I think there's got to be a qualitative difference between something I'd find in a museum and even the most popular animated gif meme.
I think they're currently a scam but some kind of ownership token actually makes a lot of sense for simplifying copyright. People are buying and selling one off art pieces meant to stand alone, that seems silly to me, what makes sense are game assets, audio samples, things that people who would bother to obtain the rights before using the content would be interested in.
It's a phenomenon that has made me start to really question whether the concept of "ownership" is obsolete. Even more than existing digital media, which still puts up at least a perfunctory paywall before you're allowed to consume it - from what I understand everyone can look at these art pieces no matter who owns them.
The people buying them must fundamentally care about "owning" the thing, in a purely abstract sense that is hard for me to fathom. Or more likely, it's a bubble with them thinking more people will get fooled into wanting to buy one in the near future and drive the price up.
NFTs are kind of bullshit – not so much the very idea of them, as the "NFT space". Thing is, the art market is also kind of bullshit, so an NFT art market makes sense!
In terms of them being attached to 'art' works (Jack Dorsey's first tweet selling for $2.5m and counting) I purely perceive these as Veblen goods / Zahavian signals; their value is only for signalling that you've got money to waste. True of much of the art world as a 'store' of value.
Side note: there is an oddity with art in that it can sit in customs warehouses indefinitely, and be outside any tax jurisdiction. You never quite know what low-tax country you might want to land it in.
This is a little silly, but I'd like to add my novel to the Top Web Fiction website, and it looks like it's invitation-only. Is there any way you could send me an invitation?
I'm assuming that somebody here will have a more informed take than my own, so I'm writing this in the hope that I get put in my place. Here are my thoughts:
First of all, it's already interesting (although I assume well-known) that facial expressions + head pose (it looks like these are correlated) do so well. Obviously these are choices, and choices reflect the culture of one's peers, but I'm not sure I'd have guessed that the bias is so strong. Relatedly, I definitely would not have guessed that "sunglasses" would be so poor of a predictor relative to, well, anything else.
I was personally surprised that the algorithmic accuracy was greater than human accuracy at the same task. After reading a bit, I think that I just haven't kept up with the SOTA of facial recognition --- apparently algorithms now beat humans in general. This is surprising to me, since I learned that our brains are pretty specialized for facial recognition, but I guess this is just my ignorance speaking. Fine.
Anyway, for the big picture: the attributes against which facial recognition was compared were all *voluntary*. Users had the ability to choose whether to look up or down in their profile photo; whether to smile or frown or wear sunglasses. There are many such choices, and including more of them (like more subtle aspects of the facial expression) might have yielded an even more accurate prediction. Facial recognition itself, though, will include demographic characteristics (*partially* controlled for by the study), but also things like "has this person had plastic surgery". What I really want to know is, how much of the predictive power lies with these uncontrollable characteristics, and how much lies with aspects of the facial expression that can be faked?
I now wait for somebody to tell me that the entire study is flawed because it didn't control for whether the pictures were taken at day or at night.
> What I really want to know is, how much of the predictive power lies with these uncontrollable characteristics, and how much lies with aspects of the facial expression that can be faked?
This meme[1] is a meme and obviously not scientific, but let's use it.
Someone in the top left would need to buy a trucker cap (easy), sunglasses (easy), take the photo from below (easy), while they're in a truck (requires effort and/or non-trivial money), get a haircut (easy to do but requires a long term investment), and possibly grow facial hair (same).
The truck will be cropped out. So if you wanted to fake it maybe the only real investment is to change your hairstyle. It's not something you quickly do for 5 seconds, but it's not too hard either.
Facial symmetry correlates with IQ. IQ correlates with political orientation (higher IQ makes you more likely to be liberal, lower IQ makes you more likely to be conservative). Therefore, political orientation should correlate with facial symmetry.
Race is also predictive of political orientation, so if you fail to compensate for that, you might get a huge but worthless signal.
The correlation isn't particularly strong - it's only a few IQ points different on average. There are lots of dumb liberals and smart conservatives.
Note also that this is talking about liberalism as in *liberalism*, i.e. belief in greater civil liberties. Thus it is not a left/right thing per se; people like Mitt Romney are liberals in this categorization.
While this is true, given that both are related to cognition, it wouldn't be surprising if it was the reason. Additionally, areas with more inbreeding are often stereotyped as being more conservative, which would suggest that mutational load might be associated with it - and that is inversely associated with both facial symmetry and IQ.
Pedantic note: I said 'guarantee' and 'correlation' was unrestricted. The fact that squares exist does not invalidate the statement "rectangles are not guaranteed to be squares" and adding "but squares are" is not useful.
> I was personally surprised that the algorithmic accuracy was greater than human accuracy at the same task.
It was not the same task: the AI is working on one set of pictures, and the human performance figure is only a reference to some other paper using a different (presumably less biased) picture gallery.
The problem with agreeing to stick DNH (= do not highlight) or something like that on the end of comments is that someone can now search specifically for those tags, and ... if they have really no morals, write a bot that automatically reposts these comments as highlights on sneerclub or something.
I use an online alias here that's not linked to my offline identity or any other online ones, and so should you.
People might do it in a non-standard way, a bit how we have a culture of obfuscating email addresses online. Not saying it would be practical, but I'm mildly curious to see the creative solutions that would turn up.
I wouldn't expect casual obfuscation attempts to stand up to any serious attempt to automate finding them. (This is also my opinion for obfuscating email addresses.)
They'll still stop a statistically noticeable number of would-be searchers, but only in a "beware trivial inconveniences" sense.
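As a concrete sketch of why casual obfuscation is only a "trivial inconvenience" (my own illustration, not a real scraper; the patterns and sample addresses are invented), two regex substitutions are enough to recover the common "name [at] site [dot] com" forms:

```python
import re

def deobfuscate(text):
    # "[at]", "(AT)", or a bare " at " between words -> "@"
    text = re.sub(r"\s*[\[\({]\s*at\s*[\]\)}]\s*|\s+at\s+", "@", text, flags=re.I)
    # same treatment for "dot" -> "."
    text = re.sub(r"\s*[\[\({]\s*dot\s*[\]\)}]\s*|\s+dot\s+", ".", text, flags=re.I)
    return text

print(deobfuscate("alice [at] example [dot] com"))   # alice@example.com
print(deobfuscate("bob (AT) mail (DOT) org"))        # bob@mail.org
print(deobfuscate("carol at example dot net"))       # carol@example.net
```

Anything a human can decode mechanically, a ten-line script can too — so these tricks filter out only the laziest searchers.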
How about linking to 6 or 8 of them at a time with just a list of the books reviewed?
I guess I’d plump for Scott skim-reading the whole lot (as punishment for being such a book-reviewer-attractive blogging phenomenon). And then publishing his favourites so the masses can chip in to help select those for whom the riches beckon.
Could he post them as comments to a top level "Book Reviews" post? I guess I don't know what the length limit is on Substack comments -- I can paste a ton of text in here and it doesn't seem to trip anything, but there might be some limit server-side.
> I’ll choose some number of finalists – probably around five, but maybe more or less depending on how many I get – and publish them on the blog, with full attribution, just like with the adversarial collaborations.
Note that I like Scott's book reviews just fine. They're often of things I would never have read, and I enjoy knowing something about them. But 100 is a lot, and none of them are Scott's. If, in fact, there's only a half dozen highlighted to read and vote on, I'll definitely be doing that no problem.
A possible solution for the comment highlight issue would be to make these posts subscriber-only. The obvious downside is that non-subscribers will lose some interesting content, and it kind of goes against your original commitment to make most of your posts free. On the other hand, this puts highlighted comments out of reach for search engines and the internet at large - so the threat to commenters' privacy goes way down.
I was under the impression that you were going to publish the best 5 book reviews and invite additional comments to help decide on prizes, kudos and glory. Are you now going to publish them all? All 100?
Ok, a) that’s a lot b) you’ll publish mine. Have you thought this through?
It was never the plan to post them all. I think Scott planned to post the top 5 so that people voted, and then maybe link to a few others. I'm too lazy to search for where he talked about it.
The most politically influential person you never heard of? Is it possible an unassuming children's books author shaped the world we live in?
A strange thing happened a few days ago. My wife and I were in the rumpus room reading Who's Bashing Whom? Trade Conflict in High Technology Industries ( http://cup.columbia.edu/book/a/9780881321067 ) when our oldest son told us he had been assigned a book to read by British writer Aldous Huxley. My wife and I had never heard of him before. What happened to Beatrix Potter, C. S. Lewis, Roald Dahl, R. L. Stine, Horace Greeley, Lemony Snicket, Lewis Carroll and G. A. Henty? Has the great state of California cancelled them? Anyway, I decided to ask my own questions and do some research on this fellow. Apparently, his books are required reading in many Democratic-controlled school districts, are very strongly opposed by concerned parents due to anti-religion and anti-family themes and sexual content ( https://en.m.wikipedia.org/wiki/List_of_most_commonly_challenged_books_in_the_United_States ) and (surprise, surprise!) were widely read in the communist Soviet Union ( https://www.jstor.org/stable/3831583 ). The book my son was assigned to read proposes abolishing families, replacing religion with orgies and banning the Bible and the works of William Shakespeare.
But who was this Huxley guy and how did he shape the world we live in? He was a scion of the Huxley family, founded by Victorian scientist Thomas Huxley, who was nicknamed Darwin's Bulldog due to his rabid championing of Darwin's ideas. More than any other single individual, Thomas Huxley was responsible for the triumph of Darwinism, which led, in the 20th Century, to the Holocaust and the Gulag. Aldous Huxley, besides writing books, also studied Oriental religions such as Buddhism and Hinduism and became widely familiar with the substances used in such religions to create altered states of consciousness known as trances. On these matters, he wrote a book called The Doors of Perception, which kicked off the psychedelic movement. The famous rock n' roll band The Doors was named after the book. Huxley's eloquence and charisma made him a kind of intellectual patron saint of consciousness-altering drugs, the consequences of which America and the world at large experience to this very day. Here he can be watched being interviewed by famous journalist Mike Wallace https://m.youtube.com/watch?v=alasBxZsb40 .
The rabbit hole, however, goes much deeper. Huxley wrote a biography of famous Catholic priest Urbain Grandier, who was burned alive by his own coreligionists for practising witchcraft. Is it taking things too far to suppose that, while maintaining his façade as a beloved children's books author, Huxley was able to combine his studies of Hinduist devil-worship and his research on Grandier's medieval witchcraft to make demonic forces do his bidding and assure his success in Hollywood, where he worked as a scriptwriter? If you don't believe so, maybe you should read this hair-raising report of a visit by an Evangelical author who originally did not know who Huxley was to the Buddhist monastery in whose founding he was instrumental, and decide for yourself if you choose the blue pill or the red pill. https://midwestoutreach.org/2019/05/18/thomas-merton-the-contemplative-dark-thread/
Why would you ascribe to Aldous Huxley the political philosophy of the totalitarian society he portrays in his classic dystopian novel, Brave New World? Would you say that George Orwell is also a proponent or propagandist of communism for writing Animal Farm, or 1984? Also, what are the *consequences* of mind altering drugs that "America and the world at large experience to this very day"? Are you including prescription drugs in this, or stimulants that were widely used in non-psychedelic circles, or for some reason just psychedelics? Aldous Huxley is not at all someone I hold in the highest esteem, but I think this call to cancel him is anemic and uninformed.
It is not that simple. Orwell was a socialist who fought for the socialist government against the Francoist Nationalist forces. Trotsky opposed the Stalinist regime and wrote books such as The Revolution Betrayed, but was a communist propagandist.
I think it is clear Huxley's works legitimated, so to speak, drug taking.
So we are sniffing out all left-leaning intellectuals and drug legitimators and putting them in an axis of demonic evil? Ok, go for it. It sounds rather totalitarian to me. I would be far more concerned about reading books from Francoists or behaviorist psychologists or Evangelical apostasy-detectors (such as the one you linked) than socialists or psychonauts, who often have interesting perspectives and critiques (that don't involve reducing humans to animals or machines).
My point is, there is good reason to believe Huxley was marshalling demonic forces to subvert American society while maintaining his façade as a beloved children's books author to suit his own goals.
Huxley wasn't a "beloved children's books author." He was a prominent writer and philosopher, famous mostly for a dystopian novel. I don't think he wrote any children's books at all — can you name one? Shakespeare is frequently assigned in class — does that make him a "children's books author"?
Again, the Catholic Church burned at the stake Urbain Grandier, about whom Huxley wrote a biography. Why would it have done so if he were not a wizard?
It is more complicated than that. The higher castes used it, too, though probably less. It is also presented as a means to perfect one's character. "Christianity without tears" is how it is described.
Woah hold on just a minute here. You haven't heard of Aldous Huxley? That's hardly a mark against him, and Brave New World is pretty common high school reading. And it's odd you're comparing him to the likes of R.L. Stine and Beatrix Potter (the latter I read in kindergarten and the former was on middle school shelves, in California). Maybe C.S. Lewis, but I'm not sure he's usual reading for schools.
You *are* talking about Brave New World right? Since when did that propose abolishing families and banning the Bible? ...You realize that was a dystopia right? (And doesn't "very strongly opposed by concerned parents" sound a lot like canceling?) I tried to read your reference on the Soviet Union thing but it's behind a paywall. The preview does however contain the quote "For the six decades since he was first heard of in what used to be the Soviet Union, that spirit, more often than not, was critical and hostile." So I'm not sure where you're getting your conclusion.
Also... what's this about Darwinism and the Holocaust? Are you talking about some weird form of social Darwinism? Trying to blame the descendant of Darwin's proponent for a perversion of his theory a hundred years later is like... trying to blame Orville Wright's grandson for 9/11 (he never had kids, but the point remains).
> Huxley's eloquence and charisma made him a kind of intellectual patron saint of consciousness-altering drugs, the consequences of which America and the world at large experience to this very day.
Man, wait til you hear about Alexander Shulgin.
> Is it taking things too far to suppose that, while maintaining his façade as a beloved children's books author, Huxley was able to combine his studies of Hinduist devil-worship and his research on Grandier's medieval witchcraft to make demonic forces do his bidding and assure his success in Hollywood, where he worked as a scriptwriter?
Sorry, what? Is it weird that my main takeaway from this is that you think Huxley is a beloved author of children's books? You already said you think his books advocate banning the Bible etc., so I'm not sure exactly what facade you think he was holding up? If you're really going to try to argue here that he was marshaling demonic forces, well, I don't know what to tell you. He seems to have used much of the money to help out people in wartime Germany, so I suppose the demons have that going for them. I guess to answer your question then, yes, yes it is.
"Brave New World is pretty common high school reading."
Exactly. Because Democrats find it useful to their agenda.
"You *are* talking about Brave New World right? Since when did that propose abolishing families and banning the Bible? ...You realize that was a dystopia right?"
One has to read between the lines. And this was a constant in Huxley's work. In his last book, Island, one of the main characters criticizes St. Paul, Calvin and the hymn "There is a Fountain Filled with Blood Drawn From Emmanuel's Veins" while supporting Hinduism and Buddhism.
"(And doesn't 'very strongly opposed by concerned parents' sound a lot like canceling?)"
No, it is very different. We are talking about the books children have access to or are given to read. Schools and libraries, by definition, make selections.
"I tried to read your reference on the Soviet Union thing but it's behind a paywall. The preview does however contain the quote 'For the six decades since he was first heard of in what used to be the Soviet Union, that spirit, more often than not, was critical and hostile.' So I'm not sure where you're getting your conclusion."
However, it clearly says "In his new capacity as a liberal public figure, Huxley was judged worthy of public attention. This accounts for the sensational appearance of four chapters of Brave New World in Number 8 for 1935." At the height of Stalinist repressions and anti-Western animus in the Soviet Union.
"Sorry, what? Is it weird that my main takeaway from this is that you think Huxley is a beloved author of children's books?"
His works have been adopted by schools and libraries all over the United States.
"You already said you think his books advocate banning the Bible etc., so I'm not sure exactly what facade you think he was holding up?"
His façade as a beloved children's books author.
"If you're really going to try to argue here that he was marshaling demonic forces, well, I don't know what to tell you. He seems to have used much of the money to help out people in wartime Germany, so I supposed the demons have that going for them. I guess to answer your question then, yes, yes it is."
Couldn't it be part of his façade as beloved children's books author? Soviet spy Philby managed to be made head of SIS (British intelligence) Section Nine, which led anti-communist efforts.
I'm not sure if this is what's going on, but one of the many irritating things about the modern era is referring to everyone below the age of majority as a child.
"Children's book" as an idiom typically means books aimed at people below age 8 or so, which Brave New World is not. It's treated as more of a book for teenagers.
I think Huxley & Orwell wanted their books to be read & taken seriously by adults. Adults in turn find the books useful for introducing people approaching adulthood to adult political concepts.
I'm not sure how you go from a single sentence saying "judged worthy of public attention" to "widely read in the communist Soviet Union", especially without bothering to read the rest of the source, but I'll give you that it's as well supported as everything else you're saying.
> Couldn't it be part of his façade as beloved children's books author?
I don't know, could it? Maybe Gandhi's peace advocacy was just a facade to spread HINDUIST DEVIL-WORSHIP to the West. And Hitler was working for the forces of good to ensure the demise of the terrible toothbrush moustache.
At this point, you're just adding epicycles and trying to use cheap language tricks to... try to convince everyone Huxley is an agent of the devil or something? I mean, I can do it too:
Jesus was an anti-Jewish[1] anarchist[2] who used supernatural agents to enforce his will[3] and maintained his facade as beloved children's book author[4] in order to spread ideas that lead to the deaths of millions[5].
After further reflection I have concluded that Thiago Ribeiro is an agent of the Epoch Times or one of their associated entities. I have observed consistent errors and idiosyncrasies in their mimicking of right wing "traditionalist" anti-communism, such as what is on display here: an assumption that people are unfamiliar with Aldous Huxley's name; a strange grouping of R.L. Stine with C.S. Lewis and... Horace Greeley of all people?; an apparent belief that Huxley is primarily a children's book author; an awkward and internally inconsistent Christian purism; a paranoiac obsession with Communism and Socialism and their apparent cultural tentacles, in this case the demonic; extraneous references to Chinese cultural flashpoints (in this case using Buddhism as a sort of bogeyman).
There are more signs here but I will desist from the analysis, lest I enable them to improve their (pretty terrible) simulation of "truth and tradition".
Does anyone here agree with my thesis, and does anyone else have thoughts on this particular flavor of information warfare, or on discerning bad faith arguments in general?
1) Why would *anyone* (Epoch-employed or not) adduce in THIS forum an apparently "Evangelical" take-down of Thomas Merton and Aldous Huxley on the basis of their demonic involvement and one blogger's "experience" of ill stomach spirits upon visiting his lair? Is there an anti-demonic constituency here among the Astral Codex clan that I was unaware of?
2) Is there a name for the bias (that I suspect we are suffering from en masse) of responding to a bad faith messenger as if he were coming in good faith? For instance my original response to Thiago, and Pycea's as well. We did not give the author much honor for his arguments but we did treat him as a person who was making an argument, rather than a false character injecting a meme along with his cohort of meme bots.
Is there a name for this bias, and more importantly--what are its consequences?
2) I have no idea what the Epoch Times is, if it is even a thing.
3) R. L. Stine, C. S. Lewis and Horace Greeley wrote children's books.
4) The point is, why would the Soviet regime, at the height of Stalinist isolationism and repression, have supported Huxley if not because its leaders found his ideas congenial?
5) Not only Buddhism, but also Hinduism. Both were highly commended in Huxley's works such as Island and The Devils of Loudun (the biography of Catholic witchcraft practitioner Urbain Grandier).
Thought that flows from fascism / cultural purism / nationalism can be so breathlessly simplistic that it is sometimes hard to believe it is held in good faith-- but here we are, in 2021. It is interesting that people who constantly decry totalitarian psychology and cancel culture can so perfectly exemplify it as well.
1. I meant Horatio Alger. I am sorry. He penned works such as Frank's Campaign; or, What Boys Can Do on the Farm for the Camp, Paul Prescott's Charge: A Story for Boys and Struggling Upward; or, Luke Larkin's Luck.
2) I think it is clear Huxley legitimated drug use, Buddhism and Hinduism in the USA and in the West at large. Also, it seems clear countries like China and the USA are converging to some version of the society Huxley championed in Brave New World.
3) I have no idea. I am not a Census taker.
4) It is presented as being so, with the characters going on and on and on about how the society is stable and everyone is happy all the time. Many of the book's main themes are recurrent in Huxley's works and have appeared in books like Island.
Sorry, I meant Horatio Alger, author of books such as Ben The Luggage Boy; or, Among the Wharves, Ragged Dick; or, Street Life in New York with the Bootblacks and Paul Prescott's Charge: A Story for Boys.
Nah, I think we've just picked up a new friend who came to us via the NYT article and truly believes we gather in every hidden Open Thread to belt out a few choruses of the Horst Wessel song, so they're adopting camouflage to fit in.
The Horst Wessel song is catchy even if the ideology is detestable. The same is true of The Internationale, the Soviet Anthem versions, Giovinezza and Cara al Sol.
Thiago has been commenting on Marginal Revolution for years now, always obsessed with how wonderful Brazil is. I don't think the NYT was his way in, but he's definitely strange.
I don't know what that is. I just pulled up the site, but other than that it's clearly very right-wing, I'm not sure what I'm looking at. Can you sum up? (Like a paragraph, I'm not asking you to write an essay.)
Epoch Times is apparently owned and operated by Falun Gong, a Chinese religious movement that is suppressed by the Chinese government. They have made a major play for Western media in the past few years, came out swinging hard for Trump and conspiracy theories, and deliver free copies of their paper all over the country, with other versions around the world.
I agree he's weird. What specifically made you link this dude to the Epoch Times, however, instead of just generally accusing him of being a troll? That's a very specific accusation to make.
Honestly, I jumped the gun. But I've been reading some of their stuff over the past few months, and I always notice similar signs of foreignness, paranoiac Communist obsession, awkward Evangelistic simulation, and random references to elements of a Chinese worldview that don't make sense over here. In this case it seems the fellow is a Brazilian. Clearly my trolldar needs recalibration. Also I still just find it hard to believe that these campaigns have produced actual people whose minds work in that way now.
Funny stuff. Perhaps best read in combination with this review complaining about Huxley hanging onto social conservatism and writing as if Brave New World is a dystopia rather than a utopia. http://adamcadre.ac/calendar/14/14432.html
Acceptable in the sense of other people thinking it makes sense, no. Acceptable in the sense that he is allowed to say it, sure. The fact that some people have weird ideas is both inherently interesting and informative about the world we live in.
Calling Hindus devil-worshippers is fairly unkind and not particularly necessary, not to mention untrue. Interesting and weird ideas are fine, but random diatribes against other religions and The Commies aren't exactly high quality discourse.
I don't think Scott has been given a ban-hammer yet, and there is certainly, as yet, no report button. I too am sure it's banworthy, but I don't know if Scott can do it yet, and I don't see any way to call it to his attention.
At least one person (pseudo-Josh Hawley) has already been banned, so we know it's something Scott can do. We do need a report button, though. I'm surprised Substack didn't already have one built in.
Substack desperately needs an “ignore commenter” feature. Plus people here need to remember the first rule of internet commenting: don’t feed the trolls!
"Don't highlight" is kinda funny because it announces to the world that you'd rather not announce to the world this opinion. I think we just have to accept the nature of social media. Anything you post can go viral at any time after you post it. Could be getting highlighted, could be ten years down the road at a job interview. That's how it works, highlighting or not. We should keep highlighting as is because it keeps us in touch with the reality of how online forums work.
My comments at the end of the survey, because I think people other than Scott need to hear them.
1) IANAL, and I think you should get actual legal advice on this because the idea that "pointing out a potential investment opportunity" is illegal doesn't pass the smell test with me. But hey, I think growing my own corn and feeding it to my own pigs isn't "interstate commerce" so what do I know.
2) There are, in my mind, more ethical issues with promoting your buddy's "non profit" (from which the buddy draws a 6 figure salary, or is using to build points for their kid to get into Harvard, or whatever) than in promoting a stranger's business. I think the glorification of "non-profits" completely misses how modern society/economics treats these as wealth & prestige builders for the individuals employed by them, and is another casualty of the demonization of normal business and investing.
3) In any case, you should disclose whether (so far as you know) a friend/relative/ex/employer is involved in any such promotion. To me, that's the only ethical hurdle beyond 'don't take money for promoting things unless you disclose that' and 'def don't promote things you don't think should be promoted, even if they pay you for the promo.'
4) I like the highlights reels, as they often bring out more conversations. They are not a complete substitute for reading the whole comments, which is occasionally enlightening as to what *doesn't* get promoted.
5) You should never promote something in a malicious way, or in order to cause issues for someone.
6) What we say is our own responsibility. Period. You are not the boss a' me, and you don't get to police what I say. I am a grown adult and I do not yield you that power.
The covid infection and hospital rates in the US have both dropped steeply over the past two months. The timing correlates with when we started vaccinating people. But to (non expert) me, it seems like it has dropped way faster than we have been vaccinating. I haven't found any news stories that say whether this drop was due to vaccinations or maybe due to post-holiday social distancing or maybe even due to some amount of herd-immunity.
What's the consensus here? Why have cases/hospitalizations dropped so precipitously in the US over the past two months?
My optimistic case had natural herd immunity being reached on 20 January, which is pretty close to what we got. However, the rate of the subsequent decline is more than I would expect from natural herd immunity alone, and I suspect that is in part due to vaccination of the most susceptible individuals at about the same time that natural herd immunity started reducing the total new-infection rate.
Nice read. I think if I had read it when it was written I would have been a lot more skeptical than I am right now :)
FWIW, it seems cases are falling less quickly in the Northeast now than anywhere else. Coincidentally, the Northeast was the first place to get hit hard. Do you think it's possible that immunity can disappear after around 12 months? Or is there a different explanation?
The problem with "natural herd immunity" is that what percentage of the population has to be immune in order to produce herd immunity depends on how people are behaving. If we were all hermits, we might manage herd immunity at zero.
My interpretation of what happened is that by early January, the percentage immune was enough to give herd immunity if people were moderately careful, not enough if many people were having parties indoors. So once the holiday socializing was over, infection rates started to fall pretty fast.
Could be behavioral changes, or some environmental factor. The thing is, each time the virus starts to multiply again, the population in which it grows has changed: the number of completely unprotected people who can still get sick has been reduced, through vaccination (though at the moment in Europe and the US, apart from special cases like care homes, that concerns a trivial number) or previous infection. So even if R>1 once again, back to last year's levels, the epidemic should look quite different just because the susceptible population is much smaller. It will get exhausted not faster, but with a much smaller wave. This may explain why we have more of a slow burn than a wave in Belgium: R is growing progressively, but the susceptible population gets exhausted very quickly. It seems more reasonable than the official interpretation, which assumes a quite low share of people protected by a previous infection (20-30%) that, while significant, would not change the epidemic dynamics so much, and an R that hovers around 1 just because the measures (government- or self-imposed) are just right. This seems too good to be true, especially with the anecdotal evidence of widespread disregard of many of those measures (by people who religiously followed them a few months ago).
So I think the share of really susceptible people is much lower than the 70-80% you get from (100% minus vaccinated minus previous covid), probably because of cross-immunity and/or natural resistance (blood type, for example). If you assume 60% of the population is naturally resistant and would not catch COVID in most circumstances (barring massive exposure, for example), what's happening now and what happened in the past make much more sense. For example, I have a few couples among my acquaintances where one partner caught it and the other did not (sero- and PCR-negative)... while living a normal couple's life in a single home. That's not really possible if 100% of people could be infected under normal social circumstances.
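The "smaller susceptible pool means a smaller wave, not just a slower one" point can be sketched with a toy SIR model. This is purely my own illustration; every parameter value below is an assumption, not fitted to any real data:

```python
# Toy SIR model (illustrative only; parameters are made up) showing how
# shrinking the susceptible pool shrinks the wave, not just its speed.

def sir_peak(susceptible_frac, r0=2.0, gamma=0.1, days=365):
    """Run a simple daily-step SIR model.

    Returns (peak infected fraction, total fraction ever infected)."""
    s, i, r = susceptible_frac, 0.001, 0.0  # start with a small seed of infections
    beta = r0 * gamma                        # transmission rate implied by R0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i               # new infections this day
        recov = gamma * i                    # recoveries this day
        s -= new_inf
        i += new_inf - recov
        r += recov
        peak = max(peak, i)
    return peak, r

# Same R0, different susceptible pools: 80% vs 40% of the population.
for sf in (0.8, 0.4):
    peak, attack = sir_peak(sf)
    print(f"susceptible={sf:.0%}: peak infected={peak:.4f}, total infected={attack:.4f}")
```

With 40% susceptible, the effective reproduction number starts below 1 and the seed simply burns out, while the 80% case produces a real wave. The qualitative point (not the specific numbers) is what the comment above is arguing.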
In terms of hospitalizations - we should expect that to come down quite rapidly, because hospitalizations are heavily skewed towards the elderly, who are given first priority on the vaccine. So (to give an 80/20 rule) something like the first 20% of vaccines distributed end up doing 80% of the work in terms of bringing down hospitalization and death.
The thing I've heard most is what some people call the "control mechanism" - whenever stories about the pandemic become scary enough and widespread enough, more people start being a little bit more careful about what they're doing. In this case, many people probably had family plans for Thanksgiving/Christmas/New Year's, so that they delayed taking things more seriously until after those dates. But in very many states, the peak day of infection seems to have been right around Jan. 8, which is 7 days after New Year's, which is exactly when you'd expect the peak if you think people started their new caution right on Jan. 1.
Vaccination, and widespread immunity from prior infection, are likely contributing factors as well. But the thing that caused the big change is most likely human behavior.
Thanksgiving+Christmas results in a higher growth rate for cases. When they are over you should go back to the same growth rate you had before. To get a reversion to level you need to assume a feedback mechanism in which the higher level results in people taking more precautions than they were taking before the holidays, pushing the growth rate negative.
Yep. This is what I meant by "thermostatic response the public has seemed to have (independent of legal restrictions) to caseloads and hospitalizations."
My point was that your first order was wrong. Without the thermostatic response, the growth rate should have gone back to what it was before, not the level. That isn't quite true because the high case rate in the previous month+ raised the number of immune people, but that doesn't give you a reversion of level, it just means the growth rate should have been a little lower than before the holidays.
People should just adopt some quick signal they can indicate if they don't want their posts highlighted, something like an asterisk or DNH or whatever.
> I’m a little worried about the Comment Highlights posts, because they broadcast things people said down in the comments where nobody would ever read them out to the entire Internet.
Just wanted to chime in and say 1. I don't think there's anything wrong with how you've handled it so far, 2. I do think this is a fair/nontrivial concern which is worth considering, and 3. I don't see any easy solutions, so the answer might just have to be to accept it?
On the last comment highlights I was in this *exact* position. I wrote a hasty/unedited post about Josh Hawley (that basically amounted to a rant) under my real name, and was slightly surprised to see it featured more prominently. Of course, I stand by the general idea, but it was written kinda impulsively out of annoyance, so if I had thought more than a couple of people would read it, I would have written it very differently (and been careful about the argument), and I wouldn't have "chosen" to have it featured.
To be clear, I don't personally mind, and if this had actually troubled me I could have tried to message you (to omit the name, or remove the comment). But I can at least sympathize with how it might take someone by surprise, even though in retrospect it seems fairly obvious (it's an open discussion thread, the blog generates discussion by highlighting certain comments, if you don't want to be associated with something you post, don't put a name on it?).
TLDR: the norms seem pretty obvious and straightforward (it's an open/public blog, of course any comments might be featured to generate further discussion), but I can still see how someone would be surprised.
On the other hand, I just don't see any easy solutions! If you omit the name of the poster by default, I assume some folks would feel like they weren't getting "credit" for thoughtful comments they made. Nor do I think it's feasible to individually ask posters for permission.
I think the easiest improvement is just occasional mentions of these norms. For instance, on each of these "Comment Highlights" posts, I'd add a quick note explaining 1. this blog generates further discussion by highlighting comments, 2. if you want a featured comment removed, message you, and 3. if you don't want a comment featured in the future, include a note in that comment (or maybe it could be in someone's profile).
Obviously, you will still get occasional "surprises" by someone who is unfamiliar with these norms. But I don't think that edge case is a particularly big deal, it seems overall perfectly reasonable to me for a blog to highlight comments that are written on that blog. And as long as you include that occasional note to the "Comment Highlights", it would only ever be a worry for someone who was new to the blog (and growing pains like that are inevitable for any community norms).
This is a great example of how random comment highlights can improve the discourse, and is maybe a good argument for keeping them. If you know your sloppy comment might get highlighted, maybe you'll spend some extra time writing/editing it and we're all better off?
Additionally, I suspect many people want to get featured and the possibility of getting highlighted will encourage them to write better comments.
"On the other hand, I just don't see any easy solutions! If you omit the name of the poster by default, I assume some folks would feel like they weren't getting "credit" for thoughtful comments they made. Nor do I think it's feasible to individually ask posters for permission."
I think removing the name makes the most sense - posters can claim the posts in the highlights thread afterwards, if they want to, the same way people quote something they saw and then the creator hops on to say "hello, person who wrote Essay X here. Nice to see such a lively discussion..."
Regarding the DNP company, I think it's a trust issue. Theoretically, there's nothing wrong with saying "this company looks cool and you can invest in it here", but in practice nine out of ten times someone with a platform says that kind of thing it's because they have some financial stake in doing so rather than because they legitimately think it's cool. I believe Scott when he says he just thinks it's cool, but for anyone without a strong prior about Scott's trustworthiness, the reasonable thing to think upon seeing products shilled is "Oh, one of *those* writers."
And in this vein, I think the biggest risk from the endorsements is slowly looking more like one of those writers. Doing it occasionally probably won't hurt his standing in the eyes of people vaguely familiar with his work, but if it becomes a frequent thing, the pool of people who don't trust him will grow a lot. And this has the major downside of hurting his ability to sincerely talk about how much he cares about Givewell/EA, Metaculus, and whatever the next thing he really cares about is.
I'm not a gamer, but I'm looking for action video game recommendations (for PlayStation 4, Nintendo Switch, or Steam) that I can use for brain-training my middle-aged brain. Ideally, the game should dynamically increase in difficulty as I get better. If such a thing doesn't exist, I would like something that can be set at a super-easy level (something that would insult a five-year-old) and then ramped up manually. I tried "Call of Duty: Black Ops", but it was too difficult even at the easiest level. I don't have great eye-hand coordination, and my reflexes aren't the best. I'm not interested in walking simulators, puzzlers, or similar.
Try Hades? It's still pretty fast-twitch, but does a pretty good job at "dynamically increasing in difficulty to your skill", in the 'boring rogue-like' way instead of something cleverer.
The core part of its appeal (to me at least) is that they made all parts of the loop quite fun, and so there's a natural pull to come back and stay in (whereas normally games like this feel like they have discrete episodes).
Action games might train your reaction time (and _maybe_ attention) but not a lot else. I played those a lot in my 20s. Playing them didn't make me smarter in any verifiable way. In fact, likely the opposite occurred because I wasn't spending the time learning anything useful.
It probably doesn't meet your criteria, but some people I respect have been raving about Factorio. I haven't tried it though.
You're probably right, but I heard an intriguing podcast with a neuroscientist named Adam Gazzaley. He and his team created a video game called NeuroRacer that apparently showed benefits for people who played it. One of the game's features was that it dynamically adjusted the difficulty level based on how well (or poorly) the player was performing. I don't think the game is available to the general public, though.
I would look into roguelikes--most of them get progressively harder as you get deeper into the 'dungeon'. My personal favorite of the genre is Risk of Rain 2: it's a 3D third-person shooter where basically you beam down to an alien planet and shoot your way through it (there is a story, but it's very light) while picking up items that enhance and change your abilities. If you check it out, I would recommend sticking with it at least until you unlock Huntress (which should be relatively easy, especially if you ramp down difficulty). There are different characters with different abilities, and the only character unlocked at the beginning is Commando, who is rather boring and a bit finicky to pilot. Especially if hand-eye coordination is an issue, try Huntress--her primary ability is auto-aiming her shots.
Possibly a bit too far from what you're looking for, but you could also try The Long Dark. I'm not sure if it would be a bit too "walking simulator" for your tastes, as you do have to walk a lot in the game, but it is one of the few games I play where I feel like I'm constantly keyed in and hyper-aware. In The Long Dark, you are the sole survivor of a plane crash on a remote Canadian island far north, and you must try to survive the strange, supernatural, unending winter. There's not a lot of action or gunplay (a bit for hunting and for deterring hostile wildlife), but it wrings a lot of tension out of simple tasks like "can I make it to that cabin before I freeze or a wolf eats me?". Especially at higher difficulties balancing your short-term and long-term survival needs becomes an interesting challenge, and the game gets harder the longer you survive as the winter gets progressively colder, wildlife becomes scarcer, and the man-made items you rely upon in the early game break down.
...hmm, you may be right there. I now find the early game mind-numbingly easy, but yeah now that you mention it, I was deeply frustrated the first few hours I played it (and I have gaming experience, although I'm not a shooter person).
It's my impression that roguelikes are almost exclusively targeted at people who want really hard games. Even if you isolate just the easiest part at the beginning, I couldn't name one that would be suitably easy for someone with no prior experience at that type of general gameplay (e.g. I couldn't name a platforming roguelike that is suitable for someone with no experience at platformers).
I also don't think "the run starts easy and gets gradually harder" is something that you'd WANT to extend over a very large spectrum of difficulty, because it implies that even the players who can mostly handle the harder parts are constantly forced to re-play the easy parts every time they die. So this doesn't seem like a promising direction to me. If you're trying to practice a skill, the easier exercises should gradually be *removed* from your regimen as you grow past them.
On the contrary, the brutal, you-can-die-at-any-moment vibe keeps you on your toes all the time. DF has less of that and is more about your imagination and what you think you could do and how you would go about it (although you can still die in gruesome ways).
I'm not sure if this is too close to puzzler for you. But maybe Hardspace: Shipbreaker.
It's a game where you play a scrapper, basically, paid to deconstruct spaceships. It's not adrenaline pumping action, but it does encourage precision and speed at doing stuff like cutting pipes without hitting a fuel tank. The difficulty is mostly controlled by the ships you choose to deconstruct; later ships are larger and also more dangerous. But you can change it yourself significantly by choosing to play risky or safe.
In addition to looking for "difficulty" options, you may also want to keep an eye out for "assist mode". (This seems to be a different framing device that some games have adopted to try to reduce the stigma from making a game easier and explicitly consider people with different physical limitations, rather than just differences in training. I've noticed games with this framing seem to be more likely to offer explicit numerical sliders for things like "take less damage" or "increase the timing window for pressing this button".)
You should also maybe consider just playing an easy game, and then playing a slightly-harder game, etc. rather than looking for one game with a big range. Difficulty is multi-dimensional, and creating a game that offers robust challenges over a very wide range of difficulties is a serious engineering challenge. Lots of games only modify a couple simple variables, like damage, speed, or number of enemies, and often that's just not enough to scale a game between "5-year-old" and "e-sports champion".
Also, be aware that people who don't know of a game that actually meets some requested criteria will often recommend their favorite game anyway, because their favorite game is highly salient to them and it's just the first place their thoughts go. So be wary of recommendations for popular games that don't obviously cater to your criteria.
Thank you, that sounds like good advice. I know I can just Google it, but do you have specific recommendations for games that are inherently easy and/or games that have a good assist mode?
I personally tend to play pretty challenging games, and so am not a great source for easy recommendations.
I do feel bad leaving you with literally zero examples, so I guess here's two games that I enjoyed that are reputed to have pretty good assist modes:
Celeste (a platforming game)
CrossCode (an adventure game with fighting and puzzles)
But those games are both *quite hard* on default settings and I can't personally vouch for the assist modes because I haven't ever tried them. Honestly you are probably better off trusting Google over me.
Celeste and Hollow Knight work for me in exactly this way. Both are amazing and while really tough, also very accessible. While playing, I can directly monitor my brain getting better at them. I'd recommend looking at reviews / gameplay on Youtube to see if they are appealing to you.
But I think the 'train your brain' part won't have any effect beyond the games. Depending on your target, a more physical training (something with balance / coordination?) might be better...
I'm going to go in a different direction from other folks and recommend Mario Odyssey on Switch. It literally has an assist mode that is intended to make it possible for a small child to beat the game.
Mario Odyssey's difficulty scaling works differently from other games though - beating the game is fairly easy, but the game is filled with hundreds of bonus objectives that reward exploration, puzzle-solving, and technical skill. You can more or less set your own difficulty by deciding what "stretch goals" you want to go for, which you can shift around on a moment-by-moment basis.
The problem in #6 seems like something you could automate pretty easily. Get a friend with programming skills to make a quick program for you. You put all the names into a list, then when you want to highlight a comment, have a search algorithm check if the name's in the list.
The part that is important to automate is not checking the list (ctrl-F doesn't require much programming skill) but maintaining and updating the list. [If a thousand people want to be on the list, does Scott want to have to deal with a thousand emails? If people want restrictions like "well, anything I post about my sexuality is probably not meant to be front-page, but stuff about battleships is fine", will they feel like they can put those in the email too?]
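For what it's worth, the checking half really is trivial. Here's a minimal sketch (the file format, function names, and commenter names below are all made up for illustration), which mostly demonstrates that the code is the easy part and list maintenance is the hard part:

```python
# Minimal sketch of a "do not highlight" check. Assumes the opt-out list
# is kept as plain text, one commenter name per line (a made-up format).

def load_optouts(text):
    """Parse opt-out file contents into a set of lowercase names."""
    return {line.strip().lower() for line in text.splitlines() if line.strip()}

def may_highlight(commenter, optouts):
    """True if this commenter has not opted out of being highlighted."""
    return commenter.strip().lower() not in optouts

# Hypothetical list contents and names:
optouts = load_optouts("Alice\nbob\n")
print(may_highlight("Bob", optouts))    # Bob opted out
print(may_highlight("Carol", optouts))  # Carol did not
```

Note that this does nothing for the harder problems raised above: collecting and updating the list, or honoring conditional requests like "my battleship posts are fine, my personal posts aren't."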
RE: Book reviews, I'm good with posting them sans names for blinding purposes; does that mean if mine should make the cut I shouldn't link friends to it from outside the blog until after they've been unblinded?
I read the Vox piece and got pretty much the same impression from it that I got from the original malaria-vaccine story. When I posted it elsewhere, my prefatory comment was "this is really preliminary, only in mouse studies, but it looks interesting."
A lot of people around here care about housing and urbanism and related economic issues, and he's probably the best economist working on this stuff. He's got the same "just plainly explain economic issues in a simple, non-moralizing technocratic way" that I enjoy about ACX or Zvi's blog. I used to recommend his book to people, but he manages to get pretty much everything in there across in a 30-60 minute youtube lecture.
I can't believe they canceled a book about sitting by a pool and maybe catching something big. Besides, it's dedicated to his Dad! (I have happy memories of fishing with my dad.) Maybe it's the fishing the woke left objects to? (Cry or make a joke... my two options.) From what I can tell, the book was pulled because it has Eskimo fish, and "Eskimo" is no longer used. (This is news to me.) They would now be called Inuit fish. Is there some harm done by having Inuit children learn they were once called Eskimo? And what joy will be lost without the rhymes and illustrations? Can anyone make an argument for pulling the book? I wanted to close this rant with his words from "On Beyond Zebra" (also pulled):
“The places I took him!
I tried hard to tell
Young Conrad Cornelius o'Donald o'Dell
A few brand-new wonderful words he might spell.
I led him around and I tried hard to show
There are things beyond Z that most people don't know.
I took him past Zebra. As far as I could.
And I think, perhaps, maybe I did him some good...”
* Any time there's a big viral story like this we should consider whether it's a marketing stunt. The publisher withdrawing some Seuss books has driven sales of the others. Surely this wasn't a total surprise?
* It's fine for an author (or an author's heirs) to withdraw their work. We can question their motives, but it's odd to think they have an obligation to keep publishing things they no longer endorse.
* The creepy thing about this story is Amazon, Ebay, and others suddenly treating the no-longer-published material like kryptonite.
Re: marketing. IDK, personally I looked to buy the banned books online and found nothing... if they show up in the future I might buy one. Besides the books, I want nothing to do with the rest of the Seuss franchise... but I've mostly always been that way.
Re: copyright, yeah that's a bit of a weird law at the moment.
At some time we'll be able to get these books again.
I wish there was some push back from the left media, rather than what looks like complete buy in. ('Worth losing your job over Dr. Seuss?'... nah.)
More generally, I would think progressives would notice how important privacy and freedom of speech were for improving civil rights and be concerned about setting up mechanisms of censorship, but it doesn't seem to work that way.
Yeah unfortunately I think that is mostly correct. I wish more people spent time at the 'Yeah. Look." step. I'm also not sure the right's OMG response came after the acceptance by the left, or if the left's "yeah this is racist", came after the right's OMG. Everything fractures quite quickly along political lines.
I mean, probably fair. I glanced through one of the articles that circulated after this blew up and it had a few examples. They didn't look great to me. If I had a book and was reading it to my kids those pictures would probably lead me to stop and discuss, or maybe just stop. But I haven't read or heard of any of the books, or looked into it deeply.
Do you think the publisher is wrong, or do you just wish they'd spent more time making their case? Of course, that raises the question of to whom?
Doesn't seuss out, since eg. eBay was actually pulling listings and I think Amazon clamped down on resale too. Not stocking or reprinting is one thing, but the books are very definitively actively censored beyond "these just don't make money, sorry."
On the author/the author's heir piece: it's also worth discussing just how long copyright is nowadays. Seuss himself has been dead for 30 years and his books won't start entering the public domain until 2053. His heirs may feel like they ought not publish these books anymore, but clearly many people feel otherwise and would gladly keep publishing them. Especially when you're dealing with books with such cultural clout, the excruciatingly long copyrights rob society at large of a part of their own culture.
It is absolutely a marketing stunt. I looked at all of the books in question, and I would agree with *quietly* letting At the Zoo fall out of print. But you have to wonder: if the content is so embarrassing, why highlight it? Nobody would notice At the Zoo quietly becoming scarce. Either they cynically wanted to drum up sales of other books by intimating that those too could be withdrawn in the future (see: skyrocketing "Cat in the Hat" sales), or they've wholly bought into the self-flagellating 'anti-racist' culture (which is pretty endemic in publishing right now) and they earnestly thought they had to prostrate themselves before the masses--virtue signal, in essence.
The de-platforming of sales of these books is the most chilling though. It's also patently ridiculous--you can buy Mein Kampf, Protocols of the Elders of Zion, the SCUM Manifesto, and a bunch of other very objectionable content on every single one of those platforms. I don't want those books banished either, but in terms of "books that have actually caused harm", they are all far more dangerous than Dr Seuss. It's unfortunate that Dr Seuss drew a pair of African people who look like pot-bellied monkeys in one of his books, but it's hardly dangerous. Those other books have inspired actual, literal, violence.
Re: marketing - Yeah, I worry that media outlets, online retailers, and even smart people on here are playing into the hands of cynics.
An example from the 2010s - guerilla marketer Ryan Holiday ginned up publicity for an author known for off-color stories by instigating a minor culture war flare-up:
Can't we just use "If I Ran the Zoo" as a teaching moment? He's going all over the world collecting animals; he could have made a slightly different drawing for the Africans, but so what. There are historical stereotypes in all his images. What's racist now was the norm in 1947.
So funnily enough, when I first read the book, I didn't even flag the Africans image as inappropriate because my brain didn't parse that Seuss was depicting humans--I just thought he drew some funny chimpanzees. The drawing, I feel, falls into one of those categories where you only realize it's racist if you have the cultural knowledge to know that "black people are monkeys" is a racist trope, which children do not have at that age.
You could also use it as a teaching moment, sure. It would be an interesting discussion to have with maybe high school students about changing norms, and how the meanings of things can change over time (I doubt that Seuss was deliberately attempting to invoke a racist trope, but nowadays that drawing cannot be seen any other way, for instance).
Yeah sure, look, I don't see monkeys (that's your image). Seuss drew plenty of 'stereotypical' black people in cartoons, 'worse' than in this kids' book. But I'm going to push back on the idea that the image cannot be seen any other way. I can certainly see a trope; I also see how an American master drew Africans circa 1950. Personally I don't think Dr. Seuss had a racist bone in his body; like everyone, he's a product of his times.
As a technical matter, it's fine for a copyright holder to do whatever they want with their work. Even if their actions are meant to be symbolic or done because of broader societal pressure.
As a technical matter, it's fine for a baker to refuse to perform services for a gay couple. Even if it's being done as symbolic virtue signaling or to fuel an astroturfed legal effort.
The frustration isn't about the technical legal matter, it's about what kind of societal norms we have.
I don't think I understand the Seuss Estate's reasoning well enough to steelman it. But it's pretty clear that a lot of people don't want to live in a world where we assign a high value to not offending people, or apply current standards of morality to the past, or police their customers' ideas around race, or use contemporary issues as an opportunistic marketing trick.
Eskimo != Inuit. Yupiks & Aleuts have also been called that, and aren't Inuit. Nor do they like being falsely referred to as Inuit, which is what happens when people think they can just do a text-replace.
I think TGGP's point is that when it comes to Alaskan Natives, Inuit isn't a like-for-like term for Eskimo, and efforts by those trying to be more racially sensitive by using Inuit where they'd use Eskimo are actually introducing a new problem of calling natives by a term they don't identify with.
It's rather like saying "Instead of calling East Asian people [a word that I won't print here], we should instead call all of them Japanese." It's not accurate, insulting, and (at least in the Asian case, not sure about the Alaskan natives) groups together people that were historically conquered with the people that they were historically conquered by.
I'd personally go with "Alaska Native," since it seems to be what people generally prefer, but if you happen to know the specific native group, go with that. That is, "Eskimo" is never okay, "Inuit" is better but not great unless you know about the person you're referring to, and "Alaska Native" is a bit clunky but isn't going to insult anyone.
The Dr. Seuss estate has the legal right to unpublish the books. IMO, the books are old enough that they do not have the *moral* right to control access to them or restrict their distribution. They should have been in the public domain years ago. But that's just the same drum I keep banging about all older intellectual property.
Ebay (not Amazon, AFAIK?) refusing to sell used copies is downright creepy. That sounds more like being banned than just being taken out of print (which of course happens all the time).
I hate how no one keeps a sense of proportion about stuff like this. The supposedly hurtful and wrong material ranges from clearly racist caricatures (the apparently African bushmen in *If I Ran the Zoo*) to arguably racist depictions (the "Chinese man who eats with sticks" in *Mulberry Street*) to obviously harmless (maybe I just lack empathy, but I can't take seriously the idea that anyone would be hurt by the "Eskimo fish"). Why are these all being treated the same way?
Another thing that bothers me is how many people are like, "It doesn't matter because these weren't Dr. Seuss's greatest hits anyway. The minor works don't matter." No! That's not how literature works! If we're interested in Dr. Seuss's work (and we should be!), *all* of it is important. We need each part of the canon to interpret the others. What kind of demented Great Books purism would dismiss the first children's book by the biggest children's book author of the twentieth century as irrelevant?
Yes, all of that. McElligot's Pool is a classic; if the estate has watercolors for all the drawings they could republish it at twice the price. But it's not about money... I sorta get the feeling they (the Seuss estate) would like to pull 'Ran the Zoo' and thought, 'we can't cancel just one, there needs to be a number/pattern.'
I think this is related to how the name "Anasazi" is the Navajo word for "ancient enemies", and so the preferred term these days for the people largely displaced by the Navajo is "Puebloans" - even though it's a Spanish term, it's at least a respectful one.
In Huckleberry Finn, the whole *point* of the story is that we are meant to recognize how casually cruel the morality is that Huck himself pays lip service to, but we are also supposed to recognize that Huck himself knows on some level that much of it is wrong, and behaves much better.
It matters a lot when someone is using casually cruel terminology whether we are meant to recognize it as casually cruel and recoil from it, or whether we are meant to take it as a good and proper thing to do.
So I have a question. Maybe someone could play devil's advocate and explain what they think is so evil about having "A Chinese man who eats with sticks" in a children's book? "Racial stereotyping" isn't a valid answer, because these words don't explain anything. Explain, instead, why what you call "racial stereotyping" in this particular case is somehow bad. Same question about "Eskimo fish".
You know, Chesterton's fence and all, maybe there is something we are all missing?
Is there an illustration of the Chinese man? Because I saw an image from one of the other books depicting some Chinaman caricatures, so I wouldn't be surprised if that was the case.
Apparently these aren't the only Seuss books with objectionable material, but they never sold very well so the estate wouldn't lose much by removing them from publication.
Plenty. Just type into images.google.com "Chinese man eating with sticks Seuss", and you'll get it. If you look at the whole picture, you'll see that all humans in this story look about equally ridiculous. He's not any more ridiculous than the policemen, the Mayor, the Aldermen, the band, the pilot, or the people on the plane. Also, you'll notice that people inside each group - the Aldermen, the policemen, the band, the men dumping confetti from the plane - all have pretty much the same face. At least, the Chinese guy is unique.
So no, he didn't set out to caricature just the Chinese guy. It's a caricature of everyone mentioned in the poem.
Actually, almost everyone in that picture has the same face. It's the hair, the headwear, and what they are doing with their mouths that is different between the groups.
It's also worth pointing out that, at the time Seuss drew the image, this is roughly the dress Chinese immigrants would have worn. It's exaggerated (as are all Seussian characteristics), but for the time it was created it is accurate.
I mean, you've already stated the answer. Cartoons that exaggerate or stereotype racial/ethnic characteristics are considered offensive in and of themselves, even if the immediate context seems harmless. (To see why, you might consider Seuss's [anti-Japanese WWII cartoons](https://www.openculture.com/2014/08/dr-seuss-draws-racist-anti-japanese-cartoons-during-ww-ii.html). Normalizing such depictions has broader consequences.) In the original edition of the book, the Chinese character had yellow skin and a long pigtail (and was a "Chinaman," not a "Chinese man"). Later, Seuss went back and removed those features, but the slanted eyes and Chinese peasant costume remained. Meanwhile, everything described in the book is meant to be weird, outlandish, or fantastical, but the Chinese guy isn't really doing anything weird, he's just running along being Chinese. So he's also being presented as an exotic, alien figure.
It's clearly an exaggerated and demeaning caricature by modern standards. It's not wrong because of its content per se, it's wrong because of its style. And it was probably perfectly mainstream when it was drawn. The past is a foreign country and all that. But you wouldn't see it on a billboard in 2021, and if you did, the person who put it there would get canned. So it's understandable that someone made the decision to pull it. It's much less understandable (to me) why they chose to call attention to it, unless, as suggested elsewhere, the whole thing is a cynical cash grab.
So you're basically saying that having a character not of your race/ethnicity is generally supposed to be offensive, unless that character is not displaying any characteristics of his race/ethnicity. That doesn't answer my question, that just generalizes it. What is the alleged evil that's being done when the context is harmless as far as everyone can tell?
Anti-Japanese cartoons during WWII are a whole different thing, and I don't want to divert the thread into explaining what's different. (I find them really disgusting, but my reasons might not be the same as yours.)
Wow, thanks for the link to his political cartoons. (I don't suppose there was ever a published compilation? Not that I could afford it.)
Thread drift is fine. In my mind this was not done for any monetary gain on the part of the Seuss estate. It was a 'don't hurt me'* sign to the left.
Re: political cartoons: the whole war years were a very different time. My best understanding to date comes from Dan Carlin, Supernova in the East and Ghosts of the Ostfront.
Back to Seuss: he's always been political. "The Lorax" was a bit too preachy for me. And I hadn't read "The Butter Battle Book" till a few days ago. It's far from 'my' political north. But so what, he's an eef'ing American Master, and to understand him you need access to his entire body of work.
Hey, thanks all for the nice discussion. I'm going to dig out the frog pond this spring (summer) and call it McElligot's pool. It will need at least one sign if the fish are to find the way to it.
*Heather Heying, darkhorse podcast 'don't hurt me' is similar to, "I'm on your side".
I wouldn't describe those cartoons as racist. He could have written the same cartoons against the Confederates if he had been around then, and he did include Hitler in the first.
They are anti-Japanese, and the assumption that the Nisei were traitors seems to have been mistaken, but negative representations of the people you are fighting against are pretty normal.
There was a cartoon with a rather moderate caricature of Hitler and a grotesque anti-Japanese image for Japan. I'd describe that one, at least, as racist.
I'm not sure what 'racist' means anymore. I read/heard 'defenders' of Seuss lament the loss of some work, but still describe the images in "If I Ran the Zoo" as racist. I mean, they certainly portray different races. But to recap the story. (Imagine you are a young boy in 1950.)
Young Gerald McGrew, likes the zoo but then starts to dream how to make it better.... He releases all the current animals and then goes collecting around the world. There follows a parade of wacky animals,
machines and natives who are helping with the collecting. And we get cartoons of how a boy in 1950 might picture his native helpers. I still see hired natives on nature expeditions, though now dressed in jeans and T-shirts. So the problem must be how he drew and dressed them. The pictures could be somewhat different, but exotic dress is part of the story. In 1950 these were not racist images (well, using my 'fuzzy' definition of racist = insulting someone's race). And in the same vein, I read on NPR that Seuss, in high school, wrote a minstrel show, in which he starred in blackface! (This must have been ~1930; I mean, when was Al Jolson?..)
It took me a while to sort this out, but "is this racist?" splits into at least two questions. One is "was the creator a bad/malicious person?" and the other is "is this art a satisfying experience for the people depicted and/or does it put them at risk?".
Both of them are frequently hard to answer accurately most of the time.
What @Nancy said. To make it more explicit, it's not that it wasn't racist in 1950 (try to imagine a black parent reading this book to their kid, 1950 or not). It's that it wasn't particularly noticeable in 1950, when pretty much all of society was pretty deeply racist.
Now it's been 2 days since I asked the question, and nobody has come up with any claim of any actual harm being done by the image of "A Chinese man who eats with sticks". If no harm that anyone knows of is done, then the prohibition on content like this seems to have no moral ground. (Please correct me if I'm wrong on this.)
So we have a prohibition that appears to have no moral basis but is just a matter of faith for those people who believe in it. A lot of those people are not content to just avoid that kind of content but are extremely aggressive in trying to make sure that everyone else follows the same kind of prohibition.
I think that's what we call "religious extremism". Imagine if some Christians were trying to make sure that nobody would be able to publish or sell books with gay characters; imagine what kind of treatment they would have got. I think we're looking at the same situation, and these people deserve the same kind of treatment.
I mean, what harm is done by marketing a cola that isn't for the n-words? I saw the picture in question, said, roughly, "Yikes", and moved on. It's racist. Probably not noticeably so in the 1950s or 60s, but it's jarringly so now.
No, I'm not harmed. If someone had given me the book as a gift and I was reading it to my four-year-old and came across it by surprise, I still wouldn't be harmed, but I'd be annoyed and take time out to explain about racism and stereotypes in as accessible a way as I could manage.
But if it were my job to decide what does and doesn't get published in the Seuss estate, I can certainly see myself making the same call, and being a bit bewildered by the blowback.
As far as "these people", are you talking about the estate managers? What sort of treatment do you suggest? The right was big on cancelling things in the 80s and 90s, but I thought that had gone out of fashion over there.
Clearly, one of those things is not like the other. Your example is so offensive to black people that you didn't even type the offensive word in it - and if you had, you likely would have been banned from this board, and not many people (if any at all) would have argued that you didn't deserve it.
At the same time, nobody seems to know of anything that would offend a Chinese person about that picture of a Chinese man eating with chopsticks. (If you know what would be offensive to a Chinese person here, please tell.) Would you be offended if you saw a cartoon of a white man eating with a fork and wearing a baseball hat in an Asian book?
You imply that after reading that book you wouldn't have written to the publishers or to the sellers that it needs to be cancelled. I.e., you have a prohibition not everyone shares, but you're not trying to impose it on other people. So you're not one of "these people". You're not a religious extremist.
I don't know the details on what got the estate managers to cancel the books, or on what got its sellers to stop selling it. I assume it was "these people", but, like other posters suggested, something else might have been the cause. However, there's really no shortage of "these people", and their influence seems to be completely out of scale compared to their visible number - see, for example, this: https://abcnews.go.com/US/trader-joes-change-product-branding-petition-calls-racist/story?id=71868367 .
The way you treat religious extremists is to not allow them to have the influence they want, which means you have to either ignore or deflect their forays. Anything else will either let them run amok or let them pose as victims.
I still wouldn't say it harmed me, but it would surely offend me.
And your whole schtick about extremists is based on people who, as far as I can tell, aren't in evidence. The estate people did this, people you broadly agree with decided there was a canceling (now a secret one?), and then when you people made a fuss people like me had a look, shrugged, and said, "Yeah, that makes sense." You have a narrative that you clearly *really* want to apply here, and you're all bent out of shape over it, but it just doesn't match up very well.
Sorry about seemingly ignoring your reply. I've been genuinely puzzled about how to answer. For most modern readers, it's immediately obvious that there's just something *off* about that picture, but explaining exactly why stereotypes are bad to a skeptic is hard. That said, if anyone feels hurt or offended by such content, that does constitute harm. Someone being offended doesn't mean the material should immediately be burned regardless of other considerations, but it is something to weigh in the balance.
"That said, if anyone feels hurt or offended by such content, that does constitute harm."
Does that apply to other things that someone might be offended by. Should people in the U.S. avoid the term "socialism" for fear of offending migrants who grew up in Maoist China and think of it as the label for the nightmares of the Cultural Revolution and the famine? How about the term "racist," which offends me, given that it has been expanded, in practice, from "someone who hates or despises other people because of their race" to "anyone who holds any view related in some way to race that I disagree with." Or the term "Nazi," which has had a similar expansion.
Avoiding saying something because someone says it offends him, whether or not you believe there is any good reason for offense, looks to me like giving in to a heckler's veto.
As I already said, someone taking offense isn't dispositive or a veto. It's something to critically consider when making a final judgment. For example, while I agree that some people have an over-broad definition of racism, you have an absurdly narrow one, so I wouldn't take your offense too seriously in this case.
The example I saw about the sort of depiction in Mulberry St. is a story about a black kid who was told by a white kid that the black kid didn't need a costume for Halloween-- the implication was that being black was enough.
I don't know how wearing that sort of thing is-- it's way short of active, malevolent prejudice-- but it's a reminder that one is a perpetual outsider.
I've seen a suggestion that those pages could have been left out of subsequent editions.
So you think the harm is that the book is suggesting to kids that Chinese people are somehow different, exotic, in a class of their own, and thus prompting them to say cringeworthy things to this effect to people of different races than their own that they meet and thus upset them?
That's an explanation. Thank you!
I'm not sure if it's a very satisfying explanation, though. Even if you insulate kids from books that have that kind of pictures, they will soon enough see real-life people who look different, wear weird clothes, eat strange foods with strange utensils. And we all know that kids say the darndest things, and say these things they will.
But it's an explanation. Thank you! And if there's one explanation, maybe there are others.
I think it's partly a matter of how much stereotyping people grow up with.
When I was a kid, it was normal to portray people from Holland as wearing wooden shoes and standing next to windmills. I don't know that it did me or them any harm, but it didn't make me more intelligent about the world. Maybe a little more intelligent than if I had no idea that anyone anywhere had different customs than my normal way of doing things, but not nearly as good as showing that people with different customs have a lot more to them than a simplified (and probably out-of-date) look at their customs can show.
A minor annoyance: I run into non-Jews who have trouble understanding that there are Jews (like me, for example) who don't keep kosher.
Please note that people can get a bad impression of their own people and culture if the dominant culture portrays them as bad.
I don't have any particular attachment to five of the canceled books, but On Beyond Zebra! was one of my absolute favorite books as a child. And while I'm not going to defend that particular illustration in If I Ran the Zoo, I really don't get what was so offensive about OBZ. I guess it's the illustration of the "Nazzim of Bazzim," who is wearing vaguely Middle Eastern or Persian garb. You'd have to squint pretty hard and look out of the corner of your eye to find that offensive.
And while I was merely saddened to see it taken out of print, the de-listing of these titles by eBay and other companies is positively chilling.
And the really puzzling thing is this: It's not like removing or re-drawing the problematic illustrations was not an option. Assuming there's not some other objectionable illustration in OBZ that I can't find, you could just cut out two pages and remove one letter from the list of all the letters at the end.
And quietly making a few changes would have been much better for the brand than throwing themselves in the middle of the woke/anti-woke culture wars, I think.
But given the fact that they green-lit that dreadful Cat in the Hat movie, Seuss Enterprises doesn't seem like it's particularly competent at preserving the brand.
Alternate hypothesis that my wife pointed out: "Spazz is a letter I use to spell Spazzim, a beast that belongs to the Nazzim of Bazzim". "Spazz" is a homophone for "spaz", which is now regarded as an ableist slur in the UK, but which of course wasn't even a word when OBZ was written. Still, you could just remove those two pages.
No opinion on the book itself because I'm not familiar. That said, cancelled implies there was some sort of call to action on the part of the left to get the book pulled, which the publishers responded to. That simply isn't the case here. The publisher (or heirs? I'm not clear) pulled the books, and now (some) people are throwing a fit about it.
I believe that criticism of Seuss's books as racist had been going on for some years, although I don't know that it was targeted at those particular six.
I think these articles tend to support my point of view rather than yours. There's no drumbeat, or hashtag, or mob, but isolated criticism here and there. And then the company reaches out, several months ago, and talks with "teachers, academics and specialists in the field", and then, after an internal process, on Dr. Seuss's birthday, announces the change. All of this happened on the company's schedule, because the company wanted it to.
It does show that there has been some public criticism for years, but so far as I can see that's it. This doesn't look anything at all like cancellation as commonly understood, as in, say, the "fire Gina Carano" mob. At least IMO, YMMV.
She's the only example of someone famous. Do you actually know of any other examples of deaf-blind people at all? It seems strange to be suspicious of their possible intelligence if you don't know many individuals at all.
I think the claim was about sophistication, not intelligence. For a given level of intelligence, being deaf and blind would make it a lot harder to learn things.
There are examples of people with both hearing and sight but raised in extreme isolation and neglect who are barely functional. If Helen Keller really did reach such a high degree of sophistication, it's practically a miracle.
The law school graduate has 1% of her sight and can hear at a high pitch. It's extremely impressive that she was able to graduate from college, but she had already learned some language before she was impaired by an illness.
I was just biking by the creek yesterday in Austin, and saw a pair of people in front of me laughing, one using a cane and signing to the other, while the other held her arm. I'm not too far from a major school for the deaf, and have a friend who works as an interpreter there and says he often hangs out with deaf-blind people and communicates through tactile signing while they visually sign back.
I think these days, most deaf culture is conducted in sign languages of one sort or another, which may have the effect of isolating deaf-blind people from spoken language communities, but may provide them with more rewarding local cultural opportunities. That might be why Helen Keller-level fame is less common.
I take two grams of Metformin a day for anti-aging reasons. From the Washington Post: "researchers noticed that diabetics who took [metformin] outlived non-diabetics who did not. Moreover, metformin had shown an effect in separate studies against each of the three diseases [dementia, heart disease and cancer]"
Scott, please consider doing a metformin, much more than you wanted to know post. I think taking metformin has the potential to be a big win for rationalists.
If you don't mind sharing, how do you get your Metformin? Were you able to convince your doctor to prescribe it solely for anti-aging, and if so was it hard? Or do you get it via some grey-market route?
I've convinced two doctors to give it to me since my first left general practice. I said I wanted it to reduce the risk of heart disease and cancer. To convince a doctor (1) know the dosage you want, (2) know the side effects, and (3) tell the doctor you will pay directly so they don't need to justify the decision to insurance companies.
I have to take metformin so hell yeah I'd be very interested to go "this is not a wonder drug" from my own experiences in response to such a post. I'm a tiny bit disgruntled that people seem to be thinking this is some kind of wonder weight loss drug (not in my case it isn't and wasn't) or anti-aging or whatever drug. I wish I didn't have to take it. If you don't need it medically and are just treating it as "amateur messing around" well phooey.
Some people get upset stomachs, although mostly when they first try it. For this reason, you should start at a low dosage and slowly increase how much you take. There are some potentially very bad side effects, but I think these are very rare, although I'm not a medical doctor.
I got the usual gastrointestinal side-effects at the start but those settled down and I find it very tolerable. I *think* my doctor expected that it would also have the "make the pounds melt away" side effect but good luck with that and my metabolism, which seems to react to every "this will make you lose weight" effort by resetting to conserve, and indeed pack on, the pounds (I noticed my appetite increasing, for example). "This will burn off fat but it works by exploding you from the inside" recent post by Scott is about the only thing that sounds like a chemical solution for me, but um, exploding from the inside is a *bit* extreme.
I think the metformin studies show that for elderly people at risk of developing those particular disorders (cardiovascular, dementia, etc.) then it may indeed work to extend lifespan a few years. I don't think healthy young people in their 30s taking it will see any benefit.
The weight loss effects seem to be small, but that doesn't stop it being touted as "miracle weight loss drug" by people who are enthusiastic about off-label use. About a kilo or so over two years, more if you diet-and-exercise (but of course if you diet-and-exercise you'll lose weight even if you are just taking placebo pills).
The only weight loss strategy that works for me is *severe* calorie reduction and counting carbs. My poor doctor, who is perennially optimistic in the face of my jaundiced view (having been up and down with yo-yoing weight due to diet, off diet, on diet but got sick, on another diet, gave it up because my brain was at me, etc. over decades) prescribed me another drug to (gentlemen of a nervous disposition look away now) pee out excess sugar in my urine. She warned me that one side-effect would be more likely to get thrush infections, but also chirpily added that it could cause sudden and great weight loss.
"Oh, don't worry about *that*", I said.
"No, no!" she said. "A patient of mine lost so much weight he had to stop taking this!"
Well, guess what side effect I got? I *didn't* burn off the pounds, weight remained the same. I *did* get the recurring thrush infections 🤣
I don't know if it would help you, but I find that intermittent fasting, no food between 7PM and 11 AM, makes it easier to keep calories down, as well as being arguably good for you.
There are ways for book reviews to be self-deanonymizing to various degrees ("As a person named John Doe, I found Mary Roach's "Stiff" highly upsetting" etc.), so watch out for this if you end up going ahead with the "publish everything anonymously" plan.
On DNP ethics: I briefly considered going into more detail on safe-ish DNP dosing strategy and decided not to, primarily due to the fact that comments aren’t editable so if I accidentally say something wrong I can’t cleanly go back and fix it.
[Also this topic seems like an ACTUAL Copenhagen Interpretation of Ethics situation. If I give advice that makes an unsafe thing seem less dangerous so people do more of that unsafe thing, whose fault IS it when they hurt themselves doing it?]
I think there are actual ethical arguments against it - partly it's helping users take it more safely, but also it's removing a small barrier to use and normalizing it.
Are you responsible for anyone who hurts themselves because they didn't read your safe dosing protocol?
We all criticize how the CDC, FDA, and Fauci handled communication and approvals. But almost all of their mistakes can be traced to one cardinal sin: they see their role as influencer, not truth-teller.
They asked themselves if I tell the truth what are the consequences of that action? Will people buy masks so that healthcare workers don't have any?
If I approve this vaccine early and it's not safe or effective am I responsible for any deaths that may result?
I think we overvalue the consequences of our actions and undervalue the consequences of our inaction. Also it's hard to measure or have a good intuition about the second or third order effects.
Also I can't imagine any especially insightful advice about DNP besides you might die, do too little, take your temperature, have a buddy who can check on you, don't drink alcohol, take plenty of water, don't go to a club or anywhere where you could get too hot, don't exercise, I'm sure there are others.
And while DNP is dangerous it's probably not as dangerous as a drug like heroin where many of us liberals have accepted sharing harm reduction strategies as more beneficial than pretending like you can't reduce harm.
It seems like you want to be careful about harm reduction strategies for things most people have never heard of, though? Maybe there is a way to limit the audience.
I guess I don't understand how it's ok to talk about DNP, it's ok to talk about heroin, it's ok to talk about harm reduction from heroin use, but it's not ok to talk about harm reduction from DNP use.
It seems to me the most dangerous position to be in is to have knowledge of a drug, but lack harm reduction knowledge.
For medicine specifically, there is a very good reason 'first do no harm' is a guiding principle. For almost the entirety of human history, medicine was more harmful than not. It's a running joke throughout pre-modern literature that poor sick men survived and rich sick men died, because rich men could afford doctors who then killed them. The most common treatment for diseases, bloodletting, actively weakened sick individuals, and until penicillin entered clinical use in 1941 there was basically nothing a doctor could do about an infectious disease anyway except provide supportive care. Yet bloodletting persisted as a cure-all because it appeared to make patients temporarily better: a sick man who had just been bled would have a slower pulse, a lower temperature, and would 'sleep' peacefully, and the doctors of the time didn't have the understanding of the human body to know that these signs, while they may have appeared encouraging, were actually signs of harm.
Over history, the human cost of too-aggressive doctors is likely very high. The history of puerperal fever alone is nightmarish (in short: for millennia women gave birth without doctors, mostly successfully. Then doctors started getting involved in childbirth. At the time, nobody knew that you really ought to wash your hands, so doctors didn't wash their hands before examining a woman giving birth, thus introducing bacteria and making her sick. Deaths of women who had just given birth rose as high as 63 per 1,000 deliveries in some places)
We have now gotten very, very good at combating infectious diseases. That expertise does not translate into other areas of medicine. My impression is that we're still in miasma theory territory when it comes to metabolic diseases. Miasma theory isn't a bad explanation of the observable facts: sick people, especially very sick people, often smell, and people who have been in the same space as sick people often get sick even if they never touch the sick person or any of the sick person's things. And miasma theory accidentally led to some innovations that helped (improved sanitation in cities reduced disease even if it wasn't the smell of shit in the streets causing disease, and masks to avoid inhaling smells probably incidentally also helped avoid inhalation of airborne diseases) and some innovations that were worthless (burning incense and the like).
And that's the impression I get when reading about metabolic diseases. We have some theories that seem to explain observable facts fairly well some of the time, and some interventions that do indeed seem to work for a lot of cases, but we know much less about the intricacies of how the body regulates itself than we pretend. That's when we run into trouble. There is a very long history of weight loss drugs that turned out to be far, far more dangerous than being fat. Look at fen-phen: despite promising trial data, it caused heart problems so severe that it led to at least 50 deaths before it was withdrawn from the market. At least 175,000 claims alleging harm were filed against the pharmaceutical company who made it. Look at benfluorex in France: 2,000 people died of heart problems caused by it before it was withdrawn.
We are also missing a critical link in our logic: we do not know if making fat people thin actually makes fat people healthier. This has not been studied long-term! It's just an "everyone knows" assumption, even though we have experimental work (Jules Hirsch's experiments) and epidemiological data (the various "obesity paradoxes", studies about things such as the Dutch Hunger Winter, studies into weight cycling) that suggest this might not be true, or may only be partially true, or may be true only in some cases. And that's without getting into what appears to be corruption in the field of obesity research itself (namely, that many big players in the field run weight loss clinics or sit on the boards of pharmaceutical companies that sell weight loss drugs).
That makes me real uncomfortable when someone with Scott's clout starts tossing out "why not take this drug you can totally buy online that could kill you because I assume it probably has health benefits". Scott may be right! But back-of-the-envelope calculations aren't how we figure that out. And people, I think, take away the incorrect impression that we understand what's going on when we don't, and that they can make an informed decision when they can't.
How many lives would have been saved if we'd released the vaccines in July? I think it'd make 50 look tiny.
With benfluorex, how many QALYs did it save over the 33 years it was on the market? I'm not arguing it's safe, only that we're ignoring one side of the equation.
But the most important point is you read Scott's post and concluded "why not take this drug you can totally buy online that could kill you because I assume it probably has health benefits".
Is that really what you concluded from reading it? How much more likely are you to consider purchasing DNP after reading Scott's post than before?
I can’t imagine giving the same amount of attention to 100 book reviews. It seems likely that they’ll get attention based on superficial interest or the order they’re posted.
One possibility would be to post them all at once and ask for each volunteer judge to read them in a different order, based on a randomly generated list. You can keep going down the list until you get bored and stop judging.
The interactive fiction competition (ifcomp) seems like good prior work for an Internet competition based on judging.
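The randomized reading order proposed above is easy to implement. Here's a minimal sketch (the entry names and judge IDs are hypothetical): seeding a generator with the judge's ID gives each judge a stable personal order, so they can stop and resume later, while different judges still cover the entries in different orders.

```python
import random

def reading_order(entries, judge_id):
    """Return a per-judge shuffled copy of the entry list.

    Seeding the RNG with the judge's ID makes each judge's order
    reproducible (they can stop and pick up where they left off)
    while still differing between judges, so early-posted entries
    get no systematic attention advantage.
    """
    rng = random.Random(judge_id)
    order = list(entries)  # copy so the master list is untouched
    rng.shuffle(order)
    return order

# Hypothetical pool of 100 submitted reviews
entries = [f"review-{i:03d}" for i in range(1, 101)]

# Each judge reads from the top of their own list until they get bored
print(reading_order(entries, judge_id=1)[:5])
print(reading_order(entries, judge_id=2)[:5])
```

Even if no judge finishes their list, every entry lands near the top of *someone's* list, which is the point of the scheme.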
Re: the mass of book reviews - suppose there was a call for volunteers to read some subset of the entries and respond with yes or no to these questions: (1) Does this seem like a good fit for SSC/ACX? and (2) Is the writing of tolerable quality?
Would this meaningfully whittle down the number of entries to review in depth? I would volunteer to read a dozen or so with those questions in mind.
It’s a fair idea, but I’d bet that despite the unexpectedly large number of reviews, Scott will feel obliged to look at them all. And therefore will have done whatever whittling is necessary - he’ll be in a position to offer up a half dozen, a dozen, whatever, of the best for further chewing over by the commentariat.
I am looking to collect databases from real businesses and business-like entities, including those that have failed or otherwise become "past-tense". Read on if you or someone you know might have access to such things.
I'm a software engineer, specializing in data systems (i.e. a data engineer), with about 16 years in the industry under my belt. Something that's always frustrated me about the way we design and build systems is how knowledge fails to diffuse through the industry, because we don't _study_ what we do, and especially we don't study our failures.
As an example, the 2010s witnessed the full hype cycle (rise and fall) of "NoSQL" databases, such as MongoDB, Cassandra, DynamoDB, Riak, Aerospike, and many others. Did they turn out to be any good? Individually, in local circumstances, some engineers know the answer, or at least _an_ answer. Collectively, we have no idea. This knowledge only spreads as the primary sources write blog posts (mostly terrible), or move on to new jobs and tell stories (distorted by all sorts of biases). What we *should* be doing is studying what was actually built, out in the open, where everyone can see it if they're interested.
Additionally, I find it very difficult to teach other engineers about data systems, in a scalable way, without open example material. There are many online courses in SQL and things of that nature, but they always deal with trivially small, trivially clean data sets, without any of the richness or messiness of Real World Data. Many years ago, my own skill in dealing with data grew by leaps and bounds the instant I was exposed to real business data and asked to solve real business problems with it.
To these ends, I am looking to collect real business data sets. I use the term "business" loosely, in the same sense that engineers often say "business logic". Non-profits, community efforts, and personal side projects all count. The key thing I'm after is custom-built databases, meaning they either started from a blank MySQL/Postgres/MongoDB/etc. database or heavily customized an off-the-shelf system like Wordpress or Salesforce.
I recognize there are thorny issues here with respect to intellectual property and personal data privacy. I do not expect anyone to just hand over a database and wish me well. We would have to work something out, whether that's an NDA, or thorough anonymization, or whatever.
In any event, if you possess a data set like this, and *might* be willing to share it for research purposes, please reply here and we can figure out how to connect and discuss.
(This is the third time I've posted this. My plan is to re-post it periodically. If that runs afoul of any rules, written or unwritten, let me know and I can adjust.)
Addendum:
I recently became aware of this book https://fightchurnwithdata.com/ which might seem unrelated, but I mention it because it contains specific details about how to model measurements of subscription customers and their behavior. This is the _kind_ of thing I'd like to be able to produce from my research: specific guidance on how to model specific business concepts and processes.
Your ideas are intriguing to me and I wish to subscribe to your newsletter.
I think about this kind of problem a lot, and have worked with the kinds of data I think you’re talking about. I’ve spent most of my career working with messy bibliographic data, product information and consumer data. Unfortunately I don’t think I could legally share any examples with you.
I’m not a software engineer; my background is library science and taxonomy. But my software developer husband seems to be spending more and more of his time solving the kinds of data problems I used to tackle with my minimal technical background. His area is oncology data, and I doubt he could legally share any of it, either. We could both probably give you some examples of how data became messy to the point of failure, since that sort of sleuthing is something we’ve both spent a lot of time doing.
I'm advising a young company that's developing novel approaches to data analytics: organic database design powered by FPGA+NVMe hardware architecture. Any interest in hearing more and giving feedback?
One of the commenters in this thread mentioned something rather illuminating (for me):
> This sort of false sense of urgency is very common amongst conspiracy theories - the idea that the aliens are coming, that the Illuminati have control and are about to enslave us all, that the Storm is coming, that all police are secretly racist, that we're going to run out of food and all starve.
The pandemic is coming? The earthquake is coming? Climate change is coming? General AI is coming?
It seems like a sense of urgency makes memes spread faster and get discussed more, but it says little about whether an idea is worth investigating. It's a shallow heuristic. One should be somewhat skeptical that any meme optimized for replication is as urgent as it seems, but it may still be worth checking out.
That's a good point that it is not always obvious at a glance which scary ideas demanding urgent action are likely to be conspiracy theory dead ends instead of real urgent problems needing a solution.
I can't credit the whole thing; I'd 'snipped' it, but I don't know from where.
The Sense of an Ending
In his 1967 book The Sense of an Ending, the literary critic Frank Kermode argued that human beings try to give significance to our short lives in the long sweep of history by placing ourselves in the middle of a narrative arc. That arc typically traces civilization's fall from a golden age through a current stage of decadence to an impending apocalypse—one that may, through the bold efforts of the current generation, usher in a new age.
"The great majority of interpretations of Apocalypse assume that the End is pretty near," observed Kermode. But since the end never arrives, "the historical allegory is always having to be revised….And this is important. Apocalypse can be disconfirmed without being discredited. This is part of its extraordinary resilience."
The dire prophecies of the first Earth Day have been mostly proven wrong, but the prophets of an always-impending environmental apocalypse have not thereby been discredited. Auguries of imminent catastrophe remain resilient, even as the world of 2020 is in a much happier state than the Catastrophists of 1970 ever expected.
From a book about art fraud-- if you want someone to pay too much for fake art, invoke need, greed, and speed.
From seeing a man be completely convinced about the dangers of drugs from seeing a sideshow exhibit: sometimes a sufficiently shocking image will cause people's minds to shut down.
Does anyone have a good recommendation for books on the pre-American history of slavery? I've read several books on American slavery, but am looking to expand. It seems like every single book out there is about American slavery. I'm looking for something that is more apolitical, just the facts.
The Vikings were huge slave traders, a fact that frequently goes unmentioned (looking at you, Assassin's Creed). There’s a book I’d love to read called “Viking Age Trade: Silver, Slaves and Gotland”. It just came out a few months ago. It seems to be a kind of pricey niche publication, so I might see if I can get it through my local library.
Part of the issue with that is that the meaning of "slave" changes in different cultures and different contexts. Classical slavery in Greece and Rome was a (somewhat) different sort of phenomenon than the specific form of chattel slavery that came to make up a significant chunk of the New World economy.
I'm sure a sufficiently thoughtful book could draw useful comparisons and do some comparative work, but it's not surprising to me that many historians are reluctant to go deep in that direction.
I think a lot about how to create effective organizations. Let's say that we want to run a big organization, a cooperation, non-profit or government. There are a couple of failure modes.
One is lack of structure, where every decision is taken willy-nilly and nothing is written down. This trends towards nepotism and corruption, where decisions are based on flattery and who-knows-whom in the old boys' club.
Another is bureaucratization and proceduralism, where everyone does things right but no one does the right thing. This seems to be something that comes creeping into an organization and is very hard to stamp out without burning things to the ground and rebuilding. Ossified companies fall to creative destruction; governmental agencies can seemingly live on forever. One explanation often offered for bureaucratization is that organizations respond to violations and accidents by adding more procedure to avoid repeats in the future, inadvertently stiffening themselves. Another is that the middle managers and bureaucrats who were hired to perform important work tend to invent tasks for themselves during downtime, and these tasks don't go away when things get busier. I guess there are many possible just-so stories about the origins of bureaucratization. Does anyone know of more rigorous takes on this phenomenon?
A third failure mode is to have both: Strict regulations that can be skirted if you grease the right palm or know the management. This is akin to organizational anarcho-tyranny.
A fourth failure mode is to have a little bit of both: Just enough regulations to create extra work, just enough personal touch that the extra work doesn't matter and the management gets to decide in the end anyway.
What are the success modes? One seems to be to hire the right people and then leave them alone to do the job. Presumably they will create the level of bureaucracy needed to get the task done and no more. I'm not sure this is the secret sauce though. Is anyone aware of an example when "hire good people and leave them alone" failed?
Another success mode might be "meritocracy": simple, relevant and transparent rules/checklists/procedures/criteria where applicable, human judgement at the lowest level everywhere else. Reward results, but beware of Goodhart's law. (This is what I personally think is the best policy for governmental agencies.)
I would like to plot this on a two-by-two, but it seems like one axis becomes "good-vs-bad", which isn't that useful.
Slack is probably another important factor: lots of slack can create amazing results (Bell Labs?) but can also create an adult daycare. Little slack leaves little room for experimentation, but it's also my impression of how some very successful companies (e.g. Tesla?) are run.
Thoughts and comments on effective organizations? Was "Good to Great" correct all along? Am I missing something important?
(1) You specified a "big organization," but I wonder how important it is to get things right when the organization is still small. Your list of failure modes is spot on - could each be a case of the organization mis-judging its current size?
(2) If you've never read Steve Yegge's "platform rant," do check it out. It's a description of how Amazon navigates these issues. My feeling is that this is the most important writing on management in the 2010s, far more significant than any business-focused book.
I think there's a fallacy in assuming that a big organization automatically needs more bureaucracy. But I don't know what drives bureaucracy if not size. Maybe complexity of the product (in engineering)? Conway's law says that the organization will be roughly as complex as the product, which should drive bureaucracy? The best example of successful bureaucracy I can think of is SUBSAFE, which seems very complex and very bureaucratic, and hasn't lost a sub since 1963.
Thanks for the link! I'll read it and return if I have comments.
Bureaucracy seems to grow more in some environments than others. Organizations where the middle management are responsible for failures but do not reap rewards for success, for instance, seem to grow bureaucracy more than other types of organizations.
The mechanism is simple - If you get chastised for failure, you will create systems to prevent failure. Policy, procedure, rules, etc. If you are rewarded for success, you will want to massage the systems to best produce good results, which means adapting bad rules into good rules, and so on. If you take away the incentive to make the rules better, you end up with an organization that continually becomes more rigid.
On the other side, if you only get the rewards and never the punishment, then you become corrupt and chaotic.
"What are the success modes? One seems to be to hire the right people and then leave them alone to do the job. Presumably they will create the level of bureaucracy needed to get the task done and no more. I'm not sure this is the secret sauce though. Is anyone aware of an example when "hire good people and leave them alone" failed?"
I've been thinking about something from Albert Jay Nock. He claimed he did a good job of running a business by hiring good people and then "management by mumbling". When people came to him with a problem, he'd mumble until they went off and solved it.
I saw the article mentioned elsewhere and couldn't believe it. "Why do people make such a fuss about addiction? There's no such thing! I'm not addicted, I've just been taking heroin every night for five years!"
Yeah, sure, you're not addicted.
Well, as long as it doesn't ruin your life, you're merely dependent on it, not addicted to it.
The test is stopping it, which he seems to have done when he wanted to. Similarly, I'm not addicted to caffeine, although I consume a lot of it, and the evidence is that I can go without diet coke for two weeks of Pennsic.
I don't know if he did stop, though. What he seems to have done was increase his dosage, then stop taking that increase, and claim to have no bad effects. He doesn't say he completely stopped taking it, and he's using the "no addiction, I proved it" argument to bolster "so me taking it every night is not being addicted", plus he seems to like throwing some other hard drugs into the mix every now and again. I think I'd like to hear from his wife and family about their opinions of his state, not self-reporting.
Temporarily stopping completely doesn't prove it's not addictive/not addiction either; there are many smokers who stop for a while but then take up smoking again because they can't get on without nicotine.
Same here, I enjoy using podcasts to fill the "empty time" when I'm commuting or working on something non-semantic like cleaning or photo editing. Some that would be interesting to people here are Lex Fridman (interviews with tech experts), 80,000 Hours and Julia Galef's Rationally Speaking (interviews on rat-adjacent topics), Bayesian Conspiracy (ratsphere chitchat + sequences discussion) and Science & Futurism with Isaac Arthur (review of future technology and SF topics).
Scott has previously mentioned he doesn't like real-time communication.
Link for future readers: https://astralcodexten.substack.com/p/weyl-contra-me-on-technocracy
(I was originally going to comment "Where?", but then I realized that this is probably something that could be found with a quick search, so I did a quick search and found it.)
Podcasts are made by people who are too lazy to write, for people who are too lazy to read. They're utterly inferior to text in every single way. You can't ctrl-F your way through a podcast. You can't skim a podcast. You can't cite or look up citations on a podcast. You can't load a podcast on a lousy connection. You can't listen to a podcast in a crowded or noisy space. You're limited to the rate at which people speak, which is much slower than the rate at which you read. If a podcast host names a thing or a person or a place you don't know, you can't look it up. If a podcast host has an accent you don't understand or the sound quality is noisy, you just suck it up.
The only exceptions are when the audio medium is the *point* of the podcast, such as sleepcasts, meditation, DJ sets, or language learning.
This x 100. I hate the shift from writing/reading to podcasting-YouTubing/listening-watching. It’s the worst thing that has happened to online communication since 1985 (when I started participating). The comment above covers the general problems with it. I have the additional personal issue of some cognitive issues that make auditory processing much harder than visual. I’ve had to stop various online activities over the years as things shifted from text to voice.
Wow, totally disagree. I think that audio is a much better means of communication than text, not just because the voice is more versatile than the written word [in terms of pitch, tension, pauses/spacing, etc.], but also because I think the "improvisational" style of a conversation gives much more insight into the nature of the speaker than a polished piece does for an author. There's also the fact that nearly everyone has had more practice speaking than writing.
Do you still think they're utterly inferior to text in every single way?
> the voice is more versatile than the written word [in terms of pitch, tension, pauses/spacing, etc]
Depends on the podcast. Obviously if you're talking about art performances, recitations, singing, that sort of thing... non-verbal information is crucial. As I said, whenever the point of the podcast is the medium itself, obviously audio is better. For everything else, where the podcaster is trying to communicate some piece of information, opinion, etc. why would the non-verbal info matter? I guess they might bring some kind of stylistic point but that's largely outweighed by all the practical drawbacks (and you can also convey stylistic points into writing).
> I think that audio is a much better means of communication by text not just because the voice is more versatile than the written word [in terms of pitch, tension, pauses/spacing, etc], but also because I think the "improvisational" style of a conversation gives much more insight into the nature of the speaker than a polished piece does for an author.
Why would you care about the personal information of a speaker you don't know and who doesn't know you? It's not a conversation, it's a podcast. You're not having a social interaction.
>Do you still think they're utterly inferior to text in every single way?
Yes. I never listen to any podcast - instead, I read the transcript if available, and if it isn't, I shrug and move on.
There are many different types of podcasts, and not all are pure info dumps. Some are just people having interesting conversations, in a way that doesn't work with text. And a lot of the appeal for others is the way the hosts interact, despite your assertion that it doesn't matter. It's not a matter of laziness, it's something you fundamentally can't have in text. You might not be interested in those, but then it sounds like you're just saying things you don't like are inferior.
And podcasts do have some inherent advantages, like the fact that you can listen to them while doing things like chores or commuting (I don't know why you think you can't listen to them in crowded or noisy places, headphones are a thing).
These are all great points!
1. Non-verbal communication is important even if you only care about informational exchange and not about the aesthetic quality of the piece. This is because vocal realization allows for a lot more subtle emphasis than I can do with writing. This is far more than a stylistic point -- it's often essential for understanding (perhaps this is why our voices evolved to be so versatile in the first place). This is why it's valuable e.g., to listen to an audiobook of an author reading their own work, even when you're listening purely for informational content: you might learn something!
[strictly I think this point alone should prove that speech is not utterly inferior to text in every single way. Speech is higher-fi than text, and this is a big reason why you have to consume it more slowly. This comes with advantages and drawbacks].
2. I care about the personal information of the speaker because I think it's relevant to what they're saying. How does the speaker treat themselves and others? Do they really believe what they are saying? Are they present with/trying to understand their interlocutor or are they reciting points from memory and not really listening? These and others will affect how I take their point.
3. Speech, as you acknowledge, is better than text for conversation (and anyone who's ever texted recognizes this). I realize this doesn't bear on your point about broadcasts, but it does bear on the utter inferiority of the medium. I.e. if you want to consume a conversation as opposed to a treatise or essay, speech will be a better medium.
What do you think about these points?
I think we're talking past each other because we have different ideas of what a podcast is about, and until we get into specifics we're probably not going to find common ground. Your points are probably valid for the kind of podcast you listen to, whose aesthetic value is visibly linked to the audio medium - such as audiobook readings. Again, I'm not disputing that there's a place for such podcasts whose very point lies in the medium.
>I think this point alone should prove that speech is not utterly inferior to text in every single way.
I didn't say speech was inferior to text, I said podcasts are inferior to their written equivalent. Obviously, like millions of people at this moment, I'd rather have a live conversation with friends than text them.
>if you want to consume a conversation as opposed to a treatise or essay, speech will be a better medium.
I think here lies the crux of our differences - I don't see the point of listening to a conversation I'm not a part of. The veneer of interactivity and casualness is fake, since I cannot in fact interact with any of the participants. If someone had something interesting for me to hear, I'd be better served if they said it in a format that's easy to get through, skim and re-use (i.e. text). But again, I think we have to get into specifics if we want this discussion to get anywhere.
>Your points are probably valid for the kind of podcast you listen to, whose aesthetic value is visibly linked to the audio medium - such as audiobook readings.
I agree that we're probably talking past each other (hard to communicate over text, I suppose ;)), because for this point I explicitly brought up the nonfiction audiobook as an example where the audio format was not necessary to the message, but still aided "purely informational" understanding (e.g. by including richer phrasing, emphasis, and expression).
A slightly stronger emphasis, e.g., doesn't just sound pretty, it can change the whole meaning of a sentence.
[Perhaps some writing system could seamlessly include these features, e.g. varied spacing between words, subtle levels of emphasis, slight dialectal/accent differences, etc., to the fidelity that speech offers - but none does, even though doing so would clear up ambiguity.]
>I didn't say speech was inferior to text, I said podcasts are inferior to their written equivalent.
Interaction is information, and interaction is done worse over text. Pycea's comment is relevant here: " a lot of the appeal for others is the way the hosts interact, despite your assertion that it doesn't matter. It's not a matter of laziness, it's something you fundamentally can't have in text."
Here's a list of the 10 most highly paid podcast hosts: https://www.celebritynetworth.com/articles/entertainment-articles/the-10-highest-paid-podcasts-and-podcastsers-2020/
9 of them host a show in conversation or interview format. Insofar as speech is a better format for conversation than text is, and many popular podcasts are conversation-based, many popular podcasts are not better off as writeups.
>I think here lies the crux of our differences - I don't see the point of listening to a conversation I'm not a part of. The veneer of interactivity and casualness is fake, since I cannot in fact interact with any of the participants.
That's just, like, your opinion, man.
More seriously: Listening is a social interaction (as is reading fwiw). It is interaction because it requires active effort from both parties -- it is as much work to listen well as it is to speak well.
I also think that this rebuttal misses the (central) point stated above that social interactions contain important information about the content of the argument, and so insofar as conversations are higher-fi wrt interaction specifically, they trump text in this regard.
Do these responses fairly represent and adequately address your concerns?
Insight into the nature of the speaker can be a negative as well as a positive. Writers like The Last Psychiatrist specifically hid as many details about themselves as they could, so that the arguments would speak for themselves and there would be no insight into the nature of the author. It wouldn't surprise me if Scott preferred being evaluated for his ideas rather than his personal idiosyncrasies.
imo nobody can avoid telling who he is. E.g. The Last Psychiatrist by deliberately hiding personal details tells me a lot about what he is like [private, maybe a bit neurotic, straining to be objective], and what to expect from his blog!
Podcasts are good in situations where you can't read. Driving, cleaning, cooking, etc.
The same situations where people traditionally listen to radio.
Hard disagree. My brain is very well evolved to follow conversation, and much less so a lengthy text. While I prefer text for study and review of dense material, a podcast is a much easier way to absorb adequate introductory information about a topic—which is often all that's required.
If a crowded environment or slow speaker is your issue: buy noise canceling headphones and speed up playback rate to 2x or more.
If they say something you don't know about: feel free to pause and look it up.
If the quality is so poor you can't follow it: don't listen to that podcast. If a text is so badly written that I can't make sense of it, I don't read it. Why would I not follow the same principle in audio?
>My brain is very well evolved to follow conversation, and much less so a lengthy text.
No, it's not. You learn to speak just like you learn to read.
>If a crowded environment or slow speaker is your issue: buy noise canceling headphones and speed up playback rate to 2x or more.
Yes, these are clumsy solutions to a problem that shouldn't exist. Kind of like Microsoft praising the quality of its defragmenter when Linux filesystems don't fragment at all.
>If the quality is so poor you can't follow it: don't listen to that podcast.
The point is that someone could have something interesting to say but just be bad at setting up audio (which is a separate skill and even a job for some people). If they had stuck to writing text this wouldn't have come up.
This is purely anecdotal, but when it comes to complex pieces I can follow the thread much more easily if I'm getting the information via audio vs. text. If a book contains numerous digressions and side-beats that branch off from a main topic, I find myself having to go back and reread prior sections pretty often to stay on track, whereas if I'm listening to it I can follow it effortlessly.
I agree that if I'm getting a focused info-dump on a topic I'd rather have text than a podcast, but for more digressive works - audio is better.
>No, it's not. You learn to speak just like you learn to read.
We learn to speak via osmosis as soon as we are born and have to learn to read by explicit instruction. We have spoken for maybe 1 million years and been writing for maybe 10,000.
> The point is that someone could have something interesting to say but just be bad at setting up audio (which is a separate skill and even a job for some people).
So is writing interesting yet easy-to-follow blog posts. People like Scott, for whom that skill comes naturally, are quite rare.
You CAN, however, listen to a podcast while doing things which otherwise occupy your attention, such as jogging, cleaning, or driving. They are very handy at those times.
"Podcasts are made by people who are too lazy to write, for people who are too lazy to read. They're utterly inferior to text in every single way"
There are reasons to hate podcasts and reasons to like them. I suggest that you're assuming that other people are so much like you, fundamentally, that there's no legitimate reason for anyone to make podcasts or listen to them.
I think the problem is people mixing up their reasons for listening. For establishing a parasocial relationship, or otherwise providing a simulation of the social experience, podcasts are superior. For the conveyance of information, text beats audio in almost every way. But in the case of a lot of popular podcasts (i.e. Rogan, Harris, Fridman, etc.) people are doing more of the former than the latter (and are barely aware of it), and therefore enjoying the experience more than that of reading a book, but actually learning and retaining a comparatively small amount of information.
I challenge anyone to listen to one of these three hour podcasts and actually measure what was retained an hour later, and then do the same with a book.
You're only addressing the 'demand side' of podcast production (and I mostly agree with you).
But the _supply side_ is different. I don't like listening to (audio-only) podcasts but I can watch some of them and I used to really like Joe Rogan when he was posting his shows to YouTube. It's probably _much_ easier to get him and his guests to _talk_ than it would be to have them write, e.g. a blog post.
I don't know whether it would be worth the trouble for you, but you could ping people about specific comments you want to quote.
I worry doing that and waiting for responses would be enough of a trivial inconvenience that I would stop doing those threads. I might try it once or twice, but I'm not optimistic.
Perhaps we could have a norm that community members put "Please don't quote me" in their bios? (1) Click the commenter's avatar, (2) Check whether it has the request, (3) Proceed with confidence.
That'd be smooth enough. There's only 5-8 comments per highlight post, so not too much workload for Scott, and it wouldn't bog down the comments section with a bunch of disclaimers, or create some meta-disincentive to post because you don't want to highlight the fact that you don't want your post highlighted.
Why can't people just prepend their comments with "don't quote" or "quote anonymously"?
It's a little extra work for them but it's less work for Scott and they're commenting on a public forum and then not wanting to be quoted.
I understand someone making a comment or two that is maybe a little more revealing than they'd like so they prepend it with request not to quote.
But if your personal policy is you're so scared of being quoted you don't want any of your comments highlighted, maybe commenting online isn't for you, because there are plenty of other people who won't respect your wishes.
I think it's similar to having a blog and not wanting anyone to link to it. Sure I can understand that for the occasional blog post, but if your policy is ask everyone to never link to your blog maybe you should make your blog private.
If I didn't want to be quoted, I think I might still feel like it was arrogant to assume what I was saying was quote-worthy.
Most quoted posts seem like "effort posts" that the person wanted to share: either correcting misinformation or just sharing a big infodump. It doesn't seem like there's much risk that a casual comment like the one I'm currently writing will end up there, and I don't think it's arrogant to assume that a correction or infodump might be worth sharing.
That's a really good point; I have to admit I didn't think the question through to that extent.
I don't understand how someone, even if mistaken about whether someone else's comment is quote-worthy, could be arrogant. What's arrogant about quoting someone else?
I think you didn't understand me. I would be reluctant to label my own comment as "do not quote" because it seems to assume that what I say is quote-worthy. Even if it is, it seems arrogant for me to *assume* it is.
But kaminiwa's observation makes the whole topic sort of moot.
What about only pinging people whose user names appear to be their real names?
That's a very small proportion of commenters.
that's the point
I think an example that started the discussion was this comment: https://astralcodexten.substack.com/p/highlights-from-the-comments-on-class#comment-1427102 -- that doesn't obviously appear to be someone's real name, but they still didn't want to be re-broadcast. (In that case, it seemed to be specific to the comment, not the author.)
I'd just like to throw into the conversation as someone who's recently been quoted in a Comments Highlights post that it absolutely made my day. Of course there are people who feel negative about it and that's worth considering, but I don't want positive experiences to be lost in the conversation.
Same, I was thrilled to get quoted. Given that the comments are, after all, public, I think we ought to lean towards quoting-by-default, with the expectation that if you don't want to be quoted you preface accordingly or put it in your profile.
Do this but interpret no response within n days as a yes?
I really want an opt-in system. No response should be categorized as a no.
This might be a bit clunky, but there's also the option to stick a disclaimer at the end of posts like "If you don't want your comment highlighted in a followup thread, please mention that". Though if you don't know ahead of time which posts you'll highlight comments for, putting it at the end of every one may be weird.
If Substack is anything like news websites, people use fake emails to sign up most of the time, and most people won't be notified, losing Scott a significant percentage of comments on average.
Maybe only do this for particularly significant comments, like insider information, people risking their jobs?
Why not just reply in a public comment asking if they're willing to be quoted? The commenter could always email Scott directly if they don't want to answer in public.
Scott would have to wait until they respond or set some kind of opt out time threshold, which probably amounts to as much if not more of a trivial inconvenience than just pinging the people in question (which Scott indicated was likely already enough of a threshold that it would make him stop doing these posts altogether).
I'm a bit confused about the warm-bloodedness thing. I was trying to look this up actually because I've just been kind of confused generally as to what the distinction is supposed to be between endotherms and homeotherms-but-not-endotherms. (I am not any sort of biologist, if that wasn't clear.) Like, OK, poikilothermy seems obviously distinct, but once you're within homeotherms, I was like, I don't really see an obvious distinction between these different methods of maintaining temperature that should distinguish some of them as "endo"? Especially considering that like a main method of thermogenesis is shivering, which -- as a muscle-based mechanism -- seems to be getting pretty close to a behavioral mechanism, you know?
It's the shivering thing that kicked off me looking this up, really. Because I keep reading that non-shivering thermogenesis, which is based on this uncoupling, happens only in brown fat cells, and that adults don't have much of these?? So they get all their heat from shivering?? And that's just like... that can't be right.
IDK, I am basically clicking around on Wikipedia here, so some parts contradict other parts. Like, oh, maybe all cells have UCP1, just brown fat cells have *more* of it. But other parts say no it's only brown fat cells. Or maybe adults have more brown fat cells than thought. I am confused!
Because like yeah generating heat from uncoupling, that's pretty distinct, much more so than shivering! And it also, y'know, matches everyday experience, where you don't start shivering the instant you're colder than is comfortable. But it's really confusing to keep reading that adult humans don't have much in the way of non-shivering thermogenesis going on. Like, huh? What's up with that statement? Where does that come from? Or is there some way I'm missing that it could actually be true??
I've been very confused about this as well. My guess has been that when they told me as a kid that there were cold-blooded and warm-blooded animals, they just didn't understand the full continuum of tunas and dinosaurs and self-warming plants and so on.
Adults have various other thermo-regulatory mechanisms short of shivering, like increasing/decreasing surface blood flow, changing body position to reduce surface area, or putting on a jacket.
But a lizardman can do those things; it doesn't make it warm-blooded.
Yup, that's exactly my point. Note I'm only considering the distinction among homeotherms, I'm not considering poikilotherms.
From the wiki: "Such internally generated heat is mainly an incidental product of the animal's routine metabolism, but under conditions of excessive cold or low activity an endotherm might apply special mechanisms adapted specifically to heat production."
The idea is that every cell generates heat constantly, as a byproduct of its primary function. Only for shivering and brown fat cells is generating heat the primary function.
The commentary from Cerastes seems to buttress this. They point out that cold-blooded animals can usually slow down their metabolisms for long periods of time. This would explain why homeotherms need to have their metabolism running on "high" constantly in order to generate enough heat to maintain body temperature.
As a general rule, if you are confused because "these biological categories don't really seem so distinct the more I look into them", well, you probably aren't actually confused. The world is extremely fuzzy, and terminology like this is typically more useful for organizing lectures to undergrads than it is for mapping onto the world. I'm an organismal biologist and I had to look up the terms you mentioned to make sure there wasn't some important nuance I had forgotten about, but I don't really think there is.
TL;DR: "warm-bloodedness" is a matter of degree, not kind, which probably explains your confusion.
My understanding is that there are two main distinctions to make. (I'm not familiar with the English terminology, so sorry if the wording seems off - these are my own translations from Swedish.) 1) Organisms that can generate warmth, and those that cannot. (Endothermal - can create warmth through internal processes - and ectothermal - need to rely on external sources such as sunlight.) 2) Organisms that keep a constant temperature regardless of their surroundings, and those that keep the same temperature as the surroundings. (Homeothermal - maintain a constant body temperature - and poikilothermal - same temperature as surroundings.) (Endothermal organisms usually have a faster growth rate, but are less energy-efficient.)
But of course these are categories made by man for man to make predictions. Since biology famously is messy, we need to fill in the gap with the term "mesothermal", referring to intermediate states between endo- and ectothermal - where the body temperature is allowed to vary within some interval.
Also, big ectothermal organisms are "gigantothermal" through sheer size. This means that their body temperature does not fluctuate quite as fast as smaller animals'. A bigger mass takes longer to cool down or warm up, so a big animal - for example a big, ectothermal dinosaur - is like a coastal climate: kept closer to the mean temperature. (Weird analogy perhaps, but you get my point hopefully.)
And oh, bats and bears (for example) are heterothermal - they can vary their body temperature, i.e. have some body parts colder than others. This is related to dormancy, which makes them more energy-efficient. Especially bats.
You might not be able to read this wonderfully informative piece on dinosaur body-temperature, but most of the references at the bottom of the page are in english: http://www.djur.cob.lu.se/Djurartiklar/Info/dinosaurier.html
One more thing I thought of: Some animals, like frogs and snakes, can survive freezing temperatures due to glucose that protects their cells from actually freezing. I guess this can be considered a form of "endothermal" protection for poikilothermal organisms...
I would have thought that all organisms, including the ones we classify as cold-blooded, can generate warmth through internal processes. How can you convert food into motion without generating some heat? Wouldn't the more appropriate distinction be between organisms whose biology is designed to generate heat when doing so is useful and those whose biology only generates heat as a side effect of doing other useful things, such as moving?
You're probably right. And even the distinction I was going for should have been between animals relying on internal processes vs. those relying on external sources to regulate body temperature.
Also, the bats-and-bears example is more like "temporal heterothermy" - variation over time. Others, like great white sharks, are regionally heterothermal, i.e. variation throughout the body...
In theory, making friends online should be easy. Instead of luck and circumstance of the physical world, the virtual world should give us access to the few most compatible friend-candidates out of billions.
And yet, I still default to the physical world for finding new friends.
Question 1: where, online, have you found "true" friendship and how?
Question 2: I know that some have tried (and failed) to create a social network for the non-masses. Do you think there is opportunity for a social network for people with long attention spans that rewards the building of deep relationships? If yes, do you think it should be an open network (like Reddit), or more akin to a dating/matching app that filters the billions down to the most compatible? Ex. If love of Nietzsche is non-negotiable, would be easier to filter by that first.
Maybe friendship is more about somewhat non-compatible people finding a connection, perhaps after being thrown together against their will? So searching online for the perfectly compatible person could be exactly the wrong way to find friends.
For most people, searching online is certainly the best way to find people to talk to about quantum theory, muppet porn, or whatever their niche interest is. But their best friend-to-be might well be an annoying neighbour. (If love of Nietzsche is really non-negotiable, that might change things, I suppose...)
Perhaps finding friends is like finding romance - there is the "similar but not too similar" thing going on there as well.
If "similar but not too similar" in love is due to the selection for likeness - passing on as much of the genome as possible - versus the selection for avoidance of incest, then this should not apply to friendship at all.
Reading Henrich's The Weirdest People in the World right now.
Could kin-intensive cultures promote friendship between kin/those most alike, since trust will be mediated by kinship? And conversely, could WEIRD cultures promote friendship through more reciprocal-altruism-style mechanisms? Like recognition of certain norms concerning "neutrality" &c.
...for Weird: perhaps the opportunity to cooperate and engage in any activity is the base of friendship? Mutual gain?
And also, I would connect WEIRD culture to thymos and prestige-based hierarchies. WEIRD people might be looking for "valuable" friends, with useful skills? While kin-intensive cultures would be more prone to rely on dominance hierarchies, but - I guess - mainly within the kin network.
I would say that this is the romantic definition of friendship which I have subscribed to and practiced for most of my life. But is it optimal?
The definition of friendship compatibility doesn't preclude a variety of personality types. For example, one can seek a chess-playing philosopher who loves Rick and Morty. The results for such a query may still include an annoying neighbor (just not your annoying neighbor).
Old friendships are often defined by a history of shared experience. Maybe that's why new friendships have such high barriers to entry. Compatibility matching could provide a functional substitute for shared experience.
I realize that clinical terms like "compatibility matching" sound antithetical to the magic of friendships. But that can be fixed with some marketing.
IMO, Friends are people who stick with you through adversity (willingly or by coincidence).
Adversity makes people emotionally vulnerable, revealing more of them than they'd like. Accordingly, people who hang around after that can usually be assumed to like the 'real you', warts and all. My fastest-progressing friendships are all traceable to times when I and some strangers had to band together through a sudden and difficult situation.
It's difficult for online acquaintances to end up in such a situation.
Exception that proves the rule: the MMORPG friends I made during middle and high school felt quite real. But that's because it was a place where I could be myself, during a particularly bad time in school. The adversity made it real.
Muppet porn you say?
(epistemic status: what seemingly worked for me)
You need the community to be pretty small such that it is possible to pretty consistently recognize usernames.
Then there need to be activities, e.g. games; people don't tend to consider discussion partners friends.
A surprisingly hard step is to go from group chats to private chats; I don't have a good solution here.
1. I haven't found that online. I have people whose posts I like to read. People I feel fondly towards. People who I banter with, but no friends. None I would reach out to if I needed something (financial, emotional).
My experience is that friendship is best formed through cooperative activity towards a shared goal. If I wanted to create an online friendship generator, it would probably be more like a game than a social network. Or even more powerful would be something to connect people with compatible skills to work on real problems matched to their interests.
This seems right. The only online friends I ever became close enough that I genuinely thought of them as friends, asked some for favors, and even met one in person, came from my time working as a volunteer on a cooperative online project. (It was the Open Directory Project--a volunteer-edited online directory of websites that was useful back when search engines kind of sucked, but became obsolete once Google got good enough.) I still feel wistful about those days--I don't know if I've ever felt so much a part of a community in my life.
"Something to connect people with compatible skills to work on real problems matched to their interests"--that sounds like it would be great for several reasons, even if it didn't succeed at creating deep friendships.
It also sounds like it would be similar to an employment agency. I haven't done a deep dive on employment agencies but my current impression is that people are spending a lot of money trying to do a good job of matching employers and employees and the results are pretty disheartening. So I suspect your thing would be hard to do well.
I would be happy to learn that I am wrong!
Similar experience - my only real friendships were with the fellow members of a mod team I was on. It took both mutual interests in the topic (why we were mods there in the first place) and a forced structure/duty that kept us around at regular hours and forced near-constant discussions to push it into the friendship zone.
Exactly! IRL it's easiest to make friends by pursuing a hobby, playing a sport, or joining a group with common interests. The internet is no different - if you start sharing your interests and ideas with the world, friends (and/or potential love interests) will come to you.
My closest friend, not counting wife and kids, was someone I got to know through the SCA, as are several other reasonably close friends.
Dunno about “true” friendship but I very much enjoy the Thursday Zoom happy hours on persuasion.community. I don’t have a ton of natural affinity with the people in my industry, especially wrt politics, but Persuasion is dedicated to open conversations and the group self-polices well.
Have found. Not willing to really explain. In all cases, I met people in RL later, repeatedly. In one case, trans continental flights were needed for that.
Small communities work best. I think live chats (IRC, or others like it) help with building connections with people. You see them often (however often you get on the chat), and there is a small enough user-base that you can spend more time (even unconsciously) on knowing those people. The one I most use is probably about 20 people, though not all are active all the time. This worked relatively well because of some underlying interest (programming), but I think forming small communities can be harder if you focus too hard on a single topic. Forming a community around Nietzsche might be fun, but as people grow bored or want to talk about other subjects they either make the community not solely about Nietzsche or leave to go join communities about... Hume or something.
Larger communities like many Discord servers can let you build up friendships, but it is way harder since everyone is interacting with a lot of people on there. While they may remember you and be friendly, they generally become at most "nice person to talk to".
I don't really have good ideas for a social media platform that encourages this. As mentioned in my first paragraph, I think focusing too narrowly on topics can harm this. Being able to talk about a wide range really helps get a greater view of people.
I have a suspicion that we don’t really know what to filter by in order to find highly compatible friends. There are some obvious life experience and intellectual interest candidates that maybe take you 80% of the way but a lot of the compatibility potential is in the last 20% and it’s murky. This makes serendipity a lot more important than intentionality.
I mostly haven’t, but to start out, you need a small enough community so that the same people keep showing up and you actually remember them. And the problem with that is there’s often not as much going on.
Also, when people are serious about finding someone (dating sites), the first thing they do is filter by geography. So it seems like online communities that are centered on a community or region would have an edge, even if they don’t strictly limit by geography? But there needs to be a common interest as well or you get NextDoor.
Our family has one friend who we got to know on WoW who is close enough so that my wife and our two adult kids flew up to his wedding — I had a previous commitment or would have gone.
Conjecture: making friends involves too much going on "under the hood" to be explicitly modeled in that way. In addition to the obvious surface-level exchange there's communication happening that we're largely unaware of, e.g. body language, intonation, dynamic word selection ("I saw it coming", "oh, I see what you mean"), the effect of location and ambience, even possibly pheromones. Plus the synergistic effect of all of those things together. When I think of my closest friends, yes there are shared interests and whatnot but something just clicked with them in a way that it didn't with lots of less close friends who are on paper arguably "better" friends. I suspect that trying to filter too much on conscious-level stuff like "must love Friedrich Wilhelm" is actually putting the cart before the horse.
Friendship requires trust and most of us, I think, are biologically wired to trust people we met IRL. There have been a couple of articles floating around about decreased trust between members of remote teams: there's more blaming and reporting and less spontaneous helping between people who haven't met IRL. Maybe it's something about microexpressions and emotional mimicry, or maybe it's something about the primordial fear of physical retribution for wrongdoing, but it seems that most people tend to be more ethical and trusting towards real life contacts. My experience is that this is less salient for people who are further along on the autistic spectrum.
#1: On the IRC roleplaying Darkmyst. I even met 2/3rds of my polycule there and the relationships are still going strong. As others have said, the key seems to be to have shared activities with people. (In the case of roleplaying, I find the bonds build out of the vulnerability of revealing what's going on in your imagination.)
#2: I think I haven't found one. I use and can recommend schlaugh.com for getting a social media fix, but I'm not yet sure about whether that's generating deep relationships. I do think it's generating better ones than I'd be seeing on Twitter or Facebook, but given the constraints of schlaugh.com there's also a lack of immediacy which I feel may be needed to create proper deep bonds. That said, I *would* say I've made friends there, quite strongly so.
Mysterious. I haven't really found anything I would call "friendship" online; at most I would call the users I am most familiar with "longtime acquaintances" (or something like that), and based on my experience I feel pretty dubious about a social network like that working. My current intuitive guess is (something like) that the investment required to create a "friendship" for any pair is large and there's not enough in internet communications for a-pair-of-users-who-met-on-the-internet to meet the requirement because of asynchronicity etc., but I feel pretty unsure about this.
Also, related gwern writing: "Face-to-face meetings, even brief ones, appear to cement personal connections of trust and liking to an extent not achieved by even years of more mediated contact like phone calls or Internet text discussions / emails / chat;...." (at https://www.gwern.net/Questions#sociology)
Re: comments. Just do what you're already doing, Scott. I think they add something special to the Substack.
The addition of "don't highlight" is more than enough to guarantee anyone who doesn't want their comments seen (on a public forum!) won't be surprised. You could ask Substack to put a small text under "Discussion" with a disclaimer, if you want to make absolutely sure.
I second that a disclaimer or checkbox in or next to the comment box seems ideal.
The difference is that if someone has a friend who sometimes reads ACX but probably doesn't read the comments, posting a story about them buried deep is probably okay. But if it's highlighted, then there's a much higher risk of it being seen (which was the objection at least one person had). I do think the disclaimer idea is workable though.
Substack hasn't even gotten around to removing the hearts after a month, I'm not optimistic about them doing some special new thing.
You can't edit the "Discussion" text, either? Something like "Discussion. Your comments may be republished, check About page for details."? If you can't do that, man, Substack really has a ways to go. Having to remember to manually add a disclaimer to every post will get old fast.
What may slow them down is the fact that they seem to use the exact same code for every substack. They've hardcoded astralcodexten in numerous places for stuff like expanding comments, so maybe they're trying to find a better way to do it. On the other hand, they've hardcoded astralcodexten in numerous places, so doing it one more time shouldn't stop them.
> They've hardcoded astralcodexten in numerous places for stuff like expanding comments
So they've got an utterly incompetent dev team :/
The proper way to do it is of course to make a setting that is configurable per substack.
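A minimal sketch of what such a per-site setting might look like, replacing scattered hardcoded slug checks with one configuration table. All names here are hypothetical illustrations, not anything from Substack's actual codebase:

```typescript
// Hypothetical per-site configuration instead of hardcoding one slug
// ("astralcodexten") into the frontend logic. Every name is invented.
interface SiteConfig {
  slug: string;
  autoExpandComments: boolean;
  showHearts: boolean;
}

// Sensible defaults that apply to every site.
const defaults = {
  autoExpandComments: false,
  showHearts: true,
};

// Per-site overrides live in one table instead of scattered if-statements.
const overrides: Record<string, Partial<SiteConfig>> = {
  astralcodexten: { autoExpandComments: true, showHearts: false },
};

// Merge order: defaults first, then the site's overrides (if any) win.
function configFor(slug: string): SiteConfig {
  return { ...defaults, slug, ...(overrides[slug] ?? {}) };
}

console.log(configFor("astralcodexten").autoExpandComments); // true
console.log(configFor("someotherblog").autoExpandComments);  // false
```

Adding one more special-cased site then becomes a one-line entry in `overrides` (or a row in a database table) rather than another hardcoded string in the code.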
Job opportunity, sounds like.
I've done enough heroic efforts at incompetent companies that refuse to listen. They need to want to do better.
If you open up the developer console on your browser, they actually have a recruiting message in there. But like Aapje, I wouldn't touch it with a ten foot pole.
Why? Is there something specific about Substack that turns you off?
I agree in principle; however they're probably optimizing for getting the features out to their client as fast as possible. Maybe they're planning to repay the tech debt later, in a seamless way. (Whether they get around to it is another story!)
Yes, that's always the promise: 'later, we'll fix things'. However, either they keep growing and there's money, but facilitating that growth still takes precedence, or they'll stagnate, so they could make it right, but they'll start focusing on saving money. So in practice, this theory of fixing the technical debt seamlessly almost never happens.
In reality, what tends to happen is that things become such a mess that adding features takes longer and longer & you get more and more bugs. So the only solution is to scrap things and start over, and then migrate to the new code, which is not going to be seamless.
The more crap the company accepts, the sooner the software needs to be replaced. However, you also create a company culture that accepts crap, so it's hard to change course and make the new software more robust.
++Test to see whether posting comments works again++ (can we all start developing for substack at once? incremental changes won't work. We have to be the Napoleonic France of frontend development, sweeping l'ancien régime before us with an iron fist, or an iron broom or something)
An iron guillotine?
That seems to match the analogy the best :P
Well it's nice to see they've made the CSS look pretty but the 'new first' button is still completely borked.
OK most likely not doable and a dumb idea. But could you heart your own comment as an indication that you don't mind if it's re-posted. This would mean that you (Scott) have to be able to see who put the hearts on a comment. If it's a workable idea, then lemon hearts turn into lemonade. (And where is the post you talked about hearts and not liking them?)
As of four days later, the hearts are gone! (I kind of miss them.)
At risk of sounding incredibly naive: isn't it usually possible to tell when somebody might not want their comment signal-boosted? I imagine >90% of cases of people not wanting their comments broadcast are culture-war adjacent; so wouldn't it be sufficient to ask in those few cases, and presume consent for the rest?
I feel like I've missed some underlying context for why this is a concern. Of course we're all feeling a bit sensitive about the privacy issue due to recent events, but is there some "typical" scenario for why somebody would post a non-CW comment, but have a problem with it being signal-boosted on a "best of" post --- without it being fairly obvious that the poster might have a problem?
Here: https://astralcodexten.substack.com/p/highlights-from-the-comments-on-class#comment-1427102
Welp, that's an example I would not have guessed, and I now understand the discussion. Thanks for the pointer!
(Personally, I think that retroactive removal, as happened in that case, is perfectly acceptable --- but this is clearly something where reasonable people can disagree, and I'm gonna keep my d.f. mouth shut and defer to those who comment more than I do.)
Yes, I bet I could guess 90% of cases, I'm just worried that the few that slipped through would really hurt/bother people.
I agree with this, and it was my post that triggered this conversation... I consider it a failure on my part to express my want for obscurity clearly. Keep doing highlights posts, Scott!
I've seen a few people saying that they see subscriber only posts in the rss feed, even though they aren't subscribed. While it seems that substack doesn't provide a non-subscriber feed, I've taken a shot at filling that void here: https://pycea.tk/acxfeed . It's a direct clone of the official feed, just with sub only posts removed.
I read this through a rss feed (Feedly in case it is relevant, which I gather is the most popular one), and the subscriber only posts do get sent, but the content is just null, so you see they exist, and their titles, but not the actual text.
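That null-content behavior suggests one way such a filtered feed could be built. A rough sketch (assuming an RSS 2.0 feed and that subscriber-only items arrive with an empty or missing description, per the observation above — this is not pycea's actual code):

```python
import xml.etree.ElementTree as ET

def strip_paywalled(feed_xml: str) -> str:
    """Return a copy of an RSS 2.0 feed with empty-bodied items removed."""
    root = ET.fromstring(feed_xml)
    channel = root.find("channel")
    for item in list(channel.findall("item")):
        body = item.findtext("description")
        # Subscriber-only posts show up with null/empty content.
        if not body or not body.strip():
            channel.remove(item)
    return ET.tostring(root, encoding="unicode")
```

A proxy endpoint could fetch the official feed, run it through something like this, and serve the result.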
On comments, this seems like a rare social problem with a technical solution.
You've already asked Substack for a lot of changes to make these comments more like WordPress. It seems to me this is another feature you'd like: either a checkbox to mark a comment as highlightable / non-highlightable, or a checkbox at site-level per user, whichever's easier. Then on the admin side, comments can get different background colors depending on whether they should be highlighted or not (or something).
Might take a while to implement depending on how many other higher priorities there are, but it would save you from having to manually maintain lists of people who want comments highlighted.
I came here to suggest the same thing. I think a user/site-level opt-in plus some sort of flair (either public or visible only to admin) would solve the problem nicely.
That sounds like a far preferable approach to indicating it publicly, which has very obvious (I hope) failure modes.
Maybe it's just me, but the simulation hypothesis (that we're essentially living in a simulation by some superior intelligence) seems completely ludicrous, seeing how our universe very convincingly appears to show behavior spanning 30 orders of magnitude in time, 30 orders of magnitude in space, and a sheer degree of size, scale and detail that is completely absurd if you were trying to study something specific in a simulation. Am I missing something?
You're assuming the hypothetical universe simulating us is anything like ours. If we simulated an entire universe, we might fuzz some of the details, make it smaller, say fully 2D - extrapolating from that, the "outside universe" might be incredibly complex, and simulating us wouldn't be that taxing. To them, the scale of our universe might seem quaint.
The most convincing aspect of the simulation hypothesis, to me, is that a universe's sentient population is likely to run a vast number of simulations during its lifespan, making it more likely that we're inside one of those than a "real" universe. That's making a number of assumptions, however.
Well, yes. The hypothetical universe is similar enough to ours to support intelligent creatures that are capable and willing to set up simulations, otherwise it wouldn't work. And creating a simulation always involves overhead, unless you simplify the physics, and that doesn't seem to be happening to an appreciable degree. So, no matter how potent the physics in the simulating universe may be, it seems odd that they don't have anything better to do with >> 10^80 degrees of freedom.
How do you know that our parent universe doesn't have physics many orders of magnitude more complicated than ours, making our physics seem trivially simple and trivial to simulate?
Degrees of freedom stay degrees of freedom. Even if you could pin down each particle in our universe with a single integer number instead of all the quantum field in a curved space-time continuum crap we're looking at from the inside, it's still... a HUGE effort and, for the most part, highly redundant to an absurd degree.
Who's to say that their computers work anything like ours? Maybe by their very nature this type of universe actually makes a lot more sense. Like the concept of pinning "down each particle in our universe with a single integer number" could be very foreign or radically inefficient. Maybe their computers are more like our quantum computers. Maybe rather than relying on Boolean algebra like the transistors in our computers do, they run on set theory or some other branch of mathematics. (I believe quantum computers use quantum logic, not Boolean logic.)
I really don't feel like you're addressing the central objection, which is that number size is relative. Just because a number is large relative to everything you're familiar with doesn't mean it isn't insignificant compared to much larger numbers. It feels stupid to even say this out loud, I'm not sure how I can make the point without sounding obnoxious.
How do you know that there isn’t a china teapot in an elliptical orbit around the sun somewhere between Earth and Mars?
Would you settle for a sports car in solar orbit?
It's so awesome that this is a technically correct (the best kind!) response to that thought experiment!
Because there's no plausible explanation for one being there, unlike hypothetical beings simulating us. These observability arguments always felt like a bit of a cop-out to me. Just because a proposition isn't directly observable doesn't mean it doesn't have a truth value.
Sure there is. The simulators put it there, because why not? Maybe the whole point of the simulation is to see how long it takes us to find it. They have the power to do that. They have the power to do anything.
“Plausible” in this case is kind of in the eye of the beholder. Others have already stated many reasons why “the universe is a simulation” is not particularly plausible without unproven (or unprovable) epicycles like “well the simulators live in a much more complex / higher dimension universe”.
But the point of Russell’s Teapot is not really just observability - it’s that the burden of proof lies on the side making unfalsifiable claims. I.e. “you can’t prove we aren’t in a simulation” doesn’t cut it, especially when the “simulation” claim can just add on more epicycles to make our simulated existence even less falsifiable.
Should that really seem odd?
Your implied argument seems to be that we can place some reasonable upper bound on the number of "degrees of freedom" that would ever be spent on something frivolous, and 10^80 is higher than that bound. But I don't see how you'd derive such a bound. (If you'd asked people from 1970 what bound seemed intuitively reasonable to them, I suspect most of them would have picked a number that is lower than what's used in modern computer games.)
If we're a simulation, there's no obvious reason the people simulating us couldn't have 3^^^3 times our resources. And I don't see any obvious reason someone with 3^^^3 resources couldn't spend 10^80 on a video game.
Our universe couldn't actually create a high fidelity simulation of the universe. Indeed, it is questionable whether we could even simulate a single brain in real time.
There is no reason to believe that a universe would actually create a bunch of high fidelity replica universes full of intelligent creatures. The computational resources are implausibly high.
(00_00)(MONOGURUI: UEGHURUOMO_UEDA)(ii))(episode 303)(00)(00) (00_00)(00) ueghuruomo_ueda(ii)(00_00)((00_00)(00_00))(ii): "(entities which exist within larger entity are intended to fill them up .. to push past their boundaries: ultimately any inner vehicle is, at its limit point, intended to implode the larger vehicle which it is contained within: the point at which macropolitics becomes micropolitical, .. vice versa, forms a penetration-resistance with reverb tendencies: (00_00) managed to blow out (00_00) from the inside (usually represented by a gersgorin radius) of a floating deictic point (usually represented by a highly variable eigenvalue within a given more or less fixed range): as part of this process of modeling the theorists involved took the untraditional step of flipping around the typical analytic-synthetic structure: hä guessed that udoh(ii)UNYIMEABASHI's invitation now was the consequence of the previous day's scandal, .. that as a local liberal hä was delighted at the scandal, genuinely believing that that was the proper way to treat stewards at the club, .. that it was very well done: yagaoMEGHIRU(ii) smiled .. promised to come: instead of assuming a 0 point .. working up through iteration they took as fundamental the assumption of a given set of countless but unspecified arbitrary points .. then worked their way down from there to given points that could be specified, one of which, importantly, was assigned, again arbitrarily, as the 0 point for the given set from which relations were then generated outward, .. it was this set as modeled that formed the I:0 range for their probability field: this choice, or perspective, helped impose a certain discipline in the area of oscillatory blow-ups .. non-convergence: there were multiple computational models developed .. put into use: the primary of these implemented by YAMAUEREDA was a package called RAMPANT(ii)."
Good morning, GPT-2. I agree.
(00_00)(MONOGURUI: UEGHURUOMO_UEDA)(ii))(episode 500)(000 (___)00. (00_00)(MONOGURUI) (0000).(00_00): "(hebephrenia-bodies were kept in chambers at cold temperatures to prevent growth .. covered in tightly strapped down cotton coverings, sometimes with floral patterns suggestive of some future uncovering or release, or, at other times, in acrylic fabrics or even clear rubber similar to goryo-suit material, or patterned semi-opaque lace/nylon hybrids: the imaginary, conceived of as a product of the absence of verification, played an important role in the derivation of u_mappings: the u_space in these constructions was populated densely by both realized .. unrealized functions): (each corresponding to an actual or potential goryo: derivative positions were conceived of always as the convergence of more than one probabilistic entity into a given space: likewise u_agents attributed much of the vagueness of notions of interpulse to conceptual hesitations regarding its foundations: the question, for them, of the degree of conceptual overlap or consistency between interpulse in thermodynamics, statistical mechanics, information theory, …., was moot: it's all thermodynamics they would confide: the rest is just different words to mean the same thing, (0000): pushing out against the white flesh is a small metal stunt, positioned to just prevent the muscles touching, with a diamond fastened on each end): (one of the fundamental concepts of the u_agents was that attempts to measure m directly would invariably fail, as it was too amorphous .. mysterious in its movements to quantifiably observe: instead they emphasized the method of measuring m(0)mat or its other variances as one would a large invisible creature in one's midst, via the indirect observations one could make regarding the movements of those things it was pushing)."
(00_00) indeed!
0oooo
o0ooo
oo0oo
ooo0o
oooo0
I just simulated a universe. It wasn't a very interesting one, but maybe our universe is particularly uninteresting compared to the much larger one we're embedded in. Why couldn't we easily fit in a much larger universe? You say the computational costs are high, and you mean they're high relative to anything we have available in our universe. Well, yes, obviously, you'd need a larger one to fit this one. I don't see how that's at all relevant.
Because the simulation argument is equivalent to the Flying Spaghetti Monster argument.
If your argument by its very nature makes no testable predictions it is worthless.
I've never understood why I should care if I'm living in a simulation or not. What are the practical implications of this theory and how is it decidable one way or the other?
> What are the practical implications of this theory
If we're living in a simulation it suddenly becomes *really* important to figure out a way to convince the people on the outside to not turn the simulation off.
The argument is pointless because it makes no testable predictions by its very nature.
I had the same confusion. I work in mechanical engineering with a lot of connection to physics simulation and the simulations we can run are... horribly slow. For combustion engines, simulations intended to cover effects up to 5..10 kHz usually take more than 1h per second of simulated time on very powerful hardware... and progress is not scaling with Moore's law (generally, most methods do not scale linearly but at least quadratically. And even with linear scaling, the amount of progress we'd need to go from simulating more than "a few molecules" is insane - if you don't believe me, I think the distributed computing efforts for protein folding for COVID might be a good reference).
I think part of the confusion is the question of what you're simulating. Everything I wrote refers to simulating a complete universe in a physics-sandbox environment. The other option would be to only simulate one consciousness (me/you) and their perception of the universe around them... which would be much closer to what videogames do (e.g. not rendering the parts of the world you're currently not looking at). This is still way beyond our current capabilities (Christof Koch likes to point out we currently cannot even simulate a simple worm), but sounds at least doable given a large amount of progress over a few centuries...
But when you're simulating a single consciousness (for whatever purpose), you can put it in any environment you want - you don't have to apologize for the physics. Why even pretend to put the consciousness in a universe that seems to follow general relativity, quantum chromodynamics and all that other crap? Why bother with creating a credible semblance of cosmic background radiation and neutrinos?
This smells way too much like metaphysics to me - "we are just a dream in the mind of God", quite literally, and with a completely geeked-out God, too.
Indeed. And while you might have to take some things into account while simulating a modern physicist, why would you simulate someone who wastes his time trying to solve puzzles you understand perfectly? Simulating a medieval peasant would be much more interesting, and they would be more tolerant of tiny discrepancies in the simulation.
That said, the most recent simulation I saw (yesterday) was "1 Million Spartans against 2000 Full Auto Shotguns", so maybe I have it wrong about the sort of thing a super-powered civilisation might find it interesting to simulate. (Hope I didn't just invent a new basilisk that creates increasingly extravagant carnage in order to test the power of its simulation equipment...)
During last year's lockdown I watched a bunch of Universe Sandbox videos. It's an astronomy simulation program where you can do things like hurl Saturn at the Earth at 90% the speed of light, replace the Sun with a giant black hole, and duplicate the Moon millions of times. So maybe that's what the basilisk will want to do!
There are testable hypotheses for a simulation universe like this one, but they're all falsified.
Thus, the simulation hypothesis is basically indistinguishable from the FSM, in that no universe that resembles the rules of this one could simulate this universe. Thus, any argument about it is just the FSM waving their noodly appendage and arbitrarily claiming it is so.
The reality is that it is basically a lot of navel-gazing. There's absolutely no reason to believe in it whatsoever, any more than there is the Flying Spaghetti Monster, or the Invisible Pink Unicorn.
That was pretty much my impression.
Do you have an idea why the idea nevertheless seemed to be taken seriously by quite a few smart people?
Same reason why the singularity is, more or less: it seeming outwardly plausible but not actually making any sense when you have a deeper understanding of the physics of the situation.
A major contributing factor was (and is) that some people treat certain figures as experts on these things when they actually aren't, and so trust their judgement rather than spending time thinking the issue through in depth.
Speaking of ways the simulation could be testable, I like the argument Janelle Shane makes in her book You Look Like a Thing and I Love You: if we were living in a simulation, some life-form would have evolved to exploit the glitches, e.g. getting free energy from rounding errors.
Examples here: https://aiweirdness.com/post/172894792687/when-algorithms-surprise-us
> There are testable hypotheses for a simulation universe like this one, but they're all falsified.
What hypotheses, and which are falsified?
The idea I mentioned above about life forms evolving to exploit glitches is an example, or more generally the idea that we'd observe anomalies stemming from glitches, but there are a couple possible objections (neither of which I find that plausible):
-We do indeed observe glitches in reality, e.g. the Mandela Effect, perhaps also paranormal phenomena.
-We don't notice glitches because the simulators are observing everything meticulously and pausing or rewinding whenever an error creeps in, or the simulation is entirely error-free in the first place ("It's a very good illusion" http://wondermark.com/904/). Of course this gets into Last Thursday territory.
The problem is that the universe doing the simulation couldn't have laws of physics that are the same as those of our universe.
The problem is the laws of physics and information theory.
To simulate and store information about the Universe at the atomic level, the most efficient possible computer would be... an exact replica of the Universe. So any computer would, by necessity, be less efficient than this.
So obviously that's right out.
But even replicating the Earth 1:1 would require at least that much matter.
You'd *have* to cheat. But we see no evidence of such cheating.
And even if you DID cheat, it STILL wouldn't solve your problem. Even if you only had to simulate the surface of the Earth, and you could reduce the complexity by a million times, you'd *still* need a computer that was a few hundred km across.
On top of these problems, there are physical issues with the speed of light - information within that computer can travel no faster than light, yet right now we have the ability to take audiovisual images anywhere on the planet and transmit them to anywhere else. So you can't actually run the computer in real time.
It's even worse than that, though.
You'd not only need to generate enough energy to power this computer, but you'd also need some way to *dissipate* that much heat - and frankly, you'd need to radiate that heat out into space, because you're talking about something that is of a scale that it would cover the entire surface of the earth in a sheet of transistors several centimeters thick. But this makes the speed of light problem even worse, because it means you need to make your computer even larger for heat dissipation problems. This would make your computer ridiculously frail - you can't actually build a solid dyson sphere, it would be destroyed under its own internal stresses.
Remember also that the reason why we have 2D chips is heat dissipation - running such chips at a high frequency would quickly cause the whole thing to overheat.
But you can't actually slow down the computational speed because then you start running into time constraints. Running things at 1/500,000,000th speed, an 80 year long human lifespan would take 40 billion years - far, far, far too long. Even if you speed that up by 10 times, that's approximately the age of the Sun. Speed it up by 100x, and you probably start running into serious heat dissipation problems - and you are still taking hundreds of millions of years to simulate one lifespan.
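A quick sanity check of that slowdown arithmetic, using the numbers from the comment above:

```python
# Back-of-envelope check: simulating at 1/500,000,000 of real-time speed.
slowdown = 500_000_000       # outside seconds per simulated second
lifespan_years = 80          # simulated human lifespan

outside_years = lifespan_years * slowdown
print(outside_years)         # 40,000,000,000 -> 40 billion outside years

# A 10x speedup still leaves ~4 billion years, roughly the age of the Sun.
print(outside_years // 10)
```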
Moreover, the more we go out into space, the less able they are to cheat on the fidelity elsewhere, which makes the problem even worse. We appear to be able to arbitrarily focus our telescopes anywhere and see stuff, and arbitrarily put things under microscopes and see stuff, and as we build better telescopes and microscopes, this requires ever larger amounts of storage to make sure that the skies and small-scale matter stay consistent and also that they're evolving properly over time.
Even simulating a single person and the people around them is a huge problem because we encounter new people all the time, and those people at least pretend to know other people "off-screen", so there's still an inordinate amount of calculations necessary to simulate even one intelligence's point of view, because you have to keep on simulating things outside of that, more and more, to create a plausible simulated reality for them, and errors will reveal the whole thing to be fake.
There's no way to do any of this in a computationally efficient manner. And the higher the degree of fidelity required, the worse the problem becomes. If you have to actually simulate individual atoms, the most efficient way of doing it is to actually just create a replica Earth - any other solution will be *less* efficient, so require even more material, because of how information has to be stored.
So basically, you need to posit that the "real" universe has different laws of physics - but at that point, you're just arbitrarily assigning non-falsifiable properties to the external universe, which is just a different way of saying that the Flying Spaghetti Monster used his noodly appendage to change things, and that whenever he would be revealed, he changes things so he isn't.
The starting assumption that a universe which contains our own necessarily has the same physics seems unwarranted.
If you assume that the containing universe can have arbitrary physics then it just becomes a Flying Spaghetti Monster argument - completely unfalsifiable because the Flying Spaghetti Monster can just wave his noodly tendrils and change things to be consistent.
It renders the argument unfalsifiable and therefore worthless, because an unfalsifiable argument makes no useful predictions.
Of course there's more reason. We've literally done it one level down, at our own level. We have precedent for this sort of thing happening, so we know it's possible in principle.
Consider a TV show. Very realistic for the part that the viewers see, but the show isn't bothering to "simulate" anything not shown to viewers. In the extreme, if you are the only conscious entity in the simulation, how much detail is really needed to keep the truth from being obvious to you?
Right, but in our case, the "viewers" have complete freedom to investigate the props. In a simulated world, there's nothing that would keep the simulation from withholding the deeper history. It could return "it's a piece of wood", and that's it. In our world, you can then use an electron microscope to probe the wood's detailed structure. You can do Carbon-14 dating to determine its age, use other isotopes to figure out where the piece of wood came from, do a DNA analysis to track the mutations in the plant's genome, count the tree rings to reconstruct a history of the world's climate that is consistent with the tree rings in other pieces of wood... constructing a simulation that gets all this right and consistent on the fly would be seriously impressive, to say the least.
You don't need to get it right on the fly--you can pause the simulation until you've written whatever code or run whatever real-world experiments--but you do need to notice "ah, they're about to find a bug" on the fly.
(I currently think we should update heavily against the simulation hypothesis based on the apparent size of our universe, in both directions. Most simulations will probably look more like Minecraft to the people inside them--but I note that a Minecraft villager could argue against simulation on the grounds of the massive size of their world, surely too large to fit inside any practical computer.)
Seriously. The amount of redstone you'd need to simulate even a single creeper boggles the mind!
Right. Every level of simulation adds more overhead... contrary to what the simulation hypothesis claims.
You seem to be taking the opposite moral from the minecraft comments as I am. For me, they're a pretty good illustration of why simulation *is* plausible. What argument have you made here that could not also have been made by a minecraft villager explaining why she must not be in a simulation/game? E.g. the villager could, like you, point to the vast scale of time and space, the large number of degrees of freedom specifying her world. Yes the actual numbers she would quote would be much smaller than the ones you quoted, but they would serve the same purpose in the argument - numbers that to her are so big that she can't imagine anyone else having bigger numbers to work with or wasting so much computronium (redstone) on running the simulation.
You don't need to notice "ah, they're about to find a bug" on the fly. If they ever do find it, you just have to notice then and REWIND it, then prevent them from noticing.
https://www.overcomingbias.com/2011/07/me-in-new-scientist-on-sims.html
Because your mind would also be simulated, another option is simply to make your mind incapable of noticing (or remembering) the hacks, shortcuts and inconsistencies. It might be a bit like the dreaming mind, which - in my dreams, at least - accepts at face value the craziest sequences of experiences.
This kind of nerfed mind seems less interesting to simulate than an unrestricted one. It would be relatively easy to notice the problems as they come up and fix them by rewinding when needed.
Heisenberg's uncertainty principle (conveniently) limits my ability to investigate the props.
I don't think this is at all compelling. This is both 1) a natural consequence of complex-valued wavefunctions and 2) many, many orders of magnitude smaller than you are.
Imagine you didn't know about the uncertainty principle and a big part of the reason you thought this isn't all a simulation was because of how much detail we observe. If you were to learn of the uncertainty principle, by how much would you increase your estimate that this is all a simulation?
Isn't it the case that any non-omniscient mind will of necessity have certain limitations re: how deeply it can investigate without interfering with the investigation?
Isn't it the case that insofar as I don't know what you're thinking/hearing/seeing right now (and vice-versa), my mind is not omniscient?
The simulators might be rich enough that all that wasteful stuff isn’t a big deal. Today there are lots of times when people “waste” a lot of computer time to save a small amount of human time.
Two fun possibilities are:
1. The simulators are our own descendants (or something roughly like us were their ancestors) - ancestor simulations
2. The simulating universe is so different than ours that the simulation is analogous to simulating Flatland, Minecraft, or the Game of Life - game simulations
These monikers are just labels, by the way: an ancestor simulation might be run for the purpose of a game or therapy or some other purpose that isn't testing a hypothesis (except in the sense that everything is, but that's another kind of argument).
For ancestor simulations:
A specific run of the simulation need not last long. If you want to investigate a certain decision, or situation, you might run some five minutes or some hour a billion times. In this scenario, everything up until recently could be loaded as a common save state, and that save state can take as given things like QM and cosmology, unless you yourself are studying it *right now*. Similarly, even if the only way to get to a given state is to simulate its history, you only need to do that once for any history you don't directly want to deal with. (cheek: As others have mentioned, this raises some questions about measure of simulated beings and provides a strategy for increasing your measure: try to avoid doing things that are boring to hypothetical externals. )
Even a longer-running simulation doesn't need to simulate QM, except during the period when a physicist is running some experiment the outcome of which depends on it. For the vast majority of the world, approximations are fine as long as they don't change what's perceptible to your sims. All you know is what you've been able to take in via relatively low-bandwidth senses, and the simulation can just adjust your mind so that you remember everything being fine and holding together logically as long as you're not actively experimenting right now.
For game simulations:
As someone else has mentioned cross-thread, Minecraft inhabitants, were they thinking, could point out that no conceivable computer (built in Minecraft) could simulate what they see around them. This is less of a slam dunk case than they might imagine. It could well be that our universe is much, much simpler than the simulating universe, even without the sort of simplification of detail implied in some of the ancestor simulation argument.
Some parts of our world resemble compromises an engineer might introduce to more easily compute local state: locality, uncertainty, the hard problem of consciousness, etc. (cheek: Even the tendency for interactions with the physical world to become automatic is fishy -- after you've mastered bike riding, paying too much conscious attention to exactly what you are doing can seem to remove the ability... almost as if learning to ride a bike is a simulated experience, but "Bob was riding a bike" is just a background statement... )
The biggest issue with the simulation hypothesis is philosophical, rather than evidentiary: like God and everything-is-a-dream, it's unfalsifiable. Literally anything you could experience is simulable, replayable, etc.
" Literally anything you could experience is simulable, replayable, etc."
That seems like a bold statement, seeing how we have no idea how to simulate anything we can be sure is actually conscious.
In the context of falsifiability, it's prefaced with "assuming we're being simulated right now, ...". There's nothing we could learn that we could trust to disprove the possibility that we're being simulated. Which is kind of a problem. :)
You can't prove that we're not being simulated, but it might be easy to prove that we are. If some crazy alien popped into existence in front of me and broke a bunch of physical laws by way of demonstration, I would find that quite convincing. No guarantees we'll find the root password lying around, but worth spending at least a little effort to look.
The last few years have convinced me that it's not so much a simulation we are living in as a video game, played by adolescents or bored students in another dimension. Forty years ago, nearly, I remember being on a training course for young government officials in the earliest days of computing, and we were allowed to play with a program which simulated efforts to improve criminal courts by tweaking different inputs. Over a rainy lunch-hour it occurred to us that, by turning the dials the wrong way (we had to change some BASIC commands) we could cause the system to crash, which we did. I assume much the same is going on here: a couple of interns have decided to test the system to destruction. How else would you explain Trump, Brexit, PMC hysteria, Russia! Russia! and the Virus except as a deliberate attempt to crash the system? And what kind of sick adolescent humour decides that the answer to all these problems is compulsory White Guilt struggle sessions? I'm waiting for the grown-ups to come back from lunch, or whatever it is out there.
Genuine question - why should we have any priors at all over how much computing power seems like "a reasonable amount to spend on a simulation" to aliens we've never met, in a universe we've never seen, governed by laws of physics we don't know?
Because why would they bother at all
Because if we apply our notion of a simulation at all, why not apply our knowledge of conceivable motivations for doing it as well? Otherwise, you're firmly in the realm of theology. God moves in mysterious ways, and all that.
Yeah, as I hinted at elsewhere, claiming that the simulation is from a more complex universe with computing powers far beyond our own, or the like, is pushing the argument into Last Thursdayland, making it unfalsifiable.
The original simulation argument disjunction (almost no human-level civilizations become posthuman OR almost no posthuman civilizations are interested in ancestor simulations OR almost all human-like observers are in simulations) depends on "the immense computing power of posthuman civilizations" compared to what is needed to simulate a human-like civilization.
That is, it depends on our expectation that a civilization like ours, in the future, will be able to simulate many civilizations like ours (with appropriate simplifications). If we could only be in a simulation if the simulators have much more access to computation than is possible in our universe, then it's not an ancestor-like simulation, and the argument doesn't work.
That is an excellent point; however, notice that the argument doesn't demand *recursive* simulations. That is, it requires that the original humans gain enough computing power to simulate their ancestors, but it doesn't require that those simulations are able to perform simulations of their own.
Suppose you only wanted to simulate our ancestors up until the year 1000 C.E. There's a lot of computationally-expensive physics that you could probably get rid of without seriously affecting the results. You probably don't need quantum, or relativity. The stars could probably be some kind of pre-recorded movie projected onto a celestial TV screen. If you ran the simulation long enough, these glaring omissions would eventually distort the results compared to what really-originally-happened, but for ancient human history it seems unlikely to matter much.
Our distant descendants could one day disassemble all the stars in our universe to make computronium, and then use that computronium to simulate our distant ancestors in a world where the sky is a TV screen. (They couldn't simulate US that way, because we've sent probes out there, and triangulated celestial distances from opposite ends of Earth's orbit. But that's pretty recent in human history.)
If this had already happened, and we were IN the simplified universe that no longer has enough resources to simulate itself, would we know? What if quarks are "supposed" to be made up of even-smaller particles that could somehow (in the far future) be used for more-efficient computation? What if all that "dark energy" we can't seem to find is actually a kludge to make up for the fact that they removed most of the universe's mass so that they wouldn't have to simulate it? (Though really, if they took something out, it would probably be something that wasn't discovered until after the time period they were simulating, so we couldn't seriously hope to guess what it was.)
Or more simply, what if the parts of our universe that are really far away are just using much lower-resolution simulations than the things that are nearby? In the original universe, humans eventually colonized those places, claimed those resources, and spent some of them to make simulations that stopped before their simulated subjects got that far. In the meantime, the stars in the Andromeda galaxy are being simulated as point-masses or something.
Yeah, I would think Step 1 for ancestor simulations is to limit yourself to the immediate vicinity of the people you're focusing on (which might be everyone) and only simulate at the level of detail those people will be able to perceive. Your skin doesn't need to be made up of cells until you're putting it under a microscope. Even brains can probably be simplified a lot without noticeably changing their functionality.
There could be some future discovery about physics that makes computation dramatically easier, but I don't think it's necessary at all. We can probably be simulated pretty well with the technology we can already see coming. If you were trying to simulate a simplified physics in full detail, you might want to leave out quantum--people have made a lot of fuss recently about the difficulty of simulating quantum physics with classical computers. But there's no need to simplify the physics if you model details on an as-needed basis.
Note that video games already work this way. Google Earth doesn't download the entire dataset, just the part you're looking at.
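A toy sketch of that on-demand approach (all names here are invented for illustration, and this is nothing like any real engine's code): the world is defined by a cheap deterministic rule, and fine-grained state is only computed, then cached, for the regions an observer actually inspects.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def region_detail(x: int, y: int, level: int) -> int:
    """Deterministic stand-in for an expensive fine-grained simulation.
    Higher `level` means finer detail; only computed when requested."""
    return hash((x, y, level)) % 1000

class World:
    def __init__(self):
        self.computed = 0  # how many regions we ever had to simulate

    def observe(self, x, y, level):
        before = region_detail.cache_info().currsize
        value = region_detail(x, y, level)
        # Count only cache misses, i.e. genuinely new computation.
        self.computed += region_detail.cache_info().currsize - before
        return value

world = World()
world.observe(0, 0, level=1)    # coarse pass: one region computed
world.observe(0, 0, level=1)    # same query: served from cache, free
world.observe(5, 7, level=10)   # fine detail only where someone looks
print(world.computed)           # prints 2
```

The point is that total cost scales with what observers actually look at, not with the nominal size or resolution of the world.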
There is probably a thermodynamics argument. Shannon showed that each bit has entropy. But if we posit multiple higher dimensions... maybe anything is possible?
If we are simulated by aliens from a different universe, who knows. Except, maybe some universal prior that smaller universes are more likely? But that is somewhat dubious; the actual complexity that matters is the number and complexity of laws of physics, not the diameter of universe in meters.
But one popular version of simulation hypothesis is that our (post-Singularity?) descendants will run simulations of their history, i.e. us... either as historical movies, or as "what if" experiments. In that case, we know the laws of physics of their universe, because it is our universe.
I think with respect to "laws of physics we don't know" it's worth mentioning that there is a respectable belief that the laws of physics are unique -- that is, that it is *logically impossible* to construct different laws. Were you to try, you would sooner or later find that your different laws were logically inconsistent -- could not all simultaneously be true. It would be like trying to construct a "new" geometry in which Euclid's 5th postulate was both true and untrue.
I don't mean different in a trivial way, like the fine structure constant has a different value, but in a meaningfully different way, like the speed of light is infinite so instant action at a distance is possible.
I raise the point because it seems to me those who suggest "different physics" is a plausible answer to proposed difficulties of a simulation hypothesis ought to be asked to say why they think different physics is even in principle possible. We do not, after all, lightly assume different *mathematics* is possible in some other universe -- it's not clear assuming different physics is possible is any less logically dubious.
Here's what a universe built in hyperbolic geometry would look like to us:
https://m.imgur.com/gallery/V96Zy7b
Here's an internally consistent universe in which perspective distortion is actually ground truth:
https://youtu.be/IGpB8aULB_g
Here's one where time is cyclic:
https://youtu.be/tI0RfSn8oYg
There are more things in heaven and Earth, Horatio, than are dreamt of in your philosophy.
Heavens, those are no more universes than a globe is the planet Earth. I don't doubt you can pick out some small subset of physical law and make it consistent, but we're talking about *all* physical law. So far, we have not even been able to come up with even one fully-consistent set of physical laws, e.g. the gross incompatibility between quantum mechanics and general relativity, the fact that QED has an ultraviolet catastrophe that has to be renormalized away in an arbitrary way (that is inconsistent with GR of course), and so on.
What's the standard for what constitutes a "set of laws of physics"? A cellular automaton could be construed as a kind of universe. You can't just summarily reject mathematical structures that you don't think are complicated enough.
The standard is "must describe things that actually exist" where "exist" has the usual definition -- occupies space, has kinematic mass, continues to exist when we shut the power off or look the other way. Metaphorical "universes" like a game of Conway's Life running on a PC don't qualify.
I'm not rejecting mathematical structures that aren't complicated enough, I'm rejecting those that (1) do not describe any measurable reality or (2) those which are logically inconsistent. That *does* put me in the position of rejecting the Standard Model in case (2), but I reserve the right to soften that to mere skepticism and the belief that it isn't the final answer.
I see two possible interpretations of the simulation hypothesis.
1) the universe running the simulation has nothing in common with our own universe. This is unfalsifiable and has zero implications. If interpreted broadly enough, it's both probably true and assumed true by most physicists, but it doesn't imply any of the interesting thought experiments like "what if the aliens decide to turn us off"
2) the universe running the simulation is similar to our own universe. This is ludicrous: while it is technically difficult to disprove, and while it could have implications, it is staggeringly unlikely. Any implications it did have would be counterbalanced by the possibility of a dissimilar simulating universe having the opposite implication for unknowable reasons
Well, when human beings simulate things, we simulate only what is absolutely necessary to extract the data we want, because simulation is expensive (or if you prefer because if you cut out the unnecessary bits you can run your simulation longer for the same cost).
So while a <i>perfect</i> simulation hypothesis is unfalsifiable (and for that matter I don't see how a perfect simulation could be meaningfully distinguished from reality per se), if the kind of simulation being done is like those we do, you should see gaps and missing pieces all over the place -- practically everything should be a facade, with things that are of no importance to the simulators left out or blank. For example, if the interest is in how humans interact socially, say, or any other external manifestation, then why go to the trouble of simulating a complete digestive system for everybody? Nobody is <i>aware</i> of lipids being hydrolyzed in the small intestine, why bother doing it? Just move the organism from hungry to sated, and disappear the lipids, and call it a day.
Obviously a choice to directly set the simulation (or parts of it) to a given state, instead of letting it "naturally evolve," in order to save costs, would break the laws of physics within the simulation -- but who cares? We do that all the time in our own simulation, for convenience, e.g. "periodic boundary conditions" in a fluid simulation. So long as we're confident we're not compromising our final results, it's an excellent trade-off.
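To make that shortcut concrete, here's a minimal sketch (a toy one-particle example, not drawn from any real fluid code) of a periodic-boundary trick: instead of modelling an infinite or walled domain, a particle leaving one side of the box simply re-enters on the other. Inside the box, physics looks normal; at the boundary it's "miraculous" by design, purely to save computation.

```python
BOX = 10.0  # box length, arbitrary units

def step(position: float, velocity: float, dt: float) -> float:
    """Advance one particle one timestep and wrap it back into [0, BOX)."""
    return (position + velocity * dt) % BOX

x = 9.5
x = step(x, velocity=2.0, dt=1.0)  # crosses the right-hand boundary...
print(x)                           # ...and reappears at 1.5
```

From inside the box, an observer at the boundary would see matter vanish and reappear elsewhere instantly, which is exactly the kind of in-simulation "miracle" being described.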
Which means that an "in simulation" marker of being in a simulation, at least the kind we do, would be the regular appearance of miracles -- things happening that <i>cannot</i> happen according to our "in simulation" laws of physics -- because the simulators are cutting costs wherever they can.
> an "in simulation" marker of being in a simulation, at least the kind we do, would be the regular appearance of miracles -- things happening that <i>cannot</i> happen according to our "in simulation" laws of physics
I would have taken that in a different direction, given the first half of your post: if the universe is under-simulated, with complex features taken out, then "miraculous" things might happen naturally, without miraculous cause—and faithfully-simulated humans would ascribe them to miracle *anyway.*
(But then that might be me reading "miracle" as "something produced by a magic system", not a shortcut that skips steps altogether.)
I think that's what I said, although now I'm wondering if I expressed it poorly. What I meant is that a simulation has no requirement to be consistent with the simulated laws. If I simulate a 2D universe according to some 2D laws I make up, my simulation isn't limited by those laws, because it's not actually taking place in the imaginary universe. I can have my simulation do things that totally violate those laws -- indeed, it would often be efficient to do so, because I might know the endpoint I want, and the endpoint might be fully consistent with the simulated laws, but I might not want to waste the computing resources it takes to let the endpoint happen naturally -- I'll just jump to it in one step, kaboom.
But that would seem to us (in the simulation) as a miracle. If someone beats cancer via a year-long arduous process involving powerful drugs and radiation, we don't consider that a miracle. But if he just went to sleep one night and woke up the next in the very same cured state, we would. So jumping over process to reach even a consistent end would seem like a miracle to us. But it's exactly the kind of thing *we* do when we simulate complex systems and we know certain things will happen, but don't want to waste the computing resources on getting them to happen the slow way.
The best answer to the simulation hypothesis is that it does not matter. If you think that our universe can be simulated, that means that it can be described by having a state space and some sort of rules that act on the state space in a way that can be computed. So when we think of something happening, it actually just follows deterministically from the rules and the state space (if the rules are indeterministic, there are ways to make indeterministic systems deterministic in simulations). So you can run the simulation one time, or a hundred times in parallel, or not at all - what happens is still conserved in the rules, and then our universe does not rely on being simulated in some kind of universe.
Hard agree. The universe exists and does the same things irrespective of whether it is simulated or "base reality", so there are essentially *no* testable predictions we could make of it. This is why I don't have an opinion about the simulation hypothesis at all.
Well, for starters, there's no guarantee that those 30 orders of magnitude actually exist anywhere we're not currently looking. Video games don't render the entire game world constantly all the time, they just fill in whatever the player looks at, at whatever level of detail they happen to be looking.
Second, nothing has to be synchronous. The simulation could take 500 processor-years to render 1 second of our universe.
Third, those numbers seem big to us given the physical parameters of our universe and our current level of technology; they may be extremely small from the perspective of whatever is running the simulation.
" they just fill in whatever the player looks at, at whatever level of detail they happen to be looking." There is a Heinlein story along those lines.
"They".
The 1999 movie "Thirteenth Floor" also addresses the issue, with part of the movie being set in a very carefully simulated Los Angeles and the rest of that universe being simulated to the extent that the sim-Angelinos are likely to notice it.
One way of ”solving” the simulation argument is to assume that all advanced civilizations thought of it, and all independently one-boxed a cross-civilizational simulation-taboo. Just to avoid setting up the probabilities in that direction in the first place.
This is compatible with the argument; it only explains why civilizations don't run simulations even if they don't kill themselves. And the solution is that they all use Newcomb's problem to avoid being in a simulation.
Since we're talking about politics, I found this 538 article about the link between qanon and white evangelicals interesting. https://fivethirtyeight.com/features/why-qanon-has-attracted-so-many-white-evangelicals/
Is there maybe a more general phenomenon where conspiracy theorism, or extreme politics in general, comes from a kind of frustrated religious instinct?
I can't say I have any thoughts about the underlying mechanism. But "Trump will sweep aside the evil people and create the new world order" has a similar structure to a day-of-judgement prophecy.
From the numbers in the article, QAnon believers are somewhat more likely to be evangelicals than the general public, and evangelicals are somewhat more likely to be Republicans. I'm not sure there's actually anything to see here.
Marxists have all kinds of insane conspiracy theories despite being "antireligious". Indeed, Marxism itself springs from antisemitic conspiracy theories, the idea that the rich Jews are hoarding all the money.
Do you have evidence or details for this?
For his antisemitism and the attendant conspiracy theories, ever read Marx's "On the Jewish Question"? Or "The Russian Loan"? Or his ranting about Lassalle?
Heck, it is reflected throughout socialist thought, the idea of US vs THEM, that there is some sort of CLASS CONFLICT, that this is THE DEFINING THING, but when challenged, it... just falls apart. The idea that the capitalists are just out to enslave everyone and rob everyone blind is just not how reality works; indeed, the fact that the standard of living has been skyrocketing for the last several centuries in capitalist countries makes it obvious how much of a lie it is.
I suspect part of this sort of thing is because of his apocalyptic mindset. "Forced Emigration" is an example of such. He believed that things were rushing forward and "The classes and the races, too weak to master the new conditions of life, must give way."
This sort of false sense of urgency is very common amongst conspiracy theories - the idea that the aliens are coming, that the Illuminati have control and are about to enslave us all, that the Storm is coming, that all police are secretly racist, that we're going to run out of food and all starve.
The idea underlying all of this is that the way society is now will collapse and no longer be viable, that it is all about to slip away or be destroyed, and that those in charge are lying about it or covering it up. It's the same sort of apocalyptic thinking you see in religious thought, but in non-religious contexts as well.
Instead of Satan, it's The Capitalists, or The Government.
Thank you. Urgency is occasionally appropriate, and the hard question is figuring out when.
Doubly a problem because the universe selects hard against those who were not urgent enough soon enough, but gives more of a bye to those who jump the gun a tad.
That may have to do with the "frustrated religious impulses" issue. If you approach economics or politics in a religious way, but have a lot invested in believing that you aren't religious, the impulses can sort of sneak up on you.
I think that there's some sort of cognitive defect which historically frequently manifested itself in religious cults, but actually has nothing to do with religion.
There have been "environmentalists" who believed that everyone was going to starve to death and die in the near future since Malthus. The rationale for this belief is ever-changing and related to the issues of the day, but it is always "THE END IS 10-20 YEARS OFF!"
The Population Bomb and Future Shock are good examples of this from about 1970.
The present climate change nuttery is the same thing.
The thing is, it's not that overpopulation or pollution weren't real issues, just as global warming is a real issue; these are all real things. But the apocalyptically minded took these real issues and turned them into EVIDENCE THAT THE END IS NIGH.
I saw people saying that millions of people would die if Bernie Sanders wasn't elected, so people should empty their life savings into donating for his campaign.
The "woke" people holding mass protests in the heart of the coronavirus pandemic is yet another example of this. Was there any reason why that couldn't wait for another year? Not really, but their brains told them that the walls were closing in, so they had to go act now now now now now.
There's this sense of false urgency, that the walls are closing in, that the world is going to end if you don't do something now.
It's signs and portents, but in a non-religious context.
When there’s competition between various factions within apocalypto-armageddonology, the urgency obviously increases - some of the extinction rebellion people are certain that the whole of western civilisation must be dismantled by next Tuesday afternoon.
"There have been "environmentalists" who believed that everyone was going to starve to death and die in the near future since Malthus."
That was not, however, Malthus' view. His was that the real income of the mass of the population could never be much higher, because if it were population would increase, pushing it back down. That seems to be a pretty good description of most of human history — ending at about the point when Malthus was writing.
It was already wrong when he came up with it in the place where he wrote it.
Indeed, pretty much the entire history of civilization was a result of increases in *per capita* productivity. These increases mean that as your population goes up, your total productivity goes up.
This is precisely why human population had expanded since the dawn of agriculture.
He was wrong in the same sense that Newton was wrong: his theory is a very good approximation of what he could observe. To see that he was wrong, you have to look at a very long timescale, or a time of rapid technological progress.
<I>I saw people saying that millions of people would die if Bernie Sanders wasn't elected, so people should empty their life savings into donating for his campaign.</I>
Roko's Bernielisk!
Argh just pretend those italics worked...
The ACX Tweaks browser extension supports limited markdown. You can enclose words in *asterisks* for italics. Obviously, though, it won't work for any readers not using the extension.
I'm interested in what happens with QAnon when Trump dies. Do they find another savior or do they expect him to come back?
Some of them were seriously proposing, after Biden won, that he'd been face-swapped with Trump, so actually Trump was still President. If QAnon is still around when Trump kicks the bucket, they'll probably come up with a similarly implausible claim.
I consider myself pretty well connected with some moderately to far right individuals (family and co-workers), and have not seen anything this nutty. Where are you finding this information?
This and similar ideas were circulating on 4Chan: https://www.mic.com/p/qanon-followers-are-having-a-hard-time-with-bidens-inauguration-58197531
Of course, it's hard to say how serious they were or how many people believed in it all.
I'm hoping for a war of succession between Jnr. and Melania
What measures do people take to obscure their identity online? Beyond using a pseudonym, that is.
I take the opposite approach actually, and basically use my real name everywhere. The goal is to never let myself be confused about what the future will know about me.
Ditto. I try not to say what I would not like to have repeated. I do guard my tone less in a private written communication, but even there I like to test myself now and then: "Would you be embarrassed to sound like this in front of the people who disagree with you, or the people you've strayed into gossiping about?"
I do agree, but I'm still afraid to say/write lots of things I think are true; sad.
My trick is to be so unimportant that not only would nobody bother doxxing me, but said doxxing would be totally ineffective because nobody cares enough about what proles think to retaliate against them.
Emmanuel Cafferty is/was a prole, but got fired because someone falsely accused him of making a politically symbolic gesture. This was in meatspace rather than online though.
I don't use Facebook or Twitter, and generally use different pseudonyms across different sites. Basically I keep "real life" separate from my online life, and each community separate from the others.
Use pseudonyms (including different pseuds on different sites).
Selectively alter details about your real life if you must discuss it, but endeavor to be vague regardless.
Never share photos, especially selfies. Never livestream.
Do not mix real life contacts and "internet" contacts. If you must have eg a Twitter account under your real name, use it only for the blandest of networking, and save the polemics for an anonymous account. Do not give real life contacts your "internet" handles.
Do not get into "beefs". Do not engage with angry people. Do not sling shit.
(I was a child who adored technology and computers, and my parents wanted to support me but were also terrified that internet predators would somehow snatch me. This is all probably overkill, but by now it's habit)
I simply refrain from using social media of any kind beyond very basic private conversations with people I know in real life, and use a throwaway pseudonym for literally every other online interaction.
Nothing extra - I'm unlikely to be prominent enough to become a target as an individual, so I don't have to run faster than the bear - just faster than the slowest person it's chasing ;-(
What I don't want is merely:
- someone reading my resume, googling my name, and getting results that encourage them to discard the resume
- twitter mobsters picking me as the next "evil monster" to cancel - and being able to cancel more than my current posting alias, and my presence wherever they've targeted
FWIW, I regard the latter as unlikely in any case. They are mostly too lazy to go beyond their own preferred social media looking for victims. And I have nothing I want to say to the general public within the Twitter character limit.
This is a rabbit hole you could fall very far down, depending on how obscure you want to be. VPNs, Tor, unique and multiple identities across sites tied to unique email addresses, virtual credit cards for payments (e.g., Privacy.com), etc. There's always going to be a trade-off between convenience and anonymity, though—none of this is much fun to maintain.
If you're so inclined, this podcast is a treasure trove of ways to be anonymous in a digital world: https://soundcloud.com/user-98066669
(Note: I do almost none of this because I don't have a personal threat model that justifies the effort.)
I have a protonmail and don't share much possibly-personally-identifying information online. I don't join privacy-annihilating sites like those of Mark Zuckerberg. That's literally it.
On DNP: I feel like the survey missed an option: "It made me consider DNP, but it did not make it look worth the risk." After all, watching video of BASE jumpers, part of my brain wants to try that, but that does not mean I consider it a good idea.
I think the BASE jumper risk calculations are heavily skewed by all the jumpers that want to fly/glide VERY close to big, hard objects. I think even the 'flying squirrel suits' aren't that much more risky than parachuting, and that's pretty safe – unless you want to fly _right_ over the top of some rocky ledge or alongside a mountain cliff or just over the tops of some trees ...
Non-fungible tokens (explainer here: https://www.theverge.com/22310188/nft-explainer-what-is-blockchain-crypto-art-faq). Fad or next big thing? I don't really understand why anyone is buying them.
My prior for anything blockchain is that if they can't explain it to me in a way that I understand (and I have a CS degree) then it's probably bulls--t, a scam, or both.
That said, it's almost a Pascal's wager to buy maybe $1 of anything in the blockchain space that looks like it could make it big, because if you did that back when a bitcoin was $0.00.. then you'd be a millionaire now.
The space is unfortunately full of "solutions" looking for problems. Meanwhile the kind of thing where a blockchain would actually be a really good answer, like Certificate Transparency for TLS, seems to manage just fine without.
Sounds a lot like a literal lottery ticket: only $1 but could be worth a fortune...
In theory it's better than a lottery ticket, as the value won't go to zero. Assuming the market doesn't entirely collapse, anyway.
NFT are a social phenomenon (like money is), not a technological one.
Fully agree that there are a lot of (mostly bad) solutions in search of a problem in the space.
I feel similarly about the Pascal's wager element. But with NFTs that seems harder because you can't just buy 1 of whatever new currency is coming out.
I don’t understand the appeal either. People compare the NBA Top Shots phenomenon favorably to baseball cards. Sure you can print out a cardboard image of any play in baseball history, but it will be worthless. Call it a baseball card though and produce it in limited quantities and it might be worth a lot of money (especially if you did this 60+ years ago and it’s of a famous player). That being said, baseball card collecting reached its peak in the early 90s (I think—I was a kid then and that was my impression; I haven’t kept up). My Ken Griffey Jr. rookie card that was probably worth tens of dollars or maybe more back then is probably worth $5 or less today. I had assumed its value would go the opposite direction. However, I think the truly rare cards that were produced before people started collecting cards to resell them have continued to appreciate. Assuming NFTs stick around, my guess is we’ll see something similar. Most that are going for hundreds or thousands of dollars now will be worthless (or close to it) in a few years, but a few will be worth millions.
One area where NFTs could be interesting is tickets to concerts and sporting events. It gets rid of counterfeits and more importantly allows the original seller to reap some of the benefits of the resale market. My understanding of NFTs is that every time the token is sold the creator gets a cut. So say Bruce Springsteen or the Lakers sell a ticket to the Staples Center for $100. If the buyer turns around and sells that ticket for $500, some of that money goes to Springsteen or the basketball team.
But if you lose the “ticket” it’s gone. Whereas now you can probably get it re-issued. I believe most “middlemen” exist for a reason...it’s only when you start cutting them out you realize why. I like knowing I can call my credit card to stop payment in the event of a dispute... no such mechanism in a token world.
This is basically the criticism I level against most blockchain ideas.
Disclaimer: economist with a strong tech understanding who has missed out on bitcoin.
My impression is that most collectibles top out their prices when the people imprinted on them have their peak wealth-- it's "buying back their childhood".
There are exceptions where there's a larger consensus that the thing is valuable.
Right. In comics, it used to be the rule of 5/30 (that second number might be wrong). At least before digital comics became common, the idea was that people wanted comics from the last 5 years because they were filling in stuff that they were reading that they wanted to complete a run of. After that, the demand for comics drops off until they're about 30 years old, and then people collect their childhoods. Children's books, in general, are also notoriously cyclical like that -- parents buy the children their own favorites.
>My understanding of NFTs is that every time the token is sold the creator gets a cut. So say Bruce Springsteen or the Lakers sell a ticket to the Staples Center for $100. If the buyer turns around and sells that ticket for $500, some of that money goes to Springsteen or the basketball team.
This is not the case out of the box. NFTs (non-fungible tokens) are just tokens with their own IDs. You can make them work that way if you build them to, but you shouldn't assume that they do.
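To make the parent comment's point concrete: out of the box a token is just an (id → owner) record, and a resale royalty only exists if the contract implements it (there is an opt-in standard interface for this, EIP-2981, but plain ERC-721 has no such logic). Here is a toy Python sketch of the mechanism being described; the class and names are made up purely for illustration, not any real contract's API:

```python
# Toy model of an NFT with a baked-in creator royalty on every resale.
# All names here (RoyaltyToken, sell, etc.) are hypothetical.

class RoyaltyToken:
    def __init__(self, token_id, creator, royalty_pct):
        self.token_id = token_id
        self.creator = creator          # the minting party, e.g. the team or artist
        self.owner = creator
        self.royalty_pct = royalty_pct  # e.g. 10 means 10% of each sale price

    def sell(self, buyer, price):
        """Transfer ownership and split the price between seller and creator."""
        royalty = price * self.royalty_pct / 100
        self.owner = buyer
        return {"creator": royalty, "seller": price - royalty}

# The Springsteen/Lakers scenario from upthread:
ticket = RoyaltyToken("staples-row1-seat1", creator="lakers", royalty_pct=10)
primary = ticket.sell(buyer="fan", price=100)      # creator keeps $10 of $100
resale = ticket.sell(buyer="scalper", price=500)   # creator gets $50 of the $500 resale
```

The point is that the royalty split lives in the `sell` logic itself; if the contract author leaves it out, nothing in the token standard puts it back.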
My general feeling is that there is a lot of "value" in crypto currencies right now with no place to go. The owners of that value are trying really hard to build a market in *anything* so they can eventually cash out. So I imagine there are a lot of crypto millionaires overpaying each other for NFTs so they can convince people this market makes sense.
I don't think it does make sense. But I'm happy watching some struggling artists get some of this cash. I can't wait to read a book on this subject (and this time) in 5 to 10 years.
It seems like a solution looking for a problem, but you might think about why there are titles for land and cars. Having the title to a car doesn’t mean you have the car. It might be useful for proving you own the car, but first you need consensus that we care about who has the title to the car, or it’s just a piece of paper. It seems like having the *key* to the car is pretty good proof of ownership, except that it would be inconvenient because sometimes you want to let people use the car without selling it to them.
So it seems like these tokens could be useful, but first you need to get agreement that the token is how we prove ownership of the thing, and most people aren’t all that willing to grant ownership to any sketchy anonymous person who shows up with an Internet token. It might work as a second factor to make ownership transfer inconvenient but possible. (Inconvenient compared to loaning someone a key.)
Much of the appeal seems to be that you could transfer ownership in the same transaction that transfers cryptocurrency in the opposite direction. This seems like the only reason for it to be on a blockchain at all. So this is proof that you paid for it, I guess?
>much of the appeal seems to be that you could transfer ownership in the same transaction that transfers cryptocurrency in the opposite direction. This seems like the only reason for it to be on a blockchain at all.
Not really; the primary reason to want ownership on a blockchain is the decentralization, so you don't have to rely on some central authority to manage your ownership, and it can be independently verified by anyone with internet access without relying on some other service to host the current ownership information.
Cryptokitties 2.0. I almost regret not getting some at the first wave.
It's obviously dumb, but it can take a while until the market runs out of greater fools.
Some combination of speculation and conspicuous consumption? I don't fully understand it either.
What does it mean to "own" something which isn't at all under your control? If I own a painting, I can put it in my house and only in my house and it's most definitely not in your house. If I own an NFT, anyone on the Internet can download the same damn jpg my token points to for nothing and it's 100% identical to "my" art. You don't even get copyright over it!
The only value I can see is that you're buying/selling a story: "This is the token that supported the artist who made Artwork X." And there's some value there—after all, perfect forgeries of masterpieces are themselves worthless. But still, I think there's got to be a qualitative difference between something I'd find in a museum and even the most popular animated gif meme.
I think they're currently a scam, but some kind of ownership token actually makes a lot of sense for simplifying copyright. People are buying and selling one-off art pieces meant to stand alone, which seems silly to me. What makes sense are game assets, audio samples, and other things that people who would bother to obtain the rights before using the content would be interested in.
It's a phenomenon that has made me start to really question whether the concept of "ownership" is obsolete. Even more than existing digital media, which still puts up at least a perfunctory paywall before you're allowed to consume it - from what I understand everyone can look at these art pieces no matter who owns them.
The people buying them must fundamentally care about "owning" the thing, in a purely abstract sense that is hard for me to fathom. Or more likely, it's a bubble with them thinking more people will get fooled into wanting to buy one in the near future and drive the price up.
NFTs are kind of bullshit – not so much the very idea of them, as the "NFT space". Thing is, the art market is also kind of bullshit, so an NFT art market makes sense!
In terms of them being attached to 'art' works (Jack Dorsey's first tweet selling for $2.5m and counting), I purely perceive these as Veblen goods / Zahavian signals; their value is only for signalling that you've got money to waste. True of much of the art world as a 'store' of value.
Side note: there is an oddity with art in that it can sit in customs warehouses indefinitely and be outside any tax jurisdiction. You never know quite what low-tax country you might want to land it in.
This is a little silly, but I'd like to add my novel to the Top Web Fiction website, and it looks like it's invitation-only. Is there any way you could send me an invitation?
Claim: political orientation can be determined (unreliably, but statistically different from random) via facial recognition. Here's the link: https://www.nature.com/articles/s41598-020-79310-1.
I'm assuming that somebody here will have a more informed take than my own, so I'm writing this in the hope that I get put in my place. Here are my thoughts:
First of all, it's already interesting (although I assume well-known) that facial expressions + head pose (it looks like these are correlated) do so well. Obviously these are choices, and choices reflect the culture of one's peers, but I'm not sure I'd have guessed that the bias is so strong. Relatedly, I definitely would not have guessed that "sunglasses" would be so poor of a predictor relative to, well, anything else.
I was personally surprised that the algorithmic accuracy was greater than human accuracy at the same task. After reading a bit, I think that I just haven't kept up with the SOTA of facial recognition --- apparently algorithms now beat humans in general. This is surprising to me, since I learned that our brains are pretty specialized for facial recognition, but I guess this is just my ignorance speaking. Fine.
Anyway, for the big picture: the attributes against which facial recognition was compared were all *voluntary*. Users had the ability to choose whether to look up or down in their profile photo; whether to smile or frown or wear sunglasses. There are many such choices, and including more of them (like more subtle aspects of the facial expression) might have yielded an even more accurate prediction. Facial recognition itself, though, will include demographic characteristics (*partially* controlled for by the study), but also things like "has this person had plastic surgery". What I really want to know is, how much of the predictive power lies with these uncontrollable characteristics, and how much lies with aspects of the facial expression that can be faked?
I now wait for somebody to tell me that the entire study is flawed because it didn't control for whether the pictures were taken at day or at night.
> What I really want to know is, how much of the predictive power lies with these uncontrollable characteristics, and how much lies with aspects of the facial expression that can be faked?
This meme[1] is a meme and obviously not scientific, but let's use it.
Someone in the top left would need to buy a trucker cap (easy), sunglasses (easy), take the photo from below (easy), while they're in a truck (requires effort and/or non-trivial money), get a haircut (easy to do but requires a long term investment), and possibly grow facial hair (same).
The truck will be cropped out. So if you wanted to fake it maybe the only real investment is to change your hairstyle. It's not something you quickly do for 5 seconds, but it's not too hard either.
[1]https://knowyourmeme.com/photos/1636907-political-compass
Facial symmetry correlates with IQ. IQ correlates with political orientation (higher IQ makes you more likely to be liberal, lower IQ makes you more likely to be conservative). Therefore, political orientation should correlate with facial symmetry.
Race is also predictive of political orientation, so if you fail to compensate for that, you might get a huge but worthless signal.
That’s odd. I’m a staunch liberal, and also have an embarrassingly low IQ.
The correlation isn't particularly strong - it's only a few IQ points different on average. There are lots of dumb liberals and smart conservatives.
Note also that this is talking about liberalism as in *liberalism*, i.e. belief in greater civil liberties. Thus it is not a left/right thing per se; people like Mitt Romney are liberals in this categorization.
The story says that they (claimed to have) controlled for race/ethnicity.
Yes, but the signal became weaker after they compensated for it.
Correlation has no transitivity guarantee.
While this is true, given that both are related to cognition, it wouldn't be surprising if it was the reason. Additionally, areas with more inbreeding are often stereotyped as being more conservative, which would suggest that mutational load might be associated with it - and that is inversely associated with both facial symmetry and IQ.
Pedantic note: /sufficiently strong/ correlations do. But we're not looking at those, so of course you're right.
Pedantic note: I said 'guarantee' and 'correlation' was unrestricted. The fact that squares exist does not invalidate the statement "rectangles are not guaranteed to be squares" and adding "but squares are" is not useful.
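For what it's worth, the "sufficiently strong" threshold in this exchange can be made exact. Because a correlation matrix must be positive semi-definite, two known pairwise correlations constrain the third:

```latex
% Lower bound on the third correlation given the other two:
\rho_{XZ} \;\ge\; \rho_{XY}\,\rho_{YZ} \;-\; \sqrt{\left(1-\rho_{XY}^{2}\right)\left(1-\rho_{YZ}^{2}\right)}
% so a positive \rho_{XZ} is guaranteed exactly when \rho_{XY}^{2} + \rho_{YZ}^{2} > 1.
```

For example, with both correlations at 0.8 the bound is 0.64 − 0.36 = 0.28, so transitivity is guaranteed; with both at 0.5 the bound is −0.5, so the sign of the third correlation is unconstrained, which is the regime IQ-type correlations live in.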
> I was personally surprised that the algorithmic accuracy was greater than human accuracy at the same task.
It was not the same task: the AI is working on one set of pictures, and the human performance is only a reference to some other paper using a different (presumably less biased) picture gallery.
The problem with agreeing to stick DNH (= do not highlight) or something like that on the end of comments is that someone can now search specifically for those tags, and ... if they have really no morals, write a bot that automatically reposts these comments as highlights on sneerclub or something.
I use an online alias here that's not linked to my offline identity or any other online ones, and so should you.
People might do it in a non-standard way, a bit how we have a culture of obfuscating email addresses online. Not saying it would be practical, but I'm mildly curious to see the creative solutions that would turn up.
I wouldn't expect casual obfuscation attempts to stand up to any serious attempt to automate finding them. (This is also my opinion for obfuscating email addresses.)
They'll still stop a statistically noticeable number of would-be searchers, but only in a "beware trivial inconveniences" sense.
people on sneerclub only highlight weird stuff about the IQ of black people
So ACX is becoming a competitor to Goodreads?
I'm definitely not looking forward to 100 book reviews. It'll be interesting to see how he posts them.
How about linking to 6 or 8 of them at a time with just a list of the books reviewed?
I guess I’d plump for Scott skim-reading the whole lot (as punishment for being such a book-reviewer-attractive blogging phenomenon). And then publishing his favourites so the masses can chip in to help select those for whom the riches beckon.
Could he post them as comments to a top level "Book Reviews" post? I guess I don't know what the length limit is on Substack comments -- I can paste a ton of text in here and it doesn't seem to trip anything, but there might be some limit server-side.
From the original call for entries:
> I’ll choose some number of finalists – probably around five, but maybe more or less depending on how many I get – and publish them on the blog, with full attribution, just like with the adversarial collaborations.
https://slatestarcodex.com/2020/05/05/book-review-contest-call-for-entries/
Just filter your RSS feed or mailbox or something.
I am! At the least, it will let me know what books I can continue to ignore, and at best I may find new reading material. But each to his own!
Note that I like Scott's book reviews just fine. They're often of things I would never have read, and enjoy knowing something about them. But 100 is a lot, and none of them are Scott's. If, in fact, there's only a half dozen highlighted to read and vote on, I'll definitely be doing that no problem.
A possible solution for the comment highlight issue would be to make these posts subscriber-only. The obvious downside is that non-subscribers will lose some interesting content, and it kind of goes against your original commitment to make most of your posts free. On the other hand, this puts highlighted comments out of reach for search engines and the internet at large - so the threat to commenters' privacy goes way down.
I was under the impression that you were going to publish the best 5 book reviews and invite additional comments to help decide on prizes, kudos and glory. Are you now going to publish them all? All 100?
Ok, a) that’s a lot b) you’ll publish mine. Have you thought this through?
I'm worried that 100 book reviews are too many and will dilute the blog with boring content.
Maybe post 10/week as part of links posts, have people fill out weekly surveys, and then post the top 5 as their own posts?
Because honestly, I didn't come here to read a book review at the quality that I could write one ;)
It was never the plan to post them all. I think Scott planned to post the top 5 so that people voted, and then maybe link to a few others. I'm lazy to search for where he talked about it.
The most politically influential person you never heard of? Is it possible an unassuming children's books author shaped the world we live in?
A strange thing happened a few days ago. My wife and I were in the rumpus room reading Who's Bashing Whom? Trade Conflict in High Technology Industries ( http://cup.columbia.edu/book/a/9780881321067 ) when our oldest son told us he had been assigned to read a book by British writer Aldous Huxley. My wife and I had never heard of him before. What happened to Beatrix Potter, C. S. Lewis, Roald Dahl, R. L. Stine, Horace Greeley, Lemony Snicket, Lewis Carroll and G. A. Henty? Has the great state of California cancelled them? Anyway, I decided to ask my own questions and do some research on this fellow. Apparently, his books are required reading in many Democratic-controlled school districts, are very strongly opposed by concerned parents due to anti-religion and anti-family themes and sexual content ( https://en.m.wikipedia.org/wiki/List_of_most_commonly_challenged_books_in_the_United_States ) and (surprise, surprise!) were widely read in the communist Soviet Union ( https://www.jstor.org/stable/3831583 ). The book my son was assigned to read proposes abolishing families, replacing religion with orgies, and banning the Bible and the works of William Shakespeare.
But who was this Huxley guy and how did he shape the world we live in? He was a scion of the Huxley family, founded by Victorian scientist Thomas Huxley, who was nicknamed Darwin's Bulldog due to his rabid championing of Darwin's ideas. More than any other single individual, Thomas Huxley was responsible for the triumph of Darwinism, which led, in the 20th Century, to the Holocaust and the Gulag. Aldous Huxley, besides writing books, also studied Oriental religions such as Buddhism and Hinduism and became widely familiar with the substances used in such religions to create altered states of consciousness known as trances. On these matters, he wrote a book called The Doors of Perception, which kicked off the psychedelic movement. The famous rock n' roll band The Doors was named after the book. Huxley's eloquence and charisma made him a kind of intellectual patron saint of consciousness-altering drugs, the consequences of which America and the world at large experience to this very day. Here he can be watched being interviewed by famous journalist Mike Wallace https://m.youtube.com/watch?v=alasBxZsb40 .
The rabbit hole, however, goes much deeper. Huxley wrote a biography of famous Catholic priest Urbain Grandier, who was burned alive by his own coreligionists for practising witchcraft. Is it to take things too far to suppose that, while maintaining his façade as a beloved children's books author, Huxley was able to combine his studies on Hinduist devil-worship and his research on Grandier's medieval witchcraft to make demonic forces do his bidding and assure his success in Hollywood, where he worked as a scriptwriter? If you don't believe so, maybe you should read this hair-raising report of the visit by an Evangelical author who originally did not know who Huxley was to the Buddhist monastery in whose founding he was instrumental, and decide for yourself if you choose the blue pill or the red pill. https://midwestoutreach.org/2019/05/18/thomas-merton-the-contemplative-dark-thread/
Why would you ascribe to Aldous Huxley the political philosophy of the totalitarian society he portrays in his classic dystopian novel, Brave New World? Would you say that George Orwell is also a proponent or propagandist of communism for writing Animal Farm, or 1984? Also, what are the *consequences* of mind-altering drugs that "America and the world at large experience to this very day"? Are you including prescription drugs in this, or stimulants that were widely used in non-psychedelic circles, or for some reason just psychedelics? Aldous Huxley is not at all someone I hold in the highest esteem, but I think this call to cancel him is anemic and uninformed.
It is not that simple. Orwell was a socialist who fought for the socialist government against the Francoist Nationalist forces. Trotsky opposed the Stalinist regime and wrote books such as The Revolution Betrayed, but was a communist propagandist.
I think it is clear Huxley's works legitimated, so to speak, drug taking.
So we are sniffing out all left-leaning intellectuals and drug legitimators and putting them in an axis of demonic evil? Ok go for it. It sounds rather totalitarian to me. I would be far more concerned about reading books from Francoists or behaviorialist psychologists or Evangelical apostasy-detectors (such as the one you linked) than socialists or psychonauts, who often have interesting perspectives and critiques (that don't involve reducing humans to animals or machines).
My point is, there is good reason to believe Huxley was marshalling demonic forces to subvert American society while maintaining his façade as a beloved children's books author to suit his own goals.
Huxley wasn't a "beloved children's books author." He was a prominent writer and philosopher, famous mostly for a dystopian novel. I don't think he wrote any children's books at all — can you name one? Shakespeare is frequently assigned in class — does that make him a "children's books author"?
It is completely different.
He actually did write one children's book, The Crows of Pearblossom. (But of course this is not at all what he's famous for.)
>My point is, there is good reason to believe Huxley was marshalling demonic forces to subvert American society
Do you believe Hinduist devils and witchcraft are real?
Again, the Catholic Church burned at the stake Urbain Grandier, about whom Huxley wrote a biography. Why would it have done so if he were not a wizard?
What probability do you give that demonic forces are real?
It is a very high probability, I think. The Catholic Church investigated Urbain Grandier and found him guilty of witchcraft and invoking demons.
The soma in Brave New World was the opiate of the subordinate classes. He didn't make it look attractive.
It is more complicated than that. The higher castes used it, too, though probably less. It is also presented as a means to perfect one's character. "Christianity without tears" is how it is described.
https://books.google.com.br/books?id=O8_WDwAAQBAJ&printsec=frontcover&dq=inauthor:%22Aldous+Huxley%22&hl=pt-BR&sa=X&ved=2ahUKEwjiiabirZ7vAhUCLLkGHcOGB8IQ6AEwAXoECAYQAw#v=snippet&q=Christianity%20without&f=false
Good point, though I don't think Huxley made it sound like a lot of fun.
I'm not sure who did legitimate drug-taking, other than Timothy Leary.
https://www.goodreads.com/quotes/680511-there-is-always-soma-delicious-soma-half-a-gramme-for
Woah hold on just a minute here. You haven't heard of Aldous Huxley? That's hardly a mark against him, and Brave New World is pretty common high school reading. And it's odd you're comparing him to the likes of R.L. Stine and Beatrix Potter (the latter I read in kindergarten and the former was on middle school shelves, in California). Maybe C.S. Lewis, but I'm not sure he's usual reading for schools.
You *are* talking about Brave New World right? Since when did that propose abolishing families and banning the Bible? ...You realize that was a dystopia right? (And doesn't "very strongly opposed by concerned parents" sound a lot like canceling?) I tried to read your reference on the Soviet Union thing but it's behind a paywall. The preview does however contain the quote "For the six decades since he was first heard of in what used to be the Soviet Union, that spirit, more often than not, was critical and hostile." So I'm not sure where you're getting your conclusion.
Also... what's this about Darwinism and the Holocaust? Are you talking about some weird form of social Darwinism? Trying to blame the descendant of Darwin's proponent for a perversion of his theory a hundred years later is like... trying to blame Orville Wright's grandson for 9/11 (he never had kids, but the point remains).
> Huxley's eloquence and charisma made him a kind of intellectual patron saint of consciousness-altering drugs, the consequences of which America and the world at large experience to this very day.
Man, wait til you hear about Alexander Shulgin.
> Is it to take things too far to suppose that, while maintaining his façade as a beloved children's books author, Huxley was able to combine his studies on Hinduist devil-worship and his research on Grandier's medieval witchcraft to make demonic forces do his bidding and assure his success in Hollywood, where he worked as a scriptwriter?
Sorry, what? Is it weird that my main takeaway from this is that you think Huxley is a beloved author of children's books? You already said you think his books advocate banning the Bible etc., so I'm not sure exactly what facade you think he was holding up? If you're really going to try to argue here that he was marshaling demonic forces, well, I don't know what to tell you. He seems to have used much of the money to help out people in wartime Germany, so I suppose the demons have that going for them. I guess to answer your question then, yes, yes it is.
"Brave New World is pretty common high school reading."
Exactly. Because Democrats find it useful to their agenda.
"You *are* talking about Brave New World right? Since when did that propose abolishing families and banning the Bible? ...You realize that was a dystopia right?"
One has to read between the lines. And this was a constant in Huxley's work. In his last book, Island, one of the main characters criticizes St. Paul, Calvin and the hymn "There is a Fountain Filled with Blood Drawn From Emmanuel's Veins" while supporting Hinduism and Buddhism.
"(And doesn't 'very strongly opposed by concerned parents' sound a lot like canceling?)"
No, it is very different. We are talking about the books children have access to or are given to read. Schools and libraries, by definition, make selections.
"I tried to read your reference on the Soviet Union thing but it's behind a paywall. The preview does however contain the quote 'For the six decades since he was first heard of in what used to be the Soviet Union, that spirit, more often than not, was critical and hostile.' So I'm not sure where you're getting your conclusion."
However, it clearly says "In his new capacity as a liberal public figure, Huxley was judged worthy of public attention. This accounts for the sensational appearance of four chapters of Brave New World in Number 8 for 1935." At the height of Stalinist repressions and anti-Western animus in the Soviet Union.
"Sorry, what? Is it weird that my main takeaway from this is that you think Huxley is a beloved author of children's books?"
His works have been adopted by schools and libraries all over the United States.
"You already said you think his books advocate banning the Bible etc., so I'm not sure exactly what facade you think he was holding up?"
His façade as a beloved children's books author.
"If you're really going to try to argue here that he was marshaling demonic forces, well, I don't know what to tell you. He seems to have used much of the money to help out people in wartime Germany, so I supposed the demons have that going for them. I guess to answer your question then, yes, yes it is."
Couldn't it be part of his façade as beloved children's books author? Soviet spy Philby managed to be made head of SIS (British intelligence) Section Nine, which led anti-communist efforts.
I'm not sure if this is what's going on, but one of the many irritating things about the modern era is referring to everyone below the age of majority as a child.
"Children's book" as an idiom typically means books aimed at people below age 8 or so, which Brave New World is not. It's treated as more of a book for teenagers.
Would you prefer "minors' books"?
"Child" is, among other things, a way to refer to all kinds of minors. https://www.merriam-webster.com/dictionary/child
I think Huxley & Orwell wanted their books to be read & taken seriously by adults. Adults in turn find the books useful for introducing people approaching adulthood to adult political concepts.
Which means the people made to read those books are minors, also called children.
I'm not sure how you go from a single sentence saying "judged worthy of public attention" to "widely read in the communist Soviet Union", especially without bothering to read the rest of the source, but I'll give you that it's as well supported as everything else you're saying.
> Couldn't it be part of his façade as beloved children's books author?
I don't know, could it? Maybe Gandhi's peace advocacy was just a facade to spread HINDUIST DEVIL-WORSHIP to the West. And Hitler was working for the forces of good to ensure the demise of the terrible toothbrush moustache.
At this point, you're just adding epicycles and trying to use cheap language tricks to... try to convince everyone Huxley is an agent of the devil or something? I mean, I can do it too:
Jesus was an anti-Jewish[1] anarchist[2] who used supernatural agents to enforce his will[3] and maintained his facade as beloved children's book author[4] in order to spread ideas that lead to the deaths of millions[5].
[1] John 8:44
[2] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1337761
[3] John 3:16
[4] Your kids *do* read the Bible right?
[5] http://necrometrics.com/pre1700a.htm see "Crusades" et al.
It is completely different. It is obvious Huxley was a part of an effort to subvert Western, particularly American, values.
Oh I'm sorry, I didn't realize it was obvious. Consider me suitably chagrined.
Huxley was British and I don't think he was specifically aiming at "American" values as opposed to what Scott has called "universal culture". https://slatestarcodex.com/2016/07/25/how-the-west-was-won/
It is obvious and the proof is left to the reader?
I think it is clear his public advocacy is compatible with undermining America.
""Brave New World is pretty common high school reading."
Exactly. Because Democrats find it useful to their agenda. "
I first read it when Reagan was president, at a time when Christian "parent's groups" were a powerful force in education and popular culture.
My point is, Democrats control the educational apparatus.
After further reflection I have concluded that Thiago Ribeiro is an agent of the Epoch Times or one of their associated entities. I have observed consistent errors and idiosyncrasies in their mimicking of right wing "traditionalist" anti-communism, such as what is on display here: an assumption that people are unfamiliar with Aldous Huxley's name; a strange grouping of R.L. Stine with C.S. Lewis and... Horace Greeley of all people?; an apparent belief that Huxley is primarily a children's book author; an awkward and internally inconsistent Christian purism; a paranoiac obsession with Communism and Socialism and their apparent cultural tentacles, in this case the demonic; extraneous references to Chinese cultural flashpoints (in this case using Buddhism as a sort of bogeyman).
There are more signs here but I will desist from the analysis, lest I enable them to improve their (pretty terrible) simulation of "truth and tradition".
Does anyone here agree with my thesis, and does anyone else have thoughts on this particular flavor of information warfare, or on discerning bad faith arguments in general?
Two follow-up questions:
1) Why would *anyone* (Epoch-employed or not) adduce in THIS forum an apparently "Evangelical" take-down of Thomas Merton and Aldous Huxley on the basis of their demonic involvement and one blogger's "experience" of ill stomach spirits upon visiting his lair? Is there an anti-demonic constituency here among the Astral Codex clan that I was unaware of?
2) Is there a name for the bias (that I suspect we are suffering from en masse) of responding to a bad faith messenger as if he were coming in good faith? For instance my original response to Thiago, and Pycea's as well. We did not give the author much honor for his arguments but we did treat him as a person who was making an argument, rather than a false character injecting a meme along with his cohort of meme bots.
Is there a name for this bias, and more importantly--what are its consequences?
Although I am totally unconvinced by his argument, I am not sure he is arguing in bad faith.
Nor am I. There are lots of crazy people in the world.
Sometimes it's hard to tell whether _anyone_ isn't crazy!
1 - "Charity". 2 - Bad arguments fail to prevail (sunlight is the best disinfectant).
It makes no sense.
1) I am no one's agent.
2) I have no idea what, if a thing, Epoch Times is.
3) R. L. Stine, C. S. Lewis and Horace Greeley wrote children's books.
4) The point is, why would the Soviet Union's regime, at the height of Stalinist isolationism and repression, have supported Huxley if not because its leaders found his ideas congenial?
5) Not only Buddhism, but also Hinduism. Both were highly commended in Huxley's works such as Island and The Devils of Loudun (the biography of Catholic witchcraft practitioner Urbain Grandier).
Please excuse me. It is also plausible that you are dumb.
Thanks Buz.
I guess I jumped the troll gun.
Thought that flows from fascism / cultural purism / nationalism can be so breathlessly simplistic that it is sometimes hard to believe it is held in good faith-- but here we are, in 2021. It is interesting that people who constantly decry totalitarian psychology and cancel culture can so perfectly exemplify it as well.
Poe's Law strikes again.
I don't find it plausible in the least.
1. Do you think that Horace Greeley’s contributions to society are writing children’s books? Which ones?
2. Do you think that Aldous Huxley shaped the world we live in? In what ways?
3. Do you think that most people reading this Substack are unaware of Aldous Huxley?
4. Do you think the society portrayed in Brave New World is aspirational?
1. I meant Horatio Alger. I am sorry. He penned works such as Frank's Campaign; or, What Boys Can Do on the Farm for the Camp, Paul Prescott's Charge: A Story for Boys and Struggling Upward; or, Luke Larkin's Luck.
2) I think it is clear Huxley legitimated drug use, Buddhism and Hinduism in the USA and in the West at large. Also, it seems clear countries like China and the USA are converging to some version of the society Huxley championed in Brave New World.
3) I have no idea. I am not a Census taker.
4) It is presented as being so, with the characters going on and on and on about how the society is stable and everyone is happy all the time. Many of the book's main themes are recurrent in Huxley's works and have appeared in books like Island, The Doors of Perception, Heaven and Hell and Brave New World Revisited.
LOL – thanks!
What children's books did Horace Greeley write?
Sorry, I meant Horatio Alger, author of books such as Ben The Luggage Boy; or, Among the Wharves, Ragged Dick; or, Street Life in New York with the Bootblacks and Paul Prescott's Charge: A Story for Boys.
Nah, I think we've just picked up a new friend who came to us via the NYT article and truly believes we gather in every hidden Open Thread to belt out a few choruses of the Horst Wessel song, so they're adopting camouflage to fit in.
The Horst Wessel song is catchy even if the ideology is detestable. The same is true of The Internationale, the Soviet Anthem versions, Giovinezza and Cara al Sol.
Thiago has been commenting on Marginal Revolution for years now, always obsessed with how wonderful Brazil is. I don't think the NYT was his way in, but he's definitely strange.
You're right – I'd gotten the blogs mixed-up and somehow thought 'You again?' when I saw his/their handle just now.
I don't know what that is. I just pulled up the site, but other than seeing that it's clearly very right-wing, I'm not sure what I'm looking at. Can you sum up? (Like a paragraph, I'm not asking you to write an essay.)
Epoch Times is apparently owned and operated by Falun Gong, a Chinese religious movement that is suppressed by the Chinese government. They have made a major play for Western media in the past few years, came out swinging hard for Trump and conspiracy theories, and deliver free copies of their paper all over the country, with other editions around the world.
https://www.randomlengthsnews.com/archives/2021/02/18/the-dark-propaganda-strategy-of-epoch-times/32105
https://www.niemanlab.org/2021/02/the-dark-side-of-translation-the-epoch-times-is-now-spreading-disinformation-through-new-brands/
https://www.nytimes.com/2020/10/24/technology/epoch-times-influence-falun-gong.html
I agree he's weird. What specifically made you link this dude to the Epoch Times, however, instead of just generally accusing him of being a troll? That's a very specific accusation to make.
Honestly, I jumped the gun. But I've been reading some of their stuff over the past few months, and I always notice similar signs of foreignness, paranoiac Communist obsession, awkward Evangelistic simulation, and random references to elements of a Chinese worldview that don't make sense over here. In this case it seems the fellow is a Brazilian. Clearly my trolldar needs recalibration. Also I still just find it hard to believe that these campaigns have produced actual people whose minds work in that way now.
Thanks for the polite reply! I reread my comment and realized it came across as more accusatory than I intended.
I wasn't trying to call you out; just found that to be such a uniquely specific charge to level that I was legitimately curious as to your reasoning.
I don't think he is a troll. Pretty clearly he believes what he is saying.
David, that's the _worst_ kind of troll!
Well at least you are not going on and on about Brazil but I am not sure this is really all that much better....
I have no idea what you are talking about. Maybe you are mistaking me for a friend of yours?
Funny stuff. Perhaps best read in combination with this review complaining about Huxley hanging onto social conservatism and writing as if Brave New World is a dystopia rather than a utopia. http://adamcadre.ac/calendar/14/14432.html
Good point.
> Hinduist devil-worship
This cannot conceivably be acceptable discourse on this forum.
Acceptable in the sense of other people thinking it makes sense, no. Acceptable in the sense that he is allowed to say it, sure. The fact that some people have weird ideas is both inherently interesting and informative about the world we live in.
Calling Hindus devil-worshippers is fairly unkind and not particularly necessary, not to mention untrue. Interesting and weird ideas are fine, but random diatribes against other religions and The Commies aren't exactly high quality discourse.
I'm not sure what the moderation policy is here, but this definitely would've been ban-worthy on the old SSC forums.
I don't think Scott has been given a ban-hammer yet, and there is certainly, as yet, no report button. I too am sure it's banworthy, but I don't know if Scott can do it yet, and I don't see any way to call it to his attention.
At least one person (pseudo-Josh Hawley) has already been banned, so we know it's something Scott can do. We do need a report button, though. I'm surprised Substack didn't already have one built in.
Pull the other leg, it's got bells on
Substack desperately needs an “ignore commenter” feature. Plus people here need to remember the first rule of internet commenting: don’t feed the trolls!
my bad
I’m sorry.
I strongly suspect I'm being Poe's Law'd with this post.
"Don't highlight" is kinda funny because it announces to the world that you'd rather not announce to the world this opinion. I think we just have to accept the nature of social media. Anything you post can go viral at any time after you post it. Could be getting highlighted, could be ten years down the road at a job interview. That's how it works, highlighting or not. We should keep highlighting as is because it keeps us in touch with the reality of how online forums work.
Yeah, DO NOT HIGHLIGHT seems like a Streisand effect waiting to happen.
My comments at the end of the survey, because I think people other than Scott need to hear them.
1) IANAL, and I think you should get actual legal advice on this because the idea that "pointing out a potential investment opportunity" is illegal doesn't pass the smell test with me. But hey, I think growing my own corn and feeding it to my own pigs isn't "interstate commerce" so what do I know.
2) There are, in my mind, more ethical issues with promoting your buddy's "non profit" (from which the buddy draws a six-figure salary, or is using to build points for their kid to get into Harvard, or whatever) than in promoting a stranger's business. I think the glorification of "non-profits" completely misses how modern society/economics treats these as wealth & prestige builders for the individuals employed by them, and is another casualty of the demonization of normal business and investing.
3) In any case, you should say if (known to you) a friend/relative/ex/employer is involved in any such promotion. To me, that's the only ethical hurdle beyond 'don't take money for promoting things unless you disclose that' and 'def don't promote things you don't think should be promoted, even if they pay you for the promo.'
4) I like the highlights reels, as they often bring out more conversations. They are not a complete substitute for reading the whole comments, which is occasionally enlightening as to what *doesn't* get promoted.
5) You should never promote something in a malicious way, or in order to cause issues for someone.
6) What we say is our own responsibility. Period. You are not the boss a' me, and you don't get to police what I say. I am a grown adult and I do not yield you that power.
7) Damn, it's good to have you back, Scott.
Was it the upper-class people who did not want their comments highlighted? That would be typical of their secretive subculture as described by Fussell.
The covid infection and hospitalization rates in the US have both dropped steeply over the past two months. The timing correlates with when we started vaccinating people. But to (non-expert) me, it seems like it has dropped way faster than we have been vaccinating. I haven't found any news stories that say whether this drop was due to vaccinations or maybe due to post-holiday social distancing or maybe even due to some amount of herd immunity.
What's the consensus here? Why have cases/hospitalizations dropped so precipitously in the US over the past two months?
I tried to estimate the time to natural herd immunity (absent vaccine effects) based on trends in mid-December, in a post on DSL:
https://www.datasecretslox.com/index.php/topic,1958.msg53625.html#msg53625
My optimistic case had natural herd immunity being reached on 20 January, which is pretty close to what we got. However, the rate of the subsequent decline is more than I would expect from natural herd immunity alone, and I suspect that is in part due to vaccination of the most susceptible individuals at about the same time that natural herd immunity started reducing the total new-infection rate.
Nice read. I think if I had read it when it was written I would have been a lot more skeptical than I am right now :)
FWIW, it seems cases are falling less quickly in the Northeast now than anywhere else. Coincidentally, the Northeast was the first place to get hit hard. Do you think it's possible that immunity can disappear after around 12 months? Or is there a different explanation?
The problem with "natural herd immunity" is that what percentage of the population has to be immune in order to produce herd immunity depends on how people are behaving. If we were all hermits, we might manage herd immunity at zero.
My interpretation of what happened is that by early January, the percentage immune was enough to give herd immunity if people were moderately careful, not enough if many people were having parties indoors. So once the holiday socializing was over, infection rates started to fall pretty fast.
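The point that the herd-immunity threshold moves with behavior can be made concrete with the standard formula HIT = 1 − 1/R, where R is the effective reproduction number under current behavior. A toy sketch (the R values here are made up purely for illustration, not estimates for any real population):

```python
# Herd-immunity threshold as a function of the reproduction number R,
# which itself depends on how people are behaving.
def herd_immunity_threshold(r: float) -> float:
    """Fraction of the population that must be immune for new
    infections to decline, given reproduction number r."""
    if r <= 1:
        return 0.0  # epidemic already shrinking; no immunity needed
    return 1.0 - 1.0 / r

# Illustrative (made-up) values of R under different behaviors:
for label, r in [("holiday parties", 3.0),
                 ("moderately careful", 1.5),
                 ("hermits", 0.5)]:
    print(f"{label}: R={r}, threshold={herd_immunity_threshold(r):.0%}")
# holiday parties: R=3.0, threshold=67%
# moderately careful: R=1.5, threshold=33%
# hermits: R=0.5, threshold=0%
```

So the same population can be above the threshold under careful behavior and below it during holiday socializing, which is exactly the interpretation above.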
Could be behavioral changes, or some environmental factor. The thing is, each time the virus starts to multiply again, the population in which it grows has changed: the number of completely unprotected people who can still get sick is reduced, through vaccination (though at the moment in Europe and the US, apart from special cases like care homes, that concerns a trivial number) or previous infection. So even if R>1 once again, back to last year's levels, the epidemic should look quite different just because the susceptible population is much smaller. It will get exhausted not faster but with a much smaller wave. This may explain why we have more of a slow burn than a wave in Belgium: R is growing progressively, but the susceptible population gets exhausted very quickly. It seems more reasonable than the official interpretation, which assumes a quite low proportion of people protected by a previous infection (20-30%) that, while significant, would not change the epidemic dynamics so much, plus an R that hovers around 1 just because the measures (government- or self-imposed) are just right. That seems too good to be true, especially given the anecdotal evidence of widespread disregard of many of those measures (by people who religiously followed them a few months ago).
So I think the number of really susceptible people is much lower than 70-80% (100% minus vaccinated minus previous covid), probably because of cross-immunity and/or natural resistance (blood type, for example). If you assume 60% of the population is naturally resistant and would not catch COVID in most circumstances (barring massive exposure, for example), what's happening now and what happened in the past make much more sense. For example, I know a few couples where one partner caught it and the other did not (sero- and PCR-negative), while living a normal couple's life in a single home. That's not really possible if 100% of people could be infected under normal social circumstances.
In terms of hospitalizations - we should expect that to come down quite rapidly, because hospitalizations are heavily skewed towards the elderly, who are given first priority on the vaccine. So (to give an 80/20 rule) something like the first 20% of vaccines distributed end up doing 80% of the work in terms of bringing down hospitalization and death.
It's going down for now, but vaccination in the US might not outpace the B.1.1.7 variant.
The thing I've heard most is what some people call the "control mechanism" - whenever stories about the pandemic become scary enough and widespread enough, more people start being a little bit more careful about what they're doing. In this case, many people probably had family plans for Thanksgiving/Christmas/New Year's, so that they delayed taking things more seriously until after those dates. But in very many states, the peak day of infection seems to have been right around Jan. 8, which is 7 days after New Year's, which is exactly when you'd expect the peak if you think people started their new caution right on Jan. 1.
Vaccination, and widespread immunity from prior infection, are likely contributing factors as well. But the thing that caused the big change is most likely human behavior.
I think it's the "control system" maybe, but your summary is otherwise perfect.
Thanksgiving + Christmas happen -> cases spike -> no more holidays -> cases down
This explains everything to a first order.
Next best after that is the thermostatic response the public has seemed to have (independent of legal restrictions) to caseloads and hospitalizations.
This + decreased social activity due to it being winter are my guesses.
Thanksgiving+Christmas results in a higher growth rate for cases. When they are over you should go back to the same growth rate you had before. To get a reversion to level you need to assume a feedback mechanism in which the higher level results in people taking more precautions than they were taking before the holidays, pushing the growth rate negative.
Yep. This is what I meant by "thermostatic response the public has seemed to have (independent of legal restrictions) to caseloads and hospitalizations."
My point was that your first order was wrong. Without the thermostatic response, the growth rate should have gone back to what it was before, not the level. That isn't quite true because the high case rate in the previous month+ raised the number of immune people, but that doesn't give you a reversion of level, it just means the growth rate should have been a little lower than before the holidays.
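The rate-versus-level distinction can be seen in a toy simulation: without a feedback term, removing the holiday boost only restores the old exponential growth rate, so cases keep rising from the elevated level; with a simple "thermostat" (people react to high caseloads), the elevated level itself pushes growth below 1. All numbers here are made up purely for illustration, not calibrated to any real data:

```python
# Toy model of post-holiday dynamics, starting from an elevated case level.
def simulate(steps, cases, base_growth=1.05, feedback=0.0):
    """Multiplicative case growth, optionally damped by a behavioral
    'thermostat' that reduces growth as the case level rises."""
    history = [cases]
    for _ in range(steps):
        g = base_growth - feedback * cases  # effective growth this step
        cases *= max(g, 0.0)
        history.append(cases)
    return history

post_holiday_level = 2000.0
no_feedback = simulate(20, post_holiday_level, feedback=0.0)
with_feedback = simulate(20, post_holiday_level, feedback=5e-5)

print(no_feedback[-1] > no_feedback[0])      # True: old growth rate resumes, level keeps rising
print(with_feedback[-1] < with_feedback[0])  # True: feedback pulls the level back down
```

In the no-feedback run the growth rate reverts but the level does not; only the feedback run reproduces the observed post-holiday decline.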
People should just adopt some quick signal they can indicate if they don't want their posts highlighted, something like an asterisk or DNH or whatever.
Reminder that the monthly DSL effortpost contest closes in about 24 hours. Lots of good stuff this month.
https://www.datasecretslox.com/index.php/topic,2812.0.html
> I’m a little worried about the Comment Highlights posts, because they broadcast things people said down in the comments where nobody would ever read them out to the entire Internet.
Just wanted to chime in and say 1. I don't think there's anything wrong with how you've handled it so far, 2. I do think this is a fair/nontrivial concern which is worth considering, and 3. I don't see any easy solutions, so the answer might just have to be to accept it?
On the last comment highlights I was in this *exact* position. I wrote a hasty/unedited post about Josh Hawley (that basically amounted to a rant) under my real name, and was slightly surprised to see it featured more prominently. Of course, I stand by the general idea, but it was written kinda impulsively out of annoyance, so if I had thought more than a couple of people would read it, I would have written it very differently (and been careful about the argument), and I wouldn't have "chosen" to have it featured.
To be clear, I don't personally mind, and if this had actually troubled me I could have tried to message you (to omit the name, or remove the comment). But I can at least sympathize with how it might take someone by surprise, even though in retrospect it seems fairly obvious (it's an open discussion thread, the blog generates discussion by highlighting certain comments, if you don't want to be associated with something you post, don't put a name on it?).
TLDR: the norms seem pretty obvious and straightforward (it's an open/public blog, of course any comments might be featured to generate further discussion), but I can still see how someone would be surprised.
On the other hand, I just don't see any easy solutions! If you omit the name of the poster by default, I assume some folks would feel like they weren't getting "credit" for thoughtful comments they made. Nor do I think it's feasible to individually ask posters for permission.
I think the easiest improvement is just occasional mentions of these norms. For instance, on each of these "Comment Highlights" posts, I'd add a quick note explaining 1. this blog generates further discussion by highlighting comments, 2. if you want a featured comment removed, message you, and 3. if you don't want a comment featured in the future, include a note in that comment (or maybe it could be in someone's profile).
Obviously, you will still get occasional "surprises" by someone who is unfamiliar with these norms. But I don't think that edge case is a particularly big deal, it seems overall perfectly reasonable to me for a blog to highlight comments that are written on that blog. And as long as you include that occasional note to the "Comment Highlights", it would only ever be a worry for someone who was new to the blog (and growing pains like that are inevitable for any community norms).
This is a great example of how random comment highlights can be improving the discourse and is maybe a good argument for keeping them. If you know your sloppy comment might get highlighted, maybe you'll spend some extra time writing/editing it and we're all better off?
Additionally, I suspect many people want to get featured and the possibility of getting highlighted will encourage them to write better comments.
"On the other hand, I just don't see any easy solutions! If you omit the name of the poster by default, I assume some folks would feel like they weren't getting "credit" for thoughtful comments they made. Nor do I think it's feasible to individually ask posters for permission."
I think removing the name makes the most sense - posters can claim the posts in the highlights thread afterwards, if they want to, the same way people quote something they saw and then the creator hops on to say "hello, person who wrote Essay X here. Nice to see such a lively discussion..."
Regarding the DNP company, I think it's a trust issue. Theoretically, there's nothing wrong with saying "this company looks cool and you can invest in it here", but in practice nine out of ten times someone with a platform says that kind of thing it's because they have some financial stake in doing so rather than because they legitimately think it's cool. I believe Scott when he says he just thinks it's cool, but for anyone without a strong prior about Scott's trustworthiness, the reasonable thing to think upon seeing products shilled is "Oh, one of *those* writers."
And in this vein, I think the biggest risk from the endorsements is slowly looking more like one of those writers. Doing it occasionally probably won't hurt his standing in the eyes of people vaguely familiar with his work, but if it becomes a frequent thing, the pool of people who don't trust him will grow a lot. And this has the major downside of hurting his ability to sincerely talk about how much he cares about Givewell/EA, Metaculus, and whatever the next thing he really cares about is.
Feel free to quote with full attribution any comment I make here.
> Feel free to quote with full attribution any comment I make here.
-
Am I doing it right?
I wish I could get the DMV to do as well!
I'm not a gamer, but I'm looking for action video game recommendations (for PlayStation 4, Nintendo Switch, or Steam) that I can use for brain-training my middle-aged brain. Ideally, the game should dynamically increase in difficulty as I get better. If such a thing doesn't exist, I would like something that can be set at a super-easy level (something that would insult a five-year-old) and then ramped up manually. I tried "Call of Duty: Black Ops", but it was too difficult even at the easiest level. I don't have great eye-hand coordination, and my reflexes aren't the best. I'm not interested in walking simulators, puzzlers, or similar.
Try Hades? It's still pretty fast-twitch, but does a pretty good job at "dynamically increasing in difficulty to your skill", in the 'boring rogue-like' way instead of something cleverer.
The core part of its appeal (to me at least) is that they made all parts of the loop quite fun, and so there's a natural pull to come back and stay in (whereas normally games like this feel like they have discrete episodes).
I don't think it has a difficulty setting good for a 5 year old though.
But I really do think it does a good job at the other things, as you mention.
Action games might train your reaction time (and _maybe_ attention) but not a lot else. I played those a lot in my 20s. Playing them didn't make me smarter in any verifiable way. In fact, likely the opposite occurred because I wasn't spending the time learning anything useful.
It probably doesn't meet your criteria, but some people I respect have been raving about Factorio. I haven't tried it though.
You're probably right, but I heard an intriguing podcast with a neuroscientist named Adam Gazzaley. He and his team created a video game called NeuroRacer that apparently showed benefits for people who played it. One of the game's features was that it dynamically adjusted the difficulty level based on how well (or poorly) the player was performing. I don't think the game is available to the general public, though.
https://en.wikipedia.org/wiki/NeuroRacer
If you're willing to try a multiplayer game, I'd recommend Deep Rock Galactic.
I would look into roguelikes--most of them get progressively harder as you get deeper into the 'dungeon'. My personal favorite of the genre is Risk of Rain 2: it's a 3D third-person shooter where basically you beam down to an alien planet and shoot your way through it (there is a story, but it's very light) while picking up items that enhance and change your abilities. If you check it out, I would recommend sticking with it at least until you unlock Huntress (which should be relatively easy, especially if you ramp down difficulty). There are different characters with different abilities, and the only character unlocked at the beginning is Commando, who is rather boring and a bit finicky to pilot. Especially if hand-eye coordination is an issue, try Huntress--her primary ability is auto-aiming her shots.
Possibly a bit too far from what you're looking for, but you could also try The Long Dark. I'm not sure if it would be a bit too "walking simulator" for your tastes, as you do have to walk a lot in the game, but it is one of the few games I play where I feel like I'm constantly keyed in and hyper-aware. In The Long Dark, you are the sole survivor of a plane crash on a remote Canadian island far north, and you must try to survive the strange, supernatural, unending winter. There's not a lot of action or gunplay (a bit for hunting and for deterring hostile wildlife), but it wrings a lot of tension out of simple tasks like "can I make it to that cabin before I freeze or a wolf eats me?". Especially at higher difficulties balancing your short-term and long-term survival needs becomes an interesting challenge, and the game gets harder the longer you survive as the winter gets progressively colder, wildlife becomes scarcer, and the man-made items you rely upon in the early game break down.
Risk of Rain 2 starts off very hard though. For a non-gamer looking to brain train, I wouldn't start there.
...hmm, you may be right there. I now find the early game mind-numbingly easy, but yeah now that you mention it, I was deeply frustrated the first few hours I played it (and I have gaming experience, although I'm not a shooter person).
It's my impression that roguelikes are almost exclusively targeted at people who want really hard games. Even if you isolate just the easiest part at the beginning, I couldn't name one that would be suitably easy for someone with no prior experience at that type of general gameplay (e.g. I couldn't name a platforming roguelike that is suitable for someone with no experience at platformers).
I also don't think "the run starts easy and gets gradually harder" is something that you'd WANT to extend over a very large spectrum of difficulty, because it implies that even the players who can mostly handle the harder parts are constantly forced to re-play the easy parts every time they die. So this doesn't seem like a promising direction to me. If you're trying to practice a skill, the easier exercises should gradually be *removed* from your regimen as you grow past them.
Roguelikes, as that dude said. Also Dwarf Fortress obv
That doesn't really have dynamically increasing difficulty, though. You suffer as much as the RNG wants you to suffer.
On the contrary, the brutal, you-can-die-at-any-moment vibe keeps you on your toes all the time. DF has less of that and is more about your imagination and what you think you could do and how you would go about it (although you can still die in gruesome ways)
I'm not sure if this is too close to puzzler for you. But maybe Hardspace: Shipbreaker.
It's a game where you play a scrapper, basically, paid to deconstruct spaceships. It's not adrenaline-pumping action, but it does encourage precision and speed at doing stuff like cutting pipes without hitting a fuel tank. The difficulty is mostly controlled by the ships you choose to deconstruct; later ships are larger and also more dangerous. But you can change it yourself significantly by choosing to play risky or safe.
In addition to looking for "difficulty" options, you may also want to keep an eye out for "assist mode". (This seems to be a different framing device that some games have adopted to try to reduce the stigma from making a game easier and explicitly consider people with different physical limitations, rather than just differences in training. I've noticed games with this framing seem to be more likely to offer explicit numerical sliders for things like "take less damage" or "increase the timing window for pressing this button".)
You should also maybe consider just playing an easy game, and then playing a slightly-harder game, etc. rather than looking for one game with a big range. Difficulty is multi-dimensional, and creating a game that offers robust challenges over a very wide range of difficulties is a serious engineering challenge. Lots of games only modify a couple simple variables, like damage, speed, or number of enemies, and often that's just not enough to scale a game between "5-year-old" and "e-sports champion".
Also, be aware that people who don't know of a game that actually meets some requested criteria will often recommend their favorite game anyway, because their favorite game is highly salient to them and it's just the first place their thoughts go. So be wary of recommendations for popular games that don't obviously cater to your criteria.
Thank you, that sounds like good advice. I know I can just Google it, but do you have specific recommendations for games that are inherently easy and/or games that have a good assist mode?
I personally tend to play pretty challenging games, and so am not a great source for easy recommendations.
I do feel bad leaving you with literally zero examples, so I guess here's two games that I enjoyed that are reputed to have pretty good assist modes:
Celeste (a platforming game)
CrossCode (an adventure game with fighting and puzzles)
But those games are both *quite hard* on default settings and I can't personally vouch for the assist modes because I haven't ever tried them. Honestly you are probably better off trusting Google over me.
Celeste and Hollow Knight work for me in exactly this way. Both are amazing and while really tough, also very accessible. While playing, I can directly monitor my brain getting better at them. I'd recommend looking at reviews / gameplay on Youtube to see if they are appealing to you.
But I think the 'train your brain' part won't have any effect beyond the games. Depending on your target, more physical training (something with balance / coordination?) might be better...
I'm going to go in a different direction from other folks and recommend Mario Odyssey on Switch. It literally has an assist mode that is intended to make it possible for a small child to beat the game.
Mario Odyssey's difficulty scaling works differently from other games though - beating the game is fairly easy, but the game is filled with hundreds of bonus objectives that reward exploration, puzzle-solving, and technical skill. You can more or less set your own difficulty by deciding what "stretch goals" you want to go for, which you can shift around on a moment-by-moment basis.
How about asking people in a reply to their comment whether they're willing to have their comment highlighted, and only highlight if they say yes?
This does little to publicize the comment, and gives people a chance to cancel the comment if, after thought, they think they shouldn't have said it.
It's opt-in, which is safer than opt-out. It will lose some potential highlight-worthy comments, but I hope not a lot of them.
For example, I don't know how to see replies to my comments. And if it involves spamming my mailbox, I don't want to know.
I didn't think about that angle.
I thought the approach of "please don't comment here if you don't want people to read your comment" made a lot of sense.
The problem in #6 seems like something you could automate pretty easily. Get a friend with programming skills to make a quick program for you. You put all the names into a list, then when you want to highlight a comment, have a search algorithm check if the name's in the list.
The part that is important to automate is not checking the list (ctrl-F doesn't require much programming skill) but maintaining and updating the list. [If a thousand people want to be on the list, does Scott want to have to deal with a thousand emails? If people want restrictions like "well, anything I post about my sexuality is probably not meant to be front-page, but stuff about battleships is fine", will they feel like they can put those in the email too?]
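For what it's worth, the check itself really is trivial; a minimal sketch, assuming the opted-out names live in a plain text file, one per line (the filename and names here are hypothetical). As noted above, the real work is collecting and updating that file, not querying it:

```python
# Minimal do-not-highlight check: load opted-out commenter names from a
# text file and test a candidate against the resulting set.
def load_dnh_list(path: str) -> set[str]:
    """Read one name per line, ignoring blank lines and case."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def may_highlight(commenter: str, dnh: set[str]) -> bool:
    """True if the commenter has not opted out of highlighting."""
    return commenter.strip().lower() not in dnh

# Hypothetical usage:
# dnh = load_dnh_list("do_not_highlight.txt")
# if may_highlight("Some Commenter", dnh):
#     ...  # safe to feature the comment
```

Per-topic restrictions ("battleships yes, personal stuff no") would of course need something richer than a flat name list.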
RE: Book reviews, I'm good with posting them sans names for blinding purposes; does that mean if mine should make the cut I shouldn't link friends to it from outside the blog until after they've been unblinded?
I'm selfishly disappointed that this means you won't be doing a Conversation with Tyler.
What's the policy on discussing treatment for medical diagnoses?
I read the Vox piece and got pretty much the same impression from it that I got from the original malaria-vaccine story. When I posted it elsewhere, my prefatory comment was "this is really preliminary, only in mouse studies, but it looks interesting."
Content recommendation: Ed Glaeser's online lectures (e.g. https://www.youtube.com/watch?v=6WEusDeJfXI ).
A lot of people around here care about housing and urbanism and related economic issues, and he's probably the best economist working on this stuff. He's got the same "just plainly explain economic issues in a simple, non-moralizing technocratic way" that I enjoy about ACX or Zvi's blog. I used to recommend his book to people, but he manages to get pretty much everything in there across in a 30-60 minute youtube lecture.
McElligot's Pool (by Dr. Seuss)
“This book is dedicated to
T.R. Geisel of Springfield Mass.,
The World's Greatest Authority
on Blackfish, Fiddler Crabs and Deegel Trout”
Searching for 'Deegel trout' I found this nice blog post,
https://seussblog.wordpress.com/2012/11/18/mcelligots-pool/
I can't believe they canceled a book about sitting by a pool and maybe catching something big. Besides, it's dedicated to his dad! (I have happy memories of fishing with my dad.) Maybe it's the fishing the woke left objects to? (cry or make a joke.. my two options.) From what I can tell, the book was pulled because it has Eskimo fish. And Eskimo is no longer used. (This is news to me.) They would now be called Inuit fish. Is there some harm done by having Inuit children learn they were once called Eskimo? And what joy will be lost without the rhymes and illustrations. Can anyone make an argument for pulling the book? I wanted to close this rant with his words from "On Beyond Zebra" (also pulled)
“The places I took him!
I tried hard to tell
Young Conrad Cornelius o'Donald o'Dell
A few brand-new wonderful words he might spell.
I led him around and I tried hard to show
There are things beyond Z that most people don't know.
I took him past Zebra. As far as I could.
And I think, perhaps, maybe I did him some good...”
A few unconnected thoughts on this topic:
* Any time there's a big viral story like this we should consider whether it's a marketing stunt. The publisher withdrawing some Seuss books has driven sales of the others. Surely this wasn't a total surprise?
* It's fine for an author (or an author's heirs) to withdraw their work. We can question their motives, but it's odd to think they have an obligation to keep publishing things they no longer endorse.
* The creepy thing about this story is Amazon, Ebay, and others suddenly treating the no-longer-published material like kryptonite.
Thanks,
Re: marketing. IDK, personally I looked to buy the banned books online and found nothing... if they show up in the future I might buy one. Besides books I want nothing to do with the rest of the Seuss franchise... but I've mostly always been that way.
Re: copyright, yeah that's a bit of a weird law at the moment.
At some time we'll be able to get these books again.
I wish there were some pushback from the left media, rather than what looks like complete buy-in. ('Worth losing your job over Dr. Seuss?'... nah.)
That assumes they disagree. My understanding, again without personal knowledge:
Seuss authorities: We're pulling these six books no one buys.
People in general: Um. Ok. Why?
Seuss authorities: Some of the stuff here is pretty racist.
People in general: Really.
Seuss authorities: Yeah. Look.
People in general: Huh. Ok.
The right: OMG DR SEUSS IS CANCELLED.
Basically, as far as I can tell, no one else is upset about this, or really paying attention to it other than to pay attention to the reaction.
My impression is that there are still some liberals left, who get upset at suppressing books. I saw an essay on the subject by one of them.
I hope the link turns up.
More generally, I would think progressives would notice how important privacy and freedom of speech were for improving civil rights and be concerned about setting up mechanisms of censorship, but it doesn't seem to work that way.
What suppression? Other than copyright being at least 50 years too long in general?
An effort to make it significantly harder to get the books.
Yeah unfortunately I think that is mostly correct. I wish more people spent time at the "Yeah. Look." step. I'm also not sure the right's OMG response came after the acceptance by the left, or if the left's "yeah this is racist" came after the right's OMG. Everything fractures quite quickly along political lines.
I mean, probably fair. I glanced through one of the articles that circulated after this blew up and it had a few examples. They didn't look great to me. If I had a book and was reading it to my kids those pictures would probably lead me to stop and discuss, or maybe just stop. But I haven't read or heard of any of the books, or looked into it deeply.
Do you think the publisher is wrong, or do you just wish they'd spent more time making their case? Of course, that raises the question of to whom?
Doesn't seuss out, since e.g. eBay was actually pulling listings and I think Amazon clamped down on resale too. Not stocking or reprinting is one thing, but the books are definitely being actively censored beyond "these just don't make money, sorry."
On the author/the author's heir piece: it's also worth discussing just how long copyright is nowadays. Seuss himself has been dead for 30 years and his books won't start entering the public domain until 2053. His heirs may feel like they ought not publish these books anymore, but clearly many people feel otherwise and would gladly keep publishing them. Especially when you're dealing with books with such cultural clout, the excruciatingly long copyrights rob society at large of a part of their own culture.
It is absolutely a marketing stunt. I looked at all of the books in question, and I would agree with *quietly* letting At the Zoo fall out of print. But you have to wonder: if the content is so embarrassing, why highlight it? Nobody would notice At the Zoo quietly becoming scarce. Either they cynically wanted to drum up sales of other books by intimating that those too could be withdrawn in the future (see: skyrockets of "Cat in the Hat" sales), or they've wholly bought into the self-flagellating 'anti-racist' culture (which is pretty endemic in publishing right now) and they earnestly thought they had to prostrate themselves before the masses--virtue signal, in essence.
The de-platforming of sales of these books is the most chilling though. It's also patently ridiculous--you can buy Mein Kampf, Protocols of the Elders of Zion, the SCUM Manifesto, and a bunch of other very objectionable content on every single one of those platforms. I don't want those books banished either, but in terms of "books that have actually caused harm", they are all far more dangerous than Dr Seuss. It's unfortunate that Dr Seuss drew a pair of African people who look like pot-bellied monkeys in one of his books, but it's hardly dangerous. Those other books have inspired actual, literal, violence.
Re: marketing - Yeah, I worry that media outlets, online retailers, and even smart people on here are playing into the hands of cynics.
An example from the 2010s - guerilla marketer Ryan Holiday ginned up publicity for an author known for off-color stories by instigating a minor culture war flare-up:
* https://www.forbes.com/sites/ryanholiday/2012/04/03/why-wont-planned-parenthood-take-500000/?sh=7814458a570c
* https://www.theatlantic.com/culture/archive/2012/04/forbes-lets-tucker-maxs-strategist-play-contributor/329747/
* https://www.ibtimes.com/was-tucker-maxs-500000-planned-parenthood-donation-publicity-stunt-433944
Can't we just use "If I Ran the Zoo" as a teaching moment? He's going all over the world collecting animals; he could have made a slightly different drawing for the Africans, but so what. There are historical stereotypes in all his images. What's racist now was the norm in 1947.
So funnily enough, when I first read the book, I didn't even flag the Africans image as inappropriate because my brain didn't parse that Seuss was depicting humans--I just thought he drew some funny chimpanzees. The drawing, I feel, falls into one of those categories where you only realize it's racist if you have the cultural knowledge to know that "black people are monkeys" is a racist trope, which children do not have at that age.
You could also use it as a teaching moment, sure. It would be an interesting discussion to have with maybe high school students about changing norms, and how the meanings of things can change over time (I doubt that Seuss was deliberately attempting to invoke a racist trope, but nowadays that drawing cannot be seen any other way, for instance).
Yeah sure, look, I don't see monkeys.. (that's your image) Seuss drew plenty of 'stereotypical' black people in cartoons, 'worse' than in this kids' book. But I'm going to push back on the idea that the image cannot be seen any other way. I can certainly see a trope; I also see how an American master drew Africans circa 1950. Personally I don't think Dr. Seuss had a racist bone in his body; like everyone, he's a product of his times.
As a technical matter, it's fine for a copyright holder to do whatever they want with their work. Even if their actions are meant to be symbolic or done because of broader societal pressure.
As a technical matter, it's fine for a baker to refuse to perform services for a gay couple. Even if it's being done as symbolic virtue signaling or to fuel an astroturfed legal effort.
The frustration isn't about the technical legal matter, it's about what kind of societal norms we have.
I don't think I understand the Seuss estate's reasoning well enough to steelman it. But it's pretty clear that a lot of people don't want to live in a world where we assign a high value to not offending people, or apply current standards of morality to the past, or police their customers' ideas around race, or use contemporary issues as an opportunistic marketing trick.
Eskimo != Inuit. Yupiks & Aleuts have also been called that, and aren't Inuit. Nor do they like being falsely referred to as Inuit, which is what happens when people think they can just do a text-replace.
Sorry, my fault. I honestly have no idea. Did the Eskimo fish bother you?
I think TGGP's point is that when it comes to Alaska Natives, Inuit isn't a like-for-like term for Eskimo, and efforts to be more racially sensitive by using Inuit where they'd have used Eskimo actually introduce a new problem of calling natives by a term they don't identify with.
It's rather like saying "Instead of calling East Asian people [a word that I won't print here], we should instead call all of them Japanese." It's inaccurate, it's insulting, and (at least in the Asian case, not sure about the Alaska Native one) it groups together people that were historically conquered with the people that they were historically conquered by.
I'd personally go with "Alaska Native," since it seems to be what people generally prefer, but if you happen to know the specific native group, go with that. That is, "Eskimo" is never okay, "Inuit" is better but not great unless you know about the person you're referring to, and "Alaska Native" is a bit clunky but isn't going to insult anyone.
TheGodfatherBaritone has my point correct.
The Dr. Seuss estate has the legal right to unpublish the books. IMO, the books are old enough that they do not have the *moral* right to control access to them or restrict their distribution. They should have been in the public domain years ago. But that's just the same drum I keep banging about all older intellectual property.
Ebay (not Amazon, AFAIK?) refusing to sell used copies is downright creepy. That sounds more like being banned than just being taken out of print (which of course happens all the time).
I hate how no one keeps a sense of proportion about stuff like this. The supposedly hurtful and wrong material ranges from clearly racist caricatures (the apparently African bushmen in *If I Ran the Zoo*) to arguably racist depictions (the "Chinese man who eats with sticks" in *Mulberry Street*) to obviously harmless (maybe I just lack empathy, but I can't take seriously the idea that anyone would be hurt by the "Eskimo fish"). Why are these all being treated the same way?
Another thing that bothers me is how many people are like, "It doesn't matter because these weren't Dr. Seuss's greatest hits anyway. The minor works don't matter." No! That's not how literature works! If we're interested in Dr. Seuss's work (and we should be!), *all* of it is important. We need each part of the canon to interpret the others. What kind of demented Great Books purism would dismiss the first children's book by the biggest children's book author of the twentieth century as irrelevant?
I just checked. Ebay is still selling the books, they're just expensive. You'll have trouble finding a copy of Mulberry Street for under $200.
https://www.wsj.com/amp/articles/dr-seuss-books-deemed-offensive-will-be-delisted-from-ebay-11614884201
There are still listings, but Ebay is actively delisting them.
Thank you.
I'm surprised it's taking them this long to finish the job.
Yes, all of that. McElligot's Pool is a classic; if the estate has watercolors for all the drawings they could republish it at twice the price. But it's not about money... I sorta get the feeling they (the Seuss estate) would like to pull 'Ran the Zoo' and thought, 'we can't cancel just one, there needs to be a number/pattern.'
In Canada, "Eskimo" is seen as a slur (https://en.wikipedia.org/wiki/Eskimo)
I think this is related to how the name "Anasazi" is the Navajo word for "ancient enemies", and so the preferred term these days for the people largely displaced by the Navajo is "Puebloans" - even though it's a Spanish term, it's at least a respectful one.
OK, was it like that in 1947? Have you read Huckleberry Finn?
In Huckleberry Finn, the whole *point* of the story is that we are meant to recognize how casually cruel the morality is that Huck himself pays lip service to, but we are also supposed to recognize that Huck himself knows on some level that much of it is wrong, and behaves much better.
It matters a lot when someone is using casually cruel terminology whether we are meant to recognize it as casually cruel and recoil from it, or whether we are meant to take it as a good and proper thing to do.
So I have a question. Maybe someone could play devil's advocate and explain what they think is so evil about having "A Chinese man who eats with sticks" in a children's book? "Racial stereotyping" isn't a valid answer, because these words don't explain anything. Explain, instead, why what you call "racial stereotyping" in this particular case is somehow bad. Same question about "Eskimo fish".
You know, Chesterton's fence and all, maybe there is something we are all missing?
Is there an illustration of the Chinese man? Because I saw an image from one of the other books depicting some Chinaman caricatures, so I wouldn't be surprised if that was the case.
For that matter, is there a comprehensive article anywhere that shows the objectionable content in each book? I've only found articles that show or describe a few, like this one: https://www.cnn.com/2021/03/02/us/dr-seuss-books-cease-publication-trnd/index.html
Apparently these aren't the only Seuss books with objectionable material, but they never sold very well so the estate wouldn't lose much by removing them from publication.
Plenty. Just type into images.google.com "Chinese man eating with sticks Seuss", and you'll get it. If you look at the whole picture, you'll see that all humans in this story look about equally ridiculous. He's not any more ridiculous than the policemen, the Mayor, the Aldermen, the band, the pilot, or the people on the plane. Also, you'll notice that people inside each group - the Aldermen, the policemen, the band, the men dumping confetti from the plane - all have pretty much the same face. At least, the Chinese guy is unique.
So no, he didn't set out to caricature just the Chinese guy. It's a caricature of everyone mentioned in the poem.
Actually, almost everyone in that picture has the same face. It's the hair, the headwear, and what they are doing with their mouths that is different between the groups.
It's also worth pointing out that, at the time Seuss drew the image, this is roughly the dress Chinese immigrants would have worn. It's exaggerated (as are all Seussian characteristics), but for the time it was created it is accurate.
I mean, you've already stated the answer. Cartoons that exaggerate or stereotype racial/ethnic characteristics are considered offensive in and of themselves, even if the immediate context seems harmless. (To see why, you might consider Seuss's [anti-Japanese WWII cartoons](https://www.openculture.com/2014/08/dr-seuss-draws-racist-anti-japanese-cartoons-during-ww-ii.html). Normalizing such depictions has broader consequences.) In the original edition of the book, the Chinese character had yellow skin and a long pigtail (and was a "Chinaman," not a "Chinese man"). Later, Seuss went back and removed those features, but the slanted eyes and Chinese peasant costume remained. Meanwhile, everything described in the book is meant to be weird, outlandish, or fantastical, but the Chinese guy isn't really doing anything weird, he's just running along being Chinese. So he's also being presented as an exotic, alien figure.
It's clearly an exaggerated and demeaning caricature by modern standards. It's not wrong because of its content per se, it's wrong because of its style. And it was probably perfectly mainstream when it was drawn. The past is a foreign country and all that. But you wouldn't see it on a billboard in 2021, and if you did the person who put it there would get canned. So it's understandable that someone made the decision to pull it. It's much less understandable (to me) why they chose to call attention to it, unless, as suggested elsewhere, the whole thing is a cynical cash grab.
So you're basically saying that having a character not of your race/ethnicity is generally supposed to be offensive, unless that character is not displaying any characteristics of his race/ethnicity. That doesn't answer my question, that just generalizes it. What is the alleged evil that's being done when the context is harmless as far as everyone can tell?
Anti-Japanese cartoons during WWII are a whole different thing, and I don't want to divert the thread into explaining what's different. (I find them really disgusting, but my reasons might not be the same as yours.)
Wow, thanks for the link to his political cartoons. (I don't suppose there was ever a published compilation? Not that I could afford it.)
Thread drift is fine. In my mind this was not done for any monetary gain on the part of the Seuss estate. It was a 'don't hurt me'* sign to the left.
Re: political cartoons; The whole war years were a very different time. My best understanding to date comes from Dan Carlin, Super Nova in the East and Ghosts of the Ostfront.
Back to Seuss, he's always been political. "The Lorax" was a bit too preachy for me. And I hadn't read the butter battle book till a few days ago. It's far from 'my' political north. But so what, he's an eef'ing American Master and to understand him you need to have access to his entire body of work.
Hey, thanks all for the nice discussion. I'm going to dig out the frog pond this spring (summer) and call it McElligot's pool. It will need at least one sign if the fish are to find the way to it.
*Heather Heying, darkhorse podcast 'don't hurt me' is similar to, "I'm on your side".
> I don't suppose there was ever a published compilation?
[There is one.](https://www.amazon.com/Dr-Seuss-Goes-War-Editorial/dp/1565847040/) It's still in print, although paper copies are currently unavailable from Amazon.
I wouldn't describe those cartoons as racist. He could have written the same cartoons against the Confederates if he had been around then, and he did include Hitler in the first.
They are anti-Japanese, and the assumption that the Nisei were traitors seems to have been mistaken, but negative representations of the people you are fighting against are pretty normal.
... Theoretically, what do you think a racist anti-Japanese cartoon would look like?
Sticking foot in mouth for D. Friedman, I think he would describe them as war propaganda. Which is not so much about race as the enemy.
Something like this? https://www.pragmaticmom.com/wp-content/uploads/2016/11/Dr-Seuss-Anti-Asian-Racist.jpg
There was a cartoon with a rather moderate caricature of Hitler and a grotesque anti-Japanese image for Japan. I'd describe that one, at least, as racist.
I'm not sure what 'racist' means anymore. I read/heard 'defenders' of Seuss lament the loss of some work, but still describe the images in "If I Ran the Zoo" as racist. I mean they certainly portray different races. But to recap the story. (Imagine you are a young boy in 1950.)
Young Gerald McGrew likes the zoo but then starts to dream about how to make it better.... He releases all the current animals and then goes collecting around the world. There follows a parade of wacky animals, machines, and natives who are helping with the collecting. And we get cartoons of how a boy in 1950 might picture his native helpers. I still see hired natives on nature expeditions, though now dressed in jeans and tee shirts. So the problem must be how he drew and dressed them. The pictures could be somewhat different, but exotic dress is part of the story. In 1950 these were not racist images (well, using my 'fuzzy' definition of racist = insulting someone's race). And in the same vein, I read on NPR that Seuss, in high school, wrote a minstrel show, in which he starred in blackface! (This must have been ~1930. I mean, when was Al Jolson?..)
It took me a while to sort this out, but "is this racist?" splits into at least two questions. One is "was the creator a bad/malicious person?" and the other is "is this art a satisfying experience for the people depicted and/or does it put them at risk?".
Both of them are frequently hard to answer accurately most of the time.
What @Nancy said. To make it more explicit, it's not that it wasn't racist in 1950 (try to imagine a black parent reading this book to their kid, 1950 or not). It's that it wasn't particularly noticeable in 1950, when pretty much all of society was pretty deeply racist.
Here's a rather extreme example: https://www.theatlantic.com/ideas/archive/2021/03/my-mother-found-dr-seuss-book/618225/
A black mother does a tremendous amount to insulate her children from white people and white culture. The results may not have been bad.
I do think Seuss's caricatures of black people are noxious in a way that the drawing of the Chinese man isn't.
Now it's been 2 days since I asked the question, and nobody has come up with any claim of any actual harm being done by the image of "A Chinese man who eats with sticks". If no harm that anyone knows of is done, then the prohibition on content like this seems to have no moral ground. (Please correct me if I'm wrong on this.)
So we have a prohibition that appears to have no moral basis but is just a matter of faith for those people who believe in it. A lot of those people are not content to just avoid that kind of content but are extremely aggressive in trying to make sure that everyone else follows the same kind of prohibition.
I think that's what we call "religious extremism". Imagine if some Christians were trying to make sure that nobody would be able to publish or sell books with gay characters; imagine what kind of treatment they would have got. I think we're looking at the same situation, and these people deserve the same kind of treatment.
I mean, what harm is done by marketing a cola that isn't for the n-words? I saw the picture in question, said, roughly, "Yikes", and moved on. It's racist. Probably not noticeably so in the 1950s or 60s, but it's jarringly so now.
No, I'm not harmed. If someone had given me the book as a gift and I was reading it to my four-year-old and came across it by surprise, I still wouldn't be harmed, but I'd be annoyed and take time out to explain about racism and stereotypes in as accessible a way as I could manage.
But if it were my job to decide what does and doesn't get published in the Seuss estate, I can certainly see myself making the same call, and being a bit bewildered by the blowback.
As far as "these people", are you talking about the estate managers? What sort of treatment do you suggest? The right was big on cancelling things in the 80s and 90s, but I thought that had gone out of fashion over there.
Clearly, one of those things is not like the other. Your example is so offensive to black people that you didn't even type the offensive word in it - and if you had, you likely would have been banned from this board, and not many people (if any at all) would have argued that you didn't deserve it.
At the same time, nobody seems to know of anything that would offend a Chinese person about that picture of a Chinese man eating with chopsticks. (If you know what would be offensive to a Chinese person here, please tell.) Would you be offended if you saw a cartoon of a white man eating with a fork and wearing a baseball hat in an Asian book?
You imply that after reading that book you wouldn't have written to the publishers or to the sellers that it needs to be cancelled. I.e., you have a prohibition not everyone shares, but you're not trying to impose it on other people. So you're not one of "these people". You're not a religious extremist.
I don't know the details on what got the estate managers to cancel the books, or on what got its sellers to stop selling it. I assume it was "these people", but, like other posters suggested, something else might have been the cause. However, there's really no shortage of "these people", and their influence seems to be completely out of scale compared to their visible number - see, for example, this: https://abcnews.go.com/US/trader-joes-change-product-branding-petition-calls-racist/story?id=71868367 .
The way you treat religious extremists is not allow them to have the influence they want, which means you have to either ignore or deflect their forays. Anything else will either let them run amok or let them look as victims.
Probably not, but I wouldn't care for something like this: https://www.pinterest.com/pin/341851427956748213/
or this:
https://www.media-diversity.org/understanding-the-antisemitic-history-of-the-hooked-nose-stereotype/
I still wouldn't say it harmed me, but it would surely offend me.
And your whole schtick about extremists is based on people who, as far as I can tell, aren't in evidence. The estate people did this, people you broadly agree with decided there was a canceling (now a secret one?), and then when you people made a fuss, people like me had a look, shrugged, and said, "Yeah, that makes sense." You have a narrative that you clearly *really* want to apply here, and you're all bent out of shape over it, but it just doesn't match up very well.
Sorry about seemingly ignoring your reply. I've been genuinely puzzled about how to answer. For most modern readers, it's immediately obvious that there's just something *off* about that picture, but explaining exactly why stereotypes are bad to a skeptic is hard. That said, if anyone feels hurt or offended by such content, that does constitute harm. Someone being offended doesn't mean the material should immediately be burned regardless of other considerations, but it is something to weigh in the balance.
"That said, if anyone feels hurt or offended by such content, that does constitute harm."
Does that apply to other things that someone might be offended by? Should people in the U.S. avoid the term "socialism" for fear of offending migrants who grew up in Maoist China and think of it as the label for the nightmares of the Cultural Revolution and the famine? How about the term "racist," which offends me, given that it has been expanded, in practice, from "someone who hates or despises other people because of their race" to "anyone who holds any view related in some way to race that I disagree with"? Or the term "Nazi," which has had a similar expansion.
Avoiding saying something because someone says it offends him, whether or not you believe there is any good reason for offense, looks to me like giving in to a heckler's veto.
As I already said, someone taking offense isn't dispositive or a veto. It's something to critically consider when making a final judgment. For example, while I agree that some people have an over-broad definition of racism, you have an absurdly narrow one, so I wouldn't take your offense too seriously in this case.
The example I saw about the sort of depiction in Mulberry St. is a story about a black kid who was told by a white kid that the black kid didn't need a costume for Halloween-- the implication was that being black was enough.
I don't know how wearing that sort of thing is-- it's way short of active, malevolent prejudice-- but it's a reminder that one is a perpetual outsider.
I've seen a suggestion that those pages could have been left out of subsequent editions.
So you think the harm is that the book is suggesting to kids that Chinese people are somehow different, exotic, in a class of their own, and thus prompting them to say cringeworthy things to this effect to people of different races than their own that they meet and thus upset them?
That's an explanation. Thank you!
I'm not sure if it's a very satisfying explanation, though. Even if you insulate kids from books that have that kind of pictures, they will soon enough see real-life people who look different, wear weird clothes, eat strange foods with strange utensils. And we all know that kids say the darndest things, and say these things they will.
But it's an explanation. Thank you! And if there's one explanation, maybe there are others.
I think it's partly a matter of how much stereotyping people grow up with.
When I was a kid, it was normal to portray people from Holland as wearing wooden shoes and standing next to windmills. I don't know that it did me or them any harm, but it didn't make me more intelligent about the world. Maybe a little more intelligent than if I had no idea that anyone anywhere had different customs than my normal way of doing things, but not nearly as good as showing that people with different customs have a lot more to them than a simplified (and probably out-of-date) look at their customs can show.
A minor annoyance: I run into non-Jews who have trouble understanding that there are Jews (like me, for example) who don't keep kosher.
Please note that people can get a bad impression of their own people and culture if the dominant culture portrays them as bad.
I don't have any particular attachment to five of the canceled books, but On Beyond Zebra! was one of my absolute favorite books as a child. And while I'm not going to defend that particular illustration in If I Ran the Zoo, I really don't get what was so offensive about OBZ. I guess it's the illustration of the "Nazzim of Bazzim", who is wearing vaguely middle Eastern or Persian garb. You'd have to squint pretty hard and look out of the corner of your eye to find that offensive.
And while I was merely saddened to see it taken out of print, the de-listing of these titles by eBay and other companies is positively chilling.
And the really puzzling thing is this: It's not like removing or re-drawing the problematic illustrations was not an option. Assuming there's not some other objectionable illustration in OBZ that I can't find, you could just cut out two pages and remove one letter from the list of all the letters at the end.
And quietly making a few changes would have been much better for the brand than throwing themselves in the middle of the woke/anti-woke culture wars, I think.
But given the fact that they green-lit that dreadful Cat in the Hat movie, Seuss Enterprises doesn't seem like it's particularly competent at preserving the brand.
Alternate hypothesis that my wife pointed out: "Spazz is a letter I use to spell Spazzim, a beast that belongs to the Nazzim of Bazzim". "Spazz" is a homophone for "spaz", which is now regarded as an ableist slur in the UK, but which of course wasn't even a word when OBZ was written. Still, you could just remove those two pages.
For what it's worth, it's an ableist slur in the US as well. I remember hearing it in the '90s. Long after Dr. Seuss wrote, of course.
Weird Al got called out for using that word a few years ago, for which he apologized in a delightful bait-and-switch tweet:
https://languagelog.ldc.upenn.edu/nll/?p=13552
On another linguistic note, I think the zoo book is the first attested usage of the word "nerd".
Third hypothesis: it's the reference to "explore like Columbus", since there's a push to demonize Columbus.
I mean, Columbus kinda managed that all on his own.
John McWhorter is as puzzled as I am about the cancellation of OBZ: https://johnmcwhorter.substack.com/p/and-then-they-came-for-on-beyond
What's the over/under on when the Streisand Effect kicks in hard on these six books?
I think it already has, with people selling them on eBay for hundreds.
No opinion on the book itself because I'm not familiar. That said, cancelled implies there was some sort of call to action on the part of the left to get the book pulled, which the publishers responded to. That simply isn't the case here. The publisher (or heirs? I'm not clear) pulled the books, and now (some) people are throwing a fit about it.
I believe that criticism of Seuss's books as racist had been going on for some years, although I don't know that it was targeted at those particular six.
https://www.nbcnews.com/news/nbcblk/reckoning-dr-seuss-racist-imagery-has-been-years-making-n1259330
https://www.nbcnews.com/news/us-news/six-books-nix-books-dr-seuss-works-halted-racist-images-n1259256
These articles do get some things wrong, including the content of books in question.
I think these articles tend to support my point of view rather than yours. There's no drumbeat, or hashtag, or mob, but isolated criticism here and there. And then the company reaches out, several months ago, and talks with "teachers, academics and specialists in the field", and then, after an internal process, on Dr. Seuss's birthday, announces the change. All of this happened on the company's schedule, because the company wanted it to.
It does show that there has been some public criticism for years, but so far as I can see that's it. This doesn't look anything at all like cancellation as commonly understood, as in, say, the fire Gina Carano mob. At least IMO, YMMV.
How far back do you go when trying to keep up with ACX threads?
I tried this in the hidden OT but here's the question again. What are some new examples of patterns that should not have been crystallized?
https://slatestarcodex.com/2014/03/15/can-it-be-wrong-to-crystallize-patterns/
Was Helen Keller a hoax? Has any other deaf-mute reached her level of sophistication?
It's possible to be deaf-blind and intelligent at the same time.
There’s good reason to be skeptical. Is Helen Keller our only example?
She's the only example of someone famous. Do you actually know of any other examples of deaf-blind people at all? It seems strange to be suspicious of their possible intelligence if you don't know many individuals at all.
I think the claim was about sophistication, not intelligence. For a given level of intelligence, being deaf and blind would make it a lot harder to learn things.
There are examples of people with both hearing and sight but raised in extreme isolation and neglect who are barely functional. If Helen Keller really did reach such a high degree of sophistication, it's practically a miracle.
Of course, Helen Keller wasn't raised in extreme isolation and neglect, so I'm not sure what your point is.
Anyway, there's a Wikipedia article on deaf and blind people, including a Harvard Law School graduate: https://en.wikipedia.org/wiki/Deafblindness
The law school graduate has 1% of her sight and can hear at a high pitch. It's extremely impressive that she was able to graduate from college, but she had already learned some language before she was impaired by an illness.
I was just biking by the creek yesterday in Austin, and saw a pair of people in front of me laughing, one using a cane and signing to the other, while the other held her arm. I'm not too far from a major school for the deaf, and have a friend who works as an interpreter there and says he often hangs out with deaf-blind people and communicates through tactile signing while they visually sign back.
I think these days, most deaf culture is conducted in sign languages of one sort or another, which may have the effect of isolating deaf-blind people from spoken language communities, but may provide them with more rewarding local cultural opportunities. That might be why Helen Keller-level fame is less common.
Yes, she's totally fake, and so are snow and birds.
I take two grams of Metformin a day for anti-aging reasons. From the Washington Post: "researchers noticed that diabetics who took [metformin] outlived non-diabetics who did not. Moreover, metformin had shown an effect in separate studies against each of the three diseases [dementia, heart disease and cancer]"
https://www.washingtonpost.com/health/anti-aging-drugs-metformin-study/2021/03/05/d9de870a-7882-11eb-9537-496158cc5fd9_story.html
Scott, please consider doing a "Metformin: Much More Than You Wanted To Know" post. I think taking metformin has the potential to be a big win for rationalists.
Gwern on metformin: https://www.gwern.net/Longevity#metformin
This would be helpful. I got a lot of use out of the old melatonin explainer.
As did I. I’ve also shared it many times.
If you don't mind sharing, how do you get your Metformin? Were you able to convince your doctor to prescribe it solely for anti-aging, and if so was it hard? Or do you get it via some grey-market route?
I've convinced two doctors to give it to me since my first left general practice. I said I wanted it to reduce the risk of heart disease and cancer. To convince a doctor (1) know the dosage you want, (2) know the side effects, and (3) tell the doctor you will pay directly so they don't need to justify the decision to insurance companies.
I have to take metformin so hell yeah I'd be very interested to go "this is not a wonder drug" from my own experiences in response to such a post. I'm a tiny bit disgruntled that people seem to be thinking this is some kind of wonder weight loss drug (not in my case it isn't and wasn't) or anti-aging or whatever drug. I wish I didn't have to take it. If you don't need it medically and are just treating it as "amateur messing around" well phooey.
It might slow aging a little. If it were worth an extra couple of years, I don't think the effect would be noticeable for an individual.
Do you get side effects from it?
Some people get upset stomachs, although mostly when they first try it. For this reason, you should start at a low dosage and slowly increase how much you take. There are some potentially very bad side effects, but I think these are very rare, although I'm not a medical doctor.
I got the usual gastrointestinal side-effects at the start but those settled down and I find it very tolerable. I *think* my doctor expected that it would also have the "make the pounds melt away" side effect but good luck with that and my metabolism, which seems to react to every "this will make you lose weight" effort by resetting to conserve, and indeed pack on, the pounds (I noticed my appetite increasing, for example). "This will burn off fat but it works by exploding you from the inside" recent post by Scott is about the only thing that sounds like a chemical solution for me, but um, exploding from the inside is a *bit* extreme.
I think the metformin studies show that for elderly people at risk of developing those particular disorders (cardiovascular, dementia, etc.) then it may indeed work to extend lifespan a few years. I don't think healthy young people in their 30s taking it will see any benefit.
Apparently metformin can cause a modest amount of weight loss. Some of the time. In some people.
The weight loss effects seem to be small, but that doesn't stop it being touted as "miracle weight loss drug" by people who are enthusiastic about off-label use. About a kilo or so over two years, more if you diet-and-exercise (but of course if you diet-and-exercise you'll lose weight even if you are just taking placebo pills).
The only weight loss strategy that works for me is *severe* calorie reduction and counting carbs. My poor doctor, who is perennially optimistic in the face of my jaundiced view (having been up and down with yo-yoing weight due to diet, off diet, on diet but got sick, on another diet, gave it up because my brain was at me, etc. over decades) prescribed me another drug to (gentlemen of a nervous disposition look away now) pee out excess sugar in my urine. She warned me that one side-effect would be more likely to get thrush infections, but also chirpily added that it could cause sudden and great weight loss.
"Oh, don't worry about *that*", I said.
"No, no!" she said. "A patient of mine lost so much weight he had to stop taking this!"
Well, guess what side effect I got? I *didn't* burn off the pounds, weight remained the same. I *did* get the recurring thrush infections 🤣
I regret that I'm not good at angry ranting (perhaps you can help out), but I'm beyond fury at the risks people are supposed to accept to lose weight.
I don't know if it would help you, but I find that intermittent fasting, no food between 7PM and 11 AM, makes it easier to keep calories down, as well as being arguably good for you.
How do you feel about Resveratrol?
I take it everyday.
I would add NAD as really looking good, multiple areas of action.
There are ways for book reviews to be self-deanonymizing to various degrees ("As a person named John Doe, I found Mary Roach's "Stiff" highly upsetting" etc.), so watch out for this if you end up going ahead with the "publish everything anonymously" plan.
Indeed. Mine starts with “......, a book review by Anteros” - most readers will likely work that one out...
But I don’t think it’s doxxing level anonymity that’s aimed for, is it? Just so well known commenters names aren’t foremost in readers minds.
My guess is that Scott will do something that works and is sensible.
I enjoyed that survey much more than I typically enjoy surveys. Well done I guess?
On DNP ethics: I briefly considered going into more detail on safe-ish DNP dosing strategy and decided not to, primarily due to the fact that comments aren’t editable so if I accidentally say something wrong I can’t cleanly go back and fix it.
[Also this topic seems like an ACTUAL Copenhagen Interpretation of Ethics situation. If I give advice that makes an unsafe thing seem less dangerous so people do more of that unsafe thing, whose fault IS it when they hurt themselves doing it?]
I think there are actual ethical arguments against it - partly it's helping users take it more safely, but also it's removing a small barrier to use and normalizing it.
Are you responsible for anyone who hurts themselves because they didn't read your safe dosing protocol?
We all criticize how the CDC, FDA, and Fauci handled communication and approvals. But almost all of their mistakes can be traced back to one cardinal sin: they see their role as influencer, not truth teller.
They asked themselves: if I tell the truth, what are the consequences of that action? Will people buy masks so that healthcare workers don't have any?
If I approve this vaccine early and it's not safe or effective am I responsible for any deaths that may result?
I think we overvalue the consequences of our actions and undervalue the consequences of our inaction. Also it's hard to measure or have a good intuition about the second or third order effects.
Also I can't imagine any especially insightful advice about DNP besides you might die, do too little, take your temperature, have a buddy who can check on you, don't drink alcohol, take plenty of water, don't go to a club or anywhere where you could get too hot, don't exercise, I'm sure there are others.
And while DNP is dangerous it's probably not as dangerous as a drug like heroin where many of us liberals have accepted sharing harm reduction strategies as more beneficial than pretending like you can't reduce harm.
It seems like you want to be careful about harm reduction strategies for things most people have never heard of, though? Maybe there is a way to limit the audience.
I guess I don't understand how it's ok to talk about DNP, it's ok to talk about heroin, it's ok to talk about harm reduction from heroin use, but it's not ok to talk about harm reduction from DNP use.
It seems to me the most dangerous position to be in is to have knowledge of a drug, but lack harm reduction knowledge.
For medicine specifically, there is a very good reason 'first do no harm' is a guiding principle. For almost the entirety of human history, medicine was more harmful than not. It's a running joke throughout pre-modern literature that poor sick men survived and rich sick men died because rich men could afford doctors who then killed them. The most common treatment for diseases, bloodletting, actively weakened sick individuals, and until the invention of penicillin in 1941 there was basically nothing a doctor could do about an infectious disease anyway except provide supportive care. Yet bloodletting persisted as a cure-all because bloodletting appeared to make patients temporarily better: a sick man who had just been bled would have a slower pulse, a lower temperature, and would 'sleep' peacefully, and the doctors at the time didn't have the understanding of the human body to know that these signs, while they may have appeared encouraging, were actually signs of harm.
Over history, the human cost of too-aggressive doctors is likely very high. The history of puerperal fever alone is nightmarish (in short: for millennia women gave birth without doctors, mostly successfully. Then doctors started getting involved in childbirth. At the time, nobody knew that you really ought to wash your hands, so doctors didn't wash their hands before examining a woman giving birth, thus introducing bacteria and making her sick. Deaths of women who had just given birth rose as high as 63 per 1,000 deliveries in some places.)
We have now gotten very, very good at combating infectious diseases. That expertise does not translate into other areas of medicine. My impression is that we're still in miasma theory territory when it comes to metabolic diseases. Miasma theory isn't a bad explanation of the observable facts: sick people, especially very sick people, often smell, and people who have been in the same space as sick people often get sick even if they never touch the sick person or any of the sick person's things. And miasma theory accidentally led to some innovations that helped (improved sanitation in cities reduced disease even if it wasn't the smell of shit in the streets causing disease, and masks to avoid inhaling smells probably incidentally also helped avoid inhalation of airborne diseases) and some innovations that were worthless (burning incense and the like).
And that's the impression I get when reading about metabolic diseases. We have some theories that seem to explain observable facts fairly well some of the time, and some interventions that do indeed seem to work for a lot of cases, but we know much less about the intricacies of how the body regulates itself than we pretend. That's when we run into trouble. There is a very long history of weight loss drugs that turned out to be far, far more dangerous than being fat. Look at fen-phen: despite promising trial data, it caused heart problems so severe that it led to at least 50 deaths before it was withdrawn from the market. At least 175,000 claims alleging harm were filed against the pharmaceutical company who made it. Look at benfluorex in France: 2,000 people died of heart problems caused by it before it was withdrawn.
We are also missing a critical link in our logic: we do not know if making fat people thin actually makes fat people healthier. This has not been studied long-term! It's just an "everyone knows" assumption, even though we have experimental work (Jules Hirsch's experiments) and epidemiological data (the various "obesity paradoxes", studies about things such as the Dutch Hunger Winter, studies into weight cycling) that suggest this might not be true, or may only be partially true, or may be true only in some cases. And that's without getting into what appears to be corruption in the field of obesity research itself (namely, that many big players in the field run weight loss clinics or sit on the boards of pharmaceutical companies that sell weight loss drugs).
That makes me real uncomfortable when someone with Scott's clout starts tossing out "why not take this drug you can totally buy online that could kill you because I assume it probably has health benefits". Scott may be right! But back-of-the-envelope calculations aren't how we figure that out. And people, I think, take away the incorrect impression that we understand what's going on when we don't, and that they can make an informed decision when they can't.
So I agree medicine was really bad in the past.
I agree DNP is incredibly dangerous.
How many lives would have been saved if we'd released the vaccines in July? I think it'd make 50 look tiny.
With benfluorex, how many QALYs did it save in its 33 years on the market? I'm not arguing it's safe, only that we're ignoring one side of the equation.
But the most important point is you read Scott's post and concluded "why not take this drug you can totally buy online that could kill you because I assume it probably has health benefits".
Is that really what you concluded from reading it? How much more likely are you to consider purchasing DNP after reading Scott's post than before?
I agree with CMN's general points, but I also agree that Scott didn't make a sloppy recommendation of DNP.
I can’t imagine giving the same amount of attention to 100 book reviews. It seems likely that they’ll get attention based on superficial interest or the order they’re posted.
One possibility would be to post them all at once and ask for each volunteer judge to read them in a different order, based on a randomly generated list. You can keep going down the list until you get bored and stop judging.
The interactive fiction competition (ifcomp) seems like good prior work for an Internet competition based on judging.
Re: the mass of book reviews - suppose there was a call for volunteers to read some subset of the entries and respond with yes or no to these questions: (1) Does this seem like a good fit for SSC/ACX? and (2) Is the writing of tolerable quality?
Would this meaningfully whittle down the number of entries to review in depth? I would volunteer to read a dozen or so with those questions in mind.
'I would volunteer'
Me too. I wrote a review, but there's bound to be some overlap between 'writes reviews' and 'reviews reviews'.
It’s a fair idea, but I’d bet that despite the unexpectedly large number of reviews, Scott will feel obliged to look at them all. And therefore will have done whatever whittling is necessary - he’ll be in a position to offer up a half dozen, a dozen, whatever, of the best for further chewing over by the commentariat.
Hello ACX commenters,
I am looking to collect databases from real businesses and business-like entities, including those that have failed or otherwise become "past-tense". Read on if you or someone you know might have access to such things.
(previous post generated a bit of good discussion... https://astralcodexten.substack.com/p/hidden-open-thread-1595#comment-1235343)
Background:
I'm a software engineer, specializing in data systems (i.e. a data engineer), with about 16 years in the industry under my belt. Something that's always frustrated me about the way that we design and build systems, is the way that knowledge fails to diffuse through the industry, because we don't _study_ what we do, and especially we don't study our failures.
As an example, the 2010s witnessed the full hype cycle (rise and fall) of "NoSQL" databases, such as MongoDB, Cassandra, DynamoDB, Riak, Aerospike, and many others. Did they turn out to be any good? Individually, in local circumstances, some engineers know the answer, or at least _an_ answer. Collectively, we have no idea. This knowledge only spreads as the primary sources write blog posts (mostly terrible), or move on to new jobs and tell stories (distorted by all sorts of biases). What we *should* be doing is studying what was actually built, out in the open, where everyone can see it if they're interested.
Additionally, I find it very difficult to teach other engineers about data systems, in a scalable way, without open example material. There are many online courses in SQL and things of that nature, but they always deal with trivially small, trivially clean data sets, without any of the richness or messiness of Real World Data. Many years ago, my own skill in dealing with data grew by leaps and bounds the instant I was exposed to real business data and asked to solve real business problems with it.
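To make the teaching point concrete, here is a minimal sketch (a hypothetical toy schema, not from any real data set I hold) of the kind of messiness that real business data carries and tutorial data sets lack: inconsistent casing, stray whitespace, duplicate records, missing values, and mixed date formats, all in one tiny table.

```python
import sqlite3

# Toy "customers" table with realistic dirt: duplicate people entered
# twice with different casing/whitespace, a missing email, and a date
# typed in a different format than the rest.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE customers (id INTEGER, email TEXT, signup_date TEXT);
INSERT INTO customers VALUES
  (1, 'Alice@Example.com ', '2020-01-05'),
  (2, 'alice@example.com',  '2020-01-05'),
  (3, NULL,                 '2020-02-10'),
  (4, 'bob@example.com',    'Feb 11 2020');
""")

# A naive count says 3 distinct emails; normalizing case and whitespace
# and excluding the NULL reveals only 2 actual people with usable emails.
(distinct,) = conn.execute("""
  SELECT COUNT(DISTINCT TRIM(LOWER(email)))
  FROM customers
  WHERE email IS NOT NULL
""").fetchone()
print(distinct)  # prints 2
```

Exercises built on data like this (deduplication, normalization, deciding what to do with row 3 and row 4) teach far more than the clean five-row tables most SQL courses ship with.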
To these ends, I am looking to collect real business data sets. I use the term "business" loosely, in the same sense that engineers often say "business logic". Non-profits, community efforts, personal side projects, these things all count. The key thing I'm after is custom-built databases, meaning they either started from a blank MySQL/Postgres/MongoDB/etc, or heavily customized an off-the-shelf system like Wordpress or Salesforce.
I recognize there are thorny issues here with respect to intellectual property and personal data privacy. I do not expect anyone to just hand over a database and wish me well. We would have to work something out, whether that's an NDA, or thorough anonymization, or whatever.
In any event, if you possess a data set like this, and *might* be willing to share it for research purposes, please reply here and we can figure out how to connect and discuss.
(This is the third time I've posted this. My plan is to re-post it periodically. If that runs afoul of any rules, written or unwritten, let me know and I can adjust.)
Addendum:
I recently became aware of this book https://fightchurnwithdata.com/ which might seem unrelated, but I mention because it contains specific details about how to model measurements of subscription customers and their behavior. This is the _kind_ of thing I'd like to be able to produce from my research: specific guidance on how to model specific business concepts and processes.
I'm curious if there's somewhere we can subscribe to keep tabs on the results of your work?
If anything comes of it I will at least mention it on my (terrible!) blog https://forgedinstars.blogspot.com/
Commenting so I can follow this comment thread
Your ideas are intriguing to me and I wish to subscribe to your newsletter.
I think about this kind of problem a lot, and have worked with the kinds of data I think you’re talking about. I’ve spent most of my career working with messy bibliographic data, product information and consumer data. Unfortunately I don’t think I could legally share any examples with you.
I’m not a software engineer; my background is library science and taxonomy. But my software developer husband seems to be spending more and more of his time solving the kinds of data problems I used to tackle with my minimal technical background. His area is oncology data, and I doubt he could legally share any of it, either. We could both probably give you some examples of how data became messy to the point of failure, since that sort of sleuthing is something we’ve both spent a lot of time doing.
Not a lot of results, but you may have better search terms: https://datadryad.org/search?utf8=%E2%9C%93&q=mysql
I'm advising a young company that's developing novel approaches to data analytics: organic database design powered by FPGA+NVMe hardware architecture. Any interest in hearing more and giving feedback?
Can you get substack to implement a "don't highlight" option when someone creates a post that only you can see?
One of the commenters in this thread mentioned something rather illuminating (for me):
> This sort of false sense of urgency is very common amongst conspiracy theories - the idea that the aliens are coming, that the Illuminati have control and are about to enslave us all, that the Storm is coming, that all police are secretly racist, that we're going to run out of food and all starve.
I would love to see Scott weigh in on that.
The pandemic is coming? The earthquake is coming? Climate change is coming? General AI is coming?
It seems like a sense of urgency makes memes spread faster and get discussed more, but it says little about whether an idea is worth investigating. It's a shallow heuristic. One should be somewhat skeptical that any meme optimized for replication is as urgent as it seems, but it may still be worth checking out.
That's a good point that it is not always obvious at a glance which scary ideas demanding urgent action are likely to be conspiracy theory dead ends instead of real urgent problems needing a solution.
Can't credit the whole thing, I'd 'snipped' it but I don't know from where.
The Sense of an Ending
In his 1967 book The Sense of an Ending, the literary critic Frank Kermode argued that human beings try to give significance to our short lives in the long sweep of history by placing ourselves in the middle of a narrative arc. That arc typically traces civilization's fall from a golden age through a current stage of decadence to an impending apocalypse—one that may, through the bold efforts of the current generation, usher in a new age.
"The great majority of interpretations of Apocalypse assume that the End is pretty near," observed Kermode. But since the end never arrives, "the historical allegory is always having to be revised….And this is important. Apocalypse can be disconfirmed without being discredited. This is part of its extraordinary resilience."
The dire prophecies of the first Earth Day have been mostly proven wrong, but the prophets of an always-impending environmental apocalypse have not thereby been discredited. Auguries of imminent catastrophe remain resilient, even as the world of 2020 is in a much happier state than the Catastrophists of 1970 ever expected.
From a book about art fraud-- if you want someone to pay too much for fake art, invoke need, greed, and speed.
From seeing a man be completely convinced about the dangers of drugs from seeing a sideshow exhibit: sometimes a sufficiently shocking image will cause people's minds to shut down.
Do we blame the urgency, or do we blame the theory? If a theory is of high quality, and suggests urgency, would we still question the urgency?
Or maybe the sense of urgency is just a bit of classic marketing. "You need to buy this PS5 today or you will fall into a deep depression."
Does anyone have a good recommendation for books on the pre-American history of slavery? I've read several books on American slavery, but am looking to expand. It seems like every single book out there is about American slavery. I'm looking for something that is more apolitical, just the facts.
Slaves were a big deal in Rome, how far back do you want to go?
The Vikings were huge slave traders, a fact that frequently goes unmentioned (looking at you, Assassin's Creed). There's a book I'd love to read called "Viking Age Trade: Silver, Slaves and Gotland". It just came out a few months ago. It seems to be a kind of pricey niche publication, so I might see if I can get it through my local library.
Part of the issue with that is that the meaning of "slave" changes in different cultures and different contexts. Classical slavery in Greece and Rome was a (somewhat) different sort of phenomenon than the specific form of chattel slavery that came to make up a significant chunk of the New World economy.
I'm sure a sufficiently thoughtful book could draw useful comparisons and do some comparative work, but it's not surprising to me that many historians are reluctant to go deep in that direction.
https://www.datasecretslox.com/index.php/topic,2863.msg85806.html#msg85806
I found an interesting photo poster of a Girl with a dog.
It was behind my mirror - it's almost a century old.
I think a lot about how to create effective organizations. Let's say that we want to run a big organization: a corporation, non-profit, or government. There are a couple of failure modes.
One is lack of structure, where every decision is taken willy-nilly and nothing is written down. This trends towards nepotism and corruption, where decisions are based on flattery and who-knows-whom in the old boys' club.
Another is bureaucratization and proceduralism, where everyone does things right but no one does the right thing. This seems to be something that comes creeping into an organization and is very hard to stamp out without burning things to the ground and rebuilding. Ossified companies fall to creative destruction; governmental agencies can seemingly live on forever. One explanation often offered for bureaucratization is that organizations respond to violations and accidents by adding more procedure to avoid repeats in the future, inadvertently stiffening themselves. Another is that middle managers and bureaucrats hired to perform important work tend to invent tasks for themselves during downtime, and these tasks don't go away when things get busier. I guess there are many possible just-so stories about the origins of bureaucratization. Does anyone know of more rigorous takes on this phenomenon?
A third failure mode is to have both: Strict regulations that can be skirted if you grease the right palm or know the management. This is akin to organizational anarcho-tyranny.
A fourth failure mode is to have a little bit of both: Just enough regulations to create extra work, just enough personal touch that the extra work doesn't matter and the management gets to decide in the end anyway.
What are the success modes? One seems to be to hire the right people and then leave them alone to do the job. Presumably they will create the level of bureaucracy needed to get the task done and no more. I'm not sure this is the secret sauce though. Is anyone aware of an example when "hire good people and leave them alone" failed?
Another success mode might be "meritocracy": simple, relevant and transparent rules/checklists/procedures/criteria where applicable, human judgement at the lowest level everywhere else. Reward results, but beware of Goodhart's law. (This is what I personally think is the best policy for governmental agencies.)
I would like to plot this on a two-by-two, but it seems like one axis becomes "good vs. bad", which isn't that useful.
Slack is probably another important factor: lots of slack can create amazing results (Bell Labs?) but can also create an adult daycare. Little slack leaves little room for experimentation, but it's also my impression of how some very successful companies (e.g. Tesla?) are run.
Thoughts and comments on effective organizations? Was "Good to Great" correct all along? Am I missing something important?
Two thoughts:
(1) You specified a "big organization," but I wonder how important it is to get things right when the organization is still small. Your list of failure modes is spot on - could each be a case of the organization mis-judging its current size?
(2) If you've never read Steve Yegge's "platform rant," do check it out. It's a description of how Amazon navigates these issues. My feeling is that this is the most important writing on management in the 2010s, far more significant than any business-focused book.
https://gist.github.com/chitchcock/1281611
I think there's a fallacy in assuming that a big organization automatically needs more bureaucracy. But I don't know what drives bureaucracy if not size. Maybe complexity of the product (in engineering)? Conway's law says that the product's structure will mirror the organization's, so a complex product should go hand in hand with a complex organization, which should drive bureaucracy? The best example of successful bureaucracy I can think of is SUBSAFE, which seems very complex, very bureaucratic, and hasn't lost a sub since 1963.
Thanks for the link! I'll read it and return if I have comments.
Bureaucracy seems to grow more in some environments than others. Organizations where the middle management are responsible for failures but do not reap rewards for success, for instance, seem to grow bureaucracy more than other types of organizations.
The mechanism is simple: if you get chastised for failure, you will create systems to prevent failure (policy, procedure, rules, etc.). If you are rewarded for success, you will want to massage those systems to best produce good results, which means adapting bad rules into good rules, and so on. If you take away the incentive to make the rules better, you end up with an organization that continually becomes more rigid.
On the other hand, if you only get the rewards and never the punishment, then you become corrupt and chaotic.
> What are the success modes? One seems to be to hire the right people and then leave them alone to do the job. Presumably they will create the level of bureaucracy needed to get the task done and no more. I'm not sure this is the secret sauce though. Is anyone aware of an example when "hire good people and leave them alone" failed?
I've been thinking about something from Albert Jay Nock. He claimed he did a good job of running a business by hiring good people and then "management by mumbling". When people came to him with a problem, he'd mumble until they went off and solved it.
> Is anyone aware of an example when "hire good people and leave them alone" failed?
Isn't this just a truism if you define 'good people' by them being successful at their job if you leave them alone?
That's the issue. Maybe we can define "good people" as "honest, competent people"? Honesty and competence can be inferred, right?