I'm toying with the idea of starting a blog. I feel this community, for good or ill, knows what I have to offer and how I write. I'm curious whether there are any thoughts on: first, what people would be most interested in me talking about; second, stylistic or delivery advice people would like to see. Thanks in advance!
I've been blogging[1] for... geez probably over a decade now and basically I do a couple of things:
1) Professional progress updates (I'm a game developer so I post monthly progress reports for a game I'm working on)
2) Analysis pieces on trends (again in the games industry)
3) Anything that I want to be able to put into one authoritative work and then be able to link to people from then on
4) I often use twitter, forums, or blogs as "rough drafts" for an article. They say the best way to get correct information on the Internet is to post wrong information, and all that. After I've felt out an idea and gotten a bunch of feedback and corrections to obvious mistakes I blogify it.
5) I have an academic background so I had to slowly work my way out of that style (never use the word "I", lean too much on passive voice) into a more informal one that fits me better
3) in particular has been useful for me and where some of my most read works have come from.
My only advice, as someone unsuccessful at blogging regularly, is:
1) to not overthink what you're writing about, where you're publishing, or any details that don't involve sitting at a keyboard and typing words. Set a schedule and start writing, and you'll figure out what topics resonate with both yourself and your audience. It's very easy to bikeshed unimportant choices when the #1 way to fail is to not get around to actually writing.
2) don't be embarrassed about self-promoting. Someone has to do it.
Thanks! I'm afraid I'm prone to overthinking. I do have a list of topics though. Also, I'm aware self-promotion is necessary. I'm not so much afraid of that. But I do feel like fame is a sword with no grip, as our host knows.
Thank you! I'm afraid if I talked about what I wanted it would be so incredibly scattershot that it would be hard for the blog or audience to have much of an identity...
You gave me the advice to go for an identity but can't your identity be a very intelligent and interesting blogger? What exactly is Scott's identity? He writes about everything but he's extremely successful.
I'd say Scott's a rationalist blogger that deals with the more humanistic side of things. Psychology, politics, philosophy, etc. The blog mostly appeals to rationalist type adult men, so well educated STEM types and the like. I'm not sure if he set out to appeal to a specific demographic but the blog has a very defined identity and audience.
One thing that I'd throw into the pot of advice is not to be married to the particular blog focus you decide on at first. I had an idea of what my blog would be at first, and then started writing those things and the fit/feel was off. I don't think it would have been sustainable in the original form I thought about. The nice thing about the beginning is nobody in particular is reading yet - you can try a few things and see what you like before it's affecting anything much.
You have a lot of knowledge about a bunch of things, so I suggest writing about what you know best. As for delivery, I think you write overly tersely and confidently. I suggest loosening up the writing style a bit, so it sounds like someone more your age.
Thanks. It's picking the topic that's the issue. I'm torn between the instinct to write about whatever interests me and the need to winnow it down to a specific theme. But certainly there are topics I know more or less and I'll stick with more.
Thanks again for the advice on my voice. I'm afraid this is pretty close to how I talk. But I can definitely soften my voice in a writing piece. There I'm trying to inhabit an explicit tone and mood. And that voice can be more uncertain and longform.
Ancient history, preferably. As far as delivery goes, I think for a blog post (as opposed to a forum comment) you need to expand on the details a lot, and it would help if you would cite your sources, because a lot of your posts on history are interesting but handwavy. For example, I didn't come away from your criticisms of Bret Devereaux (like his Marxist assumptions in the bread series) with any sense of where the Marxist analysis goes wrong or what the alternative is. I said privately at the time:
> [Erusian] says for instance that Devereaux's presuming that labor vs capital in this mode of production matters most for power distribution. Okay, sounds pretty Marxist to me, but where is it *wrong*? What matters more?
Yeah, my comments are kind of handwavy because they're me just sitting down and saying what I think. It's also why they have typos. Actual posts would, I hope, be higher quality. This is in some way graduating to a higher level because I've reached the point where I'm being treated seriously.
My threshold for interest in reading someone's writing is lower when it's possible for me to respond, and lower still when it's reasonably likely that a conversation will result.
This is true in general, and not at all specific to you.
Notability rules and the cadre of "We'll delete any article on this subject no matter how well researched, sourced, and written!" deletionists make it difficult to mentally invest in anything but minor fixups.
Same. Also: articles on important subjects that are permanently watched by obsessives with axes to grind who know how to wikilawyer all contenders to exhaustion really sap the will to bother out of me.
I do wish there was a "big database of human knowledge" fork of Wikipedia that continually pulls in all the Wikipedia content while also allowing extra articles: a lower bar for notability, less concern about whether a topic is befitting of a traditional print encyclopedia, curation of effort-articles which were deleted from Wikipedia, etc.
My dream is an "evidence clearing house". A site where you just input data (and links to data), and then aggregate claims based on that, in a hierarchical way where authors recursively build bigger and more interesting claims out of hierarchies of smaller claims, with all claims ultimately backed legibly by data.
+1. I have considered contributing to Wikipedia but the thought that some stranger can just revert it stops me from doing so. Would I even be notified if my work was reverted? Could I appeal? What's the probability that I get reverted? Without some assurance there isn't a delete-happy editor overseeing whatever I edit, I'm reluctant.
I started to recently, mostly making small improvements to articles I read rather than adding substantial amounts of new information; I have not noticed the problems other commenters mention, though that is probably because I haven't been active there for very long.
Editing wikipedia seems like homework. I would rather write on my blog if I am writing something and have it attributed to me. It's probably just a psychological difference. I can't say either side is correct in this.
Prior to becoming an active Wikipedian I think I had this idea about categorizing everything I read by topic, but then I realized that this was basically the function of the encyclopedia. So now what I do is I blog or comment if I have a Super Original Take(tm) or a life update, and write wikipedia text for everything else.
My eleventh GA is currently at FAC and I passed 10k edits a couple weeks ago. Multiple people have told me they have my RfA watchlisted; some of them are pretty big names.
I have gone through a great deal of trouble to acquire the highest level of expertise in several subjects, none of which is internal wiki-admin politics. So, no, I don't edit Wikipedia. They've made it clear that I will be at most grudgingly tolerated, and there's no respect or reward in it for me. And if your plan is "other people will produce valuable content for our project because it is the Right Thing To Do; we shall reward them with Grudging Tolerance", then they can sod off.
I do on occasion write essays and articles that other people use as sources when they edit Wikipedia, which seems to me wholly superior in that A: it results in wider distribution of knowledge, both through Wikipedia and through specialist media, and B: it results in more credit for me.
A lot of my North Korea work is at 38 North, https://www.38north.org , though I had to pull back from that when my classified day-job work started overlapping my open-source arms control and non-proliferation work.
The other day I glanced at the Wikipedia entry on Fabio. I noticed that the article mentioned his sponsorships and reality TV show, and the goose incident, but didn't mention that he is primarily known for being on the cover of romance novels. I left a comment asking about this on the talk page, and now it's there, but still only as two minor sentences, as opposed to full paragraphs about his sponsorship from Nintendo.
I stopped when every change resulted in a comment that said, "please read these 10000 word policies on notability when it comes to films, tv shows, plays, entertainment, entertainers, and jugglers before contributing."
Currently, I almost never edit Wikipedia, but I used to.
The three and two half reasons:
- the software has been getting harder and harder for me to use. I'm very comfortable with markup; not so comfortable with tool bars etc. And the templates (macros) get more and more complex - and less and less documented - every year, as well as being more and more required.
- (half reason) the rules for how to write, what may be written, what constitutes a source etc. seem to get both harder to follow and more often ignored.
- Wiki-politics is ugly, and tends to drive away editors and administrators I like and respect. I also get the impression that wikimedia foundation, like Stack Exchange, is keen to monetize a volunteer effort they are also sabotaging, e.g. by tone deaf attempts to promote oppressed minorities.
- I just don't have the time.
- (half reason) I'm rather more of an essayist than e.g. a science journalist. I don't especially enjoy writing balanced, carefully-sourced descriptions of consensus knowledge. And it's a complete PITA looking for "reliable sources" for e.g. particular computer algorithms and where they are used. (Note "reliable source" is a technical term on wikipedia, and while it's intended to mean what it says, it's actually more of a set of bureaucratic regulations.)
I tried a few times, but never to good result. One set of edits in particular really turned me off, where I was updating wiki articles with citations to relevant articles in the academic journal my PhD advisor ran. Sometimes this was just a further reading link; other times I would add a paragraph or so if the article was in conflict, etc.
Every single one of these changes got rejected, sometimes with a nastygram attached to the effect of "we don't like your kind here" and little else. Whether this is due to political economy being a little touchy a subject, or the general culture of Wikipedia, I don't know, but that put me off pretty well.
Nope. High quality writing doesn't come easy to me, and I'd get stressed and a bit obsessed. Also, when I go to Wikipedia it's usually to learn stuff that I don't know much about, so I don't often bump into articles to which I could contribute significantly without having to study quite a bit first (and I bet there aren't many anyway). Occasionally, I read stuff that I feel I could improve a little, but not often enough to make it worth learning the Wikipedia rules and inner workings.
Never had visions, but I always try to pay attention at that border. Thoughts controlled by what I think of as 'me' get increasingly scrambled, but I try to follow the process. Usually my body will experience a little myoclonic jerk at that liminal point.
Rarely, when I'm lying in bed, I will have hallucinations (usually still or moving images, rarely audio) that are clearly the start of my brain's dreaming subroutine, but occurring while I'm still somewhat lucid, mobile, and able to abort the going-to-sleep process. When I was a kid and did a lot of bedtime reading, this usually took the form of suddenly realizing that I had closed my eyes from tiredness, so it did not make sense that I was still perceiving a book full of words. These days I think it's rarer, and it's more like experiencing a couple seconds of a totally random dream with no connection to my current experience.
I have hypnogogic hallucinations sometimes upon waking or approaching sleep, one of many fun symptoms from having Narcolepsy. I describe it as having scenes from a dream superimposed on reality, kind of like painted animation cels.
It started a year ago. When I fall asleep I immediately begin to see dreams for 30-60 minutes, but my brain can't turn off, and as soon as I fall asleep for the first time I instantly wake up. After that I fall asleep normally.
I had infinite universe black/white visions as a child. They didn't come during sleep/wake transitions but came at random. They were visions in the true sense that I lost my regular sight and started seeing this stuff. It was nuts.
I get hypnagogic hallucinations while falling asleep or waking up, with associated sleep paralysis, but they are generally auditory-only, or if they're visual at all they're incredibly vague and indistinct, more like confusion over the interpretation of what I'm seeing vs seeing something that isn't there.
I assume this is related to the fact that I seem to be aphantasic (or uh, hypophantasic, if that's a word). This thread prompted me to go read a bit about aphantasia, and to discover this recent new study on the topic: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7308278/
Discussion of scam ticket-selling sites. They resemble the actual venue, but vastly overcharge and don't deliver tickets reliably.
The federal government makes some efforts to stop them, but is fairly ineffective partly as a result of not being well-funded for the size of the job, and I suspect that the size of the job keeps increasing.
This gets to the question of what can be done about fraud, and also issues of what makes for a high-trustworthiness society. If a tenth of a percent or so is defecting a lot, they do a good bit of damage.
Re point 3, I regularly experienced an "expansion" feeling, something like what Shulgin described, while falling asleep when I was younger. It was like my body was inflating or being elongated to incredible lengths. (I've occasionally wondered if Lewis Carroll had similar experiences which inspired the size change magic in Wonderland.) I don't recall anything quite like those other visions, though, certainly not on a regular basis.
When I was a kid I had similar experiences whenever I would have a high fever - sensations of my body being enormous, or extremely small, in an infinite and otherwise empty universe. It would always freak me out. I assumed the feeling was happening because I was sick.
"Then there was a recurring one I can barely describe except that I suddenly perceived all spoken words as large, puffy, rubbery letters, like gigantic pasta shapes, that I was digging through with my hands as I processed their meaning. I would only understand the words once my fingers felt the tiny “bones” inside them, but the sensation of finding those bones made kid-me start BAWLING."
Huh, I always thought that segment of Comfortably Numb was artistic license, or "for effect". Thanks for sharing, this is an interesting shade of the human experience!
Hi! Amazed to hear your description. I had this too for many years as a child. However it would happen without a fever, and without going to sleep. I would just randomly be playing in my room and then suddenly have these sensations/visions of infinitely sized objects next to impossibly large/small ones.
Question: do you remember if your visions were in color or monochrome?
I don't remember for certain about the chromaticity of my fever visions, but since I almost never have regular dreams in color, my guess is that they would also have been the same. (Now I almost want to run another high fever and do some self-observations!)
While lying in bed, I sometimes got the sensation that my room was far larger than it really was, with the walls and ceiling very far away. I wouldn't call it a "vision", because nothing *looked* out of the ordinary; the sensation was separate from my eyesight.
I have the same sensation, especially when not feeling well (but sometimes when I feel fine). In my case, I attributed it to a practice of trying to make my room as dark as possible, especially eliminating well-defined light sources. If the only thing I can see when trying to sleep is just the diffuse glow from the translucent curtains covering the edges of the blinds, there really isn't anything to use as a reference for scale, and my tired mind probably loses the ability to judge distances with binocular vision. Fixed light sources break the effect, especially if they're not red. My current house has a smoke detector with a small green LED, which is sharp enough to fixate on if it's in my line of vision.
I used to have similar experiences as a kid, especially when sick, of my body parts pulsating between being immense and tiny. Not visual at all, just at the proprioception level. Not an unpleasant experience necessarily, just weird.
Same here! Exact same hallucinations when I was sick or feverish, or sometimes just when I was trying to sleep. It was relatively rare and stopped completely as I got older.
This definitely also happened to me. I attribute it to my "body image" not having fully coalesced at that age. I am still able to modify the shape of my proprioceptive body at will, but it doesn't go haywire on me like that when I'm not paying attention any more. This coincided pretty closely with the age (pretty late, maybe mid-teens) that I stopped being sort of surprised and taken aback every time I looked in a mirror because what I saw seemed unexpected somehow.
On one hand the article seems to suggest that this technology is not very effective. On the other hand it expresses concerns about exacerbating inequalities. I feel like at most one of these two concerns can be valid; not both at the same time.
There is a section "unintended consequences" that says that selecting for high IQ would also increase the risk of bipolar disorder. What's not mentioned is that selecting for high IQ would decrease the risk of most other diseases. Bipolar disorder is an exception in that regard. Genetically, most good things cluster together with other good things. So the majority of unintended consequences of selecting for high IQ would actually be positive.
Keep in mind that scientists in western countries can't simply publicly endorse polygenic embryo screening without taking major career risks. Condemning it on the other hand is relatively safe.
Suppose somebody's been exposed to a mutagen and then has kids. A mutagen is likely to cause thousands or tens of thousands of mutations, most of which don't do anything and most of the rest of which are bad. As such, the kids are going to have problems in lots of different ways.
If you have a population, half of which has been exposed to mutagens and the other half of which hasn't, there will therefore be a positive correlation between different good outcomes in their children - because the non-mutated kids are going to be statistically superior to the mutated kids in almost everything.
There are a variety of things which can cause high mutational load IRL, like maternal/paternal age, hence these correlations. I'm *not* sure whether this correlation would hold up when performing IVF, as the mutational load of the parents is largely fixed (it's the same two people, at the same time in their lives).
It's because a single broken gene can have negative effects on multiple functions. E.g. consider all the symptoms of Huntington's disease. The same thing happens for mutations with less dramatic effects.
I think it's just a consequence of the fact that good things cluster together phenotypically. High IQ tends to be correlated with all sorts of positive life outcomes, including an (often slightly) lower risk for many diseases. Probably not because there is a direct causal link between IQ and disease, but because there is some other common underlying causal factor. If that other common underlying causal factor is partially genetic - and it will be, because everything is partially genetic - then this will mean that on average genetic variants that are associated with a higher IQ will also be associated with lower disease risk.
I feel I should point out to you that this is not an explanation - it's just restating the fact that good things go together, without explaining why. The reasons *why* are pleiotropy, non-independence of criteria (i.e. high IQ makes you get higher test scores which makes you more competitive in the labour market which makes you richer) and the non-uniform mutational load which makes broken things correlate with other broken things.
You're right, my explanation doesn't go all the way down the causal ladder, it's just stating that we should expect selection for IQ to have mostly positive side effects if it mostly has positive correlations with other positive traits (low disease risk). Positive phenotypic correlations usually translate into positive genetic correlations.
I don't know if we can be certain at this point about genetic mechanisms that are causing this. The mutational load argument sounds compelling and is certainly true to some extent, but polygenic predictors and genetic correlations are both based on common genetic variants that have very little in common with the very rare genetic variants that people have in mind when they talk about mutational load. The reason for that is that on average you can predict a polygenic trait much better by considering a large number of individually unremarkable, common variants, than by considering a necessarily smaller number of rare, large-effect, mutational-load variants.
The variations are likely random, but remember that the initial genome being varied is not random: it has survived lots of selection and is far more functional than a genome created at random. So random changes are much more likely to have negative rather than positive effects. Therefore if one system is observed to have been corrupted by random changes, it is likely that others are as well. By analogy, a car that has had parts swapped at random with those of different models is likely to end up worse in every measure, because the original car was a carefully selected combination of parts.
To me the article seems generally on board with the idea as long as its caveats and limitations are properly communicated, although someone else might read this as "polygenetic selection is a scam and should be banned". It did suggest that companies shouldn't be charging for this until they can prove it works, which seems like a great way to kill it, but it's probably not that expensive to throw in a few extra tests into disease screening so maybe it could still work. The reference to the evils of eugenics seemed like it was only there because it had to be included, it's important to regularly remind everyone that you think Involuntary Sterilisation Is Bad.
"someone else might read this as 'polygenetic selection is a scam and should be banned'"
Not something that had occurred to me prior to reading your comment, but oh yeah. Every cowboy and dodgy doc in the business will jump on the chance to offer "polygenic screening for a better baby!" the way dodgy clinics are currently offering "stem cell cures".
"There is a section "unintended consequences" that says that selecting for high IQ would also increase the risk of bipolar disorder" -- OH NO, NEURODIVERGENT PEOPLE!
From where I'm standing, allism and simultypy are severe, crippling disabilities that are obviously completely incompatible with any kind of good life, and purging them from the population is a self-evident good, and any allist or simultype who disagrees with me is just blinded by having a weird brain that thinks wrong things. *Mysteriously*, this position is far less common than its inverse.
"Schizo" to "split" gives you "simul" to "together". I used to identify as autistic, on account of being diagnosed with it for longer, but "what were you diagnosed with first, according to what was trendy when you were a kid to label a given cluster of weird kids with" is not the be-all and end-all of a given person's neurotype and if the schizotypal liberation movement ends up consisting of me and me only, well, that's an improvement on it consisting of no one.
I am unconvinced of the degree to which I believe the statement above and the degree to which I think it's a worthwhile thing to say independent of its truth, because of everyone else believing the exact opposite thing. I *do* strongly and sincerely believe that letting allistic and simultypal people decide what the correct balance of neurotypes is may as well be signing society's death warrant, that there's no possible way it could result in a non-dystopian world, and that the current balance of those neurotypes in the population is too low rather than too high.
I realise you've already said you're not sure whether this statement is true later in this thread, but I'm curious about this scenario, regardless - so here's a genuine question from my point of ignorance:
If we were to treat allism as a disability and mankind were to abandon empathy (as in, the illusion of knowing what other people are feeling), is that not a loss from a game-theoretic point of view?
I'm unfamiliar with how cooperation with people you expect to only ever meet once arises and remains stable in this scenario (something that seems fairly crucial to civilisation as we know it). Would we depend on laws and their consequences to regulate all those scenarios (bland example - the person you just gave five dollars to should give you the thing you're buying with those five dollars, otherwise they will be punished by society)?
Or is the argument that you shouldn't, precisely because it's illogical (and therefore civilisation as we know it potentially needs serious changes)?
I'm reasonably sure this is probably covered by a 101 somewhere, but I'm not sure how to find it. Feel free to just slap a link my way that covers this!
I think lack-of-cognitive-empathy *is* one of the disabling effects of autism. My experience is you can solve much of this problem with psychedelics, so...work backwards from there.
As an autistic person, my experience is that non-autistics have precious little empathy for autistics, to the point of resembling the caricature descriptions of autistics sometimes produced.
AFAICT, most non-autistics get away with projecting their own feelings (in similar circumstances) onto others, and calling that "empathy". They get away with it because when they do this with "normal"/high status people, they are often right, and those judging them don't care about "weird"/lower status people.
Strongly agreed. I first noticed the pattern when reading Temple Grandin's memoir and realizing what a hard time her mother had empathizing with her.
Also, what passes for empathy with neurotypicals can be pretty mechanical-- a hand on the shoulder at the right time can make people feel a lot better even if the person supplying the hand isn't feeling much of anything.
Indeed, and when autistic people project their own feelings/desires on others, this is called a lack of empathy.
It's really rather ugly. Like a supercharged version of the stereotypical 'ugly American' who starts shouting at a local who doesn't understand English when visiting a foreign country. Only this time the person is even accused of being illiterate, their mastery of the local language being completely ignored.
I wonder how much of the alleged normie empathy is actually recognizing someone else's emotion... and how much is an unwritten social contract to be so bland that they can freely project their emotions on each other, and to never contradict each other when doing so.
If the normie has a model of you, and you show that this model is wrong - for example by expressing interest in something the normie considers boring, and therefore automatically assumes you also consider boring - the normie feels discomfort, because their illusion of empathy was broken. The normie will then project, and blame you for not having empathy.
I don't think that's it. I think the normie thing is to have beliefs (predictive processing?) which are usually not too far off from most people's actual emotions.
Empathy the skill and empathy the character trait are different things.
Autistic people aren't *inherently* great at empathy-the-skill targeting neurotypicals, although we can learn it. (I'm not sure of the state of the research on how well autists read other autists.)
We have empathy-the-character-trait, though. That is to say, we care but don't always know how you feel. (This is the reverse of a sociopath, who knows but doesn't care how you feel.)
The majority of autists find lying deeply upsetting and don't generally do it, so I imagine you'd actually have to worry about fraud less than usual. There are potential issues with an all-autistic society, but it's the sort of thing that's probably worth a try to see if it pans out.
Anecdotes are not data, I realise, but my exposure to one case of someone with schizophrenia did not convince me that they were living their best life.
Regularly going off their meds, losing their job because of that, then spiraling down until they came into the office claiming that people were breaking into their house to smear faeces on their walls, visibly upset and distressed and convinced this was true (it was not) - I don't see that as helping that person or society in general.
Best to worst case: neurotypical -> neurodivergent but can stay on their meds -> neurodivergent, can't stay on their meds, become more and more enmeshed in their disorder until they're a danger to themselves and/or others.
Neurodivergent is a wide category. So wide I don't think it can be said to be better or worse in aggregate. In specifics, sure: the kid with an 80 IQ who can't communicate with his parents has a painful life. The adult with a high-paying job in a technical field who doesn't give two shits about social pressures to spend beyond their means has a great life.
Anecdata is not data. My anecdata is that psychosis is not an impediment to my life, and is often an improvement to it. My anecdata is also that I've never taken neuroleptics and don't plan to; there is a fair amount of interesting suggestion in the direction they are not a net positive, but rather a contributor to the disability that people with some neurotypes experience.
Fun for you. Not so fun for the people who have to deal with you pissing and howling on the floor. If you aren't one of the people who get so bad that they do end up pissing and howling on the floor, congratulations. But then what you are saying is "I can manage my symptoms/my symptoms are not so bad that they impinge on my ability to lead an independent life".
You're like someone who says "Well I only need glasses, plainly trying to cure river blindness is oppression of the differently abled by the ignorant and repressive majority!"
When nerdy people say they are "a little autistic," are they being a little cheeky, or do they mean it in a way that is continuous with a clinical diagnosis of autism? Is it legitimately just a matter of degree?
That makes sense. I recently heard myself described as "a little autistic" for the first time, and while that description makes a lot of my traits more legible, I'm still not sure how much I or other very mildly autistic people should lean into it. At first glance it feels vaguely insensitive to use the same term to both describe my excessively mechanistic approach to social interactions / inability to tune out background noise, and someone who is autistic to the point of being non-verbal.
I’ve wondered if the whole “spectrum” idea might one day fall apart, or schism into multiple spectra. I personally have never understood how we came up with a spectrum that has my high-functioning friends on one end and the kind of profound autism that leaves you institutionalized on the other. The traits we identify with autism don’t seem like they slide along a scale the way something like hearing loss does.
Consider for instance Temple Grandin. She was non-verbal as a kid and only learned to speak after speech therapy. She was falsely diagnosed with brain damage. It seems like the only thing separating her from a "classical case" of autism was the dedication of her parents. If you agree to put Temple in the same category as a "classical autism" case, then what is the difference between her now and your "mildly autistic" high-functioning friends?
I’m not sure I had a “classical” idea of autism. But there are a lot of people on the spectrum who are sufficiently high functioning that they’re only diagnosed as adults, if at all. And there are others in institutions whose condition would seem to defy any amount of parental or professional dedication. Surely there’s some difference between those two groups, and maybe that difference warrants two distinct diagnoses.
In my view, it’s totally fine to use descriptions along the line of “a little bit autistic”. Unless you are in the context of a psychiatric institution, the majority of autistic people that you’ll interact with are probably going to be the high functioning ones (such as myself), and there really is a meaningful category of people that have some traits associated with (predominantly high functioning) autism, while not being to the point of qualifying for a diagnosis.
I do find it a bit odd how much popular perception of autism conflates the high functioning (to the degree that you might not know they have autism from a casual interaction) with the low functioning (to the degree that they cannot really live without significant chaperoning).
This is similar to my view. I have never been diagnosed myself, but my son was. What the doctor described as the tells/symptoms were things I myself had as much or more, but had learned to cope with. (My son had sessions with a therapist for a while and now has no obvious or outward signs that are distinguishable from regular shyness or social awkwardness. He is also high functioning and doesn't meet most people's perceptions of what autism looks like, though it was more obvious before the sessions).
I don't tell people I'm autistic, because I'm high functioning enough that very few people would ever notice and I was never diagnosed. I even work in a people-oriented field, having figured out lots of workarounds to being a genuine people-person.
> I'm still not sure how much I or other very mildly autistic people should lean into it
In general I think we are doing a bit too much leaning into labels. Once you decide who you are is a foobarist you end up buying foobarist clothes, going to foobarist parties, and agreeing with foobarist policy positions. It’s true that this can give a comforting sense of belonging but it also closes you off to a lot of possibilities that you end up rejecting out of hand because they aren’t foobarist.
Tried the questionnaire, got to "I am fascinated by dates (agree/disagree)" and couldn't figure out whether they were talking about fruit, social interactions, or calendars. Unfortunately my answer is very different depending.
To add on to this, it is to be noted that very, very often we *hear* 'spectrum' but *envision* a _gradient_ — i.e., a scale going from more to less affected by a 'color' (the condition), when the intended meaning is usually a collection of different 'colors' (ways of being affected).
I think (as Brian said), that the clinical diagnosis is a spectrum, so that helps it be consistent. But it’s also a good shorthand for a lot of behaviors and traits; saying someone is “mildly autistic” actually has a pretty high information content about them. Even if a few bits of information are wrong in what’s conveyed, the mass of information can be worth it to use as a phrase.
wrt whether it’s insensitive, maybe? but it’s not clear to me who it’d be problematic for. I think people can be offended by basically anything, but generally I think if you’re using it as a legitimate descriptor, rather than as a pejorative term, you’re within the normal bounds of insensitivity, since it’s a spectrum and there isn’t anyone who seems to have a clear case for being offended by its usage.
Obviously if someone can present a clear case for why it’s problematic I’m open to changing my mind, but otherwise I view it as other spectrum words like “disabled” “hearing impaired” or “buff”
They're probably being cheeky. I could see this sort of thing as mildly bothersome to autistic people. But psychological conditions do seem to be on a spectrum. So if someone does describe themselves as a little autistic, what they are saying could make sense depending on why they said it. Just like someone could say they're a little bit borderline or bipolar or traumatized but generally those terms are reserved for more serious cases. I think everything is just a matter of degree.
It's legitimately just a matter of degree, and an inconsistent one. I have an ASD diagnosis since toddlerhood; most people in the ratsphere, including the ones who swear up and down that they're not, are far more characteristically autistic than me.
Can't speak for nerdy people generally, but when I say it I mean I have something like a subclinical form of autism. I have all the usual deficits/strengths, but they're minor enough that I really don't think I'd be diagnosable.
I suspect there are some who are cheeky, some who are trying to point out some traits on the continuous spectrum of symptoms, and some who are doing a little of both, often inadvertently. It is difficult to self-diagnose one's own psychology, and it's contingent on the reference points one has available.
They're doing the best they can to describe how they think their brain works, now that the Very Serious Medical Authorities have told them they're not allowed to use the word "Asperger's" for that purpose. I'm not sure it's an improvement to treat everything from what used to be called mild Asperger's, to someone who will spend the rest of their life in an asylum screaming, a "spectrum", but that's what we're stuck with unless we want to fight a language war to reclaim "Asperger's".
Wasn't a lot of this a push-back from parents of autistic kids, who didn't want their kids to be stigmatised as the "have to wear a helmet 24/7 or they will beat their brains out against a wall" (an instance I encountered during my working life) type, so they insisted that the Asperger's and 'high functioning' be folded into the entire autism definition and that it be put on a spectrum?
There *is* a wide spectrum between "socially awkward but can handle their symptoms" to "noticeably autistic but functional" to "will bite off their own fingers, will always need institutional care" autism, but I think that the very severe cases are not on the same line as the autists formerly known as Aspies.
I hadn't heard that theory as to the origin of the "spectrum" diagnosis, but it seems plausible. And I agree, we may not be dealing with just a single axis where the helmet-Autistics just have more of what the Aspies have. Actually, now that I think about it, this is something I'd expect Scott to be familiar with; he's written a few things sort of adjacent to it before, but I don't recall him addressing it directly.
I'm going mostly on what I've heard/read elsewhere. Currently working in administrative capacity in a centre for children with additional needs. They get referred to it by the local Early Intervention Services Team. Includes children with autism but also other developmental needs. Generally "mild to moderate" cases. I haven't seen any of the severe cases in this job.
The really severe one was a previous job, where part of the work was administering grants for home improvements for medical reasons. One family had two teenage/young adult sons with autism, and they wanted to build an extension to have a padded room so the older son could go there when getting stressed out, and so he could take off the motorcycle helmet he had to wear 24/7 because in the padded room even if he did knock his head against the walls he couldn't damage himself.
I haven't seen any of the 'savant' types. I do think keeping Asperger's Syndrome separate was more helpful, as there are definitely 'clusters' - the mild to the moderate to the severe types.
I worry that a lot of it is the same as when type A personalities describe themselves as "a little OCD". It's possible that some of them are right, but I've got to imagine that a lot of self-diagnosis is just people who have no idea what the real condition is like.
Depends on what you think about taxometrics. If you take their word for it, there isn't a bimodal distribution of autistic and non-autistic people.
Troll answer: they don't know any better than to compare two (morally loaded) concepts which have significantly different magnitudes. In this case, nerdiness and autism.
I'm starting a self-experiment to see if melatonin can help prevent me from waking up too early, and am hoping to get some feedback/critique of my experimental design. Anyone have any suggestions?
I realize melatonin is typically used to control when you go to sleep, but I'm hopeful that it will last long enough in the bloodstream that it might impact time asleep as well.
- Doses: 0, 0.3, and 3 mg, taken 30 min. before bedtime
- Randomization: pills placed in opaque gelatin capsules, randomly assigned to each day by an assistant (i.e. blinded)
- Measurement: time fell asleep, time woke up, time asleep, HRV, sleeping pulse, fasting blood sugar, change in blood sugar, measured by Apple Watch/Autosleep app & Dexcom G6
- Analysis: effect size and p-value tested for melatonin vs. blank. If meaningful effect size is observed, experiment will be repeated with that dose to confirm.
*Any suggestions on improving the protocol or other interventions would be greatly appreciated.*
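For what it's worth, here's a rough sketch in Python of how that analysis step could look. The numbers are made up for illustration (not real sleep data), and the specific choices of a Welch t-test and Cohen's d are my assumptions, not part of your protocol:

```python
# Sketch of the planned analysis: compare minutes asleep on melatonin
# nights vs. placebo (0 mg) nights, using hypothetical simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical minutes-asleep measurements, ~2 weeks per condition.
placebo = rng.normal(loc=420, scale=25, size=14)    # 0 mg nights
melatonin = rng.normal(loc=445, scale=25, size=14)  # 0.3 mg nights

# Welch's t-test (doesn't assume equal variances between conditions).
t, p = stats.ttest_ind(melatonin, placebo, equal_var=False)

# Cohen's d as the effect size, using the pooled standard deviation.
pooled_sd = np.sqrt((placebo.var(ddof=1) + melatonin.var(ddof=1)) / 2)
d = (melatonin.mean() - placebo.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.3f}, Cohen's d = {d:.2f}")
```

With three dose levels you'd run this pairwise per dose against the blank, which is one more reason to pre-register which comparison is the primary one before unblinding.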
A variant you might try is to set an alarm for 2 or 3 hrs after you go to sleep, and take the pill then. This is my go-to trick for periods when my sleep cycle gets funky and I wake up nightly after 4 or 5 hours and can't go back to sleep. Don't think I've ever done this with melatonin -- have used diphenhydramine or lorazepam. For me, anyhow, an alarm that goes off in the first 2 or 3 hours pulls me out of a dead sleep, and I can go right back to sleep after taking the pill. Taking the med at that point in the night means it kicks in at around the time I would normally have had my too-early spontaneous awakening, and its effect lasts til I've been asleep 8 hours or so.
Interesting. I'm only waking up 20-30 min. early, so I don't think this would be worth it for me. I'll recommend it to my wife, though. She frequently wakes up 3-4h after going to sleep.
I'm struggling with how much I can expect from a partner with ADHD. We have a young son who requires a lot of attention, and my husband just doesn't seem to have the patience required to spend long periods of time playing with him. I can deal with it, but I'd sure like to just hang out and play video games also. Am I being ablist to think he should just suck it up and split the time equally?
I'm not really qualified to give advice here, but is there something else helpful he could be doing? If he genuinely can't handle long play sessions it won't be good for anyone to try to force it; but if he spends that time doing something fun and easy and unproductive, you will understandably be frustrated and also somewhat suspicious (not necessarily that he's being cynical or deliberately dishonest, but that he's subconsciously making more of a choice than he'd admit to himself or you). If there are household tasks that he can handle, and that you might otherwise have to do yourself, maybe he could put some extra time into those while you're playing with your son.
I would try to recast your question to ignore the ADHD, because it's not necessarily relevant. I used to love playing with our little kids. My wife has always been unable to do it, and she doesn't have any kind of mental health issue. Ability to play with young children simply varies from person to person. What she could do, on the other hand, was plan day trips and put the kids in the car and go and look at a Thing (whereas I found that kind of process utterly enervating and pointless). With parenting, ultimately you just have to accept what the other person can or cannot do with the children; you can't force it.
The other issue is that you seem to suggest that when your husband isn't on kid duty, he goofs off and plays games. I don't know if you're a woman, but this sounds like quite a common male/female dynamic, and my very very very very strong advice is to be very explicit with him about it. (Apologies for horrible stereotyping to follow, obviously this is not true of everyone, but...) Men really really think differently about stuff, and we really really won't realise that we're annoying our partners until a row blows up. If you want your resident male to do more about the house, instructing him directly on what to do and when to do it is much more likely to get good results than any other tactic.
Completely agree. Some people just do not enjoy playing with small children. Does not mean they have attention deficit disorder or a cold heart. Think of a task your husband can do to help the household and free you up some, something he does not find irksome and weird, and ask him to do more of that. It's likely that when your child is older your husband will find him much more fun to hang out with.
If you are just in the same room as your kids and they play by themselves, you are not neglecting them. Kids do most of their playing that way. Sometimes they need direct attention and it can be fun to get on the floor and play with them, but if you are not doing that, they'll still do fine.
This. The expectation that parents constantly play with their kids, Bluey-style (https://www.youtube.com/watch?v=EJkn-r-rJJY) is very, very new and not necessarily good. You say you have a young son who "requires a lot of attention," but are you absolutely certain that's true?
I obviously don't know anything about your son, so maybe it is true. But unless he's too young to be trusted not to choke himself to death swallowing a toy or he has a developmental issue, he should be able to play without your constant involvement! Lego, Play-doh, caring for baby dolls, digging a hole in a sandbox, going down the slide of a swing set, imagining a story with friends...until 20-30 years ago, these were activities parents generally didn't get directly involved with. They might admire the end product of a Lego fort or a clay snake, but they didn't usually put their hands on the toys.
I'm 41, have memories going back to 2.5 years old, and I don't remember my parents *ever* playing with my toys (or my younger brother's) as if they were a peer. In fact, from 4 years old on, if my mother "caught" me acting out a story with My Little Pony figurines, I would stop and wait for her to leave the room before getting back to it. Every parent of every kid I ever knew had similar boundaries. An adult joining us in imaginative play would have been way too intimate and intrusive.
Obviously none of this answers your question about your husband's involvement in child-rearing, but it is worth considering: What if you gradually assert some boundaries on your son's demands for your attention, and insist that he learn how to entertain himself without you (or your husband) actively engaging him every minute? If your son can act out a dinosaur war on the living room rug or sculpt a Play-doh family to live in the Lego house he built while your husband and/or you read a book or play a video game nearby, you'll all benefit.
Yeah, there are a lot of good points here. I think part of my problem is guilt from having him at daycare 40 hours of the week. I feel like I need to "make the most" of the time I am there with him. He feels that too. It's a constant refrain of "Mama Mama Mama"
Strictly anecdotal example coming up, but I don't think we can assume daycare actually impacts clinginess/sense of loss/etc!
How old is your little guy? My best friend is a stay-at-home mom with an almost four year old girl. Between normal maternity leave and the pandemic, her kid has never even met a hired babysitter, much less been in daycare. She has quite literally spent her whole life exclusively in the company of her parents' adult pod (grandparents, an uncle, a couple of her parents' friends), and her mom was present for way more than 99% of those gatherings.
Her kid is "Mama Mama Mama" with 20 minute meltdowns if she wants to leave the kid with her doting grandparents and go to the grocery store by herself.
My friend has *maybe* had 150 total hours away from her daughter in her daughter's whole life. And yet her kid throws huge fits whenever they part, as if she's being abandoned in the woods to hungry bears.
So I'm just saying. It might not be daycare, it might just be that your kid is programmed by evolution to solicit maximum engagement from you so he survives to adulthood. That doesn't mean he actually needs it in your safe 2021 home.
I'm sure you're a great mom. Don't accidentally deprive him of the opportunity to be totally self-absorbed in autonomous play. He'll figure out that it's a pleasure different from playing with adults.
1) Kids will behave how you condition them, and you condition them with both rewards and punishment. If the constant refrain is "Mama Mama Mama" it's because it leads to good results for him. You need to find ways to recondition him. That first involves figuring out what you want out of him. More autonomy? Maybe. Trying to handle things himself first? Maybe.
2) From above, it sounds like you have a marriage type that I call "equal," where your foundational principle is that the two adults are equal partners who contribute equally. You should do half the kid playing, and your partner should do the other half.
(Totally personal opinion here) This, frankly, is a bad way to structure your marriage, especially once kids show up. How can a kid be equal with the parents?
Instead, think about moving your marriage from a marriage of equality to one of mercy. It is the job of each adult to give each member of the family (including themselves) what they need. Full stop. The same way two renters are 100% responsible for the rent. It doesn't mean you need to kill yourself to serve everyone else - that's not giving yourself what you need.
But my wife hates hates hates scrubbing the bathroom. I find it fine, but old cold wet food gives me the willies. So I scrub the bathroom and she does the dishes. Simple example, but it applies to larger things. I work an intensive, long-hour job. My wife just isn't up to it. She makes about 12% what I do in a part time job. If I demanded she have a job like mine, it would burn her out.
Now you throw in a kid, who doesn't know how to be fair. He's going to throw a wrench into any equal plans you have because he's a child. I bet you suspend the equal thinking when your partner has the flu or whatever. Well, a kid needs that 24/7 for at least the first decade of life.
You can play with the kid for a certain amount of time before you get burned out. Partner can play with the kid for a shorter amount of time. That's all the two of you can give. Your job is to figure out how to make it enough, and to give your kid what he needs.
But I bet he needs a lot less than you think. You should read Selfish Reasons to Have More Kids - it really illuminates how over-worked the modern American parents are in a way that has no benefit on their children.
It seems to me, as others have said, that you ought to be able to find room for your husband to watch the kid without the husband needing to exhaust himself by "playing" in what sounds like a fairly specific and, for him, exhausting way.
It's totally fine for the husband to just keep an eye on the kid while he plays by himself, watches TV, or does whatever. The husband can also go take the kid for a walk around the neighborhood, go visit the park, or just take the kid with him on a run to the store or something. It could also be that your husband can find his own way of playing with the kid - if he has a hobby or interest of his own, maybe he can find ways to share parts of that with the kid, which might feel much more meaningful than whatever banal playtime activities are currently normal.
Ultimately though, it's not ableist to ask your husband to step up and do his part. But it probably is ableist to expect that your husband's interactions with the kid will look exactly like your interactions, and that his parenting style will exactly match your own specific preferences.
I'm surprised that nobody here has asked if you've tried medication for ADHD? If you have and it didn't work or if you haven't and have reasons for that stance, perhaps you'd explain in a further post.
Christina The Story Girl has replied to you in an excellent post with which I fully agree. It seems to me your thinking is taking you on a dangerous path that may not end up anywhere nice for your family.
I just did a podcast episode with Greg Cochran on UFOs. We both think there is a reasonable chance that some of the UFOs seen by the Navy are aliens. Scott once wrote that Greg had "creepy oracular powers". https://soundcloud.com/user-519115521/cochran-on-ufos-part-1
This! ^ I’m glad audiobooks and podcasts exist for those who enjoy them, but to me it’s a maddeningly slow and inefficient method of getting information.
I'm listening to it rather gradually, and I recommend the first 20 minutes or so as good about high altitude lightning (sprites) that pilots simply didn't talk about in public because it would make them seem too weird. There was also a problem with it being visible to humans but so brief that it was difficult to photograph.
So I'll recommend cachelot.com - a collection of essays arguing a fascinating case for cachalots (aka sperm whales) being human-level intelligent. If nothing else, it shows just how little we really know about cetacean and especially whale intelligence, even now. The site also includes a blog on animal intelligence in general, written more rationalist-adjacently than other stuff I've seen on the subject.
Something I’ve been thinking about a little recently:
Is it surprising how much intelligent people disagree?
I’m often amazed by the extent to which people of roughly equal intellect arrive at such divergent conclusions, even when they are forming opinions based on the same information and possess similar expertise. I also find it fascinating when people that I agree with on one issue express views which I find utterly illogical elsewhere. Intuitively, I am surprised by these observations, but maybe I shouldn’t be.
A lot of intelligence is creative and performative - it's not like a prediction market where all the points are for being right. Especially with armchair thinking (no skin in the game), having opinions is more like building something out of lego than finding the "right answer". From that perspective, it's not surprising that people can reach different conclusions from the same set of initial blocks.
They may have vastly different priors, and confirmation bias does the rest of the work. In general, I'd say that the human brain didn't evolve to be able to reliably arrive at objective truth, and rationalism's expectation that it's feasible to overcome this is naive at best.
I'm not sure that two people (say Alice and Bob) being presented with the same information is enough to ensure that they come to the same conclusions. Even if Alice and Bob have approximately the same priors coming into it,
1. They could have differing update rules. How do they incorporate new evidence into their model of their world? Alice might weight new information much more than Bob.
2. They could just experience the information differently. Alice might see the dress (https://en.wikipedia.org/wiki/The_dress) as white and gold, Bob might see it as black and blue. Silly example, but if we can't even agree on something as foundational as color it's possible we could subjectively disagree on more abstract information.
3. What if they just "think differently", but still rationally? If Alice tends to think in terms of causal decision theory and Bob tends to think in evidential decision theory, is either of their positions on https://en.wikipedia.org/wiki/Newcomb%27s_paradox more justified than the other's?
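Point 1 can be made concrete with a toy sketch. This is purely illustrative: the `weight` knob that scales how much each observation counts is my invention, not a standard Bayesian update rule.

```python
# Two agents start from the same prior and see the same coin flips,
# but weight the evidence differently, so their posteriors diverge.
def posterior_heads(prior_heads, prior_tails, flips, weight):
    """Beta-binomial update where each observation counts `weight` times."""
    heads = sum(flips) * weight
    tails = (len(flips) - sum(flips)) * weight
    a = prior_heads + heads
    b = prior_tails + tails
    return a / (a + b)  # posterior mean of P(heads)

flips = [1, 1, 1, 0, 1, 1]  # identical evidence for both agents

alice = posterior_heads(1, 1, flips, weight=2.0)  # weights evidence heavily
bob = posterior_heads(1, 1, flips, weight=0.5)    # discounts evidence

print(f"Alice: {alice:.2f}, Bob: {bob:.2f}")  # → Alice: 0.79, Bob: 0.70
```

Same prior, same data, different posteriors, and neither agent is obviously computing anything "wrong" by their own lights.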
People have different personalities and moral foundations. If something is an issue related to morality or something political, then it is difficult for people to think clearly. Moral reasoning is a complicated process and different views can lead to different outcomes. Political beliefs are built on moral reasoning. Political and moral beliefs inform a lot of thinking about empirical issues like minimum wage and unemployment or climate change. If something is non-controversial, non-moral and non-political, it is easier to agree but intelligent people are still subject to biases.
I think some of it is that people differ in how conformist they are, i.e. if there is an objectively correct viewpoint and a different socially correct viewpoint, equally intelligent people will disagree simply based on different levels of conformity. Another factor is that eureka moments happen randomly.
Aumann's Agreement Theorem describes the broad area you speak of. Here's a pointer to Yudkowsky's essay on the subject, "complete" with some comments/discussions below from the rationalist community:
I'd say that the concept is unrealistic and useless. The profoundly unrealistic part is condensed in these five words: "based on the same information".
On tabula-rasa AIs, maybe. Not on humans.
The "information" based on which you assess situations, has accreted over decades and decades, depending on your age. There's megabits and megabits of such information involved, a significant portion of which can be a factor in how you view and judge any given issue. When you "state" any given issue from your perspective, you might state it with a few dozen bits of information, and you'll be under the illusion that those few dozen bits are "really it", that they fully cover your judgement of that issue, and there can not be any disagreement in the judgement if there's no disagreement upon those few dozen bits of information.
That view is, in almost all cases, false. There will be kilobits, hundreds of kilobits of information that really goes into that judgement which you *think* is fully expressed by those few dozen bits. The other 99% you "naturally take for granted", whereas granted it is not. If you took someone whose judgements you "find illogical" and locked yourself in a mountain hut for a whole week, doing nothing but descent into the differences between all your priors, you might still emerge a week later having covered no more than one percent of the *actual* information space from which your different judgements stem.
Scott's blog - much more so than Yudkowsky's - is the zone where people come to engage in such activities. (minus the mountain hut)
Aumann's Agreement Theorem doesn't require the information be common, only the priors and the posteriors (the differences in posterior being the disagreement). The example he gives at the end of the paper (http://www.dklevine.com/archive/refs4512.pdf) does a good job of showing it -- the parties are exchanging their posteriors, not the actual evidence they used to form it. The common prior assumption is strong, but not as strong as requiring all information to be shared. Some even say common knowledge of rationality is enough to meet common priors (https://mason.gmu.edu/~rhanson/prior.pdf) .
As I understand it, the "point" of the theorem is less that rational agents presented with the exact same evidence in total should come to the same conclusion, and more that if you trust that the other agent is rational you should use that when refining your own posterior. Its practical value (which I agree is dubious) is to point you toward being more modest in your disagreement if the other agent is honest and rational.
Human intelligence is not really optimized for finding the "correct" view on something - it is optimized for mounting a rhetorical defense of an arbitrary, pre-selected position.
If your social community believes obscure religious idea A, and a neighboring social community believes a contradicting obscure religious idea B, your selection of obscure religious idea to believe will simply be a factor of whether you want to remain a member in good standing with your current community or whether you see a benefit to jumping ship to a new community.
In either case, the ability to mount creative and clever arguments for your chosen position will place you in high regard in whichever community you choose to affiliate with, and may even be respected within the rival community in a "worthy opponent" sort of way.
This sort of social game is what human intelligence is optimized for. We are not optimized at all for looking at the world with unbiased eyes and forming an accurate map of the territory, because that sort of behavior tends to make you popular with nobody, and it is only in very recent times that the rewards for seeing clearly may come to be as worthwhile as the rewards for being a member in good standing with the community.
If you are surprised by it, you are running off the wrong theory of rationality/epistemology, because the right theory would predict the facts. That should be obvious, but a lot of the rationalsphere insists on using broken theories, like Aumann's theorem.
Issues that are sufficiently deep, or which cut across cultural boundaries, run into a problem where, not only do parties disagree about the object level issue, they also disagree about underlying questions of what constitutes truth, proof, evidence, etc. "Satan created the fossils to mislead people" is an example of one side rejecting the other side's evidence as even being evidence. It's a silly example, but there are much more robust ones.
Aumann's theorem tacitly assumes that the two debaters agree on what evidence is: real life is not so convenient.
Can't you just agree on an epistemology, and then resolve the object level issue? No, because it takes an epistemology to come to conclusions about epistemology. Two parties with epistemological differences at the object level will also have them at the meta level.
Once this problem, sometimes called "the epistemological circle" or "the problem of the criterion", is understood, it will be seen that the ability to agree or settle issues is the exception, not the norm. The tendency to agree, where it is apparent, does not show that anyone has escaped the epistemological circle, since it can also be explained by culture giving people shared beliefs. Only the convergence of agents who are out of contact with each other is strong evidence for objectivism.
Experts are selected not for accuracy, but for improving our theorizing about a phenomenon. In the long run that should lead to accuracy, but in the short run it's better to have lots of people proposing weird theories like hollow earth, heliocentrism, etc., some of which will be right and most wrong.
I am not sure experts are even selected for that. It seems to me that experts are chosen based on how much they agree with existing experts, or just the relevant prior beliefs of the relevant selection group.
I don't think that's right. You don't become a respected academic by agreeing with everyone - you absolutely *must* say something weird and new. There are definitely limits of the type of weirdness and newness, but someone who just agrees with all the conventional wisdom won't get far.
Not at all. The vast majority of every conference I have attended is made up of papers that poke a little at the edges of what is generally known and find "Yup, what we thought was true is pretty much true", with the caveat that they can demonstrate it at p <= .05.
Now, if you want to be in the top say 5% of a field, you want to try and mix it up quite a bit. That might get you in the door. From that position, you then dictate what "respectable" means.
Further, note that there is a difference between your research conforming to beliefs about the current state of research and conforming to beliefs about how the world should work. If you are, say, Card and Krueger, you can get famous publishing an article that goes against established economic theory (that raising the minimum wage lowers employment) but goes along with the prevailing political (for lack of a better word) beliefs of the academic community (that raising the minimum wage is good and something we should do).
In fact, doing that is a big win because people WANT to believe it. All those academic economists that are ashamed to point out at faculty gatherings that raising the minimum wage probably hurts employment prospects now can merrily join the consensus of the rest of the faculty. Whew! No more cognitive dissonance!
So yea, if you want to be respectable, that is, hold your job and get tenure, you need to toe the damned line. That line is drawn both by what current experts agree is true, and what the group wants to be true. After you get tenured you can maybe afford to get a little weird, but even then you'd better be careful.
*Note* My experiences are from the field of economics, with a little dabbling in philosophy and sociology. The sciences that are farther from politics are possibly not so bad, but any science that suggests what governments should do in some fashion gets corrupted really quickly. Jane Goodall might have some fun stories about how people pushed back against her work, however, and chimp behavior is pretty far from human society. Well, at least in how we apply it.
> even when they are forming opinions based on the same information
That's never the case. There's literally always divergence in information, most often in personal experiences that shaped their reactions and intuitions around certain topics. Couple that with pervasive cognitive bias and divergence in conclusions shouldn't be logically surprising. Much like the unexpected hanging paradox, we're often surprised nonetheless.
This is extra interesting in the case of differences in Protestant theology. We're all working off the same source material, which we hold to be self-consistent and infallible. Most of us will quote identical maxims about taking the text as it stands, not theologizing off experience, etc. We'll even agree about the order of supremacy of the books. But consider the Calvinists vs. the Lutherans - a tiny (and I do mean tiny) difference in how to weight pure logical consistency vs. our human inability to fully comprehend the Divine, and they wind up disagreeing about a whole slew of things. It's astonishing how much of a divergence small differences can produce.
In my experience, some highly intelligent people develop such a powerful distrust of the intellectually average that they end up a little too self-reliant when it comes to forming opinions about the world, effectively insulating themselves from other people's perspectives and opinions even when those people have far more expertise behind those perspectives and opinions. Meanwhile, smart people are still susceptible to bias. So you end up with some very biased and insulated smart people convinced of really weird ideas, despite having access to the same information as everyone else, because they won't let anybody talk them out of it.
There are a lot of good responses to your question, but I'll also add one other thing to consider. You are looking at the differences, which are sometimes pretty extensive and important. If you looked at the similarities, you may find that even smart people who disagree vehemently on certain topics actually agree on the vast majority of topics. Not too many smart people disagree about the rough shape of the earth, the existence of countries, the structure of the periodic table of the elements, and thousands of other details.
We concentrate on the differences because there isn't that much to say about where we agree. "Argentina is a real country." - "Okay, I agree." That's pretty much our response to tons of random factoids, unless or until we disagree. Scott's Link posts contain lots of cool information and nobody says much about them. When they disagree, it spawns comment threads, and that's most of what people will read about the Link posts.
Even in fields where disagreements are notorious, like economics, there is mostly agreement on basic principles and facts. If two "intelligent people" agree on 98% of all relevant facts, it is certainly noteworthy that they could still disagree on conclusions and important details, but I think it's less noteworthy than the fact that they agree on so much in the first place.
Diversity cultivates more broad experimentation, which invites more serendipity. I’m not sure how an evolutionary mechanism would reward that, though. Even if being part of a diverse society helps everyone, the individual payoffs might still encourage convergence. Since we observe divergence, maybe not.
On polygenic screening - a recent article in the Guardian gives some evidence that public opinion (in the UK at least) might be positive towards the broad family of genetic screening. Of course, just one data point, and this is post-natal not embryonic, but I think this is useful for context as there was some speculation about whether the tech would be outright banned.
Of course I could see this going differently in the US vs. Europe.
That's not eugenics, though, because you aren't selecting the babies with the lowest risk; killing the babies with high risk of X gets you arrested for murder.
sounds super interesting :)
Long thread on Maimonides, rationalism, and religion. https://mobile.twitter.com/ZoharAtkins/status/1410612795712802822
Love to read about Maimonides. But on Twitter? Thank you very much, but no.
In that case read Moshe Halbertal’s amazing book
Thanks Zohar
Life and Thought or Law and Mysticism?
life and thought
Arrived today. Diving in now. Thanks for the tip.
Thankfully there are ways to make Twitter readable e.g. https://threadreaderapp.com/thread/1410612795712802822.html
That's some good nominative determinism, Zohar.
I'm toying with the idea of starting a blog. I feel this community, for good or ill, knows what I have to offer and how I write. I'm curious if there's any thoughts as to: First, what people would be most interested in me talking about. Secondly, stylistic or delivery advice that people would like to see. Thanks in advance!
Ghost and Substack are both fine and easy to use and you can add monetization on either later if you want or keep them free. I say go for it.
Thanks. I'm not so worried about platforms. I more meant user experience or stylistic preferences. Or any particular topics.
I've been blogging[1] for... geez probably over a decade now and basically I do a couple of things:
1) Professional progress updates (I'm a game developer so I post monthly progress reports for a game I'm working on)
2) Analysis pieces on trends (again in the games industry)
3) Anything that I want to be able to put into one authoritative work and then be able to link to people from then on
4) I often use twitter, forums, or blogs as "rough drafts" for an article. They say the best way to get correct information on the Internet is to post wrong information, and all that. After I've felt out an idea and gotten a bunch of feedback and corrections to obvious mistakes I blogify it.
5) I have an academic background so I had to slowly work my way out of that style (never use the word "I", lean too much on passive voice) into a more informal one that fits me better
3) in particular has been useful for me and where some of my most read works have come from.
[1] https://www.fortressofdoors.com
Thanks!
Oh hey, you're that guy! I like your writing, nice work on all the HN front-pages recently.
My only advice, as someone unsuccessful at blogging regularly, is:
1) to not overthink what you're writing about, where you're publishing, or any details that don't involve sitting at a keyboard and typing words. Set a schedule and start writing, and you'll figure out what topics resonate with both yourself and your audience. It's very easy to bikeshed unimportant choices when the #1 way to fail is to not get around to actually writing.
2) don't be embarrassed about self-promoting. Someone has to do it.
Thanks! I'm afraid I'm prone to overthinking. I do have a list of topics though. Also, I'm aware self-promotion is necessary. I'm not so much afraid of that. But I do feel like fame is a sword with no grip, as our host knows.
I enjoy your comments and would look forward to this. Let me know when you start so I can signal-boost you.
Thanks! I'll definitely take advantage of this.
You should do it. Talk about what you want to and make it enjoyable for yourself.
Thank you! I'm afraid if I talked about what I wanted it would be so incredibly scattershot that it would be hard for the blog or audience to have much of an identity...
You gave me the advice to go for an identity but can't your identity be a very intelligent and interesting blogger? What exactly is Scott's identity? He writes about everything but he's extremely successful.
I'd say Scott's a rationalist blogger that deals with the more humanistic side of things. Psychology, politics, philosophy, etc. The blog mostly appeals to rationalist type adult men, so well educated STEM types and the like. I'm not sure if he set out to appeal to a specific demographic but the blog has a very defined identity and audience.
You left out the whimsy-- I'm not sure if it's a major draw, but it's definitely part of what Scott does.
In any case, Scott's success seems to prove that a very wide range of topics can be held together by personality.
One thing that I'd throw into the pot of advice is not to be married to the particular blog focus you decide on at first. I had an idea of what my blog would be at first, and then started writing those things and the fit/feel was off. I don't think it would have been sustainable in the original form I thought about. The nice thing about the beginning part is nobody in particular is reading yet - you can try a few things and see what you like before it's affecting anything much.
You need to write more things ! :-/
Agreed! Something should be going up tomorrow morning. I'm trying to actually get on a schedule; it's screwing me up.
Yeah, I'm definitely going to have to see what works and pivot.
Also, yes: You need to write more things.
You definitely need to write more, I find your stuff very interesting
You have a lot of knowledge about a bunch of things, so I suggest to write what you know best. As for delivery, I think you write overly tersely and confidently. I suggest loosening up the writing style a bit, so it sounds like someone more your age.
Thanks. It's picking the topic that's the issue. I'm torn between the instinct to write about whatever interests me and the need to winnow it down to a specific theme. But certainly there are topics I know more or less and I'll stick with more.
Thanks again for the advice on my voice. I'm afraid this is pretty close to how I talk. But I can definitely soften my voice in a writing piece. There I'm trying to inhabit an explicit tone and mood. And that voice can be more uncertain and longform.
If substack includes tagging, I recommend tagging your posts so people can find what you've got on a topic.
Thanks, yeah, I'd definitely do my best to make it digestible.
Ancient history, preferably. As far as delivery goes, I think for a blog post (as opposed to a forum comment) you need to expand on the details a lot, and it would help if you would cite your sources, because a lot of your posts on history are interesting but handwavy. For example, I didn't come away from your criticisms of Bret Devereaux (like his Marxist assumptions in the bread series) with any sense of where the Marxist analysis goes wrong or what the alternative is. I said privately at the time:
> [Erusian] says for instance that Devereaux's presuming that labor vs capital in this mode of production matters most for power distribution. Okay, sounds pretty Marxist to me, but where it is *wrong*? What matters more?
Yeah, my comments are kind of handwavy because they're me just sitting down and saying what I think. It's also why they have typos. Actual posts would, I hope, be higher quality. This is in some way graduating to a higher level because I've reached the point where I'm being treated seriously.
I don't have advice for you, but I would be interested in reading your blog.
Thank you!
My threshold for interest in reading someone's writing is lower when it's possible for me to respond, and lower still when it's reasonably likely that a conversation will result.
This is true in general, and not at all specific to you.
Wait what? This doesn't make any sense to me at all. I feel the exact opposite.
I am going back and forth on comments. I'll take both your opinions into account.
And Arendt vs. Strauss on solitude https://mobile.twitter.com/ZoharAtkins/status/1411831108883337220
Do you edit Wikipedia? Why or why not?
Used to, wanted to add to big database of human knowledge. Have stopped because of the internal politics.
Notability and the cadre of "We'll delete any article on this subject no matter how well researched, sourced, and written!" deletionists make it difficult to mentally invest in anything but minor fixups.
Same. Also: articles on important subjects that are permanently watched by obsessives with axes to grind who know how to wikilawyer all contenders to exhaustion really sap the will to bother out of me.
I do wish there was a "big database of human knowledge" fork of Wikipedia that continually pulls in all the Wikipedia content while also allowing extra articles: a lower bar for notability, less concern about whether a topic befits a traditional print encyclopedia, curation of effort-articles that were deleted from Wikipedia, etc.
I would love that. Good point that "database of human knowledge" and "encyclopedia" are subtly different things
That would be wonderful. I'm not sure whether my framing of "wikipedia but with more sensible policies" is quite the same thing.
My dream is an "evidence clearing house". A site where you just input data (and links to data), and then aggregate claims based on that, in a hierarchical way where authors recursively build bigger and more interesting claims out of hierarchies of smaller claims, with all claims ultimately backed legibly by data.
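The hierarchical claim structure described here could be modeled as a tree, where each claim is either backed directly by data links or derived recursively from grounded sub-claims. A minimal sketch in Python (all names and URLs are hypothetical, invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    data_links: list[str] = field(default_factory=list)    # direct evidence (links to data)
    sub_claims: list["Claim"] = field(default_factory=list)  # smaller claims this one builds on

    def is_grounded(self) -> bool:
        # A claim is legibly backed if it cites data directly, or if
        # every sub-claim it is assembled from is itself grounded.
        if self.data_links:
            return True
        return bool(self.sub_claims) and all(c.is_grounded() for c in self.sub_claims)

# Example: a bigger, more interesting claim built out of smaller data-backed ones.
big_claim = Claim(
    "Treatment X reduces hospitalization",
    sub_claims=[
        Claim("Trial A showed reduced severity", ["https://example.org/trial-a"]),
        Claim("Trial B replicated the effect", ["https://example.org/trial-b"]),
    ],
)
print(big_claim.is_grounded())  # True: every leaf claim cites data
```

A real clearinghouse would need much more (versioning, dispute flags, weighting of evidence quality), but the recursive "claims built from claims, bottoming out in data" shape is the core of the idea.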
+1. I have considered contributing to Wikipedia but the thought that some stranger can just revert it stops me from doing so. Would I even be notified if my work was reverted? Could I appeal? What's the probability that I get reverted? Without some assurance there isn't a delete-happy editor overseeing whatever I edit, I'm reluctant.
Occasionally. The bureaucracy that's grown up is really tiresome, though.
I started to recently, mostly making small improvements to articles I read rather than adding substantial amounts of new information; I have not noticed the problems other commenters mention, though that is probably because I haven't been active there for very long.
Yeah, making typo fixes and wording improvements won't generally wake up the dreaded WikiZombies.
Yes. It always feels great to be able to ‘give back’ every now and then.
Me too.
Editing wikipedia seems like homework. I would rather write on my blog if I am writing something and have it attributed to me. It's probably just a psychological difference. I can't say either side is correct in this.
Prior to becoming an active Wikipedian I think I had this idea about categorizing everything I read by topic, but then I realized that this was basically the function of the encyclopedia. So now what I do is I blog or comment if I have a Super Original Take(tm) or a life update, and write wikipedia text for everything else.
Sometimes. I translated an article from English into Dutch one time, but mostly I correct typos and grammar mistakes.
My eleventh GA is currently at FAC and I passed 10k edits a couple weeks ago. Multiple people have told me they have my RfA watchlisted; some of them are pretty big names.
I have gone through a great deal of trouble to acquire the highest level of expertise in several subjects, none of which is internal wiki-admin politics. So, no, I don't edit Wikipedia. They've made it clear that I will be at most grudgingly tolerated, and there's no respect or reward in it for me. And if your plan is "other people will produce valuable content for our project because it is the Right Thing To Do; we shall reward them with Grudging Tolerance", then they can sod off.
I do on occasion write essays and articles that other people use as sources when they edit Wikipedia, which seems to me wholly superior in that A: it results in wider distribution of knowledge, both through Wikipedia and through specialist media, and B: it results in more credit for me.
Where do you post these essays and articles?
A lot of my North Korea work is at 38 North, https://www.38north.org , though I had to pull back from that when my classified day-job work started overlapping my open-source arms control and non-proliferation work.
The other day I glanced at the Wikipedia entry on Fabio. I noticed that the article mentioned his sponsorships and reality TV show, and the goose incident, but didn't mention that he is primarily known for being on the cover of romance novels. I left a comment asking about this on the talk page, and now it's there, but still only as two minor sentences, as opposed to full paragraphs about his sponsorship from Nintendo.
https://en.wikipedia.org/wiki/Fabio_Lanzoni
I've seen a few articles that are written on the assumption that the reader has a basic knowledge of the subject.
I stopped when every change resulted in a comment that said, "please read these 10000 word policies on notability when it comes to films, tv shows, plays, entertainment, entertainers, and jugglers before contributing."
I fix typos once in a while.
Currently, I almost never edit Wikipedia, but I used to.
The three and two half reasons:
- the software has been getting harder and harder for me to use. I'm very comfortable with markup; not so comfortable with tool bars etc. And the templates (macros) get more and more complex - and less and less documented - every year, as well as being more and more required.
- (half reason) the rules for how to write, what may be written, what constitutes a source etc. seem to get both harder to follow and more often ignored.
- Wiki-politics is ugly, and tends to drive away editors and administrators I like and respect. I also get the impression that the Wikimedia Foundation, like Stack Exchange, is keen to monetize a volunteer effort it is also sabotaging, e.g. by tone-deaf attempts to promote oppressed minorities.
- I just don't have the time.
- (half reason) I'm rather more of an essayist than e.g. a science journalist. I don't especially enjoy writing balanced, carefully-sourced descriptions of consensus knowledge. And it's a complete PITA looking for "reliable sources" for e.g. particular computer algorithms and where they are used. (Note "reliable source" is a technical term on wikipedia, and while it's intended to mean what it says, it's actually more of a set of bureaucratic regulations.)
I don't edit Wikipedia except for a few minor efforts, but thank you for a question which has gotten a bunch of interesting responses.
I tried a few times, but never to good result. One set of edits in particular really turned me off, where I was updating wiki articles with citations to relevant articles in the academic journal my PhD advisor ran. Sometimes this was just a further-reading link; other times I would add a paragraph or so if the article was in conflict, etc.
Every single one of these changes got rejected, sometimes with a nastygram attached to the effect of "we don't like your kind here" and little else. Whether this is due to political economy being a little touchy a subject, or the general culture of Wikipedia, I don't know, but that put me off pretty well.
"Political economy"?
Nope. High quality writing doesn't come easy to me, and I'd get stressed and a bit obsessed. Also, when I go to Wikipedia it's usually to learn stuff that I don't know much about, so I don't often bump into articles to which I could contribute significantly without having to study quite a bit first (and I bet there aren't many anyway). Occasionally, I read stuff that I feel I could improve a little, but not often enough to make it worth learning the Wikipedia rules and inner workings.
Some people talking about visions when falling asleep:
https://twitter.com/hankgreen/status/1405171528140951556
https://twitter.com/vihartvihart/status/1405343268242530313
My experience: I almost never remember dreams, but while I'm falling asleep, my imagination becomes incredibly vivid and self regulating.
E.g., I can tell myself a story and not know what's going to happen next.
Never had visions, but I always try to pay attention at that border. Thoughts controlled by what I think of as 'me' get increasingly scrambled, but I try to follow the process. Usually my body will experience a little myoclonic jerk at that liminal point.
Rarely, when I'm lying in bed, I will have hallucinations (usually still or moving images, rarely audio) that are clearly the start of my brain's dreaming subroutine, but occurring while I'm still somewhat lucid, mobile, and able to abort the going-to-sleep process. When I was a kid and did a lot of bedtime reading, this usually took the form of suddenly realizing that I had closed my eyes from tiredness, so it did not make sense that I was still perceiving a book full of words. These days I think it's rarer, and it's more like experiencing a couple seconds of a totally random dream with no connection to my current experience.
I have hypnagogic hallucinations sometimes upon waking or approaching sleep, one of many fun symptoms of having narcolepsy. I describe it as having scenes from a dream superimposed on reality, kind of like painted animation cels.
It started a year ago. When I fall asleep I immediately begin to have dreams for 30-60 minutes, but my brain can't turn off, and as soon as I fall asleep for the first time I instantly wake up. After that I fall asleep normally.
I had infinite universe black/white visions as a child. They didn't come during sleep/wake transitions but came at random. They were visions in the true sense that I lost my regular sight and started seeing this stuff. It was nuts.
I get hypnagogic hallucinations while falling asleep or waking up, with associated sleep paralysis, but they are generally auditory-only, or if they're visual at all they're incredibly vague and indistinct, more like confusion over the interpretation of what I'm seeing vs seeing something that isn't there.
I assume this is related to the fact that I seem to be aphantasic (or uh, hypophantasic, if that's a word). This thread prompted me to go read a bit about aphantasia, and to discover this recent new study on the topic: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7308278/
Re. Further weird shit Covid does:
https://www.news-medical.net/news/20210618/Alarming-COVID-study-indicates-long-term-loss-of-gray-matter-and-other-brain-tissue.aspx
Any thoughts, or further developments?
Study seems pretty solid, as it is comparing subjects to themselves; but who knows long term.
Another take on that study:
https://theconversation.com/covid-linked-to-loss-of-brain-tissue-but-correlation-doesnt-prove-causation-163183
https://revealnews.org/podcast/the-ticket-trap-2021/
Discussion of scam ticket-selling sites. They resemble the actual venue, but vastly overcharge and don't deliver tickets reliably.
The federal government makes some efforts to stop them, but is fairly ineffective partly as a result of not being well-funded for the size of the job, and I suspect that the size of the job keeps increasing.
This gets to the question of what can be done about fraud, and also issues of what makes for a high-trustworthiness society. If a tenth of a percent or so is defecting a lot, they do a good bit of damage.
Re point 3, I regularly experienced an "expansion" feeling, something like what Shulgin described, while falling asleep when I was younger. It was like my body was inflating or being elongated to incredible lengths. (I've occasionally wondered if Lewis Carroll had similar experiences which inspired the size change magic in Wonderland.) I don't recall anything quite like those other visions, though, certainly not on a regular basis.
When I was a kid I had similar experiences whenever I would have a high fever - sensations of my body being enormous, or extremely small, in an infinite and otherwise empty universe. It would always freak me out. I assumed the feeling was happening because I was sick.
That reminds me of the bizarre fever dream experience described in this post, which also mentions expansion and shrinking:
https://bogleech.tumblr.com/post/182230727313/that-fever-dream-episode-of-rugrats-was-way-too
"Then there was a recurring one I can barely describe except that I suddenly perceived all spoken words as large, puffy, rubbery letters, like gigantic pasta shapes, that I was digging through with my hands as I processed their meaning. I would only understand the words once my fingers felt the tiny “bones” inside them, but the sensation of finding those bones made kid-me start BAWLING."
Huh, I always thought that segment of Comfortably Numb was artistic license, or "for effect". Thanks for sharing, this is an interesting shade of the human experience!
Hi! Amazed to hear your description. I had this too for many years as a child. However it would happen without a fever, and without going to sleep. I would just randomly be playing in my room and then suddenly have these sensations/visions of infinitely sized objects next to impossibly large/small ones.
Question: do you remember if your visions were in color or monochrome?
I don't remember for certain about the chromaticity of my fever visions, but since I almost never have regular dreams in color, my guess is that they would also have been the same. (Now I almost want to run another high fever and do some self-observations!)
While lying in bed, I sometimes got the sensation that my room was far larger than it really was, with the walls and ceiling very far away. I wouldn't call it a "vision", because nothing *looked* out of the ordinary; the sensation was separate from my eyesight.
I have the same sensation, especially when not feeling well (but sometimes when I feel fine). In my case, I attributed it to a practice of trying to make my room as dark as possible, especially eliminating well-defined light sources. If the only thing I can see when trying to sleep is just the diffuse glow from the translucent curtains covering the edges of the blinds, there really isn't anything to use as a reference for scale, and my tired mind probably loses the ability to judge distances with binocular vision. Fixed light sources break the effect, especially if they're not red. My current house has a smoke detector with a small green LED, which is sharp enough to fixate on if it's in my line of vision.
I don't remember my room being that dark, but it was an awfully long time ago.
I used to have similar experiences as a kid, especially when sick, of my body parts pulsating between being immense and tiny. Not visual at all, just at the proprioception level. Not an unpleasant experience necessarily, just weird
Same here! Exact same hallucinations when I was sick or feverish, or sometimes just when I was trying to sleep. It was relatively rare and stopped completely as I got older.
Later I discovered that this may be called “Alice in Wonderland syndrome”: https://en.wikipedia.org/wiki/Alice_in_Wonderland_syndrome
This definitely also happened to me. I attribute it to my "body image" not having fully coalesced at that age. I am still able to modify the shape of my proprioceptive body at will, but it doesn't go haywire on me like that when I'm not paying attention any more. This coincided pretty closely with the age (pretty late, maybe mid-teens) that I stopped being sort of surprised and taken aback every time I looked in a mirror because what I saw seemed unexpected somehow.
Same, except not when I was younger; I've had the experience as recently as a few months ago.
Here is the full New England Journal of Medicine article: https://www.dropbox.com/s/l1gklud7udot4jj/nejmsr2105065.pdf
On one hand the article seems to suggest that this technology is not very effective. On the other hand it expresses concerns about exacerbating inequalities. I feel like at most one of these two concerns can be valid; not both at the same time.
There is a section "unintended consequences" that says that selecting for high IQ would also increase the risk of bipolar disorder. What's not mentioned is that selecting for high IQ would decrease the risk of most other diseases. Bipolar disorder is an exception in that regard. Genetically, most good things cluster together with other good things. So the majority of unintended consequences of selecting for high IQ would actually be positive.
Keep in mind that scientists in western countries can't simply publicly endorse polygenic embryo screening without taking major career risks. Condemning it on the other hand is relatively safe.
So just a question from a layperson - why would good things cluster together in genetic terms? Isn’t genetic variation usually random?
Suppose somebody's been exposed to a mutagen and then has kids. A mutagen is likely to cause thousands or tens of thousands of mutations, most of which don't do anything and most of the rest of which are bad. As such, the kids are going to have problems in lots of different ways.
If you have a population, half of which has been exposed to mutagens and the other half of which hasn't, there will therefore be a positive correlation between good outcomes in their children - because the non-mutated kids are going to be statistically superior to the mutated kids in almost everything.
There are a variety of things which can cause high mutational load IRL, like maternal/paternal age, hence these correlations. I'm *not* sure whether this correlation would hold up when performing IVF, as the mutational load of the parents is largely fixed (it's the same two people, at the same time in their lives).
It's because a single broken gene can have negative effects on multiple functions. E.g. consider all the symptoms of Huntington's disease. The same thing happens for mutations with less dramatic effects.
I think it's just a consequence of the fact that good things cluster together phenotypically. High IQ tends to be correlated with all sorts of positive life outcomes, including an (often slightly) lower risk for many diseases. Probably not because there is a direct causal link between IQ and disease, but because there is some other common underlying causal factor. If that other common underlying causal factor is partially genetic - and it will be, because everything is partially genetic - then this will mean that on average genetic variants that are associated with a higher IQ will also be associated with lower disease risk.
I feel I should point out to you that this is not an explanation - it's just restating the fact that good things go together, without explaining why. The reasons *why* are pleiotropy, non-independence of criteria (i.e. high IQ makes you get higher test scores which makes you more competitive in the labour market which makes you richer) and the non-uniform mutational load which makes broken things correlate with other broken things.
You're right, my explanation doesn't go all the way down the causal ladder, it's just stating that we should expect selection for IQ to have mostly positive side effects if it mostly has positive correlations with other positive traits (low disease risk). Positive phenotypic correlations usually translate into positive genetic correlations.
I don't know if we can be certain at this point about genetic mechanisms that are causing this. The mutational load argument sounds compelling and is certainly true to some extent, but polygenic predictors and genetic correlations are both based on common genetic variants that have very little in common with the very rare genetic variants that people have in mind when they talk about mutational load. The reason for that is that on average you can predict a polygenic trait much better by considering a large number of individually unremarkable, common variants, than by considering a necessarily smaller number of rare, large-effect, mutational-load variants.
The variations are likely random, but remember that the initial genome being varied is not random, but has survived lots of selection and is far more functional than a genome created at random. So random changes are much more likely to have negative rather than positive effects. Therefore if one system is observed to have been corrupted by random changes, it is likely that others are as well. By analogy, a car that has had parts changed at random with those of different models is likely to end up worse in every measure, because the original car was a carefully selected combination of parts.
To me the article seems generally on board with the idea as long as its caveats and limitations are properly communicated, although someone else might read this as "polygenetic selection is a scam and should be banned". It did suggest that companies shouldn't be charging for this until they can prove it works, which seems like a great way to kill it, but it's probably not that expensive to throw in a few extra tests into disease screening so maybe it could still work. The reference to the evils of eugenics seemed like it was only there because it had to be included, it's important to regularly remind everyone that you think Involuntary Sterilisation Is Bad.
"someone else might read this as "polygenetic selection is a scam and should be banned"
Not something that had occurred to me prior to reading your comment, but oh yeah. Every cowboy and dodgy doc in the business will jump on the chance to offer "polygenic screening for a better baby!" the way dodgy clinics are currently offering "stem cell cures":
https://www.statnews.com/2020/08/18/separate-scientific-scam-stem-cell/
https://blogs.sciencemag.org/pipeline/archives/2019/07/16/the-bottom-of-the-stem-cell-barrel
"There is a section "unintended consequences" that says that selecting for high IQ would also increase the risk of bipolar disorder" -- OH NO, NEURODIVERGENT PEOPLE!
From where I'm standing, allism and simultypy are severe, crippling disabilities that are obviously completely incompatible with any kind of good life, and purging them from the population is a self-evident good, and any allist or simultype who disagrees with me is just blinded by having a weird brain that thinks wrong things. *Mysteriously*, this position is far less common than its inverse.
What is simultypy a dysphemism for here?
"Schizo" to "split" gives you "simul" to "together". I used to identify as autistic, on account of being diagnosed with it for longer, but "what were you diagnosed with first, according to what was trendy when you were a kid to label a given cluster of weird kids with" is not the be-all and end-all of a given person's neurotype and if the schizotypal liberation movement ends up consisting of me and me only, well, that's an improvement on it consisting of no one.
I am unconvinced of the degree to which I believe the statement above and the degree to which I think it's a worthwhile thing to say independent of its truth, because of everyone else believing the exact opposite thing. I *do* strongly and sincerely believe that letting allistic and simultypal people decide what the correct balance of neurotypes is may as well be signing society's death warrant, that there's no possible way it could result in a non-dystopian world, and that the current balance of those neurotypes in the population is too low rather than too high.
I realise you've already said you're not sure whether this statement is true later in this thread, but I'm curious about this scenario, regardless - so here's a genuine question from my point of ignorance:
If we were to treat allism as a disability and mankind were to abandon empathy (as in, the illusion of knowing what other people are feeling), is that not a loss from a game-theoretic point of view?
I'm unfamiliar with how cooperation with people you expect to only ever meet once arises and remains stable in this scenario (something that seems fairly crucial to civilisation as we know it). Would we depend on laws and their consequences to regulate all those scenarios (bland example - the person you just gave five dollars to should give you the thing you're buying with those five dollars, otherwise they will be punished by society)?
Or is the argument that you shouldn't, precisely because it's illogical (and therefore civilisation as we know it potentially needs serious changes)?
I'm reasonably sure this is probably covered by a 101 somewhere, but I'm not sure how to find it. Feel free to just slap a link my way that covers this!
I think lack-of-cognitive-empathy *is* one of the disabling effects of autism. My experience is you can solve much of this problem with psychedelics, so...work backwards from there.
As an autistic person, my experience is that non-autistics have precious little empathy for autistics, to the point of resembling the caricature descriptions of autistics sometimes produced.
AFAICT, most non-autistics get away with projecting their own feelings (in similar circumstances) onto others, and calling that "empathy". They get away with it because when they do this with "normal"/high status people, they are often right, and those judging them don't care about "weird"/lower status people.
Strongly agreed. I first noticed the pattern when reading Temple Grandin's memoir and realizing what a hard time her mother had empathizing with her.
Also, what passes for empathy with neurotypicals can be pretty mechanical-- a hand on the shoulder at the right time can make people feel a lot better even if the person supplying the hand isn't feeling much of anything.
Indeed and when autistic people project their own feelings/desires on others, this is called a lack of empathy.
It's really rather ugly. Like a supercharged version of the stereotypical 'ugly American' who starts shouting at a local who doesn't understand English when visiting a foreign country. Only this time the person is even accused of being illiterate, their mastery of the local language being completely ignored.
I wonder how much of the alleged normie empathy is actually recognizing someone else's emotion... and how much is an unwritten social contract to be so bland that they can freely project their emotions on each other, and to never contradict each other when doing so.
If the normie has a model of you, and you show that this model is wrong -- for example by expressing interest in something the normie considers boring, and therefore automatically assumes you also consider it boring, -- the normie feels discomfort, because their illusion of empathy was broken. The normie will then project, and blame you for not having empathy.
I don't think that's it. I think the normie thing is to have beliefs (predictive processing?) which is usually not too far off from most people's actual emotions.
Empathy the skill and empathy the character trait are different things.
Autistic people aren't *inherently* great at empathy-the-skill targeting neurotypicals, although we can learn it. (I'm not sure of the state of the research on how well autists read other autists.)
We have empathy-the-character-trait, though. That is to say, we care but don't always know how you feel. (This is the reverse of a sociopath, who knows but doesn't care how you feel.)
The majority of autists find lying deeply upsetting and don't generally do it, so I imagine you'd actually have to worry about fraud less than usual. There are potential issues with an all-autistic society, but it's the sort of thing that's probably worth a try to see if it pans out.
Not really an all-autistic society, but a great movie anyway:
https://www.imdb.com/title/tt1058017/ The Invention of Lying (2009)
Anecdotes are not data, I realise, but my exposure to one case of someone with schizophrenia did not convince me that they were living their best life.
They regularly went off their meds, lost their job because of that, then spiralled down until they came into the office claiming that people were breaking into their house to smear faeces on their walls, visibly upset and distressed and convinced this was true (it was not) - I don't see that as helping that person or society in general.
Best to worst case: neurotypical -> neurodivergent but can stay on their meds -> neurodivergent, can't stay on their meds, become more and more enmeshed in their disorder until they're a danger to themselves and/or others.
Neurodivergent is a wide category. So wide I don't think it can be said to be better or worse in aggregate. In specifics, sure, the kid with an 80 IQ who can't communicate with his parents has a painful life. The adult with a high-paying job in a technical field who doesn't give two shits about social pressures to spend beyond their means has a great life.
Anecdata is not data. My anecdata is that psychosis is not an impediment to my life, and is often an improvement to it. My anecdata is also that I've never taken neuroleptics and don't plan to; there is a fair amount of interesting suggestion in the direction they are not a net positive, but rather a contributor to the disability that people with some neurotypes experience.
If you don't mind, what sort of psychosis?
Fun for you. Not so fun for the people who have to deal with you pissing and howling on the floor. If you aren't one of the people who get so bad that they do end up pissing and howling on the floor, congratulations. But then what you are saying is "I can manage my symptoms/my symptoms are not so bad that they impinge on my ability to lead an independent life".
You're like someone who says "Well I only need glasses, plainly trying to cure river blindness is oppression of the differently abled by the ignorant and repressive majority!"
When nerdy people say they are "a little autistic," are they being a little cheeky, or do they mean it in a way that is continuous with a clinical diagnosis of autism? Is it legitimately just a matter of degree?
I think the answer has changed recently, when the DSM started treating Autism as a spectrum disorder.
People can have traits that would be in that direction on a spectrum from none to lots, but not far enough along the spectrum for a diagnosis.
That makes sense. I recently heard myself described as "a little autistic" for the first time, and while that description makes a lot of my traits more legible, I'm still not sure how much I or other very mildly autistic people should lean into it. At first glance it feels vaguely insensitive to use the same term to both describe my excessively mechanistic approach to social interactions / inability to tune out background noise, and someone who is autistic to the point of being non-verbal.
I’ve wondered if the whole “spectrum” idea might one day fall apart, or schism into multiple spectra. I personally have never understood how we came up with a spectrum that has my high-functioning friends on one end and the kind of profound autism that leaves you institutionalized on the other. The traits we identify with autism don’t seem like they slide along a scale the way something like hearing loss does.
Consider for instance Temple Grandin. She was non-verbal as a kid and only learned to speak after speech therapy. She was falsely diagnosed with brain damage. It seems like the only thing separating her from a "classical case" of autism was the dedication of her parents. If you agree to put Temple in the same category as a "classical autism" case, then what is the difference between her now and your "mildly autistic" high functioning friends?
I’m not sure I had a “classical” idea of autism. But there are a lot of people on the spectrum who are sufficiently high functioning that they’re only diagnosed as adults, if at all. And there are others in institutions whose condition would seem to defy any amount of parental or professional dedication. Surely there’s some difference between those two groups, and maybe that difference warrants two distinct diagnoses.
In my view, it’s totally fine to use descriptions along the line of “a little bit autistic”. Unless you are in the context of a psychiatric institution, the majority of autistic people that you’ll interact with are probably going to be the high functioning ones (such as myself), and there really is a meaningful category of people that have some traits associated with (predominantly high functioning) autism, while not being to the point of qualifying for a diagnosis.
I do find it a bit odd how much popular perception of autism conflates the high functioning (to the degree that you might not know they have autism from a casual interaction) with the low functioning (to the degree that they cannot really live without significant chaperoning).
This is similar to my view. I have never been diagnosed myself, but my son was. What the doctor described as the tells/symptoms were things I myself had as much or more, but had learned to cope with. (My son had sessions with a therapist for a while and now has no obvious or outward signs that are distinguishable from regular shyness or social awkwardness. He is also high functioning and doesn't meet most people's perceptions of what autism looks like, though it was more obvious before the sessions).
I don't tell people I'm autistic, because I'm high functioning enough that very few people would ever notice and I was never diagnosed. I even work in a people-oriented field, having figured out lots of workarounds to being a genuine people-person.
> I'm still not sure how much I or other very mildly autistic people should lean into it
In general I think we are doing a bit too much leaning into labels. Once you decide who you are is a foobarist you end up buying foobarist clothes, going to foobarist parties, and agreeing with foobarist policy positions. It’s true that this can give a comforting sense of belonging but it also closes you off to a lot of possibilities that you end up rejecting out of hand because they aren’t foobarist.
AFAIK the spectrum is from Asperger's to low-functioning autism, not from neurotypical to autistic.
Yes. I have Asperger's and mostly fit in without people knowing it.
https://psychology-tools.com/test/autism-spectrum-quotient
Tried the questionnaire, got to "I am fascinated by dates (agree/disagree)" and couldn't figure out whether they were talking about fruit, social interactions, or calendars. Unfortunately my answer is very different depending.
To add on to this, it is to be noted that very, very often we *hear* 'spectrum' but *envision* a _gradient_ — i.e., a scale going from more to less affected by a 'color' (the condition), when the intended meaning is usually a collection of different 'colors' (ways of being affected).
https://en.wikipedia.org/wiki/Spectrum_disorder
https://neuroclastic.com/2019/05/04/its-a-spectrum-doesnt-mean-what-you-think/
I think (as Brian said), that the clinical diagnosis is a spectrum, so that helps it be consistent. But it’s also a good shorthand for a lot of behaviors and traits; saying someone is “mildly autistic” actually has a pretty high information content about them. Even if a few bits of information are wrong in what’s conveyed, the mass of information can be worth it to use as a phrase.
wrt whether it’s insensitive, maybe? but it’s not clear to me who it’d be problematic for. I think people can be offended by basically anything, but generally I think if you’re using it as a legitimate descriptor, rather than as a pejorative term, you’re within the normal bounds of insensitivity, since it’s a spectrum and there isn’t anyone who seems to have a clear case for being offended by its usage.
Obviously if someone can present a clear case for why it’s problematic I’m open to changing my mind, but otherwise I view it as other spectrum words like “disabled” “hearing impaired” or “buff”
They're probably being cheeky. I could see this sort of thing as mildly bothersome to autistic people. But psychological conditions do seem to be on a spectrum. So if someone does describe themselves as a little autistic, what they are saying could make sense depending on why they said it. Just like someone could say they're a little bit borderline or bipolar or traumatized but generally those terms are reserved for more serious cases. I think everything is just a matter of degree.
It's legitimately just a matter of degree, and an inconsistent one. I have an ASD diagnosis since toddlerhood; most people in the ratsphere, including the ones who swear up and down that they're not, are far more characteristically autistic than me.
Can't speak for nerdy people generally, but when I say it I mean I have something like a subclinical form of autism. I have all the usual deficits/strengths, but they're minor enough that I really don't think I'd be diagnosable.
I suspect there are some who are cheeky, some who are trying to point out some traits on the continuous spectrum of symptoms, and some doing a little of both, often inadvertently. It is difficult to self-diagnose one's own psychology, and it's contingent on the reference points one has available.
They're doing the best they can to describe how they think their brain works, now that the Very Serious Medical Authorities have told them they're not allowed to use the word "Asperger's" for that purpose. I'm not sure it's an improvement to treat everything from what used to be called mild Asperger's, to someone who will spend the rest of their life in an asylum screaming, a "spectrum", but that's what we're stuck with unless we want to fight a language war to reclaim "Asperger's".
Wasn't a lot of this a push-back from parents of autistic kids, who didn't want their kids to be stigmatised as the "have to wear a helmet 24/7 or they will beat their brains out against a wall" (an instance I encountered during my working life) type, so they insisted that the Asperger's and 'high functioning' be folded into the entire autism definition and that it be put on a spectrum?
There *is* a wide spectrum between "socially awkward but can handle their symptoms" to "noticeably autistic but functional" to "will bite off their own fingers, will always need institutional care" autism, but I think that the very severe cases are not on the same line as the autists formerly known as Aspies.
I hadn't heard that theory as to the origin of the "spectrum" diagnosis, but it seems plausible. And I agree, we may not be dealing with just a single axis where the helmet-Autistics just have more of what the Aspies have. Actually, now that I think about it, this is something I'd expect Scott to be familiar with; he's written a few things sort of adjacent to it before, but I don't recall him addressing it directly.
I'm going mostly on what I've heard/read elsewhere. Currently working in administrative capacity in a centre for children with additional needs. They get referred to it by the local Early Intervention Services Team. Includes children with autism but also other developmental needs. Generally "mild to moderate" cases. I haven't seen any of the severe cases in this job.
The really severe one was a previous job, where part of the work was administering grants for home improvements for medical reasons. One family had two teenage/young adult sons with autism, and they wanted to build an extension to have a padded room so the older son could go there when getting stressed out, and so he could take off the motorcycle helmet he had to wear 24/7 because in the padded room even if he did knock his head against the walls he couldn't damage himself.
I haven't seen any of the 'savant' types. I do think keeping Asperger's Syndrome separate was more helpful, as there are definitely 'clusters' - the mild to the moderate to the severe types.
Are Tony Attwood's Aspie criteria (http://www.tonyattwood.com.au/index.php?option=com_content&view=article&id=79:the-discovery-of-aspie-criteria) known to everyone here? When I was re-diagnosed with ASD as an adult I found them quite helpful.
I worry that a lot of it is the same as when type A personalities describe themselves as "a little OCD". It's possible that some of them are right, but I've got to imagine that a lot of self-diagnosis is just people who have no idea what the real condition is like.
Depends on what you think about taxometrics. If you take their word for it, there isn't a bimodal distribution of autistic and non-autistic people.
Troll answer: they don't know any better to compare two (morally loaded) concepts which have significantly different magnitudes. In this case, nerdiness and autism.
Hang on, I thought Scott's taxometrics post said autism was a taxon?
I'm starting a self-experiment to see if melatonin can help prevent me from waking up too early, and am hoping to get some feedback/critique of my experimental design. Anyone have any suggestions?
I realize melatonin is typically used to control when you go to sleep, but I'm hopeful that it will last long enough in the bloodstream that it might impact time asleep as well.
Based largely off of this SSC post (https://slatestarcodex.com/2018/07/10/melatonin-much-more-than-you-wanted-to-know/), I've selected the following protocol:
- Study duration: 4 weeks
- Doses: 0, 0.3, and 3 mg, taken 30 min. before bedtime
- Randomization: pills placed in opaque gelatin capsules, randomly assigned to each day by an assistant (i.e. blinded)
- Measurement: time fell asleep, time woke up, time asleep, HRV, sleeping pulse, fasting blood sugar, change in blood sugar, measured by Apple Watch/Autosleep app & Dexcom G6
- Analysis: effect size and p-value tested for melatonin vs. blank. If meaningful effect size is observed, experiment will be repeated with that dose to confirm.
*Any suggestions on improving the protocol or other interventions would be greatly appreciated.*
For those who are interested, full details, including self-collected data that motivated the study, at: https://www.quantifieddiabetes.com/2021/07/please-critique-my-experiment-design.html
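For the analysis step, here's a minimal sketch of the effect-size computation (with invented placeholder numbers, not my real data):

```python
import statistics

# Hypothetical illustration: minutes asleep on placebo nights vs
# 0.3 mg nights (made-up numbers, purely for demonstration).
placebo = [432, 418, 445, 427, 410, 438, 421, 429]
dose_03 = [448, 455, 439, 461, 444, 452, 436, 458]

def cohens_d(a, b):
    """Effect size: mean difference scaled by pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * statistics.variance(a)
                  + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    return (statistics.mean(b) - statistics.mean(a)) / pooled_var ** 0.5

d = cohens_d(placebo, dose_03)
print(round(d, 2))  # positive d = more sleep on melatonin nights
```

A real analysis would pair this with a significance test for the p-value (e.g. `scipy.stats.ttest_ind` with `equal_var=False` for Welch's t-test), and with only ~9 nights per arm the power to detect a 20-30 min. shift will be limited, so the confirmation run is a good idea.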
A variant you might try is to set an alarm for 2 or 3 hrs after you go to sleep, and take the pill then. This is my go-to trick for periods when my sleep cycle gets funky and I wake up nightly after 4 or 5 hours and can't go back to sleep. Don't think I've ever done this with melatonin -- have used diphenhydramine or lorazepam. For me, anyhow, an alarm that goes off in the first 2 or 3 hours pulls me out of a dead sleep, and I can go right back to sleep after taking the pill. Taking the med at that point in the night means it kicks in at around the time I would normally have had my too-early spontaneous awakening, and its effect lasts til I've been asleep 8 hours or so.
Interesting. I'm only waking up 20-30 min. early, so I don't think this would be worth it for me. I'll recommend it to my wife, though. She frequently wakes up 3-4h after going to sleep.
You may also want to experiment with extended release melatonin.
Thanks, didn't know that was available. Do you know of any studies that indicate the optimal dose? Looking around, I'm seeing mostly 3-10 mg.
Found a 0.3 mg extended release (https://www.amazon.com/Life-Extension-Melatonin-Released-Vegetarian/dp/B00CDABRUW?th=1). So looks like I can match doses for extended and regular release.
I'm struggling with how much I can expect from a partner with ADHD. We have a young son who requires a lot of attention, and my husband just doesn't seem to have the patience required to spend long periods of time playing with him. I can deal with it, but I'd sure like to just hang out and play video games also. Am I being ableist to think he should just suck it up and split the time equally?
When my son was young, my wife and I would sometimes hire a babysitter when we were both home so we could each get work done.
I'm not really qualified to give advice here, but is there something else helpful he could be doing? If he genuinely can't handle long play sessions it won't be good for anyone to try to force it; but if he spends that time doing something fun and easy and unproductive, you will understandably be frustrated and also somewhat suspicious (not necessarily that he's being cynical or deliberately dishonest, but that he's subconsciously making more of a choice than he'd admit to himself or you). If there are household tasks that he can handle, and that you might otherwise have to do yourself, maybe he could put some extra time into those while you're playing with your son.
I would try to recast your question to ignore the ADHD, because it's not necessarily relevant. I used to love playing with our little kids. My wife has always been unable to do it, and she doesn't have any kind of mental health issue. Ability to play with young children simply varies from person to person. What she could do, on the other hand, was plan day trips and put the kids in the car and go and look at a Thing (whereas I found that kind of process utterly enervating and pointless). With parenting, ultimately you just have to accept what the other person can or cannot do with the children; you can't force it.
The other issue is that you seem to suggest that when your husband isn't on kid duty, he goofs off and plays games. I don't know if you're a woman, but this sounds like quite a common male/female dynamic, and my very very very very strong advice is to be very explicit with him about it. (Apologies for horrible stereotyping to follow, obviously this is not true of everyone, but...) Men really really think differently about stuff, and we really really won't realise that we're annoying our partners until a row blows up. If you want your resident male to do more about the house, instructing him directly on what to do and when to do it is much more likely to get good results than any other tactic.
Completely agree. Some people just do not enjoy playing with small children. Does not mean they have attention deficit disorder or a cold heart. Think of a task your husband can do to help the household and free you up some, something he does not find irksome and weird, and ask him to do more of that. It's likely that when your child is older your husband will find him much more fun to hang out with.
If you are just in the same room as your kids and they play by themselves, you are not neglecting them. Kids do most of their playing that way. Sometimes they need direct attention and it can be fun to get on the floor and play with them, but if you are not doing that, they'll still do fine.
This. The expectation that parents constantly play with their kids, Bluey-style (https://www.youtube.com/watch?v=EJkn-r-rJJY) is very, very new and not necessarily good. You say you have a young son who "requires a lot of attention," but are you absolutely certain that's true?
I obviously don't know anything about your son, so maybe it is true. But unless he's too young to be trusted not to choke himself to death swallowing a toy or he has a developmental issue, he should be able to play without your constant involvement! Lego, Play-doh, caring for baby dolls, digging a hole in a sandbox, going down the slide of a swing set, imagining a story with friends...until 20-30 years ago, these were activities parents generally didn't get directly involved with. They might admire the end product of a Lego fort or a clay snake, but they didn't usually put their hands on the toys.
I'm 41, have memories going back to 2.5 years old, and I don't remember my parents *ever* playing with my toys (or my younger brother's) as if they were a peer. In fact, from 4 years old on, if my mother "caught" me acting out a story with My Little Pony figurines, I would stop and wait for her to leave the room before getting back to it. Every parent of every kid I ever knew had similar boundaries. An adult joining us in imaginative play would have been way too intimate and intrusive.
Obviously none of this answers your question about your husband's involvement in child-rearing, but it is worth considering: What if you gradually assert some boundaries on you son's demands for your attention, and insist that he learn how to entertain himself without you (or your husband) actively engaging him every minute? If your son can act out a dinosaur war on the living room rug or sculpt a Play-doh family to live in the Lego house he built while your husband and/or you read a book or play a video game nearby, you'll all benefit.
It's just something to consider.
Yeah, there are a lot of good points here. I think part of my problem is guilt from having him at daycare 40 hours of the week. I feel like I need to "make the most" of the time I am there with him. He feels that too. It's a constant refrain of "Mama Mama Mama"
Strictly anecdotal example coming up, but I don't think we can assume daycare actually impacts clinginess/sense of loss/etc!
How old is your little guy? My best friend is a stay-at-home mom with an almost four year old girl. Between normal maternity leave and the pandemic, her kid has never even met a hired babysitter, much less been in daycare. She has quite literally spent her whole life exclusively in the company of her parents' adult pod (grandparents, an uncle, a couple of her parents' friends), and her mom was present for way more than 99% of those gatherings.
Her kid is "Mama Mama Mama" with 20 minute meltdowns if she wants to leave the kid with her doting grandparents and go to the grocery store by herself.
My friend has *maybe* had 150 total hours away from her daughter in her daughter's whole life. And yet her kid throws huge fits whenever they part, as if she's being abandoned in the woods to hungry bears.
So I'm just saying. It might not be daycare, it might just be that your kid is programmed by evolution to solicit maximum engagement from you so he survives to adulthood. That doesn't mean he actually needs it in your safe 2021 home.
I'm sure you're a great mom. Don't accidentally deprive him of the opportunity to be totally self-absorbed in autonomous play. He'll figure out that it's a pleasure different from playing with adults.
A couple of things jump out at me here.
1) Kids will behave how you condition them, and you condition them with both rewards and punishment. If the constant refrain is "Mama Mama Mama", it's because it leads to good results for him. You need to find ways to recondition him. That first involves figuring out what you want out of him. More autonomy? Maybe. Trying to handle things himself first? Maybe.
2) From above, it sounds like you have a marriage type that I call "equal," where your foundational principle is that the two adults are equal partners who contribute equally. You should do half the kid playing, and your partner should do the other half.
(Totally personal opinion here) This, frankly, is a bad way to structure your marriage, especially once kids show up. How can a kid be equal with the parents?
Instead, think about moving your marriage from a marriage of equality to one of mercy. It is the job of each adult to give each member of the family (including themselves) what they need. Full stop. The same way two renters are each 100% responsible for the rent. It doesn't mean you need to kill yourself to serve everyone else - that's not giving yourself what you need.
But my wife hates hates hates scrubbing the bathroom. I find it fine, but old cold wet food gives me the willies. So I scrub the bathroom and she does the dishes. Simple example, but it applies to larger things. I work an intensive, long-hour job. My wife just isn't up to it. She makes about 12% of what I do in a part-time job. If I demanded she have a job like mine, it would burn her out.
Now you throw in a kid, who doesn't know how to be fair. He's going to throw a wrench into any equal plans you have because he's a child. I bet you suspend the equal thinking when your partner has the flu or whatever. Well, a kid needs that 24/7 for at least the first decade of life.
You can play with the kid for a certain amount of time before you get burned out. Partner can play with the kid for a shorter amount of time. That's all the two of you can give. Your job is to figure out how to make it enough, and to give your kid what he needs.
But I bet he needs a lot less than you think. You should read Selfish Reasons to Have More Kids - it really illuminates how overworked modern American parents are, in ways that don't benefit their children.
It seems to me, as others have said, that you ought to be able to find room for your husband to watch the kid without the husband needing to exhaust himself by "playing" in what sounds like a fairly specific and, for him, exhausting way.
It's totally fine for the husband to just keep an eye on the kid while he plays by himself, watches TV, or does whatever. The husband can also go take the kid for a walk around the neighborhood, go visit the park, or just take the kid with him on a run to the store or something. It could also be that your husband can find his own way of playing with the kid - if he has a hobby or interest of his own, maybe he can find ways to share parts of that with the kid, which might feel much more meaningful than whatever banal playtime activities are currently normal.
Ultimately though, it's not ableist to ask your husband to step up and do his part. But it probably is ableist to expect that your husband's interactions with the kid will look exactly like your interactions, and that his parenting style will exactly match your own specific preferences.
I'm surprised that nobody here has asked if you've tried medication for ADHD? If you have and it didn't work or if you haven't and have reasons for that stance, perhaps you'd explain in a further post.
Christina The Story Girl has replied to you in an excellent post with which I fully agree. It seems to me your thinking is taking you on a dangerous path that may not end up anywhere nice for your family.
I just did a podcast episode with Greg Cochran on UFOs. We both think there is a reasonable chance that some of the UFOs seen by the Navy are aliens. Scott once wrote that Greg had "creepy oracular powers". https://soundcloud.com/user-519115521/cochran-on-ufos-part-1
Is there a transcript somewhere? I read much faster than people talk.
Sorry but no.
This! ^ I’m glad audiobooks and podcasts exist for those who enjoy them, but to me it’s a maddeningly slow and inefficient method of getting information.
I'm listening to it rather gradually, and I recommend the first 20 minutes or so as a good discussion of high-altitude lightning (sprites), which pilots simply didn't talk about in public because it would make them seem too weird. There was also the problem that it's visible to humans but so brief that it was difficult to photograph.
So I'll recommend cachelot.com - a collection of essays arguing a fascinating case for cachalots (aka sperm whales) being human-level intelligent. If nothing else, it shows just how little we really know about cetacean and especially whale intelligence, even now. The site also includes a blog on animal intelligence in general, written more rationalist-adjacently than other stuff I've seen on the subject.
Looks like a major research effort is underway to decode their hypothetical language!
https://www.projectceti.org/
Something I’ve been thinking about a little recently:
Is it surprising how much intelligent people disagree?
I’m often amazed by the extent to which people of roughly equal intellect arrive at such divergent conclusions, even when they are forming opinions based on the same information and possess similar expertise. I also find it fascinating when people that I agree with on one issue express views which I find utterly illogical elsewhere. Intuitively, I am surprised by these observations, but maybe I shouldn’t be.
Interested to hear others’ thoughts.
A lot of intelligence is creative and performative - it's not like a prediction market where all the points are for being right. Especially with armchair thinking (no skin in the game), having opinions is more like building something out of lego than finding the "right answer". From that perspective, it's not surprising that people can reach different conclusions from the same set of initial blocks.
They may have vastly different priors, and confirmation bias does the rest of the work. In general, I'd say that the human brain didn't evolve to be able to reliably arrive at objective truth, and rationalism's expectation that it's feasible to overcome this is naive at best.
I'm not sure that two people (say Alice and Bob) being presented with the same information is enough to ensure that they come to the same conclusions. Even if Alice and Bob have approximately the same priors coming into it,
1. They could have differing update rules. How do they incorporate new evidence into their model of their world? Alice might weight new information much more than Bob.
2. They could just experience the information differently. Alice might see the dress (https://en.wikipedia.org/wiki/The_dress) as white and gold, Bob might see it as black and blue. Silly example, but if we can't even agree on something as foundational as color it's possible we could subjectively disagree on more abstract information.
3. What if they just "think differently", but still rationally? If Alice tends to think in terms of causal decision theory and Bob tends to think in terms of evidential decision theory, are either of their positions on https://en.wikipedia.org/wiki/Newcomb%27s_paradox more justified than the other's?
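Point 1 can be made concrete with a toy Bayesian sketch. This is purely illustrative: the hypothetical `update` function applies Bayes' rule with an assumed "tempering" exponent on the likelihood ratio as a crude stand-in for "weighting new information less," and all the numbers (priors, likelihoods, weights) are made up for the example.

```python
def update(prior, likelihood_h, likelihood_not_h, weight=1.0):
    """Bayes' rule in odds form, with a tempering exponent on the
    likelihood ratio. weight=1.0 is standard Bayesian updating;
    weight<1 discounts the evidence (a toy model of an agent who
    weights new information less heavily)."""
    ratio = (likelihood_h / likelihood_not_h) ** weight
    odds = (prior / (1 - prior)) * ratio
    return odds / (1 + odds)

# Same evidence for both agents: it's 0.8 likely under H, 0.3 under not-H.
# Same prior (0.5), but Bob tempers the evidence - different posteriors.
alice = update(0.5, 0.8, 0.3, weight=1.0)  # ~0.727
bob   = update(0.5, 0.8, 0.3, weight=0.5)  # ~0.620
```

Running the same comparison with different priors instead of different weights (say 0.5 vs 0.1) produces the same qualitative result: identical evidence, diverging conclusions.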
People have different personalities and moral foundations. If something is an issue related to morality or something political, then it is difficult for people to think clearly. Moral reasoning is a complicated process and different views can lead to different outcomes. Political beliefs are built on moral reasoning. Political and moral beliefs inform a lot of thinking about empirical issues like minimum wage and unemployment or climate change. If something is non-controversial, non-moral and non-political, it is easier to agree but intelligent people are still subject to biases.
I think some of it is that people differ in how conformist they are, i.e. if there is an objectively correct viewpoint and a different socially correct viewpoint, equally intelligent people will disagree simply based on different levels of conformity. Another factor is that eureka moments happen randomly.
Pointer: https://www.wikiwand.com/en/Aumann's_agreement_theorem
Aumann's Agreement Theorem describes the broad area you speak of. Here's a pointer to Yudkowsky's essay on the subject, "complete" with some comments/discussions below from the rationalist community:
https://www.lesswrong.com/posts/NKECtGX4RZPd7SqYp/the-modesty-argument
I'd say that the concept is unrealistic and useless. The profoundly unrealistic part is condensed in these five words: "based on the same information".
On tabula-rasa AIs, maybe. Not on humans.
The "information" based on which you assess situations, has accreted over decades and decades, depending on your age. There's megabits and megabits of such information involved, a significant portion of which can be a factor in how you view and judge any given issue. When you "state" any given issue from your perspective, you might state it with a few dozen bits of information, and you'll be under the illusion that those few dozen bits are "really it", that they fully cover your judgement of that issue, and there can not be any disagreement in the judgement if there's no disagreement upon those few dozen bits of information.
That view is, in almost all cases, false. There will be kilobits, hundreds of kilobits of information that really goes into that judgement which you *think* is fully expressed by those few dozen bits. The other 99% you "naturally take for granted", whereas granted it is not. If you took someone whose judgements you "find illogical" and locked yourself in a mountain hut for a whole week, doing nothing but descent into the differences between all your priors, you might still emerge a week later having covered no more than one percent of the *actual* information space from which your different judgements stem.
Scott's blog - much more so than Yudkowsky's - is the zone where people come to engage in such activities. (minus the mountain hut)
There’s a theorem by Scott Aaronson that says that actually you don’t need to exchange that much information to arrive at agreement.
Aumann's Agreement Theorem doesn't require the information be common, only the priors and the posteriors (the differences in posterior being the disagreement). The example he gives at the end of the paper (http://www.dklevine.com/archive/refs4512.pdf) does a good job of showing it -- the parties are exchanging their posteriors, not the actual evidence they used to form it. The common prior assumption is strong, but not as strong as requiring all information to be shared. Some even say common knowledge of rationality is enough to meet common priors (https://mason.gmu.edu/~rhanson/prior.pdf) .
As I understand it, the "point" of the theorem is less that rational agents presented with the exact same evidence in total should come to the same conclusion, and more that if you trust that the other agent is rational you should use that when refining your own posterior. Its practical value (which I agree is dubious) is to point you toward being more modest in your disagreement if the other agent is honest and rational.
Human intelligence is not really optimized for finding the "correct" view on something - it is optimized for mounting a rhetorical defense of an arbitrary, pre-selected position.
If your social community believes obscure religious idea A, and a neighboring social community believes a contradicting obscure religious idea B, your selection of obscure religious idea to believe will simply be a factor of whether you want to remain a member in good standing of your current community or see a benefit to jumping ship to a new one.
In either case, the ability to mount creative and clever arguments for your chosen position will place you in high regard in whichever community you choose to affiliate with, and may even be respected within the rival community in a "worthy opponent" sort of way.
This sort of social game is what human intelligence is optimized for. We are not optimized at all for looking at the world with unbiased eyes and forming an accurate map of the territory, because that sort of behavior tends to make you popular with nobody, and it is only in very recent times that the rewards for seeing clearly may come to be as worthwhile as the rewards for being a member in good standing with the community.
If you are surprised by it, you are running off the wrong theory of rationality/epistemology, because the right theory would predict the facts. That should be obvious, but a lot of the rationalsphere insists on using broken theories, like Aumann's theorem.
Issues that are sufficiently deep, or which cut across cultural boundaries, run into a problem where not only do parties disagree about the object-level issue, they also disagree about underlying questions of what constitutes truth, proof, evidence, etc. "Satan created the fossils to mislead people" is an example of one side rejecting the other side's evidence as even being evidence. It's a silly example, but there are much more robust ones.
Aumann's theorem tacitly assumes that two debaters agree on what evidence is: real life is not so convenient.
Can't you just agree on an epistemology, and then resolve the object level issue? No, because it takes an epistemology to come to conclusions about epistemology. Two parties with epistemological differences at the object level will also have them at the meta level.
Once this problem, sometimes called "the epistemological circle" or "problem of the criterion," is understood, it will be seen that the ability to agree or settle issues is the exception, not the norm. The tendency to agree, where it is apparent, does not show that anyone has escaped the epistemological circle, since it can also be explained by culture giving people shared beliefs. Only the convergence of agents who are out of contact with each other is strong evidence for objectivism.
Experts are selected not for accuracy, but for improving our theorizing about a phenomenon. In the long run that should lead to accuracy, but in the short run it's better to have lots of people proposing weird theories like hollow earth, heliocentrism, etc., some of which will be right and most wrong.
I am not sure experts are even selected for that. It seems to me that experts are chosen based on how much they agree with existing experts, or just the relevant prior beliefs of the relevant selection group.
I don't think that's right. You don't become a respected academic by agreeing with everyone - you absolutely *must* say something weird and new. There are definitely limits of the type of weirdness and newness, but someone who just agrees with all the conventional wisdom won't get far.
Not at all. The vast majority of every conference I have attended is made up of papers that poke a little at the edges of what is generally known and find "Yup, what we thought was true is pretty much true", with the caveat that they can demonstrate it at p <= .05.
Now, if you want to be in the top say 5% of a field, you want to try and mix it up quite a bit. That might get you in the door. From that position, you then dictate what "respectable" means.
Further, note that there is a difference between your research conforming to beliefs about what the current state of research is and beliefs about how the world should work. If you are say, Card and Krueger, you can get famous publishing an article that goes against established economic theory, that raising the minimum wage lowers employment, but goes along with the prevailing political (for lack of a better word) beliefs of the academic community, that raising the minimum wage is good and something we should do.
In fact, doing that is a big win because people WANT to believe it. All those academic economists that are ashamed to point out at faculty gatherings that raising the minimum wage probably hurts employment prospects now can merrily join the consensus of the rest of the faculty. Whew! No more cognitive dissonance!
So yea, if you want to be respectable, that is, hold your job and get tenure, you need to toe the damned line. That line is drawn both by what current experts agree is true, and what the group wants to be true. After you get tenured you can maybe afford to get a little weird, but even then you'd better be careful.
*Note* My experiences are from the field of economics, with a little dabbling in philosophy and sociology. The sciences that are farther from politics are possibly not so bad, but any science that suggests what governments should do in some fashion gets corrupted really quickly. Jane Goodall might have some fun stories about how people pushed back against her work, however, and chimp behavior is pretty far from human society. Well, at least in how we apply it.
> even when they are forming opinions based on the same information
That's never the case. There's literally always divergence in information, most often in personal experiences that shaped their reactions and intuitions around certain topics. Couple that with pervasive cognitive bias and divergence in conclusions shouldn't be logically surprising. Much like the unexpected hanging paradox, we're often surprised nonetheless.
This is extra interesting in the case of differences in Protestant theology. We're all working off the same source material, which we hold to be self-consistent and infallible. Most of us will quote identical maxims about taking the text as it stands, not theologizing off experience, etc. We'll even agree about the order of supremacy of the books. But consider the Calvinists vs the Lutherans - a tiny (and I do mean tiny) difference in how to weight pure logical consistency vs. our human inability to fully comprehend the Divine, and they wind up disagreeing about a whole slew of things. It's astonishing how much of a divergence small differences can produce.
In my experience, some highly intelligent people develop such a powerful distrust of the intellectually average that they end up a little too self-reliant when it comes to forming opinions about the world, effectively insulating themselves from other people's perspectives and opinions even when those people have far more expertise behind those perspectives and opinions. Meanwhile, smart people are still susceptible to bias. So you end up with some very biased and insulated smart people convinced of really weird ideas, despite having access to the same information as everyone else, because they won't let anybody talk them out of it.
There are a lot of good responses to your question, but I'll also add one other thing to consider. You are looking at the differences, which are sometimes pretty extensive and important. If you looked at the similarities, you may find that even smart people who disagree vehemently on certain topics actually agree on the vast majority of topics. Not too many smart people disagree about the rough shape of the earth, the existence of countries, the structure of the periodic table of the elements, and thousands of other details.
We concentrate on the differences because there isn't that much to say about where we agree. "Argentina is a real country." - "Okay, I agree." That's pretty much our response to tons of random factoids, unless or until we disagree. Scott's Link posts contains lots of cool information and nobody says much about them. When they disagree, it spawns comment threads and that's most of what people will read about the Link posts.
Even in fields where disagreements are notorious, like economics, there is mostly agreement on basic principles and facts. If two "intelligent people" agree on 98% of all relevant facts, it is certainly noteworthy that they could still disagree on conclusions and important details, but I think it's less noteworthy than the fact that they agree on so much in the first place.
Human reasoning is heuristics stacked upon heuristics. Expect chaotic outcomes.
Diversity cultivates more broad experimentation, which invites more serendipity. I’m not sure how an evolutionary mechanism would reward that, though. Even if being part of a diverse society helps everyone, the individual payoffs might still encourage convergence. Since we observe divergence, maybe not.
On polygenic screening - a recent article in the Guardian gives some evidence that public opinion (in the UK at least) might be positive towards the broad family of genetic screening. Of course, just one data point, and this is post-natal not embryonic, but I think this is useful for context as there was some speculation about whether the tech would be outright banned.
Of course I could see this going differently in the US vs. Europe.
https://www.theguardian.com/science/2021/jul/04/whole-genome-sequencing-of-all-uk-newborns-would-have-public-support
That's not eugenics, though, because you aren't selecting the babies with the lowest risk; killing the babies with high risk of X gets you arrested for murder.