Has this happened for any other great scientists/intellectuals either in those days or recently? Was Einstein a special case, or have pilots just stopped doing that sort of thing?
Depression isn't the only cause of suicide. People kill themselves because of severe pain that they don't think is going to get better, and sometimes they're right about how intractable the pain is.
10-15 years ago, descriptivism was left-coded and prescriptivism was right-coded. The right said "speak properly" and the left said "let people be as they are -- all language is legitimate."
But when the left gained cultural power in the past several years, progressive organizations started endorsing prescriptive changes in language (e.g., "Latinx," "pregnant people") that were little used outside of small political circles.
Power corrupts! Even in petty linguistic debates.
Of course, none of this matters. Like everything else, language evolves through shared mechanisms of popular usage, activist innovation, and elite endorsement, and neologisms succeed and fail based on murky societal machinery.
Not to rehash the debate, but proponents of the figurative use of "literally", like myself, are being prescriptive. This isn't just something that crept into the language; it's always been correct.
The Descriptivism vs. Prescriptivism debate is mostly an uninteresting one for me.
I do think that one of FDB's main points is fallacious: he seems to be saying, "Descriptivists claim that they're not prescribing, but - you see - they *are* indeed prescribing; they are prescribing descriptivism", which is... just a trivial gotcha? This is like saying "Post-Modernists say they are skeptical of grand narratives, but - my friends - isn't being skeptical of grand narratives, in and of itself, a grand narrative that they believe in?". Or "The priest tells me not to judge other people, but isn't *he* judging my tendency to judge other people by that statement?". Almost any opinion against anything general enough can be gotcha-ed in this way. It's lazy and uninteresting.
I see Descriptivism as a perfectly coherent way of thought. It *is* prescribing; it never said it doesn't prescribe; it's a normative opinion about how to study languages, after all. It prescribes that you shouldn't prescribe language to its practitioners when you're studying it. Biologists don't genetically engineer animals into the forms and functions that they *think* would be better; they study the forms and functions *as they already exist now*. Similarly, linguists should study dialects and other forms of language as they are spoken/written/performed, not as they wish them to be.
I think that the point FDB *should* be arguing, and is indeed implicitly arguing beneath the fallacious main point, is that everyday-life invocations of Descriptivism as a free anything-goes invitation are annoying and cringe. Descriptivism is the attitude of linguists; it evolved to fill a very specific niche in a very specific intellectual context, and we're under no obligation to follow it in our everyday life or in any life where we're not studying languages professionally. It would be like telling people who prefer cats over dogs as pets, "But... But... you *should* live with animals as they are, not as you wish them to be." Pet lovers are not biologists; they are allowed to have favorites. Likewise, ordinary people in everyday life are allowed to have preferences in language, and they are allowed to hate and argue against other ways of practicing it that they don't like.
Also, Merriam-Webster is a woke organization that was caught changing the definitions of words to better match The Faith: automatic +2500 cringe points.
I posted the comment above because it's a relatively novel point in this debate-space that I had never thought of before: Descriptivism vs. Prescriptivism as a political tool. Whenever you - a political faction - don't have the tools to enforce Prescriptivism, you preach Descriptivism (or, more accurately, the anything-goes caricature of Descriptivism that allows you to speak as you like against the dominant forms of language). Whenever you do have the tools to enforce Prescriptivism, you sing its praises and enforce it.
Having nearly been felled twice already by electric cars gliding past as silent as ghosts, I wonder if drive tones should be made mandatory, analogous to ring tones on phones! I think I'd go for clip-clopping horses' hooves, or maybe a chuff-chuff steam-train noise.
I only hope musical jingles are excluded though. Otherwise walking down the street in electric vehicle traffic it will sound like one is surrounded by hundreds of chiming ice cream vans!
One would think so, but I'm pretty sure the cars I encountered made the minimum noise a one ton powered object in motion possibly could!
I imagine before long, if they aren't already, electric cars in motion will legally be required to make a noise comparable to a reasonable-quality internal combustion engine. That noise will also need real-time dynamic adjustment to match the car's acceleration, even an artificial squeal when braking sharply or turning, if the tyres don't themselves squeal.
I am stricken with dread upon reading this suggestion. The idea of a street full of artificially generated car noises sounds like a hellscape. Doubtless they would all be custom, and all of them doubtlessly awful. The aural equivalent of xenon headlights.
Let's not forget the *current* situation is a hellscape! It's only through constant exposure that the roar of traffic isn't identified as awful. The deep feeling of well-being I get in nature is in no small part due to the soundscape. I'm willing to take a few more accidents to improve the quality of life in cities dramatically. Also, anti-crash software is improving rapidly, so this might just be a transition period in any case.
Doctors of AST: what is your relationship with radiologists? Do you ever disagree with their reading of an image, do you have access to images, do you consult the images yourselves or just go off of the report? Has your hospital/health care system made it harder to work with radiology by centralizing or outsourcing it recently?
I'm asking out of curiosity because I've heard of that type of centralization - outsourcing image reading to doctors outside the system (even outside the country) - and I'm wondering whether other specialists ever second-guess radiology.
Also curious to hear how this sort of centralization affects the radiologists and how you interact with the other specialties, if at all.
Good article on people's tendency to talk nonsense when they have opinions without specific knowledge and how hard it is to know when you're having an opinion without specific knowledge.
Here's what I believe to be a powerful marker -- the word "just", as in "why don't people just do whatever you think will make the situation better?" Why don't fat people just eat less? Why don't people just stop having wars? Why don't people just stop committing crimes? Why don't the police just stop abusing people?
"Just" means you're ignoring why something isn't happening.
I don't like the example of people not knowing how bicycles work in detail, it's a meaningless gotcha in the wider context of the article.
In "The Design Of Everyday Things", the author recounts a study of how people failed to recognise (not draw, just recognise) their country's coin designs among multiple similar but fictitious designs (e.g., the face on the coin points left instead of right, or the phrase on the coin is worded slightly differently or placed in a different position). The author explains that this is because the brain only understands objects/systems/phenomena well enough to distinguish them from other similar objects/systems/phenomena. For example, if you have a red notebook and a green notebook, your brain might completely tune out the drawings on the covers or the sizes; it just wants to distinguish between the two, and the red-green distinction is as clear as anything. If you got a third notebook that is also red, only then might your brain start paying attention to other qualities of the red notebooks.
The "Design"'s author phrases this as a general principle: the brain remembers only enough to discriminate between alternatives. If a certain piece of knowledge is not necessary to distinguish between two decisions or situations that the brain needs to distinguish often, it's most likely not kept in memory; why should it be? Does anyone besides physicists and bicycle manufacturers ever need to know how bicycles work in detail to distinguish between two similar situations?
The rest of the article is nice. We already have ~13 slightly different ways of expressing "They think they know but they actually don't," but "Cocktail Party Ideas" is a welcome addition nonetheless, certainly better than the overused and oft-misunderstood Dunning-Kruger effect. The author could certainly do with less arrogance.
Dan Luu takes a skeptical eye to the accuracy of futurists. This seems relevant also to Scott's claimed success in predicting scale improvements to image-generation AIs.
This was one of my first thoughts when I read the first few articles on obesity by Slime Mold Time Mold. The different responses of the sexes are interesting too... we see the same in humans somewhat.
As well as obesogens, he might turn his attention to possible "gendergens". There have always been camp guys and butch women (putting it crudely), but most of these have been, and are, content with their gender. In proportion to population sizes, though, were there really as many people with gender dysphoria in the past as there seem to be today?
Perhaps there always have been, but obviously those in the past would have simply had to make the best of a bad job.
But if not and there is a genuine upsurge then maybe chemicals introduced and widely used only in recent times have an effect on gender preferences in people predisposed to their effects, possibly as embryos.
Yeah, if more people would focus on this then maybe we could find out what the exact 'bad' chemicals are. And maybe discover that, yeah, that plasticizer has an unwanted fat response, but this one here is fine. (Kinda like different fluorocarbons and ozone... though a much more difficult problem to figure out.)
I've been reading two books on William Marshal, a prominent medieval figure — born the fourth son of a minor baron during the Stephen and Matilda civil war, he spent a decade or two as one of the top tournament knights in western Europe and before he died was regent of England. His other distinction is that he is, I think, the only knight of the period for whom we have a biography written shortly after his death.
Both books agree that the biography is not entirely accurate, which is true, but they disagree about which parts are wrong. Crouch believes that William's father saved Matilda from capture and lost an eye doing it, but does not believe that William was accused by rivals at the Young King's court of an affair with Henry's wife. Asbridge doesn't believe the first, does believe the second. In both cases, the author puts his view not as "I think this is what happened, but ..." but as a simple fact — "this is what happened."
This is a case where they have opposite views, but more generally (a problem I noticed reading Crouch before I looked at Asbridge) they treat guesses as facts. In one case Crouch gives a footnote to support his guess. If you read it, it turns out that there are three primary sources on the question: the biography and another source agree, the third doesn't, and Crouch simply asserts on that evidence that the biography is lying and goes on to describe events after the relevant scene on that assumption.
Is this sort of intellectual arrogance, treating conjectures as facts, typical behavior for academic historians? Is it only for books aimed at a general audience, which both of these are, trying to tell an entertaining story without confusing the reader with alternative interpretations of the evidence? If I read journal articles by the same authors, would they acknowledge the uncertainty of their interpretations?
For the curious, the books are _William Marshal_ (third edition) by David Crouch and _The Greatest Knight_ by Thomas Asbridge. The biography is _The History of William Marshal_, translated by Nigel Bryant.
Archeology seems rife with this stuff as well. If you follow up on the expansive descriptions of ancient cultures and peoples, many times the only evidence for it is something like a broken dish, or a bone that looks like it was made into a tool. You can sometimes read page after page of description about how this culture supposedly lived, which even if it's a pretty educated guess, is still wildly made up. We know *far* less in fact about almost all ancient cultures (especially pre-circa 4,000 BC) than most people commonly assume.
I have now finished Asbridge, and found another and more important disagreement. The _Histoire_ claims that John, on his deathbed, asked for William (not present) to forgive him for all he had done to him and take charge of John's son. That's the case where Crouch confidently asserts that the _Histoire_ is lying, and goes on to describe William's appointment as regent as a coup. Asbridge accepts the _Histoire_'s account, noting that it is supported by another source.
Would you please give the page numbers or some aid to finding the footnotes or passages you're pointing out? Otherwise it is difficult to re-trace your line of inquiry, and so answer your first question.
Asbridge discusses who saved Matilda in chapter 1, section "The Civil War," page 13 of the kindle, which is how I am reading it. He accepts the story about William being accused of an affair with the Young King's queen in Chapter 6. He discusses what King John said on his deathbed near the end of Chapter 13, Section "The Greatest Choice," p. 340 of the kindle.
Crouch discusses the saving of Matilda on pages 16 and 17, the accusation of adultery on pp. 58 and 59, describes the Histoire as lying about John's deathbed on pages 158-9.
Yes, in popular histories. Less so in journal articles. Classes and the like are somewhere in between, which is not to their credit.
Historians are some of the worst public intellectual academics in my experience. They tend to have extremely specific specialties (which is good) that they then apply to broad modern issues (which is bad) while simultaneously arguing they don't count as "theories" that should be subject to empirical scrutiny or verification (which is worse). This is how you get a specialist in pre-war Republican economic ideology or Interwar Italian cinema writing what's basically political red meat about modern issues and then responding with snobbery when challenged. Even in books about their specialty they often fall down.
Can you give the page numbers for the passages you're referring to? It shouldn't be too hard to find what the most recent scholarly interpretation is.
To a first approximation, articles are peer-reviewed more thoroughly than academic books, and popular books sometimes not at all. What you describe would not tend to pass peer review in a good journal. There is a sort of semi-convention where senior professors get to write a book that is a "this is how I see it" -- and you can take it or leave it but their intuition is honed on decades of research and so is worth taking seriously. Crouch's book strikes me that way. I don't read it as arrogant, but I can see how one might.
Asbridge I don't know, but I immediately get warning flags about that book, as it's written first to be entertaining and uses a vague system of endnotes for its references.
If you message Bret Devereaux on Twitter you might get some interesting opinions on this. Not the medieval stuff specifically, but history scholarship generally.
I've lately been working my way through a bunch of English political history books, mostly royal biographies, and I've frequently noted substantial disagreements in interpretation between different authors covering overlapping ground. The most glaring example I can think of off the top of my head concerns Henry VIII's clever "compromise" idea during his divorce proceedings against Catherine of Aragon, by which he would have remained married to Catherine but received special dispensation to also marry Anne Boleyn, based on Old Testament precedent for plural marriage by the Kings of Israel and Judah and by various Patriarchs. Peter Marshall's "Heretics and Believers" attributes it to Martin Luther, Alison Weir's "Six Wives of Henry VIII" (if I recall correctly) attributes it to the Pope, and Carolly Erickson's "Bloody Mary" implies it was Henry's own idea. Of the three, I believe Marshall the most, as he's the most recent source of the three and because he sounds like he's citing a specific letter from Luther to Henry (unfortunately, since I've been consuming them as audiobooks, I can't easily check footnotes to see what sources each gives).
I get the impression that a big part of the problem is how sparsely documented Medieval and Early Modern Europe were by modern standards, and how even the "good" contemporary sources tended to be severely unreliable narrators. Any coherent narrative necessarily needs to make a ton of judgements and interpolations, and I've noticed that a lot of the reasoning behind these (especially in books targeted more towards popular audiences) often gets relegated to the footnotes or skimmed over entirely.
That said, I really appreciate authors who actively discuss points on which they disagree with other historians with at least a brief discussion in the main text of how they disagree and their sources and methods for reaching their contrary conclusions. Of the authors I've been reading, Ian Mortimer seems to do the best job of this, although that may be driven by his advocacy for hypotheses that are significantly contrary to the prevailing interpretations by other academic historians. Antonia Fraser also seems to do a particularly good job of showing her work around her interpretations of uncertain or controversial questions.
On unreliable narrators, I've been getting particularly frustrated with the degree to which Tudor historians tend to rely on the Imperial Ambassador Eustace Chapuys. To Chapuys's credit, he had excellent access to most of the important figures of Henry VIII's court, especially Catherine of Aragon and her daughter Mary but also Henry himself among others, and he wrote extensive and detailed dispatches. Unfortunately, he was a highly partisan figure who tended to lie a lot.
The further back you go, the worse the documentation tends to get. I started my current dive with Marc Morris's book on Anglo-Saxon England, which contained some bits which were almost entirely reliant on archaeological evidence due to an almost complete lack of contemporary written records. He also at one point cited Beowulf as an illustration of court life, albeit with the qualification "As a historical source, this story has the disadvantage of being completely made-up: the monsters and the dragon are something of a giveaway."
I hope you get meaningful responses from people familiar with medieval historians. Before settling for reading the Very Short Introduction to the Crusades, I considered longer books. Asbridge was one of the authors in question. I don't recall any reviewers criticizing him for jumping to conclusions.
For a while I've wondered about something related. Take a group of people with shared background and intellectual interests. Compare the ones who publish a lot, the ones who publish a little, and the ones who don't publish. How much difference is there between the group means on some hypothetical measure of self-confidence about one's own interpretations ('jumping to conclusions', 'intellectual arrogance', etc.)?
Any difference in those means wouldn't necessarily be causal. How much you write could instead cause your self-confidence in your own interpretations to rise or fall, for instance.
My experience is that the primary distinctions are (1) how obscure your specialty is, and (2) how often you go to conferences. Both of those things influence how often you are faced with pointed and in-your-face questions from peers who disagree with you, or are at least highly skeptical, and I think it's only that experience that teaches people to be careful about what they're saying even when they speak from a position of expertise. I don't think publication quantity per se is as useful, except insofar as it is a proxy for either of these.
Otherwise...it's an almost ubiquitous human failing to assume that if you are an expert inside region R, then you are also an expert inside R + dR, where the size of dR/R is contingent on your natural character and experience but is almost always greater than 0.
Anorexia carries a strikingly high death rate. The absence of professionally condoned end-of-life protocols for anorexics is a huge disservice to those of us with severe and enduring anorexia. It's one thing to believe that anorexia should never be terminal, in the same way that HIV should never be terminal. But it seems inhumane not to offer end-of-life care to individuals in the final stages. What is behind the medical community's lack of acceptance that anorexia can be terminal?
The paper proposes these characteristics of terminal anorexia: a diagnosis of anorexia, being age 30 or older, prior engagement in eating-disorder care, and consistent expression that further treatment is futile. Do you think any of these characteristics are unnecessary? What would you add to the criteria? The paper also stipulates that an individual must have a life expectancy of 6 months or less in order to receive medical aid in dying.
Do you think a terminal diagnosis is ever appropriate, or does it really indicate a failure of the treatment system to support complex patients, particularly those who are marginalized in traditional treatment? Could this diagnosis be weaponized against those who are noncompliant with treatment? Noncompliance shouldn't be viewed as a bad thing, but rather as an indication that the individual has a will to live agentically and that the treatment provided is failing them. Existing eating-disorder treatment was designed for young, cisgender, white women and is thus less effective for POC/older/male/LGBTQ patients. If people are noncompliant with or nonresponsive to a treatment that was never even designed for them, are they truly beyond help, or is our system broken?
It's also worth noting that in a specialized hospital it is possible to refeed and weight restore nearly every patient. As weight restores, the majority of medical complications cease. Anorexia is almost never medically terminal.
I am a PhD student. I have been through revolving doors of inpatient treatment and I truly cannot fight any more. I can no longer live with this disease and I cannot maintain the minimum nutritional intake for living. I am not willing to participate in recovery oriented treatment and I am no longer trying to prolong my life. I believe that any further treatment will at best only result in brief improvement and is unlikely to provide long-term quality of life. How should I best advocate for my right to die with dignity? What can we do to advocate for a professional consensus for terminal anorexia and patients' end of life rights?
You either have gender dysphoria or you don't. What next, you're going to create a specific word for people who aren't schizophrenic?
And notice that 'cisgender' is a label created and almost exclusively applied by people who don't actually identify with that label themselves. This is frowned upon in most other contexts, but the people who came up with the term 'cisgender' did so precisely to make it seem like gender dysphoria isn't a disorder, when it very obviously is.
Lots of cis folks use cisgender to refer to themselves. You see it in the wild all the time, if you live in certain bubbles.
Not sure who came up with it, but unless you know for sure I don't think the smart bet is trans-folk. There just aren't that many of them. More likely in my opinion to have come from the sort of ally who insists that everyone in the room announce their pronouns at the start of a meeting, whether or not there's any ambiguity for anyone present.
There are several words that describe people who don't have a disorder or symptom: "neurotypical," "able-bodied," "asymptomatic," "uninjured." Almost any word with the prefix "non-" or "un-."
Any time you need to contrast people who do have a condition with people who don't, you'll invent a word to do it. If I say "Autistic people often have more difficulty with social interaction than neurotypicals" or "Only able-bodied people should go mountain climbing" would you go after me with the same anger as people who use "cisgender"? Would it really make a difference to the culture wars if the official term was something like "nondysphoric" rather than "cisgender"?
These examples kind of prove the point. Those terms have only become popular very recently, and are primarily used by the same crowd as the "trans/cis/etc." people. Google Ngram confirms this for neurotypical, asymptomatic, and uninjured: all three words plateaued in the 1990s and then started growing rapidly around 2000. The word "neurotypical" didn't even exist before the '90s.
"Able-bodied" actually used to be more popular in the 1800s, although looking at the examples shows that's mostly political/legal/military usage, without the modern grievance-studies connotations.
Anyway, to answer your question, yes, I do have a problem with "Autistic people often have more difficulty with social interaction than neurotypicals." Better would be "Autistic people often have more difficulty with social interaction than other people." Words like neurotypical are profoundly impactful on the culture war, because they are literally the symbolic representation of an outgroup.
Words like these are formed in a kind of linguistic judo designed to divide people as much as possible. The trick goes like this: first, someone makes up a term for a concept, usually some sort of identity group. Let's say they had a valid reason to do so, and that this concept has a legitimate useful meaning. And this identity differs from the norm in some way, otherwise there would be no need for the term. Normally, the way you would describe people who don't fit into this identity is "non-X", "un-X", "normal people", or "other people".
However, you don't like just using the negation of the term like that, so you make up a new word for the concept of *literally everything except this identity*, and you try to make the word sound as symmetric as possible with the normal identity, e.g. cisgender vs. transgender, neurotypical vs. neurodivergent, abled vs. disabled. And now what you've done is two-fold. First, by giving both groups symmetric-sounding terms, you've put the abnormal minority identity on an even linguistic playing field with the normal identity, masking the reality of its abnormality and its small population relative to the norm. Second, you've just created a term that *literally* means "my outgroup" for the people in the aforementioned identity group. This gives them a word to rally against and foment hatred towards, inflaming the culture wars to unprecedented levels. It's really hard to build an internet sub-community solely based on hating "the normals" or "the non-mentally-ill", but it's easy to do for "the neurotypicals."
>It's really hard to build an internet sub-community solely based on hating "the normals" or "the non-mentally-ill", but it's easy to do for "the neurotypicals."
Spoken like someone who's never heard 4channers talk about "normies."
Also, if you are correct that having a word that doesn't use "non" or "un" is in some way significant for "putting them on a level playing field," why is that a *bad* thing? Why is it important for our language to constantly remind us that autistic people are unequal, abnormal, that they have the playing field sloped against them, or whatever metaphor you choose?
The conflict here is that you need cisgender only in the case where you've committed to referring to trans women as "women, full stop, no question". Otherwise you'd just say something like "designed for young white women, and less effective for trans women" or similar.
There are personality types (mine included) that hate this, because the logic goes: "Trans women are women, full stop, because they identify as such. But we need a word to differentiate them from natural-born women, because sometimes we are going to have to talk about the many ways they are physically and psychologically distinct; we will allow this ONLY when it's guaranteed to help trans people get something, or win an argument."
Where this gets really wacky (and most clearly a political/power thing) is you can use cisgender here to mean "Born as a woman, fits in the cluster of woman-things physically" because here it's seen as beneficial to trans people. If this usage works, they get personalized anorexia treatments. But when the same trans-concerned people talk about another instance where cisgender/trans distinctions matter in the same way (sports) they forget the word immediately and entirely - trans women go back to being women, full stop, with no differences at all.
I think if I saw this as a situation where truly everyone knew the distinction, and you could use "woman" for a trans woman with everyone knowing it was a politeness thing that wasn't meant to carry data, I'd probably be OK with it. But there's so much "we want the power to enforce language" stuff mixed in that it makes it really hard - most of the time when this comes up in my life, it's really clear that the person doing it just wants to prove they have the power to make me say something they think that I think isn't true, so they can feel they were able to make me bend the knee.
I've actually softened on this over the years - like, I'd probably at this point be fine with "woman/transwoman", since "transwoman" carries a distinct definition that doesn't have the "Listen, these are all 100% women, except where they aren't and someone might die, and then you can have a term that means 'woman' again so long as it's clear to everyone that our fist is still firmly clenched around your windpipe" baggage that "ciswoman" does.
Right. My problem with it is twofold. For one, it's creating a category distinction where there shouldn't need to be one. But even if I were okay with the category distinction, I also have a problem with the choice of the term "cisgender," which seems deliberately designed to be abstract, esoteric (cis- is mostly used in chemistry), and somehow symmetric to "transgender," as if they are just flip sides of the same coin. It's kind of like the word "gentile," which describes an odd amalgamation of very different kinds of people defined only by not being part of a religion that 0.2% of the world follows.
I would be more okay with a distinction of "normal woman" vs "trans woman", or even "biological woman" vs "trans woman". But "cis woman" vs "trans woman" puts two very asymmetric groups on an even playing field.
Interesting, what specific treatment for anorexia do you have in mind that was designed for young cisgender white women and does not work for POC/older/men/LGBTQ patients?
Not All Black Girls Know How to Eat: A Story of Bulimia
Admittedly, this is a book about bulimia rather than anorexia, but it makes it clear that a lot of people, both sufferers and professionals, assume that anorexia and bulimia are diseases of white women, which means that anyone else is less likely to be diagnosed or treated.
Realistically, no one can stop you if you are determined to die. But it is unreasonable to expect people in general to help you do it, when the only reason is that (from their point of view) you are not in your right mind and are misunderstanding the nature of your future and what is valuable. That is even more true if they care about you as a person, or even category of person.
I can readily understand that you want to take command of your life, and your destiny -- particularly if you have been involved in lots of medicine, or the law, both of which are notorious for taking lightly or even ignoring individual choice and viewpoint. Too often, and even with the best of intentions, they end up treating you like a case or a disease and not an actual real person. It ought to be better, but unfortunately like all institutions, these institutions are run by human beings, and human beings are not perfect, they screw up, all the time, even when they are trying their best.
In general most of us would strongly support an ambition to take command of your destiny (because we want that for ourselves). But we also generally draw a line when that command involves destruction of life -- that of others, but also that of yourself. It is very likely that if you were to find a way to satisfy your ambition that doesn't involve taking a life (including your own), you would find most people very much in support. It's 2022, you can be or do almost anything you choose, and there is an enormous respect for the right of the individual to be who he or she chooses. You can become a powerful lawyer (perhaps one who advocates for better treatment options for the anorectic), or you can be a granola hermit, live in a cabin without running water and commune with the birds and trees. You can become a painter and paint powerful images that uniquely explain what it's like to be inside your head (which would also benefit others trying to understand the anorectic mind), or you can sell yachts and spend all your dough on traveling to Thailand and climbing K2 and not talking about it with anybody. Any path you choose to create is yours more or less for the asking (and the required effort), including paths neither you nor I can imagine right now.
It is actually pretty common among human beings to predict the future poorly. For example, I have spent most of my working life doing something quite different from what I thought I'd be doing when I went to college. I am not married to the woman I thought was the love of my life when I was 25, and I'm glad of that. None of my children turned out the way I guessed they would, when they were born, or even toddlers, and yet they are all precious to me, I am proud of them and love them to bits. Many of the friends and activities I enjoy most right now I just stumbled into over the course of living, and I could never have predicted before they happened that they would happen, or that they would be important. The future is extremely hard to predict in general. I would never attempt to predict your future, even if I knew you extremely well. Perhaps you will die, perhaps this year, perhaps even within the next week or two. Or perhaps you will not, and you will become a person with extraordinary stories to tell -- a kind of person I appreciate more and more as I get older: the variety of actual lived experience and the insights people derive from them is better than anything even J. R. R. Tolkien or George R. R. Martin could dream up.
I have known two young women who were anorectic. The one I know very well, because she's family, survived. She is now in early middle age, has a husband who adores her, a couple of cats, and a well-paying job she loves in a career that is about a thousand miles from what she thought she would like, and what her parents thought she would be. She no longer lives near her family, and she has different interests and friends. If I ask her what made the difference, she doesn't really know. No treatment or intervention or pep talk or anything she read or heard seemed sufficient, she said they seemed all equally worthless. She just decided one day to do something different, because she could, because life offered more possibilities than death, and then because she had an iron will (not uncommon among anorectics) she made it happen. I wish I knew more than that, but I don't. (It's certainly likely if you yourself talked to anorexia survivors, and there are a large number of them, you would learn much more than I could ever know.)
Some years ago -- actually more like 10-12 years -- I read an article in a major East Coast magazine on people who had jumped off the Golden Gate bridge, which was a subject of great interest to me because when I was young a close friend of mine did just that (he did not survive). Two things in that story really stuck with me. The first was that a study was made in the late 70s of the relatively few people who survived the jump, and it was found (to the surprise of the investigators) that 94% never even attempted suicide again. The second was a comment made by one of those survivors, which perhaps gives insight into the phenomenon. I wrote it down, because it was so important to me. It was from a survivor named Ken Baldwin. He said: "I still see my hands coming off the railing. I instantly realized that everything in my life that I’d thought was unfixable was totally fixable--except for having just jumped."
I hope you find a way to a different path. We need you. You are an important person, how important we don't even know yet, because you aren't yet all you could be.
Here's an angle which might be relevant-- if good treatment for anorexia is possible (I'm really not sure), the standard for getting it-- very low BMI-- may be inappropriate. I've seen complaints that a person can be fat, and still eating so little that they're damaging themself, but they can't get treatment. The same might well apply to people of more average weight.
This seems downstream of more general "Right to die" advocacy.
That said, what would you want to see here? I can't really think of a good way for the medical community to support you here. I'm trying to say this as gently as possible, and honestly can't figure out a kinder way to phrase this, but if someone is choosing to not eat until they die, what kind of support can the medical community give here? Are you just thinking pain meds? Do you want an IV drip? Intentionally keeping a person alive while they starve to death seems really, really, monstrously cruel.
Well, if Ludex wanted to make a top level post objecting to right to die legislation I think it would be reasonable. Describing the activist's goals as "horrific and cowardly" is a little too "boo outgroup" for my taste, but I think Ludex would be well within his rights to do so. I just think doing so when the top level poster seems to be a suicidal person who is trying to get medical assistance in dying is unkind, unnecessary, and kind of in bad taste.
One often sees discussions about well-known existential hazards to the human race. But what about left field risks that nobody anticipated, perhaps a disastrous unexpected consequence of something almost everyone assumed would be a marvellous idea?
World Government would be one such example, in my opinion, should it ever be attained before there were flourishing human colonies throughout the Solar System and beyond. But the merits of that are far from generally agreed, and anyway this post is about something else.
Probably most people would agree that the World would be a better place if everyone's IQ was bumped up by, say, twenty points. No doubt that will soon be a realistic possibility with genetic tweaks to the unborn, and much in demand. But I think the opposite is true: It would be disastrous, and increase the level of strife and contention.
Aren't most terrorists, for example, besides the patsies they persuade to sacrifice themselves, better educated and smarter than the average Joe? What if everyone in society felt they were intellectually special, demanded to be heard, and became bitter if they were but one voice among the multitude? Highly intelligent people can be very quarrelsome and arrogant, whereas we lesser intellects (speaking for myself!) are mostly content with the status quo, and that means on the whole a more stable and peaceful society, instead of the opposite.
And don't get me started on intellectually enhanced talking pets. That would open up a whole new can of worms! :-)
There’s evidence that 25% of janitors score as smarter than 25% of PhD students when tested. A smarter population might be a bit restless in menial duties but many would be happy enough. The economy would perhaps be more egalitarian.
There seems to be a recurring pattern of you asserting things and then being hostile towards the idea of having to make any effort at substantiating them.
It's easy to find the source for that. No idea what else you are referring to, but you seem to enjoy drive-by comments about spelling or demanding sources. The last 6 of my comments have replies from you, none of them worthwhile.
This kind of posting is generally decried as sealioning. Anyway you are added to my personal blocking list, a list of one. And reported.
>Aren't most terrorists, for example, besides the patsies they persuade to sacrifice themselves, better educated and smarter than the average Joe? What if everyone in society felt they were intellectually special and demanded to be heard and became bitter if they were but one voice among the multitude.
"Special" is only meaningful in a relative sense. If everyone is "special", nobody is. Being bright (compared to today's standards) but no more so than anyone else likely wouldn't make you feel special. There may be a transition period where today's tasks are easy for the average person, but if even the high-IQ people are getting a bump then they're going to be operating on a higher level themselves and so remarkable today becomes unremarkable tomorrow.
> Highly intelligent people can be very quarrelsome and arrogant, whereas we lesser intellects (speaking for myself!) are mostly content with the status quo, and that means on the whole a more stable and peaceful society, instead of the opposite.
And I don't know where you're living, but lower IQ (than yourself) people are certainly not okay with the status quo. "Inequality" is a huge issue at the moment.
And your basic claim is trivially wrong. The most strife-ridden parts of the world are amongst the lowest IQ, whereas western Europe is extremely peaceful and is self-destructively tolerant. If sub-Saharan Africans had a mean IQ of 100 with a SD of 15, do you imagine there would be *more* e.g. ethnic or religious conflict there?
Mostly agree. We are breeding a population of smarter people. (Since elite schools select for smarts, and you've got a good chance of marrying someone you meet in school.) Smart coastal elites and all us dumb f's left in the middle. (Personally I'm happy living in the middle.)
The evidence is that assortative mating does exist, but that in general we are all getting dumber for a number of reasons: mutational load, and dysgenic effects caused by the educated classes having relatively fewer children.
Well, it depends what aspect you focus on. I could believe that assortative mating is stronger than in centuries past, producing a bubble of super-elites, but I take your point that said bubble is small and smaller every generation because smart people have fertility well below replacement.
I think the problem with terrorism and such isn't too much intelligence, it might be placing too much value on everyone making a large difference.
I've only listened to about half of this piece about The Hero's Journey, but it's got material about how it used to be that heroes were godlike, and then it became normal but rare to be a hero, and now everyone is supposed to be running their own destiny.
> it might be placing too much value on everyone making a large difference.
That would certainly be a large part of the problem, the more so because by then most of the low hanging fruit will have been picked. For example, every math problem solved and refined, every poem written, can never again be solved anew or written quite the same. The only possible novelties will be ever more specialised and arcane.
One could argue that people in future, feeling smothered by these past achievements, will do what they have always done, and just ignore them where possible. There are already literally miles of bookshelves groaning with worthy past literature most of which practically everyone is blissfully unaware of and will probably never be read again.
But even that won't be possible if instant recall, Google on mega-steroids as it were, one day makes all this searchable and available for instant comparison with supposed novelties.
Also, advances in AI may well have consolidated and entrenched one agreed sensible and sound world view about most things, so that dissenting opinions will have even less weight, and find it harder to make headway, than they would now.
There's a Spider Robinson story, "Melancholy Elephants" (1982), which makes the point, though I think he underestimates the importance of rhythm and microtonality for possibilities for new tunes.
I also think there are some interesting possibilities for increasing human sensory ranges to make new art possible. Still, there are probably limits.
Pressure on people to all have political effect is probably more dangerous than pressure to create new art.
I'm going to hazard that the former derives from the latter, and not vice versa. I doubt you can build resilient institutions out of fragile components (fragile people), notwithstanding the construction of bridges out of straws that beginning engineering students do for laughs.
Aren't most of our moral choices these days fucking phony? Like plastic straws or no bags at the grocery? None of it means shit in the scheme of things, yet we are supposed to pretend it does.
It gets hard to believe that any so-called moral choices have any reality behind them.
that doesn't mean there aren't plenty of potential choices of high moral significance, it's just that, as ever, high moral ground is tough and only for the few.
there are people founding U of Austin or putting careers on the line pushing against orthodoxy. or people who risk their lives uncovering Falun Gong organ harvesting. or just people who move to Taiwan because they care, or simply give a few thousand bucks to the Ukrainian military.
Sure, using a paper straw is pretty pointless, but there's no reason to generalize from that to other things that you consider real moral choices. Do you try to avoid harming other people, do you help those close to you lead more fulfilling lives, and so on... In general, the existence of BS doesn't mean everything is BS.
Decisions like straws or bags fall into the general category of 'retail consumption ethics', and I think your statement is broadly in that arena. People make many of those decisions largely out of tribalism, either consciously or unconsciously, with little actual impact to the real world.
But there is obviously a much bigger world of moral choices that do have a lot of impact, at least in a marginal sense, e.g. "should I cheat on my taxes?"
Isn't the decision to cheat on your taxes mostly about weighing the potential consequences of the action rather than a moral choice? I think most people would cheat on their taxes with a clear conscience if they 100% knew they would get away with it.
I think the main moral choice people make is: Should I cheat on my spouse? Yet I rarely hear that brought up when people get into ethical discussions.
Just because you don't evaluate something as a moral choice doesn't mean it isn't one. I agree most people probably think in practical terms more than moral ones, but that doesn't make the moral angle vanish.
As to spousal fidelity - my experience is vastly different. Between advice columns, /r/AmITheAsshole, etc., I've seen a ton of discussions around the ethics on that issue.
In some ways, that's really good! You don't have to make life or death decisions anymore, those things suck. We have successfully created a utopian society where the worst things we have to worry about are straws, instead of surviving the winter or if the black plague will get us.
No, the worst thing we have to worry about is being cancelled because we used the wrong kind of straw. Or maybe being killed in a nuclear war because Vladimir Putin had a temper tantrum after the Fourth Battle of Kharkiv, but for the purposes of the analogy, we'll go with straw-based ostracism.
In the Before Times, you risked being ostracized by the tribe because you e.g. carelessly let a pack of wolves get at the sheep. And it makes sense that you'd want to impose that sort of penalty on people who make that sort of mistake. Now, we've still got the "must ostracize people for endangering the tribe" impulse, but we direct it against people who use the wrong sort of straw, because that's all we've got. That sort of takes the shine off the "utopia" where none of your decisions can really hurt you.
Cancellation is a pretty big upgrade from exile, imo. At least when I'm cancelled I can still get food, shelter, friendship, and other necessities. Exile back in the day was seen as worse than death!
But yes, you are correct that we still have the urge to punish people, and do so even when the punishment is disproportionate to the crime. It isn't really relevant to the OP tho.
Sure, that can be bad I suppose, but imo it seems better to have fake problems we turn into real problems than just to have real problems in the first place. Besides, it's not really relevant to the OP's question about morals.
Because some entity outputs the string "suffering isn't important", should that make suffering/happiness axiologically neutral for *you*, as someone who experiences it?
Are you saying that you would choose to be kept alive (and survive) forever in an infinite torture machine?
Or do you mean that you prize evolutionary fitness above all else? In that case, would you let aliens inflict infinitely painful experiments on you forever, if in return, the aliens made sure you had more descendants than any other living organism?
It sounds like you do have a value system after all. You might say that your aversion to suffering is just behavioristic, without any value-making assumptions to it. But I doubt you would agree to be reprogrammed to seek out suffering as your goal, even if you were compensated for this with money. There seems to be something about the experience of suffering itself that you value negatively, apart from evolutionary fitness/etc. This seems hard to square with genuine nihilism.
It seems to me there are two main approaches to understanding the world/existence/reality. One is to assume dualism, which leads to a scientific and perhaps rationalist approach. This is the bottom-up view.
The other is to assume sensation comes first. This is the poetic approach. As the poet Octavio Paz writes: "Poetry is the testimony of the senses."
These different approaches don't necessarily contradict one another. There is no reason they can't fit perfectly together. Yet we are a long way from a unified theory of reality, so those two perspectives remain in conflict.
I tend to believe that the poetic reality is likely closer to the truth. Our senses are subjective yet also objective. What we feel/taste/touch/hear is real. What we think may not be.
I think we should dedicate more effort to understanding the world from the poetic view and less from the scientific view, which is too subject to fashion.
By bottom-up I mean the belief that every effect has a cause. The effect is at the surface and the cause is beneath. Whereas a poetic perspective focuses on sensation and experience and is agnostic to causes.
One problem is that all the hard, important poetic work that has been done can't be condensed, because it loses its value upon abbreviation. Science, OTOH, advances in the direction of simplicity.
So I wrote a novel that is supposed to promote Effective Altruism with funding from one of ACX plus grants that never got announced here. I've reached a point where it is fairly polished, and I'm looking for feedback, and also ideas about ways to get it the biggest audience when I publish it.
The idea seems incompatible with itself; the Effective Altruism stuff is completely undercut by the massive, ongoing invasion. I think you need to cut one or the other, which is going to mean major rewriting.
I would not recommend publishing anytime soon; you need more practice writing first. Shelve this one, get a few more stories written, get a better feel for pacing and exposition, and come back to it with more seasoned eyes.
I read a few pages but found the style of the soliloquies too exuberant and repetitive, as if you are trying to drum your thoughts into the reader's head! "High? Did I say the mountain was high? It was really, _really_ high" that kind of thing (quoting from memory).
IMHO you should try to cultivate a more sparing and easygoing style, and stand back and let the reader interpret your meaning more instead of trying to ram it down their throat!
Also I didn't like the effing and blinding in the descriptions. Maybe that is fine with your target readership, and they may even expect it. But, without wanting to sound like a prude, it seems to me gratuitous and off-putting.
Edit: You may find the following blog article about Effective Altruism interesting:
I haven't read your novel (may look at it later), but can I make a small point? Please don't start a sentence with "So", unless it is a logical consequence of the previous sentence!
Starting sentences with "So" for no apparent reason is the besetting sin of most academics. But to normal people it sounds insufferably pompous, as if the pronouncement they are about to give is unchallengeable wisdom "So be it .."
My biggest suffering during Covid was having to listen to medics and pundits, wheeled onto TV one after another to pontificate about the pandemic and, guess what, they almost invariably started every reply with "So"!
Seamus Heaney in his fairly famous 2000 translation of "Beowulf" departed radically from tradition in translating the initial Hwæt as "So." where it had always been "Lo!" or "Hark!" or some such. He said[1]:
"in Hiberno-English Scullion-speak, the particle ‘so’ came naturally to the rescue, because in that idiom ‘so’ operates as an expression that obliterates all previous discourse and narrative, and at the same time functions as an exclamation calling for immediate attention."
It’s also colloquial where I come from, in Ireland. In fact Seamus Heaney translated that first word of Beowulf (hwæt) as So. He felt that the other translations were too strong, that hwæt was more of a slight interjection, not the normal Hark!
He used so, because it was common enough usage to him.
Hmm interesting, yes, I did a bit of searching on the web and it does seem to be more a US colloquialism. British people, mostly young people and PR representatives and politicians (the latter two being much the same these days!), tend to ape American turns of phrase to try and sound trendy. But I wonder why academics, of all ages, are so addicted to it!
I can imagine someone in the UK saying the example you gave, without it sounding jarring. But it would most likely be in the middle of some story, where the "so" was indicating a consequence like "as a result" or "next thing you know" ..
Another example of an author successfully using novels to make ideological arguments would be Dickens. A more sophisticated but less successful one would be Trollope — who has Dickens as a character in one of his novels, portrayed critically.
I haven't read your novel, but I see two problems with using a novel to make an ideological point. The first is artistic. In my experience, no plot survives contact with the characters. If you are committed in advance to where the story will go you are not free to let the characters you have created act as those people would, so risk making them puppets rather than people.
The second is an issue of honesty — it's too easy to cheat. You can give the people who disagree with you bad arguments, the people who agree with you good arguments. You can arrange to have policies you disapprove of have bad results, policies you approve of have good results. You can thus make the case for your position look much stronger than it is. That's a good deal of the reason that, although I have written three novels and they are affected by the fact that I am both an economist and a libertarian, none of them is an argument for either libertarianism or economics. I leave that to my nonfiction.
In general I agree; however, Dickens works even though he makes his bad characters caricatures and his good characters saints. Maybe we are too far away from the era to worry about the misrepresentation of workhouse boards of gentlemen. Maybe they were decent folk, in general.
Sorry, I had to give up at around the part where the guy (I can't call him the hero, he's too damn annoying) is flying alongside a dragon. Oh, and congratulations on making *flying alongside a dragon* TEDIOUS, because your guy is so busy patting himself on the back about being smart and a rationalist and smart and an altruist and smart and did I mention he's really smart?
There is a way to do infodumps and lectures about your pet philosophy, and I'm afraid you haven't cracked it yet. Well, this is what first novels are for - write write write, produce bad work; write something else, write write write, it's still bad but you're learning; repeat until eventually you produce something that can stand on its own two feet.
Your guy is too busy *lecturing* about the free market and I can't remember what-all, then every so often you remember "Oops, I've left him naked on top of a giant mountain peak, better mention something about that". Right now, I don't care if he *could* be the One True Saviour, Because He's A 21st Century American, Goshdarnit! of this world, I would like him to be hit by a truck again. Or eaten by a dragon. Or something, because he is so tiresome.
"All these dumb chuckleheads couldn't figure out how to Do Good with their accumulated wealth, but since I am a Bay Area altruist, I am sooooo way smarter than them, I can fix this world easy-peasy!"
First I think that the concept of writing a novel to promote some ideas has a looong and respectable history. It is especially common in science fiction; H. G. Wells for example had a very clear political agenda in the majority of his novels, and more recently HPMOR has of course a very clear message! I don't see the problem. Yes, authors of fiction often have values that appear in their novels, and? For me it is a problem only if it is very "on the nose". Or it can be for readers who disagree with the author; for example, religious readers of His Dark Materials are frequently bothered by the very clear anti-religious messages of the novels. But then what?
I also disagree with the evaluation of the quality of your writing itself. I love great writers, those who have incredible prose, who are able to evoke in a few sentences a striking scene that will remain in the readers' memory... and I also love writers who have a classic writing style but have an interesting story to tell. For me, you write well, your descriptions are clear and telling, and yes, of course, you can improve in writing quality, but what you have written seems to me already of a quite sufficient quality for a novel that could be very enjoyable to read.
Honestly, it's not very good. The prose is competent but no more than that and almost nobody wants to read a novel that doesn't have way, way above-average prose.
You have smarts and talent, but it isn't in novel-writing.
The concept of writing a novel to promote a Movement is pretty ridiculous. Harriet Beecher Stowe pulled it off, but I can't think of another successful example. Can anyone?
Ayn Rand managed to create a movement, but I certainly wouldn't call her fiction work "good". Her essay collections are much better, but very few people want to read essays.
I was going to say Ayn too. And I loved her when I was about sixteen, so I would dispute good. The Clansman could also count, though maybe only as a bank shot. It was the novel on which Birth of a Nation was based.
I'm currently working on my first work of fiction, and I am very sensitive to feedback. So, keep that in mind when I say that I mostly agree with Previous Jack. Some thoughts:
- Plus one to his point about writing a piece with an explicitly political/ideological purpose. That is an incredibly high bar to clear.
- Double spacing would dramatically improve the readability.
- Your writing reads very much like someone who is trying to write. I don't mean that to be discouraging. I see a lot of good pieces and parts - you clearly have the chops to make something good. But, as someone who is currently trying a lot of different voices on for size, I can recognize you as someone who is adopting a voice.
Have you ever tried doing a podcast? It's kinda the same - "I'm a good conversationalist, so obviously I'll be a good podcaster." (No, no you won't.) Your writing has that 'this is my first podcast' feel. It takes a lot of work for your writing to feel natural. I learned that firsthand with my book review (Society of the Spectacle, my first piece of public writing ever), which was very polarizing mostly due to stylistic/rhetorical choices that distracted from the points I was trying to make. Scott makes it look easy. It is not.
How do you make your writing feel natural? I'll let you know if I discover the secret (after I make a lot of money selling webinars). But my first piece of advice would be to try writing some of those passages as if you were a different person, or in a different mood, or from another perspective, or in a different tense.
- Lastly, you are doing yourself no favors by announcing what you are doing and why. If you say "I wrote an EA novel", pretty much everyone is gonna assume that it is bad from the jump. Same for any movement novel. You are digging yourself a hole and affecting your feedback because of that.
I have had that same problem more generally as both a writer and a critic. When my friend wrote a novel, I noticed that I was judging his work in a different way than I would judge a work by my favorite author. With a proven professional, people assume it will be good until something jars them out of the reading experience. With an avowed amateur, readers question every sentence and every word until they get hooked despite themselves.
Point being, when you are trying to solicit meaningful feedback, try presenting yourself and your work differently. You might be surprised by how the criticism changes. Who knows? Maybe if you send this exact same draft to a different group of people with a different introduction, you might get a different reaction.
As Previous Jack said, you have smarts and talent. Some people straight up can't write, and you definitely don't fall in that category. You *can* do this, and soliciting feedback and iterating is the right way to go about it. Keep working, we look forward to seeing the next draft!
I still agree mostly with Jack. There are quite a lot of good things about your novel - it's interesting, has some good ideas, an engaging story etc etc. Unfortunately novels get to be judged the other way round - from the top down, with reviewers listing the things that grate or are clunky or lack plausibility. And that always comes across as harsh, like Jack Wilson's comment, which I sort of disagree with - your novel is a bit better than that.
Specifically I agree that to a first approximation it's impossible to use a novel to sell an ideology. Yours is as good an attempt as many, but there's a real limit to how many people (apart from the already signed up) can bear to listen to missionary zeal.
A couple of things I found odd - the non-profane language all the way through, apart from one character who says 'Fuck' all the time. Fuck this, Fucking that... Yes, I know it distinguishes him from his other half, but there may be less clunky ways to do that. And the line 'Fuck. Fuck. Fuckity. Fuck.' isn't punctuated the way someone would say it, so of course people are going to stumble over it.
Also (especially given the language above) you have for some reason avoided any dalliance with romance at all. It seems odd - I know it can be difficult to write about, but it left me feeling something was missing.
If I'm honest, my curiosity was nudged by wondering what a defence of EA might look like to someone like me who is profoundly anti-EA. Obviously we could argue all night and not make the slightest progress... so I'll make just the one point.
Right at the beginning of chapter one you declare -
> "You should always consider the possibility that you are wrong, and making a mistake"
This is something I hear frequently in EA - and broader rationalist circles. And yet I have a profound feeling that at no point in the novel have you questioned your ideology. The fundamental beliefs (and I would say faith) are simply not up for discussion or consideration. The uninitiated 'aliens' are just people who haven't seen the light yet and so their naive objections are feeble straw men.
Of course if you were really open to thinking you are wrong - and making a terrible mistake - you wouldn't be writing a novel trying to convince others. But the plonking of that dictum right at the beginning of your book jarred quite a lot, because you've clearly made up your mind and 'Considering the possibility that you are wrong' isn't something you give the impression of doing at all.
I enjoyed the story and particularly the ending, except for the last few sentences. Even if I was a big fan of EA, I think I'd have still thought that it was too much spoonfeeding. As if at the end of Animal Farm Orwell had said "Oh, btw I think totalitarianism is really really bad...". You've made your point!
I haven't read your novel - wild horses couldn't pull me in the direction of a novel that was pro EA, but I think Jack's advice is spot on. Really - there's a lot of insightful stuff in there. And yes, above all, keep writing.
A discussion about adding voting restrictions came up on a different site a while back, and among more foundational arguments against it I remember thinking "we already have voting restrictions, it's called children," and it suddenly occurred to me; why DO we restrict children from voting? The only arguments I can think of in favor of restricting children from voting are:
1. They aren't well informed, which is true for most adults as well
2. Their parents would pretty much just vote for them, which seems like a feature; if a mother of four gets four more votes than a bachelor I'm fine with that.
3. Their parents or other authority figures would exert too much control in trying to get them to vote a certain way. I'm pretty sure this is already pretty bad, since everyone already knows they're going to grow up and vote in the future, but maybe it would be problematically worse. Campaign ads during Looney Tunes.
4. The system couldn't handle that many new voters, which if true would need fixing in the long term anyway.
5. edge cases involving newborns, which can be avoided by making the children be able to write their name or something.
The upside would be getting people to vote in the first year they want to, whichever year that is, and not giving them the sense that their input into important events is completely excluded, which seems like it could either convince people to never vote, or radicalize to "make up for lost time".
Should children be tried as adults whenever they are charged with a crime?
I was smarter as a child than the average adult criminal is, so unless you want to try dumb adults as "children", then there's no reason to have different criminal standards arbitrarily based on age.
Also, do you not understand how awful it would be to have people trying to influence children to vote for their party? I'm not saying it would be particularly effective, but the possibility for real harm exists.
>which seems like it could either convince people to never vote
That seems extraordinarily unlikely. It seems much more likely that people who don't vote do so because they're uninterested in politics, and children are much less interested in politics than adults, so it seems unlikely they would have voted as kids.
>or radicalize to "make up for lost time".
Again, very unconvincing. A majority of people do not become radicals, which makes 'not being able to vote' a poor explanation, and since radicalization is so rare, it seems like the sort of thing you were almost bound to become anyway.
You assume that it would be the default that children are allowed to vote, and that we need some sort of justification to deviate from this default.
But we can also see it the other way: by default, children do not have any freedom of action. They are not allowed to vote, yes. But they are also not allowed to make contracts, to decide on their medical treatments, to decide where they live, and so on.
This goes pretty far. In Germany, children are not allowed to buy anything without parental consent until they are 12 (because every transaction is a contract). If they want to buy a toy from their "own" allowance, and the parents don't want that, then the transaction in the shop is invalid.
So in general, children get such rights step by step between the age 12 to 18 (ages vary from country to country).
So by default children cannot vote, and the only question should be when exactly they obtain that right.
In practice, if a child walks into a shop alone, then the shop can sell a toy to the child, assuming that the parents consent with that. If the parents at home are happy with that, nothing else happens.
But if the parents at home don't like that their child bought that toy, they can go back to the shop and undo the transaction. (Money and toy are returned.) Legally speaking, the purchase then has never happened, because the child was never able to make a binding contract. Principles like "pacta sunt servanda" (contracts are binding) do not apply to small children.
EDIT: A very quick google search suggests that similar rules apply in many US states and probably in other countries as well.
The American version (as I was taught in my high school Business Law class) is that minors are entitled to return anything they purchased, regardless of the store's return policy, as any contract entered into by a minor is voidable at the discretion of the minor.
It seems to be the same in (some states of) the US:
"Under state laws, parents of nonemancipated minors can void purchases and other contracts their children have made without adult permission, especially those involving face-to-face transactions, where sellers are in a position to know or suspect they're dealing with an underage consumer"
It seems that there are exceptions, and the details are complicated. From
>1. They aren't well informed, which is true for most adults as well
Agreed.
>2. Their parents would pretty much just vote for them, which seems like a feature; if a mother of four gets four more votes than a bachelor I'm fine with that.
This is still true for the majority of adults if you just change "parents" to their authority figures: their favorite celebrity, politician, etc. The vast majority vote for people with their mid/hindbrain, instead of voting for well-thought-out policy with their forebrains. This is no different than when literal children do it wrt their parents.
>3. Their parents or other authority figures would exert too much control in trying to get them to vote a certain way. I'm pretty sure this is already pretty bad, since everyone already knows they're going to grow up and vote in the future, but maybe it would be problematically worse. Campaign ads during Looney Tunes.
This is no different than campaign ads during Game of Thrones etc. Staring at a box watching stuff that didn't happen or stuff that doesn't matter is childish no matter what. Just because they inject sex and violence for the big kids doesn't make it any different.
>4. The system couldn't handle that many new voters, which if true would need fixing in the long term anyway.
The system handled the 19th Amendment fine.
>5. edge cases involving newborns, which can be avoided by making the children be able to write their name or something.
There are also edge cases involving people in comas.
Overall, there is no reason to not allow children to vote when the current ethos amounts to begging the worst examples of homo sapiens over 18 to please cast their very important, very well informed vote. Many people on this forum live in a bubble of sorts -- most people are 20+ IQ points lower than you which is A LOT. They can hardly read based on tests like those from the famous Robin Hanson post. They are basically not capable of doing anything but their mid-low 5 figure highly supervised jobs. They cannot lead, they cannot learn much, and they cannot think. Most people can't learn algebra -- just talk to teachers in average school districts. Many students are pity passed with Cs through high school math and never get it. These people are so far from being capable of having any sort of informed view on 21st century politics that you might as well let children vote. In fact if rationalists are about 120 IQ on average, the typical rationalist was probably as intelligent as the average adult at 12 or 13 years old. They have already committed a crime against you by not allowing you to vote in middle school.
Voting confers legitimacy if it is done by the free and the equal. Children are not free - in the sense of being able to unilaterally make life decisions and enter into legally binding relations - which, as you mention, eventually boils down to giving their *parents* more votes, hence failing the equality prong.
Anyone who finds themselves on the losing end of either the "unequal" or "unfree" aspects of the system is fully justified in denying its legitimacy and undertaking whatever steps might be found fit or necessary to free themselves from its rule.
Most children have close to zero knowledge of and interest in politics. So their votes would be completely uninformed and unmotivated, besides barely understood ideas put in their heads by adults around them. These votes would thus discredit democracy and debase political discourse even more than the bear pit it already is!
In the UK two hundred years ago, it was illegal for teachers to teach children history, let alone politics, more recent than twenty years in the past. Any teacher with the bright idea of teaching what we would call current affairs would soon find themselves arrested for sedition! But, as with sex, total ignorance of politics and deliberate isolation from it as children didn't seem to affect most people's interest in it as adults (even though only property-owning males had the vote back then). So maybe it wasn't such a bad approach :-)
Could you expand on your "illegal to teach history" factoid? Two hundred years ago, most schooling in the U.K. was private and, I think, essentially unregulated. What laws or precedents would prevent a teacher from telling kids about recent history or politics?
I read it years ago, and forget which book. But there is probably a reference to it in this article: https://www.jstor.org/stable/4285072 (It'll cost $50 to buy though!)
Most "adults" have close to zero knowledge of and little significant interest (some get into it like sports but never understand it) in politics. So their votes are completely uninformed and unmotivated, besides barely understood ideas put in their heads by high IQ agentic adults around them. These votes thus discredit democracy and debase political discourse even more than the bear pit it was 200 years ago!
Perhaps it should be illegal for professors and journalists to teach "adults" history/politics more recent than twenty years in the past. Sadly most journalists and many professors think teaching what we call current affairs to these "adults" is a bright idea. Maybe they should be arrested for sedition!
I think the idea is that they have little intrinsic interest or capacity to think beyond what they're told. So children would be voting for what "adults" tell them to want, which mirrors how "adults" vote for what the actual Adults In The Room offer them.
1. The one that actually probably has the strongest effect: parents who actually have children on the whole think letting their children vote would be a nightmare
2. The one that actually means the most: letting children vote is probably very bad _for the children_. I fucking hate being advertised at as an adult. At least when you're a child it's only about your parents money and it's gated by your parents.
3. Pragmatically, children cannot reliably get access to polls, and we're not moving away from physical+mail voting anytime soon
She basically came to many of the same points (but didn't even bother saying the children should be able to write their name - if they can't write their name, then they're probably going to cast an invalid vote anyway, which is fine).
One objection not addressed in the article is that individuals under 18 are treated differently in the justice system. If they are considered less responsible than those over 18 for stealing, then maybe they aren't responsible enough to vote.
As regards that objection, it is worth noting that it's fairly common for teenagers to be tried as adults for more serious crimes, though I'm not sure of the precise criteria used.
I think at #1 you're mistaking the important aspect of being "well informed." Voting is not meant to be rocket science, you are not asking everyone to be an expert on some abstract social issue, should we send humans to Mars or not? Devalue the dollar or not? et cetera.
What people are expected to be very well informed on is their own interests, so that when Congress proposes to, say, tax the bejeesus out of gasoline because EVs are cool and Gaia loves them, each individual voter will know very well how that would affect him or her personally, and can send that feedback (via a vote) back to the government. It's best to think of voting as an information-gathering mechanism, where we collect everyone's opinion on how X or Y will affect him personally, and funnel the results back to everyone who wants to know. The fact that it can result in power changes is mostly because we can subsume the entire issue of feedback from The People into the simple question of: who should make decisions on their behalf? From that choice all else follows. So instead of voting on every issue under the Sun (although living in California one feels we edge ever closer to that madness), we just vote on who should *decide* issues.
From that point of view, it's easy to see why we exclude children: they don't understand their own interests, pretty much by definition. They'll eat ice cream for breakfast, lunch, and dinner, fail to dress warmly in cold weather, and cross the street without looking. So asking them their opinion on how X or Y will affect them is useless -- they won't answer accurately.
Of course, some adults won't answer accurately either, but that's on the margin, and usually has to do with very hypothetical changes, like how would colonization of Mars affect you? rather than more realistic changes like how would $8/gal gas affect you? Of all the kinds of expertise people have, the one we can most rely on is expertise in knowing what they like and don't like.
>From that point of view, it's easy to see why we exclude children: they don't understand their own interests, pretty much by definition. They'll eat ice cream for breakfast, lunch, and dinner, fail to dress warmly in cold weather, and cross the street without looking. So asking them their opinion on how X or Y will affect them is useless -- they won't answer accurately.
If they aren't literally eating ice cream for breakfast, lunch, and dinner they might as well.
It's hard to exaggerate the severe failure that being obese is in 99% of cases (I am excluding stuff like cancer treatment making you fat). 99+% of obese people should basically be told what to eat by a guardian-type figure. In history, they were, and that is why they weren't fat (in combination with less food being available). They were serfs and slaves for a reason. If they can't even make good decisions for their own body, they shouldn't have a vote that's equivalent to mine, or else you might as well let 10-year-olds vote too. You might say children are worse, but children over the age of 5 or so can actually take care of themselves at least as well as an obese "adult". There are many recorded cases of such children surviving in the wild as ferals, being homeless, etc. Children have self-preservation instincts and enough knowledge to at least match the performance of a 350 lb 35-year-old. The gap between "adults" and children is much narrower than most people presume, because they have comically low expectations for children and a comically high estimation of the ex-serfs, who lived and evolved as children for 1000s of years. The biggest gap is between real adults and serfs/children. Then you go from people who can run billion-dollar enterprises down to people who can't properly feed themselves without strict external discipline from a master figure, whether it be a lord, a parent, or an owner.
P.S. Your distinction between normative politics and the need for expertise is messier than you think. It would be nicer if it worked along the lines of "everyone knows what is in their best interests, and delegates to experts to work out the details," but this is not the case. For example, if people wrongly think that having a ton of Mexican immigrants will improve the economy, or specifically their own wealth, because they fundamentally do not understand stuff like HBD and economics, then they will vote for someone who will let in immigrants and hurt them. So they hurt themselves due to lack of expertise. You need to understand many things at a high level to know what is in your best interests, especially in this complicated world. Even your example, taxation, can get economically complicated quickly, such that most people, even including me (I'm not yet as educated on economics as I want to be), won't really know what is going to happen under a new tax policy, or whether they will really end up with higher utility or not. In fact, basic microeconomics often makes the point that most people don't understand that there is effectively no difference between a sales tax and a production tax. Most consumers would prefer the latter, but really it impacts them in the same way. They can't make an informed vote without knowing this, and it only gets more complicated.
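The sales-tax vs. production-tax equivalence mentioned above can be illustrated with a toy linear supply-and-demand model. This is a minimal sketch with made-up numbers (the parameters `a`, `b`, `c`, `d`, `t` are illustrative, not from any real market): a per-unit tax yields the same prices and quantity whether it is nominally levied on buyers or on sellers.

```python
# Sketch: tax incidence with linear supply and demand.
# Demand: Q = a - b * (price paid by buyer)
# Supply: Q = c + d * (price received by seller)
# A per-unit tax t drives a wedge between the two prices; which side
# nominally pays it does not change the equilibrium outcome.

def equilibrium(a, b, c, d, tax_on="seller", t=0.0):
    """Return (buyer_price, seller_price, quantity) at equilibrium."""
    if tax_on == "seller":
        # Buyer pays p; seller keeps p - t. Solve a - b*p = c + d*(p - t).
        p_buyer = (a - c + d * t) / (b + d)
        p_seller = p_buyer - t
    else:
        # Seller receives p; buyer pays p + t. Solve a - b*(p + t) = c + d*p.
        p_seller = (a - c - b * t) / (b + d)
        p_buyer = p_seller + t
    quantity = a - b * p_buyer
    return p_buyer, p_seller, quantity

seller_case = equilibrium(100, 2, 10, 1, tax_on="seller", t=6)
buyer_case = equilibrium(100, 2, 10, 1, tax_on="buyer", t=6)
print(seller_case)  # same buyer price, seller price, and quantity
print(buyer_case)   # as the seller-side case
```

With these numbers both cases give a buyer price of 32, a seller price of 26, and a quantity of 36, so the split of the 6-unit tax depends only on the slopes of the curves, not on who writes the check.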
The idea that obesity is entirely based on diet is contested. I think there's a very good argument that our current issues with it as a society are primarily due to environmental contaminants.
lol, for the record, are you fat? I'm skeptical of their claims because of the law of conservation of mass and energy. I am more open to the claim that certain chemicals, including junk food, have made the low-agency more likely to overeat than in the past, but this point is moot. The proximal cause is still their constant diet of soda, Doritos, and McDonald's. If you show me a study that says that the obese present with diets containing a normal amount of calories, then I will change my view. I tried looking for a study like this and couldn't find one, but maybe my keywords were wrong.
Also your link is written by a somewhat inadequately learned individual, just wanted to point out something I noticed:
>Kitava is a Melanesian island largely isolated from the outside world. In 1990, Staffan Lindeberg went to the island to study the diet, lifestyle, and health of its people. He found a diet based on starchy tubers and roots like yam, sweet potato, and taro, supplemented by fruit, vegetables, seafood, and coconut. Food was abundant and easy to come by, and the Kitavans ate as much as they wanted. “It is obvious from our investigations,” wrote Lindeberg, “that lack of food is an unknown concept, and that the surplus of fruits and vegetables regularly rots or is eaten by dogs.”
His overall contention, of course, is that some contaminant is making Westerners phenotypically different from this isolated tribe, which evolved under conditions of plenty. The idea that a cold-winter race might tend to store more fat for genetic reasons (not storing fat, e.g. throwing away excess energy, does not violate conservation of mass and energy; magically putting on weight from energy you didn't consume does) is evidently lost on this writer. He is inadequately learned in biology to be writing on biological phenomena.
He also seems really biased towards excusing fat people for their horrible diets. It is possible that stuff like lithium exposure, microplastics, and certain genes cause your body to store more excess energy from your calorie surplus than it otherwise would, holding your diet constant. It would be highly preferable for fat people if they could simply live the eternal Marvel movie of constant eat-whenever-you-want, whatever-you-want, without any consequence, at least any visible and embarrassing one, for their behavior. Nonetheless, BMR is still about 2000 calories per day. https://www.nejm.org/doi/full/10.1056/nejm199212313272701
If you just eat your expenditure, you won't get fat. It's that simple. And evidently the toxins aren't bad enough that they're reducing expenditure to low levels somehow. It's 2000 a day for obese people in that study. So these people are fat because they eat too much.
Personally, I gain fat. I'm not like those islanders. I gained about 10 lbs a year from high school to the beginning of college. Stayed 6 ft tall and went from 145 lbs to 195 lbs. I was overweight, and if I hadn't changed my diet I would have been obese in another few years (I forget where the cutoff was). I didn't feel like my diet was that unhealthy, but whatever. I changed it, lost 30 lbs, started exercising, got into fitness, and eventually exercised enough to eat 2800-3000 calories a day at parity. And I was eating a pretty good diet before all of this compared to what I saw most people eating. Maybe without microplastics I never would have gained the weight, but who cares, I adapted to my environment and you can too. Fat people will not be excused for their gluttony.
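The weight-gain arithmetic above can be sanity-checked with the common (and admittedly simplified) rule of thumb that roughly 3500 kcal of cumulative surplus corresponds to one pound of body fat. A minimal sketch, treating that constant as a back-of-the-envelope approximation rather than real metabolism:

```python
# Back-of-the-envelope energy balance: ~3500 kcal of surplus ~ 1 lb of fat.
# Real metabolism adapts, so this is only a rough first-order estimate.

KCAL_PER_LB = 3500.0

def daily_surplus_for_gain(lbs_per_year: float) -> float:
    """Average daily calorie surplus implied by a given yearly weight gain."""
    return lbs_per_year * KCAL_PER_LB / 365.0

# Gaining ~10 lb/year (e.g. 145 -> 195 lb over 5 years) implies a
# surprisingly small average daily surplus:
print(f"{daily_surplus_for_gain(10):.0f} kcal/day")  # prints "96 kcal/day"
```

The striking part is how small the number is: a sustained surplus of under 100 kcal a day, about one cookie, is enough to produce that kind of multi-year weight gain under this model.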
"I'm skeptical of their claims because of the law of conservation of mass and energy."
Nobody denies that, but the amount of food you eat depends on the amount of hunger you feel.
The amount of hunger you feel, as well as the amount of calories you burn through unconscious processes, are obviously regulated by mechanisms that are outside your conscious control, but are part of your body, and therefore can go haywire.
I'm 5'3", 140 lbs, so no, not fat. On the edge of overweight but always in healthy range on BMI charts. Out of shape currently, but used to cycle 50-100 miles a week, too, when I lived in a more bike friendly place. Your uncharitable bias against fat people is noted, though.
I believe in the environmental contamination explanation for the obesity epidemic based on the series I linked you to and other articles I've seen, including the one linked in this open thread. I didn't believe it before but they made a very compelling argument. On an individual level obviously diet and exercise make a difference, but there are a lot of factors in the obesity epidemic on a societal level. If you think the problem is gluttony, you'll have to explain why even rich people in the past, with the ability and drive to be gluttonous, weren't nearly as obese as modern day gluttons, if they were even obese at all.
I also find the idea that serfs etc weren't gluttonous because their lords controlled their diets is absurd. But still, you have to explain why the lords weren't obese gluttons themselves.
>If you think the problem is gluttony, you'll have to explain why even rich people in the past, with the ability and drive to be gluttonous, weren't nearly as obese as modern day gluttons, if they were even obese at all.
Simple, superior moral fiber.
As for the rest,
"The most significant change in per capita caloric consumption over the past century occurred between the 1950s and the present. Adjusting for loss, it is estimated that average caloric intake increased from 1,900 kcal per capita in the late 1950s to 2,661 kcal per capita in 2008, representing a 761-kcal increase over 58 years. The bulk of the calorie increase (530 kcal) occurred from 1970
Not to mention, way more low-IQ people are obese, and very few college professors, for example, are obese: 14% of doctors and professors were obese, versus 28% of high school teachers and 48% of nurse assistants (a tier below RNs) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4681272/
>1) The idea that feudal lords paternalistically micromanaged what their serfs ate is...interesting, to say the least. Whatever their lack of legal rights, on a day-to-day, sociological basis serfs generally managed their own lives and economically supported themselves, making their own decisions about how to grow food on their allotments.
You're straw manning me to make me look stupid. I was alluding to laws restricting serf hunting and fishing and their generally restricted diets. Many people throughout history have been enslaved and they are generally fed by their masters. Urban Roman and Greek slaves, black cotton picking slaves, etc were fed by their masters. Serfs and slaves were not allowed to eat to obesity, through a combination of legal restrictions and production restraints of their times. These people were bought and sold, they were the property of the masters, where they lived, what they did for a living, their religion, often how many children they were to have, and yes, to a large extent their diets, were determined by their masters or lords.
>2) Support for increased immigration on economic grounds is substantially *higher* among more-educated people.
You seem to have reading comprehension issues (my priors are high for this since the vast majority of the population does according to tests). I never claimed anything contrary to this point. And "educated" is doing a lot of work here. The average college graduate from 10 years ago had an IQ of 108 per pumpkinperson, and I have heard this has declined. I estimate the average rationalist to have an IQ of 120. So there is already a big gap between your expected IQ and the intelligence of an "educated" person. Elsewhere in this thread I have discussed how journalists and professors function like people worry how parents would were children given the right to vote. For various reasons, susceptibility to certain "parents" for "adults" may not decrease monotonically as IQ increases. But the overall effect is there despite education level https://www.pewresearch.org/politics/2018/06/28/shifting-public-views-on-legal-immigration-into-the-u-s/
>What people are expected to be very well informed on is their own interests,
An interesting use of the passive voice. You might expect people to be very well-informed on their own interests, but I don't.
If I did think people voted based on an accurate assessment of their own interests, I'd expect them to vote randomly or not at all, since the odds of your vote actually changing anything of importance are so minuscule as to not be worth the effort of identifying the candidate that benefits you the most.
"so that when Congress proposes to, say, tax the bejeesus out of gasoline because EVs are cool and Gaia loves them, each individual voter will know very well how that would effect him or her personally,"
I don't think that's the standard at all. I don't think most people even know what Congress is proposing, they just vote for the letter by the person's name because their friends say they should.
Hardly. I certainly did in college and graduate school, but I've been in the real world for 45 years now. I know policemen, parole officers, people who have been convicted of petty theft, people who've spent years in AA fighting addiction, teachers -- both good and bad, at all levels from kindergarten through professional school -- physicians, surgeons, gardeners, people who do drywall work, salesmen, illegal immigrants both young and old, people who are rabid socialists and others who fly MAGA flags. It's kind of what happens to you if you live long enough.
How many specific people have you talked to for more than, say, 5 hours? You don't really know the cop who pulled you over, or your kid's teacher who you talked to for 20 minutes during parent day, or your gardener who you make small talk with for a few minutes when he comes and goes.
In broad areas of philosophical discourse and intellectual debate, sure. They have real lives to live, after all. But on the areas in which they make their contribution, and which affect them every day, I think not.
I find that if I spend 30 minutes talking to the HVAC guy when he's working on my A/C, or the ICU nurse when she's assessing my dad post-op, or even the gardener when he's deciding whether to prune the roses this week or next, and by how much, my conclusion is that almost all people know a great deal of pretty subtle detail about some area or other, however they earn their living, and are pretty smart about deploying it.
Doesn't mean they can write brilliant prose paragraphs about the theory of utility monsters, of course. But they might be able to fix a jet engine or grow a bushel of corn, which seems useful.
There's a good reason for that. The standard of proof for statements by politicians of what happens to those blue-painted people over there yonder is going to be much lower than statements about what is happening to YOU and people you know. If Governor Newsom tells me his latest nostrum will be good/bad for me, then I am going to be pretty dang critical of his prognostication. I know myself well, I know my situation and all its complexity well -- I'm an expert on that stuff, and therefore his theories will be pitilessly examined and the odds that I'm likely to just accept it okey doke if you say so bub are nearly zero.
On the other hand, if he proposes that something will be good for Native Indian casinos or pot growers in Ukiah -- well, I have a lot less expertise to deploy. I *can't* be as mercilessly critical, and I'm much more likely to accept or reject the hypothesis on the basis of tribal loyalty ("A Democrat proposed it! It must be noble/selfless/good/corrupt/deceptive/evil!"), or because I heard some talking head I admire rant on it during a 30-second TV spot, or 140-character Tweet, or because I did/did not have my morning bowel movement on schedule.
In short, I think politicians are much better able to manipulate voters through tribal hominid instincts if they stick to topics on which relatively few people have any kind of direct experience, and of course they know this. Religious crusades, foreign adventures and wars have been a great traditional field for such demagoguery, and the environment and social justice for groups and lifestyles about which many of us know very little are pretty much the modern equivalent.
Ummmmm... voting is supposed to reflect whatever the individuals voting want it to reflect. So that their preferences get expressed in policy.
Also, yes, the information gathering mechanism is one part of the value of democracy, but the much bigger and more significant one is that it disperses power amongst the people.
It doesn't disperse power at all. There are about 800,000 people in my Congressional district. That does not mean I have 435/800000 of a vote when Congress considers any bill. Power is concentrated in the hands of people who actually wield it, which is not me. What I can do is (help) choose the people who will wield power, which is surely a form of power, indirectly, but as I said above the main reason for having the whole representative republic thing is not so much that I can exert power, because I don't, and not even so much that one man doesn't exert very much power (because we have a President who exerts enormous power), but so that I can (help) choose the right people to exert the power. The value is in the fact that the people I choose will represent (in theory) my interests, that is, it is the communication of my interests to the governing body that is important here.
That is, after all, why that person is called my "representative." He is presumed to "represent" me -- to speak for me, to be aware of my interests -- and that is the purpose of the election, to convey that crucial information.
The franchise gets extended when the party in power sees an opportunity to gain a reliable voting bloc.
There have been efforts in many countries to lower the voting age to 16 because that would tend to give a reliable voting bloc to the left-of-centre party. But if you go any lower than that then the effort required to pander to them is probably higher than the potential gain. No politician wants to give speeches aimed at the 5-9 year old demographic.
In the USA, we also restrict felons and non-citizens from voting. I think the idea overall is that we want to restrict voting to people with sound judgment who are committed to life in the USA. At the same time, we have to be really careful and conservative about what rules we use to draw that line.
Hence, we go with three criteria that are as legally objective as possible: age, felony conviction, and citizenship. By keeping these criteria steady, we avoid the problem of parties redefining voting criteria in favor of new potential constituent groups (not to get culture warry, but US examples might be Republicans restricting voting ages upward, or Democrats expanding voting rights to green card holders).
These might not be objectively the best way to approach it, but I think they capture what would be seen by many people as common sense.
I also disagree with the felony conviction one; it seems like a crutch to lock in laws against a shifting mindset. People with felony drug convictions are no longer able to vote to legalize drugs, on the grounds that drugs are illegal. Surely if the laws are just, then the number of prisoners can't hope to outweigh the ordinary citizens, and there's no loss in letting them vote.
Citizenship's a different thing, you should absolutely have to commit to be subject to the result of the vote before you can vote, which foreign nationals aren't doing.
I feel obliged to note that there's a way to route around this, basically to the tune of "disenfranchisement for felony is okay, but drug possession and similarly-legalisable* things shouldn't be on the list".
*By "legalisable" I mean "society wouldn't explode if you legalised this". Fraud, rape and murder are obviously not legalisable in their normal senses, although consensual "rape" (because of AOC violation, because drunk, etc.) demonstrably is.
Do you think a society in which a majority of people want to legalize murder will be saved from exploding by denying them the vote? I just don't see a legitimate reason for it. If a vote is related to the crime, they're the most motivated party; if it's unrelated to the crime, then it's unrelated to the crime.
>Do you think a society in which a majority of people want to legalize murder will be saved from exploding by denying them the vote?
No. The principled rationale for denying criminals the vote is typically along the lines of "they are demonstrably shit at decision-making and/or evil and thus contribute only noise", not "society will explode if you legalise murder"; it is extremely obvious that murder will never be legalised because even murderers don't want to get murdered.
I'm not particularly sold on this rationale; I'm just pointing out that your specific objection can be routed around. Kind of steelmanning.
Recently, there was a Guardian article about a subreddit that I'm a longtime member of, and overall it was surprisingly fair and accurate. However, there was one bit that baffles me - they claim that "to the floor" was a popular meme in the community despite no one ever saying that. A Reddit search shows only a single post containing the phrase, from three months ago with just 7 votes.
I just can't understand how something like that could happen. Obviously, they did do research for the article, and I don't think even the most cynical and underhanded writer would just make shit up, especially in an article that is otherwise accurate, so they must have gotten the idea from somewhere, but I have no idea where.
Keep in mind how almost all journalism works. Someone with no direct experience in an area asks questions of people who are involved. They also do some basic research, nowadays probably mostly in Google.
The number of points of contact a journalist has with any of their facts is minimal, maybe just one reference. One of their sources might say [thing they regard as fact] and the journalist has no ready means to verify it. It often gets printed as such. A thing that happens frequently in a community may only have been relayed to the journalist a single time, which in their minds is exactly equal to a thing that only happened once - so long as they hear about it. Journalists frequently do not know which things to fact check or review, so even if they are attempting to be careful something like the false meme can still easily slip through. And that's assuming the journalist understands the material they are researching. Science journalism is often terrible, because the journalists don't even know enough about the subject to identify basic misunderstandings.
Journalists tend to plan out their story before they do any research on it. They know two or three things about bitcoin, and one of them is the "to the moon" meme. So they assume there must be a "buttcoin" equivalent and they ask someone what it is. That someone shrugs and says "uhh to the floor I guess" and that gets printed.
But the other memes mentioned (e.g. "few understand" and "this is good for bitcoin") were accurate. In your hypothetical, why wasn't that inverted too?
Wow, that was a pretty sad read. Imagine giving up permanent rent-free real estate in your head like that. Did your childhood bully become a crypto billionaire and you're still holding on to resentment from that or something? Do you need a hug?
Eh I actually think it's probably net good for there to be people bashing crypto since it probably raises the awareness of the scams among potential future victims.
This is just a prediction that I'm making. Wanted somewhere to post the prediction so I would be committed.
I can't bet on PI because I am too poor right now. Made some good money on the Bernie market in 2019, though.
In almost every single Senate, Gov, or even state Presidential GE poll released by the pollster Trafalgar they have hit the Republican candidate's final vote share dead on. Within 1% in 90% of cases. Then within 2% in another 8% of cases and finally I found two polls where they missed by 3%. It doesn't really matter whether the poll was released 1 or 2 months ahead or in the final week.
I'd like to predict, based on the 8 Senate polls so far (sadly NH wasn't polled since the primary is tomorrow, and for some reason there's no Florida poll), that:
JD Vance 49% in Trafalgar poll - loses
Mehmet Oz 44% in Trafalgar poll - loses
H. Walker 47% in Trafalgar poll - loses
A. Laxalt 47% in Trafalgar poll - loses
B. Masters 44% in Trafalgar poll - loses
Ted Budd 44% in Trafalgar poll - loses
R Johnson 44% in Trafalgar poll - loses
T Smiley 46% in Trafalgar poll - loses
I'd expect Hassan's opponent to get sub 47% in the NH T-Poll and lose and I expect Rubio to be on the edge 48-51 like Vance, but also to lose, pending results from a Florida T-Poll.
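The heuristic behind these calls ("Trafalgarian Augury") can be sketched in a few lines. This is my own hypothetical encoding of the commenter's rule, not anything Trafalgar publishes: assume the Republican vote share in a Trafalgar poll is accurate to within about 2 points, ignore the D number entirely, and classify the race from the R share alone. The 50% win threshold is a simplification that ignores third parties.

```python
# Hypothetical sketch of the "Trafalgarian Augury" heuristic: only the
# Republican vote share in a Trafalgar poll is trusted (to within ~2
# points); the Democratic share and topline margin are ignored.

TOLERANCE = 2.0       # assumed max error on the R vote share, in points
WIN_THRESHOLD = 50.0  # simplification: ignores third-party candidates

def predict(r_vote_share: float) -> str:
    """Classify a race from Trafalgar's R vote share alone."""
    if r_vote_share + TOLERANCE < WIN_THRESHOLD:
        return "loses"          # even the best case falls short
    if r_vote_share - TOLERANCE >= WIN_THRESHOLD:
        return "wins"           # even the worst case clears 50%
    return "toss-up"            # "on the edge", like Vance at 49%

polls = {"Mehmet Oz": 44, "H. Walker": 47, "JD Vance": 49}
for name, share in polls.items():
    print(f"{name}: {predict(share)}")
```

Under this encoding, everyone polling 47% or below is a predicted loss, while the 48-51 range is the "edge" the commenter describes for Vance and Rubio.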
I don't normally watch the polling as closely as I have this year, so maybe Trafalgar was dealing with similar patterns in previous years, but the timing of the polls may matter a lot. After Republicans made significant gains for much of the year, the Democrats recently made a swing upward. If your measurement period is during this (apparently real) upward swing, then it's going to be worse for Republicans than if you were looking at polling data from even a few weeks prior. I have also heard that Republicans are now moving back upward again in polls, such that the final polling may be different than what we're seeing now - despite the current polling apparently being at least mostly accurate.
Literally 45 minutes ago. Of course I also told Cahaly about my theory the day before they announced their new polls so I expected them to improve for Rs. They will also do AZ/NV/WI. In any case this would still be a loss for Oz.
It is an interesting finding that Trafalgar's estimates for Republican candidates seem to be better than their estimates for Democratic candidates though.
Shannon Bray got 3.1% in that race on top of an October surprise sex scandal. Also Trafalgar had an October poll with Tillis at 48.6. In other presentations of this theory I discussed averaging the last 2-3 Trafalgar polls as well as considering the power of third parties. 3.4% would count as a 3% like Heller's race. I do concede I actually didn't check this race for my original analysis. So there are now 3 polls with a higher than 3% shift.
I don't think the 2020 NC Senate race changes the analysis that much considering the above factors but I do feel foolish for forgetting about it.
Do you have any knowledge or guesses as to what about Trafalgar's polling is causing this to happen?
Trafalgar is a really interesting pollster. Last I looked, it seemed they were the only pollster who, when faced with low single-digit poll response rates, recognized that non-responders might be different from responders and tried to do something about it. (Maybe I missed someone, but it looked like all the other polling companies just stuck their heads in the sand and assumed that, as long as they weight for demographics, they'll be OK.) The details on this are somewhat short, except for them presumably asking people how they think their neighbors would vote. I don't blame them for keeping their methodology secret, but your observation makes me even more curious as to what they are doing.
I'm guessing their accuracy on the R side might be due to their focus on counting the undercounted exactly right, and that they just don't put the same effort into counting the undercounted on the D side. ("Do you think your neighbors will vote for Trump?" seems like a one-sided question.) But that's just a guess.
Trafalgar is a very pro-Trump pollster in their surveys and their public presence. Many people actually argue that they don't even do actual polls and that it's all fake, but I think 538 probably did due diligence before giving them an A- rating, so I assume they do actually poll.
Trafalgar discusses I think 7 data gathering methods they use as a mixed model on their website if I am recalling correctly.
My suspicion is that Trafalgar assigns R leaners but not D leaners, and that may explain why their D numbers are always too low and somehow no undecideds ever move to the R column. But they are still more consistently accurate on R vote share than you'd expect of some other pollster who did that, so they have *something* in their special sauce.
I think their bias, and they are considered extremely R biased, prevents them from getting correct D numbers. Which is weird. Unless they are primarily an activist pollster for their public releases you'd expect them to want to get D numbers correct.
Cahaly is apparently going on the Star Spangled Gambler podcast next week and the host said he would ask Cahaly about how his R numbers are so good and also about my claim that they are predicting a Dem sweep according to my theory. We'll see if they actually do that.
The Economist's model says that the most likely outcome is Democrats pick up 1 seat. Their odds of keeping control of the Senate are 3:1. The two reasons they gave are that the Republicans chose weak, Trumpy candidates, and the abortion decision is unpopular.
Yes, all the major models are pretty in sync about the results. Important to remember that we haven't gotten serious polls from the major pollsters post Labor Day. The point of "Trafalgarian Augury" is that the models will be off until we get those polls, whereas R vote share in Trafalgar polls is accurate regardless of other data. Of course maybe those polls will have bad news for Dems, who knows. The Suffolk poll for Tim Ryan suggests they'll be good news, but the correlation would be 50% or something, so it doesn't assure that the models will have to update.
So the prediction is that the Democrats pick up 5 seats in the senate? But the Republican in Washington (T Smiley) ends up doing better than Republicans in Wisconsin, North Carolina, Arizona, and Pennsylvania? That doesn't sound that plausible to me. I'm expecting that you've misread something somewhere if you claim that there is a pollster whose individual polls have nearly all been within 2% of the final margin, including polls three months out from the election. (It really doesn't seem possible for a good pollster to hit the final margin within 2% from three months out, because a significant number of races should move by more than that much over the course of three months.)
It seems like I need to clarify something, which happens often with this claim. I am talking specifically and only about R vote share. Trafalgar's polls are garbage if you are looking at Dem vote share or final margin.
A classic example is the 2018 Nevada Gov race with Laxalt, the current Nevada Senate candidate, and the current governor Steve Sisolak. Everyone roasts Trafalgar because they had Laxalt up 6 and Sisolak won by 8.
However that is irrelevant for my theory. You see their final poll has Laxalt 45 and Sisolak 39. If you subscribe to "Trafalgarian Augury" you can already guess where this is going. The final result was Sisolak 53 and Laxalt 45.
In any case I checked a triple digit number of polls in a mid double digit number of races. I only found 2 polls that were off by 3% on the R vote share and none worse. This is only Senate/Gov/Biden-Trump Pres races, to be clear.
Very interesting! It still seems surprising that Republican vote share would be so stable throughout several months of polling. In 2016 in particular, I would have thought there were some big gains and losses over the fall. (Though now I see that you mention only Biden-Trump, not Clinton-Trump, which does raise questions about how it would generalize to a second presidential contest.)
Trafalgar is putting out, for some unexplainable reason, new polls this week for WI, PA, AZ, and NV. Governor and Senate though I don't much care about races for Governor outside examples of their polling accuracy. Much rather have Florida and NH but w/e. So we'll see. These polls will be anywhere from 30 to 23 days newer than the existing Trafalgar polls.
Very curious if the R number stays the same and if the D numbers fluctuate.
Under my theory, if the R is stable but D is moving, that suggests the R numbers are believed by Trafalgar to be solid whereas the D numbers could move down to change the topline margins, because Trafalgar seems to be a sort of "activist pollster" for right wingers.
Well as I said we have to wait for the Trafalgar Florida poll and probably the NH poll. So maybe Trafalgar gives him 52%, that'd be a sure win using "Trafalgarian Augury".
There have been several polls putting Rubio in danger recently, but nothing as reliable as Trafalgarian Augury. However it is a pretty common belief that Florida is not as Red as Ohio. So with Vance at 49% I'd expect Rubio to be in the danger zone.
How often do small or micro communities branch off into their own "unnetworked" site? How do "censorship" concerns affect this?
I'm asking because themotte.org finally started running their own site and, as far as I know, have abandoned the subreddit. This seems like a very rare thing but I'm wondering if this is more normal than I anticipated for two reasons:
First, between datasecretslox and themotte, I know of two forums that have split off from the SSC-sphere over the past couple years, both over "censorship" related issues. I don't pay attention to too many other online communities; is this normal?
Second, themotte based their setup on something called rdrama, which is apparently another reddit sub that got kicked off and became their own little thing. And I think the_Donald branch-off is still around. But I'd never even heard of rdrama and it was weird to find this whole little hidden community.
So yeah, most people on the internet tend to go to a few large sites, but I'm wondering how many little "dark" communities there are that have branched off. It's hard to tell how common this is because, well, one of the defining traits is that they're insular and not easily findable via the big sites.
Here are the communities that I know of that got kicked off reddit and started up somewhere else:
/r/fatpeoplehate and /r/greatapes back in 2015: the first real round of subreddit-level censorship. Both set up on voat, which managed to limp on for a few years under heavy DDoS attack before giving up.
/r/the_donald: moved to thedonald.win, which lasted until just after the 2020 election. They tried to start up america.win after that, but this seems to have vanished too.
/r/themotte: already a group of refugees from /r/slatestarcodex, they moved again to themotte.org
One thing I've noticed with all these communities (except /r/greatapes which I was never any part of) is that none of them really survived well. Communities tend to lose a lot of their best posters in the process of moving and descend into shallow caricatures of themselves. I looked at themotte.org the other day and saw a lot of low-effort "boo other side" comments that wouldn't have passed muster on /r/themotte, let alone on /r/slatestarcodex.
/r/themotte always had a problem with low-effort "boo other side" posts, due to a weird moderation failure mode where a poster could make essentially any kind of insinuation directed at a group of people, but calling out his prejudice slash intellectual dishonesty in response constituted a personal attack and was therefore verboten.
I ate my share of bans over at TheMotte, but I would say that is a slightly unkind characterisation.
They certainly did bend over backwards to give the benefit of the doubt, and how you phrased your response often seemed more important than the actual content: if (say like me) you tended to respond "you blithering idiot", then yeah, that was a personal attack. Some people did dodge around the rules very successfully by sticking on *just* this side of the line of stating outright the bad stuff, especially if they were trying to bait responses of the "you blithering idiot" kind.
But some people did not "call out prejudice/intellectual dishonesty", they went on rants and accusations of their own, eventually got bounced, then to this day maintain it was all mod prejudice and the right-wing/left-wing slant of the sub-reddit (TheMotte has been accused of being, at the same time, in thrall to the far-right and hopeless puppets of the progressives) while they were only being moderate and pointing out the bad stuff.
I think to say that DSL 'Split off....over "censorship" issues' gives a completely false impression of the reason for DSL's existence.
Scott deleted his blog, and soon afterwards, a few of his regular commenters got together and created a forum where some of the spirit of SSC could continue, in a forum setting. There was no way of knowing if Scott would ever blog again. Of course, it turns out he did, and does, but as with the way of these things, DSL having been brought into existence by unfortunate circumstances, found no reason to self-cancel and indeed thrives to this very day.
"Splitting Off" is a misrepresentation - Scott lists DSL at the top of his community links.
Alright, I'm not 100% sure what you heard, here's what I'm trying to say.
Scott deleted his blog over potentially being doxxed by the NYT, which certainly seems "censorship" adjacent. Scott wasn't censoring anyone... alright, take that back, we had the "Reign of Terror" back in the day, but basically everyone was ok with it and definitely all the DSL people were ok with it. But yeah, the core thing I'm trying to understand is whether these spinoff "dark" sites/forums are everywhere or just "CW"-adjacent.
Second, splitting off seems fair because DSL definitely has its own identity and "vibe", and that's a very intentional decision by obormot et al. The two most obvious examples are that DSL tends to focus much more on CW while SSC is still, like, more weird techy stuff, and DSL tends to stylistically prefer shorter, catchier posts while SSC generally prefers longer, ramblier posts. Scott seems to be on good terms with ACX/SSC-associated sites like themotte and DSL, but they are very much their own entities.
There is also TheSchism, which is still on Reddit, and which in its turn split off from TheMotte as, and this is my take on it, a place for the left-wing/liberal/progressives who felt they were being dog-piled any time they tried to say something by the righties.
It never had the same amount of engagement, and it seems (again, to me) that once you kick out all the witches, or set up a witch-free space, all the people there are so in agreement that they don't really feel the need to discuss things at great length. "X is good". "Yeah, I agree". "Me, as well" doesn't lead to the same kind of 1000+ comment threads.
There's also /r/culturewarroundup, another splinter of themotte which seems to still be up and running.
As far as I can figure out it's not exactly a further-right splinter of themotte, it's just one that has offloaded the pretence of high-minded discussion and just gone into full "Can You Believe What Outgroup Did This Week?" mode. I just checked and it seems to be reasonably active, attracting a manageable 200 comments per week on its culture war roundup threads. Seems quite successful, if that's what you're into.
I’m considering another - and final, I’m 48 - career move. But I’m a little lost, and I’d love advice from this group.
I’m an attorney (20+ years). I’d like to start taking night classes for computer programming. There’s a lot of activity in the legal tech world, but not a ton of overlap between the tech side and the law side. I think that someone with skills in both areas could do very well.
But I know very little of the tech world. I'm just starting to investigate this, and I need some fundamental advice. Any advice is welcome, but the questions I can think of are:
1. How important is credentialism? Do I need a degree, or are targeted classes enough? I’m old-ish and have three kids; I probably can’t take four years of classes, not to mention the expense.
2. Assuming I don’t need a degree, what classes are necessary, what languages should I know?
3. What math classes are useful/necessary?
4. How important is the name of the school? Anything I'm likely to be able to do would be Long Island local; there's no MIT in my future.
5. What other independent activities can someone take to demonstrate skills? I assume “I took a class” isn’t half as good as “here’s a fun project I did myself”.
A lot of this will depend on what kind of programming you want to do. Web development will be different from something more theoretical or "hard core" like working at Intel programming chips.
For web dev type stuff (start ups, most large companies, agencies and such) my answers would be:
1. not very at all
2. It's easy to learn the basics of a language once you understand the main concepts. But it's helpful to know Javascript because it's so pervasive. Then something like Java, Ruby, or Python is good. But really, knowing the language isn't very important. At my company we regularly hire people who don't know Python even though that's what half our system is written in.
3. I don't use any math concepts beyond algebra on a regular basis, but maybe you can impress people in an interview with something fancier.
4. Not important
5. Yes, projects! You could have a high school education but if you built a few moderately sized apps, you'll know much more than someone who went to Cal Tech and graduated top of their class but only knows the theory. In my experience, having a CS degree has a very low correlation with knowing how to actually code.
I suggest doing projects that are beyond the typical suggestions (todo lists, blogs, etc.). Either build something that you wish existed or just copy an existing app (unless you are trying to be a designer, the look of the app can be exactly the same - it's that you did it that's important).
I was a consultant for a while, then went to a coding boot camp, and have now worked at SaaS companies for over 5 years. I don't have any reason to think a CS degree would advance my career (any job that required that is probably not a place I want to work). At my current company, probably a third of the programmers don't have any formal CS degree (there are about 40 total). These people are at multiple levels of seniority, not just juniors.
Your experience as a lawyer likely means you are personable and can do well in a cultural/non-technical interview, which will really be a big leg up. Also consider roles that are coding adjacent but leverage your existing skill set. Things like support engineering or implementation engineering are technical roles but also require working with customers to solve their problems. You can start in these roles, then pivot to a full time programming role after a short time. (This is what I did.)
1. Not terribly important. I know people who had completely unrelated degrees and ended up with good tech jobs. Some of them went to a 6 month coding boot camp, which both taught them useful skills and helped them get job interviews which eventually led to jobs.
2. This somewhat depends on what technical work you are going to be doing. Front end vs back end vs full stack being the biggest distinctions, but also where you work matters. FAANG companies will emphasize different things from startups, which will emphasize different things from banks. I work in backend, and say C++, Java or Python are safe languages to learn (Python has the advantage of being probably easiest to learn, although you may want to learn one of the other two as well at some point). Class-wise I'd look into discrete math, basic algorithms and data structures, and some familiarity with databases.
3. Discrete Math is the big one. Linear Algebra is quite useful in some fields but if you're looking for a crash course I would skip it.
4. I would strongly recommend against going back to college for programming. Coding bootcamps take less time, many will charge you a percentage of your future salary, which is both cheaper than college overall and incentivizes them to get you a job, and they will work harder to get you employed than a college will. They won't teach you everything a 4 year degree would, but the good ones are pretty impressive in what you do come out with.
5. Just start coding. I'd say mainly to learn the stuff, although some people do contribute to open source projects or have web pages where you can see things they've programmed. Personally, I pay no mind to this when I'm interviewing candidates, but some people may. Really, programming interviews are just like long tests, if the candidate can answer the questions, they're probably good.
6. Generally I recommend people switch careers to tech. That said, I don't know enough about your job/background to know if it is a good idea. If you're 48, it may not be worth the trouble to have a major career change and, no offense intended, you should be honest with yourself about your motives for the change and if this is some sort of mid-life crisis or a well thought out decision.
I will also note, the field does have a lot of age discrimination (in favor of the young). Someone making a switch to tech out of the blue around the age of 30 is accepted, but someone doing it at the age of 48 is not commonly seen (at least by me). People over 40 can have trouble getting some jobs even with many years of experience.
I'm in Long Island too (Syosset). If you have more questions, I'd be happy to discuss this further.
You can check out /dev/lawyer, I like his blog and the takes on copyright.
5. Honestly anything you can complete is a boon. When starting, think a little smaller, and finish a few things. My first program I used daily was a program to download NASA's picture of the day and set it as my desktop background.
But finishing stuff is hard, so it's good to pick some intentionally small stuff to start with.
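The NASA picture-of-the-day project mentioned above is a good example of the "small and finishable" kind. A minimal sketch in Python might look like the following; the APOD endpoint and `DEMO_KEY` reflect NASA's public API as I understand it, but treat the details as assumptions to verify against NASA's docs, and note that actually setting the desktop background is platform-specific and left out here.

```python
# Minimal sketch: fetch NASA's Astronomy Picture of the Day and save it
# locally. Assumes NASA's public APOD API (https://api.nasa.gov), which
# returns JSON with "url" (and usually "hdurl") fields.
import json
import os
import urllib.request

APOD_URL = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"

def filename_from_url(url: str) -> str:
    """Derive a local filename from the image URL's last path segment."""
    return url.rsplit("/", 1)[-1] or "apod.jpg"

def fetch_apod(dest_dir: str = ".") -> str:
    """Download today's APOD image and return the saved path."""
    with urllib.request.urlopen(APOD_URL) as resp:
        meta = json.load(resp)
    # Some days the APOD is a video; fall back to "url" if no "hdurl".
    image_url = meta.get("hdurl") or meta["url"]
    path = os.path.join(dest_dir, filename_from_url(image_url))
    urllib.request.urlretrieve(image_url, path)
    return path

if __name__ == "__main__":
    print(fetch_apod())
```

Wiring the saved file into the desktop background (gsettings on GNOME, `SystemParametersInfoW` via ctypes on Windows) is a natural second step, and exactly the sort of small extension that keeps a starter project finishable.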
Are you already doing computer programming? If not, perhaps learn some _before_ hopping into classes. It is incredibly accessible. Online resources are vast... tutorials and people.
You should make sure you enjoy this before a career change. Consider that a hobby might provide more fulfillment than a job. Also, a junior dev job is likely to pay much less than being an attorney... and a senior dev job is likely to pay a lot less than being a senior attorney.
1. You need a degree. But you have a degree. This degree is not CS, and you should take coursework. But being able, understanding, and _experience_ are most important.
2. Learn all the languages you can. Learn assembly at some point. Take logic (you might have already!), algorithms and data structures, discrete math, and anything else that seems interesting.
3. Discrete math. Maybe linear algebra. Logic. Anything else that seems interesting.
4. None, unless you want to be a prof.
5. The only thing you can really do to learn, is to do. Classes are good but will teach you little. Write programs. Do projects. Make things. If you're not already doing this, start today.
You should do something you love. You can start today and find if you love programming, and you can do it even if it is not a career. Are there technical tools you lack? Things you wish existed? Try making those. Need is a very good driver, and doing is the only true teacher.
>2. Learn all the languages you can. Learn assembly at some point. Take logic (you might have already!), algorithms and data structures, discrete math, and anything else that seems interesting.
Hard disagree on the assembly language rec. Don't do that to yourself. Especially starting at your age. You don't have the time left to so casually flush that much of it.
*Am 48 in three weeks, took assembly at 20. It's not necessary outside of very specialized career paths.
It really depends on what GP wants to do with his tech knowledge. If they want to build legal-tech tools or products, then a CS education is absolutely useless.
> 2. Learn all the languages you can. Learn assembly at some point. Take logic (you might have already!), algorithms and data structures, discrete math, and anything else that seems interesting.
> 3. Discrete math. Maybe linear algebra. Logic. Anything else that seems interesting.
Completely disagree. Algos/DS is useful, but Calc 101, AP discrete math, and AP statistics level knowledge is more than sufficient for any generic software engineer job.
It will probably come down to industry-focused fullstack development skills or Data Science skills depending on what kind of legal-tech they want to contribute to.
Now, if OP wants to go into CS research then they'd probably need those skills, but they'd probably also need to spend 6 years doing a PhD on minimum wage, which I doubt they'd want to do at 48.
____________
I would suggest not wasting time on learning too many things. Learn the basic bootcamp level skills in the area you are interested in and pick up a part-time low-wage job. The industry is in very high demand, so anyone who will take a low-wage and has basic skills gets employed easily.
Could cities reasonably prevent bars from overserving patrons to the point of intoxication - like, imposing a 3 drink limit per patron? And probably a minimum cover tab too, to minimize bar hopping after you've hit your per-bar limit.
Alcohol continues to be society's most damaging drug by a huge margin, and alcohol taxes do seem to be proven pretty efficient at reducing alcoholism, alcohol-related violence, etc. I'd argue that over the decades/centuries, high levels of consumption have just become less & less socially acceptable- people used to drink WAY more in the 18th & 19th centuries. Given that- the idea of people routinely getting drunk and committing crimes at a public venue that's licensed and regulated by the state seems rather odd, yeah? Ask the local police department where most of the crimes are committed every Friday & Saturday night, year in and year out.
I think society can reasonably say- public drunkenness & its associated social ills (fights, etc.) are just not acceptable in our downtown. If you'd like to get drunk you can certainly do so in the privacy of your own home, or at a private party- but not in the middle of our city's commercial district, at a state-regulated & licensed establishment. All bars & restaurants now have a 3 drink maximum per patron, and if we see that this just leads to a lot of bar hopping, we're going to make bars pool driver's licenses associated with one's tab on a common server each night to prevent it. Basically- our commercial district is not available for large-scale intoxication, disruptive behavior and petty crime. Seems reasonable eh?
"Basically- our commercial district is not available for large-scale intoxication, disruptive behavior and petty crime. Seems reasonable eh?"
Al Capone says yes, this is very reasonable and he hopes you can get the city government to implement it ASAP.
Yes, yes, your version is different. But it's not really that different, and will have the same failure modes. There will still be places people go to get drunk, and your police won't be able to stop them, and you'll have less influence over what goes on inside. And the worst sort of people will have an extra chance to get rich.
A lot of people responded with some variation on this; I don't find it very insightful. The difference between Prohibition and just restricting how much a bar can serve you is that the former actually outlawed all alcohol sales. I agree that if you literally outlawed booze you'd create a black market, but this is... just a 3 or 4 drink maximum at the bar. If you want to get hammered, you can just buy a bottle and go home to do it.
I'm pretty confident that in the 21st century the police can find literal speakeasies in a city. (For one thing, every idiot would post photos of it on social media!)
Alcohol sales are already a highly regulated business, and 48 states already make it illegal to overserve someone. I think giving bars a firm number rather than vague criteria for determining inebriation is much fairer. Has Al Capone started his black market in, say, Utah, where alcohol is extremely regulated? I think people just repeat their Prohibition analogies without looking a little more closely at the current regulatory environment.
If you want to get hammered, you go to the bar that will let you get hammered. Even if that's illegal.
During Prohibition, alcohol wasn't just *sold* illegally. It was *served*, in bars, even though that's an extraordinarily risky way to sell an illegal product. It happened because the demand that Capone et al were serving wasn't just "imbibe alcohol", but "get drunk in the company of friends, or friendly strangers who I maybe wouldn't want to invite home and in any event I'm going to be too drunk to handle the logistics of a house party".
That's a demand that lots of people have, and that lots of bars (though not all of them) exist to serve. And we know what happens when you make it illegal to satisfy that demand by making it illegal to operate a bar - the Al Capones of the world will operate bars anyway, even though just the existence of a bar serving any alcohol is theoretically enough to bring down the wrath of the law. If it's *legal* to operate a bar in general, and the law can only touch you if it can prove that you served more than the legal number of drinks to a customer who will be working with you to establish plausible deniability on that, then it's going to be even easier to operate an illegal bar than it was for Capone. And he didn't have any trouble with that. At least, none that couldn't be solved with a bit of bribery and/or submachine gun fire.
So, those are your choices. People getting drunk in bars, or people getting drunk in bars with a side order of bribery and automatic weapons. Probably less of that than in Capone's day, at least.
The 80/20 rule applies to bars. Bars survive financially on the 20% of customers who have 6-12 drinks per night. Get rid of them and the bar fails.
Having spent a lot of time in bars and witnessed a lot, I'd bet that most of the violence in bars doesn't come from the regular drunks but from angry men who have only had a few and were looking for a fight from the start.
Don't we already have this? It's already illegal in nearly every state to sell alcohol to an intoxicated person. They don't have some maximum number of drinks they can sell you, but they are required to see if you look drunk, so you can't get around it by going to another bar.
"How many states have underage furnishing and sales to intoxicated persons (SIP) laws?
All states prohibit sales to minors, and all but two states – Florida and Nevada– have at least some form of SIP laws, which legally require that alcohol retailers’ staff look for behavioral signs of intoxication prior to serving or selling alcohol."
There are still dry counties in the US where you can't buy alcohol. Utah has restrictions on the alcohol content of beer. Many places have restrictions on what types of places can serve alcohol (must serve food as well, can't serve food as well, can't have certain other activities at the same location, etc.).
So, sure, you could make this rule. But the US won't ever make it. And I can't see it being at all effective in reducing the type of behavior you're talking about.
> the idea of people routinely getting drunk and committing crimes at a public venue that's licensed and regulated by the state seems rather odd, yeah? Ask the local police department where most of the crimes are committed every Friday & Saturday night, year in and year out.
Sure, but that is also because bars/nightclubs/downtowns are where people are at night. Everywhere else they might be is probably closed. The alcohol surely makes things worse, but it's not the only variable.
> If you'd like to get drunk you can certainly do so in the privacy of your own home, or at a private party
I haven't seen any evidence that getting drunk at home leads to less bad behavior. Things like domestic violence (which is very correlated with alcohol use) sure happen at home 99% of the time and not at a bar.
Bartender here. Once again, I agree with Other Jack. Most bars have tight margins. They depend on regulars who get shitfaced. Even more so, they depend on drunk people buying food specifically because they are drunk. Cut off the drinks and you cut off the food sales - and therefore cut off most bars.
I haven't dug into the stats, so someone may be able to contradict this. But the biggest preventable crime and/or danger to the public from bars comes from drunk driving. The other negative aspects - fights, mostly? Not sure what you're objecting to, maybe people having sex in public restrooms or pissing in the street? - are probably not going to be much reduced by changes to the law. It'll just start happening in house parties in neighborhoods and cause all sorts of problems in places that are scattered throughout the city and therefore harder to observe and regulate. People are going to drink and do stupid shit.
To my mind, seems like the 80/20 proposal would be to subsidize Uber and Lyft and try to eliminate drunk driving.
We already fought this battle in the late 19th and early 20th century and settled on a compromise that is generally accepted. I doubt many people are interested in rekindling the temperance movement and smashing up saloons.
A) Bars/Clubs pull in a significant amount of tax revenue, especially from alcohol sales. Local governments would be reluctant to limit that.
B) Most people aren't going to stop getting as drunk as they want and committing crimes. Prohibition more or less proved that. Containing them to certain nightlife/commercial districts is likely much better than letting it spread to private parties and speakeasies in residential areas.
You have the personal freedom to shoot a gun, drive a monster truck, smoke a joint, etc.- in designated private-property areas. Not in a downtown commercial district. How is 'getting blackout drunk' different?
"You can't consume toxins that kill 95k people in the US every year in our city's downtown. Unlimited toxin consumption is allowed at home or on private properties"
You appear to misunderstand the nature of civil rights, at least in the United States. It is extremely difficult to circumscribe them in a public area -- the government generally needs to have what the constitutional lawyers call a "compelling interest," and the mere hypothesis that some restriction on the rights of free association and bodily autonomy would in general lead to less public disorder isn't sufficient.[1]
After all, you could (and people once did) make a formally identical argument that blacks and whites should be segregated everywhere in public, since it would significantly reduce interracial strife and potential violence. But it would never pass constitutional muster, because the government does not have a compelling interest that could justify abrogating the First Amendment right of free association and the Fourteenth Amendment right of equal protection.
--------------
[1] States get away with it with respect to drunk driving laws because the state issues licenses for driving, and can revoke them for whatever reason it chooses, id est it is not a constitutional right to be able to move around *via automobile* on public roads.
As the reply below me notes, race-based restrictions are (deliberately) treated with 'strict scrutiny'. This is demonstrably not the case for alcohol regulation, as:
1. There are tons & tons of dry towns, cities, and even counties all across the US- alcohol is completely banned. There were even multiple dry states for 40+ years after Prohibition was lifted (the Bible Belt ones), and this was never ruled unconstitutional. It can't be ruled unconstitutional because the 21st Amendment *explicitly gives any 'possession' of the US the right to regulate alcohol*
Section 2. The transportation or importation into any State, Territory, or possession of the United States for delivery or use therein of intoxicating liquors, in violation of the laws thereof, is hereby prohibited
2. Not only does the Constitution explicitly give any 'possession' of the US the right to regulate alcohol- in practice there are thousands of such regulations. Bars are highly regulated man! Any place that sells alcohol already has restrictions on when they can sell (set hours of operation), who they can sell to, how much they can sell, and so on
3. It's already illegal to overserve customers to the point of drunkenness
Again- this is *already criminalized now*. Arguably it'd be more fair to simply set a hard limit in terms of number of drinks, than to subject servers to highly subjective standards about how inebriated someone is
You are greatly overstating your case. First, no right to association has been recognized by the Supreme Court outside of association for the purpose of speech, and rights related to intimate relationship (marriage, child rearing, etc). See https://www.mtsu.edu/first-amendment/article/1594/freedom-of-association
Nor is the right to bodily autonomy, to the extent it exists, implicated by law re the consumption of alcohol. (That should be obvious, since laws barring entirely the consumption of marijuana are perfectly constitutional).
A law such as that proposed by the OP could only be challenged under the Equal Protection clause, and, since it does not discriminate based on a suspect class nor in regard to a fundamental right, would be subject to rational basis review, whereby a law is constitutional if it is rationally related to a legitimate goal, a very low bar. https://en.wikipedia.org/wiki/Rational_basis_review
BTW, that is the flaw in your segregation analogy: A race-based law is subject to strict scrutiny, an extremely high bar that requires that the law be necessary for a compelling state interest. https://en.wikipedia.org/wiki/Strict_scrutiny
I see I expressed myself poorly, and thank you for the opportunity to clarify.
I am of course not suggesting state governments can't regulate the sale of alcohol, either by blanket prohibition or by a variety of time place and manner restrictions. Clearly they can, and have, and will. (My opinion on whether that is practical or not is contained in a comment further up.)
What I was addressing was only nifty's very first line, in which he or she appeared to suggest that civil rights can readily be curtailed if a person happens to be in a public space, and if the state sees some rational interest in doing so. So far as I know, that is not the case. Civil rights are no less potent in public spaces as private (I'm not even sure what they mean in the private space anyway), and they are by definition things the majority cannot infringe just because it wants to, or see some rational interest in doing so. To the extent the Supreme Court has allowed any infringement at all, it requires a compelling -- not merely rational -- state interest.
My example was meant to be a provocative illustration of the point: segregation was certainly argued at various points to be a rational interest of the state, because of the reduction in interracial friction and the potential for violence. (Whether those things are true or not is irrelevant to the argument, as what matters is whether the majority believed them to be true.) What the Supreme Court has said in general, and which we all kind of instinctively agree on with respect to segregation, is that merely having some kind of reasonable hypothesis of how such-and-such abridgment of a right might be in the public interest is insufficient. Civil rights are exactly those things that cannot be abridged regardless of the reasonableness or general belief in such theories.
So to the extent anyone can raise a constitutional civil-rights issue with respect to any proposed regulation of alcohol, the mere facts that the law only applies in public places, and that the hypothesis that it will produce some general public good is widely believed, are insufficient to allow abridgment of those rights. Whether any rights are actually implicated by any alcohol regulation would of course depend on the nature of the regulation.
No, civil rights generally cannot be curtailed merely because a person happens to be in a public space, but my main point is that there is no constitutional right implicated by the laws that the OP suggests.
How about if I offer a place for my friends and even friendly strangers to come and consume those toxins, on my own property. Surely no problem there, right?
I think given that people mostly don't live downtown, there isn't much incentive for society to reasonably say "public drunkenness & its associated social ills (fights, etc.) are just not acceptable in our downtown" when they don't really experience those ills.
Do black (African/Caribbean) people have biologically different voices from white (European) people?
When I hear a black person on the radio, I can often tell without any explicit mention of race. This is especially true for black women.
Of course, there are massive social and cultural factors in play. The community and region in which you grow up will affect accent, dialect and vocabulary choice. If you grow up around black people, you will speak and sound like other black people for this reason. (Note: I live in the United Kingdom, which has a very high level of regional accent variation, with the majority of black people living in London and other major cities, so my experience of these effects will be magnified.)
But I believe, even accounting for all this, that there is often a racial difference. Black women in particular have voices that sound lower and raspier/coarser than white women.
It would not be surprising if this were true. Black people have different facial features from white people. (You can see this most obviously by comparing photos of albino black people and albino white people.) This presumably results from differences in bones and muscle, which could easily affect voice production too.
But I have been unable to find any information about this online, whether research papers or blog posts. Part of the problem is that my Google searches tend to return results about dialect. If there is research on this, I'm not sure what search terms I should use to find it. (Conversely, I was able to find discussion of what it means to have a "gay voice".)
1. I provided evidence that anatomical differences between races exist
2. If it were purely a matter of accent, it should be difficult to tell apart men and women who have the same accent. The opposite is true: it's exceptionally easy, which means accent is not the only thing relevant here. And these male-female differences correspond to similar anatomical differences in the vocal tract between races.
>Acoustic pharyngometry evaluates the geometry of the vocal tract with acoustic reflections and provides information about vocal tract cross-sectional area and volume from lip to the glottis. Variations in vocal tract diameters are needed for speech scientists to validate various acoustic models and for medical professionals since the advent of endoscopic surgical techniques. Race is known to be one of the most important factors affecting the oral and nasal structures. This study compared vocal tract dimensions of White American, African American, and Chinese male and female speakers. One hundred and twenty healthy adult subjects with equal numbers of men and women were divided among three races. Subjects were controlled for age, gender, height, and weight. Six dimensional parameters of the speakers' vocal tract cavities were measured with acoustic reflection technology (AR). Significant gender and race main effects were found in certain vocal tract dimensions. The findings of this study now provide speech scientists, speech-language pathologists, and other health professionals with a new anatomical database of vocal tract variations for adult speakers from three different races.
>In paired dialect identification tasks, differing only by speakers' sex, New Yorkers were asked to identify the race and national heritage of other New Yorkers. Each task included eight speakers: two Chinese Americans, two Korean Americans, two European Americans, a Latino, and an African American. Listeners were successful at above chance rates at identifying speakers' races, but not at differentiating the Chinese from Koreans. Acoustic analyses identified breathier voice as a factor separating the Asian Americans most frequently identified from the non-Asians and Asians least successfully identified. Also, the Chinese and Latino men's speech appeared more syllable timed than the others' speech. Finally, longer voice onset times for voiceless stops and lower /ε/s and /r/s were also to be implicated in making a speaker “sound Asian.” These results support extending the study of the robust U.S. tendency for linguistic differentiation by race to Asian Americans, although this differentiation does not rise to the level of a systematic racial dialect. Instead, it is suggested that it be characterized as an ethnolinguistic repertoire along the lines suggested by Sarah Bunin Benor.
That's absolutely irrelevant. It's like saying that Yao Ming is a hall of famer, therefore the reason why blacks outnumber asians in the NBA by orders of magnitude has nothing to do with biology.
It doesn't seem like quite the same thing to me. The ability to vocalise, and to understand and express the meaning of words in a play/show is very well distributed, in my opinion.
Thanks. This is exactly the kind of study I was wondering if someone had done, and confirms my suspicion that there are biological/physiological factors that affect voice in a significant way.
I think the relevant terms I would have needed to find this through Google are "physiology/physiological" and "voice quality" (where "quality" means "kind/type", not how "good/nice" something is).
Considering Laurence's comment about how pitch can vary even with the same person speaking different languages (which wouldn't be physiological), it seems like physiology may not be the only factor or even the dominant factor. Based on the sources I've seen and people have linked to, I don't have a good idea of what the likely relative contributions of physiology versus psychology/socialisation are to voice pitch and timbre.
I find it interesting that many of the responses were about accent and dialect, even though I explicitly stated that this was not what I was thinking about. I think that, considering speech and voice as a whole, accent, dialect and vocabulary choice are usually far more obvious (and probably more reliable) indicators of the speaker's race than pitch and timbre; maybe this is why people gravitated towards that discussion. I found that the comments on this blog post discussing racial differences in voice quality tended the same way:
Many of the comments there are primarily anecdotal, but those who do think that black people have different voices from white people tend to agree with me that black voices tend to be deeper and "huskier". I particularly enjoyed one commenter's description of how he was convinced there is a racial difference in voices by listening to Klingons in Star Trek: TNG and predicting that Worf's actor must be black.
Grew up in a majority-black community listening to a local female DJ; always thought she was black. Around 7th/8th grade my friends and I were shocked to discover she was very, very white. It was shocking not just because of our expectations, but also because we often heard white people speaking like locals, and it never sounded right.
Second anecdote: there's a true-story movie about a black guy who infiltrates the KKK with Kylo Ren.
This is probably really about accent and dialect, rather than voice pitch.
Comedian/actor Sacha Baron Cohen's first big character success, Ali G, played on this idea. When the character was created, many white British youths fantasised about and imitated aspects of black hip hop culture and gang culture, including speech. Ali G was a parody of these white people. He attracted some criticism because people thought he was a parody of black culture, rather than a parody of white people imitating black culture.
Do you have any supporting evidence for this claim? I would not be surprised if the media voices have different styles, but e.g., is there any study where people listen to speakers reading out some neutral text, and then perform better than chance at identifying their political views?
This seems not to be the case in the U.K., and definitely not for well-educated black people. I was watching the BBC coverage of the Queen's passing and the presenter Clive Myrie sounded just as posh as the rest of them.
It occurs to me that I should give a couple of examples of people who I think sound "black", but not just because they speak Multicultural London English.
Looking at Wikipedia's category of black, British MPs, consider Diane Abbott or Dawn Butler. Both have voices that I would consider to be lower and huskier than is typical for a white woman.
I am from the UK and was talking mainly about my experience listening to people from the UK.
Your suggestion that Clive Myrie "sounded just as posh" sounds to me like a claim about accent/dialect, not a claim about pitch and timbre. Even so, I must disagree with you partly on this point. Yes, he speaks clearly and sounds educated. Yes, his accent is not recognisably black. However, while I would say it is definitely more Received Pronunciation than Estuary English, he sounds nowhere near as "plummy" as, say, David Cameron, Boris Johnson, or Jacob Rees-Mogg. However, I guess if you were comparing him with other BBC presenters of the current generation, "just as posh" is probably fair. He certainly doesn't sound "common".
The test here, Colin, is to close your eyes and do a blind test. Get someone to help you with different presenters, preferably ones you don't know. That Clive sounds less posh than Johnson doesn't mean you could work out his race on the radio. I just checked on Diane Abbott: I closed my eyes and didn't hear much huskiness.
For goodness' sake, why do you insist on using these non-random examples? The average UK politician sounds different from the average UK person, so they're not good examples. How many MPs have thick cockney accents? None?
Given the pains mainstream broadcasters take these days to avoid giving offence and not cause prejudice against specific racial groups, it is amazing the way voice disguises, in true-crime programs and suchlike, make perp interviewees' voices sound like those of some blacks! The quality of this kind of voice is hard to define, but it has a sort of resonant sound which (I guess) may be mainly the result of larger jaws, and thus a larger mouth, and/or larger nasal cavities.
By contrast, some American voices, especially female voices with what I believe is a Brooklyn accent, seem to have become squawkier and more nasal with every passing decade! At this rate, by 2050 the speech of some American women will sound like short bursts of fast forwarding on an analogue tape recorder! :-)
Okay, but there is OBVIOUSLY massive selection going on there. Of course a news presenter sounds posh; they wouldn't have hired him if he sounded Caribbean and couldn't say 'th'. Do you imagine that newsreaders are selected randomly from the population?
Okay, and that's wrong, because you're not basing it on a representative sample of the population; you're basing it on a single individual selected precisely because he sounds "white". I've met white people in the US whose voices sound indistinguishable from those of 'black-sounding' black people. That doesn't mean that mean differences in voice quality don't exist between the races; I would just be basing my view on an unrepresentative sample.
In order for what you are saying to be true, we would need some randomly selected whites, blacks and whoever else from the UK population and blindly guess the race from the voice.
If you're claiming that it's possible to find people of different races between whom no difference in voice quality is detectable, then yes, that is obviously true but also irrelevant. The claim "you can't tell in the UK" implies that there is no mean difference.
I am not just talking about this individual; he was an 'example'. In Britain as a whole there is no black accent, only regional accents that apply to everyone. A black man from Liverpool sounds like a white man from Liverpool. And this is true across class divides. Footballers don't have black accents.
The only perhaps-racial accent is Multicultural London English, but white people mimic that as well.
>I am not just talking about this individual; he was an 'example'.
Yes, and it's *literally the worst possible example you could have provided*, because it's the opposite of random.
>In Britain as a whole there is no black accent except regional accents that apply to anywhere.
This isn't about accents. This is about voice quality. Two people with the same accent can sound very different. Blacks and whites in the US can have the same accent and yet have distinguishable voices.
For goodness' sake, white men and white women from the same place will have almost identical accents, and yet more than 9 times out of 10 we have absolutely no difficulty identifying the sex of the person talking.
>Footballers dont have black accents.
They sound different from white footballers on average.
I think I agree mostly. Certainly, I would expect a black man who grew up in Liverpool to have the same accent/dialect as a white man who grew up in Liverpool.
I think I have a perception of Multicultural London English as being "more black" (but see my comment about Ali G above), which is presumably because such a large proportion of black people in the UK live in London.
But I think there is a further effect in play. When you have a somewhat insular ethnic minority community, that community can develop and maintain its own accent/dialect variation, which effectively becomes a racial accent. You can hear this most obviously if you go and talk to someone working in a kebab shop or as a taxi driver in a major city outside London. (For readers outside the UK: A very high proportion of people working in these jobs are south Asian; that is, Indian, Pakistani or Bangladeshi.)
I wanted to flag this up as something I was aware of potentially existing because it can be difficult, as a listener, to separate perception of accent from perception of voice quality/timbre.
So, can you tell when it's, say, Nigerian instead of American? I was listening to the audiobook of Half of a Yellow Sun, and it's actually startling when a black American woman who moved to Nigeria talks (they have voice actors for the audiobook).
I'm not quite sure what you mean by American? Unless you are talking about Native Americans, which I doubt, I'm not sure what "American" would mean racially.
Are you comparing a black woman who grew up in America and then moved to Nigeria, with a black woman who grew up in Nigeria, where both are speaking English?
It could possibly be true, but evidently the sociocultural environment has a huge effect on how a person's voice sounds. Multilingual people can drastically change the 'default' pitch of their voice depending on what language they're speaking in, without even realizing it. Given that such differences can be very large within one person, it would be very unlikely that racial differences are especially influential.
I found your remark that multilingual people change pitch when changing languages very interesting. This is a possibility I had not really considered before. I had assumed that the voice pitch people speak with "normally" was mostly/entirely physiologically determined.
While looking for references to back up your remark, I came across this, which has a nice summary of research on "default voice pitch" (for which the technical term seems to be "F0") in the introduction:
>Acoustic pharyngometry evaluates the geometry of the vocal tract with acoustic reflections and provides information about vocal tract cross-sectional area and volume from lip to the glottis. Variations in vocal tract diameters are needed for speech scientists to validate various acoustic models and for medical professionals since the advent of endoscopic surgical techniques. Race is known to be one of the most important factors affecting the oral and nasal structures. This study compared vocal tract dimensions of White American, African American, and Chinese male and female speakers. One hundred and twenty healthy adult subjects with equal numbers of men and women were divided among three races. Subjects were controlled for age, gender, height, and weight. Six dimensional parameters of the speakers' vocal tract cavities were measured with acoustic reflection technology (AR). Significant gender and race main effects were found in certain vocal tract dimensions. The findings of this study now provide speech scientists, speech-language pathologists, and other health professionals with a new anatomical database of vocal tract variations for adult speakers from three different races.
When I was in college, I found I could identify black people from New York, sight unseen. It was pretty clearly some kind of local accent, because it was very location specific.
OTOH, I've heard any number of British people, who turn out to be second generation from all over the world, including from mostly black colonies, and I can't distinguish them by voice.
Reasoning:
I doubt there's a voice difference. I'd expect accents - plural, not singular - and also dialects - with varying degrees of difference between white and black people from the same location, depending on details of local customs. Children seem to absorb whatever way of speaking they are raised in, and sound like their parents and neighbours, whether or not those are foster parents.
Like so many other things, I'd expect that if you went out objectively measuring voices by some set of objective criteria, you'd find that different races have heavily overlapping distributions with slightly different averages.
To pick a simple example, I would not be surprised if the average pitch of a black man's voice is deeper than the average pitch of a white man's voice, which is deeper than the average pitch of an Asian man's voice, even when all are raised in the same way speaking the same dialect.
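The "heavily overlapping distributions with slightly different averages" point can be illustrated with a quick simulation. Every number below (means, standard deviations, the 8 Hz gap) is an invented placeholder, not measured data:

```python
import random
import statistics

random.seed(0)

# Hypothetical mean fundamental frequencies (Hz) for two groups of
# speakers -- purely illustrative numbers, not real measurements.
group_a = [random.gauss(110, 20) for _ in range(10_000)]
group_b = [random.gauss(118, 20) for _ in range(10_000)]

# Despite the difference in averages, a large share of group A
# sits above group B's mean -- the distributions mostly overlap.
mean_b = statistics.mean(group_b)
share_above = sum(x > mean_b for x in group_a) / len(group_a)
print(f"share of A above B's mean: {share_above:.2f}")
```

With these placeholder parameters, roughly a third of group A lands above group B's average, which is why classifying an individual from a single cue like pitch would be unreliable even if the group averages really do differ.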
>In paired dialect identification tasks, differing only by speakers' sex, New Yorkers were asked to identify the race and national heritage of other New Yorkers. Each task included eight speakers: two Chinese Americans, two Korean Americans, two European Americans, a Latino, and an African American. Listeners were successful at above chance rates at identifying speakers' races, but not at differentiating the Chinese from Koreans. Acoustic analyses identified breathier voice as a factor separating the Asian Americans most frequently identified from the non-Asians and Asians least successfully identified. Also, the Chinese and Latino men's speech appeared more syllable timed than the others' speech. Finally, longer voice onset times for voiceless stops and lower /ε/s and /r/s appeared also to be implicated in making a speaker “sound Asian.” These results support extending the study of the robust U.S. tendency for linguistic differentiation by race to Asian Americans, although this differentiation does not rise to the level of a systematic racial dialect. Instead, it is suggested that it be characterized as an ethnolinguistic repertoire along the lines suggested by Sarah Bunin Benor.
If our perception of reality is just a rendering of physical/base reality in our consciousness, why is it mostly beautiful? Why is our rendering of a wave beautiful if it is just a bunch of colorless molecules? Why do we perceive nature as mostly beautiful, even sublime? Is it an evolutionary aspect of our brain/consciousness to make existence bearable?
I've heard that default ideas of beauty can change-- in the west, beautiful land was at least fairly level and fertile, and the idea of the sublime (mountains, storms) came in a century or so ago.
On the other hand, there are so many cliffs in traditional Chinese and Japanese art that I was surprised to see a field with a lot of flowers in a Kurosawa movie.
Not all of life is beautiful. War and devastation suck. And so when it is beautiful, we should totally enjoy it. I'm heading out on a walk, it's been wet here and mushrooms are popping up all over. (Well mostly in the woods.)
It's not a rendering of base reality though. All of our perception is an extremely limited construction based on a very narrow selection of information. We take a certain range of the electromagnetic spectrum as input, and then apply layers and layers and layers of post-processing to that. Early layers give us experience of color and brightness, and later ones give us depth, motion, and face perception. Yet, few (arguably any) of those experiences exist in reality. Colors may correspond to certain wavelengths, but our perception has nothing in common with base reality when it comes to faces, motion or even brightness, which are constructed from scratch.
The reason why nature is beautiful doesn't directly follow from this, but you can easily bridge the gap. We perceive things the way we do because this apparently was most conducive to our survival. One explanation is that we developed some neural algorithm that intuitively evaluated spaces for habitability, and somewhere with lots of greenery, a varied landscape, and long sightlines gets evaluated positively because it means fertile land and lots of different foraging/hunting opportunities. So we call that intuition 'beauty'.
I am trying to make sense of the comments. Let's see... We have no direct access to base reality. We apply a multitude of layers to a small range of electromagnetic input. I assume the more sophisticated the organism, the more sophisticated the layers. Hard to prove on the level of consciousness, but I assume this is done on the level of neurology. OK, going on... evolution gave humans a beauty circuit, but not for specific things, because we don't have enough genes to support that. I am aware of the synchronic and diachronic as well as individual differences in appreciating certain qualia or even aesthetics, but I am not convinced. Are all aspects of beauty cultural? Are there cultures that find no aspect of nature beautiful? What about the sublime - awesome yet fear-inspiring? A mighty waterfall or storm clouds? As for socialization, I propose a Bach cantata against screeching noise (yes, not 100 percent, but it feels universal both historically and across cultures today, if people are asked for a preference, say, in a closed room for 24 hours). There are many assumptions here, and it seems like we are going out of our way to avoid some sort of dualism or simulation. Where am I going wrong?
I think "beauty is cultural" and "beauty is innate" aren't necessarily exclusive. Cultural influences might cause someone to dislike what most cultures would like and vice-versa, just like how people learn to like spicy food and bitter tonic. Brutalism is the first example that comes to mind. I'm just spitballing here but if the perception of beauty has a survival advantage, then you'd expect it not to be sensitive to cultural effects, at least not to the point that one's sense of beauty is the complete opposite of another's. But if the experience of beauty is just a side effect of how we process stimuli, then it might be more malleable.
You didn't evolve to find specific things beautiful; you don't have enough genes for that.
Evolution gave your brain a beauty circuit, and yours was trained to find nature beautiful, but not everyone's is. Many people don't find nature beautiful. And many find things antithetical to nature beautiful, like cities or sci-fi landscapes.
Nature is a reasonable candidate for beauty because you live in it, it's often a matter of life and death, and it's legible: you can learn a lot about it with your senses. Beautiful doesn't always mean good, I find the utter barren rock scape of Mars beautiful. A better shorthand for beauty might be "compelling".
It's actually even more fun - I seem to recall reading that historically nature was perceived as hideous. Which jibes with how we'd have less appreciation for natural spaces back when they were more likely to kill us than to present an opportunity for a nice stroll.
Very interesting. It seems like a phenomenology thing, the much-debated capacity to see the "itness" or "thingness" of a thing/phenomenon before culture teaches you to see it in a certain way. I wonder how that will pan out with objects in space... we see galaxies as mostly beautiful, but will we be acculturated to see dark, frightening rocks as such? Maybe it also has to do with the time we spend gazing at them and how threatening we perceive them to be, so we see far celestial objects as beautiful, as well as the Moon and Mars, neighbors we are somewhat intimately familiar with.
Interesting phenomenon: the sound of the surf breaking on the beach is similar to the sound of a nearby highway. The label I put on the sound makes me experience it as beautiful or ugly.
> Why do we perceive nature as mostly beautiful, even sublime?
I guess the purpose is to make us relax, thus conserve calories. Nature is beautiful when there are no predators, decaying corpses, dangerous insects, spiders, or snakes nearby.
Does anyone know of any large scale data sets on human mood?
I'm interested to know how often and how much people are unhappy. If we took a million people and sampled their self reported mood up to several times a day for six months... we might learn whether unhappiness is in fact a fairly common experience, and whether there are clusters in how it behaves (I'd expect people to be somewhat arranged along the Neuroticism axis of the 5 factor model...).
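A toy version of that sampling design can be simulated to see what such a dataset could show. Every parameter here (the trait distribution, the logistic link, the baseline rate) is an invented assumption, not an empirical estimate:

```python
import math
import random

random.seed(1)

# 1,000 simulated people, each with a trait score standing in for
# something like Big Five Neuroticism (made-up scale).
traits = [random.gauss(0, 1) for _ in range(1_000)]
pings = 3 * 180  # ~3 mood pings per day for six months

def p_unhappy(trait):
    # Assumed logistic link from trait score to per-ping
    # probability of reporting unhappiness.
    return 1 / (1 + math.exp(-(-1.5 + trait)))

# Per-person share of pings reported as unhappy.
rates = []
for t in traits:
    p = p_unhappy(t)
    rates.append(sum(random.random() < p for _ in range(pings)) / pings)

mean_rate = sum(rates) / len(rates)
high = sorted(rates)[int(0.9 * len(rates))]
print(f"mean unhappy share: {mean_rate:.2f}, 90th percentile: {high:.2f}")
```

Even in this toy setup the person-level rates fan out widely along the trait axis, which is exactly the kind of clustering (and overall prevalence estimate) a real dataset of this shape could reveal.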
Anyway, I think there have probably been apps supporting better mental health etc. that have collected and perhaps published this data - but I cannot find it. I had a look around and I could find stuff about how mood correlates with social forces, and sentiment analysis on social media, but nothing that I could use to explore the questions I'm asking.
Maybe the ACX community can point me in the right direction!
For high school students, there is the huge PISA datasets which also contains questions about the well-being of students. Just once per every 2-3 years, but it covers a huge number of students and countries.
There was a study I participated in years ago called the BRIGHTEN Study (it used the ginger.io app), where it messages you several times a day or several times a week about your mood. I'm not sure what came of the results; they never sent them out to the participants like I was told would happen.
I was once a subject in a study of mood. I got pinged at random times during the day, and was to report my activity at the time and how happy/unhappy I was. I found those happiness ratings extremely hard to do. I'm not convinced that happiness/unhappiness is a dimension that's meaningful all the time. If I'm quite happy or unhappy I'm clear about that, but a lot of the time I'm in a state where I'm neither. There are things I can say about the state, but how happy or unhappy I am isn't one of them. For instance, right now I'm neither happy nor unhappy, and honestly I do not think it's accurate to say that I'm on the midpoint of the scale either. What's much clearer to me is how invested I am in what I'm doing. When I saw Andrew O's question I wanted to answer it. Now that I'm answering it, I'm invested in being clear, getting my idea across. That's keeping me typing energetically, and I would definitely not welcome an interruption right now -- so in that sense I "like" writing this -- I want to do what I am doing right now. But I think, for me at least, it would be forcing an inappropriate template on my state to say that writing this is making me somewhat happy. It's much more accurate to say I am engaged, interested, invested. Everybody has a certain weight at any given moment, but maybe we don't have a certain quantity of happiness at every moment.
I'm pretty sure Daniel Kahneman has done some of these studies (as have many other psychologists, though Kahneman describes them in Thinking, Fast and Slow).
What you're describing is known as Ecological Momentary Assessment, or EMA, and I have a few colleagues working on it. They use an app called Ethica, which asks people to report their mood on various scales several times a day for a number of weeks or months. However, I don't know if unhappiness per se is something that EMA captures well. Absent chronic stress, I would expect that hedonic treadmill effects keep people at roughly the same level of (un)happiness, depending on how you define it. But if you do some Google Scholar searches, you could probably find something interesting.
This doesn't directly answer your question, but most of the research in this area refers to 'well-being' rather than 'mood'; searching on the former may be more fruitful, in case you haven't tried that. For example, searching '+patterns +wellbeing' in PubMed turns up > 30k hits.
I doubt anyone has sampled a million people multiple times a day. I was involved in one study for a larger employer that did exactly what you describe - randomly polled their ~10k employees through the day, over many months, about their well-being, via a smartphone app - but the data and findings are not publicly available.
Anyone here with aquarium hobby or knowledge about biological filtration?
TLDR: I am looking for some research articles or other materials about biological filtration in aquariums.
A lot of the information I found so far online is not very rigorous.
I am looking for some pointers or recommendations to good research/books/anything serious to help me answer these questions:
1. Why does it take so long for the bacteria to colonize the filter (even when the bacteria are added explicitly)? How do parameters of the water influence this? Can it be shortened?
2. Is filter media really necessary? It is usually said filter media provides surfaces for bacteria but these are just a fraction of other surfaces combined in an average tank (substrate, rocks, etc.). Wouldn't just establishing a continuous flow of water be enough? If not, what is the appropriate amount of filter media?
3. It is recommended to run the filter all the time. Are the filter bacteria really so fragile that they wouldn't survive some fraction of time with the filter turned off? This does seem counterintuitive to me.
4. It is said filter media transplantation to a new tank usually does not work and cycling has to be started again. What is the reason behind this? Can it be done somehow?
5. How do water parameters affect the capacity of filtration?
6. How does the strength of water flow affect filtration?
A lot of answers to these questions I found on aquarium webs/blogs/discussions are quite contradictory and sometimes I get the feeling nobody really knows what they are talking about or are just repeating what others said. That's why I want to look at some more rigorous research.
Also, do you think any of these would be a good research question?
What's more important than the precise absolute values is that tank conditions do not change quickly. (this goes double for marine aquaria. The ocean is a very consistent environment.) Small tanks can be a false economy because they change too fast. Don't rush to put fish in before the system has equilibrated.
> (substrate, rocks, etc.). Wouldn't just establishing a continuous flow of water be enough?
This is called an under-gravel filter. Water is drawn down through the substrate and up a tube. They used to be more popular a couple of decades ago and are not great for live plants.
I worked in sewage treatment for a while, and would point you toward that in terms of pure biological filtration mechanics questions. I don't have much experience a closed system including fish, though.
In my experience, this level of information basically does not exist for hobbies. If you have the kind of mind that asks these kinds of questions, and the patience/time/space (and probably disposable income) necessary to test them yourself, you could probably become a minor celebrity in your hobby's community. (See the "brulosophy" folks in the homebrewing community.) Be ready to become a villain to a small subset when you inevitably slaughter some sacred cows.
The best you can hope for is that there _might_ be literature from research/large scale aquariums but I doubt it, as they tend to use completely different systems and techniques relative to home aquariums.
In general, these "best practices" in hobbies (and in most things, really) come from people knowing a certain thing like "there is a bacterial community that helps filter water, and bad things happen if it isn't robust enough". They then start throwing things at the wall until they find something that works, with no rigor or methodicalness in trying to find _which_ things specifically, and in what amounts and under what conditions, actually matter. The real world is complex and finding the boundary conditions among many interacting factors is difficult, so mostly people don't bother; they just do the thing that works, which usually consists of some mixture of things that have no effect, things that are done in complete overkill, and a couple of things that are completely essential, with no way of differentiating them.
What kind of aquarium are you looking to run? I generally had success not doing even half of what is "best practices", but the one thing I did do that I believe helped a lot is use 1/2 a gravel base from an already healthy aquarium.
I'm a high school student with an interest in computer science, but it looks like coding will be a less valuable skill in the future with things like Copilot coming out. Is majoring in comp-sci still a good idea? Also, if not, what are some other intellectually-demanding majors that are more resilient to automation (Maybe stuff in math or bio)?
Copilot is garbage because getting code 95% of the way to correct is worse than getting it 0% of the way. Debugging takes most of the development time.
Majoring in comp-sci is one of the few reliable ways to still be relevant in 20 years.
90% of my time spent programming is not writing code; it's deciding what to write and how to write it. Things like Copilot are just like having a map function built into a language. It makes it easier/faster to do routine things. Copilot just greatly expands what can be considered routine.
Computer science being fully automated is essentially the definition of the Singularity, at which point any decisions you've made prior are irrelevant. Conditioning on the Singularity *not* happening imminently, there's still growing demand for *skilled* developers.
I'd still learn to program. It's not like auto mechanics went away when we got power tools.
And a lot of the copilot stuff is just letting you go faster, not really creative. So it might actually raise programmer productivity as long as we don't run out of problems to solve. And that's hella true.
In my dad's day, you'd program computers using punch cards and assembly language. In the intervening decades there have been a bunch of innovations to make programming computers easier and massively increase the productivity of each programmer. This has not decreased the demand for programmers.
AI-assisted code generation will definitely be part of programming in the future; ultimately it will make programming harder rather than easier, since each programmer will suddenly be responsible for even more code, even more complexity, making decisions at an even higher level of abstraction.
I can't give career advice, but my nephew, who studied to be a journalist, has now pivoted to a job as a web dev (if I'm getting the term correct) and is doing well in that.
(So uh, yeah, I suppose he did "learn to code").
So if he could do it, you can find a field you are interested in and study for it.
Copilot, if you'll excuse my wording, is utter and hot garbage. Copilot is not going to replace programmers any more than GPT-3 is going to replace writers. Copilot frequently regurgitates code from its training data as-is. The fundamental assumption underlying Copilot is wrong and misguided: good code can't be created like natural language, because natural language is very redundant and error-resilient; a couple of mistakes here and there don't impede the flow of meaning. This is the exact opposite of how code works, where a single pedantic mistake can spell disaster and completely trash the entire process. I can't emphasise strongly enough how different code is from natural language; you might as well try to synthesise Math proofs or Physics papers with GPT-3.
Even granting that Copilot is an average human programmer (this is extremely wrong, but grant it to make a more interesting point), it still won't replace human programmers. The fundamental bottleneck in any large software system (>30K lines or so of code) isn't the skill of the individual programmer, but the management and communication of the group of programmers making the system. If you can't make Copilot contribute during a code review, or attend an agile standup, or really just talk to humans for 5 minutes and understand the massive amount of social context implied in the conversation, then you can't replace programmers.
Automation fears are largely unfounded and hysterical. The vast majority of human work, when considered end-to-end, is AGI-hard because, trivially, it involves talking to humans and convincing them of things using mental models of their internal state, as well as open-ended interaction with the world that can never be done successfully if all you have ever seen before is 100 TB of dead data scraped from Wikipedia. The poster boys for automation (factories, trucks,....) are tasks which require very little human interaction from start to finish, and this is relatively rare (and even then, they are still not completely solved).
Finally, modern neural networks are a very lame kind of intelligence, and will not pass the Turing test even by 2100. My Turing test is whether you can convince a non-horny heterosexual man that his conversational partner is a woman worth dressing up for, or an equivalent. It's amazing how far dumb matrix multiplication can get you, but it will never get you as far as a system developed over the course of ~10 million years that is full to the brim of special circuits and tricks. Keep throwing more hardware and more Wikipedias at a problem like "automate human conversations" while using only numerical neural networks, and I guarantee you (1) failure, or (2) Pyrrhic successes, i.e. you successfully created your mAI Waifu™ chatbot, but only the USA's Department of Defense can afford to run it, and your competitors simply pay real girls instead and operate a wildly successful business. Plenty of human work fundamentally involves human conversation.
> Automation fears are largely unfounded and hysterical. The vast majority of human work, when considered end-to-end, is AGI-hard because, trivially, it involves talking to humans and convincing them of things using mental models of their internal state, as well as open-ended interaction with the world that can never be done successfully if all you have ever seen before is 100 TB of dead data scraped from Wikipedia.
I can't help but notice that since the industrial revolution, the increase in productivity per worker often went with a reduction of work force. There is little solace for a fired factory worker in the fact that the factory still employs some workers.
Historically, I think that while all jobs required some interfacing with other humans, the vast majority of jobs were back-breaking labor with a bit of social interaction sprinkled on top: a charcoal maker probably did not spend half their time making Charisma-based rolls to sell their product for the best price. Instead, 95% of their time was probably spent on Strength/Constitution-based rolls.
> The poster boys for automation (factories, trucks,....) are tasks which require very little human interaction from start to finish, and this is relatively rare (and even then, they are still not completely solved).
I would call the profession of (human) computer even more of a poster boy for automation.
While I agree that the current approach to AI is very much brute-force (a human child can learn a language on orders of magnitude less data than GPT-3), computing power will only get cheaper while humans (hopefully) will only get more expensive to employ.
Even if copilot can never replace a good programmer, or GPT-3 a good author, or DALL-E a truly creative artist, this does not mean that no jobs will be lost.
> There is little solace for a fired factory worker in the fact that the factory still employs some workers.
That mostly depends on where on the bell curve you fall. Technology is a massive force multiplier for IQ, so with automation your economic viability quickly separates into the extremes of either worthless or irreplaceable.
Sure, but technological innovation also creates jobs. I mean, there were very few "computer programmers" in the 1950s and what they did bore little relation to what the millions of them do now. There were no commercial airplane mechanics in the 1920s, all the guys who might've been good at it worked on tractors or mine pumps instead.
Whether on balance more jobs are created than lost, and whether the new jobs are of higher satisfaction or lower, and what effect this all has on the distribution of incomes, is an exceedingly complex question which I expect to be debated until the Sun burns out. But history does suggest that on the whole technological innovation is good for everybody, jobs included.
>looks like coding will be a less valuable skill in the future with things like Copilot coming out.
Maybe, but on the other hand, Copilot (and, most importantly, the hacked model of Copilot that is bound to be leaked eventually) may act as a multiplier for your output, making your skill - or a slightly different skill - that much more valuable. See Jevons paradox (https://en.wikipedia.org/wiki/Jevons_paradox).
I don't know if Math would be a good alternative, but I'd wager that if comp-sci is decimated by AI, then bio will get an even worse deal (and if it isn't, then bio will become much closer to comp-sci than it is now)
Copilot is not anywhere within a few light years of replacing programmers who can do things beyond the first-year courses. If you enjoy programming and are able to graduate college, you'll be safe from AI takeover for longer than your lifetime. Besides, computer science is the job that makes AIs! What could be more resistant to automation than the one who designs the automations?
I think you will be fine. But what are your goals? If you really want a degree that is for sure going to be remunerative going forward I would think some sort of robotics engineering or a wide variety of other engineering degrees are the way to go.
If you just want to make money, try to go into high-end math/statistics/modeling and then into finance.
I find it amusingly hypocritical that I complain about pushing politics in movies, and yet the recent few movies I enjoyed a lot were strongly political. It's just... someone else's culture war, so I don't mind.
* Hindus and Muslims should live together in peace
* caste discrimination is bad
* Britain is evil (except for that one girl who falls in love with the protagonist)
EDIT: For the record, I didn't mean that all those three happened in the same movie, so you don't have to guess. I have watched many Indian movies recently.
Btw, you might want to reexamine what about "political" movies turns you off. Politics is a major aspect of society and it would be a shame if movies were completely barred from addressing it.
Sure, you can't avoid politics completely, also these days almost everything is political. I guess what you *can* avoid, is letting the political message make your movie predictable.
For example, suppose an American movie starts with a group of men trying to accomplish some goal, there is a woman who wants to join them, and one of the guys says "lol, girls can't do this". I think at this moment you can make a safe prediction about how the movie will end: the girl will accomplish the goal, and all the guys will suffer a humiliating defeat. So even if this is the next Predator movie, there is just no tension left after the first five minutes, when the words "too dangerous for a girl" have summoned plot armor for the girl.
As a contrast to this, consider the Alien movie. It happens to end in a similar way, but it wasn't auto-spoilered, because the movie was *not* about gender conflict. There was a chance, at least during the first half of the movie, that anyone could get killed. The female lead did not have plot armor. -- This is one good way to make a strong female character. The other possible way is to own it: if you call your movie "Xena: Warrior Princess", no one has a right to complain that it was about a warrior princess kicking everyone's ass and surviving unlikely odds. What you should *not* do is take the Star Wars universe and make a Xena out of it.
Okay, back to Indian (I think the best ones are often Tamil) movies. "Article 15" is a political movie that owns it. If you have read one sentence on IMDB, or if you have seen the "what the fuck is going on here?!" excerpt on YouTube, you know it is going to be a movie about how caste discrimination is bad. I liked it. It probably helps that on one hand I happen to agree with the political message, but also I do not hear it often (so I am not *tired* of hearing it yet again). Also, the movie does not have the black-and-white woke morality; people from an upper caste are also allowed to be good guys who fight against discrimination, they are not reduced to mere "allies" and told to step aside because their origin already made them too tainted to make their own decisions about right and wrong.
A great movie I would recommend is "Maanaadu", and I strongly recommend watching this movie without knowing *anything* about it. Do not even look at the IMDB page! Knowing the genre of this movie is already a kind of spoiler; it is in my opinion an even greater experience if you have no idea. One of the best movies I have ever seen. (Also, it contains Hindus and Muslims cooperating, and even explicitly comments on that fact. It happens organically; there is a good in-universe reason for that.)
You think the protagonist in Alien doesn't have plot armor? Of course she does. Obviously she's going to defeat the monster and survive, how else is the movie going to turn out? It doesn't matter what gender the protagonist is or how woke the movie is, they're going to defeat the monster and survive.
Look, while I agree that it is unfair to the English to cast the Raj as solely torturers and murderers, on the other hand who doesn't enjoy a good "Brits out" movie? If you've been on the other side of the Mother of Parliaments and colonialism, that is 😁
There is definitely a political angle pushing an idealised and indeed fictionalised hyper-patriotism, which is indeed a problem of its own, but uh, let me link to our own contribution to the genre (no tigers, though, alas!)
I'm starting to think that political arguments really have nothing to do with values, and are entirely just arguments over facts.
I could be convinced otherwise if someone could show two schools of political thought which are identical in terms of their factual (i.e. predictive) views, and yet disagree only in terms of their normative views.
I think the opposite is closer to the truth, and these values drive what people view as "facts".
Most people are strongly opposed to the view that genetic variation explains racial behavioral differences, but this is almost never because they have looked into it and have a reasoned belief that the evidence falls in favor of this being false. Their underlying value of race equality means that they're hostile to even looking into at all and many oppose the research being allowed to exist.
Their factual understanding of the world is different to me, but this is largely a product of their values in the first place. This applies to most hot button issues.
My point, but better (although I'll abstain from the race discussion). I think Mark is correct about where we end up, but I'm not at all certain that he has cause and effect correctly ordered.
I think it's both, actually. There are some real value differences that will not go away through shared understanding (abortion for sure, illegal immigration and gun control in most cases). But you hit the nail on the head in terms of why things are getting worse. If you optimize for freedom and I optimize for justice, we can respectfully agree to disagree. If we do not have a shared reality - different facts, different predictive models, different histories - then there is no conversation to be had.
In all three of those cases I think supporters and opponents will disagree about likely consequences.
To pick one, gun control opponents think that private ownership of guns reduces the chances of a government engaging in tyranny, and also reduces petty crime. Gun control supporters generally think both of these claims are overblown.
Likewise, gun control supporters think that gun control will likely reduce violence. Gun control opponents think the violence will happen regardless.
I suspect these disagreements on predictions are sufficient to predict someone's values here; I will be very shocked if we can find two people who read this blog who agree on the relative likelihood of the above claims and simply disagree on their relative values.
Well, now you're either muddling your point or I misunderstood what you were trying to say.
If you're saying that people with political disagreements almost always disagree about the facts and consequences of their favored positions, that is certainly true. And also trivial.
It seems like you're trying to parlay that banal observation into the assertion that there *are* no value differences, period. Which... maybe, if you extrapolate to the extremes? We certainly know that people tend to abandon any and all values if the (perceived) risk or reward is great enough. Whatever value I cherish the most would almost certainly get tossed aside if I was convinced that sticking to my guns would result in calamity.
But if we leave aside the hypotheticals, I think that insight is less helpful. In the real world of real decisions about politics, people place an (arguably irrational) emphasis on values. And I would assert that those decisions are not a complex calculus, but in fact what they appear to be on the surface. People oppose gun control because it is their constitutional/God-given right - period, end of story. If you could convince them that their position would result in armageddon, they would probably change their mind. But they aren't listening to you, or thinking in that mode. (Not scoring political points, liberal side is a mirror image.)
It would be tiresome to list examples of principled stands - you know that some people take them, even when they agree with their opponents that standing on principle will have a worse outcome overall.
You know what - I think I must be misunderstanding you, because I have no idea what you're trying to say after reading your last comment. If your point is that true principled disagreement is rare, then yes but trivial. If your point is that on the majority of policy positions, political parties disagree about the facts and consequences of said policies - then yes, but trivial. Not trying to be rude, just confused now.
> It seems like you're trying to parlay that banal observation into the assertion that there *are* no value differences, period. Which... maybe, if you extrapolate to the extremes
This is exactly where I’m going here. It sounds kind of crazy, but the more I play around with it the more it seems like it might be right.
I think what we call our values are just compressions of long cause-and-effect sequences. The reason we get mad and walk away comes from an “expected value” calculation of the likely benefits of continuing the conversation with someone whose reality model is widely divergent from our own.
Can you point to an example of someone saying, “sticking up for this principle causes nothing good but we should do it anyhow?” I think most people defending principles will say something like, “yes it has costs but it has these benefits which outweigh the costs.”
Noting that if the theory is tested, it can move up to provisional fact.
But I'd argue that those "facts" are generally provisional - territory Z might have excellent health care, with the majority of <i>their</i> maternal mortality coming from some other cause.
I think it’s more that people emphasize (or in worst cases, create) different sets of facts in order to create the best argument for their values, so the sides in a value argument always look like they are arguing from different sets of facts.
The argument over minimum wage, for example, doesn’t seem likely to be settled by one set of facts just winning on the basis of being the truth of the matter. Your favorable facts for your perspective just prompt the other side to emphasize or create different favorable facts of their own.
If we discovered next week that fetal heartbeats didn’t start until the 25th week, the pro-life movement wouldn’t give up and go home; they’d just consolidate around different facts while working to debunk the study. I’m sure there are at least some in the pro-choice movement who would do the same if we discovered fetal cognition beginning in the 5th week.
I think facts (unfortunately) have a tendency to be treated more as weapons, and although at times there are fact-weapons so potent that they can settle a discussion even over value-objections (fetal cognition at the 5th week, I think, would be a nuclear weapon of a fact that might actually end the abortion debate for 99.9% of people, which is crazy to imagine an end for), I don’t think facts are the source points of the arguments themselves.
I have, at times, crafted an argument based on the facts (as best as I can compile them), and have had to change my position based on the overall body of evidence.
I agree with this framing - values shape which facts we consider relevant and how we interpret them.
What do you think of the claim that people identify with their beliefs - specifically, value beliefs - and so they don’t want to change their value beliefs because doing so feels a bit like dying? Changing your value beliefs in a significant way is effectively ending one identity. Our brains are trying to keep “us” alive and thriving by valuing some outcomes over others, and I think it’s easy for a brain to identify itself with its values, even more so than with the body in which the brain resides.
I'd agree with that - it certainly feels true that admitting I got a fact wrong feels much easier than admitting I had wrong values. Facts are external to me, while values are internal/personal, and accepting that I misjudged something external to me doesn't carry nearly the emotional baggage that admitting something internal to me was "wrong" would.
Maybe this is one of the things that makes a certain kind of sophist/philosopher/nihilist (like myself) love arguing so much and change their stripes so often. I just don't have this instinct at all. Over the years I have regularly thrown away the cloak of one little cluster of values and adopted a different one if it looked a little "warmer" intellectually.
And if it is some issue I feel 60/40 about, just having the people around me strongly argue the 60 side is enough to make me want to vociferously defend the 40 side.
IDK, I grew up a traditional Democrat, was like a Chomsky-ite in HS, somewhere between a Chomsky-ite socialist/libertarian and a communist in college, and a more traditional liberal again just after college, but then rapidly drifted off into the radical centrist wilderness as I got older. There I have all sorts of idiosyncratic views that don't map well onto either party; sometimes they're very centrist and sometimes quite extreme (but in both directions).
And when I encounter new info that pushes me over the edge in some direction I may at times radically change my positions. The facts (as best as we can do) matter.
A lot of this resonates with me, and I've always felt it was weird that people seemed to have genuine feelings about facts.
I notice myself drifting slowly to the right, after many years in the "a pox on both houses" mindset, which itself followed wild fluctuations between far left and libertarian positions.
Would you say "I don't commit to positions based on ideology, I follow the evidence" is an ascendant value for you? That might account for what you're describing and still fit within the "reassessing values is easier than reassessing facts" framework.
If so, as a thought exercise, what would it take to get you to reassess "I don't commit to positions based on ideology, I follow the evidence" (or whatever your personal equivalent) as a value and say "on reflection that value was wrong," and how hard would that be to do, personally, compared to changing position on a particular fact?
I think I recognize what you're talking about. There's a familiar scene that plays out repeatedly, all the time, in contemporary political disputes. People call each other names and impute value disagreements, when they really want the same thing.
But there's a whole universe of political questions that don't have anything to do with values or facts, which usually take the form of questions like "who should our city hire to clean its streets?"
That often leads to meta-questions like "what is the best way to allocate street-cleaning contracts?", and so political philosophies are born (or ideologies, if you prefer, not that they are identical). As meta-discussions get farther away from the concrete circumstances that inspired them, they can go in a few directions:
- A consensus develops around a broadly-satisfying solution.
- Or the problems are hard to resolve, so support coalesces around temporary solutions.
- Or no one takes the meta discussion seriously, or the discussion is merely for show, because of course the Mayor's niece gets the contract.
What's more, solutions are often unstable: even a consensus around a seemingly durable solution can weaken, as people come to realize that the principles underlying it lead to repugnant conclusions, or as it comes under a material threat when changing social factors undermine the solution or the consensus around it.
But even if we put aside zero-sum distributional questions and their differential impact on various social groups, even if we focus only on theoretical or ideological political arguments, there's yet another problem! Framing the question of political conflict in terms of factual and normative disagreements omits other important dimensions of political thought: incommensurability and prioritization.
For people that share terminal values, a good amount of political disagreement happens around situations where the two values cannot be reconciled or times when we can't advance both at the same time.
Thus, I'd agree with a much weaker version of your original assertion - something like "_many_ political arguments really have nothing to do with values, and are entirely just arguments over facts." But if you're feeling like that's the **only** kind of disagreement, I suspect that you're either 1) looking only at a particular community with a strong cultural consensus or 2) looking only at certain low-resolution types of political disputes: that is, looking at arguments carried out at an ideal or philosophical level rather than arguments about concrete, local circumstances.
I think maybe I can rephrase this as: "apparent value disagreements are naturally going to arise, causally, from underlying factual disagreements."
So until I see disagreements about some system where people agree on all the facts, it's hard for me to believe that _pure_ value disagreements are actually, really, truly a thing.
I agree with your first statement, but I am quite certain I see "pure value disagreements" all the time.
Let's just focus on prioritization for a moment: how much you value your values is itself a value.
I'm not trying to be clever! Here's a concrete example: I'm embroiled in an ongoing discussion with a NIMBY woman in my town. Recent zoning revisions enabled developers to build new construction on her street and she's pissed. She's written letters to the editor of our local paper and spends a non-negligible amount of her time fighting the new construction.
The existing houses on her street are predominantly circa-1900 single-family dwellings in various states of disrepair, and it's one of the last semi-affordable neighborhoods around here (in reality, calling her neighborhood "affordable" is a major stretch). She argues for the preservation of her neighborhood character and points out that our zoning changes were intended to promote affordable housing, but the greedy developer wants to build expensive, new market-rate construction on her street: no middle- or lower-income buyer will be able to afford it.
I want to go full Matt Yglesias on her and explain the economics of the situation. But I don't think it will do any good, because even if I stipulate that she cares about making housing in our town affordable, she wants her neighborhood to stay the same, and she cares about that more!
That is, for my money, a real value difference. (This is also without going into the "greedy developer" issue - very likely another major value difference between us.)
My prior is that people are generally terrible at Bayesian updates to their belief system unless they are in environments where they are rewarded only for being correct about anticipating the future.
So maybe we could expect macroeconomic investors to have largely convergent beliefs about how governments work, with respect to finance.
Because if disagreements were primarily about facts then changing the facts should change the disagreements. You are now asserting not that you disagree with that or my initial post but that people simply don't update their facts. But this isn't true either. People update their facts all the time. It's why new arguments and talking points arise so much.
Sure. Communist China and the US both agree that Taiwan is run by the Chinese Nationalists. The Communists hold the value that China should be united, the One China principle. The US does not hold this value. This is a disagreement between roughly 1.6 billion people and so fairly widespread.
If you need a domestic American example then Justice Scalia and Justice Sotomayor both believed that the second amendment exists. As far as we know they held no disagreements about gun crime statistics or ownership and if they had diverging opinions on the effect of policy they didn't share them. There was no disagreement about the content of the text either. But Justice Scalia held the values of a legal originalist while Sotomayor held the values of a legal realist thus leading to directly opposite opinions on gun control.
There are points where common values reduce a conflict to be about facts. But that by no means exhausts the category of disagreement.
Great! I think this example is surfacing a factual disagreement between the two of us: namely, which factual beliefs are relevant in these disagreements? This might seem like I'm dodging or 'moving the goalposts', but I think where we disagree is on whether these conflicts stem ultimately from broader factual disagreements.
For example, I believe the disagreement between the US and China over how Taiwan _ought_ to be run stems from a factual disagreement over what causes human flourishing. Both sides disagree on, say, the consequences of legally protected freedom of speech.
The communist party believes that freedom of speech will lead to cultural degradation and moral decay, and ultimately weaken a people and make them subject to the control of forces that aren't looking out for their wellbeing.
Americans (some of them) believe that freedom of speech allows for the exploration of new, better ideas, which ultimately promotes human flourishing.
I believe the disagreement over Taiwan is downstream of these higher-level factual disagreements. I think the same is true of Scalia vs. Sotomayor.
> Justice Scalia held the values of a legal originalist while Sotomayor held the values of a legal realist
Why did they hold these values, though? Clearly, they weren't born this way.
We can ask, 'what are the likely consequences of these values being held?' and this is a purely factual question on which the two are likely to differ.
Scalia probably believed that, absent faithfulness to the text, the courts would lose their legitimacy: they would become politicized, a kind of super-legislative body that makes rules rather than merely interpreting them, and the American experiment would end in authoritarianism. Sotomayor probably believes that, absent consideration of social interests, the courts would come to be widely seen as merely defending a corrupt status quo, and that if the system as a whole isn't perceived as fair, people will stop supporting it, which would be bad. I think both Scalia and Sotomayor want the courts to continue to enjoy widespread legitimacy, but they disagree on what kinds of rulings will maintain that legitimacy. So there's a single number here - "percentage of the population that views the courts as legitimate" - and even if they _both_ want this number to be higher, I think they probably disagree over which actions will raise or lower it.
Not if political arguments are mostly expressive instead of truth-seeking. Sports arguments (which team or player is better) are also about facts, not values, but fans of different teams don't agree either.
What's your definition of facts and not values then? Because it seems to me that you are now trapped either admitting sports fans do not update their facts, such as who won the last game, or saying that such things are not facts at all.
But "who won the last game" is not an argument sports fans have, as you suggest. What _do_ sports fans argue about most often? Whether playing beautifully or effectively is more important (which they sometimes argue about), what player should have been MVP, what team was actually better despite the result of a game, etc. They share the same value, but argue over facts which are not as easily settled as "who won the last game," because that would be silly.
I don't follow your logic. Take which should be the MVP. Don't they disagree on what values an MVP should have? You're right they don't disagree on (say) batting averages. But they still argue. Which seems against your point! Unless I'm misunderstanding.
But they value the same thing: having the "most valuable player", in the sense of making the most difference for a team to win games. That is a factual disagreement.
When Billy Beane came around to argue that batting average was not that important, he wasn't arguing that he didn't put a moral value on batting average; he was arguing that it did not make the team win as many games as people once thought. That is factual. And that is what fans argue about all the time.
Oh man sports fans disagree about facts constantly.
Ask a Notre Dame fan and a Miami fan whether Cleveland Gary’s knee was down before he dropped the ball at the ND goal line in 1988 and you will definitely get different facts. Heck, it’s part of the fun.
Sports has the advantage that in order for the fun to exist at all we have to both agree to live with the fact findings of some kind of arbiter, but that’s harder to translate to politics.
I think if you ask these people why, what they will say is that you can’t operationalize concern for everyone.
For example, I love my kids and will ignore other kids drowning in distant metaphorical ponds (of which there are an inexhaustible quantity) in order to teach my own kids to play piano. This isn’t because I don’t want to stop other kids drowning in ponds, it’s because my ability to help my own kids is far far greater than my ability to help kids far away.
That's part of it, but there are also people who have large demographic groups they don't care about.
I've wondered whether demands for people to care equally about very large groups, or possibly everybody, make them less helpful.
They might have been willing to be charitable in their city, but if they're told they have to care about everyone or it's not good enough, they say fuck it.
"They might have been willing to be charitable in their city, but if they're told they have to care about everyone or it's not good enough, they say fuck it."
Yup. I have a handful of people that I care about, and that's the end of it. I'm academically interested in the rest of humanity, but only academically.
I agree with you to an extent, that far more of our disagreements involve factual discussions about best outcomes. Pretty much everyone wants to feed the poor and have a healthy economy. We certainly disagree about the best ways to reach those outcomes. If you push that hard enough, you can squint and say that everyone wants what is "good" and not what is "bad."
I think that's taking it too far. Communists really are collectivist in mindset, and really do value things contradictory to Libertarians, who instead value the individual over the collective. Pushing that hard enough to say that they are only differences in facts (presumably about how to make the best society) elides more than it illuminates.
If you ask communists or libertarians why these things are important, they will often argue that in their absence, really bad things will happen.
Communism seems to say that money and ownership are totally unnecessary for human prosperity. Libertarians clearly disagree. This is a causal belief, not a value one.
Your model explains everything and therefore nothing. You have abstracted too far and lost lots of valuable information.
In our libertarian / communist example, the libertarian will say "bad things will happen" and the communist says "no they won't" and you are calling that a factual disagreement but it isn't.
What they are actually saying is "anti-liberty things will happen and these are more valuable than the equality things" and the communist says "you're wrong, the equality things are more valuable" and this cannot be settled by fact.
The libertarians predict that "attempting to force equality will lead to the destruction of all wealth, as productive people leave the country and the incentive to invest or save disappears. It will lead to hyperinflation, as money printing will be inevitably used. The authorities will control all speech and opposition, jailing and torturing dissidents." The communists say, no, those things won't happen. And if they do, well, then it wasn't communism.
The communists predict that "absence of any centralized control will lead everyone to become slaves of the few people with money, who will control everything, with everyone else begging for scraps. They will have 'freedom' in name only, but without money, will be slaves in all but name" The libertarians say, no, those things won't happen. And if they do, well, it isn't libertarianism.
The model does not explain everything. Specifically, it cannot explain " a group of people who agree on the likely distribution of outcomes resulting from some policies, and simply disagree on the desirability of those outcomes."
If you can show that libertarians and communists agree on the _consequences_ of widespread liberty or widespread equality, then i'll agree that i'm wrong.
How do you view classical liberals and communists having different ideas about the desirability of the status quo? Is that also a factual disagreement?
I think in both cases it is an unfounded theory based on values. Both sides essentially strawman their opponents in their minds and then declare as certainty the disastrous results of the other side succeeding.
I don't think I can provide the evidence you ask in the second paragraph but I mostly agree with you. Mainstream / top level / social media arguments would seem to be entirely arguments over facts (or, worse, virtue signaling, dunking etc). But I don't think it's entirely that. Underneath I do think camps actually want to optimize for different things (individual liberty vs collective good, etc). So my personal approach to parsing this mess is to ignore the vast majority of it and cut straight to the point of what is being optimized for. Most political ads, slogans etc are illegible as they relate to the underlying value and its distinction from other options. They are noise. This is probably the main reason I am frustrated by politics generally, even if leadership, governance, economics are interesting topics when discussed like adults.
I think it is "what we want to do" first, and the model of the world is created afterwards as a justification for the proposed actions.
Different goals will automatically lead to different models (because different actions need to be justified), but that doesn't mean that the difference in the models was there first. Models can even be updated, if necessary, in a way that "coincidentally" still justifies the original goals.
For example, in the current war in Ukraine, the Russian side has already presented several different models of the situation -- maybe Ukrainians (and Belarusians) are just confused Russians, who need to be reminded of their former glory... or maybe we need to protect the Ukrainian population against an imminent Nazi threat... or maybe NATO is trying to use Ukraine as a base for their nuclear strike on Moscow, and Russia has to defend itself. Well, maybe this, maybe that, who cares anymore... but the conclusion is always that Russia needs to get Ukrainian territory under its military control.
To be fair, there were/are some literal Nazis in Ukraine, and Russian-leaning Ukrainians (not necessarily _speaking_, those are not the same thing, especially after the war started) were sometimes in danger.
Which shows that the best lies are lies of omission, and blowing true things out of proportion.
Freddie had a weird post on the usage of "literally" the other day. The question is: can you use "literally" like this:
I literally walked a million miles yesterday.
I say you can.
The debate centred on whether *literally* could be used to mean figuratively, which is what some dictionary suggested. I disagree, you wouldn’t use figuratively there. The sentence is itself figurative. It’s hyperbole. And with hyperbole the entire sentence is read as not literal.
(Using literally as an intensifier doesn’t change that.)
I think there's a bit of is/ought or motte-and-bailey going on here.
Freddie: "Literally" ought not be used as a generic intensifier.
Merriam-Webster: "Literally" is, in fact, used as a generic intensifier, and has been for over two centuries.
F: Yes, but it shouldn't be, and you're trying to sneak in an "and it should be" instead.
Generic Descriptivist Narrator: Thus we see that the term "literally" *can* be used as a generic intensifier, but there is social pushback to this use: the listener reacts with outrage rather than incomprehension. Given that "truly", "actually", and "for real" appear to have undergone similar transitions, it is likely "literally" will also end up losing its "corresponding to objective reality" definition despite this pushback. Whether or not this is a desirable outcome is beyond the scope of this observation.
Perhaps we can just use "literally literally" to indicate that we do not mean literally figuratively. Then, if that becomes used figuratively, we just add a "literally" on front and so on.
Or we could go all Olaf Quimby II and just prohibit figurative use of language.
Alternatively, I occasionally literally say "figuratively literally" when using "literally" figuratively, eg. "Dude, I hated the proto-hobbits on Rings of Power so much I figuratively literally rage-floated outside of my body."
Sure you can. Language is what people say, and plenty of people use "literally" as a mere intensifier, which is what it's doing here. The sentence makes perfect sense, provided you understand this particular colloquial meaning of "literally," and I would say it's rare for people *not* to know this meaning exists.
On the other hand, does it make you sound nekulturny? Also yes. It's the modern equivalent of talking like a Valley Girl ("Like, totally!").
"Literally," when used as an intensifier in hyperbole, isn't to be taken literally, just as any other intensifier in hyperbole isn't to be taken literally. The only confusion, perhaps, is that the word "literal" is being used.
"Colloquial" refers to the acquisition by "literally" of the meaning "really, a lot, surprisingly much of [whatever word follows]," because this meaning was acquired through quotidian usage and conversation, and has (so far as I know) no compelling linguistic or etymological roots.
So what do you think about "this is literally genocide"? Do you consider "okay, that is plainly hyperbole and rhetoric, it's not meant to be taken as fact" while the person using it does so consider it to be real and factual?
It's a statement I see used by some people in the culture wars, and if you don't agree that X, Y or Z is literally genocide, you are ostracised as a X-phobe who hates those people and wants them to die.
My take is that prescriptive linguistics is mostly just descriptive linguistics of the educated middle class. If I tell you that using language in a certain way is incorrect, then it carries an unsaid "... if you want to sound like an educated middle class person".
Quite why I, as an educated middle class person, am so keen on making sure that everyone else talks like an educated middle class person, I'm not sure. Certainly I'm keen on making sure my kids sound like educated middle class people, for a good reason. Why it should bug me when a random stranger misuses language, though, is another question. The generous-to-myself interpretation is that I think classism is a big problem in society and I don't want to see people limit their own opportunities by failing simple shibboleths. The less generous-to-myself interpretation is that I feel like people who don't use language "correctly" are failing to show sufficient deference to my educated-middle-class-ness and that this annoys me.
Nonetheless, language exists in a constant state of flux, with linguistic "innovations" constantly bubbling up from the lower classes and the upper-middle classes trying to bat them back down. Sometimes we are successful, sometimes we give up. (There's also a whole class of linguistic innovations which are pushed down on us from above for political reasons.) I'm an observer of the struggle but I'm also a proud participant, happily fulfilling my role as an upper-middle-class grammar Nazi batting down lower-middle-class linguistic idiosyncrasies.
But putting aside wanting or trying to sound like educated middle-class for benefit, there is a meaning to words.
And if "literally" is reduced to just a filler word, an intensifier like "very" or "greatly", or just becomes a word to be stuck in like "um" and "ah", then we have lost a tool of language. We have lost a way to convey meaning. "Literally walked a million miles" and "he literally tried to strangle me" become the same thing, which is meaningless. Which is a poor look-out, because if Jane is trying to tell you that John really did try to strangle her and you take it as "oh, all she means is that they had an argument", then we've lost a way to communicate reality.
And it's just as bad if we go the other extreme, where wanting to use terms like "this is literal genocide" are to be taken as factual, not rhetorical, communication. In that sense, if Jane and John only had an argument and she tells you "He literally tried to kill me", now you are obligated to react with appropriate shock and horror as if John really had tried to kill her, not just have a loud disagreement. Otherwise you are A Bad Person.
I want words to retain as much of their meaning as possible, and I don't want to give in easily on slippage, because until there is agreement one way or the other on what "is" means, then we can't even talk to one another, something that leaves us all worse off.
That post was terrible. It was either a ridiculously bad and nearly definitionally wrong take or else it was a terrible effort to communicate some idea that might be correct but I can't tell because I didn't understand it.
Yeah, I really wish FdB had given an example of exactly who he's arguing against so we can see if they a) exist and b) mean what he says they mean.
Anyway, I also want to add that I have a (reprint of) Fowler's English Usage, 1926 edition, and the entry for "literally" is along the lines of "this battle is already lost, and the fools who use the word as an intensifier have won". So if anyone wants to argue that this will damage the language, you either have to accept that the language has already been damaged, and here we are, or explain why after 100 years now the shit is REALLY going to hit the fan.
I saw that photo. I think the newest comment in this thread sums up perfectly what is going on with the is/ought motte/bailey. That picture is describing what _is_ happening in language. FdB is assuming that this "is" is actually an "ought", like the prescriptivists are saying. Someone describing what _is_ happening is doing a fundamentally different thing from someone describing what _ought_ to happen. FdB argues they are the same. This is patently untrue. If the point he was _trying_ to make is that some descriptivists are actually _prescriptivists_ who are saying that, in addition to describing how things are used, they are also claiming that this is the "correct" way, then he did a terrible job of conveying that, and if he had conveyed it more clearly, I would A) disagree that most descriptivists are secretly doing this and B) think that this is a totally banal and uninteresting point about the few who are, even if he could have found some non-trivial group of people doing it. I am as uninterested in a permissive prescriptivist as I am in a restrictive prescriptivist.
So he is either making a boring point that some people think language should be used one way and other people think it should be used another way, or he is making a wrong point that people describing how language is used are the same as people claiming that there _is_ a right way.
The comment you invoked (which, agreed, adequately sums up the argument) does not say what you think it says. In particular, the one it accuses of motte-baileying is Merriam-Webster. Which, obviously, you don't respond to a debate about "ought" with the statement on "is" unless you think it's relevant, especially not in the "everyone that disagrees is angry and silly" tone. But apparently it gives you plausible deniability of "just describing stuff", making it the motte to the "this settles the debate" bailey.
Freddie, for his part, correctly calls this bullshit out. Language is not some independent process; it's something all of us users actively participate in and shape. More importantly, it's something we're using for a particular reason - communication - and this requires us to establish a shared understanding, which in turn requires us to continuously resolve differences and ambiguities. And while most ambiguities are benign and can be easily resolved from context, not all are. If the case of "literally" is too marginal for you to care about, imagine a world where "no", in addition to the current meaning of "negative", has an additional attested meaning of "affirmative". (This is, hilariously, not a made-up example: "no" means "yeah" in Polish - different pronunciation, same spelling. My peer group is largely bilingual and prone to code switching, and I've been asked several times whether my text message used "Polish no or English no?") These kinds of contradictory definitions literally cannot coexist; the conflict renders the word unusable. It must invariably resolve in one of three ways: one use prevails, the other use prevails, or some different set of words takes over to convey the same semantics.
It's natural to bring attention to and try to resolve those kinds of semantic conflict, because we literally wouldn't be able to communicate otherwise. It's natural to insist on the semantics you're accustomed to, especially when the other side has many more options to switch to. If that's prescriptivism, then everyone necessarily engages in it all the time (just observe how many internet discussions turn into semantic squabbles; including this one, right now) and the term is meaningless (at least as a description of a distinct intellectual position).
But I think it's not. I think the word carries at least two additional assumptions, that the prescriptions are arbitrary, and that they're made from a position of authority. And there's only one linguistic authority (Merriam-Webster) with an arbitrary (based on the term being attested and included in the dictionary, thus completely ignoring the actual unresolved problem of mutual comprehension) prescription here.
Finally, I think your problem is believing the conflict to be between descriptivism and prescriptivism. But nobody frames it in those terms, not Freddie, not the post you cite approvingly, not even Merriam-Webster. You seem to be an outspoken descriptivist, and reflexively chose what you (mistakenly) see as the descriptivist side to support. And I get the sentiment, I really do, trust me, I ain't no prescriptivist either. But here's the thing, I don't think anyone would admit to being one at this point, it's nearly universally understood as a bad thing, for what I believe are very good reasons. (The "descriptive linguistics of the educated middle class" take really nails one of them.) To engage in a bit of self-awareness, the only point of invoking it in discussions like this is as an accusation to tar your opponent with. This can only be a viable tactic under an assumption of a shared understanding that prescriptivism is, in fact, bad, otherwise the other side could simply [serious bearded man face: "Yes"] out of it. The war you're trying to fight has long been won, and fixating on it prevents you from noticing all the complexity that exists outside of it.
I think this depends on the usage? But I don't think that was Freddie's point (at least not in its edited form). As I read it, the point was that the descriptivist position is:
1) Literally can mean either literally (it actually happened exactly as described) or as an intensifier.
While the prescriptivist position is
2) Literally should only mean literally (it actually happened exactly as described)
And his point was that those two positions are in fact in conflict and to shrug and say 'words can mean whatever their users understand them to mean' is to adopt a descriptivist position, not to be neutral.
I would. I literally would (and sometimes do). Use "figuratively", that is. I'm one of those evil traditionalists who believe that language is a tool of communication and we need (maximally) well-defined terms and concepts to maximize mutual understanding. And if you're not allowing me to clearly distinguish cases where I'm not using hyperbole from those where I am by marking the former with a single unambiguous word, then I'm going to take the next best option and clearly mark the cases where I am, in fact, using hyperbole, to establish that whenever I don't, I should be interpreted as being literal.
Using "figuratively" here is bad English. I wouldn't say it's grammatically incorrect, but it is tin-eared.
Hyperbole doesn't need to be signalled by using the word "figuratively", any more than any other figure of speech does. Wordsworth didn't have to say that he was figuratively wandering lonely as a cloud, which in any case wouldn't have parsed so well.
To make your sentence clearly hyperbole, you exaggerate it. That’s all humans need.
This is the moment to point out that Freddie's entire point was not to discuss the object-level usage of "literally", but to observe that what the people on your side of the argument are doing is pure, figuratively unbridled prescriptivism. Unlike most, you seem to be explicit about this, and what's your beef with him, I don't know at this point.
As for me, I can only restate my argument that, yes, hyperbole doesn't need to be signaled, and normally isn't. It's the lack of hyperbole that needs to be signaled instead, in cases where what would normally be interpreted as exaggeration is actually an accurate description of reality. If only English had a word to convey that...
I'm not the person you replied to, but I still fail to understand how merely _describing_ that some people use language in a certain way, without trying to dictate whether it is or is not correct, can be "prescriptivist". I am not telling anyone how they _must_ use language, merely documenting how many people _do_ use language. You are free to decide that some of those people are doing it "wrong" (I personally am very curious what authority you could appeal to to make such a decision, but I don't actually care that much), but the fact that you think they are wrong does not mean they aren't doing it.
To make a very extreme analogy: One person says murder is always wrong and no one should ever murder. Another person documents that some people do, in fact, murder. These two people _are not doing the same thing_. The person describing the reality of the world is not simply using a different set of ethics/morals but is instead engaged in a totally separate endeavor, from which we can not, in any way, deduce their stance on murder.
As far as I could tell, Freddie was trying to say that the person telling you how many murders occurred is merely subscribing to a different ethical framework than the person saying murder is bad, but that fundamentally they were engaged in the same kind of activity. This is ludicrous. If that is not what he was trying to do, then he utterly failed to communicate whatever idea it was that he was trying to get across. Which is ironic given that one of the most common arguments for prescriptivism (including seen in this comment thread) is that it increases mutual understanding.
"in cases where what would normally be interpreted as exaggeration is actually an accurate description of reality. If only English had a word to convey that..."
Hyperbole that isn't exaggerated enough, isn't hyperbole. Rather than say:
"I figuratively ran 20 miles yesterday"
to indicate that you actually ran less than 20, exaggerate more. Say you literally ran a billion trillion miles.
I am not being prescriptive about this: you can grammatically say "figuratively", but it would in general be badly worded English, not incorrect.
Wordsworth also did not say he literally wandered lonely as a cloud. "No, dude, like, totally nebulous. Literally like a cloud in my lonesomeness. Absolutely, yah".
Well but what if he said he "figuratively walked ten miles yesterday". That might be necessary because someone might walk ten miles.
I am in the camp that we sort of have to be descriptivists, but it would be BETTER to be prescriptivists (unfortunately that is a losing battle because linguistic bad actors are always moving things around for their individual benefit to everyone's cost).
<Well but what if he said he "figuratively walked ten miles yesterday". That might be necessary because someone might walk ten miles.>
With hyperbole you have to exaggerate, so that the thing you are claiming is clearly impossible. You can't eat a horse.
"we sort of have to be descriptivists, but it would be BETTER to be prescriptivists"
I am all for fighting new forms of language, so that what survives is better, but this use of irony in hyperbole is centuries old. And it isn't just the word "irony": no intensifier is taken literally.
Ok then, if not impossible, then extremely exaggerated. And you know, I don't really want to work on the grey areas here, where hyperbole isn't understood by the listener as hyperbole because the speaker didn't exaggerate well enough. That's on them. The solution is not to use the word "figuratively" but to get better at hyperbole.
The outcome you described seems pretty positive, not negative. Having four different past tenses of "run" imposes real costs on society, for what I would argue are fairly nebulous benefits.
I agree that four past tenses of "run" have fairly nebulous benefits. There are a lot of other cases where near-synonyms are more useful:
e.g. perfume, aroma, smell, odor, stench
Personally, I lean toward the prescriptivist camp in preferring that meanings not blur too much. If "literally" gets used too heavily as an intensifier, it will become very awkward to explain that one is describing some event literally. ( Hmm... What do courts do when a witness uses "literally" as an intensifier in sworn testimony? )
I don't think there's a question about whether you "can." Anyone *can,* and there's no way to stop it. There's just opinions about how to use the word.
At this point, "literally" often functions as an intensifier meaning "I'm using the word 'literally' figuratively to indicate how intense X felt."
It's a bit of a joke, that's all. Might as well enjoy it, as you can't stop it.
Somebody will tot up all the steps they took throughout their entire life, work out it came to a million miles, and triumphantly announce "I literally walked a million miles".
And then we'll see what is or is not "the entire sentence is hyperbole" 😁
The "literally" has *not* shifted to mean "figuratively". There is a linguistic shift happening, but it's intensification, not a change to the meaning of a word.
It's easier to see (especially in slang) with a few other examples:
"A sick worldview" / i.e. "an unhealthy worldview" (negative intensifier)
-->
"A sick kickflip" / i.e. "a great kickflip" (positive intensifier, doesn't mean "healthy")
"ridiculously presented" / i.e. "joke-worthy presentation" (negative intensifier)
-->
"ridiculously fun" / i.e. "very fun" (positive intensifier, doesn't mean "seriously")
Edit: formatting, but still looks horrible. Substack should really support lists, or bold, or anything, really.
Thinking about this a bit more, I realize this happens with *so many* words *all the time*. People never seem to get hung up on it... except in the case of "literally". I suppose it's a combination of 1) its use in a purely denotative form still being common and 2) the denotation itself being so clearly and immediately broken when used as an intensifier.
I'm not talking about the colored paper in my pocket, this is easy enough. I'm talking about the myriad forms of even-more-imaginary forms of paper and computer memory, that people somehow agreed has some value and acted in accordance.
Post any kind of material (books, games, essays, articles, videos, podcasts, people, social media threads, etc...) that you think can help me understand.
What are "Derivatives" or "Futures"? Is the first one related to its math analogue, or the second one related to its programming analogue? Why is Fractional Reserve Banking not a scam? And if it's a scam, why do the people who understand it not start a revolution? Why is the stock exchange useful?
As a concrete test case, I want to read a non-dumbed-down (~2nd or ~3rd year university-level) account of the 2008 financial crisis and understand what it's saying and why it's true. There is nothing special about the 2008 crisis for me; it's just a famously complex and intricate financial phenomenon that provides a good test flight for my understanding of money. You can post any other financial crisis or phenomenon that you think will help me better understand how finance works or test my understanding.
I have a background in CS, I love historical explanations that trace how a complex thing started one small piece at a time. I love multi-viewpoint explanations and I feel I'm being lied to or sold something when I detect an ideological bias in the educational material. I don't have a full time devotion to this task, and I'm not interested in making money using this understanding.
> Why is Fractional Reserve Banking not a scam ? and if it's a scam, why do the people who understand it not start a revolution ?
Because the system works (until it doesn't), and people who understand it either quietly reduce their dependence on it or abuse the cheap leverage opportunities.
In addition to the already mentioned Matt Levine, I strongly recommend Patrick McKenzie's substack Bits About Money. Actually, I think Patrick is even better than Matt in this case. Matt talks about the world of investment and finance while Patrick talks about *money itself*.
A better answer, in my opinion, is that a scam is something you're hoodwinked into, but fractional reserve banking is simply mandatory. Not putting money in the bank, that is, holding paper money or coins, does not prevent losing value due to failures (or "features") of the banking system. Not holding money at all would, but that's possible only for a small number of protected autochthonous people; the rest of us have to pay taxes. Naturally they have to be paid in government sanctioned money.
It's not. A lot of people who use check cashing places don't have bank accounts. For people who live paycheck to paycheck, using a bank can often lead to the ambush overdraft fee, whereas with the check cashing places your costs are up front and clear, so they can end up being a better option.
Would it not generally be a better world if all economic theories and propositions based upon them were required by law to be presented in the form of musical dance numbers? I think so.
I think you can treat it more as a documentary than you might want to at first glance. It doesn't provide technical insight into the 2008 financial crisis, but it does document the emotional arc of the story.
To try to answer your question: The economy is the collection of decisions we're making as a society. While a technical understanding of money could help you understand some of those decisions, it can't provide the full story. You're asking a valid question, but I don't think even a great answer would be satisfying for you.
Also, what did you mean by computer memory? I didn't understand that bit.
I meant that the vast majority of today's money are merely entries in computer memory and don't have any other existence. I don't mean this to imply "they're not real" - to be clear : *they are not real*, but not because they are entries in computer memory, their paper equivalent is equally not real.
I do appreciate the fact that money and monetary technology and practices can't be understood without understanding the economy in\around which they appeared. After all, money is just an accounting trick that **supposedly** represents "Value", an abstract proxy for human satisfaction, exactly the thing which Economics and the related social sciences study. Money is to Value as Books are to Thoughts.
But I believe there is a huge part of Economics that is not the study of money, and I want to minimize knowing about that part (Nothing against it, I just have finite time and motivation and intelligence). I want to know about Economics only the things that will allow me to understand Money, or why Modern Money is mostly fraudulent fiction, like I suspect it of being. I believe it's a reasonable and attainable goal to try to understand finance and money without, as much as possible, getting drawn into the tarpit of Economics.
To answer the specific question about derivatives - no, it is not related to the calculus concept of a first derivative. The etymology of "derivative" in this context is literally a contract that "derives" its value from some other thing. So if I promise to sell you an apple for $1.00 next week, that is a future. If I promise to pay you the grocery store price of an apple next week in exchange for a dollar today, that would be a derivative. In this case, our contract derives its value from the price of apples without us ever needing to exchange an apple between us.
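To make the apple example concrete, here's a minimal Python sketch of the two contracts described above (the function names are mine, and the $1.00 price is the one from the example):

```python
# Toy payoffs for the apple contracts described above.
# `spot` is the (hypothetical) grocery-store price of an apple next week.

def future_buyer_payoff(spot, strike=1.00):
    """Future: I'm obligated to buy an apple at `strike`; at settlement
    I hold an apple worth `spot`, so my gain is spot - strike."""
    return spot - strike

def cash_settled_payoff(spot, premium=1.00):
    """Derivative: I paid `premium` today to receive next week's grocery
    price in cash; no apple ever changes hands."""
    return spot - premium

# If apples jump to $1.50, both positions gain $0.50 -- the contract
# "derives" its value from the apple price without needing the apple.
assert future_buyer_payoff(1.50) == cash_settled_payoff(1.50) == 0.50
```

The final assertion is the point: the cash-settled contract tracks the apple price exactly, even though no apples move between the parties.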
Once upon a time, banknotes were not money. You went to the bank and cashed the note, and they paid you out in coin.
And then I have no idea what happened. I gave up trying to understand economics, because for every thing that happened, you could find an economist saying it was bad and the wrong thing. The economy is growing? Bad. The economy is shrinking? Bad. There's a recession? Bad (well, I think we all agree on that one). There's a boom? Bad. Full employment? Bad. High unemployment? Bad.
Jeff Bezos is worth however many billions *until* he tries to cash those out, in which case he will crash the value of his stock and be worth peanuts. It's only a fortune as long as you treat it as imaginary.
It's fairy gold, that looks like gold when you get it into your hands, but turns to withered leaves in the morning.
I remember deciding one day to improve my knowledge of economics, and went book hunting. The first one I found looked right up my alley, and so I bought it and read it. It was _Basic Economics_, by Thomas Sowell.
The funny thing about it is that I'd recommend it highly, but not for what you seem to want. It doesn't explain economics as if to a programmer; it's a polemic. It's just that it's a very well-written polemic that you could learn some things from. So, there's one.
+1 for Matt Levine, although if you're trying to wrap your mind around money, his columns quickly become advanced. If we viewed the topic of money as a layer cake where the top layer is basic grade school stuff and the horizontal dimension is the various ways money is used and managed, Levine's columns come off as a brief stop at the top layer, quickly shooting down into some random part many layers deeper. If you want to fully grasp the top 2-3 layers, you'll need to read a lot of his columns. On the upside, I find him great for looking at some structure invented to use money to do something worthwhile, thinking, something about that structure doesn't quite add up, and Matt will come along and say, "yep, you got it, that's what's fishy about it".
For general econ, I like Don Boudreaux's advice: avoid Modern Monetary Theory (MMT), or macroeconomics at first. Go straight to microecon, and stay there longer than you might think you ought to. MMT will likely fill your head with stuff you'd just have to unlearn later. Microeconomics also goes by the name Price Theory; I recommend David Friedman's textbook on it. (Plus, he might even answer questions about it here if they're good questions.) Search for it; you can find a free copy online.
For spot definitions of terms, I like Investopedia. Well written, direct.
If you want details of the 2008 crisis, honestly, watch the movie _The Big Short_. It has surprising amounts of detail, and it's *funny*. Or, get the book it's based on.
Reading Noah Smith's substack recently I had an insight; nobody including the experts fundamentally understands how this stuff works.
Economists shouldn't pretend to be physicists, they should pretend to be monks. Instead of pretending that they're explaining economics, they should instead lead us in meditating on the mysteries of economics. The question of "how does money work, really?" isn't just something that the economic profession are hiding behind abstruse explanations that you haven't been able to penetrate yet, it's a fundamental and ineffable mystery of the universe; we can feel it a bit more intuitively through meditation but we will never fully understand it.
I thought as much while reading through "Money, The True Story Of A Made Up Thing". Whenever the book described how "experts" were trying to get out of a crisis like the 1930s or the 2008 by flailing hard and changing lots of things, I got the impression of somebody messing with something they are fundamentally unequipped to reason about.
Like, imagine you're trying to solve a recursive system of equations, say it's the linear system {x+y+z = 1, x-y-z = 2, x+2y+4z = 3}, but you don't know this is called "A Linear System Of Equations In 3 Unknowns" and there are entire books on how to solve it. You only have 3 dials representing x, y and z, and you're just fiddling with them, trying to get them into a configuration that respects all 3 equations. Solving the linear system must appear awfully hard to you, every single change you make appears to affect the entire system, and there is no clear way of knowing which dial to change or by how much. A trivial problem becomes impossible when you try to solve it without tools optimized for its understanding and solution. A good tool of thought, Computer Scientist Alan Kay is fond of saying, is worth 80 IQ points.
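For what it's worth, the toy system above really is trivial once you reach for the standard tool; a quick sketch in Python (assuming `numpy` is available):

```python
import numpy as np

# The three "dials" x, y, z and the three equations from above:
#   x + y + z  = 1
#   x - y - z  = 2
#   x + 2y + 4z = 3
A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0, -1.0],
              [1.0,  2.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

solution = np.linalg.solve(A, b)  # one call, no dial-fiddling
# solution: x = 1.5, y = -1.75, z = 1.25
assert np.allclose(A @ solution, b)
```

Which is the point of the 80-IQ-points quip: with the right tool the problem stops looking mysterious.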
The economy in general, and the financial sub-system that torments me in particular, appears to be a huge recursive web of cause and effect that interact and evolve in hideously complicated patterns. Every explanation seems to be a Just-So story, a way of fiddling with the dials without reasoning about the underlying rules governing them. It's extraordinarily dumb and unjust how we are forced to trade our labor for a fiction that we can't even hold in our head satisfactorily.
Like many things in modern civilization, I honestly find modern money disgusting. The notion of "This is a thing that you can't possibly hope to understand let alone control, and yet *it* can control and ruin you in ways you can't even name" breaks me, it's like a secular Problem Of Evil. Lacking any realistic means of abolishing this criminal way of life, I find myself studying it in disgusted fascination, like how one might study a plague-inducing virus.
There exist real things, like apples and houses, that require other real things (time, human labor, energy, other real objects) to make. People desire having real things, for personal reasons.
Since many real things take skill to make, and can be made in batches, you get a society with more real things per person if people specialize in making one specific real thing.
However, once you are specializing in making one class of real thing, you need a way to acquire all the other real things that you want for personal reasons.
It is slow and time consuming to trade your real things for someone else's real things.
It is faster to be able to exchange "tokens of real things", since then you don't have to carry the real things around.
It is even faster if the token of real things is used as a universal token for all real things.
However, since these tokens are not themselves real things, the people who control the creation of new tokens have immense power, since they can exchange new tokens for real things.
The best tokens to hold, when you aren't holding real things, are tokens made by token-makers that are unlikely to make many new tokens. Historically, that was using tokens made of real metals that were hard to mine. In the present day, that is using tokens made by stable governments who appear stable.
Also in the present day, there is an emerging token that is made using a consensus algorithm on a lot of computers. Since this consensus algorithm has been running for over a decade without many problems, and the social consensus appears resistant to new-token-making outside the current rules, it is becoming more valuable, especially as the stable governments making the most valuable tokens become less stable and more inclined to make more tokens.
Correct. Bitcoin's volatility reflects fundamental uncertainty about its future value, which means that it is not acting as a useful present store of value (except for nations with >70% annual inflation, like Argentina).
As Bitcoin has gotten more valuable, its up and downswings (as a % of market cap) have gotten smaller. In the world where it becomes solidified as digital gold, I'd expect that trend to continue until it was about as volatile as gold (if it ever reached gold's ~$8 trillion market cap).
It’s not a start-to-end treatment but if you read Money Stuff by Matt Levine you’ll get first-principle explanations of lots of financial instruments, and at least gradually get there. Plus he’s an excellent writer and manages to make the subject matter interesting and amusing, which I find novel.
The basic way of thinking about derivatives and futures (and options) is that they are bets about some proposition that have varying risk and payout structures. Derivatives are contracts that pay out based on the performance of some other asset class, but allow you to structure the payout and risk profile differently.
Typically these instruments are either standard and sold by any market maker, such as options - with standard-ish approaches to avoiding losing money if the market maker loses the bet, or specific synthetic products that say Goldman will come up with and market. You can also make custom bets if you have enough money.
The reason you want all of these complex bets is that in general you want to be able to bet on any trend where you think you have information advantage (say, I did a study and think there will be growth in the real estate industry in city A), to build a balanced portfolio that spreads risk across many categories and therefore reduces total risk, or, to hedge against a specific risk your company/entity faces (say, I grow corn and want to hedge against corn prices going way down next fall; so I can sell futures for some of my product to reduce the variance in my returns.)
Notably, fractional reserve banking cannot create new physical-dollar-tokens (one very narrow definition of money), but it can increase the number of people who are owed dollar-tokens, therefore increasing a different definition of money.
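A toy sketch of that broader definition of money, assuming the textbook deposit-multiplier story (the 10% reserve ratio and the function name are hypothetical; real banking is messier):

```python
def total_deposits(initial_deposit, reserve_ratio, rounds=100):
    """Each round, a bank keeps `reserve_ratio` of the deposit as
    reserves and lends out the rest, which gets re-deposited."""
    total, lendable = 0.0, initial_deposit
    for _ in range(rounds):
        total += lendable            # counted as deposit money
        lendable *= 1.0 - reserve_ratio  # relent after keeping reserves
    return total

# 100 physical dollar-tokens, 10% reserves: people end up being owed
# close to 100 / 0.10 = 1000 "dollars", though only 100 tokens exist.
print(round(total_deposits(100.0, 0.10)))  # -> 1000
```

The geometric series converges to `initial_deposit / reserve_ratio`, which is why the reserve ratio caps how much deposit money a given stock of physical tokens can support in this simplified model.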
I think people overcomplicate this a lot, often by not even using common terms, which are readily available. The money in deposit accounts (which is what I assume you mean by "owed dollar-tokens") isn't a guarantee of money, it *is* money. You can buy items with your card, and 97% of cash is not in coins.
In an economy where banks never failed or had to suspend operations, a dollar in a chequing account is identical to a dollar bill in your hand.
In the real world, there are sometimes (rarely) very important differences relating to who is holding the "actual" dollar.
Stepping up one level of abstraction, owning wealth in the form of shares/bonds and owning wealth in the form of dollars is actually often quite different, which is why you see a "dash for cash" any time there is a recession (the velocity of money slows down in a recession, so prices for most financial assets fall & the people holding cash can then buy at a discount if they dare).
Futures are just an obligation to buy at an agreed price in the future. The person buying the future is typically betting that prices will rise; the seller is trying to lock in a guaranteed return. An option is merely the option to buy something at an agreed price in the future. There is no obligation.
I have tried finding a very good podcast episode on futures, but alas, I cannot. I will come back if I do.
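The obligation-vs-option distinction above fits into two tiny payoff functions (a hedged sketch with made-up prices, ignoring premiums and margin):

```python
def future_payoff(spot, agreed_price):
    # Obligation: you must buy at agreed_price whatever the market does,
    # so you gain or lose the full difference.
    return spot - agreed_price

def call_option_payoff(spot, agreed_price):
    # Option: you exercise only when it pays; otherwise you walk away.
    return max(spot - agreed_price, 0.0)

assert future_payoff(90.0, 100.0) == -10.0       # forced to overpay
assert call_option_payoff(90.0, 100.0) == 0.0    # let it expire
assert future_payoff(120.0, 100.0) == call_option_payoff(120.0, 100.0) == 20.0
```

(In practice the option costs a premium up front; that premium is the price of the asymmetry the assertions show.)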
I have some other materials to share, two books by Yanis Varoufakis: "Talking to my daughter about the economy" and "Adults in the room".
While the first might sound like a "dumbed down" version, it is maybe only slightly so, and I found it very well written and insightful nonetheless. Maybe see it as a general historical overview of why and how money and debt exist, beyond the trivial facts. It's short: 4 hours as an audiobook.
The second one is about the Greek/EU financial crisis around 2015 (which was itself caused by 2008). It gives the listener/reader a glimpse into the financial and political establishment, mostly in Europe, but also venturing a little outside to the US. It's around 16 hours of listening and definitely falls into the historical category. Extremely insightful, in my opinion.
Both might have some bias, but two answers to that: 1. Much less so than one might think and when he is biased, he is very transparent about it. 2. He argues very well that money is fundamentally a political "thing" and therefore talking about money cannot be without bias. (he even argues that some systemic financial problems in Europe are caused by institutions like the ECB mandated to act unpolitically, which he thinks is a contradiction)
The first means we use it, rather than barter, to exchange goods: it is a currency. The second means that it largely keeps its value over time. Of course there is inflation, but you find with hyperinflation that people stop using the currency that is inflating (so it stops being a currency in that sense).
Unit of account means we can compare items by value - this costs $200, so it's twice as expensive as that which costs $100. You can't do that with barter.
So far so easy. You can see why electronic money is money in this sense. I can buy with a debit card. A credit card creates money, to be repaid later. There are other things described as money (in the broader definitions of money like M3) that are a bit confusing, but mostly they can be easily turned into a tradable currency on demand. A savings account that can't be accessed for months isn't currency. A deposit account that can be used to buy with a card now, is.
The Bank of England describes money creation here:
I'm going to go out on a limb and claim that there's no way to describe how money works without smuggling in some sort of political ideology.
>if it's a scam, why do the people who understand it not start a revolution ?
From my perspective, this revolution exists and is ongoing; it's called bitcoin. Of course, this means most of my peers think I'm a crank. Time will tell who is right.
So, to be clear, are you saying you don't care what you can buy with those bitcoins? Or that a falling bitcoin makes the limited range of things you can buy more expensive?
I haven't read it yet (though I have the audiobook waiting for when I get bored with language flashcards again), but as a long time reader of his blog, I second your recommendation.
As someone who criticised the excessive length of some of the entries in the book review contest, I guess I should appreciate the brevity of this one, but I still feel like a middle path might be best.
I have created a cloud service for understanding text. Just understanding. Suitable for chatbots, classification, etc. It's called Understanding Machine One (UM1).
I think it might be helpful if you could explain what "understanding text" means in this context -- for people like me who are not ML specialists. For example, I am somewhat familiar with machine translation, classification, and text generation, but I am no data scientist. Does "understanding" mean something like "generating embeddings" in this context ? Can I use your API to speed up the training of my classifier, or is its purpose something totally different ?
UM1 is a half-transformer, transforming text to a list of integers where the numbers are ID numbers for neurons that were activated during the reading (we use DISCRETE neurons). These represent phrases and concepts learned in the learning phase. The system will determine which discovered concepts are most salient and return those to the caller.
The caller can directly use these numbers in their business logic. The test code shows how to use Jaccard distance of these numbers for reliable by-content classification.
This is a way to avoid dealing with natural language at all when creating things like mail filters and chatbots. Send the raw text to the service and just use set theory on the results. This is a simpler API than DL and it is still 100% unsupervised. The site discusses how to use the Numbered Neuron API to classify into predetermined classes without doing supervised learning, ever, at any stage.
We have strategies for creating filters that are trivially tunable by end users based on this tech.
Ok, so I can basically use your engine for transfer learning, to jump-start the training of my own transformer (or classifier or whatever), right ? This does sound like it's generating something like embeddings (although I'm no specialist, I could be wrong); but I don't see how I can reasonably expect to use it as a classifier without at least some degree of supervised learning -- that said, I have not read your article yet.
UM1 *is* the classifier. You do not need another one.
The blog (and another post below) discuss how to use the Numbered Neuron API to classify samples into buckets. Each of the 200 tests in the test suite (on GitHub) is one target phrase that needs to be classified into one of five buckets by semantic similarity in a multiple choice test.
The entire bucket definition is that the user provides a canonical phrase to use as the bucket's focus. The idea is that if an input sentence means the same thing as some bucket's canonical, the wild text can be classified under that canonical because of the high overlap in activated neuron IDs, as computed by Jaccard distance.
We claim that the responses are semantically somewhat stable even if the input language used is syntactically different. We call it "Semantic Stability".
This is why you want to use an Understanding Machine rather than to painfully parse words; words are treacherous and we need context based disambiguation to get to the next level.
Parsing words is a bother and you have to do it all over in order to handle more than one language. Better to use an external Understander. All a client app needs to do to support classifying in French is to translate the canonicals into French and specify that it wants to use a French-trained Understander.
Sure. As to examples, if you download the test code you can see what is going on. In the simple case, you send it text strings and you get back a list of numbers. It can be more complicated than that. If you send a JsonArray inside a JsonArray... arbitrarily deep, with text anywhere at any level, you get back an isomorphic structure with the texts replaced by their Understandings (yet another JsonArray, or a JsonObject if you want any metadata beyond the pure Understanding). The test code uses that to pack all 200 test cases, with 6 strings each, into one query in order to save 1200 TCP roundtrips.
If you want to classify to 10 categories, you start by sending ten canonical sentences to the system and you get a list of numbers each. Save those. Now when you get wild user input that you want to classify to one of those ten cases, send the wild text in, get the numbers, and see which of your 10 cases matches with the largest overlap in the returned Concept IDs. This is what Jaccard distance does.
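The workflow above is simple enough to sketch end-to-end. This is not the UM1 API itself; the neuron-ID sets ("moniforms") and the `#CHECKINGBALANCE`/`#TRANSFER` tags below are invented stand-ins for what the service would actually return:

```python
from typing import Set

def jaccard(a: Set[int], b: Set[int]) -> float:
    # |A intersect B| / |A union B|: 1.0 for identical sets, 0.0 for disjoint ones.
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Step 1: send each canonical sentence to the service once and save
# the returned neuron IDs (these particular numbers are made up).
canonicals = {
    "#CHECKINGBALANCE": {3, 17, 42, 99},
    "#TRANSFER":        {5, 17, 63, 88},
}

def classify(wild_moniform: Set[int]) -> str:
    # Step 2: for wild input, fetch its IDs from the service and pick
    # the canonical with the largest Jaccard overlap.
    return max(canonicals, key=lambda tag: jaccard(canonicals[tag], wild_moniform))

print(classify({3, 42, 99, 120}))  # → #CHECKINGBALANCE
```

All the "decoding" happens client-side with plain set arithmetic, which is the point being made above: no parsing, no supervised training loop, just a lookup table of canonical moniforms.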
For the main question: (quoting from the blog)
To find all documents on topic X, start with submitting one or more samples of topic X. If you want to detect the meaning of "I would like to know the balance in my checking account" in some tellerbot you are writing, then you can send that phrase as a topic-centroid-defining "Canonical Probe Phrase" to UM1 and save the resulting moniform in a table. The value in the "right hand column" in the table could be a token such as "#CHECKINGBALANCE" to use subsequently in Reductionist code, such as in a surrounding banking system.
UM1 is not a transformer; it can be described as a half-transformer, an encoder that encodes its understanding of incoming text as lists of numbers. The table of moniforms you build up when starting the client system will be used to decode the meaning. This is done entirely in the client end, in code you need to write or download.
To decode the Understanding received after sending UM1 a wild sentence (chatbot user input, a tweet to classify, etc) your client code will compare the numbers in the wild reply moniform to the numbers of all the moniforms in the probe table we built when we started, using Jaccard Similarity as the distance measure. The probe sentence that has the best matching moniform is the one we will say has semantics closest to the wild sentence.
Jaccard similarity tells how closely related two sets A and B are by computing their intersection and union: it is the number of common elements (the intersection) divided by the total number of elements in either set (the union). This provides a well-behaved metric in the interval [0..1] as a floating point value. The canonical moniform with the highest Jaccard score is the best match.
In UM1, the ID numbers represent dimensions in a boolean semantic space. If the system has learned 20 million Nodes, each representing some identifiable language level concept, then we can view the numbers returned in the moniform as the dimension numbers of the dimensions which have the value "true". Consider a moniform that has 20 numbers (it varies by message length and input-to-corpus matchability) selected from a possible 20 million to get an idea of the size of the semantic space available to OL.
In some DL systems for language, concepts are represented by vectors of 512 floating-point numbers. In this 512-dimensional space, DL can perform vector addition and subtraction and perform amazing feats of semantic arithmetic, like discovering that KING - MALE + FEMALE = QUEEN. With boolean 0/1 dimensions, closeness in the semantic space becomes a problem of matching up the nonzero dimensions, which is why Jaccard distance works so well.
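The KING - MALE + FEMALE = QUEEN trick can be illustrated with toy vectors. The three dimensions and all the values below are invented for the example (real systems learn hundreds of opaque dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors: dot product over norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-dimensional "embeddings", dimensions loosely meaning
# [royalty, maleness, femaleness].
vocab = {
    "king":   [0.9, 0.8, 0.1],
    "male":   [0.0, 0.8, 0.0],
    "female": [0.0, 0.0, 0.8],
    "queen":  [0.9, 0.0, 0.8],
}

# KING - MALE + FEMALE, computed dimension by dimension:
result = [k - m + f for k, m, f in zip(vocab["king"], vocab["male"], vocab["female"])]
nearest = max(vocab, key=lambda w: cosine(vocab[w], result))
print(nearest)  # → queen
```

With boolean dimensions there is nothing to add or subtract, so nearest-neighbour lookup falls back to counting shared nonzero dimensions, which is exactly what Jaccard similarity measures.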
Traditional NLP is often done as a pipeline of processing modules providing streaming functions to do word scanning, lowercasing, stemming, grammar based parsing, synonym expansion, dictionary lookup, and other such techniques. When using UM1 you do not have to do any of those things; just send in the text.
Note that UM1 does not do any of those operations either. It just reads and Understands. And because OL learned the morphology of the training language (such as the plural -s on English words), the system can be expected to work in any other learned language, even if its morphology is different.
This is an update of my long-running attempt to predict the outcome of the Russo-Ukrainian war. After more than a month in which nothing worth updating happened, we have major developments. The previous update is here: https://astralcodexten.substack.com/p/open-thread-234/comment/7955016. (Note: I have limited time for responding to comments; it might take me a few days.)
15 % on Ukrainian victory (up from 8 % on July 25).
I define Ukrainian victory as either a) Ukrainian government gaining control of the territory it had not controlled before February 24, regardless of whether it is now directly controlled by Russia (Crimea) or by its proxies (Donetsk and Luhansk "republics"), without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24, or c) return to exact prewar status quo ante.
45 % on compromise solution that both sides might plausibly claim as a victory (up from 29 % on July 25).
40 % on Ukrainian defeat (down from 63 % on July 25).
I define Ukrainian defeat as Russia getting what it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were before the war claimed by Ukraine but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s)*, or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets a mutual defense treaty with Poland and Turkey, that does NOT count as a Ukrainian defeat.
Discussion:
In a nutshell, the Ukrainians managed to concentrate powerful forces against an insufficiently defended part of the Russian frontline, achieving complete surprise and a total rout of Russian defences, which then triggered a chaotic retreat-slash-surrender of the Russian forces concentrated on a different part of the front, threatened with encirclement. A pretty classic maneuver, well known from history books. The overall extent of the Ukrainian victory is, as of now, still unclear, and the battle is ongoing, which complicates predictions.
Well, I did not expect Ukrainians would be able to do that. This indicates a far lower ability of Russian command to see what the Ukrainians are doing (in military lingo, I believe it is called situational awareness), and also a lack of meta-awareness, in the sense that they did not know what they did not know; otherwise they would not have concentrated so many of their resources in attempts to break through Ukrainian lines around Izyum (and also further southeast around Bakhmut), leaving a large section of the frontline so poorly defended. Furthermore, this shows that the Ukrainian army is very good, but I knew that already.
Another important thing that is happening, also good for Ukraine, is that since my previous update 538 increased the odds of Democrats retaining their majority in the House of Representatives from 15 % to 26 %. I think that future US support for Ukraine is going to be higher if Democrats win.
Now, I am still not ready to declare imminent Ukrainian victory in the whole war. Russia still has a powerful army, controls large swathes of important Ukrainian territory, and has far more resources left to mobilize than Ukraine. The future of Western support to Ukraine still remains highly uncertain. I also think, although this is more subjective, that Russian command in this war has shown an ability to learn from its previous mistakes.
BUT, of course this shows major flaws in Russian decision making, which might not be fixable. In the past, I lost any confidence in predictions of the impending collapse of the Russian regime (those long predate the war), simply because they have been endlessly repeated with varying justifications while the regime is not collapsing. Now, I guess those guys have gained back some credibility. A total collapse of the Russian army, 1918-Germany style, just became a lot more likely than it was a week ago. And obviously, this situation might cause antiwar sentiment in Russia to rise, especially since the Russian government might feel compelled to intensify mobilization, of both manpower and industry, either to replace unexpected losses or just to ensure that this disaster will not be repeated.
*Minsk ceasefire or ceasefires (first agreement did not work, it was amended by second and since then it worked somewhat better) constituted, among other things, de facto recognition by Ukraine that Russia and its proxies will control some territory claimed by Ukraine for some time. In exchange Russia stopped trying to conquer more Ukrainian territory. Until February 24 of this year, that is.
Bilbo at one point describes himself as butter that has been scraped over too much bread. I think that may describe the present status of the Russian army. They don't have the resources to defend all of what they have and continue pushing in the Donbas, which they are committed to doing. The result, as we just saw at the north end of the line, is that moving enough troops to defend one area (Kherson) leaves them with not enough to defend another.
The sensible response would be to stop attacking in the Donbas but that may be prevented by internal politics. Failing that, perhaps even if they do that, they are at risk of another successful Ukrainian advance, this time probably in the south. I'm not confident it will happen, but I think the chance of a long war of attrition is now much lower than we all thought a month ago.
I feel like last week was Ukraine's El Alamein. Not a strategic turning point but a narrative one. Aside from the morale effect, Ukraine went into the Kherson offensive wanting to prove to Europe that it can do more than hold off the inevitable (otherwise why shouldn't Germany buy Russian gas this winter) and the last week's events demonstrate that.
My impression is that everyone was predicting a long grind with battle lines not moving much (for how long? did anyone say?) until this Ukrainian breakthrough. Did anyone get it right?
If you're expecting a substantial Russian victory, what are the stages?
A lot of people, including myself (in one of the open threads), predicted a counter-offensive. That Ukraine would undertake one at some point before the winter was basically an undisputed idea in Russian war-related communities. The breakthrough was more of a surprise, though not by much: as people have mentioned already, Russian forces are spread too thin. The reluctance of Russian Army commanders to use reserves to hold at least some ground near Kharkov was a bitter surprise to many, though, and generated a lot of negative comments about the Ministry of Defense.
No one I read can see a clear path to a total military victory for either Russia or Ukraine. Russian commenters either hope for the economic devastation of Ukraine during the winter, or for some kind of mobilization (limited or total) in Russia that will put a lot more warm bodies on the frontline, which may help to push it back. A few suggest using tactical nukes, but they have been doing that since day one; I think those are just people who want to see the world burn.
Barring some unlikely disaster, I personally think the counter-offensive has largely exhausted itself, and further gains by Ukraine will be slim, though there is talk about yet another prong being prepared in the Donetsk region, where until now Russia had continued to slowly gain ground.
Recent days saw Russia destroying Ukrainian infrastructure, parts of which were left untouched before - if this goes on, it is likely that we will see a largely frozen frontline, with the Russian side hoping that the anti-infrastructure campaign will force Ukraine to restart negotiations with some concessions it wasn't ready to offer before.
The consensus on the Lost Armour forums is that Ukraine won't do that, and instead will badger the USA to deliver long-range missiles to retaliate against Russian infrastructure, from which point the war will likely escalate further, probably with limited mobilization on the Russian side.
Russia seems to have loosened its ROE with the last strikes at power plants (which were repeated today); those plants are much harder to replace than military equipment (Ukraine hasn't built a single new one since gaining independence), and given the widespread use of electric trains, losing them can hurt Ukrainian logistics a lot.
Realistically, I think that Russia is very likely to get what it actually wants from Ukraine: all of its eastern territories, including Odessa. This turns the remainder of Ukraine into a landlocked rump state, secures Russia's trade routes to the Black Sea (and allows it access to Moldova), and gives it some territory and a bit of an industrial base (assuming any survives), including nuclear power plants. Sure, capturing Kyiv or the entire Ukrainian territory would be nice, but it's not really an immediate requirement for Putin's imperial ambitions.
Why do you think this is likely, or in what way? I haven't seen any serious analysts suggest Russia could progress anywhere in the south, let alone take Odessa.
I think that the Ukrainian war is a war of attrition, and Russia can afford more attrition. I agree with @alesziegler when he says that a Democratic victory in the midterms (assuming it happens) looks bad for Russia; but ultimately, American support for Ukraine is going to run out eventually; in four years if not two. All Russia has to do is hang on until then, maybe trading incremental advances for incremental defeats. By contrast, once Ukraine runs out of advanced weapons, they're done -- at that point they're looking at a rapid collapse.
Why do we care whether American support for Ukraine runs out in four years or two, if the war is going to be over in less than a year? This is a war of attrition. At the rate the Russian army is attrited, it will cease to exist in a year. It will have zero tanks and zero infantry; it might have some artillery left, but artillery without an infantry screen is just free guns for whoever wants to claim them.
There are still scenarios where Russia wins, because the Ukrainian army breaks first. These are much less likely than they were a week or two ago, but it's still possible. But one way or another, it's going to happen in less than a year, and probably less than six months.
The US is highly unlikely to stop supporting Ukraine in six months to a year.
Saw your comment this morning and wanted to ask... why *is* American support for Ukraine going to run out? Look at it from a purely American perspective for a moment. America is getting to bleed our enemy (or at least long time rival) Russia and watch them slowly gut their army for a generation and detonate their economy. All at the low, low, cost of lots of money to American arms manufacturers, gas crisis in Europe, and an ocean of Ukrainian blood. Honestly, seems like a great deal... for the US. And we get to do so in a cause everybody (that we care about) agrees is just and good. And we get to test all our weapon systems in real combat and take the data back for further development and refinement.
You reference Democratic victory, but is this really a partisan issue? I'm sure there's plenty of Republicans who understand that a weaker Russia is good for the US. With the US not having to pay the cost in blood or energy, it's not like the public is going to care if we support Ukraine for the next decade.
Anecdote, but this is my boss. He's a Trump Republican and he thinks this is a big waste of money. He figures this is either our business - in which case we should declare war already - or it's not - in which case we should keep our money at home and let those people fight it out.
Logically, you are probably correct. Politically, though, Republicans believe in "America First", which means spending as little money as possible (ideally, none) on foreign wars. They campaign on this, in fact.
My question is how many Republicans believe in "Russia First?" How much of the noise about Putin being the Destroyer of Wokeness translates into support for America invading Ukraine in support of Russia?
My take is that the recent return of maneuver tank warfare reflects the attrition situation exactly the other way around: the Ukrainians now have better and longer-range artillery, counter-radar capability (HARM), and apparently also (somehow) the ability to deny Russians any benefit of the air superiority they supposedly should have had since February. If Russia had working aviation, it should have been able to stop the Ukrainian offensive in its tracks.
It is an indication of how the combined military-industrial complex of the West and its allies is much more capable of supplying Ukraine with advanced weapons than Russian domestic industry is.
The only way I can see the local war situation turning favorable for Russia is if China steps in with logistical support to match the European and US donations. However, that would make Russia a North Korea in a US-China proxy war: hardly an enviable position, no matter the eventual aftermath.
The ability of the Western military-industrial complex to supply Ukraine with arms and munitions for months is sadly rather dubious. We are simply not geared for a massive long-term war. On the other hand, Russia seems to have even bigger materiel problems.
Russia has massive materiel problems regarding modern weapons technology: precision rockets, guided air-to-surface munitions, agile tanks, etc. However, they have a virtually infinite supply of WW2-era weaponry, and a massive supply of bodies to wield those weapons. That is why time is on their side, I believe.
I do not intend to step on anyone's toes here, but the Russo-Ukrainian War is a European war. It concerns mainly Europe, and it is primarily Europe that is propping Ukraine up. While the US is providing some very fancy weapon systems (and probably a respectable load of invaluable intelligence as well), it is Europe that is giving most of the financial support that enables Ukraine to continue functioning as a state. And the European aid is not going to run out. At least not before Russia's resources run out.
Strong take, but I think too strong. I don't see it as certain that Europe holds up through the winter. As I understand it, the "campaigning season" is going to close in a few weeks when the weather turns, and everyone is going to be where they are until spring. That's a long time watching nothing happen while paying through the nose for energy you know you could get for cheap if you just toss the Ukrainians to the wolves.
I hope you're right, but I don't think it's as sure as you do.
I don't think campaign season is a thing any more. The war started in February. See also WW2, which saw its share of successful winter offensives. Maybe it is still somewhat more difficult to attack in winter, but it is by no means impossible.
And that applies to a Russian offensive, too. Maybe they will be able to do some smaller local attacks again, who knows.
+1. This is also a great point (and thanks for reminding us Americans we're not the pivot point of the world - an embarrassing oversight, seeing as I was in the EU less than 2 weeks ago!)
It's not a great point, it's false. See mudita's comment above; eyeballing the chart the US seems to be providing slightly more aid than all of Europe put together.
One thought that I’d add to PS’ responses is that someone could have said much the same thing in 1968.
“America has great odds to win in Vietnam. Chinese support for the North has to run out eventually - all the US and the South need to do is hold on until then.”
Unity of political will on the part of the aggressor is not guaranteed, and if that fragments the rapid collapse can easily run in the other direction.
@alesziegler says that a Democratic victory in the midterms is good for Ukraine. Which should be pretty obvious, tbh, since the current Democratic president is very much pro-Ukraine...
I think there are actually three counterarguments to that. From the more specific to the more general:
1. This assumes that Russia is losing fewer and/or replenishing more "advanced weapons" than Ukraine. Until now, the opposite has been the case: many Russian technical capabilities have deteriorated significantly (and will take many years to build back), from APCs to PGMs. Ukraine has gained many new capabilities it didn't have at the start, from howitzers to anti-aircraft systems to HIMARS. There has even been talk in the past few days of supplying Western battle tanks. If this trend continues for even one year, it will not be easy at all for Russia to hold on to the Ukrainian territories it now holds.
2. A war of attrition is not just about the equipment, it's also about personnel - and Russia has huge problems with soldiers. It doesn't have enough to defend such a long frontline, and many of the troops they do have are poorly trained, poorly equipped, poorly motivated and exhausted (which are some of the reasons they were so easily overrun in the Kharkiv region, besides poor situational awareness and poor command). And Russia has no good way of fixing this problem. They struggle to find volunteers in the required quantity and quality, and to train them. Even if there's a general mobilization (which Putin seems to avoid at all costs), it will take a lot of time and might or might not be really effective. On the other hand, Ukraine has plenty of manpower (and by now also the opportunity to train them properly, both in Ukraine and in the West).
3. Finally, while it seems likely that this will once again turn into a war of attrition after the current phase, it might not. A lot can happen in two (or four) years. Lawrence Freedman has this quote from Hemingway in his recent post on the possible course of the war:
And Russia seems more likely to be going bankrupt at the moment. Not necessarily all the way to regime fall, but certainly with regard to its ability to wage this war.
I think Ukrainians are going to win according to alesziegler's criteria (90% probability). It only requires Ukraine to push the Russians back to the 2021 borders, which is the most likely outcome. He talks about Russia giving Ukraine permission to join the EU and/or NATO, but I don't understand why Ukraine would need to ask Russia for permission.
The main thing that allowed me to correctly predict that Ukrainians would fight and would not allow occupation of their country was knowing the Ukrainian mood. Zelensky asking for weapons and not a ride was highly predictable from understanding the mood in Ukraine.
I don't know what the mood of people in Donetsk and Luhansk is. If it favours Russia more, then Ukraine might not be able to retake those areas meaningfully. Crimea appears to be mostly pro-Russia, therefore I cannot make any predictions in this regard. I might be wrong, but I just don't know. But for the rest of Ukraine I think it is clear that Ukraine will regain these territories and will continue its integration with the EU. And that is what matters the most.
I even think that some criteria for what it means for Russia to win are too sophisticated and hedged. The reality is that most Russians don't care, and those who do care are motivated mostly by the idea that Ukraine is a false nation that should be incorporated into, or at least strongly controlled by, Russia. That is a completely crazy idea by modern standards and is not going to happen. It could have worked in the Middle Ages, but even 100 years ago the USSR couldn't absorb Ukraine and erase its identity. No chance of that happening today. Ukraine may lose some territory but will remain an independent nation that is even less controlled by Russia than before (99% probability). That alone should count as a strong win from a global point of view.
I think it is very unlikely in this war. Imho they would much sooner go for a general mobilization on the scale Ukraine has conducted, and it is unclear whether they have the political will to do even that.
I'd agree with that. Breaking the nuclear taboo, especially against a non-nuclear power, would have tremendous negative consequences for Russia abroad that would last decades, and it might not even end the war itself - it could just as easily *worsen* the situation for Russia by prompting Ukraine's allies to escalate their involvement while Russia's allies pull back their support or are pushed to pivot from tacit acceptance to joining in economic sanctions.
Heck, an order to fire a nuke may create the exact combination of "necessary in the eyes of leadership to keep leadership personally in power, but terrible for the country overall and, hey, if leadership is gone does that make an opportunity for me personally?" that prompts a palace coup.
So unless something dramatically changes, nukes seem like they are (thankfully) off the table for the time being.
I’m looking for a detailed and accurate cost-benefit analysis of fracking. Most of what I’ve encountered so far is polemical in one direction or the other. Any suggestions?
This really needs to be done site by site, since the costs are mainly local. The analysis also hinges on assumptions about whether the fuels produced by fracking would otherwise have been produced by other means.
Erikson is a management consultant who visits offices and types people by a four humors variant called the DISC method. Dominant, Influencer, Stable and Compliance. I'm not a people person, but the books gave me a better take on people I work with. 'Surrounded by Idiots' is if everyone around you is a different type and life is one long misunderstanding.
Erikson is confident he can type people with a month or so observing. Hard sell? Yes. But he's not typing them for all times and all peoples, he's typing how they act in office drama he's seen. Per Montaigne men do more from habit than reason.
It's easy to map DISC as the four humors and the OODA loop. Dominant as Decide and Choler, Influencer as Sanguine and Orient, Stable as Melancholy and Observe, Compliant as Phlegmatic and Act. Often wrong, but easy, and these are just rules of thumb.
'Surrounded by Bad Bosses and Lazy Employees', also Thomas Erikson.
This is the best Erikson book. Decades of experience as a management consultant describing bosses and employees. The first half covers bosses, the second employees. How they should get along and why they don't. Driving forces as well as personality types. 'People quit their boss, not their job'. The tendency to start a job with full commitment and minimum skill, and finish with minimum commitment and maximum skill, and what you and the boss should do at each stage.
He has a lot of good, pointed anecdotes. He even throws in a good case for the reason decent people flinch from American journalism- the Pyramid story. Headline, maybe a good first sentence, probably not a good first paragraph, endless burbling. In his telling it is a good way to reach all four personality types, not an abomination against God Man and Devil as everyone thinks. Bossypants types skim the headline with decision, job done. Emo types skim the first sentence for the slant. Rocks read the first paragraph. We do our part. Nit-pickers read the whole thing in the hopeless hope of some news value. The human comedy of humors is covered, why should I sneer? Because it's a good target for random contempt. Because I've spent decades reading crap excreted by low-IQ journalists, sloppily edited by Satans in green eyeshades, stuffed in at random by rightly bored typesetters around the ads. It's bad luck to sneer at a style of writing millions of people have read for the last century. Okay.
Erikson never mentions journalism or the phrase 'pyramid story'. He just sees a useful template for reaching most types of people. Okay.
Here is Erikson at his best, a sensible expert with three decades of experience.
'Surrounded by Psychopaths' and 'Surrounded by Narcissists', by Erikson at his worst.
'Psychopaths'? Kitsch. 'Narcissists'? Kitsch. 'Malicious' and 'Selfish' are English. So, indeed, is 'just not that into you'.
The worst books he's written, with good stuff from other books drowning in drivel. Erikson is a man of sense and one who knows the world, and you feel a good mind trying to make this kitsch make sense, as no one can. If he had written 'Surrounded by Malicious People' he could have written a useful book about malicious people with a sense of human nature. No. He goobers about amygdala as if malice changed your brain into a space alien supervillain. If he had written 'Surrounded by Selfish People'- if he had written 'Surrounded by People Who Just Aren't That Into You'-
Bah.
Pretending selfish and malicious people are different from us, stuffed in our test tubes and dissected by our pseudoscientific gibberish. Look into your heart. You are malicious and selfish and just not THAT into me. Me too. Even GK Chesterton.
'Emotions of Normal People' William Marston.
Marston is the source of Erikson's DISC personality profile. Marston is also known for his blood pressure lie detector. Marston is also known as the creator of Wonder Woman, wearer of booty shorts and wielder of the Lasso of Truth. Marston is also known for taking his women tied up and attached to his lie detector, so he could screw with their minds as he hurt their pussy. Or for finding True Love by Scientific Proof, can't say. He lived with his wife and mistress and died of cancer, not feminine outrage.
The first part of the book is his deep thoughts on evolution. He's not that deep. He's not a biologist. Skip the first 90 pages. It picks up pace as he criticizes competing 1920s psychologists. He is clever, polite, firm. Then the book starts. It's about his four humors DISC theory, as seen in decades of interviews using his lie detector. DISC was originally Dominance Influence Submission Compliance, with Submission changed to Stability by Erikson's generation to avoid hurting middle management's feelings and scaring the office people. And another reason.
You have read feminist stuff about the Evil Patriarchal Science claiming women are naturally submissive, science proves it, the little darlings love it, it's better for them anyway. Here it is, the distinguished thing. He goes into detail. He shows that women's vulvae moisten the more as they are more submissive. It is his life's work to make all women Love Leaders who submit to their one dominant man. Works for a lot of happy families. As the D party has cracked down on this our fertility rate has dropped like a falling safe. If Erikson didn't skip this part he'd never get work.
. . . then in the last hundred pages Erikson betrays the Patriarchy. In the natural act a woman's special place dominates a submissive phallus. Wives should all have jobs so their husbands know they live on sufferance. Margaret Sanger is right, most marriages should not have children. Companionate Marriage is his ideal. Another Patriarch's youthful thrusting ends cockadroop.
The last hundred pages are busy. He writes about his lie detector, which proves you are lying because your blood pressure shows Dominance, the first, sane, socialized start of anger. His version is much better than his competitors, who think you are a liar when your blood pressure shows Submission, the first, sane, socialized start of fear. Between the two we always lie, but okay.
It says something awful about human nature that we invent these wonderful emotion detectors and use them as crappy lie detectors. It says something about Marston that he thinks anger and fear are crazy. Lions and gazelles, rabbits and dogs, me and life's infelicities, all nuts by him.
In the last hundred pages you start to see what the first hundred pages tried to show, a truly scientific advance on Darwin's 'The Expression of the Emotions in Man and Animals'. Darwin, a naturalist, typed facial expressions everyone can see in humans and other animals. Marston goes deeper. He sees the first tremors of intent in the blood. I don't think this was taken up by the world of science. Paul Ekman uses high-speed photographs of faces for micro-expressions, but this looks like a genuine lost treasure. It makes sense for Marston to extend this into bacteria and evolution and so forth, and I should give the first hundred pages a fair reread, and I just can't. I've read too many Derp Thoughts on Evolution.
It's too late for me, but I hope someone smarter looks into this. Any Paul Ekman students out there?
Thomas Eriksson is a fraud and his books have no actual grounding. Still, he managed to trick companies and even governmental organizations into buying his courses, not only wasting countless tax kronor, but also doing active harm, since his pseudo-science has influenced hiring and promotion decisions (yes, the respective managers are responsible as well, but that does not absolve Thomas).
It's hard to deny that some people are more bossy or easier to push around, or that some people are more of a people person and some are more job-focused. I agree that 'psychopath' and 'narcissist' reek of kitschy pseudoscience and, worse, a failure to read GK Chesterton.
Why do you think China hasn't solved their low birth rate problem with a radical social policy? If they could implement the one-child policy, why not something like a two-or-more-child policy (with exceptions for certain groups, perhaps)?
I know that Scott wrote a post on why low birth rate isn't such a serious problem, but that only applies when you're not a nation challenging the United States for global hegemony. In so called Cold War 2, population absolutely matters, and surely the CCP realize this as well.
I have often wondered why no technical solution has been applied. What would that be? An example would be egg harvesting and IVF, which has more of a chance of producing twins.
We don't have uterine replicators yet. The uteruses presently available to China are finite in number, and owned by people who have very definite ideas about how many babies they do or do not want to make. The ones who want to make more babies, only very rarely have any difficulty making babies up to their preferred number.
The CCP could in theory determine that those uteruses are the property of the State and will be impregnated no matter what the host wants. Reasons why this would be a bad plan, are left to the student. But if they do go that way, they don't need fancy technology to impregnate whatever uteruses they can commandeer for the purpose.
Otherwise, China will only make as many babies as the women of China each individually want. If that's not enough for someone else's plans, they need to figure out how to persuade the Chinese women.
I think you've gone off the rails here a bit. I wasn't really demanding ownership of uteruses, but egg freezing to maintain fertility over time. Then, of course, IVF extends fertility further.
In fact people do actually say they want more children than they actually have in modern societies, and for the purposes of this discussion urban China is modern.
In the US, although the fertility rate has declined for couples, the desired number of children has stayed the same. Therefore there is something in modern life -- perhaps starting a family later, child care costs, or housing costs -- that stops people who do couple up and start a family from having the number of children that they want.
They are trying for it, it just takes a long time to shift gears when you've been pushing "one child only" policies (including forced abortions) for decades and then want to convince people who have grown up as single children that okay, now you can have two kids! three kids!
Would you be surprised if people were slow to adapt, given that they have prudent suspicions about "and what if I have two kids but next year the policy changes back to only one? what happens to me and my family?" because I certainly would not be surprised.
Notably, the "two child policy" and "three child policy" wasn't "you should have 2 / 3 children", they were "it is now legal to have 2 / 3 children".
China has not yet passed a single pro-natalist law; they just took most of a decade to fully unwind their extremely strong and coercive anti-natalist laws.
There's a lot of inertia in the ship of state, and they went from a One-Child (Max) Policy to a Two-Child (Max) Policy to a Three-Child (Max) Policy to only just in July 2021 removing limits on having more children. They haven't yet begun to try to actively boost the birthrate because 14 months ago they were still trying to lower the birthrate.
Secondarily from "it takes time to build pro-natalist consensus out of anti-natalist consensus", there's also a failure of imagination. They don't have any obvious templates to copy from around the world that could double the birth rate using secular means, so they'll need to create something from scratch. The fact that it's unproven weakens the pro-natalist faction in internal arguments because the anti-natalist faction can (somewhat plausibly) claim that the pro-natalist faction's goals are impossible or too costly.
Like Scott, I don’t actually agree that low birth rates are a disaster. The official statistics for China aren’t that different from those in the West. However, they are trying some ideas, including letting the real estate market correct right now.
Maybe they think the absolute population advantage is sufficient, and in the short run higher population growth would only reduce resources useful for a direct challenge.
To some extent they have introduced radical policies to try to increase birthrates. The "double reduction policy" that came in a year or so ago is a really big deal and directly aimed at this problem.
To give a brief background, China is similar to other East Asian countries in that its education system is based on highly competitive high-stakes exams. So like other countries in the region parents felt forced to sign up their children for all sorts of after-school classes to give them an advantage in those tests. This was a major cost for middle-class families and often cited as one of the barriers to having children.
So about a year ago the central government decided to get rid of this barrier by banning all for-profit tutoring of school-age children in academic subjects. This was a sector of the economy providing millions of jobs and bringing in tens of billions of dollars per year, so destroying it was a really big move.
There's a certain amount of cynicism about other motives behind the policy (increasing social control of what children are taught, reducing foreign influences by cutting off international curriculums and foreign English teachers etc) but most observers agree that increasing birthrates is at least a major part of the motivation.
"So about a year ago the central government decided to get rid of this barrier by banning all for-profit tutoring of school-age children in academic subjects."
Hmm... That is an ... interesting choice. To the extent that the tutoring was just a zero-sum competition for the high-stakes exams, this might have been the right choice, but, to the extent that it actually imparted useful knowledge or skills, the CCP may find itself wishing that they had subsidized the tutoring instead.
They prevented births by abortions and sterilizations, which are one-time interventions, and pretty cheap at that. To encourage (or even require) births would need far more expensive interventions that last over many years, since presumably you want people to not only have children but also rear them to adulthood, which takes many years and costs lots of time and money. You also need a longer-term more complex enforcement regime if you want to enforce births rather than prevent them. It's pretty cheap to know when someone is pregnant and enforce an abortion, but it's expensive to create a regime that figures out when someone could get pregnant and enforces a requirement to do so and go on to give birth.
So I expect the expense of encouraging (or requiring) births above what people naturally want to do is much, much higher than the expense of suppressing births below what people naturally want to do.
An interesting theory, but the empirical evidence is not encouraging. In Germany for example they have Kindergeld, which I'm told is something like 200 euros/month/child until the child reaches 18, sometimes later, and yet Germany's fertility rate is right in the middle of the Eurozone (and far lower than the government would like it to be). I believe a number of Eurozone countries are experimenting with cash prizes, so to speak, of $3k-10k per kid, without as yet a whole lot of success.
One could reasonably argue the money isn't enough -- and if your $20k weren't enough, I'm sure there's a number where it *would* be enough -- but that's kind of my point. You can spend a lot less than $50k/child to ensure that a child isn't born. But the other way around is much more expensive.
I mean I think my view of human nature is jaundiced enough that I would say $200/month for 18 years is a much smaller incentive to get actual behavior today than $20,000 up front.
I didn't think of this at the time I first saw the comment, but I would like to argue that this isn't even that illogical a preference. Babies are expensive. You need a bunch of gear. And right when you're getting that gear, you also miss a bunch of work, usually unpaid. And then you have to either lose an income or pay for childcare for a few years. And that childcare is most expensive when they're small, getting progressively cheaper as they approach pre-k age.
Kids get cheap again around when they go to school, but those first few years are rough, and if you're like most people, you're earning less money during those years than you will five or ten or fifteen years later. I think dumping a small windfall on new parents would make a lot more difference to most of them than the monthly payments, even if the monthly payments come out to more in the end.
Then why do people buy annuities? Or invest, for that matter? Maybe you're making some assumptions about the circumstances of the people who can provide the babymaking? Not disagreeing with that necessarily -- you could be right that a flat cash prize would be a better way to go than spread-out incentives, for the people most likely to respond.
But on the other hand, historically speaking, what are the incentives for babymaking? One might argue they are a consistent and modest bump up in social status and power, more like the Kindergeld than the big one-time cash prize. Seems complicated.
(1) In the West, an annuity of $2,400 a year for 18 years would probably cost $25,000, not that far off from $20,000, because financial assets grow at about 7% a year.
(2) In a Communist regime with less stable property rights, a lump sum is more valuable than it is in a country with more stable property rights.
(3) In the West you can BUY an annuity as an individual; you cannot SELL an annuity as an individual. In the rare cases where random individuals choose between lump sums and annual payments [lotteries], you see a larger preference for lump sums than you'd expect from the market for annuities, because large chunks of the population are poor and have higher time preference than people with assets.
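The annuity arithmetic in point (1) is easy to check with the standard present-value formula for an ordinary annuity. A quick sketch (my own calculation, assuming the 7% annual discount rate the comment states):

```python
# Present value of an ordinary annuity: $2,400/year for 18 years,
# discounted at the ~7% annual growth rate of financial assets.
rate = 0.07      # annual discount rate (assumed, per the comment)
payment = 2400   # yearly payment, i.e. $200/month Kindergeld-style
years = 18

pv = payment * (1 - (1 + rate) ** -years) / rate
print(f"Present value: ${pv:,.0f}")  # roughly $24,000
```

The theoretical value comes out a bit over $24,000; a commercial annuity would be priced somewhat higher after fees, consistent with the comment's $25,000 estimate.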
Part of it's probably the now trading off against the later - children are demographically bad before they're demographically good. If the PRC thinks the decisive time is the next couple of decades (and Xi is incentivised to behave as if it is; he's 69), then it's worth holding off.
Another part is that enforcing a procreation minimum is trickier than enforcing a procreation maximum, due to the fact that not all of the factors preventing one from having kids are within one's control (whereas it is easy to avoid having kids). You almost have to do it with incentives and childlessness taxes rather than direct criminal penalties.
That seems ... really hard to do. Are they going to create a national childcare infrastructure? Give everyone a bigger house? Ban women from the workplace? Ban contraception? Government attempts to raise the birthrate almost never work. When they do work, they work only a little, and none (that I know of) worked any better than the Georgian Patriarch pledging to personally baptize your 3rd-or-higher kid.
"Government attempts to raise the birthrate almost never work."
I think this is the guts of it. Countries from Japan to Hungary to Sweden all have low birth rates, they've all tried a variety of interventions, none of them have been successful.
The interventions tried have all been quite small and weak, as a % of GDP, as a share of societal status reallocation, and in expected stability. The case to keep an eye on is Hungary, where there appears to be a stable ruling coalition that does somewhat sincerely prioritize this. As the regime's stability becomes more apparent and their existing pro-natalist measures therefore become more credible, I'm expecting that the birth rate will increase considerably over the next decade [unless the ruling party falls or the ruling party abandons pro-natalism].
Do you think that a TFR 0.5 or more above the pre-pandemic (2019) TFR is an acceptable measure of a considerable increase? Under the conditions that the ruling party stays in power and retains pro-natalist aspirations, I predict there will not be an increase in Hungarian TFR of 0.5 or more from the 2019 level by 2032.
As I mentioned in reply to WoolyAI, there is a drastic potential intervention: cutting pension payments to those with 0 or 1 children. But I find it quite unlikely the Hungarian government will try this. I think the government's foreign proponents and detractors alike overrate how different it is from other Central European governments.
I would say that that's a reasonable definition (TFR in 2019 1.49, so you are predicting that Hungarian TFR will be 1.98 or less in 2032).
The problem with operationalizing this as a prediction, IMO, is defining "retains pro-natalist aspirations". I would define a government with pro-natalist aspirations as one that steadily increases the % of GDP spent on child support as long as TFR is below replacement level (meaning that if they are pro-natalist, they have a target that they are trying to reach, and if they get signals that their current measures are insufficient they will try harder).
There is a more radical type of intervention. Alessandro Cigno and Martin Werding are interested in connecting "a person's pension entitlements to his or her number of children and the children's earning ability—proposing that, in effect, a person's pension could be financed in part or in full by the pensioner's own children." (https://mitpress.mit.edu/9780262537247/children-and-pensions/)
The more popular way would be giving a pensions boost to parents with a lot of children; the less popular way would be making a pensions deduction from those with 0 or 1 children.
I think it's highly unlikely that any particular government will be the first to try something so unpopular. However, there are around 200 governments, many of which govern countries with low fertility. I predict that it will, eventually, at least be debated more than it is now. (I have no opinion on this because I have no clear opinion on pro-/anti-natalism.)
Well how are you going to get that into law and action, barring an East German police state with half the population informing on the other? I don't see how you could conceivably get a democratically-elected legislature to enact all this drastic reform *unless* it was very broadly popular, which means almost all people are really wanting to have more babies, in which case...why don't they just go and do it? Why not just trundle off to the bedroom, lock the door, and make a 3rd baby, instead of going through all this indirect rigamarole?
I mean, if the problem is you'd really like to have a 3rd child, but tax law/inheritance law/the cost of education/the cost of childcare are holding you back, then I would expect people to advocate powerfully for various measures directly addressing the cost of young children. Which...they kind of aren't. The Biden Administration had grandiose plans for climate change, subsidizing daycare, free pre-K education for all, cheaper family healthcare plans, and free community college education, among other things. Guess which survived the inevitable need to compromise? Climate change and healthcare costs. Adult but not particularly prospective parent concerns.
It would be deeply unpopular at first, I expect. But barring a robot-savior scenario or an unexpected burst in general tax revenue, what's coming instead will also be unpopular. Doing nothing would be highly unpopular (because it means leaving the elderly short of money needed to stay in their houses and afford medical care). Putting the elderly into hostels to be looked after by immigrants from young countries would be highly unpopular.
Apart from an existing voter cleavage along family size, what would make a politician take the risk of proposing it? Probably the interpretation that this is basically a collective action problem among young couples, and solving it would be mostly popular among those couples 1) if they realize what the (above) alternatives are and 2) if there is a baby boom before the next election.
If one young couple has one or two additional children, this won't help their retirement funding unless the children, upon becoming wage earners, choose to pay their parents.
However, if the majority of young couples had one or two additional children, there would be a notable impact on the demographic pyramid. Each retirement would be rather easier to fund. There would finally be plenty of workers to take care of the retired, too.
So a politician might gamble that the policy would be mostly popular among younger couples for solving the collective action problem, and mostly popular among older couples who have enough children. That sounds like a large share of the voting population of most low-fertility countries. People like me, who would be upset and inconvenienced by this because they are childless for non-financial reasons, are a relatively smaller group.
I'm not sure what you mean about informers. Don't you think OECD governments mostly have adequate records on who has what number of children, through birth certificates, censuses, taxes, immunization records, school records, etc., already? Governments need this information at present, to provide the child subsidies that already exist (which in most countries seem to be having limited effect on TFR).
Come. It would be deeply unpopular for *decades* because that's how long it would need to go on before any economic results visibly showed up. At the very least, you need your first bumper crop of babies to become productive tax payers and start forking out Social Security in goodly amounts, which for most people doesn't even happen until their 30s or 40s when they start to make good money. There's no way such a policy could be voted in, or, having been voted in notwithstanding the polls, wouldn't be promptly reversed by the legislators who replaced those who had voted in defiance of the popular will.
And, as I said, if somehow people *did* start to realize demographic implosion was bad for the future, and had sufficient long vision and discipline to do something about it, like endure a big intervention in private life by government, why would they not just take the easy step of...having more babies on their own? I mean, babies are kind of fun. Plus there's more to be gained from grandchildren than their SS taxes, if they're related to you.
It's an interesting problem-- an economy gains temporarily from people having fewer children. For a while, the proportion of working adults goes up-- there are fewer children to take care of, and large numbers of adults haven't aged into retirement yet.
Then the bill comes due, with a smaller work force, and people who need help are becoming more numerous, people who aren't going to mature into workers.
Any society which tries to reverse this is going to have to increase the number of dependents (additional children) while having less productivity to support them. Eventually, they'll reach maturity, but we're talking about 20 years or so, I think. Maybe a little less if they go with the premise that there's a lot of useful work (including mental work) that can be done younger than we want to accept these days, but not a lot less.
China has not been a communist nation for quite a long time, even if the ruling party still calls itself a communist party. It is currently a mixed economy dictatorship, less capitalist than the U.S. but not by a lot.
Also, whatever communism was intended to be, it ended up in the Soviet Union as a society of extreme inequality, a couple of poor first world cities and a vast third world hinterland.
Was Soviet inequality extreme? I got the impression that it was substantial, with party bosses having vacation homes and a fair amount of money. But not extreme compared to eg many third world countries where the elite is very rich
A strong central government has a *lot* of levers (of varying degrees of evilness) that it can use to increase the birth rate. China's currently not trying to, either because it thinks it isn't important OR doesn't think it's strong enough to survive trying to use those levers.
Levers include:
Significantly hiking base rate taxes but then providing large tax rebates to parents per biological child so people with 2 bio children have the same tax rate as before the reform and those with more pay less tax.
Blocking children from going to university unless they have at least 2 full bio siblings.
Outlawing abortion & contraception.
Banning childless women from professional employment.
I'm reading through the western canon, and I feel like I'd have a much better time if I had at least one friend that has done (or is doing) the same thing that was interested in talking about it. How do I find friends who want to talk about great books? In SF, btw.
I'm a bottom-up learner, so I really don't feel like I understand something unless I know the fundamentals underlying it. Since so much of our culture is built on the bedrock of these works (and because they still hold so much relevance after all these ages), I have been really enjoying working forwards in history from the far past! It makes me all the more excited to read more current works and be able to see the echoes of these past works reverb through them.
I'd be happy to meet online! But I think programs like the Catherine Project are much better at coordinating those than I would be. What I'm really searching for are buddies in my immediate community to become friends with.
There is no such thing as a western canon, there is a different literary canon in every European language. You can't read French poetry in English. You would have to be a polyglot to read "the Western canon".
There’s a St John’s University affiliated great books reading group that meets weekly near Powell St BART. I attended for a while pre-pandemic and the discussions were excellent. They were on book three of In Search of Lost Time when I dropped in. Might be worth a google. I’m not sure how else to find them at this point.
Been doing it for about a year now! On my third tutorial — fantastic organization. But I was thinking more about making actual friends with similar interests.
Latest updates in the field of 3D printing firearms https://www.youtube.com/watch?v=_dBJUifMtTA&t=669s Here's a hobbyist who's printed a copy of the MP5, the legendary late 20th century German submachine gun. (You've seen it in a million movies even if you don't know what an MP5 is). As far as I can tell it's quite functional. He has another video where he's printed his own AR-15 and it clearly jams quite a bit- on the other hand, I think we can see where the general direction of technology is going here.
Seeing as this blog likes to write about the latest AI updates, I thought people might be interested in where the field of essentially creating semi-automatic weapons at home is going. They seem to be overcoming problems with printing the receiver specifically, which is traditionally metal. (Perhaps Glock will be an inspiration here!) It's also possible to cast your own bullets at home.
Anyways, I'm neither praising nor criticizing, but simply noting real-world advances in the field. I'd imagine the ability to create one's own semiautomatic weapon at home will be widespread in a decade or so, definitely two. This has some policy implications!
I'm not going to watch the video, but I'd wager quite a bit of real money that he didn't 3-D print the barrel, or the bolt.
And if you can't do that, nothing else really matters. Yes, *at the moment*, in the United States, you can buy gun barrels and bolts over the counter (or internet), but that will change as soon as it has to in order to prevent 3-D printed guns from becoming more than curiosities. It isn't written by the hand of God, or even the Founding Fathers, that the law can only restrict the purchase of lower receivers.
I've often heard that it's cheaper to just buy a black market gun where it's illegal, but in Japan that recent assassination was by a guy who made a homemade smoothbore (like a shotgun) out of some pipes, and supposedly with an electrical primer.
The gun in the video uses an off-the-shelf barrel and bolt assembly, so I don't think you could make one in a country with restrictive gun laws like Japan. Anyone know how hard it would be to 3D-print the whole thing?
As you mention, the sticking points with a lot of homemade firearms are the barrel, chamber, and bolt, all of which must withstand high pressures and impacts while being pretty precisely shaped. They're essentially impossible to 3-d print in plastic (at least, if you want to fire the thing more than once!) 3-d printing isn't the only trick in the book, though. Recently there has been a lot of innovation using electrochemical machining to make rifled barrels and chambers to quite high levels of precision. This video provides a decent overview https://www.youtube.com/watch?v=TSM6fBdmuso
Kitsch? No, but with the saving leaven of kitsch that makes genre easier fun than classics.
'The EMS OODA Loop' by Brian Sharp.
Sharp is an experienced Paramedic, topped out, a Flight Certified Paramedic. Observe, Orient, Decide, Act is simple enough to be fed to Marines with their crayons. As an inexperienced first responder, I could see things I should have done. A useful checklist for writing reports, and if I ever remember it at an incident I will use it.
It's not hard to see Observe as Melancholy, Orient for Sanguine, Decide for Choler, Act for Phlegmatic. The old Four Humors, the OODA loop, tomayto tomahto.
Sharp writes clearly but with no style at all. I idly fancied him locked in some silent cell lit by the last burning copies of Strunk & White, forced to rewrite every paragraph per Fowler's 1926 'Modern English Usage'.
Not surprising of course, and if anything it's refreshing to see it stated so (relatively speaking) plainly, but this is going to make some of the most politically salient behavioral research harder to conduct/promote and skew people's perceptions of what is true (to the extent that they actually care about what research says beyond its use as ideological confirmation).
"Although the pursuit of knowledge is a fundamental public good, considerations of harm can occasionally supersede the goal of seeking or sharing new knowledge, and a decision not to undertake or not to publish a project may be warranted."
It's difficult not to read that as explicitly supporting the prior suppression of research ("a decision not to undertake...a project may be warranted") as well as, of course, a decision to suppress its later publication. To be sure, they say all the things you say about style and tone up higher -- that manuscripts should not be malicious or thoughtlessly written. But they *also* say what I've quoted above, and that, at least for me, is where the immediate stink of dishonesty arises, regardless of the amount of soothing and commonplace rationality that precedes it.
It's as if one read a nice long opinion from the Committee on Public Safety, about how important it is to treat everyone with dignity and respect, and follow the law, and respect established social norms, blah blah motherhood and apple pie, but of course every now and then we might just have to relocate a few difficult individuals to a re-education camp for the collective good.
If Nature didn't *mean* that bare statement to mean what it seems to mean, they are very good writers, and I'm pretty sure they could have phrased it to avoid any possibility of misunderstanding. I don't think they did, because I think they really mean it. They really do think it's right and good for some projects not to be done, or not published, regardless of whether they are honest and factual, because of downstream consequences.
There may be arguments for accepting that proposition -- none come to my mind, and indeed I recoil from it as blasphemous to the concept of science and free inquiry -- but I just find it difficult to believe that proposition isn't what they mean, given what they wrote.
It would be fascinating for someone to test it empirically, e.g. by submitting to Nature two nearly identical pieces of synthetic scholarship which merely varied in whether the "facts" they "uncovered" supported or did not support the fashionable shibboleths. That would be real and valuable social science research.
Thanks for identifying that specific point that I had missed.
I still see nothing about dishonesty there, just a general concern for the effects of one’s actions. Research that can’t be expected to help and can be expected to hurt shouldn’t be done. Nothing false should be said.
I think that fact makes it hard to test your hypothesis, because it’s not going to be possible to submit two papers of the sort you mention unless at least one of them is fraudulent.
I spoke poorly. There's nothing about what they wrote that is dishonest. The stink arises from the fact that I find it very difficult to believe, people being the way they are, that once you accept that it is permissible to suppress research at all, it will always (or even mostly) be the case that it will be suppressed only for the purest and most disinterested of motives -- and not because that suits current shibboleths, the interests of the current in-group, political or economic convenience, et cetera. That is, I find this planting the seeds of future dishonesty.
That's why I tend to be a free-speech absolutist. Once you admit that OK well some speech is "too dangerous" to be allowed in a republic, it's a terribly slippery slope down to the Alien & Sedition Act, and eventually Minitrue. People are just not angels, and that kind of power just corrupts. I don't find it plausible that the editors of Nature are going to be unusually angelic.
I'm made even more suspicious that it doesn't seem to occur to *them* that they might have a problem with that, which bespeaks insufficient humility, or (circling back to the dishonesty miasma) they are being disingenuous (admittedly an interpretation more negative than I think plausible). In science it's been a credo for a long time that the human tendency to bullshit yourself is so strong that we have to go to unusual and fanatical ends to ensure that we don't, or at least that it is held in check. Giving yourself the power to say well if this is Evil Research in some not easily measurable way, just according to my theories of downstream effects, then it must not see the light of day, seems like going in the opposite direction -- trusting far more than history justifies in the ability of men to be objective.
I had in mind that *both* papers would be fraudulent (that's what I meant by "synthetic" scholarship). It might be hard, but it's been done before, e.g. to test perceptions of bias in publishing or employment, and then there is Sokal's famous hoax. I'm sure they leave a bad taste in the mouths of editors, but I don't see it as any more unethical than scads of undergraduate psychology research, provided everyone is debriefed afterward (and the papers themselves sufficiently anodyne).
No journal commits to publishing every paper that is submitted, and is in the topic of the journal, and is done correctly, and yields accurate results. It always matters that the results have sufficient amounts of novelty and interest. This is already a commitment to “suppress” research that isn’t of interest to current researchers in the field. Some people object to this on grounds similar to the ones you mention, that this makes academic fields prone to fads and fashions. I think this isn’t a bad thing, because these fads and fashions are ways to coordinate research on topics in a way that is more effective than spreading the field too thin. But regardless, I don’t think there is anything qualitatively new about this explicit new rule, at least for the dynamics of research.
Come, that's not a reasonable gloss on "suppression." Things are "suppressed" when they are hidden *even though* they are of interest to end consumers. If end consumers aren't interested in the first place, it isn't suppression.
I mean, I've heard that dysfunctional use of "suppression" from cranks all my professional life. "Phys. Rev. Lett. rejected my paper "proving" the Second Law of Thermodynamics is wrong and is established only because of a giant conspiracy between the Illuminati and Roman Catholic Church. Suppression of free inquiry!" Er...no...it's just that nobody gives a rat's ass about crank theories without very heavy evidence, which you haven't provided.
And if that's all they were doing -- not publishing stuff nobody cared about -- then (1) I wouldn't object, for just the reasons you state, but (2) they wouldn't have to issue a manifesto about it, because that's already been part of scientific publishing for centuries.
What they are explicitly saying is that *even if* the article meets all our other criteria for publication -- true, sound, based in fact, of relevance to current scientific debate and/or of interest to our subscribers, stated in a respectful and objective manner -- we *still* might not publish it, because we have imagined certain downstream effects that we think are bad for society, because someone died and made us Tsar or something.
If you don't think that's troubling, try imagining it with the ideology on the other foot -- try imagining the NSF in a future Joe Fundamentalist Administration saying it's not going to consider grant applications that, while otherwise meeting all their criteria, might reveal that there's a genetic component to being gay, because revealing that truth would cause downstream "harm" to their efforts to get every person who professes to be gay into "conversion" therapy.
Pretty ugly, no? If you're relying on the pure motives and sterling character of the gatekeepers into whose hands you have given the power to say what gets said and what doesn't, this...does not have good historical precedent.
One late-breaking thought on this - there seems to be a lot of consensus in the thread that this is a very bad move on the part of Nature. And I'm inclined to agree; "censorship = bad" is a pretty strong belief of mine.
But, for the sake of argument, do any of you think there might be some legitimacy to this move (or that it might at least be more legitimate) given how shoddy science reporting can be?
I shared this elsewhere, but I'm putting it here too because it's funny (https://www.youtube.com/watch?v=0Rnq1NpHdmw). If this is your media environment, is there any shade granted to Nature's decisionmaking?
"We refuse certain studies on race/gender because our ideology opposes them" is one thing.
"We refuse certain studies on race/gender because the media picks up all our work, reduces it to clickbait headlines, and before you know it half of the people you know think potatoes cure erectile dysfunction, and that's funny when it happens with potatoes, but way less funny when their lazy takeaway is 'racism/sexism is *totally* based on science'," however, is very much another.
Does that impact anyone's evaluation of this? I still think censorship is the wrong way to approach these issues, but I think acknowledging that the issue exists at least gives me some sympathy for Nature's position.
This doesn't seem to be what's going on. They aren't asking people to censor their research at all. Just to avoid phrasing it in ways that are derogatory to certain groups. No actual research would be censored, if it is phrased in ways that are accurately supported by the data gathered.
1- I'm never sympathising with a woke institution, Nature is pathetically and obviously doing this for <wink wink> reasons, and those reasons are not good and not respectable. They can fuck right off with their "eVeRy ThiNg iS pOliTicAL" bullshit and shoving their pet issues into science. No convincing steelmanning can be found when you're unironically saying things like "Scientists must consult with activists and advocacy groups". If you want sympathy for Nature, don't waste your time reading the rest.
2- More interesting questions: What should we do about bad science? Or bad reporting of okay science? Well, the answer is incentives, of course.
A- First, there is the massive competition among scientists, the "Publish Or Perish". As a guy who likes small societies and small local governments, I'm inclined to say this is the inevitable result of centralization of scientific prestige and the general Moloch-ness of large scientific institutions, but this misses the very real point of resource scarcity. Science, and knowledge seeking in general, is fundamentally an idle pursuit. Just like the friends you make spontaneously without meaning to often turn out to be the best friends, the things you discover spontaneously without meaning to tend to turn out to be the best science. Scientists ideally don't have to justify themselves and beg for grants. Competition for metrics is a kiss of death to any community except possibly extremely narrow things like Chess and competitive programming puzzles.
I don't know what can be done here, I can just unhelpfully say "Just invent post-scarcity civilizations bro" but this is clearly not actionable and might not even solve the problem, as scientists can always find other scarce things to compete over.
B- Second, on the reporting side, there is the issue of pop science porn. Again, my root-problem-seeking side will just notice that this is the inevitable result of the ever-more-extreme division of labor of a complex civilization, you can't escape over-simplifications and ignorance with specialisation, they're the name of the game. But, mainstream media is so very bad that it's not hard to beat them.
My recommendation if you want to get good science reporting and also keep your labor-divisioned civilization intact: fund the shit out of volunteers like Kurzgesagt and Veritasium. Those people manage to beat the living daylights out of *professional* science reporters with nothing but youtube revenues and sponsors. They are a proof by construction that entertainment is not mutually exclusive with fidelity, and that not all over-simplifications are created equal. Fund them, join them, hire them, whatever.
The bigger problem is giving a shit in the first place. I don't think the morning shows or the tabloid papers give the smallest shit about how accurate their reporting of science is. I can almost hear them say "Who gives a fuck bro, none of this is remembered for 5 seconds, go touch grass". How are you going to make them even acknowledge there is a problem? How does the awful and atrocious covering of science material in K12 education contribute to and sustain attitudes like this towards science in general? Those aren't easy questions. Good science reporting is a solved problem; there is *always* that one guy/gal who's just begging to explain that Very Complex Topic to a general audience, they exist in abundance, I suspect I even have this bug when it comes to topics I love in computer science. The bottleneck is Who Gives A Shit? Very few, relatively speaking.
3- A much bigger question than the previous: Can Knowledge ever be harmful?
In a very small nutshell, yes. Any intelligent agent will process sensory inputs and respond with behaviour, so of course anything you know can potentially change your behaviour for the worse, by any definition of "worse". In computer security, any untrusted input to a computer program is a potential source of vulnerability, up to and including an attacker hijacking the program entirely and executing arbitrary code of their choosing. If we conceptually regard a human brain as a program and the world as a huge source of untrusted inputs, then of course there is, for every conceivable brain-type, something out there in the world that, if known, will make it think and/or behave worse, for every possible definition of worse.
But it's not clear what to do about this. Consider a video of a kitten abuser: is it good for me, a kitten lover, to watch it to be filled with righteous anger and mobilised to help kittens, or is it bad because it might make me suicidal and devoid of all hope in human kindness? Is it good for Darth, a kitten abuser, to watch it because it encourages and reinforces his behaviour and provides him with an example to follow, or is it bad because the reaction to it will serve to show him how hated he and his behaviour are? Difficult to say, and the answer varies by question and type of people, and you can't easily do controlled experiments.
You're acting like this is a one way street. By NOT publishing certain research, then an at least equally bad ideology of e.g. blaming white people for black people's problems (and all of the policies implied from this) is seen to be vindicated. The lazy takeaway now IS that the science supports them, and that was with some studies showing that this is untrue. But without these studies, things are made only worse. It seems like you're simply defending them because you happen to fall on the same side of a two-sided issue (while pretending only one side exists).
One way to tell if this is Nature's issue is if they are concerned with instances of partisans and journalists grabbing the conclusions from some paper they didn't read to push a political agenda, or if it's only particular partisans and particular political agendas that are the problem.
Not a particle. Even if I believed this was their goal, and it wasn't instead a squalid little issue of virtue signaling to bolster their feelings of relevance in an era when traditional scientific publication is under great pressure from arxiv and open-access publication (not to mention tweets and blogs), the proposition is egoistical and wicked.
Choosing whether to say the truth or not, or how to say it, based on how unknown strangers will react, is manipulation, propaganda, a form of deception -- an attempt to get people to think something other than what they naturally would, when given a particular set of facts.
There are certainly times and places where that is a necessary evil, and will do some good -- e.g. I'm thinking of the general restraint of news outlets these days in broadcasting the details of a suicide, on the reasonable grounds that it encourages copycats and serves hardly any useful purpose -- but a scientific journal has no business getting into that beyond a core insistence on phrasing and discussion being strictly fact-based, highly restrained as to speculation, and highly avoidant of imprecise or emotionally laden terms -- all of which have been standard for scientific publication since Isaac Newton and with which I agree. Not only do they lack any shred of competence in deciding how and when to manipulate for the greater good, they lack the responsibility, and it is anyway outside their core mission -- which is to publish the truth, and nothing but the truth.
It very rightfully makes people suspect them of being willing to compromise on what "the truth" is, and from sins of omissions to sins of commission in that regard is not so very far a distance that people would comfortably rely on them never crossing the line.
I believe this is the 3rd slogan emblazoned on the beautiful marble exterior of the Ministry of Truth: "Ignorance Is Strength." You would not want people divided, arguing, skeptical of each other and of the wisdom of our experts by experiencing any nasty barrage of data which merely happened to be measurably true, would you? That way lies social chaos, surely. Debate, disunion, a failure to all agree on the same ideas, the socially debilitating insistence on individual liberty and freedom of conscience that retards social progress, weakens the collective will, saps the strength of the state. Better to think carefully about whether there are indeed things We Should Not Ask About, or, to quote from the article itself:
"Although the pursuit of knowledge is a fundamental public good, considerations of harm can occasionally supersede the goal of seeking or sharing new knowledge, and a decision not to undertake or not to publish a project may be warranted."
That's plainly said. Not all questions are permissible, and not all answers should be shared. And people have thought this way for most of human history, barring the weird three-century interlude after 1665 or so. That we might return to the more nuanced view of "the truth" that is in our nature is hardly surprising. It takes an unusual and practiced fanaticism to follow the facts wherever they may lead, and no matter the consequences.
Remind me again, back when all the fuss over embryonic stem-cell research was going on, and religious groups wanted it not to be publicly funded because they considered it unethical and immoral?
And all the scientists in favour of it told them that, fundamentally, they could blow it out their ears, science wasn't to be hampered by none of this social considerations morality shit?
Not quite the same issue. I think the argument on stem-cell research was that the research itself was immoral, like medical research on patients who have not consented. Doing the research and not publishing the results would not have solved that problem. The argument this time, I think, is that telling people about the results of the research is immoral.
Well, that's why public funding of science is a tricky proposition, and you certainly wouldn't want just the scientists deciding what to fund. I never listen to interested parties' explanations of how their position is actually objectively the most ethical, anyway. By me, if your salary depends on a particular view of an ethical question, it's asking a lot for you to give any kind of objective evaluation.[1]
But I'm less unhappy about that particular froofrah. Scientists are supposed to clamor for funding, and pursue every interesting angle, and devil take the hindmost. Look! If we put two U-235 atoms together, they tell two friends, and they tell two friends -- kablooie! Isn't that cool? Let's try it...!
That's what they're going to do, and that's their expected role, and that's fine. I can count on them to be eager beaver amoral learning machines. And on the other hand, I can count on philosophers and thinkers to ponder the question of Should we? and give strong opinions about it, and I strive for hope that I can rely on politicians to sum up the philosophers' reservations, the scientists' enthusiasm, the mood of the public, the needs of the future, and make some reasonable decision. That's what we pay them for.
Things get all muddled when people won't play their roles. When the scientists attempt to be amateur philosophers and politicians, when the politicians play amateur scientist and/or minor prophet ("Only I know what God wants!"), or when the voters can't be bothered with taking ultimate responsibility for what they want.
----------------------
[1] The sneering is another issue entirely. I have known a few scientists who think that way, can barely keep their contempt for the unwashed masses who pay their salaries under control, and I keep calling at each meeting of The Brotherhood for these bad apples to be horsewhipped and branded on each buttock as a warning to others, but alas the Committee on Public Safety keeps tabling the motion. I point out if we do not police ourselves we will be policed, only more roughly and indiscriminately, but people just want to talk about the Christmas social. I dunno it's almost like being smart in one area gives the average hominid the fatal delusion that he's smart in all. Not a good design -- I shall have words with the Creator if and when I get the chance.
Edit: Not being glib or sarcastic, just agree wholly with your comment, and sometimes it's just nice to know someone read and agrees with what you wrote.
The branches of science that this is going to affect are mostly going to be the social sciences. As the replication crisis has shown us, the overwhelming majority of social science is junk. It is not unreasonable (nor is it "censorship", except in the broadest sense) to hold research that produces an antisocial conclusion to a higher standard of proof, especially when it's statistically likely to be false knowledge.
And yes, there are going to be cases where it's misapplied to harmful effect, because that's an innate property of any form of bureaucratic oversight. But I don't think it's unreasonable on the whole.
The editors made it clear that they had considered refusing publication not because the article was wrong but because the knowledge would be misused, although they decided the knowledge was valuable enough so it was worth taking that risk.
Easter Island Syndrome. Many people have wondered why, when their timber supplies were dwindling and things were not looking good, the Easter Islanders spent their last remaining resources building giant freaking god statues. But I think this is a common human impulse: if you can't solve the problems that are within your purview, you...go big. Gigantic, if possible. Like Hitler in his bunker with the Soviet tanks 300m away and a dozen men remaining under his command, dreaming of The Super Weapon that will turn it all around.
I think when people (and institutions) start seeing the basic tasks within their ambit as slipping beyond them, they start grasping at grandiosity, hoping for some miracle reversal. In this case, Nature, like all scientific journals, is in trouble and has been for many years, because they're being disintermediated by the Internet. Who *needs* to subscribe to Nature any more? Who needs to compete for publication in their pages? Increasingly, the answer is...not as many as you'd think. Not as many as they'd hoped. Not enough to hire another assistant editor at a nice 90,000 pounds salary, or sponsor a Mediterranean working conference. So it does not surprise me to find them acting a little desperate, grasping at ways to suddenly become a lot more relevant than they are.
>The branches of science that this is going to affect are mostly going to be the social sciences. As the replication crisis has showed us, the overwhelming majority of social science is junk.
The replication crisis is overblown, and most people wildly overestimate the rate of successful replication in "non-junk" sciences.
Additionally, the kind of research disproportionately affected by these policies is stuff like psychometrics/intelligence research, which is amongst the most rigorous and empirically validated areas of research in social science.
>It is not unreasonable (nor is it "censorship", except in the broadest sense) to hold research that produces an antisocial conclusion to a higher standard of proof, especially when it's statistically likely to be false knowledge.
And how on earth do you define "anti-social conclusion"? Because it really, really sounds like you're implying that contradicting PC beliefs is "anti-social". I think it's "anti-social" to blame white people for black people's poor socio-economic outcomes when the science doesn't support this.
I know this is intended as a "gotcha" question, but for the former, you basically can't. Politics isn't some abstract thing neatly separable from society, to the point where it's not unreasonable to *define* politics as "the direction you think society should go". I would hope that most people are basing their politics on what they think is morally good for society.
>I would hope that most people are basing their politics on what they think is morally good for society.
I doubt this is what most people are doing with politics, nor what we even want most people to do. I think mostly they base it on what they think it good for them, and then latch on to some stories that pretty up that behavior as "morally good".
I mean, I was going to add a second paragraph to the above comment responding to the "offends you personally" quip by saying "any aspiring rationalist should be able to recognize that no one is able to truly construct a system of morals and ethics that doesn't ultimately boil down to how they *personally* feel about things on some level", so I don't disagree with you here.
I don't think the people making these decisions really care about it that much. They don't want to be piled on on twitter and don't want to be one of the "bad people".
You are assuming their value of science/truth has a much higher value in their telos network than it does. Academia is absolutely overrun with people where advancing knowledge/understanding is a distant distant priority among their goals.
I think it's important to emphasize that part of my point is that not all studies advance an understanding of truth - in fact, some do the opposite (just ask Dr. Wakefield), and that's especially common in human psychology. There's utility in trying to minimize the harm of that false knowledge.
I mean, if your general position is: "Hey, the research we publish in our journal is wrong 40% of the time, so we are going to be super careful about what we publish if we think it has negative impacts on the world at large", I get that.
But 1) I doubt that is what they are thinking. And 2) Isn't the main solution there to raise your standards and be more picky, not to start inserting more ideology into your selection process?
I mean, figuring out how to do (2) effectively and efficiently is kind of the biggest open question in scientific publishing right now, so I think they have to be forgiven for not immediately solving that.
As for (1), I'm giving them the benefit of the doubt of not wanting to openly say "most of what we publish is actually garbage, whoopsie", and if it means I'm inappropriately steelmanning to counter others' straw/weakmen, I'll own up to that accusation.
So some research is done that undermines one of the tenets of AGW. But that is settled science, we shouldn't publish it because of the 'harm' it will do. (Would you have restricted research into nuclear physics, if you knew about the harm of the atomic bomb?)
>Would you have restricted research into nuclear physics, if you knew about the harm of the atomic bomb?
I alluded to this further down in the thread, but Eliezer (ironically?) suggests almost exactly that in HPMOR, and again in Three Worlds Collide, that knowledge of nuclear physics should have been restricted to a conspiracy of science for those who were trained in the methods of rationality, because of the harms it did to society. It's an interesting concept.
So your answer is: yes, or maybe? I'm hoping nuclear energy (fission) will be more useful than nuclear weapons are harmful... but that's still an open question. (It's going to be hard to hide fusion... since how else does the sun work?)
Yeah, you know, we tried that "conspiracy of knowledge for those who are trained in the arts". It was called alchemy, and it succeeded in obfuscating what it was about so well that there are still multiple interpretations of what the symbolism and terms and processes meant.
If Yudkowsky's method had been adopted throughout history, we probably might be at the stage of - ah, but no, I cannot reveal to the vulgar gaze the sacred hidden mysteries! Who am I to draw back the veil of Isis for the profane and those who have not risen through the apprenticeship to mastery?
Sure, but that idea was put to a practical test and failed laughably. The US government did everything humanly possible, within (broad) interpretation of the law, wartime emergency powers, and almost unlimited willingness to spend money and use force to restrict the knowledge of physics necessary to build atomic bombs from the moment of inception of the MED.
And how did that work out? The Soviets had a bomb within 4 years of Trinity, and even if everything had been published openly it would hardly have taken them much less time, just given the necessary construction of industrial plant and plutonium farming. It's not clear the enormous security effort delayed Soviet acquisition of the technology by a month, let alone the decades you'd need for this to be any kind of plausible idea in the real world.
Indeed, I can't think of any recorded historical cases of secrecy retarding the development of atomic weapons by any nation that is willing to go to the (very large) expense involved. Nor can I think of any other dangerous technology that has ever been kept secret for any significant length of time, once it is known among a modest group of individuals. Zero-day exploits are yet another example.
I find this rationalization increasingly tiresome as well.
People discussed how the social sciences suck since the freaking 1980s without the gay "Muh vulnerable groups" tones that ooze out of this article (characteristic of activist "science" or "tech").
When Philip Tetlock showed that most political "experts" are no better than flipping coins on average, he wasn't raving about how that harms queer folx. He talked in detail about how they got predictions wrong, how they re-wrote their predictions after the fact to make it seem like they got it right, how it's a disaster that people like this are in charge of most governments and other powerful organizations, then he discussed pretty actionable measures to hold supposed experts to better objective standards. At no point did Tetlock ever advise "if your experts are saying the wrong things about $PET_GROUP, that's a clear sign that they're wrong".
There is not a single line in this article that says that only wrong or fraudulent social science should be rejected. (This is a challenge, find me a line that you think says otherwise and if 5 people agree with you I will say that I'm a dumbass who can't read.) In fact, almost every single paragraph begins with the (implicit or explicit) acknowledgments that "harmful" science may be perfectly true and pass all traditional tests of good results in its respective field, and that still doesn't make it okay or publishable.
From the TFA:
>Sexist, misogynistic and/or anti-LGBTQ+ content is ethically objectionable. Regardless of content type
So, monkeypox is 98% a gay epidemic and was started by the sexual practices of West European gay men. This is, by any reasonable interpretation, an "anti-LGBTQ+" fact, it relates a negative thing, a disease, to the coddled population and its lifestyle. Reality just so happens to be anti-LGBTQ+ sometimes, and this fine article is saying that LGBTQ+ feelies override reality. Or are we allowed to notice only anti-LGBTQ+ epidemics but not other anti-LGBTQ+ things? They didn't mention any such exception, and I find it hard to see how epidemics differ from any other unpleasant and unwoke fact enough to merit one.
Finally, it's amusing and instructive to see the kinds of groups that they say can be harmed by research.
- Ctrl-F for "men", manually skipping irrelevant results; the only relevant result: "Researchers are encouraged to promote equality between men and women in their academic research"
- Ctrl-F for "misandry" or derivatives, 0 results. Ctrl-F for "misogyny" or derivatives, 3 results.
So apparently, Science Must Respect The Dignity And Rights Of All Humans, but we only need to single out certain very specific groups, and only the 50% or so of humans whose problems nearly all mainstream media already talks about incessantly; the other 50% or so of humans can fuck right off. We might care, we most probably don't, but the certain thing is that we won't even mention them, except to provide contrast for one of our $PET_GROUP. Yay equality.
Have you actually read the recommendations? They don't say anything about not reporting facts about current monkeypox cases being 98% among men who have sex with men. It's only if you say negative things about gay people that you would be violating the policy.
I did read the article; I posted it in a previous open thread (it's from 18 Aug, after all). Hence my bold challenge above; feel free to claim it.
Any sufficiently advanced "I Don't Want To Censor X, I Just Want People To Say X In Certain Very Specific Ways" is indistinguishable from "I Want To Censor X".
What *IS* "Negative Things About Gay People"? Is it slurs? Was that not already banned in academic publications? Is it "Gay People Have More Promiscuous Sex Lives And Thus Spread More Diseases"? I doubt even this is tolerated in Nature, and it's a fact.
I have extensive experience with at least 3 distinct types of authoritarians, and every single act of censoring by them is always, *always*, justified by "This Is Not Censoring, You Can Still Say Those Things, You Just Can't Say Them In Certain Harmful Ways". The disallowed Harmful Ways are never elaborated on or clarified any further. Indeed, in practice, every single Way of expressing Those Things turns out to be Harmful and disallowed according to them. It sure is a very strange coincidence to Not Want To Censor Things when your (very vague) guidelines end up Censoring Things anyway. Some bad uncharitable folks might even accuse you of meaning it.
Here is a question to chew on: why didn't this article cite examples of bad phrasing that they don't want in their journal (preferably with a suggested good phrasing of the same general meaning next to each)? It shouldn't be that hard, should it?
I am pretty sure there are already publications in Nature Human Behavior that make the point you claim would be banned (that many populations of gay people have more tightly connected sexual networks than straight people, and thus that certain infections have an easier time spreading in these networks). No one would publish a paper whose conclusion was *just* that, because that is a well-known point already and publication needs to add something.
1- Muslims wanting to censor criticism of the Hadiths (collections of written-down, originally oral traditions and stories about Mohammed and his companions, wives, etc.).
2- Proponents of a military dictatorship wanting to censor criticism of the "achievements" of the regime (consisting of ugly and ill-planned urban projects, like new cities in the middle of nowhere and new bridges for regions that didn't need them).
3- Feminists, wanting to censor any discussion of male issues.
In all 3 cases, the authoritarians never admitted they are trying to censor things.
- Muslim sheikhs and imams always maintain that you can certainly *say* things about Mohammed and his life and the Hadiths about them; only, if those things are bad, you are a very bad person and deserve bad things to happen to you. You also have to cite examples and arguments from "approved" sources only, not, for example, the bad scholars with bad opinions.
- Military authoritarians insist that criticism is good for the nation, if only it's done in good faith and with accurate information. It turns out that good faith and accurate information are suspiciously correlated with not criticizing the Comrade In Chief: all good-faith, accurately informed critics begin their criticism by singing his praises, and their criticism amounts to saying that the regime's only flaw is that it doesn't have 50 of him, while all bad-faith, misinformed critics happen to think he's an incompetent and genocidal dumbass.
- You can talk about Men's issues in feminist-controlled networks and conversations, but only if you acknowledge that it's all their fault, they deserve it anyway, feminism is never responsible for any single bit of it, and more feminism will be good for them.
"fact, it relates a negative thing, a disease, to the coddled population and its lifestyle"
Agreed. My first thought on reading the article was that half of epidemiology would be censored by these criteria.
In addition, "identify content that potentially undermines the equal dignity and rights of humans of all races/ethnicities" sounds like it would censor any study that evaluated the effectiveness of a quarantine, since a quarantine inherently limits the rights of those people quarantined. Do we _really_ want to hide evaluation of which quarantines worked and which failed?
You have significantly mistaken the point of the article. It said nothing about holding any research to any higher standard of proof; indeed, no issue of reliability or testing thereof is discussed anywhere in the article. The article begins by assuming that the research publication to be considered is factually based and contains no error or fabrication; that is, it is otherwise suitable for publication in Nature.
I mean, that would make sense, right? Why would Nature ever say "Hey guys, ordinarily we might publish articles that contain some pretty iffy data and suspect observations, but in certain cases we won't for the following reasons, we'll want to have some extra verification then." It would call their existing publication model into deep question, for them to suggest that they would *ever* publish *anything* that they weren't persuaded was objectively true and well supported by its data.[1]
What the article addresses is two things:
(1) The style of presentation of the research. They lay out conclusions that should not be asserted, and styles of discussion that should not happen, arguing that these statements and ways of framing discussion can do harm that outweighs the value of the knowledge gained by reading the paper (presumably because of the weight of "science published in Nature" behind the forbidden language).
(2) Whether the research, even if factually based, and even if certain conclusions or implications are readily apparent from the observational data, should nevertheless be denied publication, again because the social harm exceeds the social value of discovering some new facts or other.
It is, in short, an argument that some truths are too dangerous to publish, and some ways of speaking about the truth, or discussing it, are also too dangerous to publish. It says nothing about any issue of reliability.
----------------
[1] They may well be wrong about that, and future discoveries may prove as much, but that happens and everyone understands it. It doesn't change the basic fact that no journal ever publishes anything the editors don't think, *at the time of submission*, is true to the best of everyone's ability to know.
It was good when PNAS no longer claimed they actually published the "highest quality scientific research", although people could still quibble over whether that's honestly the goal they're attempting to achieve.
It is relevant that it really is possible to "prove" anything, statistically, if you have enough time and money: run any study N times and only report the results of the attempts which support your position. If guarding against that were the intent of Nature's decision, it would be clearly defensible.
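The file-drawer mechanism described here is easy to simulate. Below is a minimal sketch (my own illustration, not from the thread): run a "study" of a nonexistent effect many times and count how many runs clear a conventional significance threshold by chance alone.

```python
import random

# Simulate a null effect: a fair coin, "studied" 1000 times.
# Each study flips the coin 100 times and asks whether the head count
# clears a one-sided p < .05 bar (z > 1.645) against the null p = 0.5.

def run_study(rng, n_flips=100):
    heads = sum(rng.random() < 0.5 for _ in range(n_flips))
    # z-score of the observed head count under the null hypothesis
    z = (heads - n_flips * 0.5) / (0.25 * n_flips) ** 0.5
    return z > 1.645  # "statistically significant" purely by chance

rng = random.Random(42)
published = sum(run_study(rng) for _ in range(1000))
print(f"{published} of 1000 null studies came out 'significant'")
```

Roughly 5% of runs of a pure-noise experiment come out "significant"; report only those, file-drawer the rest, and you have "proven" whatever you set out to prove.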
If the argument here is “social science is all made up anyway, so we might as well just insist on made-up stories we like”, then I don’t accept the premise. “The overwhelming majority of social science is junk” seems very overblown. Even if it were true, Nature should be publishing the stuff that isn’t junk, not making the problem worse.
Well, for one, they should have higher standards for the research, and not wave through crappy, ideologically confirming work; also, their conception of what is "pro-social" is likely to be completely ideologically skewed in one direction and quite narrow, creating a major temptation to suppress even good, true research on ideological grounds.
I think the issue is how broad and ideological a lot of the definitions are.
>produces an antisocial conclusion
Research finding that widespread emotional abuse would lead to better X is antisocial. A finding that "maybe women aren't as good at throwing" (and there has absolutely been even tamer stuff than that suppressed) is hardly antisocial.
And I absolutely do think it trickles into the hard sciences a bit. Especially with regard to anything that touches on biology/medicine.
It's close to the opposite of newspeak, actually. It's the original meaning of the word, before it became more commonly used as a synonym of "unsociable".
Hmmm. Not my take, so could you unpack that a bit? What makes this a 'reasonable' move? What's the 'unreasonable' step they could have taken, but didn't? What are the safeguards to prevent misapplication?
Sorry, second question: in your answer, you seemed to say that you agreed with the new guidelines' definition of "antisocial". Is this true, and if so, could you expand on that?
It's reasonable in the sense that I don't think they're lying about their aims - there is definitely human psychology research that has had a negative effect on specific groups, and on society as a whole (i.e. antisocial, as opposed to pro-social), and it's not a bad idea to at least make an effort to avoid that. Specifically, there's a rich history of junk science being used to justify antisocial and discriminatory ideologies - phrenology and Nazi race science are some hopefully uncontroversial examples of that, though they're obviously more extreme than what's being discussed here.
>Specifically, there's a rich history of junk science being used to justify antisocial and discriminatory ideologies
Yes, but this is exactly the problem. Nazi race science was a case of an ideologically-censored scientific establishment dutifully rattling off "facts" that legitimated the designs of those in power.
You can't know whether this current censorious paradigm will wind up as discredited as that one.
(It's hard to point to what's wrong in current understanding for the very reason that it's current, but to give some examples of modern policies causing harm to a group that may or may not be counterbalanced by their theoretical benefit: affirmative action has significant overhead and denies jobs to those apparently most qualified; transition therapy is generally sterilising.)
I would predict with relatively high confidence that even the least-scrupulous portions of wokeism will not be regarded as poorly as Nazism. Contrary to what either party says, the US is not actually a totalitarian state in any sense that you could meaningfully make that comparison.
Can you give me an example of research that has a negative effect, either on specific groups or on society as a whole?
By which I mean, please indicate the journal article or book that had the negative effect. People keep referencing 'Nazi race science' but what effect do those articles have today, and what is the recent research that is of concern?
Any specific examples I dig up are likely to be controversial, and risk starting a discussion on HBD, which Scott has indicated is a topic we should try to avoid unless necessary. To that effect, "The Bell Curve" is the first thing that comes to mind, which I don't want to discuss for that exact reason.
My point is that it's *reasonable* to not want that sort of thing to be published unless it's beyond reproach, even if some here will disagree about whether that is *correct*.
Slight tangent, but every time these sorts of topics come up, it reminds me of Eliezer's suggestion (in jest?) in HPMOR that some ideas are dangerous enough to society that they are best kept inside the "conspiracy of science", where only those well-versed in the methods of rationality have access to them.
"The Bell Curve" is only tangentially about ancestry; its major message is that intelligence has a large genetic component, and that is a fact that should be known.
> My point is that it's *reasonable* to not want that sort of thing to be published unless it's beyond reproach, even if some here will disagree about whether that is *correct*.
Doesn't this create a chicken-and-egg problem? The scientific journals will not publish it, until it is scientifically proven. Of course it is not scientifically proven -- no scientific journal has published it yet!
I have no idea where you are coming from on this. The premises (note 1) of TBC *are* beyond reproach, and rejection of the facts published in that book has done nothing but harm to US (and Western) society.
I really expected better than this. I really thought you had something which had done actual harm in mind, and not just something 'politically incorrect'.
If that's all you have - that we should not discuss things with unpleasant implications - or if your stance is that only 'the right people' should be discussing the implications of scientific research, then holy cow, man, are you ever in the wrong century and in the wrong crowd, and yes, those guidelines in Nature are exactly what people are concerned they are.
Note 1: TBC notes that the intelligence of an individual is shaped by both environment and genetics, and that, like other genetically influenced human traits - height, disease rates, bone density - there are variations which can be detected at the group level. What to do about this, and how to assess the human worth of people who are smarter or dumber than ourselves, is left as an exercise for the reader.
Apparently, many fail at this exercise, and never intended to treat those people less gifted than themselves as fully human.
I get why it raised eyebrows, but I am not sure how important this particular journal is considered to be, how common such changes are in the life of a journal, and how common they are in journals in general.
It’s a pretty important journal because it includes much of the social sciences in its remit - also genetics. (I’ve published there.) Yes, the new rules seem pretty shocking.
Meh. To the extent it further formalizes what is already a big problem in science, it isn't good.
Part of me feels like academia is slowly being eaten from within by a modern theology. Maybe there will be some retrenchment, but I increasingly despair that in the long run the universities will not remain the preeminent place to understand truth about the world. Which is sad.
I wonder if we are slowly moving to some situation where we will need a new model, or where the university system will eventually need a massive defrocking, like the monasteries once did. The reports from my friends who are academics are not encouraging.
About half are basically on board with this kind of thinking ("I am not interested in the facts if they conflict with my political ideology", or just an outright refusal to admit the facts could conflict with it). And the other half tend to think it "isn't a problem", but if you ask them whether they would feel comfortable publishing findings that are politically charged, they quickly re-evaluate and say "no, gee, maybe it is a problem". And these are people with tenure.
Or they say, "I would only publish that if I could get some co-authors from the right demographics to deflect the heat", etc. Which is just so alien to me. These are people who generally were hard core "truth or bust" people who seem to have had their time in academia erode that rather than reinforce it.
Maybe it is not a problem if you are teaching intro Calc or whatever (though I suspect it causes issues even there all told), but definitely the ones who work in social/behavioral modeling just view whole swathes of hypotheses as off limits.
But instead of the issues they are unwilling to look into being "kill all the poor", they are incredibly tame things that shouldn't be controversial: "Do gender roles in abstract sexually reproducing agents increase fitness?", or "Do systems for punishing cheating improve or degrade test result quality on X tests?".
I'm interested more in learning more about pharmacology, and a lot of intro books I'm seeing on amazon are geared towards clinicians rather than interested laypeople (ie, people like me). Would any of you all have any book recommendations?
I took a look. I see why they need to come here to solicit feedback. Their feedback page is broken! Also (at least on Android and the Brave browser), their mobile site is broken! It only lets you enter text when you turn on desktop mode. My advice is to hire testers.
That's one bug I'd never find - I do as little as possible on mobile, especially when it involves entering text. Keyboards and large screens are just so much easier to deal with.
Hobby horse aside, I find it difficult to imagine how a web site could help people make friends, let alone monetize doing so. Acquaintances, sure. Friends in the sense of "someone who follows my blog and/or whose blog I follow", sure. But a largish percentage of people (well greater than 50%) report they can't see a person as having human feelings without meeting them face to face, and they behave accordingly toward people they experience as text on a screen. Maybe video interaction might enable a larger proportion to experience each other as actual people. But that's still a long way from becoming actual friends.
I think the idea is that by organising people into groups around shared interests, Surf is going to drive in-person meetups that will lead to the formation of new friendships.
If I'd expected anything, it would be something like a dating site, but for friends rather than romantic partners. Mostly though, I didn't know, and feel like I still don't.
Would there be a substantial long-term impact on culture if somehow it gets established beyond reasonable doubt that someone other than Shakespeare wrote everything that's traditionally attributed to him? Or likewise for any other household name? Do people care about such "questions" for about the same reason that they gossip about celebrities, and it's just as trivial?
I am reminded of the old "history of the world according to student bloopers" document which used to go around, back in the days when memes were in text form. It claimed "Shakespeare's plays were not written by Shakespeare but by another man of the same name".
It was a known fact for hundreds of years that the plays attributed to "William Shakespeare" were written by him. This was known to his friends and family, to the players he worked with every day, to the theatre management, to the printers who prepared his plays for publication when the Company allowed that, and to the other playwrights of the day with whom he socialised, and in a couple of cases collaborated. The idea that "somebody else" wrote the plays is very modern, and was partly a consequence of the decadent phase of Romanticism (lone, neglected poet only recognised after his death) and partly simple snobbery (no commoner like Shakespeare, from a provincial town whose father was a small businessman can ever be our great national poet.)
If somehow the contrary could be proved, it would up-end all literary history, all literary criticism, and all dramatic tradition, and reveal a literary conspiracy unmatched in the annals of history. How, for example, would things have worked in practice when Richard Burbage says one day, just before Hamlet is acted for the first time, "Will, that speech is a bit long: can you shorten it by a third and take some of the difficult language out?" And Shakespeare goes rushing off to consult with Sir Francis Bacon, Queen Elizabeth, the ghost of Christopher Marlowe, or any of the other dozens of writers who have been proposed. Battalions of critics and historians kept busy for decades, popular books by the hundreds ...
He was also illiterate, unlike all other actors, so it was even harder for Shakespeare. The real difficulty was when he was writing the later plays and had to consult with the Earl of Oxford who was dead at the time, and John Fletcher, who wasn't.
A Shakespeare by any other name would write as sweetly.
Most of the interpretive framing around Shakespeare the man is junk. Having a different name to hang all that hopeless projection on wouldn’t change a thing.
Yes, I would consider this essentially trivial, the sort of academic question that most obsesses lesser minds, like “Was homer actually a conspiracy?” or “Who was the *historical* Jesus/who *actually* wrote the Gospels?”
I'd say that the historical Jesus thing is somewhat more interesting, for anthropological and history of religion-type reasons. Whereas Shakespeare is only relevant insofar as he wrote the works attributed to Shakespeare, arguments that rest on the accuracy of Biblical descriptions are an important part of Christian apologetics.
It isn’t, because it’s a field that exists solely on the pomposity of textual critics.
If there was any evidence that wasn’t hot air, it would be interesting. Pretending that you can slice and dice the texts to a More Historical version is so much academic make-believe, as rigorous as a seance and as empirical as faith healing.
I doubt there'd be any major impact on culture, because most people don't care that much. For most people who believe the "it's not really Shakespeare" theory, it's just a bit of trivia they pull out to sound smart at parties.
It doesn't appear unlikely that the body of work we attribute to Shakespeare is actually the work of several Shakespeares, quite possibly in different eras.
It is accepted in biblical scholarship that most, if not all, of the gospels were composed by quite a few writers, although Q is thought to be a source of two of them (Matthew and Luke). The evolution, as it were, of these narratives has been studied for centuries, so it's not at all unreasonable that Shakespeare's or even Chaucer's works were serially collective projects.
And even if that were true of the gospels, why would it follow that Shakespeare collaborated on most of his plays? Except for the few where we know he did, he largely wrote alone. They certainly weren't written over "centuries": the First Folio is generally considered canonical. This doesn't stop different interpretations for the stage, or literal Bowdlerized versions later on.
There are lots of sources for Shakespeare and Chaucer, which are well known by the specialists. There are no known earlier versions of Shakespeare and Chaucer by different authors. If there were, it would be huge news.
Growth mindset, Wyclif's Dust! Imagine the vast number of new academic jobs created by the "Shakespeare Collective" Studies Departments! Everybody could put in their contender for 'who wrote Shakespeare?' and the beauty of it is, nobody need be wrong!
From Kit Marlowe to Lizzie herself, anyone and everyone could and can be part of the Collective! And that's only the beginning - imagine the increase in gender and queer studies when we have more than one white guy to write papers about!
As for Chaucer, now come on - are you really maintaining that some customs official could be the Father of English Poetry? 😉
"Would there be a substantial long-term impact on culture if somehow it gets established beyond reasonable doubt that someone other than Shakespeare wrote everything that's traditionally attributed to him? Or likewise for any other household name?"
Unless that 'someone' already has an identity I don't see how it could. The definition of Shakespeare for most people is, "The guy who wrote those plays and sonnets."
If it turns out that Shakespeare was really Cervantes, then things get more interesting.
I’m curious what you and your readers think of the casting controversies in the Rings of Power series. I wrote a piece explaining why fans might not like it without necessarily being “racist”.
Haven't been watching Rings of Power, but generally:
If there's a high-profile casting controversy about a movie or TV show, and the controversy is that they "whitewashed" an ethnic character by casting an A-list white actor, then it *might* be a good movie or show that just wimped out and chose A-list marketability over authenticity once they found that e.g. Will Smith wasn't available. Or it might be crap. Fortunately, in that case you can probably get fair warning from the reviews.
If there's a high-profile casting controversy about a movie or TV show, and the controversy is that they recast originally-white characters as colored, or originally-male characters as female, it should be presumed crap until proven otherwise. There will always be grumbling in the dark corners of the internet when that happens, but it only becomes a high-profile controversy if the producers signal-boost it with their response. Which they do because it preemptively discredits *other* criticism of the production, and gains them unearned favorable reviews because almost nobody in the mainstream press is willing to risk offering an unfavorable one.
My beef with the casting is more general than that. And the doubling down on "if you criticise the show, it's because you're a racist" didn't endear them to me. For all the talk about Diversity and Inclusion, we've got what?
One (1) Black Dwarf, an invented and non-main character, surrounded (so far as I've seen) by white Dwarves.
One (1) mixed-race Elf (the actor is Puerto Rican, so Black Hispanic, although I'm probably getting the fine nuances of US racial classification wrong) amongst, you guessed it, majority white Elves. And not a major part, although he probably will be prominent in the sub-plot about the 'forbidden romance' (which I don't see going anywhere) and the Adar Orc-father bit.
The Harfoots. Oh, let me get started on the Harfoots. Apparently none of them have ever discovered the use of a basin of water to wash their faces. And they've all got cod-Irish Hollywood diddley-eye accents, so thanks Payne and McKay for keeping alive the hoary old "pig in the parlour Irish" stereotype. I feel a Dylan Moran clip coming on:
Why couldn't they let Lenny Henry keep his real accent, it's a perfectly fine accent? Why not let all the Harfoots keep their accents, it would fit with their multiracial nomadic tribe thing. But no, they must be "Faix and begorrah, us have only each other, so we do, bejabers".
Celebrimbor is too old. The actor may be perfectly fine, but he's too old for the character. Celebrimbor is Galadriel's first cousin once removed, and if we're getting Young Piss And Vinegar Galadriel (and apparently we are, like it or lump it), he should be equally young or younger. The only reason I can see for this characterisation is that they're going for "all the old and/or white guys are wrong, Galadriel is right and the only one who is right AT ALL TIMES" storyline. Oh, and that they've never read the books, but I think that three episodes in, that's apparent.
Gil-galad is allowed look like an Elf, which is something. There have been suggestions online that it's a rights thing, that Warner Bros studio has its lawyers on a leash slavering like wargs at the merest hint that the visuals of the movies will be copied by the TV show, so they can't make their Elves look like Peter Jackson's Elves.
But the responses in the media about how Elves live so long, they'll change over the years, aren't really satisfactory. So I suppose Elrond and Celebrimbor are going through their teen rebel phase? "You're not the boss of me, I'm going to cut my hair short like an Edan!"
The one place they *could* legitimately have cast full of brown-skinned people, which would not have contradicted canon, was their invented Southlands village of Tirharad (after all, if you're going to plonk it down with both Harad and Khand on the borders, and populate it with descendants of the Men who fought on Morgoth's side, well duh, right?) It would make the modern political references to racism and prejudice even more pointed, to have an occupying force of Westerners watching over brown-skinned people who legitimately felt aggrieved that they were being placed under suspicion for the sins of their ancestors.
But that wouldn't have permitted a white guy to be racist to a black Elf, so we get Tirharad pretty much all-white, except for Invented Female Character To Be In Forbidden Romance, where the actress is Iranian (does that count as not-white? To me she would be white, but if we're going by 'Middle-eastern is not white' then okay) and her son, who is slightly more not-white (the actor's father is from Indonesia).
(As an aside, I'm going to say here that I think Halbrand is not Sauron, that he *is* some 'King of the Southlands' and that he's Bronwyn's missing husband and Theo's father, which is why Theo's blood activated the black sword because of the whole 'bloodline in the service of Morgoth' thing).
Númenor is also multi-cultural, but that doesn't stick out so badly since most everyone is just a spear-carrier rhubarbing away in the background, except when called on to beat up Halbrand. Most of the important characters are white except for Tar-Míriel, and I honestly don't mind that too much. At least she's human playing a human character, and at least they put some effort into making her look like a queen. I'm much more annoyed about the alterations to her character, turning her into a Queen-Regent and eventually some kind of Warrior Queen, which is going to be tough to explain how she gets the throne usurped out from under her by Pharazon, but eh. I think I see the shape of the plot they're going for here, and if only they could write a decent script (instead of pseudo-profound bollocks about sinking rocks), then the stakes would seem appropriately high - that the monarchy *is* under threat by the King's Men and that the mind of the people *has* turned against friendship with the Elves, so that a popular (or populist?) uprising headed by Pharazon would mean the spectre of civil war (and that she might not want to fight her own people, or even that she can't trust her army *would* all follow her).
I think that's about it: the rest of it is that I have a feeling Meteor Man might be Saruman, not Gandalf, but I wouldn't put it past these chuckleheads to have him be Gandalf.
Mostly it's that the pacing is terrible; we're three hours in and not very much has happened. The scriptwriters can't write decent dialogue to save their lives. And it's both too dependent on lore for the casual audience (how are they supposed to know about Valinor and Morgoth and who Elendil is and all the rest of it, given that the plot jumps around the map from one character to the next without establishing anything?) and too divergent for people who are familiar with the canon (e.g. the likes of me complaining about what they did to Finrod, what they did to Celebrimbor, what they did to Elrond - 'you can't attend the council because you're not an elf-lord'? Bitch, his father is literally THE MORNING AND EVENING STAR) and most of all what they did to Galadriel. They have about five episodes left in this first season; if they want a season two they better give her some self-awareness and character-growth sharpish, because right now she's an unpleasant brat who grimaces when faced with an argument based on reason and logic.
Elendil is likeable, though, how did they manage that? Or miss that, rather, to have one character in the entire set of scenes on Númenor who wasn't something you wanted to stab in the face? And his invented daughter, who is there to give "female energy" to his household, apparently. So she makes sure there are plenty of throw pillows and clean shirts and potted plants and scented candles, mmm?
"Celebrimbor is Galadriel's first cousin once removed, and... he should be equally young or younger."
I haven't seen the show but want to point out that "once removed" marks a difference of one generation: the child of your first cousin is your first cousin once removed, and you are theirs. So Celebrimbor should either be old enough to be Galadriel's father, or young enough to be her son.
The latter, since his grandfather is Galadriel's uncle, and his father is her first cousin. How much younger is hard to pin down, because there aren't any definite birth dates stated for either of them; they were both born in Valinor during the Years of the Trees.
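Since the thread keeps doing this arithmetic by hand, here's the standard genealogy rule as a quick sketch: for two descendants of a common ancestor, the cousin degree is one less than the smaller generation depth, and the "removal" is the difference in depths. The depths in the example follow the family tree described above (Galadriel two generations below Finwë, Celebrimbor three); the ordinal naming is deliberately crude.

```python
def cousin_relationship(depth_a, depth_b):
    """Name the cousin relationship between two descendants of a common
    ancestor, given each one's generation count down from that ancestor
    (1 = child, 2 = grandchild, ...).

    Standard genealogy rule: degree = min(depths) - 1,
    removal = |difference of depths|.
    """
    degree = min(depth_a, depth_b) - 1
    removed = abs(depth_a - depth_b)
    if degree == 0:
        # Same direct line: siblings, or an uncle/aunt-type relation.
        return "siblings" if removed == 0 else f"{removed} generations apart (uncle/aunt line)"
    suffix = {1: "st", 2: "nd", 3: "rd"}.get(degree, "th")
    name = f"{degree}{suffix} cousins"
    if removed:
        name += f" {removed}x removed"
    return name

# Galadriel is 2 generations below Finwe; Celebrimbor is 3:
print(cousin_relationship(2, 3))  # -> "1st cousins 1x removed"
```

This matches the comment above: a shared grandparent (depths 2 and 2) makes first cousins, and one extra generation on either side adds one removal.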
I think I understand the casting for the storyline they *seem* to be pushing but really Celebrimbor *should* be young and ambitious and more easily taken in by Annatar than Elrond and Galadriel and Gil-galad who all suspect and reject him.
Given how long elves live, generations are not a very reliable indicator of relative age. It’s quite possible for your father to have sired you a few hundred years after his nephew had a few kids. Even for humans, being significantly younger than your first cousin’s child is not impossible, even if it’s rare.
(I’m not claiming this happens for this particular pair of characters, either in the show or the books, just that it’s eminently possible.)
Heck, Galadriel could in principle just have a baby sometime after LotR ends, and it would be younger than all the (hundreds of?) generations from the last age or so that sit nominally lower on the genealogical tree than it.
Here's one simple possible explanation: in the show's version of middle earth, hobbit skin colour inheritance works more like eye colour does in the real world, so that it's possible for two parents to have a child with a completely different skin tone.
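For what it's worth, the eye-colour analogy does hold up in a toy single-gene model: two parents showing the dominant trait can both carry the recessive allele and produce a child showing the recessive one. A minimal sketch (one gene, two alleles; real skin colour is polygenic, so this is purely illustrative):

```python
import itertools

def offspring_phenotypes(parent1, parent2):
    """Enumerate possible child genotypes for one gene with two alleles.

    Genotypes are two-letter strings; the uppercase allele ('D', dark)
    is taken to be dominant over the lowercase one ('d', light).
    A toy Mendelian model, not a claim about real skin-colour genetics.
    """
    results = set()
    for a, b in itertools.product(parent1, parent2):
        genotype = ''.join(sorted([a, b]))  # normalize allele order
        phenotype = 'dark' if 'D' in genotype else 'light'
        results.add((genotype, phenotype))
    return results

# Two dark-skinned carrier parents ('Dd') can have a light-skinned child ('dd'):
print(offspring_phenotypes('Dd', 'Dd'))
```

Under these assumptions, a quarter of such a couple's children would show the recessive phenotype, so a hobbit family with a differently-toned child needs no more in-world machinery than real-world eye colour does.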
This is a far smaller divergence from reality than magic existing. We accept that magic exists in the show universe even though it doesn't really make sense because it makes the show better. Similarly, allowing hobbits to be of different races reduces the restrictions on who can be cast and so increases the expected quality of the casting, on average making the show better.
Yep - in particular, having the very stringent quota that *100%* of characters who were white in JRRT's imagination must be played by white actors would tend to drive down quality.
It's literally what OP was proposing. (Well, almost. I got the impression OP would have been satisfied so long as 100% of the hobbits were played by actors of the same race, regardless of what that race was. But that's still very stringent.)
I believe it's also how casting was done in the original LOTR film trilogy.
"Similarly, allowing hobbits to be of different races reduces the restrictions on who can be cast and so increases the expected quality of the casting, on average making the show better."
Except who are our two main viewpoint Harfoot characters? White Poppy and White Nori. Nori has a black stepmother, and the leader of the tribe is black, but that's our main characters; the black leader and then our two white female stand-ins for Frodo and Sam.
To be blunt, the only actor I think was cast for acting ability and not merely ticking the "we have reached our quota of non-white casting" boxes is Lenny Henry. I can believe he was cast because his colour means nothing when it comes to the character. The rest of them, I can't get over all the fa-la-la about the diversity and modernisation and representation chat. So the black Dwarf is not cast because she's the best actress for the part, but because she's a black actress, and so on.
I got a bit miffed about having dwarves with high amounts of melanin as they are supposed to spend much of their lives underground, so why would that ever evolve?
There might be a misconception here wrt actual humans. High melanin did not evolve in response to intense sunshine. Instead, it is the base state for humans - normal humans are dark-skinned. Low melanin evolved for humans who migrated to high latitudes where sunshine was insufficient to generate vitamin D3, which is essential for life.
I wouldn't care about this if they wrote in a line explaining she came from a different Dwarven stronghold or had more visible black Dwarves around her. The problem is one single solitary black character in a sea of otherwise white ones does make that character stand out and if there is no explanation for it in-world, then it breaks immersion.
Let Dísa come from the Blacklocks who originated in the East. Intermarriage between two royal houses of the Dwarves. There you go, guys, when you are doing all your publicity material, just slip this in as a reason why she has different skin tone to her husband. But no, it's all "first black dwarf!" and then "that's racist!" in response to criticism.
Because melanin production evolved in our African plain dwelling ancestors as hominids lost more and more of their body hair. Our closest ape relatives have thick, dark hair and white skin. Looking at hairless chimps can be instructive (https://www.bbc.com/news/uk-england-leicestershire-36924808.amp). Also they are ripped AF, which is sort of fascinating to look at.
The point is that Homo didn't start producing melanin until it needed it, and I don't think that dwarves would either.
The diverse casting has mostly been applied to extras and show-only invented characters. I basically lump it into the same category as "Numenorean beards": if you go by the lore, then all of the Numenorean characters, Aragorn, Boromir, and Faramir should have no facial hair at all. LOTR and ROP do give some of them beards, and it doesn't really matter because whether or not the adaptations feature bearded Numenoreans has no real impact on either its core themes or even really the characters themselves.
"The diverse casting has mostly been applied to extras and show-only invented characters."
That is what annoys me most about the whole thing. Amazon are pushing back on criticism very heavily with the "oh, so you're all a bunch of racists, eh?" and yet, for all their talk of 'representation', the black characters are minor ones, invented out of whole cloth.
So why not a black Galadriel, then? After all, if the "you accept dragons and wizards, why can't you accept a black Elf" argument is valid, then it's equally valid to cast a black actress as Galadriel. Major, major character played by a black person, huge representation, big important heroic role, not just shoved into a corner as a minor part or as a villain, right?
Because even they know it wouldn't effin' fly, and saying objections were racist would be laughed out of it.
I would totally watch a drama in which *all* of the Elves were cast as black, and all the Men as lily white, and they had to figure out how to work together in the struggle against Morgoth and/or Sauron. That would put a fascinating gloss on the inherent tribalism and mutual suspicion that is (only lightly) mentioned in Tolkien's original work, that arose from very different lifestyles. I think this could be done very well -- you could have enlightened members of both species who worked to overcome mistrust, you could have assholes who made it worse, you could have a lot of people just trying to muddle along.
It would certainly be a radical departure in some ways from canon in style, but maybe not that much in substance in a certain sense -- I think Tolkien did (albeit fairly lightly) consider the theme of difficulty in cooperation and the dangers of tribalism in his work. It would be -- well, could be, in skilled hands that I admit would not be super likely to actually handle it -- an extension that is not a faithful recreation of the original but also didn't spit in its eye. It also wouldn't reek of bullshit tokenism -- there would be a powerful narrative *reason* for making race-based casting decisions.
I think one takeaway I have had from a variety of projects is that generally if the casting and writing is concerned about representation, it often isn't as concerned about actually being, you know, good.
IDK it is an effective marketing strategy these days, plus free PR when you can spend time calling out a few "twitter racists", so it is hard to blame them.
But often in say games, if someone is selling you on a game with "It has a female protagonist", the game is on average kind of below average. Because if the game was good, they would just sell you on that.
It does seem kind of weird that a movement that is so concerned with "appropriation" is also very excited about appropriating whatever it can.
I get up a little earlier, but I do try to have at least 2 breakfasts before morning tea. Mind you, I'm only 5 and a half foot tall and may have a lot of hobbit in my genes.
What does deeply disappoint me is that the Elves don't behave like Elves, Galadriel is Batman because apparently that's how a "strong woman" acts these days, the plot is way off canon, and everyone is making dumb decisions. I guess we need r!Rings of Power.
That sort of "strong female character" is very much early 2000's-era Joss Whedon or Quentin Tarantino. People who actually care about female representation have moved on to "flawed female character" as a more useful test.
To respond more directly to your piece, you say "If black hobbits, elves, and dwarves exist... It requires disbelieving in evolution and approaching the series with the mind of a biblical creationist."
But biblical creationism isn't far off from canon. Eru Iluvatar created Elves and then Men (some of whom became [evolved into?] Hobbits). Aule created Dwarves. Morgoth magically transformed Elves into Orcs. Other sentient creatures have more complicated origins. So we have both creation and possibly evolution in play, as well as magical transformations and other forces. Any of those could explain differences in skin tone.
Assuming that Men had time to evolve into Hobbits, I'd imagine they also had time to make some long migrations, evolve skin colors accordingly (much quicker than speciation), and migrate on back.
Furthermore, Hobbits had three ancestral strains: Harfoots (depicted in The Rings of Power), Stoors, and Fallohides. Harfoots had the darkest skin and Fallohides had the lightest skin, canonically. Assuming that LOTR-era Shire Hobbits represented an interbreeding of all three, it's to be expected that Harfoots would have darker skin than Shire Hobbits.
I was just about to respond along these lines, but you did it better than I would have.
Arguments from evolution just don't do it for me inside fantasy settings because there's no guarantee that any of the ordinary genetic principles hold. Maybe regression to the mean doesn't happen because a deity had decreed secret complex dance moves in the great sarabande of alleles. Or something. Otherwise, you'll have to explain the biology and energy-economy of dragonfire (and flight, etc., etc.) instead of falling back (as OP does) on 'mythology familiar to Northern Europeans', as if that were somehow more dispositive than 'racial demographics familiar to North Americans'. Having it both ways is asking a little too much.
(I actually liked the analogy of the Lego-breathing dragon, but I think it overreaches - in terms of explanatory burden, it would correspond better to a cyborg hobbit.)
I'm not bothered by the casting choices. The main thought it brings up for me is: if there is racial diversity within a single village or city of a rapidly reproducing species (like humans), I want to know the backstory. Was there a recent merger of groups that hasn't had time to homogenize yet? Or is there longstanding prejudice that prevents them from intermarrying? How does that play into the worldbuilding and storytelling? If that backstory is totally ignored, that's disappointing.
With the dwarves, it seems straightforward. We only really see three of the seven clans, but we know they communicate and trade with each other. Plus IIRC from the Silmarillion Durin's Folk are basically a mix of dwarves from other clans since he awoke alone.
Hobbits are harder, but we don't know how many groups of Hobbit-like creatures are wandering around or how frequently they interact. Plus the Hobbits have always been deliberately anachronistic in Tolkien's works - they feel like a slice of the 19th century British countryside dropped into High Fantasy, and even the Harfoots kind of feel that way.
With Arondir, it's basically "elves reproduce slowly, he's probably from the Teleri group that was largest and pretty spread out before they kind of come together again later, and there's diversity among elves. They might -mostly- look fair-skinned and dark-haired with no beards, but you do get odd exceptions like Nerdanel's father from the Silmarillion who was both red-haired and bearded."
Excellent article, though I don't like even the indulgence of using the term "racist" like this uncritically, because it reinforces the belief that "racist" is a meaningful and valid word/concept.
When you say "its not racist", you're making it seem like "racist" is a meaningful thing that somebody can be, and that it would indeed be bad if those critical of the show were indeed being "racist". You're clever enough to know this isn't true, but when you use their language like this I think you've already lost.
There are no principles at play here other than black nationalism and corporate virtue signalling. These people really just do not care at all about any of this nuance. These are the same people who lost their minds over Scarlett Johansson being in Ghost in the Shell. The correct response to accusations of racism is *not* to sincerely proclaim that "no, it's not racist!" any more than an analogous response would be right in the case of being called an infidel by a Muslim or a counterrevolutionary by a communist. If you agree to the terms of their ideology, then you lose.
Edit: Sorry, finished editing this after you'd already responded, but the substance is still the same.
How is the term “racist” not a meaningful concept? It seems like in instances where someone holds prejudice against others purely based on race, it would be a useful thing to have the word to describe them as such.
Also, what is your proposed solution to allegations of racism? If someone called me an infidel or a communist, I could choose not to engage, but the fact is that I could also engage and perfectly explain why I am not, in fact, either of those. What makes “racist” any different in the validity of responding to it?
The problem with "racist" isn't that it has no meaning, but rather that it has a hundred mutually contradictory ones. A person can be racist for knowing crime statistics or for burning a cross in some black dude's backyard, or for any number of intermediate points between. Which makes it pretty hard to use the word to communicate meaningfully.
Because they control the language, and will change the meaning of these words in a way that maximally advantages them in the discussion. The moment you think you've pinned down a consistent definition, they will just change it so your argument no longer works. It is not a coherent concept because its meaning is constantly shifting to serve the interests of the ruling class.
For example, let's say you're trying to argue that something is racist. You show that it fits some dictionary definitions of racism, and even some of the ways the word is used in common parlance. But let's say they like this thing, and they think it is a good thing. This cannot stand for them, because their most fundamental axiom is that "things that are racist are bad." So they will just change the definition of racism, via things like literally changing the dictionary definition, mass social media campaigns, censorship of the old usage of the term, etc., so that the thing that they like is no longer "racist".
Disagree - if A tells B "you are too fat and it's lowering your life expectancy, you should lose weight" then often B should give this serious consideration, even if A said it in a mean way.
I think they are talking about the "broader use of "racist" that has become so popular. Similar to say righties calling Clinton or Obama a "communist". Does it make sense for them to take time to explain they are not communists? Maybe, maybe not.
In some ways even edifying the attempted slur with a response gives it power.
If your response to "You're racist!" is "No I'm not," you're conceding a huge amount of ground. The underlying premise of that accusation is "The world is divided up into good people and bad people. Anti-racists are the good people, and racists are the bad people." The response "No I'm not" concedes all of that and is basically an act of begging, "Yes, I fully accept your way of framing the world, but please believe me, I really am one of the good people." But of course this will never work, because the people you are begging are the same people you just conceded the right to frame the world to, and of course they get to decide who the good and bad people are.
That's what I'm saying. "You're racist" can be directly translated as "You're one of the bad ones," the response to which should be to deny this classification system on the whole, not to engage in the losing battle of arguing that you are one of the good ones.
Yes, you’re right. I tried to use quotation marks to indicate the nebulousness of the term, but I think using it is unavoidable when you’re trying to reach the other side.
I'm not buying that Amazon could have avoided or significantly lessened the controversy by only releasing non-confrontational images of the nonwhite characters.
Of course online media were going to report on the grognards and trolls - the controversy was clearly going to be a pretty good source of clicks. Not sure how Amazon could have prevented this reporting.
Right. Evidence in favor of this theory is that pretty much no one had a problem with diverse casting in the past ~20 or so years, when it wasn't done for performative reasons and didn't have as much explicit wokeness in the content itself. The negative reactions seen recently are not just "diversity -> bad", they're "diversity + explicit woke content -> evokes a mental image of the people who are pushing this stuff on us -> bad."
E.g. there are tons of universally well-liked movies with female protagonists, but these are usually the ones where the protagonist just happens to be a woman and it is otherwise a normal movie, not the ones where the whole story is about how they have it so hard because they're a woman.
"not the ones where the whole story is about how they have it so hard because they're a woman"
Oh gosh, three episodes in and this Galadriel is a thundering bitch. The only time she smiles is when she's having that slo-mo horsey ride (a Youtube review said "Maybe this is the key to her whole character, that when she was twelve, her dad didn't give her a pony"). Otherwise she is needlessly confrontational to everyone. When Halbrand, the ragged guy pulled off a raft in the middle of the ocean, can manage to be polite and diplomatic in the court of Númenor, it may be the writers hinting that he is not just the ordinary guy he pretends to be, but it just comes across as basic common sense not to piss off the powerful people who have you at their mercy.
That Galadriel, after her recital of her own titles, can't manage to be civil for five minutes is mind-boggling and infuriating. She demands everything, scowls when she doesn't get her way, and resorts to threats of theft and murder when everyone doesn't fall down at her feet. Throw her back in the Sea, Elendil, and let her swim home! With any luck, she'll be eaten by the Sea Worm and poison it to death, so two problems solved!
Or even, 'story about X who had it so hard (because X)' will appeal to some folks, but 'X has it so hard (because X)' is not necessarily a storyline with universal resonance.
> E.g. there are tons of universally well-liked movies with female protagonists, but these are usually the ones where the protagonist just happens to be a woman and it is otherwise a normal movie
Not saying you're wrong, but can you give some examples?
In terms of the original Terminator, mostly, but I feel obliged to point out that in Terminator 2 Sarah Connor goes on a deranged (as in, portrayed-as-deranged) rant about how men are evil.
"Fucking men like you built the hydrogen bomb, men like you thought it up. You're think you're so creative. You don't know what it's like to really create something... to create a life. To feel it growing inside you. All you know how to create is death, and destruction."
This is definitely a rant that wouldn't work coming from a male character.
Of course, T2's not exactly a female-protagonist movie; Sarah's credited #2, but she's the least important of the main trio (unlike T1, where she's credited #3 but is definitely the main character).
Hmmmm. I agree that Terminator wasn't a female-led film in the annoying "we've got a WOMAN in the lead, WHADDAYA THINK ABOUT THAT, BIGOTS?!" way, but I think her femininity is more than incidental - it drives a lot of her response to the Terminator. You can probably read the film as an allegory for domestic violence or something. Also, she's not the star!
Sarah Connor is a very popular movie character that men have no difficulty at all believing in and enjoying. That's the point.
It disproves the idea that men are beastly and sexist and that is why they don't like watching modern films with unappealing female characters. The problem is the character, not the audience.
For Scott's next book review contest, would it be more convenient to use a wiki where each review gets its own page?
Reviewers would be previewing their own formatting (using anonymous accounts), so there would be no more surprises about how it looks on Substack. Readers would find reviews using the random button in the sidebar. Finalists would be listed on the main page.
Possible problems: Does each image need to be uploaded? Will participation decline? Would Scott find it inconvenient to keep tabs on the wiki and Substack?
Edit: Forgot to mention what I imagine would be more convenient about the voting - voting on the page of the review itself, rather than navigating elsewhere.
I would appreciate this very much, at least for the first round. Reading a giant Google doc is really unpleasant/annoying. Anything that would load better and remember its place better on a phone would be an improvement.
Someone actually built a nice website that would give you a random review this year, that was cool (I unfortunately can't find the link right now).
I like the idea. I find large Google Docs more tedious to navigate than Wikis. I think the "Find random article" would also work better than Scott reminding people to read things randomly.
In terms of editing, the submissions could still be the way they are, and only one (or a handful) of the organizers could take care of uploading them into the Wiki, with no edits afterwards.
Would be cool to also upload all past reviews into the same one.
Would this be in lieu of actually publishing the reviews on the substack one week at a time? If so, seems like that would really stifle discussion (as everyone would be reading/commenting on all of them at once), and maybe more importantly (at least to Scott) wouldn't really be creating "content" for the substack.
Plus people would no longer be able to read directly from their emails, which seems like at least a minor disadvantage.
Back in April, Scott released them all at once in a few gargantuan Google Docs, so we already were reading many of them at once, right? I agree that commenting on finalists all at once sounds inferior to starting comments one at a time, when each finalist gets released by email.
The advantage of commenting on a wiki is faster page-loading, and use of formatting, e.g. #links to particular passages in the review. The advantage of commenting here is no need to create another account. The latter might win out.
Arguably outweighing the inconveniences of doing something new is the creation of an easily searchable repository which gains 100-200 quality-rated longform reviews per year, for as long as Scott wants to keep going.
He could also set up a few categories for important issues, so future readers could easily find e.g. all years' healthcare reviews together.
Yeah, when Scott released finalists a few people would have already read them in the "preliminary round", but the point is that most people were reading it at the same time.
And I guess, more importantly, commenting at the same time: I think these sort of discussions have a shelf-life: if I write a comment and someone replies an hour later, I'm way more likely to reply than if I get the reply a month later.
This is obviously a dumb question because if it wasn't you'd have already mentioned it, but can't anyone edit a wiki at any time? And so you'd risk changes being made to your review? (As a rule the ACX commentariat is one I'd trust not to do that maliciously though)
It's a good question. An administrator could protect the page at the same time as they make it visible (which is after the reviewer says they are done). Or a bot could automatically revert any edits not by the reviewer or an administrator.
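The revert rule itself fits in a few lines. This is a sketch of the decision logic only; the field names and how a real bot would fetch edits depend on the wiki engine's API, so treat them as hypothetical:

```python
def should_revert(edit, page_owner, admins):
    """Return True if this edit to a review page should be rolled back.

    `edit` is a hypothetical record carrying the editing account's name;
    `page_owner` is the anonymous account the reviewer submitted under;
    `admins` is the set of administrator accounts. Anyone else gets reverted.
    """
    return edit["user"] != page_owner and edit["user"] not in admins

# Example: the reviewer and an admin may edit; a drive-by account may not.
edits = [{"user": "anon-reviewer-17"}, {"user": "admin-1"}, {"user": "drive-by"}]
to_revert = [e for e in edits if should_revert(e, "anon-reviewer-17", {"admin-1"})]
print(to_revert)  # only the drive-by edit
```

A real implementation would poll the wiki's recent-changes feed and call its rollback endpoint for each flagged edit, but the whitelist logic is the whole of the policy.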
My latest article, in which every piece of artwork is made by AI, got a ton of negative feedback from my followers. Like "go jump off a bridge" bad. This may be because I am more in the writing community than the rationalist one, but I was wondering if others who use DALL-E or Midjourney in their writing are getting blowback about putting artists out of business.
One of my kiddies recently told me that they wanted to be a graphic designer when they grow up. Having had a go with Midjourney, and observing the pace of improvements in this area, I do wonder if there will be any sort of viable career in this area in a decade or so.
For someone who has just left art school, or is trying to earn money as an artist, it must be very concerning indeed.
I personally think that there will be either the same or even more graphic design work in the future, it will just look dramatically different than today's graphic design work.
If I had to guess, it will involve prompt crafting, as well as stitching together and editing AI model images. Graphic designer productivity will go way up, which will mean prices will go way down, but far more projects will use graphic designers than currently do.
Overall corporate art will get better as local businesses/ads will start to have the polish and quality of the large corporate space.
My favorite analogy for this: do we think there are more or fewer professional photographers now than there used to be professional portrait painters pre-photography?
How many people pre-photography hired a portrait painter for their wedding, versus how many today hire a photographer (and often a videographer as well)?
The details of what doing this kind of work looks like will almost certainly change drastically, but I think that the number of people doing it will not change much.
The question is then: will more graphic design *work* entail more graphic design *workers*? Cf. the "death" of American manufacturing, when more is actually being produced than ever before, just with far fewer than the peak number of manufacturing workers.
While you certainly don't deserve to be told to jump off a bridge, I think you need to consider how the written content of your article looks from the perspective of a visual artist. Since you say that you're in the writing community, I'll try to mirror your ideas to writing (and specifically short stories):
1. Everybody will use it in the next two years. The process of making short stories will be deceptively easy. You give the AI the first few lines of the story and a general theme. Then the AI uses others' fanfiction works and short stories to create a new story. It will get quicker and cheaper over time.
2. <insert same thing that you said about legal protection but for characters and plots>
3. You'll no longer need to browse the erotica section of Amazon or AO3. You don't need to hope someone else shares your kinks. You can tell the AI what you want included or excluded.
4. Original content for blogs, newspapers, etc will accelerate. Longer books will be produced faster. Details help provide clarity and are also just important for understanding. People will rely much less on copywriters and freelance writers because new stories can be generated in minutes. They won't be gone. Original, well-written research papers will always have a place in the world.
5. AI will create ideas for you. <insert same ideas, but characters and settings instead of mediums and subjects>. Here's a short story about one of the Africans caught in the Atlantic slave trade, which is much more evocative than I thought possible.
6. AI isn't perfect - yet. First, it has trouble with dialogue that sounds realistic. It also struggles with exact continuity; getting the precise flow of time can be tricky, so creative writers are going to be okay for now. Lastly, proper grammar is tough, but that's easily overcome with Grammarly and isn't a major barrier.
Final thought: there will be displacement and job loss in the creative writing industry (especially in short stories where you have less time to mess up continuity!) We will have many more stories, but all writers must level up their game.
---
My comparisons aren't perfect, but I think I highlighted the biggest issue: ignoring the effort *and thought* that goes into making art, whether visual or literary. I imagine you don't like being told that an AI could totally replace you and all of your short story writers while maintaining 90% of the quality. Or that you shouldn't expect to be paid as much because you're easily replaceable by something that is likely feeding off of your own work without credit. Or that you would still be useful, but only in specific cases like writing instruction manuals for power tools.
To be clear, I think you're right in a lot of ways! I know that many company blogs will appreciate not needing to pay for a stock photo license when they can type a few words in to get something that will only be glanced at anyway. But I think that you missed the reason why visual artists were mad at your article.
Edit: also a disclaimer, I'm not a visual artist by trade but I am trying to start it as a hobby.
I don't think that AI will actually take over either visual or literary art. In addition to what Machine Interface said about the effort to get small corrections compared to a human, there's also value derived from the knowledge that there's a person behind the story or image. You can learn a lot about a person from their art.
Have a look at Yuumei's art (https://www.yuumeiart.com/). You can see a recurring theme of nature: bathtubs full of flowers, musical instruments turned into koi ponds, cities being overgrown by vibrant grasses and leaves. You can tell without a word that she probably cares for the environment and loves nature. A step over to her blog confirms it: she ran a donation campaign for several climate and wildlife charities such as Rainforest Trust and Ocean Conservancy. An AI would have no such personality or story beyond a few words. I'm not much of a reader, so forgive my generic example for writing: you can see Harper Lee's desire to comment on race and prejudice in To Kill a Mockingbird.
As I said in my earlier comment, I fully believe that certain jobs like stock photography (writing example: corporate blogs? advertising?) will be replaced. Those seem like fairly soulless jobs (sorry stock photographers! I'm sure you're great people!) But art and writing to *tell* a story and not just look nice or be interesting will still have a place because it's about connecting with people.
So we have had twenty-six United Nations Climate Change Conferences, and as far as I can tell, there is lots of talk about "projected degrees of temperature rise", but no goal for "projected peak CO2 concentration".
The IPCC does have predictions for that - the one I found with a quick Google says that CO2 concentration will be between 550 and 970 ppm by 2100, depending on our policy choices. I suspect there's less talk of it because temperature is usually the outcome we care about, although CO2 does have some direct effects like ocean acidification.
Temperature rise is probably a more useful metric in some ways. I would guess that it has an influence on things like ocean water levels (melting ice caps) and general quality of life around the world (places without A/C aren't prepared for higher temperatures and living inside of an oven seems dangerous).
Is gravity quantized or continuous? Since it's a function of mass and distance, I guess it depends on whether mass is quantized. I think there are no particles with mass less than the neutrino's, so maybe mass is quantized and the smallest quantum of mass is the neutrino?
But since gravity is inversely related to distance, it can be made arbitrarily small by increasing the distance, so if gravity is quantized it's a weird kind where there's no smallest quantum unit.
Well, this is not my field, but I think most of us think it's rather a choice between "Is the correct description of dynamics in our universe a lot like QFT or not?" If the answer is "yes" then gravity *must* be a quantum field, because QFT doesn't admit the possibility that some fields might be classical, some others quantum. Either QM is a correct description of our universe's dynamics, or it is not, I don't think there's a lot of taste for some kind of "sometimes" or "in some areas" answer. The fact that people have a hard time figuring out a quantum field theory of gravity that looks a lot like, say, QED, is one of the reasons to think maybe QFT isn't quite right. But on the other hand I don't think anyone is even trying to come up with a classical equivalent to QED, so if you asked most people I think they'd probably say gravity will end up with some theory that has a "quantum" feel to it.
Rest mass is certainly quantized, in the sense that for any given field you only get excitations that are...well, quantum. 1 electron, 2 electrons, et cetera, but never 1.4335578 electrons. Same with all other particles. (Photons are a weird case because they have a rest mass of zero.) But it feels like you mean rest mass per se, something independent of particle identity, e.g. *all* particles of *any* fields must have masses that are multiples of some even more fundamental quantum of mass. I dunno if that's an aspect of any current theory.
I also don't know if spacetime itself would need to be quantized in a quantum theory of gravity, in the sense that Points A and B cannot be chosen arbitrarily close - this is not my field, as I said. Usually quantization happens in the amplitude of excitations of the quantum field; e.g. if the field for gravity is the metric tensor, then maybe spacetime can curve away from flat only in tiny discontinuous jumps. If I had to speculate wildly on what this means, I would guess the location of events in spacetime could not be nailed down precisely - there would always be some indeterminacy.
Oh boy, I love thinking about what could be responsible for MOND (Modified Newtonian Dynamics - see the Triton Station blog). I keep going back to some low-energy quantum state of the universe. If there is such a thing as a graviton, then its lowest energy state is something like a particle in a box the size of the universe, which gives it a frequency of roughly one over the age of the universe. Now, besides not knowing if gravitons are real, I also have no idea how to calculate their energy given their wavelength, but if the energy is also related through Planck's constant (as with photons), then this is a very small amount of energy - but still non-zero and quantized.
Interesting - the Hubble parameter (AKA the Hubble constant) has a value almost equal to one over the age of the universe. And yes, it is a frequency.
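To make the coincidence concrete, here is a back-of-envelope sketch. The input numbers (13.8 Gyr age, H0 ~ 70 km/s/Mpc) are the usual rough values, and applying the photon relation E = h·f to a hypothetical graviton is an assumption, not established physics:

```python
# Back-of-envelope: assumes E = h*f carries over to a hypothetical graviton.
h = 6.626e-34                        # Planck's constant, J*s
age_universe_s = 13.8e9 * 3.156e7    # ~13.8 Gyr converted to seconds
f = 1 / age_universe_s               # frequency ~ 1 / age of universe, Hz
E = h * f                            # energy of one such quantum, J

H0 = 70e3 / 3.086e22                 # Hubble constant, ~70 km/s/Mpc in 1/s

print(f"f  ~ {f:.2e} Hz")    # ~2.3e-18 Hz
print(f"E  ~ {E:.2e} J")     # ~1.5e-51 J: tiny, but nonzero
print(f"H0 ~ {H0:.2e} 1/s")  # same order of magnitude as f
```

The point is only that 1/(age of universe) and the Hubble rate land within a few percent of each other, so "the Hubble constant is a frequency" is numerically reasonable.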
If you assume it's quantized and do the same maths you do for the other forces, it works out just fine at normal energies. But at very short distances (or near black holes) the maths gives infinite numbers of infinite terms, which means either it's not quantized, or something else in addition must be going on.
Wait, the Aharonov-Bohm effect says nothing about the quantisation of gravity, as it assumes a classical potential. And it doesn't really say anything about the observable consequences of the potential, as we already know the potential arises from approximations to the Einstein field equations. It isn't "real" beyond that. So how does that Big Think piece say anything about quantisation?
That's not the least energetic photon possible, and in fact you could detect one with less than half the energy by simply(!) having an antenna that spans the entire observable universe, though?
Does having an antenna the size of the observable universe even make sense? I mean, even assuming you can magically make enough material just appear/assemble/whatever in the correct configuration, what *is* the correct configuration in an expanding universe? What does it mean to detect a photon with the wavelength of a few billion light-years? When do you detect it? What does it mean that the universe became a few orders of magnitude larger during one wavelength? When (and how) do you even determine if your antenna is the right length?
Having some expertise in the field, I will try to point out some misconceptions:
- Gravity is not a "function of mass and distance". Mass - or rather energy, momentum, and pressure - does warp spacetime, at least classically, but it's not a straightforward relation; curved spacetime does not require mass or energy.
- Mass is unlikely to be quantized, and one argument against it is that the usual quantization limit, "Planck... something" fails for mass, since Planck mass is about 20 micrograms, which is rather large.
- Gravity is inversely related to distance only in the classical Newtonian approximation, which does not hold in general. For example, it does not hold for black holes.
- As Larry pointed out, "quantization" does not necessarily mean the existence of a smallest possible unit of something, though sometimes it does, like with electric charge.
- Gravity may well not be a fundamental force, and so may not have a quantum limit at all, but rather be emergent from, say, the Hilbert space of quantum mechanics when the number of states gets stupidly large.
- We just don't know much about gravity at distances below a few millimeters. Well, we know that particle collisions at the LHC and whatever we observe in nature don't seem to create either stable or insta-evaporating black holes, so that limits the size of extra dimensions, if any.
We can detect relatively easily that light is quantised - for example we can show that if we shine light of a particular colour on a sensor and dim it enough, we will eventually find that instead of a continuous signal getting weaker and weaker, we will eventually detect a photon of a fixed energy now and again.
We can't do this experiment with gravity. The reason is that gravity as a force is extremely weak. The only reason we know of it is that gravitational charge goes in only one direction, and like charges attract rather than repel. So you can get a static accumulation of gravity the size of a planet, or larger. You can't easily get a static accumulation of positive charge bigger than the nucleus of a large atom, and that has to be held together by the strong nuclear force or it will explode (the energy of an atom bomb, though not a hydrogen bomb, basically comes from the compressed electrostatic force breaking out). [On the more peaceful side, it's what makes hydrogen bombs hard to ignite.]
But to detect photons you need electromagnetism, which happens when electric charges oscillate at high velocity. Because there are positive and negative charges tied to objects of very small mass, it's easy to make them oscillate, and the resulting waves in the electromagnetic field are strong and easy to detect - for example light. Look closely enough at the light, and you will see the photons.
The corresponding effect in the gravitational field is called gravitomagnetism. The waves and other features exist and have been detected, but to get gravitational waves powerful enough to detect you need something like two neutron stars in close orbit. We can't make detectable gravitational waves on Earth, while it's easy to make light.
As well as being quanta of a weak force, the gravitons we might detect are very low frequency. There's no equipment we can currently conceive of building that would demonstrate their existence. The main reason for believing in them is a belief in the consistency of the fundamental forces of nature.
I don't think a field being quantized means that there is a smallest increment of that field's strength. A field being quantized means that its effect is communicated in discrete units called quanta.
For instance, the photon is the quantum of the electromagnetic field, but I don't think there is necessarily a "least energetic photon possible" or something like that (you could talk about photons with arbitrarily large wavelengths, and thus arbitrarily low energies).
So in the sense you are asking, gravity is probably continuous, even if we eventually describe it using a quantized field rather than a classical theory, as we do currently.
Of course, the current theory is a classical theory anyway, so in the current description of gravity everything is continuous.
Or at least that's what I remember from my undergrad physics studies.
AIUI, there's a "least energetic photon detectable" due to the finite size of a buildable antenna (even theoretically, due to the event horizon produced by the accelerating expansion of the universe).
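A rough estimate of that floor, taking the comment's premise literally. Everything here is an assumption for illustration: that the longest usable wavelength is about the observable universe's diameter (~8.8e26 m), and that E = hc/λ sets the energy:

```python
# Rough floor on detectable photon energy, assuming no antenna can exceed
# the observable universe's diameter (~8.8e26 m) -- the comment's premise.
h = 6.626e-34      # Planck's constant, J*s
c = 2.998e8        # speed of light, m/s
lam_max = 8.8e26   # approximate diameter of the observable universe, m

E_min = h * c / lam_max   # E = hc/lambda at the longest detectable wavelength
print(f"E_min ~ {E_min:.2e} J")   # ~2.3e-52 J
```

Whatever the exact cutoff, the point stands: "quantized" gives you a smallest lump per frequency, not a smallest possible photon energy overall; the floor here comes from cosmology, not from quantum mechanics.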
I was recently thinking about something that I did not have a good grasp of and felt this community would provide helpful commentary on.
Many of the nerdy communities I participate in joke they self-select for intelligence. SSC/EA/basketball analytics discussion groups etc. At the same time, from my observation, all of these groups have very minimal East Asian representation.
Is there writing on this issue? Have people already theorized why this is?
I suspect cultural factors + smaller number of people of who grew up in the West in families comfortable enough to let their children waste tons of time on the internet and not feel academic pressure. I really don't know though.
My immediate thought is that going from the East Asian languages to English is really hard, and the hardest discussions to learn to follow are nebulous concepts like what people who self-select for intelligence are going to talk about. I suspect there aren't many native English speakers in the intelligence circles on their end either.
East Asian culture tends to be more practically minded. Intelligence in and of itself is not that valuable if it's not making you money. Spending lots of time online having intelligent discussions normally doesn't lead to money.
Maybe not directly, but it keeps the brain cells ticking over and considered posts are practice for improving reasoning and communication skills and articulacy. Both of those results, it seems to me, indirectly tend to improve money making skills, or success anyway, in jobs requiring mental effort and in particular interviews for the same.
I quite strongly disagree. Most jobs are very specific. Even reading papers/books for work is in 99% of cases an irrelevant luxury... and you're talking about random intellectual chitchat on the internets.
In my mind this community selects on intelligence plus a certain impracticality/lack of focus/whatever your name is for an addiction to random intellectual rabbit holes. As a result, from my perspective at least, it's pretty obvious that on average members of this community actually underperform on success conditional on intelligence - my Chinese friends are probably less smart than my rat friends, but there are a lot more of them at Goldman/Citadel etc.! (Though admittedly the observation is based on NY rats more than the Valley; I could imagine that with Valley culture these over-intellectual attitudes are less of a negative.)
You all may enjoy my interview of the brilliant engineer and blogger Austin Vernon.
We discuss how energy superabundance will change the world, why software hasn't increased total factor productivity, how Starship can be turned into a kinetic weapon, why nuclear is overrated, blockchains, batteries, flying cars, finding alpha, Toyota Production System, & much more.
I've noticed over the last year, you've seemingly gone from a normal young person to an internet micro-celebrity (I mean this warmly; I hope you take it as such).
I was curious how this has impacted your life - both in terms of how you view yourself (and the paths you want to pursue) and your social life. Does your IRL network know about your success/treat you differently?
Not really impacted my social life at all, and you and IRL people get used to it pretty fast and go back to your normal relationship dynamics (thank God).
But I definitely do have a lot more confidence and ambition than I did a year ago, and that definitely affects what paths I end up pursuing.
"General Nanisca as she trains the next generation of recruits and readies them for battle against an enemy determined to destroy their way of life"
And what is the way of life General Nanisca is fighting to protect against the wicked enemy? Constant warfare, so bad that the male population has dropped enough the kingdom of Dahomey *needed* to recruit female soldiers, widespread slavery, and using slaves as human sacrifice.
That's a *little* different than the blurb would lead you to believe. I think some of the negative reviews may be based on "this is not the true history" and not review bombing.
Speaking of which, Amazon owns IMDb and admitted they were fiddling about with reviews for Rings of Power. So any reviews that were too negative were deleted:
This includes honest reviews (I have seen even generally positive reviews that go 'visuals are awesome, writing is poor and the pace is terribly slow') as well as any trolls. So it's not as simple as it's being made out to be.
I've done a *lot* of complaining about Rings of Power. I wanted to like it, but I can't as it is simply too far removed from genuine Tolkien lore. What they've done is create Generic TV Fantasy Show and just slap on the names "Elrond" and "Galadriel" and "Celebrimbor" on certain characters.
The most recent, episode four, is a doozy in that regard. I looked up some reviews online to see if it was worth watching, or what was skippable in it (there's a lot of skippable content so far) and oh man. I couldn't believe the first one I saw, so I looked up a couple others, and it was true.
Even a generally favourable review thought this point was clumsily done:
Before I start ranting, let me give them one good point here - the Adar character is interesting and had me wildly guessing (could this possibly be Maeglin?), but I doubt the show would go there.
Let me set it up. Why do the Númenoreans pre-Downfall hate and resent the Elves?
Tolkien:
"The Númenóreans …became thus in appearance, and even in powers of mind, hardly distinguishable from the Elves – but they remained mortal, even though rewarded by a triple, or more than a triple, span of years. Their reward is their undoing – or the means of their temptation. Their long life aids their achievements in art and wisdom, but breeds a possessive attitude to these things, and desire awakes for more time for their enjoyment. Foreseeing this in part, the gods laid a Ban on the Númenóreans from the beginning: they must never sail to Eressëa, nor westward out of sight of their own land. In all other directions they could go as they would. They must not set foot on 'immortal' lands, and so become enamoured of an immortality (within the world), which was against their law, the special doom or gift of Ilúvatar (God), and which their nature could not in fact endure.
There are three phases in their fall from grace. First acquiescence, obedience that is free and willing, though without complete understanding. Then for long they obey unwillingly, murmuring more and more openly. Finally they rebel – and a rift appears between the King's men and rebels, and the small minority of persecuted Faithful."
Tamar: She summoned the Elf to court. Just this morning. Elf's mate attacks four guildsmen, and Míriel has her up for tea?
Guildsman: Probably she called the Elf in to punish her. Tamar: Or to ask her for orders. And while the Elf whispers poison in our Queen's ear, who's speaking for us?
(And then Pharazon shows up to calm the crowd and take the opportunity to do some populist speechifying about 'Númenor for Númenoreans' and handing out free drink).
Yikes. Tolkien - the resentment is based on fear of death versus Show - dey took er jerbs.
Do you really wonder it's getting negative reviews and not simply from trolls and racists?
This is really a fallacious point, because it doesn't consider the overall sample size of all ratings and the demographic mix, or control for the effects of top-down intervention (Rotten Tomatoes was recently said to suppress negative votes for The Rings of Power, or something like that). I could just as easily look at the strongly positive distribution on Letterboxd and its fake-seeming reviews and conclude that it's *this* site which is astroturfing and should be ignored as an outlier.
Yes, exactly. Extremes happen all over reality, depending on what you measure and how you sample and countless other choices. The number of starving people by nationality was a very extreme distribution in 1945; the number of killed combatants by gender in nearly all wars is very extreme. The percentage of planets that have life is, famously, an extremely extreme distribution - so extreme it has a paradox named after it.
God doesn't exist, but if they did, I don't think they'd have a particular fondness for the normal distribution; it's just a useful tool that happens to describe lots of situations well. But so is Newtonian mechanics.
Okay, I had a look at one review from the 99%-positive critics' reviews over on Rotten Tomatoes; let's hear it for The Curvy Critics.
Her synopsis of what the movie is all about:
"The Woman King brings us into the year 1823. Orphaned at birth and raised by an abusive guardian who seeks only to marry her off for money, young Nawi petitions for entry into the Agojie, led by the single-minded Nanisca . To defend their people against the oppressive and heavily armed Oyo Empire, the Agojie places their candidates through intense training with Nawi rising to the cream of the crop as an outstanding, ferocious soldier. As the Agojie prepare for the fight of their lives against both the Oyo and the Portuguese slave traders with whom they are in league, long-buried secrets come to light and harrowing stories of personal sacrifice arise, which prove to only strengthen the bonds between these unstoppable warrior women."
Oooh, those wicked Portuguese trying to enslave the free people of Dahomey, right? Let me look up who this "oppressive Oyo Empire" was. So, were they Wicked Slavers? The answer is "yes, but".
"The Oyo Empire, with its capital at Old Oyo near the Niger River, prospered on regional trade and became a central facilitator in moving slaves from Africa's interior to the coast and waiting European sailing ships. The trade in humanity was so large that this part of Africa became known simply as the 'Slave Coast'. The Oyo eventually succumbed to the expanding Islamic states to the north, and by the mid-19th century CE, the empire had disintegrated into small rival chiefdoms.
...By the 18th century CE half of the slaves taken from Africa came from the southern coast of West Africa, and the area controlled by the Oyo Empire, the Kingdom of Dahomey (c. 1600 - c. 1904 CE, modern Benin), and the Kingdom of Benin - the Bight of Benin - came to be widely known as simply the 'Slave Coast' (the 'Gold Coast', another lucrative trade hub, was further to the west). There were two main reasons why the slave trade centred here: firstly it was one of the most densely populated areas of Africa reachable by the Europeans, and secondly, the Oyo Empire, and to an even greater extent the Kingdom of Dahomey, provided the necessary command infrastructures to organize the movement of slaves from the interior to the coast. In return, the Oyo received European goods which they could use themselves or trade with neighbouring states."
'To an even greater extent the Kingdom of Dahomey'? And Dahomey itself wanted to establish links with the Portuguese:
"Dahomey sent at least five embassies to Portugal and Brazil during the years of 1750, 1795, 1805, 1811 and 1818, with the goal of negotiating the terms of the Atlantic slave trade. These missions created an official correspondence between the kings of Dahomey and the kings of Portugal, and gifts were exchanged between them. The Portuguese Crown paid for the travel and accommodation expenses of Dahomey's ambassadors, who traveled between Lisbon and Salvador, Bahia. The embassies of 1805 and 1811 brought letters from King Adandozan, who had imprisoned Portuguese subjects in the Dahomean capital of Abomey and requested for Portugal to trade exclusively at Ouidah. Portugal promised to answer to his demands if he released the prisoners."
But those wicked Portuguese were carrying off slaves to Europe, right?
"The Europeans were interested in beads, cotton cloth, ivory, and slaves, which they could then trade on to other West African peoples in exchange for what they prized most of all: gold and pepper (the only two goods in demand in Europe). West African tribes sought, too, the fine cotton cloth of India, glass beads, and cowrie shells which the Portuguese brought to Africa."
Say it ain't so, NYC Movie Guru!
"In 1800s Africa, General Nanisca (Viola Davis), trains the Agojie, a group of all-female warriors, to defend the Kingdom of Dahomey from the nefarious Oyo general, Oba Ade (Jimmy Odukoya), who's kidnapping and enslaving the women of Dahomey. Izogie (Lashana Lynch) and Nawi (Thuso Mbedu), who develops a romance with Malik (Jayme Lawson), are also among the warriors of Agojie. John Boyega plays Ghezo, the King of Dahomey."
Oh good, glad to see they clear up all that nasty propaganda about it being a struggle between the Oyo and the Dahomeans for access to the coast, trade (including slaves) with Europeans, and gaining territory. Thelma Adams (another Rotten Tomatoes positive critic) over at AARP Movies For Grownups sets us straight on what it's really about:
"While the movie’s treatment is surprisingly conventional, the tale of women empowered to own their own bodies couldn’t be timelier."
Unless, of course, you're one of the women enslaved by the Dahomeans to be sold on, but let's not mention the war, hmmmm?
I don't know; this seems like assuming an awful lot of things about how people vote, how movies induce positive/negative feelings, how those feelings translate into votes, how marketing works, etc.
Fundamentally, I don't *see* why the vast majority of people shouldn't either love something very very much or hate it very very much. I don't watch a lot of movies (really barely at all), but I read, and when I see a Goodreads page for a particularly divisive book (not because of woke stuff, I keep very far away from those), the ratings *are* indeed either all 4/4.5-star glowing reviews or 1/2-star damning reviews. It happens.
Regarding your last remark, this... seems like the opposite effect of not gaming the system. I have heard about voting-system-shenanigans before in the context of the tech forum HackerNews, and it does the opposite for me, I trust HN's upvotes and rankings *less* because of it. It's impossible to define what a fair "anti-gamable" voting system looks like, even without the internet's anonymity on top ruining things. Even the braindead one-person-one-vote privileges the majority, which isn't necessarily optimal in media of all things, and is trivially gamable with the internet. Everything else is just downhill from here.
The best possible thing would seem to be: delay as many choices as you possibly can and delegate them to the user; gather all the votes you can, then let the user filter, re-arrange, ignore, and amplify as they like, being careful not to privilege any default over any other. But of course, tools are only good because they limit choice, and not everybody wants to be a data scientist in order to know if a 2-hour movie is worth watching. The next best thing is to try lots and lots of combinations of rules and filters and choose the "best" according to an intuitively normative metric, like (say) total profit of the movie, or post-view satisfaction of viewers who *provably* watched it. Offer all of those filters on the raw data, each tagged with the metric it optimizes.
"Regarding your last remark, this... seems like the opposite effect of not gaming the system."
It's an all-female, and more importantly, all-black movie set in Historical Africa about All-Black Female Warriors fighting evil European slavers. What critic is going to give that a negative review, be they working for legacy mainstream media or Just Some Person with their own Youtube channel? It would be asking to have your head cut off and put on a spike as a warning to traitors.
Okay? And why do you imagine people don't give such media positive reviews for ideological reasons?
She-Hulk is absolute garbage, but large numbers of people are lapping it up because something something women!
As Freddie DeBoer points out, the reason these types of media include social justice themes is precisely because it's easier than engaging in good writing, and the producers can just hide behind cries of 'racism/sexism' when people point out how bad the writing is.
Oh, and let's also consider the MASSIVE factor here in that movies with explicit right-wing themes simply do not get made any more, so there's no possible way for 'right-wing' movies to be review-bombed, so of course close to 100% of review-bombing is going to be done by anti-leftists.
"the show is mediocre and forgettable (so basically a normal Marvel product), but certainly not "absolute garbage"."
I haven't seen it and have no intention of watching it, despite Tatiana Maslany being a talented actress, because of the trailer clips I saw.
In one episode, She-Hulk twerks with Megan Thee Stallion. Funny, I seem to recall some articles back in the day about how white women twerking was cultural appropriation and they shouldn't do it. I suppose it's okay, though, if you're One Of The Good Ones?
And seemingly the recent episode was "She-Hulk buys a suit"? Thrill-a-minute TV right there!
"If you get called a racist for loudly proclaiming that The Rings of Power is the worst show ever *specifically* because of the casting choices, then frankly you walked into it with both feet, and the producers of that show have all the incentives to exploit your knee-jerk reaction for their marketing."
Casting choices that make no sense except as "tick off the diversity boxes". Two minutes of backstory as to where Dísa and Arondir come from, that's all that is needed, but the show can't provide it. So Arondir is fairly conspicuously Lone Black Elf in the show so far, same with Dísa (I think I saw another black Dwarf in one of crowd scenes in Khazad-dum but I can't be absolutely certain).
That's not "these characters are established as part of the world", that's "we went all-in in looking for media cool points about how Diverse And Representative we were".
I don't care about Tar-Míriel's casting since she's fairly appropriate for the role (though I am chuckling about when we get to see her father and he's white; Pharazon her cousin is white; Invented Son of Pharazon is white, and Elendil, cadet branch of the royal family is white. Mommy must also have been black but since she's presumably dead, we don't have to bother with casting a third black woman in our progressive, representative, Tolkien-for-the-modern-world show).
Ditto with the Southland villagers - they're all too white, where the show could legitimately have cast black and brown actors. But that would have deprived us of the scene where White Young Guy is Racist to Black Elf, and we can't have that, can we?
I wouldn't give a damn if they had cast the entire show, from Galadriel on down, as black/brown if the showrunners could only write decent dialogue, but they can't. That's the worst of it - it's not Tolkien, it's Generic Fantasy with Tolkien names slapped on.
Last night I stumbled across this very long, very funny (and very mean to the Welsh which is not fair) review on Youtube. Warning for language, I guess:
>But that's all the reason I need; I have no sympathy either for hysterical, paranoid right-wingers who give 1 ratings to movies and series they haven't watched just because they perceive them to be "woke"
You know what they say, no smoke without fire. Hysterical right-wingers can only exist because of the hysterical wokies who jump up and down like excited dogs (but way less cute or useful) whenever a giant corp throws them a bone. Those human-shaped things exist, are more influential and amplified (because what corporation doesn't like free dicksucking and bootlicking?), and the hysterical right-wingers are probably doing us all a favor by exerting a braking effect, though not nearly enough to cause a visible difference, it seems.
Oh, I agree that anger is not the correct response - not the obvious loud type, at least; that just further vindicates the wokies. Snarky contempt, relentless mocking, and transgression seem to be the things that are really devastatingly effective against the religion. It's an especially fragile religion, after all, with lots of touchies-feelies. But framing the reaction to something as the problem while ignoring the decade-long action that spurred it seems to be disingenuousness of the first rate.
>Marvel is so "woke", all its movies are financed and materially supported by the DoD
Citation needed - why would the Department of Defense finance cringe superhero movies? (Except possibly Iron Man, because it shows US military action in a favorable light.) Or, for that matter, any movies at all except promotional documentaries and "historical" movies about the US's enemies?
Also, this is irrelevant. I don't get why being useful to the US military is somehow an argument against a thing being woke; wokeness is precisely defined as being so wrapped up in the wokie's own petty irrelevant fantasy world that they care for no one and nothing else, so of course wokies are very useful idiots to all sorts of organizations - it comes with the territory. When your face gets bloody red at something as an automatic reflex, all sorts of people will figure out how to narrate the world to you so as to exploit that. The oft-cited video of progressives screaming at Occupy Wall Street protestors about who gets to talk before whom comes to mind. Wokeness is a religion, and all non-personal religions are extremely useful for mob command-and-control.
I'm really struggling with the "No Smoke Without Fire" -> "Witch Hunts, Lynch Mobs" connection. I could see it if I had meant by the phrase something like "any and every accusation is evidence against someone", but that's really not what I meant. What I meant is: in feedback loops, all of the loop is equally at fault, except the loop-starter, who is more at fault than all the rest.
Being offended is a feedback loop: somebody gets offended; those who offend them now learn that there is a thing that offends them, and do it more, which restarts the loop. It makes no sense here to blame only the offended and not the ones offending them for fun and profit. In fact, in the particular case of movies and media, you should blame the invading aggressors who started the whole thing, namely the wokies. You could certainly blame the offended as well - like I said, they're not even doing the best thing by their own metrics - but most of the blame should be assigned to the one who started the loop.
..
I'm not ignorant of how the US military funds and endorses movies of certain kinds; I said as much in my comment, after all. I know that e.g. Transformers and movies like it are heavily funded and given access to military bases in return for sucking dicks back. Top Gun drove enlistments in the US Navy up by blah%, I know all that. What I specifically asked for is why Marvel movies are relevant to the military, except possibly Iron Man and maybe Captain America. Yes, I mostly didn't watch any of those movies except Iron Man; I despised superhero universes even before their Awokening. No, I don't think it's unreasonable for me to demand sources of you rather than google myself - why don't *you* do the googling? It's your claim, after all.
Regardless, my point wasn't primarily about whether or not the DoD funds cringe movies; my point is that being woke is *aligned* with being useful to organizations like the CIA and the DoD, not evidence against it. The CIA has a disgustingly hilarious woke promotional video, after all (https://www.youtube.com/watch?v=X55JPbAMc9g). I was just baffled at the apparent contradiction that you seem to think the DoD's endorsement of a woke movie is.
All you have to do is remove the 1 and 10 ratings from the stats (those voting 1 and 10 are the most likely to be ideological) and then calculate the stats again. Rings of Power, for instance, averages 6.7 if you remove the 10s and the 1s.
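That trimming procedure is simple to sketch. Here's a minimal example; the vote counts are purely hypothetical, not real Rings of Power data:

```python
def trimmed_average(ratings, low=1, high=10):
    """Average after dropping the extreme scores (the likeliest
    ideologically motivated votes on a 1-10 scale)."""
    kept = [r for r in ratings if low < r < high]
    return sum(kept) / len(kept) if kept else None

# Hypothetical review-bombed distribution: big spikes at 1 and 10.
votes = [1] * 500 + [4] * 100 + [6] * 300 + [7] * 400 + [8] * 200 + [10] * 450
print(trimmed_average(votes))  # averages only the 4s through 8s
```

The naive mean of that distribution is dragged around by the two spikes; the trimmed mean reflects only the middle of the distribution. (A more robust variant would trim a fixed percentage off each tail rather than just the endpoint scores.)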
People are really just turning off their brains in answering this stuff. "Oh we've seen it all before and it all goes away".
No, policies change: the radical position of the previous generation becomes the norm of the next through institutional dominance; the left wins the battle, moves forward, and fights the war on another front. Things settled down on the race front compared to its peak in the 1960s because civil rights were achieved. A fundamental reordering of American society occurred, and the media/schools eventually raised enough kids to support the new order that it stopped being something there were meaningfully two sides on. It didn't just 'go away'.
Gay stuff, for example, isn't as big of a focus any more because gay marriage got passed and corporations are all-in on being pro-gay. Now race and transgender stuff is centre stage, but it's entirely unclear how or why these things will go away, because there isn't a neat set of policies that can be put in place to 'fix' these issues. And the race stuff in particular seems to take over all other issues, even non-culture-war ones, making it somewhat resistant to being displaced.
There really is a huge divide amongst people, and any prediction that this will rapidly diminish is an extremely radical prediction that requires strong evidence. The only reasonable hypothesis I can think of is that left-wing institutional dominance is becoming so great, and younger generations are being so thoroughly indoctrinated (and I intend this term descriptively rather than purely pejoratively) into the left-wing side of the culture war, that what happened with e.g. segregation in the past - where there's no longer any political force behind anybody remotely opposed to legal desegregation - will happen with all the other culture issues today, especially with increased non-white immigration. Even though conservatives have more kids, having conservative parents often isn't enough to compete with overwhelming institutional dominance, and in any case those kids may vote Republican but abandon right-wing culture war issues (the way nearly no conservatives today support laws against interracial marriage). I think this obscures the extent to which the left have truly dominated the culture wars (as does left-wing rhetoric about how people with more progressive cultural views than the Democrats of 100 years ago are "white supremacists").
I think as whites become a minority and the Democrats become emboldened in their 'racial equity' policies we'll actually see greater polarization, and a legitimate (peaceful) secession attempt of the most conservative states wouldn't surprise me at some stage. If DeSantis or some other 'Trump with half a brain' figure becomes president in the future, I think we'll really see the left become increasingly hostile towards sharing a country with conservatives. Or perhaps, by the time things get to this stage, there will have been enough immigration that conservatives just simply lose every fight and have to put up with what the Democrats want.
I think a lot of Democrats are absolutely convinced that racial equity policies will eventually achieve their stated goals and then all of these problems will dissolve (and/or that non-whites will therefore have enough power to literally or figuratively destroy opposition to equity policies/ideology). But this is extremely unlikely, which means left-wing predictions in this regard should be heavily discounted.
Obviously, AI is a wildcard here and all bets are off if/when it advances to the point that society is fundamentally reorganized. But until something radical like that happens, there's no good reason to think things will fundamentally change. The idea that all of this will just fade away is mindlessly optimistic in a way the evidence does not justify.
Otherwise, I'm humble enough to admit I don't have any idea what will happen and anyone being sure of the outcome (other than appeals to societal reordering due to technology advances) is probably full of it.
My fantasy is that we'll reach a point where someone says something nasty in classic culture war fashion, and people roll their eyes and say, "Oh, that's so 2016".
A wave of reaction and quiet sweeping of past excesses under the rug, and a total shitstorm after a decade once designer babies / AI rights / other sci fi issue brings out the worst in people again.
In fact, the current trans rights shitfest seems like a prelude to a proper transhumanism / morphological freedom shitfest, along with perhaps a cognitive and neurochemical freedom shitfest that's long overdue as the previous consensus of the "war on drugs" is losing support.
Are the "Culture Wars" even a real thing, or just a boogeyman that Very Online people get needlessly worked up about? If you look at political polls, people on both sides tend to care most about issues like tax policy, gas and food and rent prices, infrastructure and education spending, and environmental protections (not in the sense of "we need to stop climate change," which is more of an activist concern, but "we need to make sure our local area has clean air and water and green spaces"). In other words, a lot of dry, boring, technical matters of fiscal and administrative policy that are nonetheless important because they have a direct effect on their lives. Abortion might be the sole exception, since that's a cultural issue that a great deal of people seem very concerned about, but outside of that, I'd be surprised if even 10% of the population cared about, thought about, or even knew about most of these "Culture War" disputes that get so much attention on social media.
Do I need to remind you that less than two years ago, nationwide riots were raging because a black person was killed by police?
And it's absolutely irrelevant whether the average person cares about this stuff; what matters is what is happening at the most powerful institutions in the country, and yes, they overwhelmingly care about this stuff and it's not going away.
Question: What's the difference between a social phenomenon in which people believe they are in conflict, and a social phenomenon in which people are in conflict?
I think it might be helpful to see if you can identify any past culture wars that have ended, and say how they ended. Did the culture war from the 1960s end? I think there's a sense that the right won in the 1980s, but there's also a sense that they never ended and that the current culture wars are somehow the same ones. Are there older conflicts that you would count as culture wars?
I'm not going to venture a prediction about how a conflict ends until I can come up with some description of what it would even mean to end!
I think the left clearly won the original culture war. The average young person can't even imagine the average American today supporting segregation, or the US explicitly having whites-only immigration policies (they'll apply those labels to current policies, but in some big-conspiracy sense, not in the sense that these things literally existed as they once did). They won so overwhelmingly that what was radical in the early 60s is the absolute norm today (in some cases even considered reactionary) and is literally not even debated any more. You can say this is a good thing, but what's not up for debate is that the left won without question. (But remember, this is the culture war; the failures of socialism are a different matter.)
It was a long time between the youth movement (basically, postwar babies getting their driver's licenses between 1963 and 1967) that Hollywood merchandized as The Sixties and Ronald Reagan. I hope we don't have to bounce off the funhouse walls for another decade; the woke, genderist diktats and pronouncements have gone well beyond tiresome.
I think they're a lot like missing children on milk cartons. You get a period of very high visibility and Something Must Be Done public angst, and then it sort of fades away as people get tired of the complexity, bored with the sturm und drang, or just find some other squirrel to chase.
A lot of things have faded away, I would say. We don't get worked up about demon rum and temperance the way we did in the 1900s and 1910s. We don't get worked up about how firm on Communism one is, or Who Lost China?, like we did in the 50s or early 60s. Nobody has given a damn about states' rights except fitfully since approximately 1860. Environmentalism morphed from pollution in the 70s to climate change in the 00s and 10s, so it's a little weird that it came back after a period of quiet in the 80s and 90s.
On the other hand, physiognomy-based disenfranchisement and discrimination started out with the blacks in the 50s and 60s, moved to the women in the 70s and maybe 80s, then disappeared for a while -- we actually thought we'd conquered racism for a decade or so there -- and has come back with a vengeance, except that blacks seem to need to share the stage with the transsexuals right now.
I don't really get any sense of "end" in any final-resolution kind of way, the way one can point to an end of the Second World War. I don't even get a sense of cycles and pendula. It feels more or less chaotic, like a bunch of meme stock bubbles exploding and shrinking away.
Agreed. Looking back just a few years ago, MGTOW/MRA/feminism etc. was all over the place. Nothing really changed or resolved; the arguments just lost energy and got supplanted.
Nobody will win or lose the culture wars. Things will probably continue to get more polarized as things get worse. Right vs left will continue butting heads until the societal problems get bad enough that gendered bathrooms and team mascots no longer seem like important topics. And whoever wins that mess will be random and situational.
As an aside, I'm not sure what you meant by your meme stock analogy - they aren't random.
Nothing changed? Do you know how many people lost their jobs/careers (rightly or wrongly) because of metoo?
And why would you bring up MRA/MGTOW? These were fringe movements that had no institutional support whatsoever. They unquestionably lost the culture war.
The culture war is still ongoing, its just that race and transgender stuff has taken centre stage.
>And why would you bring up MRA/MGTOW? These were fringe movements that had no institutional support whatsoever. They unquestionably lost the culture war.
I know this is a dead thread and a new one is out there and you probably won't see this, but just in case you have email notifications on . . .
I think this is overstated. MRA, as I understand it, was at least partly originally formed around the very deep imbalance in how the courts handled divorces, especially in regards to children. I've had a couple of friends go through that and made sympathetic noises along those lines and been told that in point of fact, the old 80s and early 90s model where the dad gets screwed and the mom gets everything has really changed in the courts, and very much for the better.
I think this is actually another issue of real rightwing progress, though I only have anecdote for it.
I think this is the wrong question. There will always be some issues we're arguing about, so the culture wars won't end. Rather, the issues will settle out somehow or another, and we'll change the subject.
To see what I mean, think back.
Prohibition, interracial marriage, the death penalty, concealed carry, and gay marriage were all live culture war issues (pick your own list if you prefer). All of them are pretty much settled. The culture war didn't end, it moved on. It will again.
So, rather than ask about the current culture war, you'd have to ask about particular current issues.
I'll throw out my takes, but you may not even agree with the issues I have here:
Trans stuff will be state by state. It will be status quo in lefty states, and serious restrictions (bathroom laws, possibly pronouns required to comport with birth certificates, probably any sort of medical treatments restricted to 18 and up, possibly any psych treatment as well) in righty ones. We'll get there in the next couple of years, argue loudly about it for a few years and then move on. There won't be a national consensus but we'll stop arguing about it.
History/CRT stuff will go like this too, I think, and resolve relatively soon. My state and states like Florida will teach history the way it was taught back when I was a kid. The Civil War was fought over states' rights, the real villains were the abolitionists and the fire-eaters, who forced a war that didn't otherwise have to happen, slavery was bad to be sure, but then no real further discussion. Lots of focus on good men on both sides and honor and all that. Essentially no coverage of Reconstruction or the Redemption; American history picks back up being interesting with Teddy Roosevelt and then makes for WW1. Lefty states will keep in the new stuff focusing on slavery, cover things like the cornerstone speech and the Confederate constitutions, talk about R. E. Lee taking slaves during the Gettysburg campaign and the degree to which black regiments made up Grant's later forces, and cover Reconstruction in a positive light and treat the Redemption as a successful armed coup. And *then* skip to Teddy Roosevelt and WW1. In 10 years, it will simply be known that Blue and Red America teach somewhat different history, that we aren't doing anything about it, and therefore we'll stop talking about it. Oh, also, DEI trainings will stop in Red states.
Guns have moved on from concealed to constitutional carry, which I expect to be won in the next decade or so. You'll see occasional bitching about this after mass shootings (as you do now) but no one will run on restricting guns past the primaries, and no serious restrictions will be passed or even be a part of a general election campaign.
I'm really not sure with abortion. It will be live for a long while, I think. The left might get firmer control of the federal gov't and try to legislate nationally (which I think gets struck down). The right might get the same thing (which maybe gets struck down, but I'm less sure). Some states are going to pass travel restrictions and test the courts. I think those are going to hold. I think we're going to start seeing criminal penalties against the women not just the doctors, but I really feel unsure here. This one has been a cornerstone of the culture war for fifty years, and might stick around another 50.
I'm probably missing something, but that's what I've got right now. Notice that I think that three of the four are pretty much over by 2032. No idea what replaces them.
>the Civil War was fought over states' rights, the real villains were the abolitionists and the fire-eaters, who forced a war that didn't otherwise have to happen, slavery was bad to be sure, but then no real further discussion.
When/where was this if you don't mind me asking? I was schooled in a very white part of Minnesota in the 80s and 90s and this is a radically less progressive view of the war than we were taught then. And we had huge units on the reconstruction in multiple years.
In fact I would say the majority of our American History education was Revolutionary War > Run up to Civil War > Civil War > Aftermath of Civil War > Women's Suffrage > 60s Civil Rights > Return to Revolutionary War.
In basically a cycle that lasted all through 3rd grade through 12th.
Things like WWI, WWII, and Vietnam were just sort of glossed over, and the civil war was by far the biggest topic.
I went to public school in Georgia in the 90s/2000s and was taught a very Lost Cause-adjacent version. Slavery was a factor in the Civil War, but it was mostly about states' rights, and the North started it. Carpetbaggers were focused on as villains more than slaveowners.
For whatever it is worth, when a good but not great student at my school went from Minnesota to Missouri in the mid 90s they immediately moved her up two grades.
Good to hear. I figured with stuff like that Texas textbook a few years back that the Red states and especially the old South were hanging in with the old settlement. Do you think that's going to change? I don't really see how you can teach the civil war or the 60s civil rights movement without falling afoul of the new anti-CRT laws, for example.
Of course, of particular concern for me and mine is what the state (still Missouri) does with regard to influencing teaching in the city of St Louis. So far things seem fine on that front.
I don't think you need to run afoul of CRT laws at all to discuss the civil rights movement. The civil rights movement is practically antithetical to the CRT movement.
The law that Missouri proposed and (I think) didn't pass this last winter forbade teaching anything that "identifies people or groups of people, entities, or institutions in the United States as inherently, immutably, or systemically sexist, racist, biased, privileged, or oppressed".
If you can't teach that the Jim Crow south was racist, then you flat out can't teach about it. Or the three fifths compromise. Or the armed forces up to WWII. Or the segregated unions (which still exist in my city). I don't see how you teach about a lot of history if you can't acknowledge how racist the institutions and people of the time were.
Which, again, isn't currently a problem, since as nearly as I can tell this language was removed and I think the final bill failed. But I'm sure we'll get something sooner or later, and it wouldn't be at all surprising given Jeff City if what we get ends up de jure outlawing the teaching of history, in the three felonies a day sense.
Not so much end as go in cycles. I think the current ones are running down, but what the new ones will be I have no clue (furrydom? God alone knows). I also think if we are running into a depression or crash or "hey, this winter we will all freeze in the dark", that is going to put the kibosh on a lot of current culture wars stuff and let it die off. We'll have a lot more to worry about than Piss Protests https://www.vice.com/en/article/jgpj5y/pissed-off-trannies-ehrc-protest if the lights are not being turned on and there is no heating.
No one living today can possibly know this answer, honestly. Personally I am sure only that (a) it won't be anytime soon, and (b) it won't be approximately a re-run of past culture-war cycles.
The winning side will become sclerotic, the losing side will evolve to become hip and subversive, the winning side will see its gains turn to ash as its new generation moves to the next fight, the losing side will start making gains, and then it starts all over again until the nomadic hordes come in.
Or, to the extent they do, it will be through shifting terms of debate/discussion in a manner which is only clear decades later (and maybe not even then: did the 'Political Correctness' culture war end when it mutated into the 'Wokeness' culture war? Did the 'Gay Rights' culture war end, or mutate into the various 'Trans/Gender Rights' debates?).
This is one where I am humble enough to think I have no idea. Though I do think they get worse before they get significantly better. We haven't hit the "bottom" in the market yet.
It looks to me like, in the US, the right is increasingly embracing the "punk rock" role; the right is becoming the counterculture, and the left is becoming the authority figure waggling its finger at those who misbehave. The right is the coalition of rebel forces; the left is the empire.
Given the history of countercultures, on the specific issues that the right is pushing back against the left's authoritarianism - I expect the right to win. This particular fight has been swinging back and forth for decades now, even if, in retrospect, it is sometimes difficult to figure out who was who. Approximately: 30s-40s, leftist authoritarianism (Prohibition); 50s-60s, right authoritarianism (Nuclear family); 70-80s, left authoritarianism (Equal access media laws tearing down religious radio stations); 90-00s, right authoritarianism (Anti-atheism); 10-20s, left authoritarianism (many names).
And over the next thirty years, the right will, in its swing to ascendancy, forget how these things go, and become the authoritarian figure once more, waggling its finger, as it loses its coalition to the left, which once more becomes the ragtag band of rebels fighting The Man.
It helps that an overwhelming silent majority of non-extremely-online people are more right than left, by the current definition. The left has jumped the shark and is evaporatively cooling itself into irrelevance.
Left and right aren't hard lines; stuff moves between them. Environmentalism, for example, has historically been a right-wing issue; it is, after all, fundamentally an enterprise in conserving things the way they are.
Pay attention to your confusion on this matter, because it's part of the propaganda in the water supply: You have been taught to believe that the left and right each represent some kind of coherent ideology.
No. Look around: The modern left defends corporate products on the basis of their adherence to ultimately superficial identity tagging, and the modern right criticizes the corporate nanny state and the military-industrial complex and the use of anti-terrorism laws to pursue domestic ideological groups. These dominant ideologies are diametrically opposed to the dominant ideologies in the left-right schism of twenty years ago.
Yet there is a pretense, a rationalization, that the wild fluctuations in actual policy positions all represent a coherent internal ideology - and this ideology happens to define both why you support the side you support (Democrats support civil rights / Republicans support civil rights), and why you oppose the side you oppose (Democrats are racists / Republicans are racists). The ideologies aren't even distinct - they both claim the same values for themselves, and decry the same evils in their enemies - they just change the flimsy cardboard rationalizations they use to claim that whatever bag of policies their constituents happen to support match their ideological virtues, and whatever bag of policies their opponents' constituents happen to support match their ideological evils.
It was part of first wave feminism, and you're going to have a hard time mapping any version of feminism to "right-wing".
Note that the contemporary "war on drugs" doesn't propose to touch alcohol or caffeine, and is politically distinct from the war on tobacco. WoD is fundamentally conservative - these mind-altering drugs have been part of human culture for ever and ever, and we have adapted to them, but *those* mind-altering drugs are new and unknown and scary. Keep things the way they have traditionally been, that we know works.
Prohibition was progressive - based on the eternal progressive belief that they can make people better.
Don't think being endorsed by feminists makes something left-wing. E.g. there's a branch of feminism which is against BDSM and against legal prostitution - positions which I think most would describe as more right-wing than left-wing.
Caffeine is close to unique in USA culture. It is the one mind-altering drug that employers routinely provide to employees (yeah, the Air Force fed meth to its pilots at one point - there are a few rare exceptions). I think one could reasonably call caffeine an occupational drug rather than a recreational drug.
I think it was a tangle that doesn't map well to the modern political framework. Lots went into support for it. Protestant churches seeking to moralize man, progressives believing that it would civilize and improve society, anti-immigrant types who associated drinking with foreigners like the Germans and (especially) the Irish, women's rights advocates who saw it as a means to curb domestic violence, and lots of other things all over our modern political spectrum.
Tricky question - what do you mean by “the culture wars?”
At a high enough level the answer is pretty obviously "never," since cultural conflict won't ever just "end," so I'm sure you mean it in regard to some specific issues.
On the other hand, we seem to be fighting about damn near every issue in the US right now- we even suck in non-cultural matters like pandemic response and make them cultural. So I think a good grasp of what specific cultural tension points we’re talking about is needed to really offer up a worthwhile opinion.
>On the other hand, we seem to be fighting about damn near every issue in the US right now
It seems that way, but that's only because there are areas where one side or the other won so completely that there's no longer a conversation. (I'm going to make a post to this effect in a bit.)
We're not arguing over interracial marriage. We're not arguing over the death penalty. We're not arguing over alcohol prohibition. We're not arguing over conscription. There's a lot of stuff we're just not arguing about anymore, because one side won, one side lost, and now we're doing something else.
Interestingly we are still arguing over the death penalty, it's just that we're not arguing very hard over it. This is an interesting example of an issue that never really got comprehensively settled one way or the other (in the US), people just seem to have got bored of arguing about it.
Did people genuinely care less about the issue, or is it just that the media has found other issues to stir the pot on? I feel like a death penalty debate could easily be stirred up again if CNN put its mind to it -- live coverage outside the prison every time an inmate was being executed, a few hours a day of talking heads debating it, emotive interviews with the condemned man's mother, and pretty soon you could get everyone hot and bothered about the death penalty again.
>Interestingly we are still arguing over the death penalty, it's just that we're not arguing very hard over it.
Right. This is what I mean. I'm sure there are still passionate advocates, but it's basically a settled majority/minority position and it doesn't come up in mainstream debates. The current things will mostly (IMO) go that way too.
Interestingly, I looked up opinion polling on the death penalty, and support has dropped from 80/20 in favour all the way down to 57/43 in favour over the last thirty years, so maybe it'll make a comeback as a national issue when trans stuff or CRT stuff leaves the national stage.
54% of the population is in favour of the death penalty, 43% against. That's one of the least settled issues out there.
It's interesting to look at this plot showing support for the death penalty https://news.gallup.com/poll/1606/death-penalty.aspx against this plot showing actual number of executions https://en.wikipedia.org/wiki/Capital_punishment_in_the_United_States#/media/File:Usa-executions.svg -- the death penalty became heavily unpopular in the 1960s, which coincided with a massive decrease. But then in 1967, the Supreme Court banned it, making it more popular again. In 1977 they brought it back, and both its popularity and actual number of executions continued to explode until the 1990s when popularity and executions both started to decrease again.
I wonder if the death penalty in the US would have gone away much sooner if the Supreme Court hadn't tried to ban it.
Abortion was quite settled until activist right wing justices overturned it. Why wouldn't the same justices also overturn restrictions on the death penalty?
Abortion wasn't settled because it was (and is) a source of huge argumentative energy every election cycle. The death penalty is . . . not. We do it, we mostly approve of it, and while there's a minority that's very interested in talking about it, they're small and have no impact in even creating a national debate. That's what I mean. Abortion has never been settled in the same sense.
People will get bored and something else takes its place. I don't think salience shifts owing to resolution. There's still room to raise the stakes in the same areas.
None of those options seem anywhere close to 80% likely. In fact, I'd give them all under 1% odds of happening within the next century.
Leaving aside the X-Risk stuff (since that probably involves a difference in opinion on technology rather than politics), what makes you think that either a new American Civil War or a new World War will happen anytime soon?
Great-power wars are pretty common if you look over history. We only really have one data point suggesting nuclear weapons changed this ("there wasn't a great-power war in the 60s-80s"), and my suspicion is that this wasn't *just* nuclear weapons but also had to do with the national/governmental character of the USA and USSR (also we came damned close).
Re: boogaloo - it won't happen absent a constitutional crisis, certainly, but there are a few obvious candidates for one of those (a serious challenge to the SCOTUS by the other branches whether by impeachment/packing/(especially) refusal to obey judgement; a repeat of Bush v. Gore with a hated candidate like Hillary/Trump winning the litigation; hard hit on debt ceiling defunding the police and military; Article V convention occurring).
#2's doing most of the heavy lifting in those numbers; I think it's ~60% likely by 2050. But some of that is from "#1 or #3 looks like it's happening and the PRC overestimates how much shit it can get away with in the chaos" (that's why I said "one or more" originally).
Wait...what Great Power war has happened since the 80s? Or for that matter since 1945? And 1945 to 2022 is quite a long stretch of history.
One hopes you aren't counting proxy wars, which have (1) occurred throughout history, and (2) almost definitionally exclude existential struggles (since if the Powers concerned were willing to begin one of those, they wouldn't be fighting by proxy).
The Korean War involved great-power forces shooting at each other directly, so I only count the Long Peace from 1953. I agree that proxy wars don't count, though.
I say "60s to 80s" because "too soon after the last war" and "hegemonic stability" are known exceptions to "great-power wars are common". 50s were too soon after WWII, 90s/00s/kind of 10s were unipolar.
Apparently Einstein's death was announced by the pilots of some passenger planes:
https://expo.mcmaster.ca/s/scientists-for-peace/page/einstein-s-approval-of-the-manifesto-his-death-bed-letter
Has this happened for any other great scientists/intellectuals either in those days or recently? Was Einstein a special case, or have pilots just stopped doing that sort of thing?
Started my own of these. Take care :)
https://ageofsubjectivity.substack.com/
https://siderea.dreamwidth.org/1775300.html
Depression isn't the only cause of suicide. People kill themselves because of severe pain that they don't think is going to get better, and sometimes they're right about how intractable the pain is.
A comment by user Evan Sp. on Freddie De Boer's latest post about language:
>>---------------------------------------------------
10-15 years ago, descriptivism was left-coded and prescriptivism was right-coded. The right said "speak properly" and the left said "let people be as they are -- all language is legitimate."
But when the left gained cultural power in the past several years, progressive organizations started endorsing prescriptive changes in language e.g., latinx, pregnant people, etc. that were used little outside of small political circles.
Power corrupts! Even in petty linguistic debates.
Of course, none of this matters. Like everything else, language evolves through shared mechanisms of popular usage, activist innovation, and elite endorsement, and neologisms succeed and fail based on murky societal machinery.
---------------------------------------------------
Not to rehash the debate, but proponents of using "literally" in figurative language, like myself, are being prescriptive. This isn't just something that crept into the language; it's always been correct.
The Descriptivism vs. Prescriptivism debate is mostly uninteresting to me.
I do think that one of FDB's main points is fallacious. He seems to be saying: "Descriptivists claim that they're not prescribing, but - you see - they *are* indeed prescribing; they are prescribing descriptivism", which is... just a trivial gotcha? This is like saying "Post-Modernists say they are skeptical of grand narratives, but - my friends - isn't being skeptical of grand narratives, in and of itself, a grand narrative that they believe in?". Or "The priest tells me not to judge other people, but isn't *he* judging my tendency to judge other people by that statement?". Almost any sufficiently general opinion against anything can be gotcha-ed in this way. It's lazy and uninteresting.
I see Descriptivism as a perfectly coherent way of thought. It *is* prescribing; it never said it doesn't prescribe - it's a normative opinion about how to study languages, after all. It prescribes that you shouldn't prescribe language to its practitioners when you're studying it. Biologists don't genetically engineer animals to the forms and functions that they *think* will be better; they study the forms and functions *as they already exist now*. Similarly, linguists should study dialects and other forms of languages as they are spoken/written/performed, not as they wish them to be.
I think that the point FDB *should* be arguing, and is indeed implicitly arguing beneath the fallacious main point, is that everyday-life invocations of Descriptivism as a free anything-goes invitation are annoying and cringe. Descriptivism is the attitude of linguists; it evolved to fill a very specific niche in a very specific intellectual context, and we're under no obligation to follow it in our everyday life or any life where we're not studying languages professionally. It would be like telling people who prefer cats over dogs as pets, "But... But... you *should* live with animals as they are, not as you wish them to be." Pet lovers are not biologists; they are allowed to have favorites. Just like ordinary people in everyday life are allowed to have preferences in language, and they are allowed to hate and argue against other ways of practicing it that they don't like.
Also, Merriam-Webster is a woke organization that was caught changing the definition of words to better match The Faith: automatic +2500 cringe points.
I posted the comment above because it's a relatively novel point in this debate-space that I had never thought of before: Descriptivism vs. Prescriptivism as a political tool. Whenever you - a political faction - don't have the tools to enforce Prescriptivism, you preach Descriptivism (or, more accurately, the anything-goes caricature of Descriptivism that allows you to speak as you like against the dominant forms of language). Whenever you do have the tools to enforce Prescriptivism, you sing the praises of Prescriptivism and enforce it.
https://www.adept.ai/act
Startup which involves an AI piloting your browser.
Having nearly been felled twice already by electric cars gliding past as silent as ghosts, I wonder if drive tones should be made mandatory, analogous to ring tones on phones! I think I'd go for clip-clopping horses' hooves, or maybe a chuff-chuff steam train noise.
I only hope musical jingles are excluded though. Otherwise walking down the street in electric vehicle traffic it will sound like one is surrounded by hundreds of chiming ice cream vans!
I like your suggestion about horses' hooves!
At about 15-20 mph you get tire/road sound but slower than that, yea, Ghost in the Darkness.
I thought electric cars were already required to make a little noise as a safety measure for pedestrians.
One would think so, but I'm pretty sure the cars I encountered made the minimum noise a one ton powered object in motion possibly could!
I imagine before long, if they aren't already, electric cars in motion will legally be required to make a noise comparable to a reasonable-quality internal combustion engine. That noise will also need real-time dynamic adjustments to match the car's acceleration, even an artificial squeal when braking sharply, or when turning if the tyres don't themselves squeal.
Here's some discussion, but I'm not sure what the current situation is.
https://en.wikipedia.org/wiki/Electric_vehicle_warning_sounds
I am stricken with dread upon reading this suggestion. The idea of a street full of artificially generated car noises sounds like a hellscape. Doubtless they would all be custom, and all of them doubtlessly awful. The aural equivalent of xenon headlights.
Let's not forget the *current* situation is a hellscape! It's only through constant exposure that the roar of traffic isn't identified as awful. The deep feeling of well-being when I'm in nature is in no small part due to the soundscape. I'm willing to take a few more accidents to improve the quality of life in cities dramatically. Also, anti-crash software is improving rapidly, so this might just be a transition period in any case.
Say what you like about that tiger creeping up on you - at least he does it quietly!
They should play a recording of someone saying "vroom vroom".
Doctors of AST: what is your relationship with radiologists? Do you ever disagree with their reading of an image, do you have access to images, do you consult the images yourselves or just go off of the report? Has your hospital/health care system made it harder to work with radiology by centralizing or outsourcing it recently?
I'm asking out of curiosity because I've heard of that type of centralization, outsourcing image reading to doctors outside the system (even the country), and wondering if other specialists ever second-guess radiology.
Also curious to hear how this sort of centralization affects the radiologists and how you interact with the other specialties, if at all.
Thank you!
You may want to ask this on the next Open Thread; comments this late rarely get replies.
https://danluu.com/cocktail-ideas/
Good article on people's tendency to talk nonsense when they have opinions without specific knowledge and how hard it is to know when you're having an opinion without specific knowledge.
Here's what I believe to be a powerful marker-- the word "just" as in why don't people just do whatever you think will make the situation better. Why don't fat people just eat less? Why don't people just stop having wars? Why don't people just stop committing crimes? Why don't the police just stop abusing people?
"Just" means you're ignoring why something isn't happening.
I don't like the example of people not knowing how bicycles work in detail, it's a meaningless gotcha in the wider context of the article.
In "The Design Of Everyday Things", the author recounts a study of how people failed to recognise (not draw, just recognise) their country's coin designs among multiple similar but fictitious designs (e.g. the face of the person on the coin is to the left instead of to the right, the phrase on the coin is written slightly differently or in a different position, etc.). The author explains this is because the brain only understands objects/systems/phenomena enough to distinguish between them and other similar objects/systems/phenomena. For example, if you have a red notebook and a green notebook, your brain might tune out completely the drawings on the notebooks' covers or their size; it just wants to distinguish between the two, and the red-green distinction is as clear a cue as any. If you got a third notebook that is also red, only then might your brain start paying attention to other qualities of the other red notebook.
The "Design"'s author phrases this as a general principle: the brain remembers only enough to discriminate between alternatives. If a certain piece of knowledge is not necessary to distinguish between two decisions or situations that the brain needs to distinguish often, it's most likely not kept in memory - why should it be? Do any kind of people besides physicists and bicycle manufacturers ever need to know how bicycles work in detail to distinguish between two similar situations?
The rest of the article is nice, we already have ~13 slightly different ways of expressing "They think they know but they actually don't" but "Cocktail Party Ideas" is a welcome addition nonetheless, certainly better than the overused and oft-misunderstood Dunning-Kruger effect. The author could certainly use less arrogance.
https://danluu.com/futurist-predictions/
Dan Luu takes a skeptical eye to the accuracy of futurists. This seems relevant also to Scott's claimed success in predicting scale improvements to image-generation AIs.
That was a great read!
The answer is plastic. https://marginalrevolution.com/marginalrevolution/2022/09/plastic-might-be-making-you-fat.html
This was one of my first thoughts when I read the first few articles on obesity by Slime Mold Time Mold. The different responses of the sexes are interesting too... we see the same in humans somewhat.
Belief in the significance of obesogens could change prevailing assumptions about the reliability of focusing on calories. If that happens, then medical attempts to prevent weight gain also become more likely. Will low doses of e.g. semaglutide (Wegovy), currently used for the obese, eventually be trialled to prevent weight gain for people with normal BMIs? (https://www.worksinprogress.co/issue/the-future-of-weight-loss/, https://www.nytimes.com/2021/02/10/health/obesity-weight-loss-drug-semaglutide.html)
Has Scott himself ever blogged about this?
As well as obesogens, he might turn his attention to possible "gendergens". There always have been camp guys and butch women (putting it crudely) but most of these have been and are content with their gender. In proportion to population sizes though, were there really as many people with gender dysphoria in the past as there seem to be today?
Perhaps there always have been, but obviously those in the past would have simply had to make the best of a bad job.
But if not and there is a genuine upsurge then maybe chemicals introduced and widely used only in recent times have an effect on gender preferences in people predisposed to their effects, possibly as embryos.
Yeah, if more people would focus on this then maybe we could find out what the exact 'bad' chemicals are. And maybe discover that yeah, that plasticizer has an unwanted fat response, but this one here is fine. (Kinda like different fluorocarbons and ozone... though a lot more difficult problem to figure out.)
How Bad are Historians?
I've been reading two books on William Marshal, a prominent medieval figure — born the fourth son of a minor baron during the Stephen and Matilda civil war, he spent a decade or two as one of the top tournament knights in western Europe and before he died was regent of England. His other distinction is that he is, I think, the only knight of the period for whom we have a biography written shortly after his death.
Both books agree that the biography is not entirely accurate, which is true, but they disagree about which parts are wrong. Crouch believes that William's father saved Matilda from capture and lost an eye doing it, but does not believe that William was accused by rivals at the Young King's court of an affair with Henry's wife. Asbridge doesn't believe the first, does believe the second. In both cases, the author puts his view not as "I think this is what happened, but ..." but as a simple fact — "this is what happened."
This is a case where they have opposite views, but more generally (a problem I noticed reading Crouch before I looked at Asbridge), they treat guesses as facts. In one case Crouch gives a footnote to support his guess. If you read it, it turns out that there are three primary sources for the question: the biography and another source agree, the third doesn't. Crouch simply asserts on that evidence that the biography is lying, and goes on to describe events after the relevant scene on that assumption.
Is this sort of intellectual arrogance, treating conjectures as facts, typical behavior for academic historians? Is it only for books aimed at a general audience, which both of these are, trying to tell an entertaining story without confusing the reader with alternative interpretations of the evidence? If I read journal articles by the same authors, would they recognize the uncertainty of their interpretation?
For the curious, the books are _William Marshal_ (third edition) by David Crouch and _The Greatest Knight_ by Thomas Asbridge. The biography is _The History of William Marshal_, translated by Nigel Bryant.
Archeology seems rife with this stuff as well. If you follow up on the expansive descriptions of ancient cultures and peoples, many times the only evidence for it is something like a broken dish, or a bone that looks like it was made into a tool. You can sometimes read page after page of description about how this culture supposedly lived, which even if it's a pretty educated guess, is still wildly made up. We know *far* less in fact about almost all ancient cultures (especially pre-circa 4,000 BC) than most people commonly assume.
I have now finished Asbridge, and found another and more important disagreement. The _Histoire_ claims that John, on his deathbed, asked for William (not present) to forgive him for all he had done to him and take charge of John's son. That's the case where Crouch confidently asserts that the _Histoire_ is lying, and goes on to describe William's appointment as regent as a coup. Asbridge accepts the _Histoire_'s account, noting that it is supported by another source.
Would you please give the page numbers or some aid to finding the footnotes or passages you're pointing out? Otherwise it is difficult to re-trace your line of inquiry, and so answer your first question.
Asbridge discusses who saved Matilda in chapter 1, section "The Civil War," page 13 of the kindle, which is how I am reading it. He accepts the story about William being accused of an affair with the Young King's queen in Chapter 6. He discusses what King John said on his deathbed near the end of Chapter 13, Section "The Greatest Choice," p. 340 of the kindle.
Crouch discusses the saving of Matilda on pages 16 and 17, the accusation of adultery on pp. 58 and 59, describes the Histoire as lying about John's deathbed on pages 158-9.
Thanks!
Yes in popular histories. Less so in journal articles. Classes and the like are somewhat in between which is not to their credit.
Historians are some of the worst public intellectual academics in my experience. They tend to have extremely specific specialties (which is good) that they then apply to broad modern issues (which is bad) while simultaneously arguing they don't count as "theories" that should be subject to empirical scrutiny or verification (which is worse). This is how you get a specialist in pre-war Republican economic ideology or Interwar Italian cinema writing what's basically political red meat about modern issues and then responding with snobbery when challenged. Even in books about their specialty they often fall down.
I'm interested in this.
Can you give the page numbers for the passages you're referring to? It shouldn't be too hard to find what the most recent scholarly interpretation is.
To a first approximation, articles are peer-reviewed more thoroughly than academic books, and popular books sometimes not at all. What you describe would not tend to pass peer review in a good journal. There is a sort of semi-convention where senior professors get to write a book that is a "this is how I see it" -- and you can take it or leave it but their intuition is honed on decades of research and so is worth taking seriously. Crouch's book strikes me that way. I don't read it as arrogant, but I can see how one might.
Asbridge I don't know, but I immediately get warning flags about that book, as it's written first to be entertaining and uses a vague system of endnotes for its references.
If you message Brett Devereaux on Twitter you might get some interesting opinions on this. Not the medieval stuff specifically but history scholarship generally.
I've lately been working my way through a bunch of English political history books, mostly royal biographies. I've frequently noted substantial disagreements in interpretation between different authors covering overlapping ground. The most glaring example I can think of off the top of my head is the origin of Henry VIII's clever "compromise" idea during his divorce proceedings against Catherine of Aragon, by which he would have remained married to Catherine but received special dispensation to also marry Anne Boleyn, based on Old Testament precedent for plural marriage by the Kings of Israel and Judah and by various Patriarchs. Peter Marshall's "Heretics and Believers" attributes it to Martin Luther, Alison Weir's "Six Wives of Henry VIII" (if I recall correctly) attributes it to the Pope, and Carolly Erickson's "Bloody Mary" implies it to be Henry's own idea. Of the three, I think I believe Marshall the most, as he's the most recent source of the three and because he sounds like he's citing a specific letter from Luther to Henry (unfortunately, since I've been consuming them as audiobooks, I can't easily check footnotes to see what sources each gives).
I get the impression that a big part of the problem is how sparsely documented Medieval and Early Modern Europe were by modern standards, and how even the "good" contemporary sources tended to be severely unreliable narrators. Any coherent narrative necessarily needs to make a ton of judgements and interpolations, and I've noticed that a lot of the reasoning behind these (especially in books targeted more towards popular audiences) often gets relegated to the footnotes or skimmed over entirely.
That said, I really appreciate authors who actively discuss points on which they disagree with other historians with at least a brief discussion in the main text of how they disagree and their sources and methods for reaching their contrary conclusions. Of the authors I've been reading, Ian Mortimer seems to do the best job of this, although that may be driven by his advocacy for hypotheses that are significantly contrary to the prevailing interpretations by other academic historians. Antonia Fraser also seems to do a particularly good job of showing her work around her interpretations of uncertain or controversial questions.
On unreliable narrators, I've been getting particularly frustrated with the degree to which Tudor historians tend to rely on the Imperial Ambassador Eustace Chapuys. To Chapuys's credit, he had excellent access to most of the important figures of Henry VIII's court, especially Catherine of Aragon and her daughter Mary but also Henry himself among others, and he wrote extensive and detailed dispatches. Unfortunately, he was a highly partisan figure who tended to lie a lot.
The further back you go, the worse the documentation tends to get. I started my current dive with Marc Morris's book on Anglo-Saxon England, which contained some bits which were almost entirely reliant on archaeological evidence due to an almost complete lack of contemporary written records. He also at one point cited Beowulf as an illustration of court life, albeit with the qualification "As a historical source, this story has the disadvantage of being completely made-up: the monsters and the dragon are something of a giveaway."
https://timothyburke.substack.com/p/the-news-rising-for-air-from-the
A professor on how hard it is to find out how decisions were made, even in recent history.
I hope you get meaningful responses from people familiar with medieval historians. Before settling for reading the Very Short Introduction to the Crusades, I considered longer books. Asbridge was one of the authors in question. I don't recall any reviewers criticizing him for jumping to conclusions.
For a while I've wondered about something related. Take a group of people with shared background and intellectual interests. Compare the ones who publish a lot, the ones who publish a little, and the ones who don't publish. How much difference is there between the group means on some hypothetical measure of self-confidence about one's own interpretations ('jumping to conclusions', 'intellectual arrogance', etc.)?
Any difference in those means wouldn't necessarily be causal. How much you write could instead cause your self-confidence in your own interpretations to rise or fall, for instance.
My experience is that the primary distinctions are (1) how obscure your specialty is, and (2) how often you go to conferences. Both of those things influence how often you are faced with pointed and in-your-face questions from peers who disagree with you, or are at least highly skeptical, and I think it's only that experience that teaches people to be careful about what they're saying even when they speak from a position of expertise. I don't think publication quantity per se is as useful, except insofar as it is a proxy for either of these.
Otherwise...it's an almost ubiquitous human failing to assume that if you are an expert inside region R, then you are also an expert inside R + dR, where the size of dR/R is contingent on your natural character and experience but is almost always greater than 0.
I recently came across this article (https://jeatdisord.biomedcentral.com/articles/10.1186/s40337-022-00548-3) detailing case studies of individuals with fatal anorexia and proposing a set of clinical criteria for terminal anorexia nervosa.
Anorexia carries a strikingly high death rate. The absence of professionally condoned protocols for anorexics facing the end-of-life stages is a huge disservice to those of us with severe and enduring anorexia. It's one thing to believe that anorexia should never be terminal, in the same way that HIV should never be terminal. But it seems inhumane not to offer end-of-life care to individuals in the final stages. What is behind the medical community's lack of acceptance that anorexia can be terminal?
The paper proposes characteristics of terminal anorexia as: a diagnosis of anorexia, being age 30 or older, prior engagement in eating disorder care, and consistent expression that further treatment is futile. Do you think any of these characteristics are unnecessary? What would you add to the criteria? The paper also stipulates that an individual must have a life expectancy of within 6 months in order to receive medical aid in dying.
Do you think a terminal diagnosis is ever appropriate, or does it really indicate a failure of the treatment system to support complex patients, particularly those who are marginalized in traditional treatment? Could this diagnosis be weaponized against those who are noncompliant with treatment? Noncompliance shouldn't be viewed as a bad thing, but rather as an indication that the individual has a will to live agentically and that the treatment provided is failing them. Existing eating disorder treatment was designed for young, cisgender, white women and thus is less effective for POC/older/men/LGBTQ patients. If people are noncompliant or nonresponsive to a treatment that was never even designed for them, are they truly beyond help, or is our system broken?
It's also worth noting that in a specialized hospital it is possible to refeed and weight restore nearly every patient. As weight restores, the majority of medical complications cease. Anorexia is almost never medically terminal.
I am a PhD student. I have been through revolving doors of inpatient treatment and I truly cannot fight any more. I can no longer live with this disease and I cannot maintain the minimum nutritional intake for living. I am not willing to participate in recovery oriented treatment and I am no longer trying to prolong my life. I believe that any further treatment will at best only result in brief improvement and is unlikely to provide long-term quality of life. How should I best advocate for my right to die with dignity? What can we do to advocate for a professional consensus for terminal anorexia and patients' end of life rights?
>Existing eating disorder treatment was designed for young, cisgender, white women and thus is less effective for POC/older/men/LGBTQ patients.
"Cisgender" isn't a thing.
"Identifying as having a gender that corresponds to the sex one has been assigned at birth; not transgender."
I don't know of another word that means precisely that.
You either have gender dysphoria or you don't. What next, you're going to create a specific word for people who aren't schizophrenic?
And notice that 'cisgender' is a label created and almost exclusively applied by people who don't actually identify with that label themselves. This is frowned upon in most other contexts, but the people who came up with the term 'cisgender' did so precisely to make it seem like gender dysphoria isn't a disorder, when it very obviously is.
Lots of cis folks use cisgender to refer to themselves. You see it in the wild all the time, if you live in certain bubbles.
Not sure who came up with it, but unless you know for sure I don't think the smart bet is trans-folk. There just aren't that many of them. More likely in my opinion to have come from the sort of ally who insists that everyone in the room announce their pronouns at the start of a meeting, whether or not there's any ambiguity for anyone present.
There are several words that describe people who don't have a disorder or symptom. "Neurotypical." "Able-bodied." "Asymptomatic." "Uninjured." Almost any word with the prefix "non" or "un-."
Any time you need to contrast people who do have a condition with people who don't, you'll invent a word to do it. If I say "Autistic people often have more difficulty with social interaction than neurotypicals" or "Only able-bodied people should go mountain climbing" would you go after me with the same anger as people who use "cisgender"? Would it really make a difference to the culture wars if the official term was something like "nondysphoric" rather than "cisgender"?
These examples kind of prove the point. Those terms have only become popular very recently, and are primarily used by the same crowd as the "trans/cis/etc" people. Google Ngram confirms this for neurotypical, asymptomatic, and uninjured - all three words were flat through the 1990s and then started growing rapidly around 2000. The word neurotypical didn't even exist before the 90s.
Able-bodied actually used to be more popular in the 1800s, although looking at the examples shows that's mostly related to political / legal / military usage, not with modern grievance studies connotations.
Anyway, to answer your question, yes, I do have a problem with "Autistic people often have more difficulty with social interaction than neurotypicals." Better would be "Autistic people often have more difficulty with social interaction than other people." Words like neurotypical are profoundly impactful on the culture war, because they are literally the symbolic representation of an outgroup.
Words like these are formed in a kind of linguistic judo designed to divide people as much as possible. The trick goes like this: first, someone makes up a term for a concept, usually some sort of identity group. Let's say they had a valid reason to do so, and that this concept has a legitimate useful meaning. And this identity differs from the norm in some way, otherwise there would be no need for the term. Normally, the way you would describe people who don't fit into this identity is "non-X", "un-X", "normal people", or "other people".
However, you don't like just using the negation of the term like that, so you make up a new word for the concept of *literally everything except this identity*, and you try to make the word sound as symmetric as possible with the normal identity, e.g. cisgender vs transgender, neurotypical vs neurodivergent, abled vs disabled. And now what you've done is two-fold. First, by giving both groups symmetric-sounding terms, you've put the abnormal minority identity on an even linguistic playing field with the normal identity, masking the reality of its abnormality and its small population relative to the norm. Second, you've just created a term that *literally* means "my outgroup" for the people in the aforementioned identity group. This gives them a word to rally against and foster hate towards, inflaming the culture wars to unprecedented levels. It's really hard to build an internet sub-community solely based on hating "the normals" or "the non-mentally-ill", but it's easy to do for "the neurotypicals."
>It's really hard to build an internet sub-community solely based on hating "the normals" or "the non-mentally-ill", but it's easy to do for "the neurotypicals."
Spoken like someone who's never heard 4channers talk about "normies."
Also, if you are correct that having a word that doesn't use "non" or "un" is in some way significant for "putting them on a level playing field," why is that a *bad* thing? Why is it important for our language to constantly remind us that autistic people are unequal, abnormal, that they have the playing field sloped against them, or whatever metaphor you choose?
The conflict here is that you need cisgender only in the case where you've committed to referring to trans women as "women, full stop, no question". Otherwise you'd just say something like "designed for young white women, and less effective for trans women" or similar.
There's personality types (mine included) that hate this. Because the logic goes "Trans women are women, full stop, because they identify as such. But we need a word to differentiate them from natural-born women, because sometimes we are going to have to talk about the many ways they are physically and psychologically distinct; we will allow this ONLY when it's guaranteed to help trans people get something, or win an argument."
Where this gets really wacky (and most clearly a political/power thing) is you can use cisgender here to mean "Born as a woman, fits in the cluster of woman-things physically" because here it's seen as beneficial to trans people. If this usage works, they get personalized anorexia treatments. But when the same trans-concerned people talk about another instance where cisgender/trans distinctions matter in the same way (sports) they forget the word immediately and entirely - trans women go back to being women, full stop, with no differences at all.
I think if I saw this as a situation where truly everyone knew the distinction, and you could use "woman" for a trans woman with everyone knowing it was a politeness thing that wasn't meant to carry data, I'd probably be OK with it. But there's so much "we want the power to enforce language" stuff mixed in that it makes it really hard - most of the time when this comes up in my life, it's really clear that the person doing it just wants to prove they have the power to make me say something they think that I think isn't true, so they can feel they were able to make me bend the knee.
I've actually softened on this over the years - like, I'd probably at this point be fine with "woman/transwoman", since transwoman carries a distinct definition that doesn't have the "Listen, these are all 100% women, except where they aren't and someone might die, and then you can have a term that means "woman" again so long as it's clear to everyone that our fist is still firmly clenched around your windpipe" baggage Ciswoman does.
Right. My problem with it is twofold. For one, it's creating a category distinction where there shouldn't need to be one. But even if I was okay with the category distinction, I also have a problem with the choice of the term "cisgender," which seems deliberately designed to be abstract, esoteric (cis- mostly used in chemistry settings), and somehow symmetric to "transgender" as if they are just flip sides of the same coin. It's kind of like the word "gentile," which describes an odd amalgamation of very different kinds of people defined only by not being part of this religion that 0.2% of the world follows.
I would be more okay with a distinction of "normal woman" vs "trans woman", or even "biological woman" vs "trans woman". But "cis woman" vs "trans woman" puts two very asymmetric groups on an even playing field.
"Assigned" is an interesting choice of word - is, say, skin color assigned?
And "at birth" seems like a misnomer - most of the time, gender is, in your words, "assigned" significantly before birth during an ultrasound.
It makes sense when you know the science
What word would you use to get a similar point across?
Observed.
That's what I was getting at with "is skin color assigned?"
> It makes sense when you know the science
"The science". What "science" is that exactly? There are a lot of sciences around, which science do we have to know for it to make sense?
when you google "what does assigned sex at birth mean", the first link for me is https://www.bmc.org/glossary-culture-transformation/assigned-sex-birth. There are other discussions of that terminology on sites not directly related to a hospital.
The parent comment seems confused about the definitions of gender and sex, and why psychologists and biologists make a distinction between the two
Humans are whatever sex they are born with. Like all animals. The only outliers are the very few instances of intersex where there is some ambiguity.
Interesting, what specific treatment for anorexia do you have in mind that was designed for young cisgender white women and does not work for POC/older/men/LGBTQ patients?
https://www.amazon.com/Not-All-Black-Girls-Know/dp/1556527861
Not All Black Girls Know How to Eat: A Story of Bulimia
Admittedly, this is a book about bulimia rather than anorexia, but it makes it clear that a lot of people, both sufferers and professionals, assume that anorexia and bulimia are diseases of white women, which means that anyone else is less likely to be diagnosed or treated.
Realistically, no one can stop you if you are determined to die. But it is unreasonable to expect people in general to help you do it, when the only reason is that (from their point of view) you are not in your right mind and are misunderstanding the nature of your future and what is valuable. That is even more true if they care about you as a person, or even category of person.
I can readily understand that you want to take command of your life, and your destiny -- particularly if you have been involved in lots of medicine, or the law, both of which are notorious for taking lightly or even ignoring individual choice and viewpoint. Too often, and even with the best of intentions, they end up treating you like a case or a disease and not an actual real person. It ought to be better, but unfortunately like all institutions, these institutions are run by human beings, and human beings are not perfect, they screw up, all the time, even when they are trying their best.
In general most of us would strongly support an ambition to take command of your destiny (because we want that for ourselves). But we also generally draw a line when that command involves destruction of life -- that of others, but also that of yourself. It is very likely if you were to find a way to satisfy your ambition that doesn't involve taking a life (including your own), that you would find most people would be very much in support. It's 2022, you can be or do almost anything you choose, and there is an enormous respect for the right of the individual to be who he or she chooses. You can become a powerful lawyer (perhaps one who advocates for better treatment options for the anorectic), or you can be a granola hermit, live in a cabin without running water and commune with the birds and trees. You can become a painter and paint powerful images that uniquely explain what it's like to be inside your head (which would also benefit others trying to understand the anorectic mind), or you can sell yachts and spend all your dough on traveling to Thailand and climbing K2 and not talking about it with anybody. Any path you choose to create is yours more or less for the asking (and the required effort), including paths neither you nor I can imagine right now.
It is actually pretty common among human beings to predict the future poorly. For example, I have spent most of my working life doing something quite different from what I thought I'd be doing when I went to college. I am not married to the woman I thought was the love of my life when I was 25, and I'm glad of that. None of my children turned out the way I guessed they would, when they were born, or even toddlers, and yet they are all precious to me, I am proud of them and love them to bits. Many of the friends and activities I enjoy most right now I just stumbled into over the course of living, and I could never have predicted before they happened that they would happen, or that they would be important. The future is extremely hard to predict in general. I would never attempt to predict your future, even if I knew you extremely well. Perhaps you will die, perhaps this year, perhaps even within the next week or two. Or perhaps you will not, and you will become a person with extraordinary stories to tell -- a kind of person I appreciate more and more as I get older: the variety of actual lived experience and the insights people derive from them is better than anything even J. R. R. Tolkien or George R. R. Martin could dream up.
I have known two young women who were anorectic. The one I know very well, because she's family, survived. She is now in early middle age, has a husband who adores her, a couple of cats, and a well-paying job she loves in a career that is about a thousand miles from what she thought she would like, and what her parents thought she would be. She no longer lives near her family, and she has different interests and friends. If I ask her what made the difference, she doesn't really know. No treatment or intervention or pep talk or anything she read or heard seemed sufficient, she said they seemed all equally worthless. She just decided one day to do something different, because she could, because life offered more possibilities than death, and then because she had an iron will (not uncommon among anorectics) she made it happen. I wish I knew more than that, but I don't. (It's certainly likely if you yourself talked to anorexia survivors, and there are a large number of them, you would learn much more than I could ever know.)
Some years ago -- actually more like 10-12 years -- I read an article in a major East Coast magazine on people who had jumped off the Golden Gate bridge, which was a subject of great interest to me because when I was young a close friend of mine did just that (he did not survive). Two things in that story really stuck with me. The first was that a study was made in the late 70s of the relatively few people who survived the jump, and it was found (to the surprise of the investigators) that 94% never even attempted suicide again. The second was a comment made by one of those survivors, which perhaps gives insight into the phenomenon. I wrote it down, because it was so important to me. It was from a survivor named Ken Baldwin. He said: "I still see my hands coming off the railing. I instantly realized that everything in my life that I’d thought was unfixable was totally fixable--except for having just jumped."
I hope you find a way to a different path. We need you. You are an important person, how important we don't even know yet, because you aren't yet all you could be.
Yes!!! All of this. Please talk to some others who have survived anorexia before you jump.
Here's an angle which might be relevant-- if good treatment for anorexia is possible (I'm really not sure), the standard for getting it-- very low BMI-- may be inappropriate. I've seen complaints that a person can be fat, and still eating so little that they're damaging themself, but they can't get treatment. The same might well apply to people of more average weight.
This seems downstream of more general "Right to die" advocacy.
That said, what would you want to see here? I can't really think of a good way for the medical community to support you here. I'm trying to say this as gently as possible, and honestly can't figure out a kinder way to phrase this, but if someone is choosing to not eat until they die, what kind of support can the medical community give here? Are you just thinking pain meds? Do you want an IV drip? Intentionally keeping a person alive while they starve to death seems really, really, monstrously cruel.
There is a time and a place for calling suicidal people cowardly, but it probably isn't here and now.
I specifically wasn't, though. I called out the people who demand that *others* kill them.
What would the time and place be?
Well, if Ludex wanted to make a top level post objecting to right to die legislation I think it would be reasonable. Describing the activist's goals as "Horrific and cowardly" is a little too "boo outgroup" for my taste, but I think Ludex would be well within his rights to do so. I just think doing so when the top level poster seems to be a suicidal person who is trying to get medical assistance in dying is unkind, unnecessary, and kind of in bad taste.
Part of the right to die advocacy is from people who aren't physically capable of killing themselves.
What does anorexia feel like from the inside?
There are books, you know-- and all cases of anorexia aren't going to be the same.
That being said, this youtube channel is pretty vivid.
https://www.youtube.com/user/ofherbsandaltars/videos
One often sees discussions about well-known existential hazards to the human race. But what about left field risks that nobody anticipated, perhaps a disastrous unexpected consequence of something almost everyone assumed would be a marvellous idea?
World Government would be one such example, in my opinion, should it ever be attained before there were flourishing human colonies throughout the Solar System and beyond. But the merits of that are far from generally agreed, and anyway this post is about something else.
Probably most people would agree that the World would be a better place if everyone's IQ was bumped up by, say, twenty points. No doubt that will soon be a realistic possibility with genetic tweaks to the unborn, and much in demand. But I think the opposite is true: It would be disastrous, and increase the level of strife and contention.
Aren't most terrorists, for example, besides the patsies they persuade to sacrifice themselves, better educated and smarter than the average Joe? What if everyone in society felt they were intellectually special and demanded to be heard and became bitter if they were but one voice among the multitude? Highly intelligent people can be very quarrelsome and arrogant, whereas we lesser intellects (speaking for myself!) are mostly content with the status quo, and that means on the whole a more stable and peaceful society, instead of the opposite.
And don't get me started on intellectually enhanced talking pets. That would open up a whole new can of worms! :-)
There’s evidence that 25% of janitors are smarter than 25% of PhD students when tested. A smarter population might be a bit restless in menial duties, but many would be happy enough. The economy would perhaps be more egalitarian.
Source?
sauce
There seems to be a recurring pattern of you asserting things and then being hostile towards the idea of having to make any effort at substantiating them.
It's easy to find the source for that. No idea what else you are referring to, but you seem to enjoy drive-by comments about spelling or demanding sources. The last 6 of my comments have replies from you, none of them worthwhile.
This kind of posting is generally decried as sealioning. Anyway you are added to my personal blocking list, a list of one. And reported.
>Aren't most terrorists, for example, besides the patsies they persuade to sacrifice themselves, better educated and smarter than the average Joe? What if everyone in society felt they were intellectually special and demanded to be heard and became bitter if they were but one voice among the multitude?
"Special" is only meaningful in a relative sense. If everyone is "special", nobody is. Being bright (compared to today's standards) but no more so than anyone else likely wouldn't make you feel special. There may be a transition period where today's tasks are easy for the average person, but if even the high-IQ people are getting a bump then they're going to be operating on a higher level themselves and so remarkable today becomes unremarkable tomorrow.
> Highly intelligent people can be very quarrelsome and arrogant, whereas we lesser intellects (speaking for myself!) are mostly content with the status quo, and that means on the whole a more stable and peaceful society, instead of the opposite.
And I don't know where you're living, but lower IQ (than yourself) people are certainly not okay with the status quo. "Inequality" is a huge issue at the moment.
And your basic claim is trivially wrong. The most strife-ridden parts of the world are amongst the lowest IQ, whereas western Europe is extremely peaceful and is self-destructively tolerant. If sub-Saharan Africans had a mean IQ of 100 with a SD of 15, do you imagine there would be *more* e.g. ethnic or religious conflict there?
Mostly agree. We are breeding a population of smarter people. (Since elite schools select for smarts, and you've got a good chance of marrying someone you meet in school.) Smart coastal elites and all us dumb f's left in the middle. (Personally I'm happy living in the middle.)
The evidence goes the other way.
OK Show your work. I get my take from "Coming Apart". https://en.wikipedia.org/wiki/Coming_Apart_(book)
The evidence is that assertive mating does exist but that in general we are all getting dumber for a number of reasons, mutational load, and dysgenic effects caused by the educated classes having fewer children relatively.
Ahh, it doesn't matter whether we are on average getting dumber or smarter.
Elite selection is working on the difference.
Assortative.
I do wish that this commenting system allowed me to block certain posters, don't you?
"Idiocracy." :)
Well, it depends what aspect you focus on. I could believe that assortative mating is stronger than in centuries past, producing a bubble of super-elites, but I know your point that said bubble is small and smaller every generation because smart people have fertility well below replacement.
I think the problem with terrorism and such isn't too much intelligence, it might be placing too much value on everyone making a large difference.
I've only listened to about half of this piece about The Hero's Journey, but it's got material about how it used to be that heroes were godlike, and then it became normal but rare to be a hero, and now everyone is supposed to be running their own destiny.
https://www.youtube.com/watch?v=nDEgJdSfcZ4
Maybe the interesting question isn't figuring out unlikely disasters (or at least disasters that look unlikely from here), it's building resilience.
> it might be placing too much value on everyone making a large difference.
That would certainly be a large part of the problem, the more so because by then most of the low hanging fruit will have been picked. For example, every math problem solved and refined, every poem written, can never again be solved anew or written quite the same. The only possible novelties will be ever more specialised and arcane.
One could argue that people in future, feeling smothered by these past achievements, will do what they have always done, and just ignore them where possible. There are already literally miles of bookshelves groaning with worthy past literature, most of which practically everyone is blissfully unaware of and which will probably never be read again.
But even that won't be possible if instant recall, Google on mega-steroids as it were, one day makes all this searchable and available for instant comparison with supposed novelties.
Also, advances in AI may well have consolidated and entrenched one agreed sensible and sound world view about most things, so that dissenting opinions will have even less weight, and find it harder to make headway, than they would now.
There's a Spider Robinson story "Melancholy Elephants"(1982) which makes the point, though I think he underestimates the importance of rhythm and microtonality for possibilities for new tunes.
I also think there are some interesting possibilities for increasing human sensory ranges to make new art possible. Still, there are probably limits.
Pressure on people to all have political effect is probably more dangerous than pressure to create new art.
Enter Carol Dweck
This would be at least as much about resilient institutions and skills as about emotional resilience.
I'm going to hazard that the former derives from the latter, and not vice versa. I doubt you can build resilient institutions out of fragile components (fragile people), notwithstanding the construction of bridges out of straws that beginning engineering students do for laughs.
> And don't get me started on intellectually enhanced talking pets. That would open up a whole new can of worms! :-)
Well, I'm all out of terraforming charges, so if anyone wants to go there, they're on their own.
Ten questions about the limit of human intelligence : https://aeon.co/essays/ten-questions-about-the-hard-limits-of-human-intelligence
Related essay by the same author : https://arxiv.org/abs/2208.03886
Aren't most of our moral choices these days fucking phony? Like plastic straws or no bags at the grocery? None of it means shit in the scheme of things, yet we are supposed to pretend it does.
It gets hard to believe that any so-called moral choices have any reality behind them.
morality theater for normies is there, sure.
that doesn't mean there aren't plenty of potential choices of high moral significance, it's just that, as ever, high moral ground is tough and only for the few.
there are people founding U of Austin or putting careers on the line pushing against orthodoxy. or people who risk their lives uncovering Falun Gong organ harvesting. or just people who move to Taiwan because they care, or simply give a few thousand bucks to the Ukrainian military:
https://bank.gov.ua/en/news/all/natsionalniy-bank-vidkriv-spetsrahunok-dlya-zboru-koshtiv-na-potrebi-armiyi
Doesn't seem like a lack of worthy causes to care about to me. But, as usual in the free world, it's mostly on us to care or not.
Sure, using a paper straw is pretty pointless, but there's no reason to generalize from that to other things that you consider real moral choices. Do you try to avoid harming other people, do you help those close to you lead more fulfilling lives, and so on... In general, the existence of BS doesn't mean everything is BS.
How do you define "moral choices?"
Decisions like straws or bags fall into the general category of 'retail consumption ethics', and I think your statement is broadly in that arena. People make many of those decisions largely out of tribalism, either consciously or unconsciously, with little actual impact to the real world.
But there is obviously a much bigger world of moral choices that do have a lot of impact. at least in a marginal sense, e.g. "should I cheat on my taxes?"
Isn't the decision to cheat on your taxes mostly about weighing the potential consequences of the action rather than a moral choice? I think most people would cheat on their taxes with a clear conscience if they 100% knew they would get away with it.
I think the main moral choice people make is: Should I cheat on my spouse? Yet I rarely hear that brought up when people get into ethical discussions.
Just because you don't evaluate something as a moral choice doesn't mean it isn't one. I agree most people probably think in practical terms more than moral ones, but that doesn't make the moral angle vanish.
As to spousal fidelity - my experience is vastly different. Between advice columns, /r/AmITheAsshole, etc., I've seen a ton of discussions around the ethics on that issue.
In some ways, that's really good! You don't have to make life or death decisions anymore, those things suck. We have successfully created a utopian society where the worst things we have to worry about are straws, instead of surviving the winter or if the black plague will get us.
Given the energy crisis and food shortages, surviving the winter might not be the best example.
No, the worst thing we have to worry about is being cancelled because we used the wrong kind of straw. Or maybe being killed in a nuclear war because Vladimir Putin had a temper tantrum after the Fourth Battle of Kharkiv, but for the purposes of the analogy, we'll go with straw-based ostracism.
In the Before Times, you risked being ostracized by the tribe because you e.g. carelessly let a pack of wolves get at the sheep. And it makes sense that you'd want to impose that sort of penalty on people who make that sort of mistake. Now, we've still got the "must ostracize people for endangering the tribe" impulse, but we direct it against people who use the wrong sort of straw, because that's all we've got. That sort of takes the shine off the "utopia" where none of your decisions can really hurt you.
Cancellation is a pretty big upgrade from exile, imo. At least when I'm cancelled I can still get food, shelter, friendship, and other necessities. Exile back in the day was seen as worse than death!
But yes, you are correct we still have the urge to punish people, and do so even when the punishment is disproportionate to the crime. Isn't really relevant to the OP tho.
Sure, that can be bad I suppose, but imo it seems better to have fake problems we turn into real problems than just have real problems in the first place. Besides, not really relevant to OP's question about morals.
Because some entity outputs the string "suffering isn't important", should that make suffering/happiness axiologically neutral for *you*, as someone who experiences it?
For "survival reasons"?
Are you saying that you would choose to be kept alive (and survive) forever in an infinite torture machine?
Or do you mean that you prize evolutionary fitness above all else? In that case, would you let aliens inflict infinitely painful experiments on you forever, if in return, the aliens made sure you had more descendants than any other living organism?
It sounds like you do have a value system after all. You might say that your aversion to suffering is just behavioristic, without any value-making assumptions to it. But I doubt you would agree to be reprogrammed to seek out suffering as your goal, even if you were compensated for this with money. There seems to be something about the experience of suffering itself that you value negatively, apart from evolutionary fitness/etc. This seems hard to square with genuine nihilism.
It seems to me there are two main approaches to understanding the world/existence/reality. One is to assume dualism, which leads to a scientific and perhaps rationalist approach. This is the bottom-up view.
The other is to assume sensation comes first. This is the poetic approach. As the poet Octavio Paz writes: "Poetry is the testimony of the senses."
These different approaches don't necessarily contradict one another. There is no reason they can't fit perfectly together. Yet we are a long way from a unified theory of reality, so those two perspectives remain in conflict.
I tend to believe that the poetic reality is likely closer to the truth. Our senses are subjective yet also objective. What we feel/taste/touch/hear is real. What we think may not be.
I think we should dedicate more effort to understanding the world from the poetic view and less from the scientific view, which is too subject to fashion.
>One is to assume dualism, which leads to a scientific and perhaps rationalist approach. This is the bottom-up view.
Don't you mean reductionism?
By bottom-up I mean the belief that every effect has a cause. The effect is at the surface and the cause is beneath. Whereas a poetic perspective focuses on sensation and experience and is agnostic to causes.
One problem is that all the hard, important poetic work that has been done can't be condensed, because it loses its value upon abbreviation. Science, OTOH, advances in the direction of simplicity.
https://interessant3.substack.com/p/interessant3-9
Three Interesting Things Once a Week. Pretty simple, really.
So I wrote a novel that is supposed to promote Effective Altruism, with funding from one of the ACX plus grants that never got announced here. I've reached a point where it is fairly polished, and I'm looking for feedback, and also ideas about ways to get it the biggest audience when I publish it.
Here's the google doc with the current text, feel free to read it if a novel about Effective altruism and two people struggling for control of one body sounds cool to you. https://docs.google.com/document/d/1ZppL3mlO6M98TLQk2IAdL3nM__Pmjxirt59WXGzbIMM/edit?usp=sharing
I'm very late to the party on this one.
The idea seems incompatible with itself; the Effective Altruism stuff is completely undercut by the massive, ongoing invasion. I think you need to cut one or the other, which is going to mean major rewriting.
I would not recommend publishing anytime soon; you need more practice writing first. Shelve this one, get a few more stories written, get a better feel for pacing and exposition, and come back to it with more seasoned eyes.
I read a few pages but found the style of the soliloquies too exuberant and repetitive, as if you are trying to drum your thoughts into the reader's head! "High? Did I say the mountain was high? It was really, _really_ high" that kind of thing (quoting from memory).
IMHO you should try to cultivate a more sparing and easygoing style, and stand back and let the reader interpret your meaning more, instead of trying to ram it down their throat!
Also I didn't like the effing and blinding in the descriptions. Maybe that is fine with your target readership, and they may even expect it. But, without wanting to sound like a prude, it seems to me gratuitous and off-putting.
Edit: You may find the following blog article about Effective Altruism interesting:
https://samf.substack.com/p/the-politics-of-effective-altruism?utm_source=substack&utm_medium=email
I haven't read your novel (may look at it later), but can I make a small point? Please don't start a sentence with "So", unless it is a logical consequence of the previous sentence!
Starting sentences with "So" for no apparent reason is the besetting sin of most academics. But to normal people it sounds insufferably pompous, as if the pronouncement they are about to give is unchallengeable wisdom "So be it .."
My biggest suffering during Covid was having to listen to medics and pundits, wheeled onto TV one after another to pontificate about the pandemic and, guess what, they almost invariably started every reply with "So"!
So, please stop doing it everyone! :-)
If all of them do it, clearly it's part of normal English usage.
Seamus Heaney in his fairly famous 2000 translation of "Beowulf" departed radically from tradition in translating the initial Hwæt as "So." where it had always been "Lo!" or "Hark!" or some such. He said[1]:
"in Hiberno-English Scullion-speak, the particle ‘so’ came naturally to the rescue, because in that idiom ‘so’ operates as an expression that obliterates all previous discourse and narrative, and at the same time functions as an exclamation calling for immediate attention."
----------------------
[1] https://www.superlinguo.com/post/122708669336/say-hw%C3%A6t-translating-from-old-english
I'm surprised. I think starting sentences with so is colloquial. "So, there I was..."
It’s also colloquial where I come from, in Ireland. In fact Seamus Heaney translated that first word of Beowulf (hwæt) as So. He felt that the other translations were too strong, that hwæt was more of a slight interjection, not the normal Hark!
He used so, because it was common enough usage to him.
So it seems (DOH! Now you've got me doing it! :-)
But seriously, I guess really it's no more arbitrary than starting sentences with "Well", which no doubt annoys some people.
Hmm interesting, yes, I did a bit of searching on the web and it does seem to be more a US colloquialism. British people, mostly young people and PR representatives and politicians (the latter two being much the same these days!), tend to ape American turns of phrase to try and sound trendy. But I wonder why academics, of all ages, are so addicted to it!
I can imagine someone in the UK saying the example you gave, without it sounding jarring. But it would most likely be in the middle of some story, where the "so" was indicating a consequence like "as a result" or "next thing you know" ..
There's an article on it at https://www.spectator.co.uk/article/it-s-so-annoying
Another example of an author successfully using novels to make ideological arguments would be Dickens. A more sophisticated but less successful one would be Trollope — who has Dickens as a character in one of his novels, portrayed critically.
I haven't read your novel, but I see two problems with using a novel to make an ideological point. The first is artistic. In my experience, no plot survives contact with the characters. If you are committed in advance to where the story will go you are not free to let the characters you have created act as those people would, so risk making them puppets rather than people.
The second is an issue of honesty — it's too easy to cheat. You can give the people who disagree with you bad arguments, the people who agree with you good arguments. You can arrange to have policies you disapprove of have bad results, policies you approve of have good results. You can thus make the case for your position look much stronger than it is. That's a good deal of the reason that, although I have written three novels and they are affected by the fact that I am both an economist and a libertarian, none of them is an argument for either libertarianism or economics. I leave that to my nonfiction.
In general I agree; however, Dickens works even though he makes his bad characters caricatures and his good characters saints. Maybe we are too far away from the era to worry about the misrepresentation of workhouse boards of gentlemen. Maybe they were decent folk, in general.
Sorry, I had to give up at around the part where the guy (I can't call him the hero, he's too damn annoying) is flying alongside a dragon. Oh, and congratulations on making *flying alongside a dragon* TEDIOUS, because your guy is so busy patting himself on the back about being smart and a rationalist and smart and an altruist and smart and did I mention he's really smart?
There is a way to do infodumps and lectures about your pet philosophy, and I'm afraid you haven't cracked it yet. Well, this is what first novels are for - write write write, produce bad work; write something else, write write write, it's still bad but you're learning; repeat until eventually you produce something that can stand on its own two feet.
Your guy is too busy *lecturing* about the free market and I can't remember what-all, then every so often you remember "Oops, I've left him naked on top of a giant mountain peak, better mention something about that". Right now, I don't care if he *could* be the One True Saviour, Because He's A 21st Century American, Goshdarnit! of this world, I would like him to be hit by a truck again. Or eaten by a dragon. Or something, because he is so tiresome.
"All these dumb chuckleheads couldn't figure out how to Do Good with their accumulated wealth, but since I am a Bay Area altruist, I am sooooo way smarter than them, I can fix this world easy-peasy!"
Yeah, I don't think so.
I disagree with the Jacks!
First I think that the concept of writing a novel to promote some ideas has a looong and respectable history. It is especially common in science fiction; H. G. Wells, for example, had a very clear political agenda in the majority of his novels, and more recently HPMOR of course has a very clear message! I don't see the problem. Yes, authors of fiction often have values that appear in their novels, and? For me it is a problem only if it is very "on the nose". Or it can be a problem for readers who disagree with the authors; for example, religious readers of His Dark Materials are frequently bothered by the very clear anti-religious messages of the novels. But then what?
I also disagree with the evaluation of the quality of your writing itself. I love great writers, those with incredible prose, who are able to evoke in a few sentences a striking scene that will remain in the readers' memory... and I also love writers who have a classic writing style but an interesting story to tell. For me, you write well, your descriptions are clear and telling, and yes, of course, you can improve in writing quality, but what you have written already seems to me of quite sufficient quality for a novel that could be very enjoyable to read.
Honestly, it's not very good. The prose is competent but no more than that and almost nobody wants to read a novel that doesn't have way, way above-average prose.
You have smarts and talent, but it isn't in novel-writing.
The concept of writing a novel to promote a Movement is pretty ridiculous. Harriet Beecher Stowe pulled it off, but I can't think of another successful example. Can anyone?
Ayn Rand managed to create a movement, but I certainly wouldn't call her fiction work "good". Her essay collections are much better, but very few people want to read essays.
Atlas Shrugged is way better than her essays
I was going to say Ayn too. And I loved her when I was about sixteen, so I would dispute "good". The Clansman could also count, though maybe only as a bank shot. It was the novel on which Birth of a Nation was based.
I'm currently working on my first work of fiction, and I am very sensitive to feedback. So, keep that in mind when I say that I mostly agree with Previous Jack. Some thoughts:
- Plus one to his point about writing a piece with an explicitly political/ideological purpose. That is an incredibly high bar to clear.
- Double spacing would dramatically improve the readability.
- Your writing reads very much like someone who is trying to write. I don't mean that to be discouraging. I see a lot of good pieces and parts - you clearly have the chops to make something good. But, as someone who is currently trying a lot of different voices on for size, I can recognize you as someone who is adopting a voice.
Have you ever tried doing a podcast? It's kinda the same - "I'm a good conversationalist, so obviously I'll be a good podcaster." (No, no you won't.) Your writing has that 'this is my first podcast' feel. It takes a lot of work for your writing to feel natural. I learned that firsthand with my book review (Society of the Spectacle, my first piece of public writing ever), which was very polarizing mostly due to stylistic/rhetorical choices that distracted from the points I was trying to make. Scott makes it look easy. It is not.
How do you make your writing feel natural? I'll let you know if I discover the secret (after I make a lot of money selling webinars). But my first piece of advice would be to try writing some of those passages as if you were a different person, or in a different mood, or from another perspective, or in a different tense.
- Lastly, you are doing yourself no favors by announcing what you are doing and why. If you say "I wrote an EA novel", pretty much everyone is gonna assume that it is bad from the jump. Same for any movement novel. You are digging yourself a hole and affecting your feedback because of that.
I have had that same problem more generally as both a writer and a critic. When my friend wrote a novel, I noticed that I was judging his work in a different way than I would judge a work by my favorite author. With a proven professional, people assume it will be good until something jars them out of the reading experience. With an avowed amateur, readers question every sentence and every word until they get hooked despite themselves.
Point being, when you are trying to solicit meaningful feedback, try presenting yourself and your work differently. You might be surprised by how the criticism changes. Who knows? Maybe if you send this exact same draft to a different group of people with a different introduction, you might get a different reaction.
As Previous Jack said, you have smarts and talent. Some people straight up can't write, and you definitely don't fall in that category. You *can* do this, and soliciting feedback and iterating is the right way to go about it. Keep working, we look forward to seeing the next draft!
Not wild horses, but curiosity was enough...
I still agree mostly with Jack. There are quite a lot of good things about your novel - it's interesting, has some good ideas, an engaging story etc etc. Unfortunately novels get to be judged the other way round - from the top down, with reviewers listing the things that grate, or are clunky, or lack plausibility. And that always comes across as harsh, like Jack Wilson's comment, which I sort of disagree with - your novel is a bit better than that.
Specifically I agree that, to a first approximation, it's impossible to use a novel to sell an ideology. Yours is as good an attempt as many, but there's a real limit to how many people (apart from the already signed up) can bear to listen to missionary zeal.
A couple of things I found odd - the non-profane language all the way through, apart from one character who says 'Fuck' all the time. Fuck this, Fucking that... Yes, I know it distinguishes him from his other half, but there may be less clunky ways to do that. And the line 'Fuck. Fuck. Fuckity. Fuck.' isn't punctuated the way someone would say it, so of course people are going to stumble over it.
Also (especially given the language above) you have for some reason avoided any dalliance with romance at all. It seems odd - I know it can be difficult to write about, but it left me feeling something was missing.
If I'm honest, my curiosity was nudged by wondering what a defence of EA might look like to someone like me who is profoundly anti-EA. Obviously we could argue all night and not make the slightest progress... so I'll make just the one point.
Right at the beginning of chapter one you declare -
"You should always consider the possibility that you are wrong, and making a mistake."
This is something I hear frequently in EA - and broader rationalist circles. And yet I have a profound feeling that at no point in the novel have you questioned your ideology. The fundamental beliefs (and I would say faith) are simply not up for discussion or consideration. The uninitiated 'aliens' are just people who haven't seen the light yet and so their naive objections are feeble straw men.
Of course if you were really open to thinking you are wrong - and making a terrible mistake - you wouldn't be writing a novel trying to convince others. But the plonking of that dictum right at the beginning of your book jarred quite a lot, because you've clearly made up your mind and 'Considering the possibility that you are wrong' isn't something you give the impression of doing at all.
I enjoyed the story and particularly the ending, except for the last few sentences. Even if I was a big fan of EA, I think I'd have still thought that it was too much spoonfeeding. As if at the end of Animal Farm Orwell had said "Oh, btw I think totalitarianism is really really bad...". You've made your point!
I haven't read your novel - wild horses couldn't pull me in the direction of a novel that was pro EA, but I think Jack's advice is spot on. Really - there's a lot of insightful stuff in there. And yes, above all, keep writing.
And good luck!
Agree with the Jack above. Keep writing. Always keep writing if you want to be a writer.
A discussion about adding voting restrictions came up on a different site a while back, and among more foundational arguments against it I remember thinking "we already have voting restrictions, it's called children," and it suddenly occurred to me; why DO we restrict children from voting? The only arguments I can think of in favor of restricting children from voting are:
1. They aren't well informed, which is true for most adults as well
2. Their parents would pretty much just vote for them, which seems like a feature; if a mother of four gets four more votes than a bachelor I'm fine with that.
3. Their parents or other authority figures would exert too much control in trying to get them to vote a certain way. I'm pretty sure this is already pretty bad, since everyone already knows they're going to grow up and vote in the future, but maybe it would be problematically worse. Campaign ads during Looney Tunes.
4. The system couldn't handle that many new voters, which if true would need fixing in the long term anyway.
5. Edge cases involving newborns, which could be avoided by requiring that children be able to write their name or something.
The upside would be getting people to vote in the first year they want to, whichever year that is, and not giving them the sense that their input into important events is completely excluded; that sense of exclusion seems like it could either convince people to never vote, or radicalize them to "make up for lost time".
Overall takes, any thoughts I've missed, anyone know anywhere that's tried it?
We don't allow children to vote because they aren't as cognitively developed as adults.
Should children be tried as adults whenever they are charged with a crime?
I was smarter as a child than the average adult criminal is, so unless you want to try dumb adults as "children", there's no reason to have different criminal standards arbitrarily based on age.
Also, do you not understand how awful it would be to have people trying to influence children to vote for their party? I'm not saying it would be particularly effective, but the possibility for real harm exists.
>which seems like it could either convince people to never vote
That seems extraordinarily unlikely. It seems much more likely that people who don't vote do this because they're uninterested in politics, and children are much less interested in politics than adults, so it seems silly that they would have voted as kids.
>or radicalize to "make up for lost time".
Again, very unconvincing. A majority of people do not become radicals, which makes 'not being able to vote' a poor explanation, and since it's so rare, it seems like the sort of thing you were almost bound to become anyway.
You assume that it would be the default that children are allowed to vote, and that we need some sort of justification to deviate from this default.
But we can also see it the other way: by default, children do not have any freedom of action. They are not allowed to vote, yes. But they are also not allowed to make contracts, to decide on their medical treatments, to decide where they live, and so on.
This goes pretty far. In Germany, children are not allowed to buy anything without parental consent until they are 12 (because every transaction is a contract). If they want to buy a toy from their "own" allowance, and the parents don't want that, then the transaction in the shop is invalid.
So in general, children get such rights step by step between the age 12 to 18 (ages vary from country to country).
So by default children cannot vote, and the only question should be when exactly they obtain that right.
How is that German law enforced?
In practice, if a child walks into a shop alone, then the shop can sell a toy to the child, assuming that the parents consent with that. If the parents at home are happy with that, nothing else happens.
But if the parents at home don't like that their child bought that toy, they can go back to the shop and undo the transaction. (Money and toy are returned.) Legally speaking, the purchase then has never happened, because the child was never able to make a binding contract. Principles like "pacta sunt servanda" (contracts are binding) do not apply to small children.
EDIT: A very quick google search suggests that similar rules apply in many US states and probably other countries as well.
I'd never heard of this-- that children are restricted from making small purchases.
The American version (as I was taught in my high school Business Law class) is that minors are entitled to return anything they purchased, regardless of the store's return policy, as any contract entered into by a minor is voidable at the discretion of the minor.
It seems to be the same in (some states of) the US:
"Under state laws, parents of nonemancipated minors can void purchases and other contracts their children have made without adult permission, especially those involving face-to-face transactions, where sellers are in a position to know or suspect they're dealing with an underage consumer"
It seems that there are exceptions, and the details are complicated. From
https://www.consumerreports.org/cro/news/2014/07/are-you-responsible-for-purchases-your-kids-make-without-your-permission/index.htm
I wonder how often the law is enforced, especially for small purchases.
>1. They aren't well informed, which is true for most adults as well
Agreed.
>2. Their parents would pretty much just vote for them, which seems like a feature; if a mother of four gets four more votes than a bachelor I'm fine with that.
This is still true for the majority of adults if you just change "parents" to their authority figures: their favorite celebrity, politician, etc. The vast majority vote for people, and with their mid/hindbrain, instead of voting for well-thought-out policy with their forebrains. This is no different than when literal children do it wrt their parents.
>3. Their parents or other authority figures would exert too much control in trying to get them to vote a certain way. I'm pretty sure this is already pretty bad, since everyone already knows they're going to grow up and vote in the future, but maybe it would be problematically worse. Campaign ads during Looney Tunes.
This is no different than campaign ads during Game of Thrones etc. Staring at a box watching stuff that didn't happen or stuff that doesn't matter is childish no matter what. Just because they inject sex and violence for the big kids doesn't make it any different.
>4. The system couldn't handle that many new voters, which if true would need fixing in the long term anyway.
The system handled the 19th amendment fine
>5. edge cases involving newborns, which can be avoided by making the children be able to write their name or something.
There are also edge cases involving people in comas.
>Overall takes, any thoughts I've missed, anyone know anywhere that's tried it?
Overall, there is no reason to not allow children to vote when the current ethos amounts to begging the worst examples of homo sapiens over 18 to please cast their very important, very well informed vote. Many people on this forum live in a bubble of sorts -- most people are 20+ IQ points lower than you which is A LOT. They can hardly read based on tests like those from the famous Robin Hanson post. They are basically not capable of doing anything but their mid-low 5 figure highly supervised jobs. They cannot lead, they cannot learn much, and they cannot think. Most people can't learn algebra -- just talk to teachers in average school districts. Many students are pity passed with Cs through high school math and never get it. These people are so far from being capable of having any sort of informed view on 21st century politics that you might as well let children vote. In fact if rationalists are about 120 IQ on average, the typical rationalist was probably as intelligent as the average adult at 12 or 13 years old. They have already committed a crime against you by not allowing you to vote in middle school.
Voting confers legitimacy if it is done by the free and the equal. Children are not free - in the sense of being able to unilaterally make life decisions and enter into legally binding relations - which, as you mention, eventually boils down to giving their *parents* more votes, hence failing the equality prong.
Anyone who finds themselves on the losing end of either the "unequal" or "unfree" aspects of the system is fully justified in denying its legitimacy and undertaking whatever steps might be found fit or necessary to free themselves from its rule.
It would at least help to hammer on the fact that votes (at least if they're at a polling place) are secret.
We'd at least have gotten later start times for high schools much sooner.
Most children have close to zero knowledge of and interest in politics. So their votes would be completely uninformed and unmotivated, besides barely understood ideas put in their heads by adults around them. These votes would thus discredit democracy and debase political discourse even more than the bear pit it already is!
In the UK two hundred years ago it was illegal for teachers to teach children history, let alone politics, more recent than twenty years in the past. Any teacher with the bright idea of teaching what we would call current affairs would soon find themselves arrested for sedition! But, as with sex, total ignorance of politics and deliberate isolation from it as children didn't seem to affect most people's interest in it as adults (even though only property-owning males had the vote back then). So maybe it wasn't such a bad approach :-)
Could you expand on your "illegal to teach history" factoid? Two hundred years ago, most schooling in the U.K. was private and, I think, essentially unregulated. What laws or precedents would prevent a teacher from telling kids about recent history or politics?
I read it years ago, and forget which book. But there is probably a reference to it in this article: https://www.jstor.org/stable/4285072 (It'll cost $50 to buy though!)
I could find no reference supporting that in the article.
Most "adults" have close to zero knowledge of and little significant interest (some get into it like sports but never understand it) in politics. So their votes are completely uninformed and unmotivated, besides barely understood ideas put in their heads by high IQ agentic adults around them. These votes thus discredit democracy and debase political discourse even more than the bear pit it was 200 years ago!
Perhaps it should be illegal for professors and journalists to teach "adults" history/politics more recent than twenty years in the past. Sadly most journalists and many professors think teaching what we call current affairs to these "adults" is a bright idea. Maybe they should be arrested for sedition!
I would think that a class of voters with no knowledge or interest would be neutral, not negative! Their votes would cancel each other out.
It’s only when you have a group *with* knowledge or interest that they will have any net effect.
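The "cancel out" intuition can be checked with a toy simulation (purely illustrative; the numbers and the uniform-random model are assumptions, not data about any real electorate). A bloc of n voters who each pick one of two candidates at random produces a net swing whose typical size, as a fraction of the bloc, shrinks like 1/sqrt(n):

```python
import math
import random

random.seed(0)

def avg_swing_fraction(n_uninformed: int, trials: int = 300) -> float:
    """Average |votes for A - votes for B| / n over many simulated
    elections, for a bloc that votes uniformly at random."""
    total = 0.0
    for _ in range(trials):
        a = sum(random.random() < 0.5 for _ in range(n_uninformed))
        total += abs(2 * a - n_uninformed) / n_uninformed
    return total / trials

for n in (100, 1_000, 10_000):
    # theory: E[|net|]/n ~ sqrt(2 / (pi * n)), shrinking as the bloc grows
    print(n, round(avg_swing_fraction(n), 4), round(math.sqrt(2 / (math.pi * n)), 4))
```

So a large bloc of purely random voters barely moves the result in percentage terms; only a correlated (knowledgeable or manipulated) bloc has a net effect, which is the worry in the comments above.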
I think the idea is that they have little intrinsic interest or capacity to think beyond what they're told. So children would be voting for what "adults" tell them to want, which mirrors how "adults" vote for what the actual Adults In The Room offer them.
3 main reasons:
1. The one that actually probably has the strongest effect: parents who actually have children on the whole think letting their children vote would be a nightmare
2. The one that actually means the most: letting children vote is probably very bad _for the children_. I fucking hate being advertised at as an adult. At least when you're a child it's only about your parents' money, and it's gated by your parents.
3. Pragmatically, children cannot reliably get access to polls, and we're not moving away from physical+mail voting anytime soon
So at 14 somebody is mature enough to vote, but they are not mature enough for Jeffrey Epstein to pay them for sex.
That one threw me for a loop; nobody told me that Epstein was in the habit of actually paying the girls.
Upon looking into it, though, it doesn't seem to have all been consensual commerce.
Kelsey Piper had a fun article on this: https://www.vox.com/future-perfect/2019/9/10/20835327/voting-age-youth-rights-kids-vote
She basically came to many of the same points (but didn't even bother saying the children should be able to write their name - if they can't write their name, then they're probably going to cast an invalid vote anyway, which is fine).
One objection not addressed in the article is that individuals under 18 are treated differently in the justice system. If they are considered less responsible than those over 18 for stealing, then maybe they aren't responsible enough to vote.
As regards that objection, it is worth noting that it's fairly common for teenagers to be tried as adults for more serious crimes, though I'm not sure of the precise criteria used.
I think at #1 you're mistaking the important aspect of being "well informed." Voting is not meant to be rocket science; you are not asking everyone to be an expert on some abstract social issue: should we send humans to Mars or not? Devalue the dollar or not? Et cetera.
What people are expected to be very well informed on is their own interests, so that when Congress proposes to, say, tax the bejeesus out of gasoline because EVs are cool and Gaia loves them, each individual voter will know very well how that would affect him or her personally, and can send that feedback (via a vote) back to the government. It's best to think of voting as an information-gathering mechanism, where we collect everyone's opinion on how X or Y will affect him personally, and funnel the results back to everyone who wants to know. The fact that it can result in power changes is mostly because we can subsume the entire issue of feedback from The People into the simple question of: who should make decisions on their behalf? From that choice all else follows. So instead of having voting on every issue under the Sun (although living in California one feels we edge ever closer to that madness), we just vote on who should *decide* issues.
From that point of view, it's easy to see why we exclude children: they don't understand their own interests, pretty much by definition. They'll eat ice cream for breakfast, lunch, and dinner, fail to dress warmly in cold weather, and cross the street without looking. So asking them their opinion on how X or Y will affect them is useless -- they won't answer accurately.
Of course, some adults won't answer accurately either, but that's on the margin, and usually has to do with very hypothetical changes, like how would colonization of Mars affect you? rather than more realistic changes like how would $8/gal gas affect you? Of all the kinds of expertise people have, the one we can most rely on is expertise in knowing what they like and don't like.
>From that point of view, it's easy to see why we exclude children: they don't understand their own interests, pretty much by definition. They'll eat ice cream for breakfast, lunch, and dinner, fail to dress warmly in cold weather, and cross the street without looking. So asking them their opinion on how X or Y will affect them is useless -- they won't answer accurately.
51% of US adults are obese or severely obese and another 30% are overweight. https://www.niddk.nih.gov/health-information/health-statistics/overweight-obesity
If they aren't literally eating ice cream for breakfast, lunch, and dinner they might as well.
It's hard to exaggerate the severe failure that being obese is in 99% of cases (I am excluding stuff like cancer treatment making you fat). 99+% of obese people should basically be told what to eat by a guardian-type figure. In history, they were, and that is why they weren't fat (in combination with less food available). They were serfs and slaves for a reason. If they can't even make good decisions for their own body, they shouldn't have a vote that's equivalent to mine, or else you might as well let 10 year olds vote too. You might say children are worse, but children over the age of 5 or so can actually take care of themselves at least as well as an obese "adult". There are many recorded cases of such children surviving in the wild as ferals, being homeless, etc. Children have self-preservation instincts and enough knowledge to at least match the performance of a 350 lb 35 year old. The gap between "adults" and children is much narrower than most people presume, because they have comically low expectations for children and a comically high estimation of the ex-serfs, who lived and evolved as children for 1000s of years. The biggest gap is between real adults and serfs/children. Then you go from people who can run billion dollar enterprises down to people who can't properly feed themselves without strict external discipline from a master-figure, whether it be a lord or a parent or an owner.
P.S. your distinction between normative politics and the need for expertise is messier than you think. It would be nicer if it worked along the lines of "everyone knows what is in their best interests, and delegates to experts to work out the details," but this is not the case. For example, if people wrongly think that having a ton of Mexican immigrants will improve the economy, or specifically their own wealth, because they fundamentally do not understand stuff like HBD and economics, then they will vote for someone who will let in immigrants and hurt them. So they hurt themselves due to lack of expertise. You need to understand many things at a high level to understand what is in your best interests, especially in this complicated world. Even your example, taxing, can get economically complicated quickly, such that most people, even including me (I'm not yet as educated on economics as I want to be), won't really know what is going to happen under a new tax policy and whether they will really have higher utility or not. In fact, in basic microeconomics it is often pointed out that most people don't understand that there is effectively no difference between a sales tax and a production tax. Most consumers would prefer the latter, but really it impacts them in the same way. They can't make an informed vote without knowing this, and it only gets more complicated.
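The sales-tax-vs-production-tax point is the standard tax-incidence result, and it can be made concrete with a toy linear supply and demand model (the curves and numbers below are made up for illustration): whichever side legally remits a per-unit tax, equilibrium prices and quantity come out the same.

```python
def equilibrium(t: float, tax_on: str):
    """Linear demand Qd = 100 - 2*(price buyers pay) and
    linear supply Qs = 4*(price sellers keep); illustrative numbers.
    A per-unit tax t is remitted by whichever side `tax_on` names."""
    if tax_on == "sellers":          # "production tax": sellers keep p - t
        p = (100 + 4 * t) / 6        # market price, which buyers pay
        buyer_pays, seller_keeps = p, p - t
    else:                            # "buyers": sales tax added at the till
        p = (100 - 2 * t) / 6        # market price, which sellers keep
        buyer_pays, seller_keeps = p + t, p
    quantity = 100 - 2 * buyer_pays
    return round(buyer_pays, 2), round(seller_keeps, 2), round(quantity, 2)

print(equilibrium(6, "sellers"))
print(equilibrium(6, "buyers"))
```

The statutory side of the tax only changes the bookkeeping; the economic burden is split by the slopes (elasticities) of the two curves, which is exactly the point most consumers miss.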
The idea that obesity is entirely based on diet is contested. I think there's a very good argument that our current issues with it as a society are primarily due to environmental contaminants.
https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-part-i-mysteries/
lol, for the record, are you fat? I'm skeptical of their claims because of the law of conservation of mass and energy. I am more open to the claim that certain chemicals, including junk food, have made the low-agency more likely to overeat than in the past, but this point is moot. The proximal cause is still their constant diet of soda, Doritos, and McDonald's. If you show me a study that says that the obese present with diets which contain a normal amount of calories, then I will change my view. I tried looking for a study like this and couldn't find one, but maybe my search terms were wrong.
Also your link is written by a somewhat inadequately learned individual, just wanted to point out something I noticed:
>Kitava is a Melanesian island largely isolated from the outside world. In 1990, Staffan Lindeberg went to the island to study the diet, lifestyle, and health of its people. He found a diet based on starchy tubers and roots like yam, sweet potato, and taro, supplemented by fruit, vegetables, seafood, and coconut. Food was abundant and easy to come by, and the Kitavans ate as much as they wanted. “It is obvious from our investigations,” wrote Lindeberg, “that lack of food is an unknown concept, and that the surplus of fruits and vegetables regularly rots or is eaten by dogs.”
His overall contention, of course, is that some contaminant is making Westerners phenotypically different from this isolated tribe, which evolved under conditions of plenty. The idea that a cold-winter race might tend to store more fat for genetic reasons (not storing fat, e.g. throwing away excess energy, does not violate conservation of mass and energy, but magically putting on weight you didn't consume would) is evidently lost on this writer. He is inadequately learned in the subject of biology to be writing on biological phenomena.
He also seems really biased towards excusing fat people for their horrible diets. It is possible that stuff like lithium exposure and microplastics and certain genes cause your body to store more excess energy from your calorie surplus than it otherwise would, holding your diet constant. It would be highly preferable for fat people if they could simply live the eternal Marvel movie of constant eat-whenever-you-want, whatever-you-want, without any consequences, at least visible and embarrassing ones, for their behavior. Nonetheless, BMR is still about 2000 calories per day. https://www.nejm.org/doi/full/10.1056/nejm199212313272701
If you just eat your expenditure, you won't get fat. It's that simple. And evidently the toxins aren't bad enough that they're reducing expenditure to low levels somehow. It's 2000 a day for obese people in that study. So these people are fat because they eat too much.
Personally, I gain fat. I'm not like those islanders. I gained about 10 lbs a year from high school to the beginning of college. Stayed 6 ft tall and went from 145 lbs to 195 lbs. I was overweight, and if I hadn't changed my diet I would have been obese in another few years (I forget where the cutoff was). I didn't feel like my diet was that unhealthy, but whatever. I changed it, lost 30 lbs, started exercising, got into fitness, and eventually exercised enough to eat 2800-3000 calories a day at parity. And I was eating a pretty good diet before all of this compared to what I saw most people eating. Maybe without microplastics I never would have gained the weight, but who cares; I adapted to my environment and you can too. Fat people will not be excused for their gluttony.
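The arithmetic behind "eat your expenditure and you won't gain" can be sketched with the common (and admittedly approximate) rule of thumb that a pound of body fat stores roughly 3,500 kcal; the sketch assumes a constant daily surplus and ignores metabolic adaptation, so treat it as a first-order estimate only:

```python
KCAL_PER_LB_FAT = 3500  # common rule of thumb; an approximation

def lbs_gained_per_year(daily_surplus_kcal: float) -> float:
    """First-order energy-balance estimate: constant daily surplus,
    no metabolic adaptation as weight changes."""
    return daily_surplus_kcal * 365 / KCAL_PER_LB_FAT

for surplus in (50, 100, 250):
    print(surplus, round(lbs_gained_per_year(surplus), 1))
```

On this estimate, a steady surplus of about 100 kcal/day, a single cookie's worth, compounds to roughly 10 lb a year, about the rate described above.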
"I'm skeptical of their claims because of the law of conservation of mass and energy."
Nobody denies that, but the amount of food you eat depends on the amount of hunger you feel.
The amount of hunger you feel, as well as the amount of calories you burn through unconscious processes, are obviously regulated by mechanisms that are outside your conscious control, but are part of your body, and therefore can go haywire.
I'm 5'3", 140 lbs, so no, not fat. On the edge of overweight but always in healthy range on BMI charts. Out of shape currently, but used to cycle 50-100 miles a week, too, when I lived in a more bike friendly place. Your uncharitable bias against fat people is noted, though.
I believe in the environmental contamination explanation for the obesity epidemic based on the series I linked you to and other articles I've seen, including the one linked in this open thread. I didn't believe it before but they made a very compelling argument. On an individual level obviously diet and exercise make a difference, but there are a lot of factors in the obesity epidemic on a societal level. If you think the problem is gluttony, you'll have to explain why even rich people in the past, with the ability and drive to be gluttonous, weren't nearly as obese as modern day gluttons, if they were even obese at all.
I also find the idea that serfs etc. weren't gluttonous because their lords controlled their diets absurd. But still, you have to explain why the lords weren't obese gluttons themselves.
>If you think the problem is gluttony, you'll have to explain why even rich people in the past, with the ability and drive to be gluttonous, weren't nearly as obese as modern day gluttons, if they were even obese at all.
Simple, superior moral fiber.
As for the rest,
"The most significant change in per capita caloric consumption over the past century occurred between the 1950s and the present. Adjusting for loss, it is estimated that average caloric intake increased from 1,900 kcal per capita in the late 1950s to 2,661 kcal per capita in 2008, representing a 761-kcal increase over 58 years. The bulk of the calorie increase (530 kcal) occurred from 1970 until 2000." https://sci-hub.se/https://aspenjournals.onlinelibrary.wiley.com/doi/abs/10.1177/0884533610386234
This has only increased since 2008, can't find a study but up to date websites claim it's 3000 per day.
https://www.annualreviews.org/doi/pdf/10.1146/annurev.publhealth.29.020907.090954 "The pooled model results, excluding Australia and Finland, suggest that calories account for 93% of the change in obesity from 1990 to 2002."
Not to mention that far more low-IQ people are obese, and very few college professors, for example, are: 14% of doctors and professors were obese, versus 28% of high school teachers and 48% of nursing assistants (a tier below RN) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4681272/
>1) The idea that feudal lords paternalistically micromanaged what their serfs ate is...interesting, to say the least. Whatever their lack of legal rights, on a day-to-day, sociological basis serfs generally managed their own lives and economically supported themselves, making their own decisions about how to grow food on their allotments.
You're straw manning me to make me look stupid. I was alluding to laws restricting serf hunting and fishing and their generally restricted diets. Many people throughout history have been enslaved and they are generally fed by their masters. Urban Roman and Greek slaves, black cotton picking slaves, etc were fed by their masters. Serfs and slaves were not allowed to eat to obesity, through a combination of legal restrictions and production restraints of their times. These people were bought and sold, they were the property of the masters, where they lived, what they did for a living, their religion, often how many children they were to have, and yes, to a large extent their diets, were determined by their masters or lords.
>2) Support for increased immigration on economic grounds is substantially *higher* among more-educated people.
You seem to have reading comprehension issues (my priors are high for this, since the vast majority of the population does according to tests). I never claimed anything contrary to this point. And "educated" is doing a lot of work here. The average college graduate from 10 years ago had an IQ of 108 per pumpkinperson, and I have heard this has declined. I estimate the average rationalist to have an IQ of 120. So there is already a big gap between your expected IQ and the intelligence of an "educated" person. Elsewhere in this thread I have discussed how journalists and professors function the way people worry parents would if children were given the right to vote. For various reasons, susceptibility to certain "parents" among "adults" may not decrease monotonically as IQ increases. But the overall effect is there regardless of education level https://www.pewresearch.org/politics/2018/06/28/shifting-public-views-on-legal-immigration-into-the-u-s/
>What people are expected to be very well informed on is their own interests,
An interesting use of the passive voice. You might expect people to be very well-informed on their own interests, but I don't.
If I did think people voted based on an accurate assessment of their own interests, I'd expect them to vote randomly or not at all, since the odds of your vote actually changing anything of importance are so minuscule as to not be worth the effort of identifying the candidate that benefits you the most.
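To put a rough number on "so minuscule as to not be worth the effort": in a toy coin-flip model of an election (every other voter votes 50/50 at random, which is of course not a realistic model), your vote only matters when the rest split exactly evenly. A small sketch, with a hypothetical function name, just to illustrate the order of magnitude:

```python
import math

def pivotal_probability(n_other_voters: int) -> float:
    """Probability the other voters split exactly evenly under a 50/50
    coin-flip model -- the only case where one extra vote is decisive.
    This is C(n, n/2) / 2^n, computed via log-gamma to avoid overflow."""
    n = n_other_voters if n_other_voters % 2 == 0 else n_other_voters - 1
    log_p = math.lgamma(n + 1) - 2 * math.lgamma(n // 2 + 1) - n * math.log(2)
    return math.exp(log_p)
```

For a million other voters this comes out around 0.08%, and the figure shrinks further once you drop the unrealistic assumption of a perfectly even electorate, which is the commenter's point.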
It's been said that Socrates was against Democracy for 2 reasons:
1. People wouldn't understand what is in their interest.
2. People would understand what is in their interest -- and vote for it.
"so that when Congress proposes to, say, tax the bejeesus out of gasoline because EVs are cool and Gaia loves them, each individual voter will know very well how that would affect him or her personally,"
I don't think that's the standard at all. I don't think most people even know what Congress is proposing, they just vote for the letter by the person's name because their friends say they should.
One assumes you know a surprisingly ignorant collection of people. This is not how anyone I know votes, but YMMV as they say.
You probably live in a bubble of 120+ IQ people (top ~10% of the population).
Hardly. I certainly did in college and graduate school, but I've been in the real world for 45 years now. I know policemen, parole officers, people who have been convicted of petty theft, people who've spent years in AA fighting addiction, teachers -- both good and bad, at all levels from kindergarten through professional school -- physicians, surgeons, gardeners, people who do drywall work, salesmen, illegal immigrants both young and old, people who are rabid socialists and others who fly MAGA flags. It's kind of what happens to you if you live long enough.
How many specific people have you talked to for more than, say, 5 hours? You don't really know the cop who pulled you over, or your kid's teacher who you talked to for 20 minutes during parent day, or your gardener who you make small talk with for a few minutes when he comes and goes.
Most people are surprisingly ignorant.
In broad areas of philosophical discourse and intellectual debate, sure. They have real lives to live, after all. But on the areas in which they make their contribution, and which affect them every day, I think not.
I find that if I spend 30 minutes talking to the HVAC guy when he's working on my A/C, or the ICU nurse when she's assessing my dad post-op, or even the gardener when he's deciding whether to prune the roses this week or next, and by how much, my conclusion is that almost all people know a great deal of pretty subtle detail about some area or other, however they earn their living, and are pretty smart about deploying it.
Doesn't mean they can write brilliant prose paragraphs about the theory of utility monsters, of course. But they might be able to fix a jet engine or grow a bushel of corn, which seems useful.
>They have real lives to live, after all.
The idea that petty labor, often superfluous in this economy, is more "real" than high intellectual pursuit, is precisely ass-backwards.
Neither the HVAC guy, the ICU nurse, nor the competent gardener is a typical American.
There's a good reason for that. The standard of proof for statements by politicians of what happens to those blue-painted people over there yonder is going to be much lower than statements about what is happening to YOU and people you know. If Governor Newsom tells me his latest nostrum will be good/bad for me, then I am going to be pretty dang critical of his prognostication. I know myself well, I know my situation and all its complexity well -- I'm an expert on that stuff, and therefore his theories will be pitilessly examined and the odds that I'm likely to just accept it okey doke if you say so bub are nearly zero.
On the other hand, if he proposes that something will be good for Native Indian casinos or pot growers in Ukiah -- well, I have a lot less expertise to deploy. I *can't* be as mercilessly critical, and I'm much more likely to accept or reject the hypothesis on the basis of tribal loyalty ("A Democrat proposed it! It must be noble/selfless/good/corrupt/deceptive/evil!"), or because I heard some talking head I admire rant on it during a 30-second TV spot, or 140-character Tweet, or because I did/did not have my morning bowel movement on schedule.
In short, I think politicians are much better able to manipulate voters through tribal hominid instincts if they stick to topics on which relatively few people have any kind of direct experience, and of course they know this. Religious crusades, foreign adventures and wars have been a great traditional field for such demagoguery, and the environment and social justice for groups and lifestyles about which many of us know very little are pretty much the modern equivalent.
Ummmmm... voting is supposed to reflect whatever the individuals voting want it to reflect. So that their preferences get expressed in policy.
Also, yes, the information gathering mechanism is one part of the value of democracy, but the much bigger and more significant one is that it disperses power amongst the people.
It doesn't disperse power at all. There are about 800,000 people in my Congressional district. That does not mean I have 435/800000 of a vote when Congress considers any bill. Power is concentrated in the hands of people who actually wield it, which is not me. What I can do is (help) choose the people who will wield power, which is surely a form of power, indirectly, but as I said above the main reason for having the whole representative republic thing is not so much that I can exert power, because I don't, and not even so much that one man doesn't exert very much power (because we have a President who exerts enormous power), but so that I can (help) choose the right people to exert the power. The value is in the fact that the people I choose will represent (in theory) my interests, that is, it is the communication of my interests to the governing body that is important here.
That is, after all, why that person is called my "representative." He is presumed to "represent" me -- to speak for me, to be aware of my interests -- and that is the purpose of the election, to convey that crucial information.
The franchise gets extended when the party in power sees an opportunity to gain a reliable voting bloc.
There have been efforts in many countries to lower the voting age to 16 because that would tend to give a reliable voting bloc to the left-of-centre party. But if you go any lower than that then the effort required to pander to them is probably higher than the potential gain. No politician wants to give speeches aimed at the 5-9 year old demographic.
Oh I dunno. It doesn't seem to me like much would need to change (about the speeches).
In the USA, we also restrict felons and non-citizens from voting. I think the idea overall is that we want to restrict voting to people with sound judgment who are committed to life in the USA. At the same time, we have to be really careful and conservative about what rules we use to draw that line.
Hence, we go with three criteria that are as legally objective as possible: age, felony conviction, and citizenship. By keeping these criteria steady, we avoid the problem of parties redefining voting criteria in favor of new potential constituent groups (not to get culture warry, but US examples might be Republicans restricting voting ages upward, or Democrats expanding voting rights to green card holders).
These might not be the objectively best way to approach it, but I think it captures what would be seen by many people as common sense.
I also disagree with the felony conviction one; it seems like a crutch to lock in laws against a shifting mindset. People with felony drug convictions are no longer able to vote to legalize drugs, on the grounds that drugs are illegal. Surely if the laws are just, then the number of prisoners can't hope to outweigh the ordinary citizens, and there's no loss in letting them vote.
Citizenship's a different thing, you should absolutely have to commit to be subject to the result of the vote before you can vote, which foreign nationals aren't doing.
I feel obliged to note that there's a way to route around this, basically to the tune of "disenfranchisement for felony is okay, but drug possession and similarly-legalisable* things shouldn't be on the list".
*By "legalisable" I mean "society wouldn't explode if you legalised this". Fraud, rape and murder are obviously not legalisable in their normal senses, although consensual "rape" (because of AOC violation, because drunk, etc.) demonstrably is.
Do you think a society in which a majority of people want to legalize murder will be saved from exploding by denying them the vote? I just don't see a legitimate rationale for it. If a vote is related to the crime, they're the most motivated party; if it's unrelated to the crime, then it's unrelated to the crime.
>Do you think a society in which a majority of people want to legalize murder will be saved from exploding by denying them the vote?
No. The principled rationale for denying criminals the vote is typically along the lines of "they are demonstrably shit at decision-making and/or evil and thus contribute only noise", not "society will explode if you legalise murder"; it is extremely obvious that murder will never be legalised because even murderers don't want to get murdered.
I'm not particularly sold on this rationale; I'm just pointing out that your specific objection can be routed around. Kind of steelmanning.
Recently, there was a Guardian article about a subreddit that I'm a longtime member of, and overall it was surprisingly fair and accurate. However, there was one bit that baffles me - they claim that "to the floor" was a popular meme in the community despite no one ever saying that. A Reddit search shows only a single post containing the phrase, from three months ago with just 7 votes.
I just can't understand how something like that could happen. Obviously, they did do research for the article, and I don't think even the most cynical and underhanded writer would just make shit up, especially in an article that is otherwise accurate, so they must have gotten the idea from somewhere, but I have no idea where.
https://www.theguardian.com/technology/2022/sep/09/bitcoin-buttcoin-online-community-praying-for-cryptos-death
Keep in mind how almost all journalism works. Someone with no direct experience in an area asks questions of people who are involved. They also do some basic research, nowadays probably mostly in Google.
The number of points of contact a journalist has with any of their facts is minimal, maybe just one reference. One of their sources might say [thing they regard as fact] and the journalist has no ready means to verify it. It often gets printed as such. A thing that happens frequently in a community may only have been relayed to the journalist a single time, which in their minds is exactly equal to a thing that only happened once - so long as they hear about it. Journalists frequently do not know which things to fact check or review, so even if they are attempting to be careful something like the false meme can still easily slip through. And that's assuming the journalist understands the material they are researching. Science journalism is often terrible, because the journalists don't even know enough about the subject to identify basic misunderstandings.
>I just can't understand how something like that could happen.
Simple: journalists lie.
You're missing the part where everything else in the article is accurate. This is exactly the kind of dumb political hot take I was hoping to avoid.
Journalists tend to plan out their story before they do any research on it. They know two or three things about bitcoin, and one of them is the "to the moon" meme. So they assume there must be a "buttcoin" equivalent and they ask someone what it is. That someone shrugs and says "uhh to the floor I guess" and that gets printed.
But the other memes mentioned (e.g. "few understand" and "this is good for bitcoin") were accurate. In your hypothetical, why wasn't that inverted too?
Wow, that was a pretty sad read. Imagine giving up permanent rent-free real estate in your head like that. Did your childhood bully become a crypto billionaire and you're still holding on to resentment from that or something? Do you need a hug?
Eh I actually think it's probably net good for there to be people bashing crypto since it probably raises the awareness of the scams among potential future victims.
This is just a prediction that I'm making. Wanted somewhere to post the prediction so I would be committed.
I can't bet on PI because I am too poor right now. Made some good money on the Bernie market in 2019, though.
In almost every single Senate, Gov, or even state Presidential GE poll released by the pollster Trafalgar they have hit the Republican candidate's final vote share dead on. Within 1% in 90% of cases. Then within 2% in another 8% of cases and finally I found two polls where they missed by 3%. It doesn't really matter whether the poll was released 1 or 2 months ahead or in the final week.
I'd like to predict that based on the 8 Senate polls so far, sadly NH wasn't polled since the primary is tomorrow, and for some reason no Florida poll, that:
JD Vance 49% in Trafalgar poll - loses
Mehmet Oz 44% in Trafalgar poll - loses
H. Walker 47% in Trafalgar poll - loses
A. Laxalt 47% in Trafalgar poll - loses
B. Masters 44% in Trafalgar poll - loses
Ted Budd 44% in Trafalgar poll - loses
R Johnson 44% in Trafalgar poll - loses
T Smiley 46% in Trafalgar poll - loses
I'd expect Hassan's opponent to get sub 47% in the NH T-Poll and lose and I expect Rubio to be on the edge 48-51 like Vance, but also to lose, pending results from a Florida T-Poll.
I don't normally watch the polling as closely as I have this year, so maybe Trafalgar was dealing with similar patterns in previous years, but the timing of the polls may matter a lot. After Republicans made significant gains for much of the year, the Democrats recently made a swing upward. If your measurement period is during this (apparently real) upward swing, then it's going to be worse for Republicans than if you were looking at polling data from even a few weeks prior. I have also heard that Republicans are now moving back upward again in polls, such that the final polling may be different than what we're seeing now - despite the current polling apparently being at least mostly accurate.
Trafalgar released their new PA poll today:
New independent @trafalgar_group
#PASen #Poll (9/13-15)
47.7% @JohnFetterman
45.9% @DrOz
3.5% @Erik4Senate
0.5% Other
2.4% Und
Literally 45 minutes ago. Of course I also told Cahaly about my theory the day before they announced their new polls so I expected them to improve for Rs. They will also do AZ/NV/WI. In any case this would still be a loss for Oz.
FYI in their September 2020 poll of North Carolina (https://drive.google.com/file/d/1PIe4nvZYQ2UZ5e30NenDolEgi1QlYl8_/view), Trafalgar had Thom Tillis at only 45.3%, but he ended up winning with 48.7%. So your prediction seems a bit overconfident.
It is an interesting finding that Trafalgar's estimates for Republican candidates seem to be better than their estimates for Democratic candidates though.
Shannon Bray got 3.1% in that race on top of an October surprise sex scandal. Also Trafalgar had an October poll with Tillis at 48.6. In other presentations of this theory I discussed averaging the last 2-3 Trafalgar polls as well as considering the power of third parties. 3.4% would count as a 3% like Heller's race. I do concede I actually didn't check this race for my original analysis. So there are now 3 polls with a higher than 3% shift.
I don't think the 2020 NC Senate race changes the analysis that much considering the above factors but I do feel foolish for forgetting about it.
Do you have any knowledge or guesses as to what about Trafalgar's polling is causing this to happen?
Trafalgar is a really interesting pollster. Last I looked, it seemed they were the only pollster who, when faced with low single-digit poll response rates, recognized that non-responders might be different from responders and tried to do something about it. (Maybe I missed someone, but it looked like all the other polling companies just stuck their heads in the sand and assumed that, as long as they weigh for demographics, they'll be OK.) The details on this are somewhat short, except for them presumably asking people how they think their neighbors would vote. I don't blame them for keeping their methodology secret, but your observation makes me even more curious as to what they are doing.
I'm guessing their accuracy on the R side might be due to their focus on counting the undercounted exactly right, and that they just don't put the same effort into counting the undercounted on the D side. ("Do you think your neighbors will vote for Trump?" seems like a one-sided question.) But that's just a guess.
Trafalgar is a very pro-Trump pollster in their surveys and their public presence. Many people actually argue that they don't even do real polls, that it's all fake, but I think 538 probably did due diligence before giving them an A- rating, so I assume they do actually poll.
Trafalgar discusses I think 7 data gathering methods they use as a mixed model on their website if I am recalling correctly.
My suspicion is that Trafalgar assigns R leaners but not D leaners and that may explain why their D numbers are always too low and somehow no undecideds ever move to the R column. But they are still more consistently accurate on R vote share than you expect of some other pollster who did that so they have *something* in their special sauce.
I think their bias, and they are considered extremely R biased, prevents them from getting correct D numbers. Which is weird. Unless they are primarily an activist pollster for their public releases you'd expect them to want to get D numbers correct.
Cahaly is apparently going on the Star Spangled Gambler podcast next week and the host said he would ask Cahaly about how his R numbers are so good and also about my claim that they are predicting a Dem sweep according to my theory. We'll see if they actually do that.
Noted. I'll take the under (over?). I think at least 1-2 of those candidates will win, probably more like 3-4.
Something weird with my comments here seeming to double post. Had to delete and rewrite this.
You think 3-4/8 will win or are you counting all 10 races including Rubio and Hassan?
The Economist's model says that most likely outcome is Democrats pick up 1 seat. Their odds of keeping control of the senate is 3:1. The two reasons they gave is that the Republicans chose weak, Trumpy candidates, and the abortion decision is unpopular.
Yes, all the major models are pretty much in sync about the results. Important to remember that we haven't gotten serious polls from the major pollsters post Labor Day. The point of "Trafalgarian Augury" is that the models will be off until we get those polls, whereas R vote share in Trafalgar polls is accurate regardless of other data. Of course maybe those polls will have bad news for Dems, who knows. The Suffolk poll for Tim Ryan suggests they'll be good news, but the correlation would be 50% or something, so it doesn't assure that the models will have to update.
So the prediction is that the Democrats pick up 5 seats in the senate? But the Republican in Washington (T Smiley) ends up doing better than Republicans in Wisconsin, North Carolina, Arizona, and Pennsylvania? That doesn't sound that plausible to me. I'm expecting that you've misread something somewhere if you claim that there is a pollster whose individual polls have nearly all been within 2% of the final margin, including polls three months out from the election. (It really doesn't seem possible for a good pollster to hit the final margin within 2% from three months out, because a significant number of races should move by more than that much over the course of three months.)
It seems like I need to clarify something, which happens often with this claim. I am talking specifically and only about R vote share. Trafalgar's polls are garbage if you are looking at Dem vote share or final margin.
A classic example is the 2018 Nevada Gov race with Laxalt, the current Nevada Senate candidate, and the current governor Steve Sisolak. Everyone roasts Trafalgar because they had Laxalt up 6 and Sisolak won by 8.
However that is irrelevant for my theory. You see their final poll has Laxalt 45 and Sisolak 39. If you subscribe to "Trafalgarian Augury" you can already guess where this is going. The final result was Sisolak 53 and Laxalt 45.
In any case I checked a triple digit number of polls in a mid double digit number of races. I only found 2 polls that were off by 3% on the R vote share and none worse. This is only Senate/Gov/Biden-Trump Pres races, to be clear.
Very interesting! It still seems surprising that Republican vote share would be so stable throughout several months of polling. In 2016 in particular, I would have thought there were some big gains and losses over the fall. (Though now I see that you mention only Biden-Trump, not Clinton-Trump, which does raise questions about how it would generalize to a second presidential contest.)
Trafalgar is putting out, for some unexplainable reason, new polls this week for WI, PA, AZ, and NV. Governor and Senate though I don't much care about races for Governor outside examples of their polling accuracy. Much rather have Florida and NH but w/e. So we'll see. These polls will be anywhere from 30 to 23 days newer than the existing Trafalgar polls.
Very curious if the R number stays the same and if the D numbers fluctuate.
Under my theory, if the R number is stable but the D number is moving, that suggests the R numbers are believed by Trafalgar to be solid, whereas the D numbers could be moved down to change the topline margins, because Trafalgar seems to be a sort of "activist pollster" for right wingers.
You sure 49% is a loss? Tight races sometimes have both candidates in the forties.
I wouldn't be so sure that 47% is a loss, either. There are third party candidates in some of these elections.
In previous races polled by Trafalgar candidates at 47% or less lost regardless of the presence of 3rd parties.
So generally for Trafalgar polled races 49% is a pure toss up. I just wanted to commit to a concrete prediction for each race.
Trafalgar gave Youngkin 49.4, I think, and he got 50.6, but in some of the Biden/Trump state polls for 2020 Trump would get 49 and lose 49.4 to 49.6.
Well, as I said, we have to wait for the Trafalgar Florida poll and probably the NH poll. So maybe Trafalgar gives him 52%; that'd be a sure win using "Trafalgarian Augury".
There have been several polls putting Rubio in danger recently, but nothing as reliable as Trafalgarian Augury. However it is a pretty common belief that Florida is not as Red as Ohio. So with Vance at 49% I'd expect Rubio to be in the danger zone.
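The "Trafalgarian Augury" heuristic described in this thread reduces to a simple threshold rule on the Republican's Trafalgar vote share. A minimal sketch, with the function name and exact cutoffs being my paraphrase of the commenter's claims (≤47% loses, 48-51% is the toss-up/danger zone, ≥52% is a sure win), not anything Trafalgar publishes:

```python
def augury(r_share: float) -> str:
    """Classify a race from the Republican's Trafalgar poll share alone.
    Thresholds are this thread's heuristic, not Trafalgar's methodology."""
    if r_share <= 47.0:
        return "loses"      # e.g. Oz at 44%, Walker at 47%
    if r_share >= 52.0:
        return "wins"       # described above as a sure win
    return "toss-up"        # the 48-51 danger zone, e.g. Vance at 49%
```

By this rule all eight Senate candidates listed above fall at or below 47% and are classified as losses, with Vance's 49% being the pure toss-up case the commenter resolves to a loss.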
How often do small or micro communities branch off into their own "unnetworked" site? How do "censorship" concerns affect this?
I'm asking because themotte.org finally started running their own site and, as far as I know, have abandoned the subreddit. This seems like a very rare thing but I'm wondering if this is more normal than I anticipated for two reasons:
First, between datasecretslox and themotte, I know of two forums that have split off from the SSC-sphere over the past couple years, both over "censorship"-related issues. I don't pay attention to too many other online communities; is this normal?
Second, themotte based their setup on something called rdrama, which is apparently another reddit sub that got kicked off and became their own little thing. And I think the_Donald branch-off is still around. But I'd never even heard of rdrama and it was weird to find this whole little hidden community.
So yeah, most people on the internet tend to go to a few large sites, but I'm wondering how many little "dark" communities there are that have branched off. It's hard to tell how common this is because, well, almost by definition they're insular and not easily findable via the big sites.
Here are the communities I know of that got kicked off reddit and started up somewhere else:
/r/fatpeoplehate and /r/greatapes back in 2015: the first real round of subreddit-level censorship. Both set up on voat, which managed to limp on for a few years under heavy DDoS attack before giving up.
/r/the_donald: moved to thedonald.win, which lasted until just after the 2020 election. They tried to start up america.win after that, but this seems to have vanished too.
/r/themotte: already a group of refugees from /r/slatestarcodex, they moved again to themotte.org
One thing I've noticed with all these communities (except /r/greatapes which I was never any part of) is that none of them really survived well. Communities tend to lose a lot of their best posters in the process of moving and descend into shallow caricatures of themselves. I looked at themotte.org the other day and saw a lot of low-effort "boo other side" comments that wouldn't have passed muster on /r/themotte, let alone on /r/slatestarcodex.
/r/themotte always had a problem with low-effort "boo other side" posts, due to a weird moderation failure mode where a poster could make essentially any kind of insinuation directed at a group of people, but calling out his prejudice slash intellectual dishonesty in response constituted a personal attack and was therefore verboten.
I ate my share of bans over at TheMotte but I would say that is slightly unkind characterisation.
They certainly did bend over backwards to give the benefit of the doubt, and how you phrased your response often seemed more important than the actual content: if (say like me) you tended to respond "you blithering idiot", then yeah, that was a personal attack. Some people did dodge around the rules very successfully by sticking on *just* this side of the line of stating outright the bad stuff, especially if they were trying to bait responses of the "you blithering idiot" kind.
But some people did not "call out prejudice/intellectual dishonesty", they went on rants and accusations of their own, eventually got bounced, then to this day maintain it was all mod prejudice and the right-wing/left-wing slant of the sub-reddit (TheMotte has been accused of being, at the same time, in thrall to the far-right and hopeless puppets of the progressives) while they were only being moderate and pointing out the bad stuff.
I think to say that DSL 'Split off....over "censorship" issues' gives a completely false impression of the reason for DSL's existence.
Scott deleted his blog, and soon afterwards, a few of his regular commenters got together and created a forum where some of the spirit of SSC could continue, in a forum setting. There was no way of knowing if Scott would ever blog again. Of course, it turns out he did, and does, but as with the way of these things, DSL having been brought into existence by unfortunate circumstances, found no reason to self-cancel and indeed thrives to this very day.
"Splitting off" is a misrepresentation - Scott lists DSL at the top of his community links, and comments there.
Alright, I'm not 100% sure what you heard, here's what I'm trying to say.
Scott deleted his blog over potentially being doxxed by the NYT, which certainly seems "censorship"-adjacent. Scott wasn't censoring anyone... alright, take that back, we had the "Reign of Terror" back in the day, but basically everyone was ok with it and definitely all the DSL people were ok with it. But yeah, the core thing I'm trying to understand is whether these spinoff "dark" sites/forums are everywhere or just "CW"-adjacent.
Second, splitting off seems fair because DSL definitely has its own identity and "vibe", and that's a very intentional decision by obormot et al. The two most obvious examples are that DSL tends to focus much more on CW while SSC is still, like, more weird techy stuff, and DSL tends to stylistically prefer shorter, catchier posts while SSC generally prefers longer, ramblier posts. Scott seems to be on good terms with ACX/SSC-associated sites like themotte and DSL, but they are very much their own entities.
There is also TheSchism, which is still on Reddit, and which in its turn split off from TheMotte as, and this is my take on it, a place for the left-wing/liberal/progressives who felt they were being dog-piled any time they tried to say something by the righties.
It never had the same amount of engagement, and it seems (again, to me) that once you kick out all the witches, or set up a witch-free space, all the people there are so in agreement that they don't really feel the need to discuss things at great length. "X is good". "Yeah, I agree". "Me, as well" doesn't lead to the same kind of 1000+ comment threads.
There's also /r/culturewarroundup, another splinter of themotte which seems to still be up and running.
As far as I can figure out it's not exactly a further-right splinter of themotte, it's just one that has offloaded the pretence of high-minded discussion and just gone into full "Can You Believe What Outgroup Did This Week?" mode. I just checked and it seems to be reasonably active, attracting a manageable 200 comments per week on its culture war roundup threads. Seems quite successful, if that's what you're into.
random questions, expecting subjective, anecdotal, and ungeneralizable answers from people i have never met: https://docs.google.com/document/d/1ktNf4E-uHjf2XqWPLZLmBzQAbDfUczSMCFIcgLKHLvA/mobilebasic
I’m considering another - and final, I’m 48 - career move. But I’m a little lost, and I’d love advice from this group.
I’m an attorney (20+ years). I’d like to start taking night classes for computer programming. There’s a lot of activity in the legal tech world, but not a ton of overlap between the tech side and the law side. I think that someone with skills in both areas could do very well.
But I know very little of the tech world. I’m just starting investigating this, and I need some fundamental advice. Any advice is welcome, but the questions I can think of are:
1. How important is credentialism? Do I need a degree, or are targeted classes enough? I’m old-ish and have three kids; I probably can’t take four years of classes, not to mention the expense.
2. Assuming I don’t need a degree, what classes are necessary, what languages should I know?
3. What math classes are useful/necessary?
4. How important is the name of the school? Anything I'm likely to be able to do would be Long Island local - there's no MIT in my future.
5. What other independent activities can someone take to demonstrate skills? I assume “I took a class” isn’t half as good as “here’s a fun project I did myself”.
6. Any other advice is welcome!
A lot of this will depend on what kind of programming you want to do. Web development will be different from something more theoretical or "hard core" like working at Intel programming chips.
For web dev type stuff (start ups, most large companies, agencies and such) my answers would be:
1. not very at all
2. It's easy to learn the basics of a language once you understand the main concepts. But it's helpful to know Javascript because it's so pervasive. Then something like Java, Ruby, or Python is good. But really, knowing the language isn't very important. At my company we regularly hire people who don't know Python even though that's what half our system is written in.
3. I don't use any math concepts beyond algebra on a regular basis, but maybe you can impress people in an interview with something fancier.
4. Not important
5. Yes, projects! You could have a high school education but if you built a few moderately sized apps, you'll know much more than someone who went to Cal Tech and graduated top of their class but only knows the theory. In my experience, having a CS degree has a very low correlation with knowing how to actually code.
I suggest doing projects that are beyond the typical suggestions (todo lists, blogs, etc.). Either build something that you wish existed or just copy an existing app (unless you are trying to be a designer, the look of the app can be exactly the same - it's that you did it that's important).
I was a consultant for a while, then went to a coding boot camp, and have now worked at SaaS companies for over 5 years. I don't have any reason to think a CS degree would advance my career (any job that required that is probably not a place I want to work). At my current company, probably a third of the programmers don't have any formal CS degree (there are about 40 total). These people are at multiple levels of seniority, not just juniors.
Your experience as a lawyer likely means you are personable and can do well in a cultural/non-technical interview, which will really be a big leg up. Also consider roles that are coding-adjacent but leverage your existing skill set. Things like support engineering or implementation engineering are technical roles but also require working with customers to solve their problems. You can start in these roles then pivot to a full-time programming role after a short time. (This is what I did.)
1. Not terribly important. I know people who had completely unrelated degrees and ended up with good tech jobs. Some of them went to a 6 month coding boot camp, which both taught them useful skills and helped them get job interviews which eventually led to jobs.
2. This somewhat depends on what technical work you are going to be doing. Front end vs. back end vs. full stack being the biggest distinctions, but also where you work matters. FAANG companies will emphasize different things from startups, which will emphasize different things from banks. I work in backend, and say C++, Java or Python are safe languages to learn (Python has the advantage of being probably easiest to learn, although you may want to learn one of the other two as well at some point). Class-wise I'd look into discrete math, basic algorithms and data structures, and some familiarity with databases.
3. Discrete Math is the big one. Linear Algebra is quite useful in some fields but if you're looking for a crash course I would skip it.
4. I would strongly recommend against going back to college for programming. Coding bootcamps take less time, and many will charge you a percentage of your future salary, which is both cheaper than college overall and incentivizes them to get you a job, and they will work harder to get you employed than a college will. They won't teach you everything a 4 year degree would, but the good ones are pretty impressive in what you do come out with.
5. Just start coding. I'd say mainly to learn the stuff, although some people do contribute to open source projects or have web pages where you can see things they've programmed. Personally, I pay no mind to this when I'm interviewing candidates, but some people may. Really, programming interviews are just like long tests, if the candidate can answer the questions, they're probably good.
6. Generally I recommend people switch careers to tech. That said, I don't know enough about your job/background to know if it is a good idea. If you're 48, it may not be worth the trouble to have a major career change and, no offense intended, you should be honest with yourself about your motives for the change and if this is some sort of mid-life crisis or a well thought out decision.
I will also note, the field does have a lot of age discrimination (in favor of the young). Someone making a switch to tech out of the blue around the age of 30 is accepted, but someone doing it at the age of 48 is not commonly seen (at least by me). People over 40 can have trouble getting some jobs even with many years of experience.
I'm in Long Island too (Syosset). If you have more questions, I'd be happy to discuss this further.
You can check out /dev/lawyer, I like his blog and the takes on copyright.
5. Honestly anything you can complete is a boon. When starting, think a little smaller, and finish a few things. My first program I used daily was a program to download NASA's picture of the day and set it as my desktop background.
But finishing stuff is hard, so it's good to pick some intentionally small stuff to start with.
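For what it's worth, that first project is about the right size for a beginner. Here's a minimal Python sketch of the idea, assuming NASA's public APOD endpoint at api.nasa.gov (which accepts the shared DEMO_KEY) and a GNOME desktop for the wallpaper step; other desktops need a different command, and the exact API details are worth double-checking:

```python
import json
import os
import urllib.request

# NASA's Astronomy Picture of the Day endpoint; DEMO_KEY is rate-limited
# but fine for a personal script.
APOD_URL = "https://api.nasa.gov/planetary/apod?api_key=DEMO_KEY"


def filename_from_url(url: str) -> str:
    """Derive a local filename from the image URL, with a fallback."""
    return url.rsplit("/", 1)[-1] or "apod.jpg"


def fetch_apod_image(dest_dir: str = ".") -> str:
    """Download today's APOD image and return its local path."""
    with urllib.request.urlopen(APOD_URL) as resp:
        meta = json.load(resp)
    if meta.get("media_type") != "image":
        raise RuntimeError("Today's APOD is a video, not an image.")
    image_url = meta.get("hdurl") or meta["url"]
    path = os.path.join(dest_dir, filename_from_url(image_url))
    urllib.request.urlretrieve(image_url, path)
    return path


def set_wallpaper(path: str) -> None:
    """Set the desktop background (GNOME example; other desktops differ)."""
    uri = "file://" + os.path.abspath(path)
    os.system(f'gsettings set org.gnome.desktop.background picture-uri "{uri}"')


# Usage (requires network access):
#     set_wallpaper(fetch_apod_image())
```

Small, useful, and finishable in a weekend - which is the point.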
Are you already doing computer programming? If not, perhaps learn some _before_ hopping into classes. It is incredibly accessible. Online resources are vast... tutorials and people.
You should make sure you enjoy this before a career change. Consider that a hobby might provide more fulfillment than a job. Also, a junior dev job is likely to pay much less than being an attorney... a senior dev job is likely to pay a lot less than being a senior attorney.
1. You need a degree. But you have a degree. This degree is not CS, and you should take coursework. But ability, understanding, and _experience_ are most important.
2. Learn all the languages you can. Learn assembly at some point. Take logic (you might have already!), algorithms and data structures, discrete math, and anything else that seems interesting.
3. Discrete math. Maybe linear algebra. Logic. Anything else that seems interesting.
4. None, unless you want to be a prof.
5. The only thing you can really do to learn, is to do. Classes are good but will teach you little. Write programs. Do projects. Make things. If you're not already doing this, start today.
You should do something you love. You can start today and find if you love programming, and you can do it even if it is not a career. Are there technical tools you lack? Things you wish existed? Try making those. Need is a very good driver, and doing is the only true teacher.
>2. Learn all the languages you can. Learn assembly at some point. Take logic (you might have already!), algorithms and data structures, discrete math, and anything else that seems interesting.
Hard disagree on the assembly language rec. Don't do that to yourself. Especially starting at your age. You don't have the time left to so casually flush that much of it.
*Am 48 in three weeks, took assembly at 20. It's not necessary outside of very specialized career paths.
It really depends on what GP wants to do with their tech knowledge. If they want to build legal-tech tools or products, then a CS education is absolutely useless.
> 2. Learn all the languages you can. Learn assembly at some point. Take logic (you might have already!), algorithms and data structures, discrete math, and anything else that seems interesting.
> 3. Discrete math. Maybe linear algebra. Logic. Anything else that seems interesting.
Completely disagree. Algos/DS is useful, but Calc 101 plus AP-level discrete math and statistics knowledge is more than sufficient for any generic software engineer job.
It will probably come down to industry-focused fullstack development skills or Data Science skills depending on what kind of legal-tech they want to contribute to.
Now, if OP wants to go into CS research then they'd probably need those skills, but they'd probably also need to spend 6 years doing a PhD on minimum wage, which I doubt they'd want to do at 48.
____________
I would suggest not wasting time on learning too many things. Learn the basic bootcamp level skills in the area you are interested in and pick up a part-time low-wage job. The industry is in very high demand, so anyone who will take a low-wage and has basic skills gets employed easily.
Then you can learn on the job.
Could cities reasonably prevent bars from overserving patrons to the point of intoxication - like, imposing a 3 drink limit per patron? And probably a minimum cover tab too, to minimize bar hopping after you've hit your per-bar limit.
Alcohol continues to be society's most damaging drug by a huge margin, and alcohol taxes do seem to be pretty effective at reducing alcoholism, alcohol-related violence, etc. I'd argue that over the decades/centuries, high levels of consumption have just become less & less socially acceptable - people used to drink WAY more in the 18th & 19th centuries. Given that, the idea of people routinely getting drunk and committing crimes at a public venue that's licensed and regulated by the state seems rather odd, yeah? Ask the local police department where most of the crimes are committed every Friday & Saturday night, year in and year out.
I think society can reasonably say- public drunkenness & its associated social ills (fights, etc.) are just not acceptable in our downtown. If you'd like to get drunk you can certainly do so in the privacy of your own home, or at a private party- but not in the middle of our city's commercial district, at a state-regulated & licensed establishment. All bars & restaurants now have a 3 drink maximum per patron, and if we see that this just leads to a lot of bar hopping, we're going to make bars pool driver's licenses associated with one's tab on a common server each night to prevent it. Basically- our commercial district is not available for large-scale intoxication, disruptive behavior and petty crime. Seems reasonable eh?
"Basically- our commercial district is not available for large-scale intoxication, disruptive behavior and petty crime. Seems reasonable eh?"
Al Capone says yes, this is very reasonable and he hopes you can get the city government to implement it ASAP.
Yes, yes, your version is different. But it's not really that different, and will have the same failure modes. There will still be places people go to get drunk, and your police won't be able to stop them, and you'll have less influence over what goes on inside. And the worst sort of people will have an extra chance to get rich.
A lot of people responded with some variation on this, I don't find it very insightful. The difference between Prohibition and just restricting how much a bar can serve you is that the former actually outlawed all alcohol sales. I agree if you literally outlawed booze you'd create a black market, but this is..... just a 3 or 4 drink maximum at the bar. If you want to get hammered, you can just buy a bottle and go home to do it.
I'm pretty confident that in the 21st century the police can find literal speakeasies in a city. (For one thing, every idiot would post photos of it on social media!)
Alcohol sales are already a highly regulated business, and 48 states already make it illegal to overserve someone. I think giving bars a firm number rather than vague criteria to determine inebriation is much more fair. Has Al Capone started his black market in, say, Utah, where alcohol is extremely regulated? I think people just repeat their Prohibition analogies without looking a little more at the current regulatory environment.
If you want to get hammered, you go to the bar that will let you get hammered. Even if that's illegal.
During Prohibition, alcohol wasn't just *sold* illegally. It was *served*, in bars, even though that's an extraordinarily risky way to sell an illegal product. It happened because the demand that Capone et al were serving wasn't just "imbibe alcohol", but "get drunk in the company of friends, or friendly strangers who I maybe wouldn't want to invite home and in any event I'm going to be too drunk to handle the logistics of a house party".
That's a demand that lots of people have, and that lots of bars (though not all of them) exist to serve. And we know what happens when you make it illegal to satisfy that demand by making it illegal to operate a bar - the Al Capones of the world will operate bars anyway, even though just the existence of a bar serving any alcohol is theoretically enough to bring down the wrath of the law. If it's *legal* to operate a bar in general, and the law can only touch you if it can prove that you served more than the legal number of drinks to a customer who will be working with you to establish plausible deniability on that, then it's going to be even easier to operate an illegal bar than it was for Capone. And he didn't have any trouble with that. At least, none that couldn't be solved with a bit of bribery and/or submachine gun fire.
So, those are your choices. People getting drunk in bars, or people getting drunk in bars with a side order of bribery and automatic weapons. Probably less of that than in Capone's day, at least.
The 80/20 rule applies to bars. Bars survive financially on the 20% of customers who have 6-12 drinks per night. Get rid of them and the bar fails.
Having spent a lot of time in bars and witnessed a lot, I'd bet that most of the violence in bars doesn't come from the regular drunks but from angry men who have only had a few and were looking for a fight from the start.
Don't we already have this? It's already illegal in nearly every state to sell alcohol to an intoxicated person. They don't have some maximum number of drinks they can sell you, but they are required to see if you look drunk, so you can't get around it by going to another bar.
"How many states have underage furnishing and sales to intoxicated persons (SIP) laws?
All states prohibit sales to minors, and all but two states – Florida and Nevada– have at least some form of SIP laws, which legally require that alcohol retailers’ staff look for behavioral signs of intoxication prior to serving or selling alcohol."
http://alcohol-psr.changelabsolutions.org/alcohol-psr-faqs/commercial-host-dram-shop-liability-faq/underage-furnishing-and-sales-to-intoxicated-persons-laws/
There are still dry counties in the US where you can't buy alcohol. Utah has restrictions on the alcohol content of beer. Many places have restrictions about what types of places can serve alcohol (must serve food as well, can't serve food as well, can't have certain other activities at the same location, etc.).
So, sure, you could make this rule. But the US won't ever make it. And I can't see it being at all effective in reducing the type of behavior you're talking about.
> the idea of people routinely getting drunk and committing crimes at a public venue that's licensed and regulated by the state seems rather odd, yeah? Ask the local police department where most of the crimes are committed every Friday & Saturday night, year in and year out.
Sure, but that is also because bars/nightclubs/downtowns are where people are at night. Everywhere else they might be is probably closed. The alcohol surely makes things worse, but it's not the only variable.
> If you'd like to get drunk you can certainly do so in the privacy of your own home, or at a private party
I haven't seen any evidence that getting drunk at home leads to less bad behavior. Things like domestic violence (which is very correlated with alcohol use) sure happen at home 99% of the time and not at a bar.
>Sure but that is also because bars/nightclubs/downtowns are where people are at night.
And most of the fights involve 2 men and 1 woman. Alcohol consumption isn't even the most significant variable.
So instead of buying drinks at the bar, you want them to buy drinks at the liquor store and bring them to the bar themselves?
You may as well close the bars then, as most couldn't exist without customers who drink past the point of intoxication.
Bartender here. Once again, I agree with Other Jack. Most bars have tight margins. They depend on regulars who get shitfaced. Even more so, they depend on drunk people buying food specifically because they are drunk. Cut off the drinks and you cut off the food sales - and therefore cut off most bars.
I haven't dug into the stats, so someone may be able to contradict this. But the biggest preventable crime and/or danger to the public from bars comes from drunk driving. The other negative aspects - fights, mostly? Not sure what you're objecting to, maybe people having sex in public restrooms or pissing in the street? - are probably not going to be much reduced by changes to the law. It'll just start happening in house parties in neighborhoods and cause all sorts of problems in places that are scattered throughout the city and therefore harder to observe and regulate. People are going to drink and do stupid shit.
To my mind, seems like the 80/20 proposal would be to subsidize Uber and Lyft and try to eliminate drunk driving.
We already fought this battle in the late 19th and early 20th century and settled on a compromise that is generally accepted. I doubt many people are interested in rekindling the temperance movement and smashing up saloons.
If the government can successfully eradicate the use of heroin, cocaine, methamphetamines and fentanyl, _then_ we can talk about alcohol.
Otherwise it's the same problem as gun control -- you make life worse for the law-abiding majority while doing nothing to stop the actual criminals.
Couple observations:
A) Bars/Clubs pull in a significant amount of tax revenue, especially from alcohol sales. Local governments would be reluctant to limit that.
B) Most people aren't going to stop getting as drunk as they want and committing crimes. Prohibition more or less proved that. Containing them to certain nightlife/commercial districts is likely much better than letting it spread to private parties and speakeasies in residential areas.
Seems pretty unreasonable to me. Personal freedom is important, even if that leaves us free to do stupid stuff.
You have the personal freedom to shoot a gun, drive a monster truck, smoke a joint etc. etc. - in designated private property areas. Not in a downtown commercial district. How is 'getting blackout drunk' different?
"You can't consume toxins that kill 95k people in the US every year in our city's downtown. Unlimited toxin consumption is allowed at home or on private properties"
You appear to misunderstand the nature of civil rights, at least in the United States. It is extremely difficult to circumscribe them in a public area -- the government generally needs to have what the constitutional lawyers call a "compelling interest," and the mere hypothesis that some restriction on the rights of free association and bodily autonomy would in general lead to less public disorder isn't sufficient.[1]
After all, you could (and people once did) make a formally identical argument that blacks and whites should be segregated everywhere in public, since it would significantly reduce interracial strife and potential violence. But it would never pass constitutional muster, because the government does not have a compelling interest that could justify abrogating the First Amendment right of free association and the Fourteenth Amendment right of equal protection.
--------------
[1] States get away with it with respect to drunk driving laws because the state issues licenses for driving, and can revoke them for whatever reason it chooses, id est it is not a constitutional right to be able to move around *via automobile* on public roads.
As the reply below me notes, race-based restrictions are (deliberately) treated with 'strict scrutiny'. This is demonstrably not the case for alcohol regulation, as:
1. There are tons & tons of dry towns, cities, and even counties all across the US- alcohol is completely banned. There were even multiple dry states for 40+ years after Prohibition was lifted (the Bible Belt ones), and this was never ruled unconstitutional. It can't be ruled unconstitutional because the 21st Amendment *explicitly gives any 'possession' of the US the right to regulate alcohol*
Section 2. The transportation or importation into any State, Territory, or possession of the United States for delivery or use therein of intoxicating liquors, in violation of the laws thereof, is hereby prohibited
2. Not only does the Constitution explicitly give any 'possession' of the US the right to regulate alcohol- in practice there are thousands of such regulations. Bars are highly regulated man! Any place that sells alcohol already has restrictions on when they can sell (set hours of operation), who they can sell to, how much they can sell, and so on
3. It's already illegal to overserve customers to the point of drunkenness
https://dui.drivinglaws.org/resources/dui-and-dwi/can-bartender-arrested-serving.htm
Again- this is *already criminalized now*. Arguably it'd be more fair to simply set a hard limit in terms of number of drinks, than to subject servers to highly subjective standards about how inebriated someone is
You are greatly overstating your case. First, no right to association has been recognized by the Supreme Court outside of association for the purpose of speech, and rights related to intimate relationship (marriage, child rearing, etc). See https://www.mtsu.edu/first-amendment/article/1594/freedom-of-association
Nor is the right to bodily autonomy, to the extent it exists, implicated by law re the consumption of alcohol. (That should be obvious, since laws barring entirely the consumption of marijuana are perfectly constitutional).
A law such as that proposed by the OP could only be challenged under the Equal Protection clause, and, since it does not discriminate based on a suspect class nor in regard to a fundamental right, would be subject to rational basis review, whereby a law is constitutional if it is rationally related to a legitimate goal, a very low bar. https://en.wikipedia.org/wiki/Rational_basis_review
BTW, that is the flaw in your segregation analogy: A race-based law is subject to strict scrutiny, an extremely high bar that requires that the law be necessary for a compelling state interest. https://en.wikipedia.org/wiki/Strict_scrutiny
I see I expressed myself poorly, and thank you for the opportunity to clarify.
I am of course not suggesting state governments can't regulate the sale of alcohol, either by blanket prohibition or by a variety of time place and manner restrictions. Clearly they can, and have, and will. (My opinion on whether that is practical or not is contained in a comment further up.)
What I was addressing was only nifty's very first line, in which he or she appeared to suggest that civil rights can readily be curtailed if a person happens to be in a public space, and if the state sees some rational interest in doing so. So far as I know, that is not the case. Civil rights are no less potent in public spaces as private (I'm not even sure what they mean in the private space anyway), and they are by definition things the majority cannot infringe just because it wants to, or see some rational interest in doing so. To the extent the Supreme Court has allowed any infringement at all, it requires a compelling -- not merely rational -- state interest.
My example was meant to be a provocative illustration of the point: that segregation was certainly argued at various points to be a rational interest of the state, because of the reduction in interracial friction and the potential for violence. (Whether those things are true or not is irrelevant to the argument, as what matters is whether the majority believed them to be true.) What the Supreme Court has said in general, and which we all kind of instinctively agree on with respect to segregation, is that merely having some kind of reasonable hypothesis of how such-and-such abridgment of a right might be in the public interest is insufficient. Civil rights are exactly those things that cannot be abridged regardless of the reasonableness or general belief in such theories.
So to the extent anyone can raise a constitutional civil right issue with respect to any proposed regulation of alcohol, the mere fact that the law only applies in public places, and that the hypothesis that it will produce some general public good is widely believed, are insufficient to allow abridgment of those rights. Whether any rights are actually implicated by any alcohol regulation would of course depend on the nature of the regulation.
No, civil rights generally cannot be curtailed merely because a person happens to be in a public space, but my main point is that there is no constitutional right implicated by the laws that the OP suggests.
How about if I offer a place for my friends and even friendly strangers to come and consume those toxins, on my own property. Surely no problem there, right?
I think given that people mostly don't live downtown, there isn't much incentive for society to reasonably say "public drunkenness & its associated social ills (fights, etc.) are just not acceptable in our downtown" when they don't really experience those ills.
Do black (African/Caribbean) people have biologically different voices from white (European) people?
When I hear a black person on the radio, I can often tell without any explicit mention of race. This is especially true for black women.
Of course, there are massive social and cultural factors in play. The community and region in which you grow up will affect accent, dialect and vocabulary choice. If you grow up around black people, you will speak and sound like other black people for this reason. (Note: I live in the United Kingdom, which has a very high level of regional accent variation, with the majority of black people living in London and other major cities, so my experience of these effects will be magnified.)
But I believe, even accounting for all this, that there is often a racial difference. Black women in particular have voices that sound lower and raspier/coarser than white women.
It would not be surprising if this were true. Black people have different facial features from white people. (You can see this most obviously by comparing photos of albino black people and albino white people.) This presumably results from differences in bones and muscle, which could easily affect voice production too.
But I have been unable to find any information about this online, whether research papers or blog posts. Part of the problem is that my Google searches tend to return results about dialect. If there is research on this, I'm not sure what search terms I should use to find it. (Conversely, I was able to find discussion of what it means to have a "gay voice".)
To everyone saying it's just accents:
1. I provided evidence that anatomical differences between races exist
2. If it were a matter of accent, it should be difficult to tell men and women apart who have the same accent. The opposite is true. It's exceptionally easier, which means accent is not what is solely relevant here. And these male female differences correspond to similar vocal tract anatomical differences between races.
I'm not sure why everybody just launches into anecdotes when actual data is readily available:
*Significant vocal tract anatomical differences exist between races (and obviously the sexes):
https://www.sciencedirect.com/science/article/abs/pii/S0892199705000718
>Acoustic pharyngometry evaluates the geometry of the vocal tract with acoustic reflections and provides information about vocal tract cross-sectional area and volume from lip to the glottis. Variations in vocal tract diameters are needed for speech scientists to validate various acoustic models and for medical professionals since the advent of endoscopic surgical techniques. Race is known to be one of the most important factors affecting the oral and nasal structures. This study compared vocal tract dimensions of White American, African American, and Chinese male and female speakers. One hundred and twenty healthy adult subjects with equal numbers of men and women were divided among three races. Subjects were controlled for age, gender, height, and weight. Six dimensional parameters of the speakers' vocal tract cavities were measured with acoustic reflection technology (AR). Significant gender and race main effects were found in certain vocal tract dimensions. The findings of this study now provide speech scientists, speech-language pathologists, and other health professionals with a new anatomical database of vocal tract variations for adult speakers from three different races.
*And as long as we're talking about whether perceived differences in voice between races actually exist, yes, they do: https://read.dukeupress.edu/american-speech/article-abstract/86/2/152/5900/DO-YOU-SOUND-ASIAN-WHEN-YOU-SPEAK-English-RACIAL?redirectedFrom=fulltext
>In paired dialect identification tasks, differing only by speakers' sex, New Yorkers were asked to identify the race and national heritage of other New Yorkers. Each task included eight speakers: two Chinese Americans, two Korean Americans, two European Americans, a Latino, and an African American. Listeners were successful at above chance rates at identifying speakers' races, but not at differentiating the Chinese from Koreans. Acoustic analyses identified breathier voice as a factor separating the Asian Americans most frequently identified from the non-Asians and Asians least successfully identified. Also, the Chinese and Latino men's speech appeared more syllable timed than the others' speech. Finally, longer voice onset times for voiceless stops and lower /ε/s and /r/s were also to be implicated in making a speaker “sound Asian.” These results support extending the study of the robust U.S. tendency for linguistic differentiation by race to Asian Americans, although this differentiation does not rise to the level of a systematic racial dialect. Instead, it is suggested that it be characterized as an ethnolinguistic repertoire along the lines suggested by Sarah Bunin Benor.
I'm sure there are some average physiological differences, but there are a lot of voice actors of every race who could easily fool you.
That's absolutely irrelevant. It's like saying that Yao Ming is a hall of famer, therefore the reason why blacks outnumber asians in the NBA by orders of magnitude has nothing to do with biology.
It doesn't seem like quite the same thing to me. The ability to vocalise, and to understand and express the meaning of words in a play/show is very well distributed, in my opinion.
Thanks. This is exactly the kind of study I was wondering if someone had done, and confirms my suspicion that there are biological/physiological factors that affect voice in a significant way.
I think the relevant terms I would have needed to find this through Google are "physiology/physiological" and "voice quality" (where "quality" means "kind/type", not how "good/nice" something is).
Considering Laurence's comment about how pitch can vary even with the same person speaking different languages (which wouldn't be physiological), it seems like physiology may not be the only factor or even the dominant factor. Based on the sources I've seen and people have linked to, I don't have a good idea of what the likely relative contributions of physiology versus psychology/socialisation are to voice pitch and timbre.
I find it interesting that many of the responses were about accent and dialect, even though I explicitly stated that this was not what I was thinking about. I think that, considering speech and voice as a whole, accent, dialect and vocabulary choice are usually far more obvious (and probably more reliable) indicators of the speaker's race than pitch and timbre; maybe this is why people gravitated towards that discussion. I found that the comments on this blog post discussing racial differences in voice quality tended the same way:
http://dialectblog.com/2013/04/17/race-and-voice-quality/
Many of the comments there are primarily anecdotal, but those who do think that black people have different voices from white people tend to agree with me that black voices tend to be deeper and "huskier". I particularly enjoyed one commenter's description of how he was convinced there is a racial difference in voices by listening to Klingons in Star Trek: TNG and predicting that Worf's actor must be black.
Well, it's probably because the rest of us were too lazy to look it up. Thanks for setting a better example!
I grew up in a majority-black community, listening to a local female DJ, and always thought she was black. Around 7th or 8th grade, my friends and I were shocked to discover she was very, very white. It was shocking not just because of our expectations, but also because we often heard white people speaking like locals, and it never sounded right.
Second anecdote: there's a movie based on a true story (BlacKkKlansman) about a black guy who infiltrates the KKK with Kylo Ren.
This is probably really about accent and dialect, rather than voice pitch.
Comedian/actor Sacha Baron Cohen's first big character success, Ali G, played on this idea. When the character was created, many white British youths fantasised about and imitated aspects of black hip hop culture and gang culture, including speech. Ali G was a parody of these white people. He attracted some criticism because people thought he was a parody of black culture, rather than a parody of white people imitating black culture.
I don't think we should base our understanding of the world on two people.
The other thing I noticed in the US is that liberals have different voices from conservatives: less deep. NPR vs. Fox.
NPR Voice is definitely a thing and has grown more extreme (effeminate) over the decades. Not sure it is representative of liberals not in media.
Do you have any supporting evidence for this claim? I would not be surprised if the media voices have different styles, but e.g., is there any study where people listen to speakers reading out some neutral text, and then perform better than chance at identifying their political views?
no, it's just something I noticed as an outsider when I lived there. Also, quit Sea-lioning
How is it sea-lioning to ask you to provide evidence for your assertions? We're supposed to just take your word for it?
This seems not to be the case in the U.K., and definitely not for well-educated black people. I was watching the BBC coverage of the Queen's passing, and the presenter Clive Myrie sounded just as posh as the rest of them.
It occurs to me that I should give a couple of examples of people who I think sound "black", but not just because they speak London Multicultural English.
Looking at Wikipedia's category of black British MPs, consider Diane Abbott or Dawn Butler. Both have voices that I would consider to be lower and huskier than is typical for a white woman.
I am from the UK and was talking mainly about my experience listening to people from the UK.
Your suggestion that Clive Myrie "sounded just as posh" sounds to me like a claim about accent/dialect, not a claim about pitch and timbre. Even so, I must disagree with you partly on this point. Yes, he speaks clearly and sounds educated. Yes, his accent is not recognisably black. However, while I would say it is definitely more Received Pronunciation than Estuary English, he sounds nowhere near as "plummy" as, say, David Cameron, Boris Johnson, or Jacob Rees-Mogg. However, I guess if you were comparing him with other BBC presenters of the current generation, "just as posh" is probably fair. He certainly doesn't sound "common".
The test here, Colin, is to close your eyes and do a blind test. Get someone to help you with different presenters, preferably ones you don't know. That Clive sounds less posh than Johnson doesn't mean you could work out his race on the radio. I just checked on Diane Abbott, and when I closed my eyes I didn't hear much huskiness.
https://www.youtube.com/watch?v=1smqFBUeWdM
Anyway this thread can only be subjective.
For goodness' sake, why do you insist on using these non-random examples? The average UK politician sounds different from the average UK person, so they're not good examples. How many MPs have thick cockney accents? None?
Given the pains mainstream broadcasters take these days to avoid giving offence and causing prejudice against specific racial groups, it is amazing the way voice disguises, in true crime programs and suchlike, make perp interviewees' voices sound like those of some blacks! The quality of this kind of voice is hard to define, but it has a sort of resonant sound which (I guess) may be mainly the result of a larger jaw, and thus mouth, and/or larger nasal cavities.
By contrast, some American voices, especially female voices with what I believe is a Brooklyn accent, seem to have become squawkier and more nasal with every passing decade! At this rate, by 2050 the speech of some American women will sound like short bursts of fast forwarding on an analogue tape recorder! :-)
Okay, but there is OBVIOUSLY massive selection going on there. Of course a news presenter sounds posh; they wouldn't have hired him if he sounded Caribbean and couldn't say 'th'. Do you imagine that newsreaders are selected randomly from the population?
The claim the OP makes is that you can tell the difference in the US. I am pointing out that you can't tell in the UK.
Okay, and that's wrong, because you're not basing it on a representative sample of the population; you're basing it on a single individual selected precisely because he sounds "white". I've met white people in the US whose voices sound indistinguishable from those of 'black-sounding' black people. That doesn't mean that mean differences in voice quality don't exist between the races; it means I would be basing my view of things on an unrepresentative sample.
In order for what you are saying to be true, we would need to take some randomly selected whites, blacks, and whoever else from the UK population and blindly guess their race from their voices.
If you're claiming that it's possible to find people of different races between whom no difference in voice quality is detectable, then yes, this is obviously true but also irrelevant. The claim "you can't tell in the UK" implies no mean difference.
I am not just talking about this individual; he was an 'example'. In Britain as a whole there is no black accent, except regional accents that apply anywhere. A black man from Liverpool sounds like a white man from Liverpool. And this is true across class divides. Footballers don't have black accents.
The only perhaps-racial accent is Multicultural London English, but white people mimic that as well.
>I am not just talking about this individual; he was an 'example'.
Yes, and it's *literally the worst possible example you could have provided*, because it's the opposite of random.
>In Britain as a whole there is no black accent, except regional accents that apply anywhere.
This isn't about accents. This is about voice quality. Two people with the same accent can sound very different. Blacks and whites in the US can have the same accent and yet have distinguishable voices.
For goodness' sake, white men and white women from the same place will have almost identical accents, and yet more than 9 times out of 10 we have absolutely no difficulty identifying the sex of the person talking.
>Footballers don't have black accents.
They sound different to white footballers on average.
I think I agree mostly. Certainly, I would expect a black man who grew up in Liverpool to have the same accent/dialect as a white man who grew up in Liverpool.
I think I have a perception of London Multicultural English as being "more black" (but see my comment about Ali G above), which is presumably because such a large proportion of black people in the UK live in London.
But I think there is a further effect in play. When you have a somewhat insular ethnic minority community, that community can develop and maintain its own accent/dialect variation, which effectively becomes a racial accent. You can hear this most obviously if you go and talk to someone working in a kebab shop or as a taxi driver in a major city outside London. (For readers outside the UK: A very high proportion of people working in these jobs are south Asian; that is, Indian, Pakistani or Bangladeshi.)
I wanted to flag this up as something I was aware of potentially existing because it can be difficult, as a listener, to separate perception of accent from perception of voice quality/timbre.
So, can you tell when it's, say, Nigerian instead of American? I was listening to the audiobook of Half of a Yellow Sun, and it's actually startling when a black American woman who moved to Nigeria talks (they use voice actors for the audiobook).
I'm not quite sure what you mean by American? Unless you are talking about Native Americans, which I doubt, I'm not sure what "American" would mean racially.
Are you comparing a black woman who grew up in America and then moved to Nigeria, with a black woman who grew up in Nigeria, where both are speaking English?
Obviously
It could possibly be true, but evidently the sociocultural environment has a huge effect on how a person's voice sounds. Multilingual people can drastically change the 'default' pitch of their voice depending on what language they're speaking in, without even realizing it. Given that such differences can be very large within one person, it would be very unlikely that racial differences are especially influential.
I found your remark that multilingual people change pitch when changing languages very interesting. This is a possibility I had not really considered before. I had assumed that the voice pitch people speak with "normally" was mostly/entirely physiologically determined.
While looking for references to back up your remark, I came across this, which has a nice summary of research on "default voice pitch" (for which the technical term seems to be "F0") in the introduction:
https://linguistics.ucla.edu/people/keating/keating-kuo_2012.pdf
*Significant vocal tract anatomical differences exist between races (and obviously the sexes):
https://www.sciencedirect.com/science/article/abs/pii/S0892199705000718
>Acoustic pharyngometry evaluates the geometry of the vocal tract with acoustic reflections and provides information about vocal tract cross-sectional area and volume from lip to the glottis. Variations in vocal tract diameters are needed for speech scientists to validate various acoustic models and for medical professionals since the advent of endoscopic surgical techniques. Race is known to be one of the most important factors affecting the oral and nasal structures. This study compared vocal tract dimensions of White American, African American, and Chinese male and female speakers. One hundred and twenty healthy adult subjects with equal numbers of men and women were divided among three races. Subjects were controlled for age, gender, height, and weight. Six dimensional parameters of the speakers' vocal tract cavities were measured with acoustic reflection technology (AR). Significant gender and race main effects were found in certain vocal tract dimensions. The findings of this study now provide speech scientists, speech-language pathologists, and other health professionals with a new anatomical database of vocal tract variations for adult speakers from three different races.
Some African-Americans learn to sound "white"; it's just an accent. Your Googling turns up stuff about dialects because that's what it is.
Anecdata:
When I was in college, I found I could identify black people from New York, sight unseen. It was pretty clearly some kind of local accent, because it was very location specific.
OTOH, I've heard any number of British people who turn out to be second generation from all over the world, including from mostly black colonies, and I can't distinguish them by voice.
Reasoning:
I doubt there's a voice difference. I'd expect accents - plural, not singular - and also dialects, with varying degrees of difference between white and black people from the same location, depending on details of local customs. Children seem to absorb whatever way of speaking they are raised in, and sound like their parents and neighbours, whether or not those are foster parents.
Like so many other things, I'd expect that if you went out objectively measuring voices by some set of objective criteria, you'd find that different races have heavily overlapping distributions with slightly different averages.
To pick a simple example I would not be surprised if the average pitch of a black man's voice is deeper than the average pitch of a white man's voice which is deeper than the average pitch of an Asian man's voice, even when all are raised in the same way speaking the same dialect.
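The "heavily overlapping distributions with slightly different averages" point can be made concrete with a toy simulation. All the numbers below (the group means, the spread, the two-group setup) are invented for illustration, not taken from any measurement: even when a mean pitch difference is perfectly real, classifying a single voice by pitch alone barely beats a coin flip.

```python
import random

random.seed(0)

# Hypothetical, illustrative numbers: two groups whose mean fundamental
# frequency (F0, in Hz) differs by less than half a standard deviation.
MEAN_A, MEAN_B, SD, N = 110.0, 118.0, 20.0, 100_000

group_a = [random.gauss(MEAN_A, SD) for _ in range(N)]
group_b = [random.gauss(MEAN_B, SD) for _ in range(N)]

# Classify each single voice by which side of the midpoint it falls on.
threshold = (MEAN_A + MEAN_B) / 2
correct = sum(x < threshold for x in group_a) + sum(x >= threshold for x in group_b)
accuracy = correct / (2 * N)

print(f"accuracy from a single voice sample: {accuracy:.2f}")
```

The group difference here would show up reliably in averages over many speakers, yet any individual voice remains a weak predictor - consistent with both "the groups differ on average" and "plenty of individuals could fool you".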
Perceived differences in voice between races do exist: https://read.dukeupress.edu/american-speech/article-abstract/86/2/152/5900/DO-YOU-SOUND-ASIAN-WHEN-YOU-SPEAK-English-RACIAL?redirectedFrom=fulltext
>In paired dialect identification tasks, differing only by speakers' sex, New Yorkers were asked to identify the race and national heritage of other New Yorkers. Each task included eight speakers: two Chinese Americans, two Korean Americans, two European Americans, a Latino, and an African American. Listeners were successful at above chance rates at identifying speakers' races, but not at differentiating the Chinese from Koreans. Acoustic analyses identified breathier voice as a factor separating the Asian Americans most frequently identified from the non-Asians and Asians least successfully identified. Also, the Chinese and Latino men's speech appeared more syllable timed than the others' speech. Finally, longer voice onset times for voiceless stops and lower /ε/s and /r/s were also to be implicated in making a speaker “sound Asian.” These results support extending the study of the robust U.S. tendency for linguistic differentiation by race to Asian Americans, although this differentiation does not rise to the level of a systematic racial dialect. Instead, it is suggested that it be characterized as an ethnolinguistic repertoire along the lines suggested by Sarah Bunin Benor.
If our perception of reality is just a rendering of physical/base reality in our consciousness, why is it mostly beautiful? Why is our rendering of a wave beautiful if it is just a bunch of colorless molecules? Why do we perceive nature as mostly beautiful, even sublime? Is it an evolutionary aspect of our brain/consciousness to make existence bearable?
Our perceptual systems evolved in a given environment. There's obvious reasons why it would lead us to have preferences for certain perceptions.
I've heard that default ideas of beauty can change -- in the West, beautiful land was at least fairly level and fertile, and the idea of the sublime (mountains, storms) came in a century or so ago.
On the other hand, there are so many cliffs in traditional Chinese and Japanese art that I was surprised to see a field with a lot of flowers in a Kurosawa movie.
Not all of life is beautiful. War and devastation suck. And so when it is beautiful, we should totally enjoy it. I'm heading out on a walk, it's been wet here and mushrooms are popping up all over. (Well mostly in the woods.)
It's not a rendering of base reality, though. All of our perception is an extremely limited construction based on a very narrow selection of information. We take a certain range of the electromagnetic spectrum as input, and then apply layers and layers and layers of post-processing to that. Early layers give us experience of color and brightness, and later ones give us depth, motion, and face perception. Yet few, arguably none, of those experiences exist in reality. Colors may correspond to certain wavelengths, but our perception has nothing in common with base reality when it comes to faces, motion, or even brightness, which are constructed from scratch.
The reason why nature is beautiful doesn't directly follow from this, but you can easily bridge the gap. We perceive things the way we do because this apparently was most conducive to our survival. One explanation is that we developed some neural algorithm that intuitively evaluated spaces for habitability, and somewhere with lots of greenery, a varied landscape, and long sightlines gets evaluated positively because it means fertile land and lots of different foraging/hunting opportunities. So we call that intuition 'beauty'.
I am trying to make sense of the comments. Let's see... We have no direct access to base reality. We apply a multitude of layers to a small range of electromagnetic input. I assume the more sophisticated the organism, the more sophisticated the layers. Hard to prove on the level of consciousness, but I assume this is done on the level of neurology. OK, going on... evolution gave humans a beauty circuit, but not for specific things, because we don't have enough genes to support that. I am aware of the synchronic and diachronic as well as individual differences in appreciating certain qualia or even aesthetics, but I am not convinced. Are all aspects of beauty cultural? Are there cultures that find no aspect of nature beautiful? What about the sublime: awesome yet fear-inspiring? A mighty waterfall or storm clouds? As for socialization, I propose a Bach cantata against screeching noise (yes, not 100 percent, but it feels universal both historically and across cultures today if asked for a preference, say in a closed room for 24 hours). There are many assumptions here, and it seems like we are going out of our way to avoid some sort of dualism or simulation. Where am I going wrong?
I think "beauty is cultural" and "beauty is innate" aren't necessarily exclusive. Cultural influences might cause someone to dislike what most cultures would like and vice-versa, just like how people learn to like spicy food and bitter tonic. Brutalism is the first example that comes to mind. I'm just spitballing here but if the perception of beauty has a survival advantage, then you'd expect it not to be sensitive to cultural effects, at least not to the point that one's sense of beauty is the complete opposite of another's. But if the experience of beauty is just a side effect of how we process stimuli, then it might be more malleable.
You didn't evolve to find specific things beautiful; you don't have enough genes for that.
Evolution gave your brain a beauty circuit and yours was trained to find nature beautiful, but not everyone's is. Many people don't find nature beautiful. And many find things antithetical to nature beautiful like cities or sci-fi scapes.
Nature is a reasonable candidate for beauty because you live in it, it's often a matter of life and death, and it's legible: you can learn a lot about it with your senses. Beautiful doesn't always mean good, I find the utter barren rock scape of Mars beautiful. A better shorthand for beauty might be "compelling".
It's actually even more fun - I seem to recall reading that historically nature was perceived as hideous. Which jibes with how we'd have less appreciation for natural spaces back when they were more likely to kill us than present an opportunity for a nice stroll.
Did a short google and the only article I found was specific to mountains (https://www.theguardian.com/artanddesign/jonathanjonesblog/2011/feb/08/mountains-leonardo-giambologna-art) so maybe that's what I'm pulling from, but still interesting.
Very interesting. It seems like a phenomenology thing: the much-debated capacity to see the "itness" or "thingness" of a thing/phenomenon before culture teaches you to see it in a certain way. I wonder how that will pan out with objects in space. We see galaxies as mostly beautiful, but will we be acculturated to see dark, frightening rocks as such? Maybe it also has to do with the time we spend gazing at them and how threatening we perceive them to be, so we see far celestial objects as beautiful, as well as the Moon and Mars, neighbors we are somewhat intimately familiar with.
Interesting phenomenon: the sound of the surf breaking on the beach is similar to the sound of a nearby highway. The label I put on the sound makes me experience it as beautiful or ugly.
> Why do we perceive nature as mostly beautiful, even sublime?
I guess the purpose is to make us relax, thus conserve calories. Nature is beautiful when there are no predators, decaying corpses, dangerous insects, spiders, or snakes nearby.
Does anyone know of any large scale data sets on human mood?
I'm interested to know how often and how much people are unhappy. If we took a million people and sampled their self reported mood up to several times a day for six months... we might learn whether unhappiness is in fact a fairly common experience, and whether there are clusters in how it behaves (I'd expect people to be somewhat arranged along the Neuroticism axis of the 5 factor model...).
Anyway, I think there have probably been apps supporting better mental health etc. that have collected and perhaps published this data - but I cannot find it. I had a look around and could find material on how mood correlates with social forces, and sentiment analysis on social media, but nothing that I could use to explore the questions I'm asking.
Maybe the ACX community can point me in the right direction!
Thanks :)
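To make the proposed design concrete, here is a toy sketch of the kind of per-person summary such a dataset would support. Everything is fabricated for illustration: the trait model in `simulate_person`, the 0-10 mood scale, and every parameter are assumptions, not findings from any real EMA study.

```python
import random
import statistics

random.seed(1)

# Toy model: each person has a latent neuroticism-like trait that both
# lowers their average mood and raises its volatility. Numbers invented.
def simulate_person(n_pings=180):
    trait = random.gauss(0, 1)
    base_mood = 6.5 - 1.0 * trait           # average mood on a 0-10 scale
    volatility = 1.0 + 0.5 * abs(trait)     # size of day-to-day swings
    return [min(10.0, max(0.0, random.gauss(base_mood, volatility)))
            for _ in range(n_pings)]        # ~6 months of daily pings

people = [simulate_person() for _ in range(2000)]

# Two per-person summaries: average mood, and within-person variability.
means = [statistics.mean(p) for p in people]
sds = [statistics.pstdev(p) for p in people]

unhappy_share = sum(m < 5 for m in means) / len(means)
print(f"share with average mood below 5/10: {unhappy_share:.2f}")
print(f"typical within-person swing (SD): {statistics.mean(sds):.2f}")
```

With real data the interesting step would be clustering people on summaries like these (mean, variability, autocorrelation) to see whether they actually arrange along a neuroticism-like axis.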
Not necessarily what you're looking for, but the Qualia Research people did a transition graph between emotions:
https://qri.org/blog/wireheading-done-right
http://web.stanford.edu/~cgpotts/papers/conditioned-sentiment-expression.pdf
For high school students, there are the huge PISA datasets, which also contain questions about the well-being of students. Just once every 2-3 years, but they cover a huge number of students and countries.
https://www.oecd-ilibrary.org/sites/c414e291-en/index.html?itemId=/content/component/c414e291-en
There was a study I participated in years ago called the BRIGHTEN Study (it used the ginger.io app), where it messages you several times a day or several times a week about your mood. I'm not sure what came of the results; they were never sent out to the participants like I was told would happen.
I was once a subject in a study of mood. I got pinged at random times during the day and was to report my activity at the time and how happy/unhappy I was. I found those happiness ratings extremely hard to do. I'm not convinced that happiness/unhappiness is a dimension that's meaningful all the time. If I'm quite happy or unhappy I'm clear about that, but a lot of the time I'm in a state where I'm neither. There are things I can say about the state, but how happy or unhappy I am isn't one of them. For instance, right now I'm neither happy nor unhappy, and honestly I do not think it's accurate to say that I'm at the midpoint of the scale either. What's much clearer to me is how invested I am in what I'm doing. When I saw Andrew O's question I wanted to answer it. Now that I'm answering it, I'm invested in being clear and getting my idea across. That's keeping me typing energetically, and I would definitely not welcome an interruption right now -- so in that sense I "like" writing this -- I want to do what I am doing right now. But I think, for me at least, it would be forcing an inappropriate template on my state to say that writing this is making me somewhat happy. It's much more accurate to say I am engaged, interested, invested. Everybody has a certain weight at any given moment, but maybe we don't have a certain quantity of happiness at every moment.
I'm pretty sure Daniel Kahneman has done some of these studies (as have many other psychologists, though Kahneman describes them in Thinking, Fast and Slow).
What you're describing is known as Ecological Momentary Assessment, or EMA, and I have a few colleagues working on it. They use an app called Ethica, which asks people to report their mood on various scales several times a day for a number of weeks or months. However, I don't know if unhappiness per se is something that EMA captures well. Absent chronic stress, I would expect hedonic treadmill effects to keep people at roughly the same level of (un)happiness, depending on how you define it. But if you do some Google Scholar searches, you could probably find something interesting.
Thanks - I'll check that out!
This doesn't directly answer your question, but most of the research in this area refers to 'well-being' rather than 'mood'; searching on the former may be more fruitful, in case you haven't tried that. For example, searching '+patterns +wellbeing' in PubMed turns up >30k hits.
I doubt anyone has sampled a million people multiple times a day. I was involved in one study for a large employer that did exactly what you describe - randomly polled their ~10k employees through the day, over many months, about their well-being, via a smartphone app - but the data and findings are not publicly available.
Anyone here with an aquarium hobby or knowledge about biological filtration?
TLDR: I am looking for some research articles or other materials about biological filtration in aquariums.
A lot of the information I found so far online is not very rigorous.
I am looking for some pointers or recommendations to good research/books/anything serious to help me answer these questions:
1. Why does it take so long for the bacteria to colonize the filter (even when the bacteria are added explicitly)? How do parameters of the water influence this? Can it be shortened?
2. Is filter media really necessary? It is usually said that filter media provides surfaces for bacteria, but these are just a fraction of the other surfaces combined in an average tank (substrate, rocks, etc.). Wouldn't just establishing a continuous flow of water be enough? If not, what is the appropriate amount of filter media?
3. It is recommended to run the filter all the time. Are the filter bacteria really so fragile that they wouldn't survive some fraction of time with the filter turned off? This seems counterintuitive to me.
4. It is said filter media transplantation to a new tank usually does not work and cycling has to be started again. What is the reason behind this? Can it be done somehow?
5. How do water parameters affect the capacity of filtration?
6. How does the strength of water flow affect filtration?
A lot of answers to these questions I found on aquarium webs/blogs/discussions are quite contradictory and sometimes I get the feeling nobody really knows what they are talking about or are just repeating what others said. That's why I want to look at some more rigorous research.
Also, do you think any of these would be a good research question?
What's more important than the precise absolute values is that tank conditions do not change quickly. (this goes double for marine aquaria. The ocean is a very consistent environment.) Small tanks can be a false economy because they change too fast. Don't rush to put fish in before the system has equilibrated.
> (substrate, rocks, etc.). Wouldn't just establishing a continuous flow of water be enough?
This is called an under-gravel filter: water is drawn down through the substrate and up a tube. They used to be more popular a couple of decades ago and are not great for live plants.
I think most online places will be able to help you. https://fishlab.com/online-aquarium-communities/
My favorite 'nerd out' book on aquaria and ecosystems is "Dynamic Aquaria" by Walter Adey.
Looks like the first edition can be had for under $10. Highly recommended if you want to think of your aquarium as an ecosystem.
I worked in sewage treatment for a while, and would point you toward that in terms of pure biological filtration mechanics questions. I don't have much experience with a closed system including fish, though.
In my experience, this level of information basically does not exist for hobbies. If you have the kind of mind that asks these kinds of questions, and the patience/time/space (and probably disposable income) necessary to test them yourself, you could probably become a minor celebrity in your hobby's community. (See the "brulosophy" folks in the homebrewing community.) Be ready to become a villain to a small subset when you inevitably slaughter some sacred cows.
The best you can hope for is that there _might_ be literature from research/large scale aquariums but I doubt it, as they tend to use completely different systems and techniques relative to home aquariums.
In general, these "best practices" in hobbies (and in most things, really) come from people knowing a certain thing like "there is a bacterial community that helps filter water, and bad things happen if it isn't robust enough". They then start throwing things at the wall until they find something that works, with no rigor or method to work out which things specifically, in what amounts, and under what conditions actually matter. The real world is complex and finding the boundary conditions among many interacting factors is difficult, so mostly people don't bother; they just do the thing that works, which usually consists of some mixture of things that have no effect, things that are done to complete overkill, and a couple of things that are completely essential, with no way of differentiating them.
What kind of aquarium are you looking to run? I generally had success not doing even half of what is "best practices", but the one thing I did do that I believe helped a lot was to seed the tank with about half a gravel base from an already healthy aquarium.
I'm a high school student with an interest in computer science, but it looks like coding will be a less valuable skill in the future with things like Copilot coming out. Is majoring in comp-sci still a good idea? Also, if not, what are some other intellectually-demanding majors that are more resilient to automation (Maybe stuff in math or bio)?
Copilot is garbage because getting code 95% of the way to correct is worse than getting it 0% of the way. Debugging takes most of the development time.
Majoring in comp-sci is one of the few reliable ways to still be relevant in 20 years.
The vast majority of time at a programming job is figuring out exactly how/why something has broken.
Most coders only write quite a small number of lines of code per day, the majority of the work is figuring things out.
Until we have true AGI coding is going to continue to be a busy job.
90% of my time spent programming is not writing code; it's deciding what to write and how to write it. Things like Copilot are just like having a map function built into a language. It makes it easier/faster to do routine things. Copilot just greatly expands what can be considered routine.
Computer science being fully automated is essentially the definition of the Singularity, at which point any decisions you've made prior are irrelevant. Conditioning on the Singularity *not* happening imminently, there's still growing demand for *skilled* developers.
I'd still learn to program. It's not like auto mechanics went away when we got power tools.
And a lot of the Copilot stuff just lets you go faster; it's not really creative. So it might actually raise programmer productivity as long as we don't run out of problems to solve. And we're in no danger of that.
In my dad's day, you'd program computers using punch cards and assembly language. In the intervening decades there have been a bunch of innovations to make programming computers easier and massively increase the productivity of each programmer. This has not decreased the demand for programmers.
AI-assisted code generation will definitely be part of programming in the future; ultimately it will make programming harder rather than easier, since each programmer will suddenly be responsible for even more code, even more complexity, making decisions at an even higher level of abstraction.
I can't give career advice, but my nephew, who studied to be a journalist, has now pivoted to a job as a web dev (if I'm getting the term correct) and is doing well in that.
(So uh, yeah, I suppose he did "learn to code").
So if he could do it, you can find a field you are interested in and study for it.
Copilot, if you'll excuse my wording, is utter, hot garbage. Copilot is not going to replace programmers any more than GPT-3 is going to replace writers. Copilot frequently regurgitates code from its training data as-is. The fundamental assumption underlying Copilot is wrong and misguided: good code can't be created like natural language. Natural language is very redundant and error-resilient; a couple of mistakes here and there don't impede the flow of meaning. Code works the exact opposite way: a single pedantic mistake can spell disaster and completely trash the entire process. I can't emphasize strongly enough how different code is from natural language; you might as well try to synthesize math proofs or physics papers with GPT-3.
Even granting that Copilot is an average human programmer (this is extremely wrong, but grant it to make a more interesting point), it still won't replace human programmers. The fundamental bottleneck in any large piece of software (>30K lines of code or so) isn't the skill of the individual programmer, but the management and communication of the group of programmers making the system. If you can't make Copilot contribute during a code review, or attend an agile standup, or really just talk to humans for 5 minutes and understand the massive amount of social context implied in the conversation, then you can't replace programmers.
Automation fears are largely unfounded and hysterical. The vast majority of human work, when considered end-to-end, is AGI-hard because, trivially, it involves talking to humans and convincing them of things using mental models of their internal state, as well as open-ended interaction with the world that can never be done successfully if all you have ever seen before is 100 TB of dead data scraped from Wikipedia. The poster boys for automation (factories, trucks,....) are tasks which require very little human interaction from start to finish, and this is relatively rare (and even then, they are still not completely solved).
Finally, modern neural networks are a very lame kind of intelligence, and will never pass the Turing test by 2100. My Turing test is whether you can convince a non-horny heterosexual man that his conversational partner is a woman worth dressing up for, or an equivalent. It's amazing how far dumb matrix multiplication can get you, but it will never get you as far as a system developed over the course of ~10 million years that is full to the brim with special circuits and tricks. Keep throwing more hardware and more Wikipedias at a problem like "automate human conversations" while using only numerical neural networks, and I guarantee you (1) failure, or (2) Pyrrhic successes, i.e. you successfully created your mAI Waifu^TM chatbot, but only the USA's Department of Defense can run it, and your competitors simply pay real girls instead and operate a wildly successful business. Plenty of human work fundamentally involves human conversation.
> Automation fears are largely unfounded and hysterical. The vast majority of human work, when considered end-to-end, is AGI-hard because, trivially, it involves talking to humans and convincing them of things using mental models of their internal state, as well as open-ended interaction with the world that can never be done successfully if all you have ever seen before is 100 TB of dead data scraped from Wikipedia.
I can't help but notice that since the industrial revolution, the increase in productivity per worker often went with a reduction of work force. There is little solace for a fired factory worker in the fact that the factory still employs some workers.
Historically, I think that while all jobs required some interfacing with other humans, the vast majority of jobs were back-breaking labor with a bit of social interaction sprinkled on top: a charcoal maker probably did not spend half their time making Charisma-based rolls to sell their product for the best price. Instead, 95% of their time was probably spent on Strength/Constitution-based rolls.
> The poster boys for automation (factories, trucks,....) are tasks which require very little human interaction from start to finish, and this is relatively rare (and even then, they are still not completely solved).
I would call the profession of computer (back when "computer" was a human job title) even more of a poster boy for automation.
While I agree that the current approach to AI is very much brute-force (a human child can learn a language on orders of magnitude less data than GPT-3), computing power will only get cheaper while humans (hopefully) will only get more expensive to employ.
Even if copilot can never replace a good programmer, or GPT-3 a good author, or DALL-E a truly creative artist, this does not mean that no jobs will be lost.
> There is little solace for a fired factory worker in the fact that the factory still employs some workers.
That mostly depends on where on the bell curve you fall. Technology is a massive force multiplier for IQ, so with automation your economic viability quickly separates into the extremes of either worthless or irreplaceable.
The difference here is that what we want out of software keeps increasing.
Sure, but technological innovation also creates jobs. I mean, there were very few "computer programmers" in the 1950s and what they did bore little relation to what the millions of them do now. There were no commercial airplane mechanics in the 1920s, all the guys who might've been good at it worked on tractors or mine pumps instead.
Whether on balance more jobs are created than lost, and whether the new jobs are of higher satisfaction or lower, and what effect this all has on the distribution of incomes, is an exceedingly complex question which I expect to be debated until the Sun burns out. But history does suggest that on the whole technological innovation is good for everybody, jobs included.
Yeah, everyone wants better tools.
>looks like coding will be a less valuable skill in the future with things like Copilot coming out.
Maybe, but on the other hand, Copilot (and, most importantly, the hacked model of Copilot that is bound to be leaked eventually) may act as a multiplier for your output, making your skill -or a slightly different skill- that much more valuable. See the Jevons paradox (https://en.wikipedia.org/wiki/Jevons_paradox).
I don't know if Math would be a good alternative, but I'd wager that if comp-sci is decimated by AI, then bio will get an even worse deal (and if it isn't, then bio will become much closer to comp-sci than it is now)
Copilot is not anywhere within a few light years of replacing programmers who can do things beyond the first-year courses. If you enjoy programming and are able to graduate college, you'll be safe from AI takeover for longer than your lifetime. Besides, computer science is the job that makes AIs! What could be more resistant to automation than the job of designing the automations?
I'm not sure about light-years, but I think I overcorrected on the performance of Copilot on interview problems. Thanks for the encouragement.
I think you will be fine. But what are your goals? If you really want a degree that is for sure going to be remunerative going forward I would think some sort of robotics engineering or a wide variety of other engineering degrees are the way to go.
If you just want to make money, try to go into high-end math/statistics/modeling and then into finance.
I find it amusingly hypocritical that I complain about pushing politics in movies, and yet the recent few movies I enjoyed a lot were strongly political. It's just... someone else's culture war, so I don't mind.
* Hindus and Muslims should live together in peace
* caste discrimination is bad
* Britain is evil (except for that one girl who falls in love with the protagonist)
EDIT: For the record, I didn't mean that all those three happened in the same movie, so you don't have to guess. I have watched many Indian movies recently.
RRR is spectacular. I could ding it for the final fight scene being a little too long, but it's incredibly engaging for me compared to most movies.
A small mystery-- I watched it on Netflix. Why are the British speaking Spanish? (It's all subtitled.)
Apparently, it's political beyond the Brits being the bad guys and the revolutionaries being the good guys.
It's a lot more related to Hindu Nationalism than one might hope. I recommend seeing the movie first.
https://www.vox.com/23220275/rrr-netflix-tollywood-hindutva-caste-system
At length: https://buttondown.email/riteshwriter/archive/6-unpacking-rrr-indian-politics-and-cinema/
I will ask for the first two movies.
Btw, you might want to reexamine what about "political" movies turns you off. Politics is a major aspect of society and it would be a shame if movies were completely barred from addressing it.
WARNING: CONTAINS SPOILERS FOR SOME MOVIES
Sure, you can't avoid politics completely, and these days almost everything is political anyway. I guess what you *can* avoid is letting the political message make your movie predictable.
For example, suppose an American movie starts with a group of men trying to accomplish some goal; there is a woman who wants to join them, and one of the guys says "lol, girls can't do this". I think at this moment you can make a safe prediction about how the movie will end: the girl will accomplish the goal, and all the guys will suffer a humiliating defeat. So even if this is the next Predator movie, there is just no tension left after the first five minutes, once the words "too dangerous for a girl" have summoned plot armor for the girl.
As a contrast to this, consider the Alien movie. It happens to end in a similar way, but it wasn't auto-spoilered, because the movie was *not* about gender conflict. There was a chance, at least during the first half of the movie, that anyone could get killed. The female lead did not have plot armor. -- This is one good way to make a strong female character. The other possible way is to own it: if you call your movie "Xena: Warrior Princess", no one has a right to complain that it is about a warrior princess kicking everyone's ass and surviving unlikely odds. What you should *not* do is take the Star Wars universe and make a Xena out of it.
Okay, back to Indian (I think the best ones are often Tamil) movies. "Article 15" is a political movie that owns it. If you have read one sentence on IMDB, or if you have seen the "what the fuck is going on here?!" excerpt on YouTube, you know it is going to be a movie about how caste discrimination is bad. I liked it. It probably helps that on one hand I happen to agree with the political message, but also I do not hear it often (so I am not *tired* of hearing it yet again). Also, the movie does not have the black-and-white woke morality; people from an upper caste are also allowed to be good guys who fight against discrimination, they are not reduced to mere "allies" and told to step aside because their origin already made them too tainted to make their own decisions about right and wrong.
A great movie I would recommend is "Maanaadu" and I strongly recommend watching this movie without knowing *anything* about it. Do not even look at the IMDB page! Knowing the genre of this movie is already a kind of spoiler; it is in my opinion an even greater experience if you have no idea. One of the best movies I have ever seen. (Also, contains the Hindus and Muslims cooperating, and even explicitly comments on that fact. It happens organically; there is a good in-universe reason for that.)
END OF SPOILERS
You think the protagonist in Alien doesn't have plot armor? Of course she does. Obviously she's going to defeat the monster and survive, how else is the movie going to turn out? It doesn't matter what gender the protagonist is or how woke the movie is, they're going to defeat the monster and survive.
Look, while I agree that it is unfair to the English to cast the Raj as solely torturers and murderers, on the other hand who doesn't enjoy a good "Brits out" movie? If you've been on the other side of the Mother of Parliaments and colonialism, that is 😁
There is definitely a political angle pushing an idealised and indeed fictionalised hyper-patriotism, which is indeed a problem of its own, but uh, let me link to our own contribution to the genre (no tigers, though, alas!)
https://www.youtube.com/watch?v=O3MGrqybcYg
https://www.youtube.com/watch?v=wYtE0mpdk5M
https://www.youtube.com/watch?v=lEjEGbAFzJU
> who doesn't enjoy a good "Brits out" movie
English-speaking Canadians. Offered independence after WWII, they declined as it would be too much work.
Lagaan?
RRR is great, if that's what you're asking
Tiger attack!!! https://www.youtube.com/watch?v=DDAHHPGcLzo
The CG could be better but, wow.
I'm starting to think that political arguments really have nothing to do with values, and are entirely just arguments over facts.
I could be convinced otherwise if someone could show two schools of political thought which are identical in terms of their factual (i.e. predictive) views, and yet disagree only in terms of their normative views.
I think the opposite is closer to the truth, and these values drive what people view as "facts".
Most people are strongly opposed to the view that genetic variation explains racial behavioral differences, but this is almost never because they have looked into it and have a reasoned belief that the evidence falls in favor of this being false. Their underlying value of race equality means that they're hostile to even looking into at all and many oppose the research being allowed to exist.
Their factual understanding of the world is different to me, but this is largely a product of their values in the first place. This applies to most hot button issues.
My point, but better (although I'll abstain from the race discussion). I think Mark is correct about where we end up, but I'm not at all certain that he has cause and effect correctly ordered.
I think it's both, actually. There are some real value differences that will not go away through shared understanding (abortion for sure, illegal immigration and gun control in most cases). But you hit the nail on the head in terms of why things are getting worse. If you optimize for freedom and I optimize for justice, we can respectfully agree to disagree. If we do not have a shared reality - different facts, different predictive models, different histories - then there is no conversation to be had.
In all three of those cases I think supporters and opponents will disagree about likely consequences
To pick one, gun control opponents think that private ownership of guns reduces the chances of a government engaging in tyranny, and also reduces petty crime. Gun control supporters generally think both of these claims are overblown.
Likewise, gun control supporters think that gun control will likely reduce violence. Gun control opponents think the violence will happen regardless.
I suspect disagreements on these predictions are probably sufficient to predict someone's values here; I will be very shocked if we can find two people who read this blog who agree on the relative likelihood of the above claims and simply disagree on their relative values.
Well, now you're either muddling your point or I misunderstood what you were trying to say.
If you're saying that people with political disagreements almost always disagree about the facts and consequences of their favored positions, that is certainly true. And also trivial.
It seems like you're trying to parlay that banal observation into the assertion that there *are* no value differences, period. Which... maybe, if you extrapolate to the extremes? We certainly know that people tend to abandon any and all values if the (perceived) risk or reward is great enough. Whatever value I cherish the most would almost certainly get tossed aside if I was convinced that sticking to my guns would result in calamity.
But if we leave aside the hypotheticals, I think that insight is less helpful. In the real world of real decisions about politics, people place an (arguably irrational) emphasis on values. And I would assert that those decisions are not a complex calculus, but in fact what they appear to be on the surface. People oppose gun control because it is their constitutional/God-given right - period, end of story. If you could convince them that their position would result in armageddon, they would probably change their mind. But they aren't listening to you, or thinking in that mode. (Not scoring political points, liberal side is a mirror image.)
It would be tiresome to list examples of principled stands - you know that some people take them, even when they agree with their opponents that standing on principle will have a worse outcome overall.
You know what - I think I must be misunderstanding you, because I have no idea what you're trying to say after reading your last comment. If your point is that true principled disagreement is rare, then yes but trivial. If your point is that on the majority of policy positions, political parties disagree about the facts and consequences of said policies - then yes, but trivial. Not trying to be rude, just confused now.
> It seems like you're trying to parlay that banal observation into the assertion that there *are* no value differences, period. Which... maybe, if you extrapolate to the extremes
This is exactly where I’m going here. It sounds kind of crazy, but the more I play around with it the more it seems like it might be right.
I think what we call our values are just compressing long cause and effect sequences. The reason we get mad and walk away is from an “expected value” calculation on the likely benefits of continuing the conversation with someone whose reality model is widely divergent from our own.
Can you point to an example of someone saying, “sticking up for this principle causes nothing good but we should do it anyhow?” I think most people defending principles will say something like, “yes it has costs but it has these benefits which outweigh the costs.”
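The "expected value" framing of walking away from a conversation can be made concrete as a toy calculation. All probabilities and payoffs below are invented purely for illustration:

```python
# Toy expected-value calculation for continuing vs. ending an argument.
# Every number here is made up for illustration, not drawn from data.

def expected_value(p_persuade, gain_if_persuaded, cost_of_arguing):
    """EV of continuing: chance of persuasion times the gain, minus the cost."""
    return p_persuade * gain_if_persuaded - cost_of_arguing

# With a widely divergent reality model, the chance of persuasion is tiny,
# so the EV of continuing is negative and walking away (EV = 0) wins.
ev_divergent = expected_value(p_persuade=0.02, gain_if_persuaded=10, cost_of_arguing=1)

# With a mostly shared reality model, persuasion is plausible and the EV flips.
ev_shared = expected_value(p_persuade=0.5, gain_if_persuaded=10, cost_of_arguing=1)

print(ev_divergent < 0 < ev_shared)  # prints True
```

Nothing here depends on the specific numbers; the point is only that "getting mad and walking away" is what a negative EV looks like from the outside.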
I agree with Jason, in that I'm not certain you have cause and effect correctly ordered. But I do think that the endpoint is basically as you say.
Yeah, more thought makes this seem like all I’m doing is playing with the consequences of wishful thinking.
Hmm, this is confusing terminology:
Fact: The maternal mortality rate in territory X (% of mothers who die as a result of pregnancy) is Y.
Theory: If the area had better health care in pregnancy, this rate would be significantly reduced
Value: It's important to reduce this death rate.
I encounter politically aligned arguments at all 3 of these levels. Commonly arguments mix at least two of the levels.
Noting that if the theory is tested, it can move up to provisional fact.
But I'd argue that those "facts" are generally provisional - territory Z might have excellent health care, with the majority of *their* maternal mortality coming from some other cause.
I think it’s more that people emphasize (or in worst cases, create) different sets of facts in order to create the best argument for their values, so the sides in a value argument always look like they are arguing from different sets of facts.
The argument over minimum wage, for example, doesn’t seem to be likely to be settled by one set of facts just winning on the basis of being the truth of the matter. Your favorable facts for your perspective just prompts the other side to emphasize or create different favorable facts of their own.
If we discovered next week that fetal heartbeats didn’t start until the 25th week, the pro life movement wouldn’t give up and go home, they’d just consolidate around different facts while working to debunk the study. I’m sure there are at least come in the pro choice movement who would do the same if we discovered fetal cognition beginning in the 5th week.
I think facts (unfortunately) have a tendency to be treated more as weapons, and although at times there are fact-weapons so potent that they can settle a discussion even over value-objections (fetal cognition at the 5th week, I think, would be a nuclear weapon of a fact that might actually end the abortion debate for 99.9% of people, which is crazy to imagine an end for), I don’t think facts are the source points of the arguments themselves.
I have, at times, crafted an argument based on the facts (as best as I can compile them), and have had to change my position based on the overall body of evidence.
This should probably happen to me more often.
I agree with this framing - values shape which facts we consider relevant and how we interpret them.
What do you think of the claim that people identify with their beliefs - specifically, value beliefs - and so they don’t want to change their value beliefs because doing so feels a bit like dying? Changing your value beliefs in a significant way is effectively ending one identity. Our brains are tying to keep “us” alive and thriving by valuing some outcomes over others, and I think it’s easy for a brain to identify itself with its values, even more so than the body in which the brain resides.
I'd agree with that - it certainly feels true that admitting I got a fact wrong feels much easier than admitting I had wrong values. Facts are external to me, while values are internal/personal, and accepting that I misjudged something external to me doesn't carry nearly the emotional baggage that admitting something internal to me was "wrong" would.
Maybe this is one of the things that lets a certain kind of sophist/philosopher/nihilist (like myself) love arguing so much and change stripes so often. I just don't have this instinct at all. I regularly, over the years, have thrown away the cloak of one little cluster of values and adopted a different one if it looks a little "warmer" intellectually.
And if it is some issue I feel 60/40 about, just having the people around me strongly argue the 60 side is enough to make me want to vociferously defend the 40 side.
IDK I grew up a traditional democrat, was like a Chomsky-ite in HS, somewhere between a Chomsky-ite socialist/libertarian and communist in college, a more traditional liberal again just after college, but then rapidly drifting off into the radical centrist wilderness as I got older. Where I have all sorts of different idiosyncratic views that don't map well onto either party and sometimes are very centrist and sometimes are quite extreme (but in both directions).
And when I encounter new info that pushes me over the edge in some direction I may at times radically change my positions. The facts (as best as we can do) matter.
A lot of this resonates with me, and I felt weird that people seemed to have genuine feelings about facts.
I notice myself drifting slowly to the right, after many years in the "a pox on both houses" mindset, which itself followed wild fluctuations between far left and libertarian positions.
Thought I'd bounce off both of you:
Would you say "I don't commit to positions based on ideology, I follow the evidence" is an ascendant value for you? That might account for what you're describing and still fit within the "reassessing values is easier than reassessing facts" framework.
If so, as a thought exercise, what would it take to get you to reassess "I don't commit to positions based on ideology, I follow the evidence" (or whatever your personal equivalent) as a value and say "on reflection that value was wrong," and how hard would that be to do, personally, compared to changing position on a particular fact?
I think I recognize what you're talking about. There's a familiar scene that plays out repeatedly, all the time, in contemporary political disputes. People call each other names and impute value disagreements, when they really want the same thing.
But there's a whole universe of political questions that don't have anything to do with values or facts, which usually take the form of questions like "who should our city hire to clean its streets?"
That often leads to meta-questions like "what is the best way to allocate street-cleaning contracts?", and so political philosophies are born (or ideologies, if you prefer, not that they are identical). As meta-discussions get farther away from the concrete circumstances that inspired them, they can go in a few directions:
- A consensus develops around a broadly-satisfying solution.
- Or the problems are hard to resolve, so support coalesces around temporary solutions.
- Or no one takes the meta discussion seriously, or the discussion is merely for show, because of course the Mayor's niece gets the contract.
What's more, solutions are often unstable: even a consensus around a seemingly durable solution can weaken, as people come to realize that the principles underlying it lead to repugnant conclusions, or as it comes under a material threat when changing social factors undermine the solution or the consensus around it.
But even if we put aside zero-sum distributional questions and their differential impact on various social groups, even if we focus only on theoretical or ideological political arguments, there's yet another problem! Framing the question of political conflict in terms of factual and normative disagreements omits other important dimensions of political thought: incommensurability and prioritization.
For people that share terminal values, a good amount of political disagreement happens around situations where the two values cannot be reconciled or times when we can't advance both at the same time.
Thus, I'd agree with a much weaker version of your original assertion - something like "_many_ political arguments really have nothing to do with values, and are entirely just arguments over facts." But if you're feeling like that's the **only** kind of disagreement, I suspect that you're either 1) looking only at a particular community with a strong cultural consensus or 2) looking only at certain low-resolution types of political disputes: that is, looking at arguments carried out at an ideal or philosophical level rather than arguments about concrete, local circumstances.
I think maybe I can rephrase this as: "apparent value disagreements are naturally going to arise, causally, from underlying factual disagreements."
So until I see disagreements about some system where people agree on all the facts, it's hard for me to believe that _pure_ value disagreements are actually, really, truly a thing.
I agree with your first statement, but I am quite certain I see "pure value disagreements" all the time.
Let's just focus on prioritization for a moment: how much you value your values is itself a value.
I'm not trying to be clever! Here's a concrete example: I'm embroiled in an ongoing discussion with a NIMBY woman in my town. Recent zoning revisions enabled developers to build new construction on her street and she's pissed. She's written letters to the editor of our local paper and spends a non-negligible amount of her time fighting the new construction.
The existing houses on her street are predominantly circa-1900 single-family dwellings in various states of disrepair, and it's one of the last semi-affordable neighborhoods around here (in reality, calling her neighborhood "affordable" is a major stretch). She argues for the preservation of her neighborhood character and points out that our zoning changes were intended to promote affordable housing, but the greedy developer wants to build expensive, new market-rate construction on her street: no middle- or lower-income buyer will be able to afford it.
I want to go full Matt Yglesias on her and explain the economics of the situation. But I don't think it will do any good, because even if I stipulate that she cares about making housing in our town affordable, she wants her neighborhood to stay the same, and she cares about that more!
That is, for my money, a real value difference. (This is also without going into the "greedy developer" issue - very likely another major value difference between us.)
If this were true you'd expect updates on factual evidence to produce wide shifts in political belief. But I rarely, if ever, see this.
Why would we expect this?
My prior is that people are generally terrible at Bayesian updates to their belief system unless they are in environments where they are rewarded only for being correct about anticipating the future.
So maybe we could expect macroeconomic investors to have largely convergent beliefs about how governments work, with respect to finance?
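For what it's worth, the Bayesian update being talked about here is a one-line formula. A minimal sketch, with made-up numbers:

```python
# A toy Bayesian update: how a prior belief should shift after new evidence.
# The specific numbers are illustrative, not drawn from the discussion above.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Someone 60% confident in a claim sees evidence that is three times more
# likely if the claim is true (0.9) than if it is false (0.3).
posterior = bayes_update(prior=0.6, p_evidence_if_true=0.9, p_evidence_if_false=0.3)
print(round(posterior, 3))  # prints 0.818
```

The claim in the comment is then that most people's actual belief shifts are much smaller than this formula says they should be, unless their environment punishes them for being wrong.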
Because if disagreements were primarily about facts then changing the facts should change the disagreements. You are now asserting not that you disagree with that or my initial post but that people simply don't update their facts. But this isn't true either. People update their facts all the time. It's why new arguments and talking points arise so much.
Can you give an example, then, of a widespread disagreement about values, among people who agree on the facts?
Sure. Communist China and the US both agree that Taiwan is run by the Chinese Nationalists. The Communists hold the value that China should be united, the One China principle. The US does not hold this value. This is a disagreement between roughly 1.6 billion people and so fairly widespread.
If you need a domestic American example then Justice Scalia and Justice Sotomayor both believed that the second amendment exists. As far as we know they held no disagreements about gun crime statistics or ownership and if they had diverging opinions on the effect of policy they didn't share them. There was no disagreement about the content of the text either. But Justice Scalia held the values of a legal originalist while Sotomayor held the values of a legal realist thus leading to directly opposite opinions on gun control.
There are points where common values reduce a conflict to be about facts. But that by no means exhausts the category of disagreement.
Great! I think this example is summarizing a factual disagreement between the two of us: namely, which factual beliefs are relevant in these disagreements? This might seem like I'm dodging or 'moving the goalposts', but I think what we disagree on is whether these conflicts stem ultimately from broader factual disagreements.
For example, I believe the disagreement between the US and China over how Taiwan _ought_ to be run stems from a factual disagreement over what causes human flourishing. Both sides disagree on, say, the consequences of legally protected freedom of speech.
The communist party believes that freedom of speech will lead to cultural degradation and moral decay, and ultimately weaken a people and make them subject to the control of forces that aren't looking out for their wellbeing.
Americans (some of them) believe that freedom of speech allows for the exploration of new, better ideas, which ultimately promotes human flourishing.
I believe the disagreement over Taiwan is downstream of these higher-level factual disagreements. I think the same is true with Scalia vs. Sotomayor.
> Justice Scalia held the values of a legal originalist while Sotomayor held the values of a legal realist
Why did they hold these values, though? Clearly, they weren't born this way.
We can ask, 'what are the likely consequences of these values being held' and this is purely a factual belief where the two are likely to differ.
Scalia likely believed that, absent faithfulness to the text, the courts would lose their legitimacy: they would become politicized, turning into a kind of super-legislative body that makes rules rather than merely being tasked with interpreting them, and the American experiment would come to an end in authoritarianism. Sotomayor probably believes that absent a consideration of social interests, the courts would become widely seen as merely defending a corrupt status quo, and that if the system as a whole isn't perceived as fair, people will stop supporting it, and this would be bad. I think both Scalia and Sotomayor want courts to continue to enjoy widespread legitimacy, but they disagree on what kinds of rulings will cause the courts to maintain their legitimacy. So there's a single number here - the percentage of the population that views the courts as legitimate - and even if they _both_ want this number to be higher, I think they probably disagree over which actions will reduce the view of the Supreme Court as legitimate.
Not if political arguments are mostly expressive instead of truth-seeking. Sports arguments (which team or player is better) are also about facts, not values, but fans of different teams don't agree either.
What's your definition of facts and not values then? Because it seems to me that you are now trapped either admitting sports fans do not update their facts, such as who won the last game, or saying that such things are not facts at all.
But "who won the last game" is not an argument sports fans have, as you suggest. What _do_ sports fans argue about most often? Is it over whether playing beautifully or effectively is more important (which they sometimes argue about) or what player should have been MVP, what team was actually better despite the results of a game, etc? They share the same value, but argue over facts which are not as easily settled as "who won the last game" because that would be silly.
I don't follow your logic. Take who should be MVP. Don't they disagree on what values an MVP should have? You're right they don't disagree on (say) batting averages. But they still argue. Which seems against your point! Unless I'm misunderstanding.
But they value the same thing: having the "most valuable player", in the sense of making the most difference for a team to win games. That is a factual disagreement.
When Billy Beane came around to argue that batting average was not that important, he wasn't arguing that he didn't put a moral value on batting average; he was arguing that it did not make the team win as many games as people once thought. That is factual. And that is what fans argue about all the time.
Oh man sports fans disagree about facts constantly.
Ask a Notre Dame fan and a Miami fan whether Cleveland Gary’s knee was down before he dropped the ball at the ND goal line in 1988 and you will definitely get different facts. Heck, it’s part of the fun.
Sports has the advantage that in order for the fun to exist at all we have to both agree to live with the fact findings of some kind of arbiter, but that’s harder to translate to politics.
Yes, they do. But they also fight even when they agree on the facts!
Not with a Leviathan!
I think there are people who really want advantages for their ingroup and are neutral-to-inimical toward their outgroup.
I think if you ask these people why, what they will say is that you can’t operationalize concern for everyone.
For example, I love my kids and will ignore other kids drowning in distant metaphorical ponds (of which there are an inexhaustible quantity) in order to teach my own kids to play piano. This isn’t because I don’t want to stop other kids drowning in ponds, it’s because my ability to help my own kids is far far greater than my ability to help kids far away.
That's part of it, but there are also people who have large demographic groups they don't care about.
I've wondered whether demands that people care equally about very large groups, or possibly everybody, make them less helpful.
They might have been willing to be charitable in their city, but if they're told they have to care about everyone or it's not good enough, they say fuck it.
"They might have been willing to be charitable in their city, but if they're told they have to care about everyone or it's not good enough, they say fuck it."
Yup. I have a handful of people that I care about, and that's the end of it. I'm academically interested in the rest of humanity, but only academically.
While I think you are right this is mostly what is going on, there are the occasional people who really are like "fuck em" to most of humanity.
I agree with you to an extent, that far more of our disagreements involve factual discussions about best outcomes. Pretty much everyone wants to feed the poor and have a healthy economy. We certainly disagree about the best ways to reach those outcomes. If you push that hard enough, you can squint and say that everyone wants what is "good" and not what is "bad."
I think that's taking it too far. Communists really are collectivist in mindset, and really do value things contradictory to Libertarians, who instead value the individual over the collective. Pushing that hard enough to say that they are only differences in facts (presumably about how to make the best society) elides more than it illuminates.
If you ask communists or libertarians why these things are important, they will often argue that in their absence, really bad things will happen.
Communism seems to say that money and ownership are totally unnecessary for human prosperity. Libertarians clearly disagree. This is a causal belief, not a value one.
Your model explains everything and therefore nothing. You have abstracted too far and lost lots of valuable information.
In our libertarian / communist example, the libertarian will say "bad things will happen" and the communist says "no they won't" and you are calling that a factual disagreement but it isn't.
What they are actually saying is "anti-liberty things will happen and these are more valuable than the equality things" and the communist says "you're wrong, the equality things are more valuable" and this cannot be settled by fact.
The libertarians predict that "attempting to force equality will lead to the destruction of all wealth, as productive people leave the country and the incentive to invest or save disappears. It will lead to hyperinflation, as money printing will be inevitably used. The authorities will control all speech and opposition, jailing and torturing dissidents." The communists say, no, those things won't happen. And if they do, well, then it wasn't communism.
The communists predict that "absence of any centralized control will lead everyone to become slaves of the few people with money, who will control everything, with everyone else begging for scraps. They will have 'freedom' in name only, but without money, will be slaves in all but name" The libertarians say, no, those things won't happen. And if they do, well, it isn't libertarianism.
The model does not explain everything. Specifically, it cannot explain "a group of people who agree on the likely distribution of outcomes resulting from some policies, and simply disagree on the desirability of those outcomes."
If you can show that libertarians and communists agree on the _consequences_ of widespread liberty or widespread equality, then i'll agree that i'm wrong.
How do you view classical liberals and communists having different ideas about the desirability of the status quo? Is that also a factual disagreement?
I think in both cases, it is an unfounded theory based on values. Both sides essentially strawman their opponents in their minds and then declare, as certainty, the disastrous results of the other side succeeding.
I don't think I can provide the evidence you ask in the second paragraph but I mostly agree with you. Mainstream / top level / social media arguments would seem to be entirely arguments over facts (or, worse, virtue signaling, dunking etc). But I don't think it's entirely that. Underneath I do think camps actually want to optimize for different things (individual liberty vs collective good, etc). So my personal approach to parsing this mess is to ignore the vast majority of it and cut straight to the point of what is being optimized for. Most political ads, slogans etc are illegible as they relate to the underlying value and its distinction from other options. They are noise. This is probably the main reason I am frustrated by politics generally, even if leadership, governance, economics are interesting topics when discussed like adults.
I think it is "what we want to do" first, and the model of the world is created afterwards as a justification for the proposed actions.
Different goals will automatically lead to different models (because different actions need to be justified), but that doesn't mean that the difference in the models was there first. Models can even be updated, if necessary, in a way that "coincidentally" still justifies the original goals.
For example, in the current war in Ukraine, the Russian side has already presented several different models of the situation -- maybe Ukrainians (and Belarusians) are just confused Russians, who need to be reminded of their former glory... or maybe we need to protect the Ukrainian population against an imminent Nazi threat... or maybe NATO is trying to use Ukraine as a base for their nuclear strike on Moscow, and Russia has to defend itself. Well, maybe this, maybe that, who cares anymore... but the conclusion is always that Russia needs to get Ukrainian territory under its military control.
I think the Nazi thing was about protecting Russian-speaking Ukrainians from Ukrainian-speaking Ukrainians.
To be fair, there were/are some literal Nazis in Ukraine, and Russian-leaning Ukrainians (not necessarily _speaking_, those are not the same thing, especially after the war started) were sometimes in danger.
Which shows that the best lies are lies of omission, and blowing true things out of proportion.
Freddie had a weird post on the usage of "literally" the other day. The question is can you use "literally" like this
I literally walked a million miles yesterday.
I say you can.
The debate centred on whether *literally* could be used to mean figuratively, which is what some dictionary suggested. I disagree, you wouldn’t use figuratively there. The sentence is itself figurative. It’s hyperbole. And with hyperbole the entire sentence is read as not literal.
(Using literally as an intensifier doesn’t change that.)
I think there's a bit of is/ought or motte-and-bailey going on here.
Freddie: "Literally" ought not be used as a generic intensifier.
Merriam-Webster: "Literally" is, in fact, used as a generic intensifier, and has been for over two centuries.
F: Yes, but it shouldn't be, and you're trying to sneak in an "and it should be" instead.
Generic Descriptivist Narrator: Thus we see that the term "literally" *can* be used as a generic intensifier, but there is social pushback to this use: the listener reacts with outrage rather than incomprehension. Given that "truly", "actually", and "for real" appear to have undergone similar transitions, it is likely "literally" will also end up losing its "corresponding to objective reality" meaning despite this pushback. Whether or not this is a desirable outcome is beyond the scope of this observation.
Perhaps we can just use "literally literally" to indicate that we do not mean literally figuratively. Then, if that becomes used figuratively, we just add a "literally" on front and so on.
Or we could go all Olaf Quimby II and just prohibit figurative use of language.
I'm in favor of this.
Alternatively, I occasionally literally say "figuratively literally" when using "literally" figuratively, eg. "Dude, I hated the proto-hobbits on Rings of Power so much I figuratively literally rage-floated outside of my body."
That was...awesome. Still second place to Morris Bishop's doggerel on prepositions, but only just.
Ha, thanks!
Do you really, Christina?
I really do. Although usually via text rather than verbally, now that I think about it. I think it's funny.
Sure you can. Language is what people say, and plenty of people use "literally" as a mere intensifier, which is what it's doing here. The sentence makes perfect sense, provided you understand this particular colloquial meaning of "literally," and I would say it's rare for people *not* to know this meaning exists.
On the other hand, does it make you sound nekulturny? Also yes. It's the modern equivalent of talking like a Valley Girl ("Like, totally!").
There's nothing colloquial here.
Literally, when used as an intensifier in hyperbole, isn't to be taken literally, just as any other intensifier in hyperbole isn't to be taken literally. The only confusion, perhaps, is that the word "literal" is being used.
"Colloquial" refers to the acquisition by "literally" of the meaning "really, a lot, surprisingly much of [whatever word follows]," because this meaning was acquired through quotidian usage and conversation, and has (so far as I know) no compelling linguistic or etymological roots.
So what do you think about "this is literally genocide"? Do you consider "okay, that is plainly hyperbole and rhetoric, it's not meant to be taken as fact" while the person using it does so consider it to be real and factual?
It's a statement I see used by some people in the culture wars, and if you don't agree that X, Y or Z is literally genocide, you are ostracised as a X-phobe who hates those people and wants them to die.
Well I would hope it is hyperbole, and would treat it as such.
My take is that prescriptive linguistics is mostly just descriptive linguistics of the educated middle class. If I tell you that using language in a certain way is incorrect, then it carries an unsaid "... if you want to sound like an educated middle class person".
Quite why I, as an educated middle class person, am so keen on making sure that everyone else talks like an educated middle class person, I'm not sure. Certainly I'm keen on making sure my kids sound like educated middle class people for a good reason. Why it should bug me when a random stranger misuses language though? The generous-to-myself interpretation is that I think classism is a big problem in society and I don't want to see people limit their own opportunities by failing simple shibboleths. The less generous-to-myself interpretation is that I feel like people who don't use language "correctly" are failing to show sufficient deference to my educated-middle-class-ness and that this annoys me.
Nonetheless, language exists in a constant state of flux, with linguistic "innovations" constantly bubbling up from the lower classes and the upper-middle classes trying to bat them back down. Sometimes we are successful, sometimes we give up. (There's also a whole class of linguistic innovations which are pushed down on us from above for political reasons.) I'm an observer of the struggle but I'm also a proud participant, happily fulfilling my role as an upper-middle-class grammar Nazi batting down lower-middle-class linguistic idiosyncrasies.
"My take is that prescriptive linguistics is mostly just descriptive linguistics of the educated middle class."
It's kind of circular, but I endorse this.
But putting aside wanting or trying to sound like educated middle-class for benefit, there is a meaning to words.
And if "literally" is reduced to just a filler word, an intensifier like "very" or "greatly", or just becomes a word to be stuck in like "um" and "ah", then we have lost a tool of language. We have lost a way to convey meaning. "Literally walked a million miles" and "he literally tried to strangle me" become the same thing, which is meaningless. Which is a poor look-out, because if Jane is trying to tell you that John really did try to strangle her and you take it as "oh, all she means is that they had an argument", then we've lost a way to communicate reality.
And it's just as bad if we go the other extreme, where wanting to use terms like "this is literal genocide" are to be taken as factual, not rhetorical, communication. In that sense, if Jane and John only had an argument and she tells you "He literally tried to kill me", now you are obligated to react with appropriate shock and horror as if John really had tried to kill her, not just have a loud disagreement. Otherwise you are A Bad Person.
I want words to retain as much of their meaning as possible, and I don't want to give in easily on slippage, because until there is agreement one way or the other on what "is" means, then we can't even talk to one another, something that leaves us all worse off.
That post was terrible. It was either a ridiculously bad and nearly definitionally wrong take or else it was a terrible effort to communicate some idea that might be correct but I can't tell because I didn't understand it.
Yeah, I really wish FdB had given an example of exactly who he's arguing against so we can see if they a) exist and b) mean what he says they mean.
Anyway, I also want to add that I have a (reprint of) Fowler's English Usage, 1926 edition, and the entry for "literally" is along the lines of "this battle is already lost, and the fools who use the word as an intensifier have won". So if anyone wants to argue that this will damage the language, you either have to accept that the language has already been damaged, and here we are, or explain why after 100 years now the shit is REALLY going to hit the fan.
>Yeah, I really wish FdB had given an example of exactly who he's arguing against
Either this is sarcasm by means of irony, or both of you are reading his newsletter with images turned off.
I saw that photo. I think the newest comment in this thread sums up perfectly what is going on with the is/ought motte/bailey. That picture is describing what _is_ happening in language. FdB is assuming that this "is" is actually an "ought" like the prescriptivists are saying. Someone describing what _is_ happening is a fundamentally different thing than someone describing what _ought_ to happen. FdB argues they are the same. This is patently untrue. If the point he was _trying_ to make is that some descriptivists are actually _prescriptivists_ who are saying that, in addition to describing how things are used, they are also claiming that this is the "correct" way, then he did a terrible job of conveying that, and if he had conveyed it more clearly, I would A) disagree that most descriptivists are secretly doing this and B) think that this is a totally banal and uninteresting point about the few who are, even if he could have found some non-trivial group of people doing it. I am as uninterested in a permissive prescriptivist as I am in a restrictive prescriptivist.
So he is either making a boring point that some people think language should be used one way and other people think it should be used another way, or he is making a wrong point that people describing how language is used are the same as people claiming that there _is_ a right way.
The comment you invoked (which, agreed, adequately sums up the argument) does not say what you think it says. In particular, the one it accuses of motte-and-baileying is Merriam-Webster. Obviously, you don't respond to a debate about "ought" with a statement on "is" unless you think it's relevant, especially not in the "everyone that disagrees is angry and silly" tone. But apparently it gives you plausible deniability of "just describing stuff", making it the motte to the "this settles the debate" bailey.
Freddie, for his part, correctly calls this bullshit out. Language is not some independent process; it's something all of us users actively participate in and shape. More importantly, it's something we're using for a particular reason - communication - and this requires us to establish a shared understanding, which in turn requires us to continuously resolve differences and ambiguities. And while most ambiguities are benign and can be easily resolved from context, not all are. If the case of "literally" is too marginal for you to care about, imagine a world where "no", in addition to the current meaning of "negative", has an additional attested meaning of "affirmative". (This is, hilariously, not a made-up example: "no" means "yeah" in Polish - different pronunciation, same spelling. My peer group is largely bilingual and prone to code switching, and I've been asked several times whether my text message used "Polish no or English no?") These kinds of contradictory definitions literally cannot coexist; the conflict renders the word unusable. It invariably must resolve in one of three ways - one of the two uses prevails over the other, or some different set of words takes over to convey the same semantics.
It's natural to bring attention to and try to resolve those kinds of semantic conflict, because we literally wouldn't be able to communicate otherwise. It's natural to insist on the semantics you're accustomed to, especially when the other side has many more options to switch to. If that's prescriptivism, then everyone necessarily engages in it all the time (just observe how many internet discussions turn into semantic squabbles; including this one, right now) and the term is meaningless (at least as a description of a distinct intellectual position).
But I think it's not. I think the word carries at least two additional assumptions, that the prescriptions are arbitrary, and that they're made from a position of authority. And there's only one linguistic authority (Merriam-Webster) with an arbitrary (based on the term being attested and included in the dictionary, thus completely ignoring the actual unresolved problem of mutual comprehension) prescription here.
Finally, I think your problem is believing the conflict to be between descriptivism and prescriptivism. But nobody frames it in those terms, not Freddie, not the post you cite approvingly, not even Merriam-Webster. You seem to be an outspoken descriptivist, and reflexively chose what you (mistakenly) see as the descriptivist side to support. And I get the sentiment, I really do, trust me, I ain't no prescriptivist either. But here's the thing, I don't think anyone would admit to being one at this point, it's nearly universally understood as a bad thing, for what I believe are very good reasons. (The "descriptive linguistics of the educated middle class" take really nails one of them.) To engage in a bit of self-awareness, the only point of invoking it in discussions like this is as an accusation to tar your opponent with. This can only be a viable tactic under an assumption of a shared understanding that prescriptivism is, in fact, bad, otherwise the other side could simply [serious bearded man face: "Yes"] out of it. The war you're trying to fight has long been won, and fixating on it prevents you from noticing all the complexity that exists outside of it.
I did miss that the girl-yelling image was a tweet by a dictionary.
I think this depends on the usage? But I don't think that was Freddie's point (at least not in its edited form). As I read it, the point was that the descriptivist position is:
1) Literally can mean either literally (it actually happened exactly as described) or as an intensifier.
While the prescriptivist position is
2) Literally should only mean literally (it actually happened exactly as described)
And his point was that those two positions are in fact in conflict and to shrug and say 'words can mean whatever their users understand them to mean' is to adopt a descriptivist position, not to be neutral.
I would. I literally would (and sometimes do). Use "figuratively", that is. I'm one of those evil traditionalists who believe that language is a tool of communication and we need (maximally) well-defined terms and concepts to maximize mutual understanding. And if you're not allowing me to clearly distinguish cases where I'm not using hyperbole from those where I am by marking the former with a single unambiguous word, then I'm going to take the next best option and clearly mark the cases where I am, in fact, using hyperbole, to establish that whenever I don't, I should be interpreted as being literal.
Using figuratively here is bad English. I wouldn’t say it’s grammatically incorrect, but it is tin eared.
Hyperbole doesn’t need to be signalled by using the word figurative, no more than any other use of figurative language. Wordsworth didn’t have to say that he was figuratively wandering lonely as a cloud which in any case wouldn’t have parsed so well.
To make your sentence clearly hyperbole, you exaggerate it. That’s all humans need.
This is the moment to point out that Freddie's entire point was not to discuss the object-level usage of "literally", but to observe that what the people on your side of the argument are doing is pure, figuratively unbridled prescriptivism. Unlike most, you seem to be explicit about this, and what's your beef with him, I don't know at this point.
As for me, I can only restate my argument that, yes, hyperbole doesn't need to be signaled, and normally isn't. It's the lack of hyperbole that needs to be signaled instead, in cases where what would normally be interpreted as exaggeration is actually an accurate description of reality. If only English had a word to convey that...
I'm not the person you replied to, but I still fail to understand how merely _describing_ that some people use language in a certain way, without trying to dictate whether it is or is not correct, can be "prescriptivist". I am not telling anyone how they _must_ use language, merely documenting how many people _do_ use language. You are free to decide that some of those people are doing it "wrong" (I personally am very curious what authority you could appeal to to make such a decision, but I don't actually care that much), but the fact that you think they are wrong does not mean they aren't doing it.
To make a very extreme analogy: One person says murder is always wrong and no one should ever murder. Another person documents that some people do, in fact, murder. These two people _are not doing the same thing_. The person describing the reality of the world is not simply using a different set of ethics/morals but is instead engaged in a totally separate endeavor, from which we can not, in any way, deduce their stance on murder.
As far as I could tell, Freddie was trying to say that the person telling you how many murders occurred is merely subscribing to a different ethical framework than the person saying murder is bad, but that fundamentally they were engaged in the same kind of activity. This is ludicrous. If that is not what he was trying to do, then he utterly failed to communicate whatever idea it was that he was trying to get across. Which is ironic given that one of the most common arguments for prescriptivism (including seen in this comment thread) is that it increases mutual understanding.
"in cases where what would normally be interpreted as exaggeration is actually an accurate description of reality. If only English had a word to convey that..."
Hyperbole that isn't exaggerated enough isn't hyperbole. Rather than say:
"I figuratively ran 20 miles yesterday"
to indicate that you actually ran less than 20, exaggerate more. Say you literally ran a billion trillion miles.
I am not prescriptive about this: you can grammatically say "figuratively", but in general it would be badly worded English, though not incorrect.
Wordsworth also did not say he literally wandered lonely as a cloud. "No, dude, like, totally nebulous. Literally like a cloud in my lonesomeness. Absolutely, yah".
if he did it would have been clear that he wasn't being literal.
Well but what if he said he "figuratively walked ten miles yesterday". That might be necessary because someone might walk ten miles.
I am in the camp that we sort of have to be descriptivists, but it would be BETTER to be prescriptivists (unfortunately that is a losing battle because linguistic bad actors are always moving things around for their individual benefit to everyone's cost).
<Well but what if he said he "figuratively walked ten miles yesterday". That might be necessary because someone might walk ten miles.>
With hyperbole you have to exaggerate, so that the thing you are claiming is clearly impossible. You can't eat a horse.
"we sort of have to be descriptivists, but it would be BETTER to be prescriptivists"
I am all for fighting new forms of language, so that what survives is better, but this ironic use of "literally" in hyperbole is centuries old. And it isn't just the word "literally": no intensifier in hyperbole is taken literally.
People absolutely use hyperbole with phrases that are not clearly impossible. All the time?
Ok then, if not impossible, then extremely exaggerated. And you know I don't really want to work on the grey areas here, where hyperbole isn't understood by the listener as hyperbole because the speaker didn't exaggerate well enough. That's on them. The solution is not to use the word "figuratively" but to get better at hyperbole.
The outcome you described seems pretty positive not negative. Having four different past tenses of "run" creates real costs on society, for I would argue fairly nebulous benefits.
I agree that four past tenses of "run" have fairly nebulous benefits. There are a lot of other cases where near-synonyms are more useful:
e.g. perfume, aroma, smell, odor, stench
Personally, I lean toward the prescriptivist camp in preferring that meanings not blur too much. If "literally" gets used too heavily as an intensifier, it will become very awkward to explain that one is describing some event literally. (Hmm... What do courts do when a witness uses "literally" as an intensifier in sworn testimony?)
I don't think there's a question about whether you "can." Anyone *can,* and there's no way to stop it. There's just opinions about how to use the word.
At this point, "literally" often functions as an intensifier meaning "I'm using the word 'literally' figuratively to indicate how intense X felt."
It's a bit of a joke, that's all. Might as well enjoy it, as you can't stop it.
But it’s not used to mean "figuratively". That’s my point. The entire sentence is figurative, i.e. hyperbole.
Somebody will tot up all the steps they took throughout their entire life, work out it came to a million miles, and triumphantly announce "I literally walked a million miles".
And then we'll see what is or is not "the entire sentence is hyperbole" 😁
come on Deiseach. The definition of hyperbole is literally that it is clearly exaggerated. Merriam-Webster says "extravagant exaggeration".
I am not making that up. If there is a hint of truth in the hyperbole, it fails.
You: I literally walked 20 miles yesterday.
Me: ok, that's longer than normal.
You: no, you clown. I am exaggerating. I only walked 5 miles. It's hyperbole.
Me: oh, ok. I'll talk to someone else now.
compared to:
You: I literally walked a million miles yesterday.
Me: Really, how many?
You: 5 miles according to my phone.
100%
The "literally" has *not* shifted to mean "figuratively". There is a linguistic shift happening, but it's intensification, not a change to the meaning of a word.
It's easier to see (especially in slang) with a few other examples:
"A sick worldview" / i.e. "an unhealthy worldview" (negative intensifier)
-->
"A sick kickflip" / i.e. "a great kickflip" (positive intensifier, doesn't mean "healthy")
"ridiculously presented" / i.e. "joke-worthy presentation" (negative intensifier)
-->
"ridiculously fun" / i.e. "very fun" (positive intensifier, doesn't mean "seriously")
Edit: formatting, but still looks horrible. Substack should really support lists, or bold, or anything, really.
Ha, what a fun example, thanks!
Thinking about this a bit more, I realize this happens with *so many* words *all the time*. People never seem to get hung up on it... except in the case of "literally". I suppose it's a combination of 1) its use in a purely denotative form still being common and 2) the denotation itself being so clearly and immediately broken when used as an intensifier.
I don't understand how money works.
I'm not talking about the colored paper in my pocket, this is easy enough. I'm talking about the myriad forms of even-more-imaginary forms of paper and computer memory, that people somehow agreed has some value and acted in accordance.
Post any kind of material (books, games, essays, articles, videos, podcasts, people, social media threads, etc...) that you think can help me understand.
What are "Derivatives" or "Futures"? Is the first one related to its math analogue, or the second one related to its programming analogue? Why is Fractional Reserve Banking not a scam? And if it's a scam, why do the people who understand it not start a revolution? Why is the stock exchange useful?
As a concrete test case, I want to read a non-dumbed-down (~2nd or ~3rd year university-level) account of the 2008 financial crisis and understand what it's saying and why it's true. There is nothing special about the 2008 crisis for me; it's just a famously complex and intricate financial phenomenon that provides a good test flight of my understanding of money. You can post any other financial crisis or phenomenon that you think will help me better understand how finance works or test my understanding.
I have a background in CS, I love historical explanations that trace how a complex thing started one small piece at a time. I love multi-viewpoint explanations and I feel I'm being lied to or sold something when I detect an ideological bias in the educational material. I don't have a full time devotion to this task, and I'm not interested in making money using this understanding.
> Why is Fractional Reserve Banking not a scam ? and if it's a scam, why do the people who understand it not start a revolution ?
Because the system works (until it doesn't), and people who understand it either quietly reduce their dependence on it or abuse the cheap leverage opportunities.
In addition to the already mentioned Matt Levine, I strongly recommend Patrick McKenzie's substack Bits About Money. Actually, I think Patrick is even better than Matt in this case. Matt talks about the world of investment and finance while Patrick talks about *money itself*.
Here's a particularly relevant one about what it means to be money and how money is actually created: https://bam.kalzumeus.com/archive/stablecoin-mechanisms-and-use-cases/
One answer to "Why is Fractional Reserve Banking not a scam?" is that it's no secret. If you don't want to risk it, don't put your money in a bank.
A better answer, in my opinion, is that a scam is something you're hoodwinked into, but fractional reserve banking is simply mandatory. Not putting money in the bank, that is, holding paper money or coins, does not prevent losing value due to failures (or "features") of the banking system. Not holding money at all would, but that's possible only for a small number of protected autochthonous people; the rest of us have to pay taxes. Naturally they have to be paid in government sanctioned money.
It's not. A lot of people who use check cashing places don't have bank accounts. For people who live paycheck to paycheck, using a bank can often lead to the ambush overdraft fee, whereas with the check cashing places your costs are up front and clear, so they can end up being a better option.
This is an old piece, but good. https://www.newyorker.com/business/currency/the-high-cost-for-the-poor-of-using-a-bank
The most intuitive argument for fractional banking is found in the bank-run scene in "It's A Wonderful Life:"
https://youtu.be/OTJCI1FNBfA?t=198
great speech, not true anymore. Money is created by bank loans, not lent out of bank accounts.
Less intuitive but more musical is "Fidelity Fiduciary Bank" from Mary Poppins.
Would it not generally be a better world if all economic theories and propositions based upon them were required by law to be presented in the form of musical dance numbers? I think so.
Did you see The Big Short?
I think you can treat it more as a documentary than you might want to at first glance. It doesn't provide technical insight into the 2008 financial crisis, but it does document the emotional arc of the story.
To try to answer your question: The economy is the collection of decisions we're making as a society. While a technical understanding of money could help you understand some of those decisions, it can't provide the full story. You're asking a valid question, but I don't think even a great answer would be satisfying for you.
Also, what did you mean by computer memory? I didn't understand that bit.
>Also, what did you mean by computer memory
I meant that the vast majority of today's money exists merely as entries in computer memory and has no other existence. I don't mean this to imply "it's not real" - to be clear: *it is not real*, but not because it consists of entries in computer memory; its paper equivalent is equally not real.
I do appreciate the fact that money and monetary technology and practices can't be understood without understanding the economy in/around which they appeared. After all, money is just an accounting trick that **supposedly** represents "Value", an abstract proxy for human satisfaction, exactly the thing which Economics and the related social sciences study. Money is to Value like Books are to Thoughts.
But I believe there is a huge part of Economics that is not the study of money, and I want to minimize knowing about that part (Nothing against it, I just have finite time and motivation and intelligence). I want to know about Economics only the things that will allow me to understand Money, or why Modern Money is mostly fraudulent fiction, like I suspect it of being. I believe it's a reasonable and attainable goal to try to understand finance and money without, as much as possible, getting drawn into the tarpit of Economics.
To answer the specific question about derivatives - no, it is not related to the calculus concept of a first derivative. The etymology of "derivative" in this context is literally a contract that "derives" its value from some other thing. So if I promise to sell you an apple for $1.00 next week, that is a future. If I promise to pay you the grocery store price of an apple next week in exchange for a dollar today, that would be a derivative. In this case, our contract derives its value from the price of apples without us ever needing to exchange an apple between us.
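The apple example above can be sketched as payoff functions. This is a toy illustration of the comment's own made-up contracts, not real market mechanics; the names and prices are assumptions:

```python
# Payoffs of the two apple contracts described above, as functions of
# next week's apple price. Toy numbers, for illustration only.

def future_payoff(spot_next_week, agreed_price=1.00):
    """Buyer of the future: obligated to buy at the agreed price, so
    the gain is however much the market price exceeds it (or the loss
    if it falls short)."""
    return spot_next_week - agreed_price

def derivative_payoff(spot_next_week, premium=1.00):
    """Holder paid $1 today and receives the grocery-store price of an
    apple next week; no apple ever changes hands."""
    return spot_next_week - premium

for price in (0.50, 1.00, 1.50):
    print(price, future_payoff(price), derivative_payoff(price))
```

The two payoffs coincide here, which is the point: the cash-settled contract "derives" its value from the price of apples without anyone delivering an apple.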
Once upon a time, banknotes were not money. You went to the bank and cashed the note, and they paid you out in coin.
And then I have no idea what happened. I gave up trying to understand economics, because for every thing that happened, you could find an economist saying it was bad and the wrong thing. The economy is growing? Bad. The economy is shrinking? Bad. There's a recession? Bad (well, I think we all agree on that one). There's a boom? Bad. Full employment? Bad. High unemployment? Bad.
Jeff Bezos is worth however many billions *until* he tries to cash those out, in which case he will crash the value of his stock and be worth peanuts. It's only a fortune as long as you treat it as imaginary.
It's fairy gold, that looks like gold when you get it into your hands, but turns to withered leaves in the morning.
I remember deciding one day to improve my knowledge of economics, and went book hunting. The first one I found looked right up my alley, and so I bought it and read it. It was _Basic Economics_, by Thomas Sowell.
The funny thing about it is that I'd recommend it highly, but not for what you seem to want. It doesn't explain economics as if to a programmer; it's a polemic. It's just that it's a very well-written polemic that you could learn some things from. So, there's one.
+1 for Matt Levine, although if you're trying to wrap your mind around money, his columns quickly become advanced. If we viewed the topic of money as a layer cake where the top layer is basic grade school stuff and the horizontal dimension is the various ways money is used and managed, Levine's columns come off as a brief stop at the top layer, quickly shooting down into some random part many layers deeper. If you want to fully grasp the top 2-3 layers, you'll need to read a lot of his columns. On the upside, I find him great for looking at some structure invented to use money to do something worthwhile, thinking "something about that structure doesn't quite add up", and Matt will come along and say, "yep, you got it, that's what's fishy about it".
For general econ, I like Don Boudreaux's advice: avoid Modern Monetary Theory (MMT), or macroeconomics at first. Go straight to microecon, and stay there longer than you might think you ought to. MMT will likely fill your head with stuff you'd just have to unlearn later. Microeconomics also goes by the name Price Theory; I recommend David Friedman's textbook on it. (Plus, he might even answer questions about it here if they're good questions.) Search for it; you can find a free copy online.
For spot definitions of terms, I like Investopedia. Well written, direct.
If you want details of the 2008 crisis, honestly, watch the movie _The Big Short_. It has surprising amounts of detail, and it's *funny*. Or, get the book it's based on.
Reading Noah Smith's substack recently I had an insight: nobody, including the experts, fundamentally understands how this stuff works.
Economists shouldn't pretend to be physicists, they should pretend to be monks. Instead of pretending that they're explaining economics, they should instead lead us in meditating on the mysteries of economics. The question of "how does money work, really?" isn't just something that the economic profession are hiding behind abstruse explanations that you haven't been able to penetrate yet, it's a fundamental and ineffable mystery of the universe; we can feel it a bit more intuitively through meditation but we will never fully understand it.
I thought as much while reading through "Money, The True Story Of A Made Up Thing". Whenever the book described how "experts" were trying to get out of a crisis like the 1930s or 2008 by flailing hard and changing lots of things, I got the impression of somebody messing with something they are fundamentally unequipped to reason about.
Like, imagine you're trying to solve a recursive system of equations, say it's the linear system {x+y+z = 1, x-y-z = 2, x+2y+4z = 3}, but you don't know this is called "A Linear System Of Equations In 3 Unknowns" and there are entire books on how to solve it. You only have 3 dials representing x, y and z, and you're just fiddling with them, trying to get them into a configuration that respects all 3 equations. Solving the linear system must appear awfully hard to you, every single change you make appears to affect the entire system, and there is no clear way of knowing which dial to change or by how much. A trivial problem becomes impossible when you try to solve it without tools optimized for its understanding and solution. A good tool of thought, Computer Scientist Alan Kay is fond of saying, is worth 80 IQ points.
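For what it's worth, the toy system in that comment really is solvable mechanically once you know the tool exists. A minimal sketch in Python, using Gaussian elimination on the comment's own numbers (the function name `solve3` is just illustrative):

```python
# The commenter's toy system {x+y+z=1, x-y-z=2, x+2y+4z=3}, solved with
# the "right tool" (Gaussian elimination) instead of fiddling with dials.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with
    partial pivoting. A is a list of rows; b is the right-hand side."""
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(n):
        # pick the row with the largest pivot to avoid dividing by ~0
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # back-substitution, bottom row up
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[1, 1, 1], [1, -1, -1], [1, 2, 4]]
b = [1, 2, 3]
print(solve3(A, b))  # → [1.5, -1.75, 1.25]
```

Twenty-odd lines once the method is known, which is the "80 IQ points" point: the hard part was never the dials.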
The economy in general, and the financial sub-system that torments me in particular, appears to be a huge recursive web of cause and effect that interact and evolve in hideously complicated patterns. Every explanation seems to be a Just-So story, a way of fiddling with the dials without reasoning about the underlying rules governing them. It's extraordinarily dumb and unjust how we are forced to trade our labor for a fiction that we can't even hold in our head satisfactorily.
Like many things in modern civilization, I honestly find modern money disgusting. The notion of "This is a thing that you can't possibly hope to understand let alone control, and yet *it* can control and ruin you in ways you can't even name" breaks me, it's like a secular Problem Of Evil. Lacking any realistic means of abolishing this criminal way of life, I find myself studying it in disgusted fascination, like how one might study a plague-inducing virus.
TL;DR of my other post:
"Money is the thing we use to exchange real items, and it is valuable because we think it is valuable."
This is a true statement that (nearly?) fully explains money, but understanding it is hard and requires a lot of thought.
(Easy to explain, hard to understand, but it's not an ineffable mystery)
There exist real things, like apples and houses, that require other real things (time, human labor, energy, other real objects) to make. People desire having real things, for personal reasons.
Since many real things take skill to make, and can be made in batches, you get a society with more real things per person if people specialize in making one specific real thing.
However, once you are specializing in making one class of real thing, you need a way to acquire all the other real things that you want for personal reasons.
It is slow and time consuming to trade your real things for someone else's real things.
It is faster to be able to exchange "tokens of real things", since then you don't have to carry the real things around.
It is even faster if the token of real things is used as a universal token for all real things.
However, since these tokens are not themselves real things, the people who control the creation of new tokens have immense power, since they can exchange new tokens for real things.
The best tokens to hold, when you aren't holding real things, are tokens made by token-makers that are unlikely to make many new tokens. Historically, that was using tokens made of real metals that were hard to mine. In the present day, that is using tokens made by stable governments who appear stable.
Also in the present day, there is an emerging token that is made using a consensus algorithm on a lot of computers. Since this consensus algorithm has been running for over a decade without many problems, and the social consensus appears resistant to new-token-making outside the current rules, it is becoming more valuable, especially as the stable governments making the most valuable tokens become less stable and more inclined to make more tokens.
bitcoin has collapsed recently.
The minimum price of bitcoin in 2021 & 2022 is higher than the maximum price of bitcoin before November 2020.
A store of value shouldn't be volatile in either direction.
Correct. Bitcoin's volatility reflects fundamental uncertainty about its future value, which means that it is not acting as a useful present store of value (except for nations with >70% annual inflation, like Argentina).
As Bitcoin has gotten more valuable, its up and downswings (as a % of market cap) have gotten smaller. In the world where it becomes solidified as digital gold, I'd expect that trend to continue until it was about as volatile as gold (if it ever reached gold's ~$8 trillion market cap).
I don't know what you mean-- it looks like it's been puttering around 20K for months.
It’s not a start-to-end treatment but if you read Money Stuff by Matt Levine you’ll get first-principle explanations of lots of financial instruments, and at least gradually get there. Plus he’s an excellent writer and manages to make the subject matter interesting and amusing, which I find novel.
The basic way of thinking about derivatives and futures (and options) is that they are bets about some proposition that have varying risk and payout structures. Derivatives are contracts that pay out based on the performance of some other asset class, but allow you to structure the payout and risk profile differently.
Typically these instruments are either standardized and sold by any market maker, such as options (with standard-ish approaches to avoid losing money if the market maker loses the bet), or specific synthetic products that, say, Goldman will come up with and market. You can also make custom bets if you have enough money.
The reason you want all of these complex bets is that in general you want to be able to bet on any trend where you think you have information advantage (say, I did a study and think there will be growth in the real estate industry in city A), to build a balanced portfolio that spreads risk across many categories and therefore reduces total risk, or, to hedge against a specific risk your company/entity faces (say, I grow corn and want to hedge against corn prices going way down next fall; so I can sell futures for some of my product to reduce the variance in my returns.)
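The corn-farmer hedge above can be sketched numerically. The bushel count, prices, and 50% hedge ratio below are made-up numbers for illustration:

```python
# Toy sketch of hedging with futures: pre-selling part of the harvest
# at a fixed price narrows the spread of possible total revenue.

def revenue(spot_price, bushels=1000, hedged_fraction=0.5, futures_price=5.0):
    """Revenue when `hedged_fraction` of the crop was pre-sold at
    `futures_price` and the rest is sold at the fall spot price."""
    hedged = hedged_fraction * bushels * futures_price
    unhedged = (1 - hedged_fraction) * bushels * spot_price
    return hedged + unhedged

for spot in (3.0, 5.0, 7.0):
    print(spot, revenue(spot, hedged_fraction=0.0), revenue(spot))
```

Unhedged, revenue swings from 3000 to 7000 across these spot prices; half-hedged, only from 4000 to 6000. The farmer gives up some upside to cut the variance, which is exactly the "reduce the variance in my returns" motive in the comment.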
Here's a video that does a good job of taking you from "I'm trading my apples for your pears" to "fractional reserve banking creates money":
https://www.youtube.com/watch?v=8xzINLykprA
Notably, fractional reserve banking cannot create new physical-dollar-tokens (one very narrow definition of money), but it can increase the number of people who are owned dollar-tokens, therefore increasing a different definition of money.
Did you mean owed dollar tokens?
I think people overcomplicate this a lot, often by not even using common terms, which are readily available. The money in deposit accounts (which is what I assume you mean by "ow(n)ed dollar-tokens") isn't a guarantee of money, it *is* money. You can buy items with your card, and 97% of money is not notes and coins.
In an economy where banks never failed or had to suspend operations, a dollar in a chequing account is identical to a dollar bill in your hand.
In the real world, there are sometimes (rarely) very important differences relating to who is holding the "actual" dollar.
Stepping up one level of abstraction, owning wealth in the form of shares/bonds and owning wealth in the form of dollars is actually often quite different, which is why you see a "dash for cash" any time there is a recession (the velocity of money slows down in a recession, so prices for most financial assets fall & the people holding cash can then buy at a discount if they dare).
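The deposit-expansion arithmetic behind the fractional-reserve discussion a few comments up can be sketched. This is the textbook money-multiplier story, which (as noted elsewhere in the thread) simplifies how modern banks actually create money; all numbers are illustrative:

```python
# Toy money-multiplier: banks keep a fraction of each deposit as
# reserves and lend the rest, which gets re-deposited, and so on.
# Total deposit money approaches base_money / reserve_ratio.

def total_deposits(base_money, reserve_ratio, rounds=200):
    """Sum the deposits created over `rounds` lend-and-redeposit cycles."""
    deposits = 0.0
    new_deposit = base_money
    for _ in range(rounds):
        deposits += new_deposit
        new_deposit *= (1 - reserve_ratio)  # the lent-out portion returns
    return deposits

# With a 10% reserve ratio, $100 of base money supports ~$1000 of deposits:
print(round(total_deposits(100, 0.10)))  # → 1000
```

This is one concrete sense in which fractional reserve banking "creates money" without creating any new physical dollar tokens: the broad measure (deposits) grows while the narrow one (base money) stays fixed.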
I can't help because I share your sense of bewilderment and skepticism about a lot of the educational content.
I'm just commenting because I super appreciate that you posted this comment.
Futures are just an obligation to buy at an agreed price in the future. The buyer of the future is typically betting that prices will rise; the seller is trying to lock in a guaranteed return. An option is merely the option to buy something at an agreed price in the future. There is no obligation.
I have tried finding a very good podcast episode on futures, but alas, I cannot. I will come back if I do.
I have some other materials to share, two books by Yanis Varoufakis: "Talking to my daughter about the economy" and "Adults in the room".
While the first might sound like a "dumbed down" version, it is maybe only slightly so, but I found it very well written and insightful nonetheless. Maybe see it as a general historical overview of why and how money and debt exist beyond the trivial facts. It's short, 4 hours as an audiobook.
The second one is about the Greek/EU financial crisis around 2015 (which itself was caused by 2008). It gives the listener/reader a glimpse into the financial and political establishment, mostly in Europe, but also venturing a little outside to the US. It's around 16 hours of listening and definitely falls into the historical category. Extremely insightful in my opinion.
Both might have some bias, but two answers to that: 1. Much less so than one might think and when he is biased, he is very transparent about it. 2. He argues very well that money is fundamentally a political "thing" and therefore talking about money cannot be without bias. (he even argues that some systemic financial problems in Europe are caused by institutions like the ECB mandated to act unpolitically, which he thinks is a contradiction)
Cliff notes says:
Medium of exchange
Store of value
Unit of account
The first means we use it, rather than barter, to exchange goods - it is a currency. The second means that it largely keeps its value over time. Of course there is inflation, but you find with hyperinflation that people stop using the currency that is inflating (so it stops being a currency in that sense).
Unit of account means we can compare items by value - this costs $200, so it's twice as expensive as that, which costs $100. Can't do that with barter.
So far so easy. You can see why electronic money is money in this sense. I can buy with a debit card. A credit card creates money, to be repaid later. There are other things described as money (in the broader definitions of money like M3) that are a bit confusing, but mostly they can be easily turned into a tradable currency on demand. A savings account that can't be accessed for months isn't currency. A deposit account that can be used to buy with a card now, is.
The Bank of England describes money creation here:
https://www.bankofengland.co.uk/knowledgebank/how-is-money-created
That BoE article and its companion piece are *superb*.
Absolutely.
This is a massive rabbit hole.
I'm going to go out on a limb and claim that there's no way to describe how money works without smuggling in some sort of political ideology.
>if it's a scam, why do the people who understand it not start a revolution ?
From my perspective, this revolution exists and is ongoing, it's called bitcoin. Of course, this means most of my peers think i'm a crank. Time will tell who is right.
Bitcoin isn't money because it fails all three tests.
I said bitcoin was a revolution, not money.
That said, it is currently money to a small set of believers. I personally use it as a store of value and unit of account, and have done so for years.
You might use it as a medium of exchange, but it is too volatile to be a store of value or unit of account.
No, I use it as a store of value. I don’t care how many dollars my bitcoin can buy on any given day; I care how many bitcoin I have.
So, to be clear, are you saying you don't care what you can buy with those bitcoin? Or that a falling bitcoin makes the limited amount of things you can buy more expensive.
I recommend the book "Slouching Toward Utopia" by Brad deLong.
For anyone who wants a bit more, here's a review I saw recently: https://www.vox.com/future-perfect/2022/9/7/23332699/economic-growth-brad-delong-slouching-utopia
I haven't read it yet (though I have the audiobook waiting for when I get bored with language flashcards again), but as a long time reader of his blog, I second your recommendation.
As someone who criticised the excessive length of some of the entries in the book review contest I guess I should appreciate the brevity of this one, but still feel like a middle path might be best.
I did not mean it as a review but I’m happy to say more
I have created a cloud service for understanding text. Just understanding. Suitable for chatbots, classification, etc. It's called Understanding Machine One (UM1)
Code to test it is on github.
API description is in Chapter 9 at https://experimental-epistemology.ai/um1
Algorithm is described in Chapter 8.
If you can handle it, read Chapter 7, which discusses the main cognitive dissonance of ML.
I'd love to hear some comments on any of the content.
I think it might be helpful if you could explain what "understanding text" means in this context -- for people like me who are not ML specialists. For example, I am somewhat familiar with machine translation, classification, and text generation, but I am no data scientist. Does "understanding" mean something like "generating embeddings" in this context? Can I use your API to speed up the training of my classifier, or is its purpose something totally different?
UM1 is a half-transformer, transforming text to a list of integers where the numbers are ID numbers for neurons that were activated during the reading (we use DISCRETE neurons). These represent phrases and concepts learned in the learning phase. The system will determine which discovered concepts are most salient and return those to the caller.
The caller can directly use these numbers in their business logic. The test code shows how to use Jaccard distance of these numbers for reliable by-content classification.
This is a way to avoid dealing with natural language at all when creating things like mail filters and chatbots. Send the raw text to the service and just use set theory on the results. This is a simpler API than DL and it is still 100% unsupervised. The site discusses how to use the Numbered Neuron API to classify into predetermined classes without doing supervised learning, ever, at any stage.
We have strategies for creating filters that are trivially tunable by end users based on this tech.
Ok, so I can basically use your engine for transfer learning, to jump-start the training of my own transformer (or classifier or whatever), right? This does sound like it's generating something like embeddings (although I'm no specialist, I could be wrong); but I don't see how I can reasonably expect to use it as a classifier without at least some degree of supervised learning -- that said, I have not read your article yet.
UM1 *is* the classifier. You do not need another one.
The blog (and another post below) discuss how to use the Numbered Neuron API to classify samples into buckets. Each of the 200 tests in the test suite (on GitHub) is one target phrase that needs to be classified into one of five buckets by semantic similarity in a multiple choice test.
The entire bucket definition is that the user provides a canonical phrase to use as the bucket's focus. The idea is that if any input sentence means the same thing as some bucket's canonical, then the wild text can be classified with that canonical because of the high overlap in activated neuron IDs, as computed by Jaccard distance.
We claim that the responses are semantically somewhat stable even if the input language used is syntactically different. We call it "Semantic Stability".
This is why you want to use an Understanding Machine rather than to painfully parse words; words are treacherous and we need context based disambiguation to get to the next level.
Parsing words is a bother and you have to do it all over in order to handle more than one language. Better to use an external Understander. All a client app needs to do to support classifying in French is to translate the canonicals into French and specify that it wants to use a French-trained Understander.
I still don't know what that means.
Can you provide some examples of requests / responses from the service? And describe how those responses demonstrate understanding?
Sure. As to examples, if you download the test code you can see what is going on. In the simple case, you send it text strings and you get back a list of numbers. It can be more complicated than that: if you send a JsonArray inside a JsonArray... arbitrarily deep, with text anywhere at any level, you get back an isomorphic structure with the texts replaced by their Understandings (yet another JsonArray, or a JsonObject if you want any metadata beyond the pure Understanding). And the test code uses that to pack all 200 test cases with 6 strings each into one query, in order to save 1200 TCP roundtrips.
If you want to classify to 10 categories, you start by sending ten canonical sentences to the system and you get a list of numbers each. Save those. Now when you get wild user input that you want to classify to one of those ten cases, send the wild text in, get the numbers, and see which of your 10 cases matches with the largest overlap in the returned Concept IDs. This is what Jaccard distance does.
For the main question: (quoting from the blog)
To find all documents on topic X, start with submitting one or more samples of topic X. If you want to detect the meaning of "I would like to know the balance in my checking account" in some tellerbot you are writing, then you can send that phrase as a topic-centroid-defining "Canonical Probe Phrase" to UM1 and save the resulting moniform in a table. The value in the "right hand column" in the table could be a token such as "#CHECKINGBALANCE" to use subsequently in Reductionist code, such as in a surrounding banking system.
UM1 is not a transformer; it can be described as a half-transformer, an encoder that encodes its understanding of incoming text as lists of numbers. The table of moniforms you build up when starting the client system will be used to decode the meaning. This is done entirely in the client end, in code you need to write or download.
To decode the Understanding received after sending UM1 a wild sentence (chatbot user input, a tweet to classify, etc) your client code will compare the numbers in the wild reply moniform to the numbers of all the moniforms in the probe table we built when we started, using Jaccard Similarity as the distance measure. The probe sentence that has the best matching moniform is the one we will say has semantics closest to the wild sentence.
Jaccard similarity tells how closely related two sets of items are by computing the set intersection and the set union of two sets A and B. It is computed by dividing the number of common elements (the intersection) by the total number of elements in either set (the union). This provides a well-behaved measure in the interval [0..1] as a floating point value. The canonical moniform with the highest Jaccard score is the best match.
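That matching step can be sketched in a few lines of Python. The probe tokens and neuron-ID lists below are invented stand-ins for UM1's actual output, purely to show the set arithmetic:

```python
# Sketch of moniform-based classification: each canonical probe maps to
# a set of neuron IDs; a wild input is assigned to the probe whose ID
# set it overlaps most, by Jaccard similarity.

def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union|, in [0, 1]."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

# Canonical probes -> their (hypothetical) returned moniforms.
probes = {
    "#CHECKINGBALANCE": [3, 17, 42, 99, 256],
    "#TRANSFER":        [5, 17, 88, 310],
}

def classify(wild_ids):
    """Return the probe token whose moniform best matches the wild one."""
    return max(probes, key=lambda tok: jaccard(probes[tok], wild_ids))

print(classify([3, 42, 99, 777]))  # → #CHECKINGBALANCE
```

All the decoding happens client-side with plain set operations, which matches the claim above that the client never has to parse natural language itself.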
In UM1, the ID numbers represent dimensions in a boolean semantic space. If the system has learned 20 million Nodes, each representing some identifiable language level concept, then we can view the numbers returned in the moniform as the dimension numbers of the dimensions which have the value "true". Consider a moniform that has 20 numbers (it varies by message length and input-to-corpus matchability) selected from a possible 20 million to get an idea of the size of the semantic space available to OL.
In some DL systems for language, concepts are represented by vectors of 512 floating-point numbers. In this 512-dimensional space, DL can perform vector addition and subtraction and perform amazing feats of semantic arithmetic, like discovering that KING - MALE + FEMALE = QUEEN. With boolean 0/1 dimensions, closeness in the semantic space becomes a problem of matching up the nonzero dimensions, which is why Jaccard distance works so well.
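For contrast, here is a toy sketch of both styles of semantic space. The four 3-dimensional "embeddings" are invented numbers chosen only so that the KING − MALE + FEMALE analogy lands on QUEEN; real systems learn vectors of hundreds of dimensions from a corpus, and the boolean dimension IDs below are likewise made up:

```python
import math

# Made-up toy embeddings; real DL systems learn these from a corpus.
emb = {
    "king":   [0.9, 0.8, 0.1],
    "queen":  [0.9, 0.1, 0.8],
    "male":   [0.1, 0.9, 0.1],
    "female": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Dense semantic arithmetic: KING - MALE + FEMALE ≈ QUEEN
target = [k - m + f for k, m, f in zip(emb["king"], emb["male"], emb["female"])]
nearest = max(emb, key=lambda w: cosine(emb[w], target))
print(nearest)  # → queen

# Sparse boolean dimensions: closeness is overlap of the nonzero
# dimension IDs, which is exactly what Jaccard measures.
king_dims, queen_dims = {3, 17, 42, 99}, {3, 17, 42, 1000}
overlap = len(king_dims & queen_dims) / len(king_dims | queen_dims)
print(overlap)  # → 0.6
```

The point of the contrast: with dense vectors you do arithmetic and then a nearest-neighbor search by angle; with boolean dimensions there is nothing to add or subtract, so overlap counting (Jaccard) is the natural closeness measure.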
Traditional NLP is often done as a pipeline of processing modules providing streaming functions to do word scanning, lowercasing, stemming, grammar based parsing, synonym expansion, dictionary lookup, and other such techniques. When using UM1 you do not have to do any of those things; just send in the text.
Note that UM1 does not do any of those operations either. It just reads and Understands. And because OL learned the morphology of the language (such as the plural -s on English words) from the corpus itself, the system can be expected to work in any other learned language, even one with different morphology.
This is an update of my long-running attempt to predict the outcome of the Russo-Ukrainian war. After more than a month in which nothing worth updating happened, we have major developments. The previous update is here: https://astralcodexten.substack.com/p/open-thread-234/comment/7955016. (Note: I have limited time for responding to comments; it might take me a few days.)
15 % on Ukrainian victory (up from 8 % on July 25).
I define Ukrainian victory as either a) Ukrainian government gaining control of the territory it had not controlled before February 24, regardless of whether it is now directly controlled by Russia (Crimea), or by its proxies (Donetsk and Luhansk "republics”), without losing any similarly important territory and without conceding that it will stop its attempts to join EU or NATO, b) Ukrainian government getting official ok from Russia to join EU or NATO without conceding any territory and without losing de facto control of any territory it had controlled before February 24, or c) return to exact prewar status quo ante.
45 % on compromise solution that both sides might plausibly claim as a victory (up from 29 % on July 25).
40 % on Ukrainian defeat (down from 63 % on July 25).
I define Ukrainian defeat as Russia getting what it wants from Ukraine without giving any substantial concessions. Russia wants either a) Ukraine to stop claiming at least some of the territories that were claimed by Ukraine before the war but de facto controlled by Russia or its proxies, or b) Russia or its proxies (old or new) to get more Ukrainian territory, de facto recognized by Ukraine in something resembling the Minsk ceasefire(s)*, or c) some form of guarantee that Ukraine will become neutral, which includes but is not limited to Ukraine not joining NATO. E.g. if Ukraine agrees to stay out of NATO without any other concessions to Russia, but gets a mutual defense treaty with Poland and Turkey, that does NOT count as Ukrainian defeat.
Discussion:
In a nutshell, the Ukrainians managed to concentrate powerful forces on an insufficiently defended part of the Russian frontline, achieving complete surprise and a total rout of Russian defences, which then triggered a chaotic retreat slash surrender of the Russian forces concentrated on a different part of the front, threatened with encirclement. Pretty classic maneuver, well known from history books. The overall extent of the Ukrainian victory is, as of now, still unclear, and the battle is ongoing, which complicates predictions.
Well, I did not expect Ukrainians would be able to do that. This indicates a far lower ability of the Russian command to see what the Ukrainians are doing (in military lingo, I believe it is called situational awareness), and also a lack of meta-awareness, in the sense that they did not know what they did not know; otherwise they would not have concentrated so many of their resources in attempts to break through Ukrainian lines around Izyum (and also further southeast around Bachmut), leaving a large section of the frontline so poorly defended. Furthermore, this shows that the Ukrainian army is very good, but I knew that already.
Another important thing that is happening, also good for Ukraine, is that since my previous update 538 has increased the odds of Democrats retaining their majority in the House of Representatives from 15 % to 26 %. I think that future US support for Ukraine is going to be higher if Democrats win.
Now, I am still not ready to declare imminent Ukrainian victory in the whole war. Russia still has a powerful army, controls large swathes of important Ukrainian territory, has far more resources left to mobilize than Ukraine. Future of Western support to Ukraine still remains highly uncertain. I also think, although this is more subjective, that Russian command in this war has shown an ability to learn from their previous mistakes.
BUT, of course, this shows major flaws in Russian decision making, which might not be fixable. In the past I lost all confidence in predictions of the impending collapse of the Russian regime (those long predate the war), simply because they have been endlessly repeated with varying justifications while the regime is not collapsing. Now, I guess, those guys have gained back some credibility. Total collapse of the Russian army, 1918-Germany style, just became a lot more likely than it was a week ago. And obviously, this situation might cause antiwar sentiment in Russia to rise, especially since the Russian government might feel compelled to intensify mobilization, of both manpower and industry, either to replace unexpected losses or just to ensure that this disaster will not be repeated.
*The Minsk ceasefire or ceasefires (the first agreement did not work; it was amended by a second, and since then it worked somewhat better) constituted, among other things, de facto recognition by Ukraine that Russia and its proxies would control some territory claimed by Ukraine for some time. In exchange, Russia stopped trying to conquer more Ukrainian territory. Until February 24 of this year, that is.
Bilbo at one point describes himself as butter that has been scraped over too much bread. I think that may describe the present status of the Russian army. They don't have the resources to defend all of what they have and continue pushing in the Donbas, which they are committed to doing. The result, as we just saw at the north end of the line, is that moving enough troops to defend one area (Kherson) leaves them with not enough to defend another.
The sensible response would be to stop attacking in the Donbas, but that may be prevented by internal politics. Failing that, and perhaps even if they do stop, they are at risk of another successful Ukrainian advance, this time probably in the south. I'm not confident it will happen, but I think the chance of a long war of attrition is now much lower than we all thought a month ago.
I feel like last week was Ukraine's El Alamein. Not a strategic turning point but a narrative one. Aside from the morale effect, Ukraine went into the Kherson offensive wanting to prove to Europe that it can do more than hold off the inevitable (otherwise why shouldn't Germany buy Russian gas this winter) and the last week's events demonstrate that.
My impression is that everyone was predicting a long grind with battle lines not moving much (for how long? did anyone say?) until this Ukrainian breakthrough. Did anyone get it right?
If you're expecting a substantial Russian victory, what are the stages?
A lot of people, including myself (in one of the open threads), predicted a counter-offensive. That Ukraine would undertake one at some point before the winter was basically an undisputed idea in Russian war-related communities. The breakthrough was more of a surprise, though not by much: as people mentioned already, Russian forces are spread too thin. The reluctance of Russian Army commanders to use reserves to hold at least some ground near Kharkov was a bitter surprise to many, though, and generated a lot of negative comments about the Ministry of Defense.
No one I read can see a clear path to a total military victory for Russia or Ukraine. Russian commenters either hope for economic devastation of Ukraine during the winter, or for some kind of mobilization (limited or total) in Russia that will put a lot more warm bodies on the frontline, which may help to push it back. A few suggest using tactical nukes, but they have been doing that since day one; I think those are just people who want to see the world burn.
Barring some unlikely disaster, I personally think the counter-offensive has largely exhausted itself, and further gains by Ukraine will be slim, though there is talk about yet another prong being prepared in the Donetsk region, where until now Russia has continued to slowly gain ground.
Recent days saw Russia destroying Ukrainian infrastructure, parts of which were left untouched before. If this goes on, it is likely that we will see a largely frozen frontline, with the Russian side hoping that the anti-infrastructure campaign will force Ukraine to restart negotiations with concessions it wasn't ready to offer before.
The consensus in the Lost Armour forums is that Ukraine won't do that, and instead will badger the USA to deliver long-range missiles to retaliate against Russian infrastructure, at which point the war will probably escalate further, probably with limited mobilization on the Russian side.
"If you're expecting a substantial Russian victory, what are the stages?"
That this was an exception and now we return to a long grind, which Russia ultimately wins. Still very plausible, although less likely than a week ago.
Russia seems to have loosened its ROE with the latest strikes at power plants (repeated today); those plants are much harder to replace than military equipment (Ukraine hasn't built a single new one since gaining independence), and given the widespread use of electric trains, losing them can hurt Ukrainian logistics a lot.
Realistically, I think that Russia is very likely to get what it actually wants from Ukraine: all of its eastern territories, including Odessa. This turns the remainder of Ukraine into a landlocked rump state, secures Russia's trade routes to the Black Sea (and allows it access to Moldova), and gives it some territory and a bit of an industrial base (assuming any survives), including nuclear power plants. Sure, capturing Kyiv or the entire Ukrainian territory would be nice, but it's not really an immediate requirement for Putin's imperial ambitions.
Why do you think this is likely, or in what way? I haven't seen any serious analysts suggest Russia could progress anywhere in the south, let alone take Odessa.
I think that the Ukrainian war is a war of attrition, and Russia can afford more attrition. I agree with @alesziegler when he says that a Democratic victory in the midterms (assuming it happens) looks bad for Russia; but ultimately, American support for Ukraine is going to run out eventually; in four years if not two. All Russia has to do is hang on until then, maybe trading incremental advances for incremental defeats. By contrast, once Ukraine runs out of advanced weapons, they're done -- at that point they're looking at a rapid collapse.
Why do we care whether American support for Ukraine runs out in four years or two, if the war is going to be over in less than a year? This is a war of attrition. At the rate the Russian army is attrited, it will cease to exist in a year. It will have zero tanks and zero infantry; it might have some artillery left, but artillery without an infantry screen is just free guns for whoever wants to claim them.
There are still scenarios where Russia wins, because the Ukrainian army breaks first. These are much less likely than they were a week or two ago, but it's still possible. But one way or another, it's going to happen in less than a year, and probably less than six months.
The US is highly unlikely to stop supporting Ukraine in six months to a year.
Saw your comment this morning and wanted to ask... why *is* American support for Ukraine going to run out? Look at it from a purely American perspective for a moment. America is getting to bleed our enemy (or at least long-time rival) Russia and watch them slowly gut their army for a generation and detonate their economy. All at the low, low cost of lots of money to American arms manufacturers, a gas crisis in Europe, and an ocean of Ukrainian blood. Honestly, seems like a great deal... for the US. And we get to do so in a cause everybody (that we care about) agrees is just and good. And we get to test all our weapon systems in real combat and take the data back for further development and refinement.
You reference Democratic victory, but is this really a partisan issue? I'm sure there's plenty of Republicans who understand that a weaker Russia is good for the US. With the US not having to pay the cost in blood or energy, it's not like the public is going to care if we support Ukraine for the next decade.
Anecdote, but this is my boss. He's a Trump Republican and he thinks this is a big waste of money. He figures this is either our business - in which case we should declare war already - or it's not - in which case we should keep our money at home and let those people fight it out.
Logically, you are probably correct. Politically, though, Republicans believe in "America First", which means spending as little money as possible (ideally, none) on foreign wars. They campaign on this, in fact.
My question is how many Republicans believe in "Russia First?" How much of the noise about Putin being the Destroyer of Wokeness translates into support for America invading Ukraine in support of Russia?
My take is that the recent return of maneuver tank warfare reflects the attrition situation exactly the other way around: the Ukrainians now have better and longer-range artillery, counter-radar capability (HARM), and apparently also (somehow) the ability to deny the Russians any benefits of the air superiority they supposedly should have had since February. If Russia had working aviation, it should have been able to stop the Ukrainian offensive in its tracks.
It is an indication of how much more capable the combined military-industrial complex of the West and its allies is at supplying Ukraine with advanced weapons than Russian domestic industry.
The only way I can see the local war situation turning favorable for Russia is if China steps in with logistical support to match the European and US donations. However, that would make them the North Korea in a US-China proxy war: hardly an enviable position, no matter the eventual aftermath.
The ability of the Western military-industrial complex to supply Ukraine with arms and munitions for months is sadly rather dubious. We are simply not geared for a long-term massive war. On the other hand, Russia seems to have even bigger materiel problems.
Obviously we in the West have the technical ability. What is uncertain is whether we have the political will.
Russia has massive materiel problems regarding modern weapons technology: precision rockets, guided air-to-surface munitions, agile tanks, etc. However, they have a virtually infinite supply of WW2-era weaponry, and a massive supply of bodies to wield those weapons. That is why time is on their side, I believe.
I do not intend to step on anyone's toes here, but the Russo-Ukrainian War is a European war. It concerns mainly Europe, and it is primarily Europe that is propping Ukraine up. While the US is providing some very fancy weapon systems (and probably a respectable load of invaluable intelligence as well), it is Europe that is giving most of the financial support that enables Ukraine to continue functioning as a state. And the European aid is not going to run out. At least not before Russia's resources run out.
Strong take, but I think too strong. I don't see it as certain that Europe holds up through the winter. As I understand it, the "campaigning season" is going to close in a few weeks when the weather turns, and everyone is going to be where they are until spring. That's a long time watching nothing happen while paying through the nose for energy you know you could get for cheap if you just toss the Ukrainians to the wolves.
I hope you're right, but I don't think it's as sure as you do.
I don't think campaign season is a thing any more. The war started in February. See also WW2, which saw its share of successful winter offensives. Maybe it is still somewhat more difficult to attack in winter, but it is by no means impossible.
And that applies to a Russian offensive, too. Maybe they will be able to do some smaller local attacks again, who knows.
This is just not true. Look at the numbers: a staggering part of the military and financial support for Ukraine comes from the US: https://www.ifw-kiel.de/topics/war-against-ukraine/ukraine-support-tracker/
+1. This is also a great point (and thanks for reminding us Americans we're not the pivot point of the world - an embarrassing oversight, seeing as I was in the EU less than 2 weeks ago!)
It's not a great point, it's false. See mudita's comment above; eyeballing the chart the US seems to be providing slightly more aid than all of Europe put together.
One thought that I’d add to PS’ responses is that someone could have said much the same thing in 1968.
“America has great odds to win in Vietnam. Chinese support for the North has to run out eventually - all the US and the South need to do is hold on until then.”
Unity of political will on the part of the aggressor is not guaranteed, and if that fragments the rapid collapse can easily run in the other direction.
@alesziegler says that a Democratic victory in the midterms is good for Ukraine. Which should be pretty obvious, tbh, since the current Democratic president is very much pro-Ukraine...
Is he? I thought Hunter was out of that job by now.
I think there are actually three counterarguments to that. From the more specific to the more general:
1. This assumes that Russia is losing fewer and/or replenishing more "advanced weapons" than Ukraine. Until now, the opposite has been the case: many Russian technical capabilities have deteriorated significantly (and will take many years to build back), from APCs to PGMs. Ukraine has gained many new capabilities it didn't have at the start, from howitzers to anti-aircraft to HIMARS. There has even been talk in the past few days of supplying Western battle tanks. If this trend continues even for one year, it will not be easy at all for Russia to hold on to the Ukrainian territories it now holds.
2. A war of attrition is not just about the equipment, it's also about personnel - and Russia has huge problems with soldiers. It doesn't have enough to defend such a long frontline, and many of the troops they do have are poorly trained, poorly equipped, poorly motivated and exhausted (which are some of the reasons they were so easily overrun in the Kharkiv region, besides poor situational awareness and poor command). And Russia has no good way of fixing this problem. They struggle to find volunteers in the required quantity and quality, and to train them. Even if there's a general mobilization (which Putin seems to avoid at all costs), it will take a lot of time and might or might not be really effective. On the other hand, Ukraine has plenty of manpower (and by now also the opportunity to train them properly, both in Ukraine and in the West).
3. Finally, while it seems likely that this will once again turn into a war of attrition after the current phase, it might not. A lot can happen in two (or four) years. Lawrence Freedman has this quote from Hemingway in his recent post on the possible course of the war:
“How did you go bankrupt?”
"Two ways. Gradually, then suddenly.”
(https://samf.substack.com/p/gradually-then-suddenly)
And Russia seems more likely to be going bankrupt at the moment. Not necessarily all the way to regime fall, but certainly with regard to its ability to wage this war.
I think Ukrainians are going to win according to alesziegler's criteria (90% probability). It only requires Ukraine to push the Russians back to the 2021 borders, which is the most likely outcome. He talks about Russia giving a concession to Ukraine to join the EU and/or NATO, but I don't understand why Ukraine would need to ask Russia for permission.
The main thing that allowed me to correctly predict that Ukrainians would fight and would not allow occupation of their country was knowing the Ukrainian mood. Zelensky asking for weapons and not a ride was highly predictable from understanding the mood in Ukraine.
I don't know what the mood of people in Donetsk and Luhansk is. If it favours Russia more, then Ukraine might not be able to retake those areas meaningfully. Crimea appears to be mostly pro-Russia, therefore I cannot make any predictions in this regard. I might be wrong, but I just don't know. But for the rest of Ukraine I think it is clear that Ukraine will regain these territories and will continue its integration with the EU. And that is what matters the most.
I even think that some criteria for what it means for Russia to win are too elaborate and hedged. The reality is that most Russians don't care, and those who do care are motivated mostly by the idea that Ukraine is a false nation that should be incorporated into, or at least strongly controlled by, Russia. That is a completely crazy idea by modern standards and not going to happen. It could have worked in the Middle Ages, but even 100 years ago the USSR couldn't absorb Ukraine and erase its identity. No chance of that happening today. Ukraine may lose some territory but will remain an independent nation that is even less controlled by Russia than before (99% probability). That alone should count as a strong win from a global point of view.
There is no way they can get to Odessa unless the situation drastically changes.
I think it is very unlikely in this war. Imho they would much sooner go for a general mobilization on the scale Ukraine has conducted, and it is unclear whether they have the political will to do even that.
I'd agree with that. Breaching the nuclear taboo, especially against a non-nuclear power, would have tremendous negative consequences for Russia abroad that would last decades, and it might not even end the war itself - it could just as easily *worsen* the situation for Russia by prompting Ukraine's allies to escalate involvement while Russia's allies pull back their support or are pushed to pivot from tacit acceptance to joining in economic sanctions.
Heck, an order to fire a nuke may create the exact combination of "necessary in the eyes of leadership to keep leadership personally in power, but terrible for the country overall and, hey, if leadership is gone does that make an opportunity for me personally?" that prompts a palace coup.
So unless something dramatically changes, nukes seem like they are (thankfully) off the table for the time being.
Yeah they could probably carpet bomb several cities for the same cost in political capital as using nukes.
Except they can't carpet bomb several cities, they don't have enough serviceable bombers and they don't have enough control of the skies.
Hmm, no Open Thread 240.5 this week? It did feel like a long time between OTs...
Wrong place. Sorry
I’m looking for a detailed and accurate cost-benefit analysis of fracking. Most of what I’ve encountered so far is polemical in one direction or the other. Any suggestions?
This really needs to be done site by site, since the costs are mainly local. The analysis also depends on assumptions about whether the fuels produced (or not produced) by fracking would have been produced otherwise.
TLDR- OODA Four Humors books.
'Surrounded by Idiots' by Thomas Erikson.
Erikson is a management consultant who visits offices and types people by a four humors variant called the DISC method: Dominance, Influence, Stability, and Compliance. I'm not a people person, but the books gave me a better take on people I work with. 'Surrounded by Idiots' is if everyone around you is a different type and life is one long misunderstanding.
Erikson is confident he can type people after a month or so of observation. Hard sell? Yes. But he's not typing them for all times and all peoples, he's typing how they act in office drama he's seen. Per Montaigne, men do more from habit than reason.
It's easy to map DISC as the four humors and the OODA loop. Dominant as Decide and Choler, Influencer as Sanguine and Orient, Stable as Melancholy and Observe, Compliant as Phlegmatic and Act. Often wrong, but easy, and these are just rules of thumb.
'Surrounded by Bad Bosses and Lazy Employees', also Thomas Erikson.
This is the best Erikson book. Decades of experience as a management consultant describing bosses and employees. The first half boss, the second employee. How they should get along and why they don't. Driving forces as well as personality types. 'People quit their boss, not their job'. The tendency to start a job with full commitment and minimum skill, and finish with minimum commitment and maximum skill, and what you and the boss should do at each stage.
He has a lot of good, pointed anecdotes. He even throws in a good case for the reason decent people flinch from American journalism- the Pyramid story. Headline, maybe a good first sentence, probably not a good first paragraph, endless burbling. In his telling it is a good way to reach all four personality types, not an abomination against God Man and Devil as everyone thinks. Bossypants types skim the headline with decision, job done. Emo types skim the first sentence for the slant. Rocks read the first paragraph. We do our part. Nit-pickers read the whole thing in the hopeless hope of some news value. The human comedy of humors is covered, why should I sneer? Because it's a good target for random contempt. Because I've spent decades reading crap excreted by low-IQ journalists, sloppily edited by Satans in green eyeshades, stuffed in at random by rightly bored typesetters around the ads. It's bad luck to sneer at a style of writing millions of people have read for the last century. Okay.
Erikson never mentions journalism or the phrase 'pyramid story'. He just sees a useful template for reaching most types of people. Okay.
Here is Erikson at his best, a sensible expert with three decades of experience.
'Surrounded by Psychopaths' and 'Surrounded by Narcissists', by Erikson at his worst.
'Psychopaths'? Kitsch. 'Narcissists'? Kitsch. 'Malicious' and 'Selfish' are English. So, indeed, is 'just not that into you'.
The worst books he's written, with good stuff from other books drowning in drivel. Erikson is a man of sense and one who knows the world, and you feel a good mind trying to make this kitsch make sense, as no one can. If he had written 'Surrounded by Malicious People' he could have written a useful book about malicious people with a sense of human nature. No. He goobers about amygdala as if malice changed your brain into a space alien supervillain. If he had written 'Surrounded by Selfish People'- if he had written 'Surrounded by People Who Just Aren't That Into You'-
Bah.
Pretending selfish and malicious people are different from us, stuffed in our test tubes and dissected by our pseudoscientific gibberish. Look into your heart. You are malicious and selfish and just not THAT into me. Me too. Even GK Chesterton.
'Emotions of Normal People' William Marston.
Marston is the source of Erikson's DISC personality profile. Marston is also known for his blood pressure lie detector. Marston is also known as the creator of Wonder Woman, wearer of booty shorts and wielder of the Lasso of Truth. Marston is also known for taking his women tied up and attached to his lie detector, so he could screw with their minds as he hurt their pussy. Or for finding True Love by Scientific Proof, can't say. He lived with his wife and mistress and died of cancer, not feminine outrage.
The first part of the book is his deep thoughts on evolution. He's not that deep. He's not a biologist. Skip the first 90 pages. It picks up pace as he criticizes competing 1920s psychologists. He is clever, polite, firm. Then the book starts. It's about his four humors DISC theory, as seen in decades of interviews using his lie detector. DISC was originally Dominance, Influence, Submission, Compliance, with Submission changed to Stability by Erikson's generation to avoid hurting middle management's feelings and scaring the office people. And another reason.
You have read feminist stuff about the Evil Patriarchal Science claiming women are naturally submissive, science proves it, the little darlings love it, it's better for them anyway. Here it is, the distinguished thing. He goes into detail. He shows that women's vulvae moisten more the more submissive they are. It is his life's work to make all women Love Leaders who submit to their one dominant man. Works for a lot of happy families. As the D party has cracked down on this, our fertility rate has dropped like a falling safe. If Erikson didn't skip this part he'd never get work.
. . . then in the last hundred pages Erikson betrays the Patriarchy. In the natural act a woman's special place dominates a submissive phallus. Wives should all have jobs so their husbands know they live on sufferance. Margaret Sanger is right, most marriages should not have children. Companionate Marriage is his ideal. Another Patriarch's youthful thrusting ends cockadroop.
The last hundred pages are busy. He writes about his lie detector, which proves you are lying because your blood pressure shows Dominance, the first, sane, socialized start of anger. His version is much better than his competitors, who think you are a liar when your blood pressure shows Submission, the first, sane, socialized start of fear. Between the two we always lie, but okay.
It says something awful about human nature that we invent these wonderful emotion detectors and use them as crappy lie detectors. It says something about Marston that he thinks anger and fear are crazy. Lions and gazelles, rabbits and dogs, me and life's infelicities, all nuts by him.
In the last hundred pages you start to see what the first hundred pages tried to show, a truly scientific advance on Darwin's 'The Expression of the Emotions in Man and Animals'. Darwin, a naturalist, typed facial expressions everyone can see in humans and other animals. Marston goes deeper. He sees the first tremors of intent in the blood. I don't think this was taken up by the world of science. Paul Ekman uses high-speed photographs of faces for micro-expressions, but this looks like a genuine lost treasure. It makes sense for Marston to extend this into bacteria and evolution and so forth, and I should give the first hundred pages a fair reread, and I just can't. I've read too many Derp Thoughts on Evolution.
It's too late for me, but I hope someone smarter looks into this. Any Paul Ekman students out there?
Thomas Erikson is a fraud and his books have no actual grounding. Still, he managed to trick companies and even governmental organizations into buying his courses, not only wasting countless tax kronor but also doing active harm, since his pseudo-science has influenced hiring and promotion decisions (yes, the respective managers are responsible as well; that does not absolve Thomas).
Thomas won an award as "misleader of the year": https://www.vof.se/utmarkelser/tidigare-utmarkelser/arets-forvillare-2018/
My overall impression is that all of DISC is pseudo-science.
It's hard to deny that some people are more bossy or easier to push around, or that some people are more of a people person while others are more job-focused. I agree that 'psychopath' and 'narcissist' reek of kitschy pseudoscience and, worse, a failure to read GK Chesterton.
Why do you think China hasn't solved its low birth rate problem with a radical social policy? If they could implement the one-child policy, why not something like a two-or-more-child policy (with exceptions for certain groups, perhaps)?
I know that Scott wrote a post on why low birth rate isn't such a serious problem, but that only applies when you're not a nation challenging the United States for global hegemony. In the so-called Cold War 2, population absolutely matters, and surely the CCP realizes this as well.
I have often wondered why no technical solution has been applied. What would that be? An example would be egg harvesting and IVF, which has more of a chance to produce twins.
We don't have uterine replicators yet. The uteruses presently available to China are finite in number, and owned by people who have very definite ideas about how many babies they do or do not want to make. The ones who want to make more babies, only very rarely have any difficulty making babies up to their preferred number.
The CCP could in theory determine that those uteruses are the property of the State and will be impregnated no matter what the host wants. Reasons why this would be a bad plan, are left to the student. But if they do go that way, they don't need fancy technology to impregnate whatever uteruses they can commandeer for the purpose.
Otherwise, China will only make as many babies as the women of China each individually want. If that's not enough for someone else's plans, they need to figure out how to persuade the Chinese women.
I think you've gone off the rails here a bit. I wasn't really demanding ownership of uteruses, but egg freezing to maintain fertility over time. Then of course IVF extends fertility further.
In fact, people in modern societies do say they want more children than they actually have, and for the purposes of this discussion urban China is modern.
In the US, although the fertility rate has declined for couples, the desired number of children has stayed the same. Therefore there is something in modern life -- perhaps starting a family later, child care costs, or housing costs -- that stops people who do couple up and start a family from having the number of children that they want.
https://news.gallup.com/poll/164618/desire-children-norm.aspx
I think his point is that however many embryos or eggs you have, you still need uteruses to grow a baby, so that's like the limiting reagent.
They are trying for it, it just takes a long time to shift gears when you've been pushing "one child only" policies (including forced abortions) for decades and then want to convince people who have grown up as single children that okay, now you can have two kids! three kids!
Would you be surprised if people were slow to adapt, given that they have prudent suspicions about "and what if I have two kids but next year the policy changes back to only one? what happens to me and my family?" because I certainly would not be surprised.
Notably, the "two child policy" and "three child policy" wasn't "you should have 2 / 3 children", they were "it is now legal to have 2 / 3 children".
China has not yet passed a single pro-natalist law; they just took most of a decade to fully unwind their extremely strong and coercive anti-natalist laws.
There's a lot of inertia in the ship of state, and they went from a One-Child (Max) Policy to a Two-Child (Max) Policy to a Three-Child (Max) Policy to only just in July 2021 removing limits on having more children. They haven't yet begun to try to actively boost the birthrate because 14 months ago they were still trying to lower the birthrate.
Secondarily from "it takes time to build pro-natalist consensus out of anti-natalist consensus", there's also a failure of imagination. They don't have any obvious templates to copy from around the world that could double the birth rate using secular means, so they'll need to create something from scratch. The fact that it's unproven weakens the pro-natalist faction in internal arguments because the anti-natalist faction can (somewhat plausibly) claim that the pro-natalist faction's goals are impossible or too costly.
Like Scott, I don’t actually agree that low birth rates are a disaster. The official statistics for China aren’t that different from the West's. However they are trying some ideas, including letting the real estate market correct right now.
Maybe they think the absolute population advantage is sufficient, and in the short run higher population growth would only reduce resources useful for a direct challenge.
To some extent they have introduced radical policies to try to increase birthrates. The "double reduction policy" that came in a year or so ago is a really big deal and directly aimed at this problem.
To give a brief background, China is similar to other East Asian countries in that its education system is based on highly competitive high-stakes exams. So like other countries in the region parents felt forced to sign up their children for all sorts of after-school classes to give them an advantage in those tests. This was a major cost for middle-class families and often cited as one of the barriers to having children.
So about a year ago the central government decided to get rid of this barrier by banning all for-profit tutoring of school-age children in academic subjects. This was a sector of the economy providing millions of jobs and bringing in tens of billions of dollars per year, so destroying it was a really big move.
There's a certain amount of cynicism about other motives behind the policy (increasing social control of what children are taught, reducing foreign influences by cutting off international curriculums and foreign English teachers etc) but most observers agree that increasing birthrates is at least a major part of the motivation.
"So about a year ago the central government decided to get rid of this barrier by banning all for-profit tutoring of school-age children in academic subjects."
Hmm... That is an ... interesting choice. To the extent that the tutoring was just a zero-sum competition for the high-stakes exams, this might have been the right choice, but, to the extent that it actually imparted useful knowledge or skills, the CCP may find itself wishing that they had subsidized the tutoring instead.
They prevented births by abortions and sterilizations, which are one-time interventions, and pretty cheap at that. To encourage (or even require) births would need far more expensive interventions that last over many years, since presumably you want people to not only have children but also rear them to adulthood, which takes many years and costs lots of time and money. You also need a longer-term more complex enforcement regime if you want to enforce births rather than prevent them. It's pretty cheap to know when someone is pregnant and enforce an abortion, but it's expensive to create a regime that figures out when someone could get pregnant and enforces a requirement to do so and go on to give birth.
So I expect the expense of encouraging (or requiring) births above what people naturally want to do is much, much higher than the expense of suppressing births below what people naturally want to do.
Meh, humans are fairly short-sighted. A one-time $20k award would probably get you a decent number of extra children.
An interesting theory, but the empirical evidence is not encouraging. In Germany for example they have Kindergeld, which I'm told is something like 200 euros/month/child until the child reaches 18, sometimes later, and yet Germany's fertility rate is right in the middle of the Eurozone (and far lower than the government would like it to be). I believe a number of Eurozone countries are experimenting with cash prizes, so to speak, of $3k-10k per kid, without as yet a whole lot of success.
One could reasonably argue the money isn't enough -- and if your $20k weren't enough, I'm sure there's a number where it *would* be enough -- but that's kind of my point. You can spend a lot less than $50k/child to ensure that a child isn't born. But the other way around is much more expensive.
I mean I think my view of human nature is jaundiced enough that I would say $200/month for 18 years is a much smaller incentive to get actual behavior today than $20,000 up front.
I didn't think of this at the time I first saw the comment, but I would like to argue that this isn't even that illogical a preference. Babies are expensive. You need a bunch of gear. And right when you're getting that gear, you also miss a bunch of work, usually unpaid. And then you have to either lose an income or pay childcare for a few years. And that childcare is most expensive when they're small, getting progressively cheaper as they approach pre-k age.
Kids get cheap again around when they go to school, but those first few years are rough, and if you're like most people, you're earning less money during those years than you will five or ten or fifteen years later. I think dumping a small windfall on new parents would make a lot more difference to most of them than the monthly payments, even if the monthly payments come out to more in the end.
Then why do people buy annuities? Or invest, for that matter? Maybe you're making some assumptions about the circumstances of the people who can provide the babymaking? Not disagreeing with that necessarily -- you could be right that a flat cash prize would be a better way to go than spread-out incentives, for the people most likely to respond.
But on the other hand, historically speaking, what are the incentives for babymaking? One might argue they are a consistent and modest bump up in social status and power, more like the Kindergeld than the big one-time cash prize. Seems complicated.
(1) In the West an annuity of $2,400 a year for 18 years would probably cost $25,000, not that far off from $20,000, because financial assets grow at about 7% a year:
https://www.bankrate.com/investing/annuity-calculator/
(2) In a Communist regime with less stable property rights, a lump sum is more valuable than it is in a country with more stable property rights.
(3) In the West you can BUY an annuity as an individual; you cannot SELL an annuity as an individual. In the rare cases where random individuals choose between lump sums and annual payments [lotteries], you see a stronger preference for lump sums than you'd expect from the market for annuities, because large chunks of the population are poor and have higher time preference than people with assets.
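A quick sanity check of point (1): the present value of $2,400/year for 18 years, discounted at the ~7% annual asset growth rate assumed above, comes out close to the $25,000 figure quoted (an actual annuity's purchase price would sit a bit above the raw present value because of fees).

```python
def annuity_pv(payment: float, rate: float, years: int) -> float:
    """Present value of an ordinary annuity (payment at end of each year)."""
    return payment * (1 - (1 + rate) ** -years) / rate

# $200/month Kindergeld-style payment, treated as $2,400/year for 18 years,
# discounted at 7% per year as in point (1) above.
pv = annuity_pv(2_400, 0.07, 18)
print(round(pv))  # roughly $24k, in the ballpark of the $25,000 quoted
```

This is just the textbook annuity formula, not a claim about actual annuity pricing, which varies with fees and prevailing rates.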
Part of it's probably the now trading off against the later - children are demographically bad before they're demographically good. If the PRC thinks the decisive time is the next couple of decades (and Xi is incentivised to behave as if it is; he's 69), then it's worth holding off.
Another part is that enforcing a procreation minimum is trickier than enforcing a procreation maximum, due to the fact that not all of the factors preventing one from having kids are within one's control (whereas it is easy to avoid having kids). You almost have to do it with incentives and childlessness taxes rather than direct criminal penalties.
OTOH, tax incentives and penalties are really easy to administer.
That seems ... really hard to do. Are they going to create a national childcare infrastructure? Give everyone a bigger house? Ban women from the workplace? Ban contraception? Government attempts to raise the birthrate almost never work. When they do work, they work only a little, and none (that I know of) worked any better than the Georgian Patriarch pledging to personally baptize your 3rd-or-higher kid.
https://en.wikipedia.org/wiki/Ilia_II_of_Georgia#Initiative_to_increase_Georgia's_declining_birth_rate
Maybe the CCP should encourage Mormonism?
"Government attempts to raise the birthrate almost never work."
I think this is the guts of it. Countries from Japan to Hungary to Sweden all have low birth rates, they've all tried a variety of interventions, none of them have been successful.
The variety of interventions tried are all quite small / weak, whether measured as a % of GDP, as a share of societal status reallocation, or by their expected stability. The case to keep an eye on is Hungary, where there appears to be a stable ruling coalition that does somewhat sincerely prioritize this. As the regime stability becomes more apparent & their existing pro-natalist measures therefore become more credible, I'm expecting that the birth rate will increase considerably over the next decade [unless the ruling party falls or the ruling party abandons pro-natalism].
Do you think that a TFR 0.5 or more above the pre-pandemic (2019) TFR is an acceptable measure of a considerable increase? Under the conditions that the ruling party stays in power and retains pro-natalist aspirations, I predict there will not be an increase in Hungarian TFR of 0.5 or more from the 2019 level by 2032.
As I mentioned in reply to WoolyAI, there is a drastic potential intervention: cutting pension payments to those with 0 or 1 children. But I find it quite unlikely the Hungarian government will try this. I think the government's foreign proponents and detractors alike overrate how different it is from other Central European governments.
I would say that that's a reasonable definition (TFR in 2019 1.49, so you are predicting that Hungarian TFR will be 1.98 or less in 2032).
The problem with operationalizing this as a prediction, IMO, is defining "retains pro-natalist aspirations". I would define a government with pro-natalist aspirations as one that steadily increases the % of GDP spent on child support as long as TFR is below replacement level (meaning that if they are pro-natalist, they have a target that they are trying to reach, and if they get signals that their current measures are insufficient they will try harder).
There is a more radical type of intervention. Alessandro Cigno and Martin Werding are interested in connecting "a person's pension entitlements to his or her number of children and the children's earning ability—proposing that, in effect, a person's pension could be financed in part or in full by the pensioner's own children." (https://mitpress.mit.edu/9780262537247/children-and-pensions/)
The more popular way would be giving a pensions boost to parents with a lot of children; the less popular way would be making a pensions deduction from those with 0 or 1 children.
For more context, maybe see Pensions and Fertility: Back to the Roots: The introduction of Bismarck’s pension scheme and the European fertility decline. (https://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp1734.pdf)
I think it's highly unlikely that any particular government will be the first to try something so unpopular. However, there are around 200 governments, many of which govern countries with low fertility. I predict that it will, eventually, at least be debated more than it is now. (I have no opinion on this because I have no clear opinion on pro-/anti-natalism.)
Well, how are you going to get that into law and action, barring an East German police state with half the population informing on the other half? I don't see how you could conceivably get a democratically-elected legislature to enact all this drastic reform *unless* it was very broadly popular, which means almost all people are really wanting to have more babies, in which case...why don't they just go and do it? Why not just trundle off to the bedroom, lock the door, and make a 3rd baby, instead of going through all this indirect rigamarole?
I mean, if the problem is you'd really like to have a 3rd child, but tax law/inheritance law/the cost of education/the cost of childcare are holding you back, then I would expect people to advocate powerfully for various measures directly addressing the cost of young children. Which...they kind of aren't. The Biden Administration had grandiose plans for climate change, subsidizing daycare, free pre-K education for all, cheaper family healthcare plans, and free community college education, among other things. Guess which survived the inevitable need to compromise? Climate change and healthcare costs. Adult but not particularly prospective parent concerns.
It would be deeply unpopular at first, I expect. But barring a robot-savior scenario or an unexpected burst in general tax revenue, what's coming instead will also be unpopular. Doing nothing would be highly unpopular (because it means leaving the elderly short of money needed to stay in their houses and afford medical care). Putting the elderly into hostels to be looked after by immigrants from young countries would be highly unpopular.
Apart from an existing voter cleavage along family size, what would make a politician take the risk of proposing it? Probably the interpretation that this is basically a collective action problem among young couples, and solving it would be mostly popular among those couples 1) if they realize what the (above) alternatives are and 2) if there is a baby boom before the next election.
If one young couple has one or two additional children, this won't help their retirement funding unless the children, upon becoming wage earners, choose to pay their parents.
However, if the majority of young couples had one or two additional children, there would be a notable impact on the demographic pyramid. Each retirement would be rather easier to fund. There would finally be plenty of workers to take care of the retired, too.
So a politician might gamble that the policy would be mostly popular among younger couples for solving the collective action problem, and mostly popular among older couples who have enough children. That sounds like a large share of the voting population of most low-fertility countries. People like me, who would be upset and inconvenienced by this because they are childless for non-financial reasons, are a relatively smaller group.
I'm not sure what you mean about informers. Don't you think OECD governments mostly have adequate records on who has what number of children, through birth certificates, censuses, taxes, immunization records, school records, etc., already? Governments need this information at present, to provide the child subsidies that already exist (which in most countries seem to be having limited effect on TFR).
Come. It would be deeply unpopular for *decades* because that's how long it would need to go on before any economic results visibly showed up. At the very least, you need your first bumper crop of babies to become productive tax payers and start forking out Social Security in goodly amounts, which for most people doesn't even happen until their 30s or 40s when they start to make good money. There's no way such a policy could be voted in, or, having been voted in notwithstanding the polls, wouldn't be promptly reversed by the legislators who replaced those who had voted in defiance of the popular will.
And, as I said, if somehow people *did* start to realize demographic implosion was bad for the future, and had sufficient long vision and discipline to do something about it, like endure a big intervention in private life by government, why would they not just take the easy step of...having more babies on their own? I mean, babies are kind of fun. Plus there's more to be gained from grandchildren than their SS taxes, if they're related to you.
It's an interesting problem-- an economy gains temporarily from people having fewer children. For a while, the proportion of working adults goes up-- there are fewer children to take care of, and large numbers of adults haven't aged into retirement yet.
Then the bill comes due, with a smaller work force, and people who need help are becoming more numerous, people who aren't going to mature into workers.
Any society which tries to reverse this is going to have to increase the number of dependents (additional children) while having less productivity to support them. Eventually, they'll reach maturity, but we're talking about 20 years or so, I think. Maybe a little less if they go with the premise that there's a lot of useful work (including mental work) that can be done younger than we want to accept these days, but not a lot less.
I thought communism was intended to be more cooperative than competitive and to include stable social supports. What went wrong?
China has not been a communist nation for quite a long time, even if the ruling party still calls itself a communist party. It is currently a mixed economy dictatorship, less capitalist than the U.S. but not by a lot.
Also, whatever communism was intended to be, it ended up in the Soviet Union as a society of extreme inequality, a couple of poor first world cities and a vast third world hinterland.
Was Soviet inequality extreme? I got the impression that it was substantial, with party bosses having vacation homes and a fair amount of money. But not extreme compared to eg many third world countries where the elite is very rich
Are there third world countries where the elite have their own traffic lane? My understanding is that that was the situation in Moscow.
My picture is mostly from the book _The Russians_.
A strong central government has a *lot* of levers (of varying degrees of evilness) that it can use to increase the birth rate. China's currently not trying to, either because it thinks it isn't important OR doesn't think it's strong enough to survive trying to use those levers.
Levers include:
Significantly hiking base rate taxes but then providing large tax rebates to parents per biological child so people with 2 bio children have the same tax rate as before the reform and those with more pay less tax.
Blocking children from going to university unless they have at least 2 full bio siblings.
Outlawing abortion & contraception.
Banning childless women from professional employment.
Etc.
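The first lever above has a simple arithmetic shape. As a purely hypothetical illustration (all rates invented for the example), set the per-child rebate so that a two-child family's effective rate equals the pre-reform rate:

```python
# Hypothetical numbers for the revenue-neutral rebate scheme sketched above.
OLD_RATE = 0.30  # assumed pre-reform flat tax rate
NEW_RATE = 0.40  # assumed post-reform (hiked) base rate

# Per-child rebate chosen so that exactly two children cancel the hike.
REBATE_PER_CHILD = (NEW_RATE - OLD_RATE) / 2

def effective_rate(children: int) -> float:
    """Effective tax rate after per-child rebates, floored at zero."""
    return max(NEW_RATE - REBATE_PER_CHILD * children, 0.0)

for kids in range(5):
    print(kids, round(effective_rate(kids), 2))
```

Childless taxpayers pay the full hiked rate, two-child families are held harmless, and larger families pay less than before, which is the incentive gradient the lever is meant to create.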
I'm reading through the western canon, and I feel like I'd have a much better time if I had at least one friend that has done (or is doing) the same thing that was interested in talking about it. How do I find friends who want to talk about great books? In SF, btw.
(If this is you, let me know!!)
The reading list I use is this one, because I purchased this set of books a while back: https://en.wikipedia.org/wiki/Great_Books_of_the_Western_World
I'm also interested! And in the Bay Area.
After taking a quick look at the SJC list, I'm curious what drew you to those works, rather than something more modern?
I'm a bottom-up learner, so I really don't feel like I understand something unless I know the fundamentals underlying it. Since so much of our culture is built on the bedrock of these works (and because they still hold so much relevance after all the ages), I have been really enjoying working forwards in history from the far past! It makes me all the more excited to read more current works and be able to see the echoes of these past works reverb through them.
I'm interested! Live in the Bay Area
Awesome! You can send me an email at taylorlapeyre at hey.com. Maybe we can meet up some time!
I'd be interested if you don't mind chatting/calling online (I live in Vienna)! You can message me at emmett.rocks with the usual gmail ending!
Is being located in SF essential, or are you also open to meeting online?
I'd be happy to meet online! But I think programs like the Catherine Project are much better at coordinating those than I would be. What I'm really searching for are buddies in my immediate community to become friends with.
How do you define the Western Canon? I looked into doing this a while ago but struggled to find any kind of definitive list.
There is no such thing as a western canon, there is a different literary canon in every European language. You can't read French poetry in English. You would have to be a polyglot to read "the Western canon".
I'm using the Saint John's College reading list. My list for the first couple years is here https://www.notion.so/projectwren/44ad5a8fe5494b6ba71b9b07e294976d?v=54c0bc0492dc4b3dbb2db6233f12eead
I've tried to read a lot of those books and gave up on most of them because they're all super boring. How do you manage it?
Hm, I guess I just don't find them boring! They make me think in ways that normal books or articles don't.
I'm not sure how well regarded this source is but looks reasonable to me: https://en.wikipedia.org/wiki/The_Western_Canon
There’s a St John’s University affiliated great books reading group that meets weekly near Powell St BART. I attended for a while pre-pandemic and the discussions were excellent. They were on book three of In Search of Lost Time when I dropped in. Might be worth a google. I’m not sure how else to find them at this point.
Ah, this would be perfect. I'm actually using the SJC curriculum as my reading list. I'll do some googling as well.
Look into the Catherine Project. Their tutorials might be what you’re looking for.
Been doing it for about a year now! On my third tutorial — fantastic organization. But I was thinking more about making actual friends with similar interests.
Latest updates in the field of 3D printing firearms https://www.youtube.com/watch?v=_dBJUifMtTA&t=669s Here's a hobbyist who's printed a copy of the MP5, the legendary late 20th century German submachine gun. (You've seen it in a million movies even if you don't know what an MP5 is). As far as I can tell it's quite functional. He has another video where he's printed his own AR-15 and it clearly jams quite a bit- on the other hand, I think we can see where the general direction of technology is going here.
Seeing as this blog likes to write about the latest AI updates, I thought people might be interested in where the field of essentially creating semi-automatic weapons at home is going. They seem to be overcoming problems with printing the receiver specifically, which is traditionally metal. (Perhaps Glock will be an inspiration here!) It's also possible to cast your own bullets at home too.
Anyways, I'm neither praising nor criticizing, but simply noting real-world advances in the field. I'd imagine the ability to create one's own semiautomatic weapon at home will be widespread in a decade or so, definitely two. This has some policy implications!
I'm not going to watch the video, but I'd wager quite a bit of real money that he didn't 3-D print the barrel, or the bolt.
And if you can't do that, nothing else really matters. Yes, *at the moment*, in the United States, you can buy gun barrels and bolts over the counter (or internet), but that will change as soon as it has to, to prevent 3-D printed guns becoming more than curiosities. It isn't written by the hand of God, or even the Founding Fathers, that the law can only restrict the purchase of lower receivers.
I would suggest that maybe PLA isn't the best thing to make a gun out of.
There are plenty of examples of improvised firearms from long before consumer FDM became popular.
e.g. this pistol made from junk in Papua: https://web.prm.ox.ac.uk/weapons/index.php/tour-by-region/oceania/oceania/firearm-383/index.html
So I think that both sides of the debate are misdirecting their attention.
I've often heard that it's cheaper to just buy a black market gun where it's illegal, but in Japan that recent assassination was by a guy who made a homemade smoothbore (like a shotgun) out of some pipes, and supposedly with an electrical primer.
The gun in the video uses an off-the-shelf barrel and bolt assembly, so I don't think you could make one in a country with restrictive gun laws like Japan. Anyone know how hard it would be to 3D-print the whole thing?
As you mention, the sticking points with a lot of homemade firearms are the barrel, chamber, and bolt, all of which must withstand high pressures and impacts while being pretty precisely shaped. They're essentially impossible to 3-d print in plastic (at least, if you want to fire the thing more than once!) 3-d printing isn't the only trick in the book, though. Recently there has been a lot of innovation using electrochemical machining to make rifled barrels and chambers to quite high levels of precision. This video provides a decent overview https://www.youtube.com/watch?v=TSM6fBdmuso
TLDR- OODA Four Humor book
Kitsch? No, but with the saving leaven of kitsch that makes genre easier fun than classics.
'The EMS OODA Loop' by Brian Sharp.
Sharp is an experienced Paramedic, topped out, Flight Certified Paramedic. Observe, Orient, Decide, Act is simple enough to be fed to Marines with their crayons. As an inexperienced first responder I could see things I should have done. A useful checklist for writing reports, and if I ever remember it at an incident I will use it.
It's not hard to see Observe as Melancholy, Orient for Sanguine, Decide for Choler, Act for Phlegmatic. The old Four Humors, the OODA loop, tomayto tomahto.
Sharp writes clearly but with no style at all. I idly fancied him locked in some silent cell lit by the last burning copies of Strunk & White, forced to rewrite every paragraph per Fowler's 1926 'Modern English Usage'.
Nature is basically admitting to explicit efforts at censorship of politically incorrect research findings.
https://www.nature.com/articles/s41562-022-01443-2
Not surprising of course, and if anything it's refreshing to see it stated so (relatively speaking) plainly, but this is going to make some of the most politically salient behavioral research harder to conduct/promote and skew people's perceptions of what is true (to the extent that they actually care about what research says beyond its use as ideological confirmation).
Science magazine admitted to doing that, in the context of "here is why we decided not to censor this article," more than two years ago
https://daviddfriedman.blogspot.com/2020/08/covid-invisible-elephant-in-room.html
When you actually read the guidance, they don't seem to be censoring research at all. They're just asking that you be careful in how you report it.
Kenny, what about this section?
"Although the pursuit of knowledge is a fundamental public good, considerations of harm can occasionally supersede the goal of seeking or sharing new knowledge, and a decision not to undertake or not to publish a project may be warranted."
It's difficult not to read that as explicitly supporting the prior suppression of research ("a decision not to undertake...a project may be warranted") as well as, of course, a decision to suppress its later publication. To be sure, they say all the things you say about style and tone up higher -- that manuscripts should not be malicious or thoughtlessly written. But they *also* say what I've quoted above, and that, at least for me, is where the immediate stink of dishonesty arises, regardless of the amount of soothing and commonplace rationality that precedes it.
It's as if one read a nice long opinion from the Committee on Public Safety, about how important it is to treat everyone with dignity and respect, and follow the law, and respect established social norms, blah blah motherhood and apple pie, but of course every now and then we might just have to relocate a few difficult individuals to a re-education camp for the collective good.
If Nature didn't *mean* that bare statement to mean what it seems to mean, they are very good writers, and I'm pretty sure they could have phrased it to avoid any possibility of misunderstanding. I don't think they did, because I think they really mean it. They really do think it's right and good for some projects not to be done, or not published, regardless of whether they are honest and factual, because of downstream consequences.
There may be arguments for accepting that proposition -- none come to my mind, and indeed I recoil from it as blasphemous to the concept of science and free inquiry -- but I just find it difficult to believe that proposition isn't what they mean, given what they wrote.
It would be fascinating for someone to test it empirically, e.g. by submitting to Nature two nearly identical pieces of synthetic scholarship which merely varied in whether the "facts" they "uncovered" supported or did not support the fashionable shibboleths. That would be real and valuable social science research.
Thanks for identifying that specific point that I had missed.
I still see nothing about dishonesty there, just a general concern for the effects of one’s actions. Research that can’t be expected to help and can be expected to hurt shouldn’t be done. Nothing false should be said.
I think that fact makes it hard to test your hypothesis, because it’s not going to be possible to submit two papers of the sort you mention unless at least one of them is fraudulent.
I spoke poorly. There's nothing about what they wrote that is dishonest. The stink arises from the fact that I find it very difficult to believe, people being the way they are, that once you accept that it is permissible to suppress research at all, it will always (or even mostly) be the case that it will be suppressed only for the purest and most disinterested of motives -- and not because that suits current shibboleths, the interests of the current in-group, political or economic convenience, et cetera. That is, I find this planting the seeds of future dishonesty.
That's why I tend to be a free-speech absolutist. Once you admit that OK well some speech is "too dangerous" to be allowed in a republic, it's a terribly slippery slope down to the Alien and Sedition Acts, and eventually Minitrue. People are just not angels, and that kind of power just corrupts. I don't find it plausible that the editors of Nature are going to be unusually angelic.
I'm made even more suspicious that it doesn't seem to occur to *them* that they might have a problem with that, which bespeaks insufficient humility, or (circling back to the dishonesty miasma) that they are being disingenuous (admittedly an interpretation more negative than I think plausible). In science it's been a credo for a long time that the human tendency to bullshit yourself is so strong that we have to go to unusual and fanatical ends to ensure that we don't, or at least that it is held in check. Giving yourself the power to say well, if this is Evil Research in some not easily measurable way, just according to my theories of downstream effects, then it must not see the light of day, seems like going in the opposite direction -- trusting far more than history justifies in the ability of men to be objective.
I had in mind that *both* papers would be fraudulent (that's what I meant by "synthetic" scholarship). It might be hard, but it's been done before, e.g. to test perceptions of bias in publishing or employment, and then there is Sokal's famous hoax. I'm sure they leave a bad taste in the mouths of editors, but I don't see it as any more unethical than scads of undergraduate psychology research, provided everyone is debriefed afterward (and the papers themselves sufficiently anodyne).
No journal commits to publishing every paper that is submitted, on-topic for the journal, done correctly, and accurate in its results. It always matters that the results have sufficient amounts of novelty and interest. This is already a commitment to "suppress" research that isn't of interest to current researchers in the field. Some people object to this on grounds similar to the ones you mention, that this makes academic fields prone to fads and fashions. I think this isn't a bad thing, because these fads and fashions are ways to coordinate research on topics in a way that is more effective than spreading the field too thin. But regardless, I don't think there is anything qualitatively new about this explicit new rule, at least for the dynamics of research.
Come, that's not a reasonable gloss on "suppression." Things are "suppressed" when they are hidden *even though* they are of interest to end consumers. If end consumers aren't interested in the first place, it isn't suppression.
I mean, I've heard that dysfunctional use of "suppression" from cranks all my professional life. "Phys. Rev. Lett. rejected my paper "proving" the Second Law of Thermodynamics is wrong and is established only because of a giant conspiracy between the Illuminati and Roman Catholic Church. Suppression of free inquiry!" Er...no...it's just that nobody gives a rat's ass about crank theories without very heavy evidence, which you haven't provided.
And if that's all they were doing -- not publishing stuff nobody cared about -- then (1) I wouldn't object, for just the reasons you state, but (2) they wouldn't have to issue a manifesto about it, because that's already been part of scientific publishing for centuries.
What they are explicitly saying is that *even if* the article meets all our other criteria for publication -- true, sound, based in fact, of relevance to current scientific debate and/or of interest to our subscribers, stated in a respectful and objective manner -- we *still* might not publish it, because we have imagined certain downstream effects that we think are bad for society, because someone died and made us Tsar or something.
If you don't think that's troubling, try imagining it with the ideology on the other foot -- try imagining the NSF in a future Joe Fundamentalist Administration saying it's not going to consider grant applications that, while otherwise meeting all their criteria, might reveal that there's a genetic component to being gay, because revealing that truth would cause downstream "harm" to their efforts to get every person who professes to be gay into "conversion" therapy.
Pretty ugly, no? If you're relying on the pure motives and sterling character of the gatekeepers into whose hands you have given the power to say what gets said and what doesn't, this...does not have good historical precedent.
One late-breaking thought on this - there seems to be a lot of consensus in the thread that this is a very bad move on the part of Nature. And I'm inclined to agree; "censorship = bad" is a pretty strong belief of mine.
But, for the sake of argument, do any of you think there might be some legitimacy to this move (or that it might at least be more legitimate) given how shoddy science reporting can be?
I shared this elsewhere, but I'm putting it here too because it's funny (https://www.youtube.com/watch?v=0Rnq1NpHdmw). If this is your media environment, is there any shade granted to Nature's decisionmaking?
"We refuse certain studies on race/gender because our ideology opposes them" is one thing.
"We refuse certain studies on race/gender because the media picks up all our work, reduces it to clickbait headlines, and before you know it half of the people you know think potatoes cure erectile disfunction, and that's funny when it happens with potatoes, but way less funny when their lazy takeaway is 'racism/sexism is *totally* based on science," however, is very much another.
Does that impact anyone's evaluation of this? I still think censorship is the wrong way to approach these issues, but I think acknowledging that the issue exists at least gives me some sympathy for Nature's position.
This doesn't seem to be what's going on. They aren't asking people to censor their research at all. Just to avoid phrasing it in ways that are derogatory to certain groups. No actual research would be censored, if it is phrased in ways that are accurately supported by the data gathered.
1- I'm never sympathising with a woke institution, Nature is pathetically and obviously doing this for <wink wink> reasons, and those reasons are not good and not respectable. They can fuck right off with their "eVeRy ThiNg iS pOliTicAL" bullshit and shoving their pet issues into science. No convincing steelmanning can be found when you're unironically saying things like "Scientists must consult with activists and advocacy groups". If you want sympathy for Nature, don't waste your time reading the rest.
2- More interesting questions : What should we do about bad science? Or bad reporting of okay science? Well, the answer is incentives of course.
A- First, there is the massive competition among scientists, the "Publish Or Perish". As a guy who likes small societies and small local governments, I'm inclined to say this is the inevitable result of centralization of scientific prestige and the general Moloch-ness of large scientific institutions, but this misses the very real point of resource scarcity. Science, and knowledge seeking in general, is fundamentally an idle pursuit. Just like the friends you make spontaneously without meaning to often turn out to be the best friends, the things you discover spontaneously without meaning to tend to turn out to be the best science. Scientists ideally don't have to justify themselves and beg for grants. Competition for metrics is a kiss of death to any community except possibly extremely narrow things like Chess and competitive programming puzzles.
I don't know what can be done here, I can just unhelpfully say "Just invent post-scarcity civilizations bro" but this is clearly not actionable and might not even solve the problem, as scientists can always find other scarce things to compete over.
B- Second, on the reporting side, there is the issue of pop science porn. Again, my root-problem-seeking side will just notice that this is the inevitable result of the ever-more-extreme division of labor of a complex civilization, you can't escape over-simplifications and ignorance with specialisation, they're the name of the game. But, mainstream media is so very bad that it's not hard to beat them.
My recommendation if you want to get good science reporting and also keep your labor-divisioned civilization intact: fund the shit out of volunteers like Kurzgesagt and Veritasium. Those people manage to beat the living daylights out of *professional* science reporters with nothing but youtube revenues and sponsors. They are a proof by construction that entertainment is not mutually exclusive with fidelity, and that not all over-simplifications are created equal. Fund them, join them, hire them, whatever.
The bigger problem is giving a shit in the first place. I don't think the morning shows or the tabloid papers give the smallest shit about how accurate their reporting of science is. I can almost hear them say "Who gives a fuck bro, none of this is remembered for 5 seconds, go touch grass". How are you going to make them even acknowledge there is a problem? How does the awful and atrocious coverage of science material in K12 education contribute to and sustain attitudes like this towards science in general? Those aren't easy questions. Good science reporting is a solved problem, there is *always* that one guy/gal who's just begging to explain that Very Complex Topic to a general audience, they exist in abundance, I suspect I even have this bug when it comes to topics I love in computer science. The bottleneck is Who Gives A Shit? Very few, relatively speaking.
3- A much bigger question than the previous : Can Knowledge ever be harmful?
In a very small nutshell, yes. Any intelligent agent will process sensory inputs and respond with behaviour, so of course anything you know can potentially change your behaviour for worse by any definition of "worse". In computer security, any untrusted input to a computer program is a potential source of vulnerability, up to and including an attacker hijacking the program entirely and executing arbitrary code of their choosing. If we conceptually regard a human brain as a program and the world as a huge source of untrusted inputs, then of course there is, for every conceivable brain-type, something out there in the world that, if known, will make it think and/or behave worse, for every possible definition of worse.
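The computer-security half of that analogy can be made concrete with a toy sketch (purely illustrative; the function names are made up). A program that feeds untrusted text straight to `eval` hands the input's author arbitrary code execution, whereas one that parses the input and whitelists what it will accept does not:

```python
import ast

def naive_calculator(expr: str):
    # DANGEROUS: eval executes whatever Python the input contains, e.g.
    # "__import__('os').system('...')" would run a shell command. This is
    # the "untrusted input hijacks the program" failure mode.
    return eval(expr)

def safe_calculator(expr: str):
    # Safer: parse the input first and allow only literal-arithmetic nodes,
    # rejecting anything else (names, calls, attribute access, ...).
    tree = ast.parse(expr, mode="eval")
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)
    for node in ast.walk(tree):
        if not isinstance(node, allowed):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    return eval(compile(tree, "<expr>", "eval"))

print(safe_calculator("2 + 3 * 4"))       # 14
try:
    safe_calculator("__import__('os')")   # a Call node: rejected
except ValueError as e:
    print("rejected:", e)
```

The analogy in the comment above is that a brain has no equivalent of the whitelist: it cannot refuse to "parse" a harmful true fact once it has seen it.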
But it's not clear what to do about this. Consider a video of a kitten abuser, is it good for me, a kitten lover, to watch it to be filled with righteous anger and mobilised to help kittens, or is it bad because it might make me suicidal and devoid of all hope in human kindness? Is it good for Darth, a kitten abuser, to watch it because it encourages and reinforces his behaviour and provides him with an example to follow, or is it bad because the reaction to it will serve to show him how hated he and his behaviour are ? Difficult to say, and the answer varies by question and type of people, and you can't easily do controlled experiments.
You're acting like this is a one-way street. By NOT publishing certain research, an at least equally bad ideology of e.g. blaming white people for black people's problems (and all of the policies implied by this) is seen to be vindicated. The lazy takeaway now IS that the science supports them, and that was with some studies showing that this is untrue. But without these studies, things are made only worse. It seems like you're simply defending them because you happen to fall on the same side of a two-sided issue (while pretending only one side exists).
One way to tell if this is Nature's issue is if they are concerned with instances of partisans and journalists grabbing the conclusions from some paper they didn't read to push a political agenda, or if it's only particular partisans and particular political agendas that are the problem.
Not a particle. Even if I believed this was their goal, and it wasn't instead a squalid little issue of virtue signaling to bolster their feelings of relevance in an era when traditional scientific publication is under great pressure from arxiv and open-access publication (not to mention tweets and blogs), the proposition is egoistical and wicked.
Choosing whether to say the truth or not, or how to say it, based on how unknown strangers will react, is manipulation, propaganda, a form of deception -- an attempt to get people to think something other than what they naturally would, when given a particular set of facts.
There are certainly times and places where that is a necessary evil, and will do some good -- e.g. I'm thinking of the general restraint of news outlets these days in broadcasting the details of a suicide, on the reasonable grounds that it encourages copycats and serves hardly any useful purpose -- but a scientific journal has no business getting into that beyond a core insistence on phrasing and discussion being strictly fact-based, highly restrained as to speculation, and highly avoidant of imprecise or emotionally laden terms -- all of which have been standard for scientific publication since Isaac Newton and with which I agree. Not only do they lack any shred of competence in deciding how and when to manipulate for the greater good, they lack the responsibility, and it is anyway outside their core mission -- which is to publish the truth, and nothing but the truth.
It very rightfully makes people suspect them of being willing to compromise on what "the truth" is, and from sins of omissions to sins of commission in that regard is not so very far a distance that people would comfortably rely on them never crossing the line.
This is awesome. Now I can credibly claim that all science is fake news, recruiting for my sex cult just got so much easier.
I believe this is the 3rd slogan raised on the beautiful marble exterior of the Ministry of Truth: "Ignorance Is Strength." You would not want people divided, arguing, skeptical of each other and of the wisdom of our experts from experiencing any nasty barrage of data which merely happened to be measurably true, would you? That way lies social chaos, surely. Debate, disunion, a failure to all agree on the same ideas, the socially debilitating insistence on individual liberty and freedom of conscience that retards social progress, weakens the collective will, saps the strength of the state. Better to think carefully about whether there are indeed things We Should Not Ask About, or, to quote from the article itself:
"Although the pursuit of knowledge is a fundamental public good, considerations of harm can occasionally supersede the goal of seeking or sharing new knowledge, and a decision not to undertake or not to publish a project may be warranted."
That's plainly said. Not all questions are permissible, and not all answers should be shared. And people have thought this way for most of human history, barring the weird three-century interlude after 1665 or so. That we might return to the more nuanced view of "the truth" that is in our nature is hardly surprising. It takes an unusual and practiced fanaticism to follow the facts wherever they may lead, and no matter the consequences.
Remind me again, back when all the fuss over embryonic stem-cell research was going on, and religious groups wanted it not to be publicly funded because they considered it unethical and immoral?
And all the scientists in favour of it told them that, fundamentally, they could blow it out their ears, science wasn't to be hampered by none of this social considerations morality shit?
How the turn tables, indeed.
Not quite the same issue. I think the argument on stem-cell research was that the research itself was immoral, like medical research on patients who have not consented. Doing the research and not publishing the results would not have solved that problem. The argument this time, I think, is that telling people about the results of the research is immoral.
Well, that's why public funding of science is a tricky proposition, and you certainly wouldn't want just the scientists deciding what to fund. I never listen to interested parties explanations of how their position is actually objectively the most ethical, anyway. By me if your salary depends on a particular view of an ethical question, it's asking a lot for you to give any kind of objective evaluation.[1]
But I'm less unhappy about that particular froofrah. Scientists are supposed to clamor for funding, and pursue every interesting angle, and devil take the hindmost. Look! If we put two U-235 atoms together, they tell two friends, and they tell two friends -- kablooie! Isn't that cool? Let's try it...!
That's what they're going to do, and that's their expected role, and that's fine. I can count on them to be eager beaver amoral learning machines. And on the other hand, I can count on philosophers and thinkers to ponder the question of Should we? and give strong opinions about it, and I strive for hope that I can rely on politicians to sum up the philosophers' reservations, the scientists' enthusiasm, the mood of the public, the needs of the future, and make some reasonable decision. That's what we pay them for.
Things get all muddled when people won't play their roles. When the scientists attempt to be amateur philosophers and politicians, when the politicians play amateur scientist and/or minor prophet ("Only I know what God wants!"), or when the voters can't be bothered with taking ultimate responsibility for what they want.
----------------------
[1] The sneering is another issue entirely. I have known a few scientists who think that way, can barely keep their contempt for the unwashed masses who pay their salaries under control, and I keep calling at each meeting of The Brotherhood for these bad apples to be horsewhipped and branded on each buttock as a warning to others, but alas the Committee on Public Safety keeps tabling the motion. I point out if we do not police ourselves we will be policed, only more roughly and indiscriminately, but people just want to talk about the Christmas social. I dunno it's almost like being smart in one area gives the average hominid the fatal delusion that he's smart in all. Not a good design -- I shall have words with the Creator if and when I get the chance.
But you see those morals/values were the morals/values of bad guys. My morals/values on the other hand are excellent and without err.
I am thinking a magic fairy that makes all journals who do this put "Ignorance is Strength" as a subhead would be fun.
Yup!
Edit: Not being glib or sarcastic, just agree wholly with your comment, and sometimes it's just nice to know someone read and agrees with what you wrote.
What a kind impulse. Thank you
I'm finding this take increasingly tiresome.
The branches of science that this is going to affect are mostly going to be the social sciences. As the replication crisis has shown us, the overwhelming majority of social science is junk. It is not unreasonable (nor is it "censorship", except in the broadest sense) to hold research that produces an antisocial conclusion to a higher standard of proof, especially when it's statistically likely to be false knowledge.
And yes, there are going to be cases where it's misapplied to harmful effect, because that's an innate property of any form of bureaucratic oversight. But I don't think it's unreasonable on the whole.
The example I spotted two years ago, in _Science_, not _Nature_, was in medical science.
https://daviddfriedman.blogspot.com/2020/08/covid-invisible-elephant-in-room.html
The editors made it clear that they had considered refusing publication not because the article was wrong but because the knowledge would be misused, although they decided the knowledge was valuable enough so it was worth taking that risk.
Good catch, the writing was on the wall there.
Easter Island Syndrome. Many people have wondered why, when their timber supplies were dwindling and things were not looking good, the Easter Islanders spent their last remaining resources building giant freaking god statues. But I think this is a common human impulse: if you can't solve the problems that are within your purview, you...go big. Gigantic, if possible. Like Hitler in his bunker with the Soviet tanks 300m away and a dozen men remaining under his command, dreaming of The Super Weapon that will turn it all around.
I think when people (and institutions) start seeing the basic tasks within their ambit as slipping beyond them, they start grasping at grandiosity, hoping for some miracle reversal. In this case, Nature, like all scientific journals, is in trouble and has been for many years, because they're being disintermediated by the Internet. Who *needs* to subscribe to Nature any more? Who needs to compete for publication in their pages? Increasingly, the answer is...not as many as you'd think. Not as many as they'd hoped. Not enough to hire another assistant editor at a nice £90,000 salary, or sponsor a Mediterranean working conference. So it does not surprise me to find them acting a little desperate, grasping at ways to suddenly become a lot more relevant than they are.
Hangover from the Whole Earth Catalogue, where it was said that we are as gods and might as well get good at it.
>The branches of science that this is going to affect are mostly going to be the social sciences. As the replication crisis has showed us, the overwhelming majority of social science is junk.
The replication crisis is overblown, and most people wildly overestimate the rate of successful replication in "non-junk" sciences.
Additionally, the kind of research disproportionately affected by these policies is stuff like psychometrics/intelligence research, which is amongst the most rigorous and empirically validated areas of research in social science.
>It is not unreasonable (nor is it "censorship", except in the broadest sense) to hold research that produces an antisocial conclusion to a higher standard of proof, especially when it's statistically likely to be false knowledge.
And how on earth do you define "anti-social conclusion"? Because it really, really sounds like you're implying that contradicting PC beliefs is "anti-social". I think it's "anti-social" to blame white people for black people's poor socio-economic outcomes when the science doesn't support this.
How do you distinguish an "antisocial conclusion" from a conclusion that inconveniences your political faction or offends you personally?
I know this is intended as a "gotcha" question, but for the former, you basically can't. Politics isn't some abstract thing neatly separable from society, to the point where it's not unreasonable to *define* politics as "the direction you think society should go". I would hope that most people are basing their politics on what they think is morally good for society.
>I would hope that most people are basing their politics on what they think is morally good for society.
I doubt this is what most people are doing with politics, nor what we even want most people to do. I think mostly they base it on what they think it good for them, and then latch on to some stories that pretty up that behavior as "morally good".
I mean, I was going to add a second paragraph to the above comment responding to the "offends you personally" quip by saying "any aspiring rationalist should be able to recognize that no one is able to truly construct a system of morals and ethics that doesn't ultimately boil down to how they *personally* feel about things on some level", so I don't disagree with you here.
I don't think the people making these decisions really care about it that much. They don't want to be piled on on Twitter and don't want to be one of the "bad people".
You are assuming their value of science/truth has a much higher value in their telos network than it does. Academia is absolutely overrun with people where advancing knowledge/understanding is a distant distant priority among their goals.
I think it's important to emphasize that part of my point is that not all studies advance an understanding of truth - in fact, some do the opposite (just ask Dr. Wakefield), and that's especially common in human psychology. There's utility in trying to minimize the harm of that false knowledge.
I mean, if your general position is: "Hey, the research we publish in our journal is wrong 40% of the time, so we are going to be super careful about what we publish if we think it has negative impacts on the world at large", I get that.
But 1) I doubt that is what they are thinking. And 2) Isn't the main solution there to raise your standards and be more picky, not to start inserting more ideology into your selection process?
I mean, figuring out how to do (2) effectively and efficiently is kind of the biggest open question in scientific publishing right now, so I think they have to be forgiven for not immediately solving that.
As for (1), I'm giving them the benefit of the doubt of not wanting to openly say "most of what we publish is actually garbage, whoopsie", and if it means I'm inappropriately steelmanning to counter others' straw/weakmen, I'll own up to that accusation.
So some research is done that undermines one of the tenets of AGW. But that is settled science, we shouldn't publish it because of the 'harm' it will do. (Would you have restricted research into nuclear physics, if you knew about the harm of the atomic bomb?)
>Would you have restricted research into nuclear physics, if you knew about the harm of the atomic bomb?
I alluded to this further down in the thread, but Eliezer (ironically?) suggests almost exactly that in HPMOR, and again in Three Worlds Collide, that knowledge of nuclear physics should have been restricted to a conspiracy of science for those who were trained in the methods of rationality, because of the harms it did to society. It's an interesting concept.
So your answer is: Yes, or maybe? I'm hoping nuclear energy (fission) will be more useful than nuclear weapons are harmful... but that's still an open question. (It's going to be hard to hide fusion... since how else does the sun work?)
Yeah, you know, we tried that "conspiracy of knowledge for those who are trained in the arts". It was called alchemy, and it succeeded in obfuscating what it was about so well that there are still multiple interpretations of what the symbolism and terms and processes meant.
If Yudkowsky's method had been adopted throughout history, we probably might be at the stage of - ah, but no, I cannot reveal to the vulgar gaze the sacred hidden mysteries! Who am I to draw back the veil of Isis for the profane and those who have not risen through the apprenticeship to mastery?
Sure, but that idea was put to a practical test and failed laughably. The US government did everything humanly possible, within (broad) interpretation of the law, wartime emergency powers, and almost unlimited willingness to spend money and use force to restrict the knowledge of physics necessary to build atomic bombs from the moment of inception of the MED.
And how did that work out? The Soviets had a bomb within 4 years of Trinity, and even if everything had been published openly it would hardly have taken them much less time, just given the necessary construction of industrial plant and plutonium farming. It's not clear the enormous security effort delayed Soviet acquisition of the technology by a month, let alone the decades you'd need for this to be any kind of plausible idea in the real world.
Indeed, I can't think of any recorded historical cases of secrecy retarding the development of atomic weapons by any nation that is willing to go to the (very large) expense involved. Nor can I think of any other dangerous technology that has ever been kept secret for any significant length of time, once it is known among a modest group of individuals. Zero-day exploits are yet another example.
I find this rationalization increasingly tiresome as well.
People discussed how the social sciences suck since the freaking 1980s without the gay "Muh vulnerable groups" tones that ooze out of this article (characteristic of activist "science" or "tech").
When Philip Tetlock showed that most political "experts" are no better than flipping coins on average, he wasn't raving about how that harms queer folx. He talked in detail about how they got predictions wrong, how they re-wrote their predictions after the fact to make it seem like they got it right, how it's a disaster that people like this are in charge of most governments and other powerful organizations, then he discussed pretty actionable measures to hold supposed experts to better objective standards. At no point did Tetlock ever advise "if your experts are saying the wrong things about $PET_GROUP, that's a clear sign that they're wrong".
There is not a single line in this article that says that only wrong or fraudulent social science should be rejected. (This is a challenge, find me a line that you think says otherwise and if 5 people agree with you I will say that I'm a dumbass who can't read.) In fact, almost every single paragraph begins with the (implicit or explicit) acknowledgments that "harmful" science may be perfectly true and pass all traditional tests of good results in its respective field, and that still doesn't make it okay or publishable.
From the TFA:
>Sexist, misogynistic and/or anti-LGBTQ+ content is ethically objectionable. Regardless of content type
So, monkeypox is 98% a gay epidemic and was started by the sexual practices of West European gay men. This is, by any reasonable interpretation, an "anti-LGBTQ+" fact; it relates a negative thing, a disease, to the coddled population and its lifestyle. Reality just so happens to be anti-LGBTQ+ sometimes, and this fine article is saying that LGBTQ+ feelies override reality. Or are we allowed to notice only anti-LGBTQ+ epidemics but not other anti-LGBTQ+ things? They didn't mention that if true, and I find it hard to see how epidemics differ from any other unpleasant and unwoke fact so as to merit an exception.
Finally, it's amusing and instructive to see the kinds of groups that they say can be harmed by research.
- Ctrl-F for "men", manually skip irrelevant results, only relevant result : "Researchers are encouraged to promote equality between men and women in their academic research"
- Ctrl-F for "misandry" or derivatives, 0 results. Ctrl-F for "misogyny" or derivatives, 3 results.
So apparently, Science Must Respect The Dignity And Rights Of All Humans, but we only need to single out certain very specific groups, and only the roughly 50% of humans whose problems nearly all mainstream media already talks about incessantly; the other 50% or so of humans can fuck right off. We might care, we most probably don't, but the certain thing is that we won't even mention them, unless to provide contrast for one of our $PET_GROUP. Yay equality.
Have you actually read the recommendations? They don't say anything about not reporting facts about current monkeypox cases being 98% among men who have sex with men. It's only if you say negative things about gay people that you would be violating the policy.
I did read the article, since I posted it in a previous open thread (it's from 18 Aug after all). Hence my bold challenge above; feel free to claim it.
Any sufficiently advanced "I Don't Want To Censor X, I Just Want People To Say X In Certain Very Specific Ways" is indistinguishable from "I Want To Censor X".
What *IS* "Negative Things About Gay People"? Is it slurs? Was that not already banned in academic publications? Is it "Gay People Have More Promiscuous Sex Lives And Thus Spread More Diseases"? I doubt even this is tolerated in Nature, and it's a fact.
I have extensive experience with at least 3 distinct types of authoritarians, and every single act of censoring by them is always, *always*, justified by "This Is Not Censoring, You Can Still Say Those Things, You Just Can't Say Them In Certain Harmful Ways". The disallowed Harmful Ways are never elaborated on or clarified any further. Indeed, in practice, every single Way of expressing Those Things turns out to be Harmful and disallowed according to them. It sure is a very strange coincidence to Not Want To Censor Things when your (very vague) guidelines end up Censoring Things anyway. Some bad uncharitable folks might even accuse you of meaning it.
Here is a question to chew on: why didn't this article cite examples of bad phrasing that they don't want in their journal (preferably with a suggested good phrasing of the same general meaning next to each)? It shouldn't be that hard, should it?
I am pretty sure there are already publications in Nature Human Behavior that make the point you claim would be banned (that many populations of gay people have more tightly connected sexual networks than straight people, and thus that certain infections have an easier time spreading in these networks). No one would publish a paper whose conclusion was *just* that, because that is a well-known point already and publication needs to add something.
What are the three distinct types of authoritarian?
1- Muslims wanting to censor criticism of the Hadiths (collections of written-down, originally oral traditions and stories about Mohammed and his companions, wives, etc...).
2- Proponents of a military dictatorship wanting to censor criticism of the "achievements" of the regime (consisting of ugly and ill-planned urban projects, like new cities in the middle of nowhere and new bridges for regions that didn't need them).
3- Feminists, wanting to censor any discussion of male issues.
In all 3 cases, the authoritarians never admitted they were trying to censor things.
- Muslim Sheikhs and Imams always maintain that you can certainly *say* things about Mohammed and his life and the Hadiths about them; only, if those things are bad, you are a very bad person and deserve bad things to happen to you. You also have to cite examples and arguments from "approved" sources only, not - for example - the bad scholars with bad opinions.
- Military authoritarians insist that criticism is good for the nation, if only it's done in good faith and with accurate information. It turns out that good faith and accurate information are suspiciously correlated with not criticizing the Comrade In Chief: all good-faith, accurate-info critics begin their criticism by singing his praise, and their criticism amounts to saying that the regime's only flaw is that it doesn't have 50 of him, while all bad-faith, misinformed critics happen to think he's an incompetent and genocidal dumbass.
- You can talk about Men's issues in feminist-controlled networks and conversations, but only if you acknowledge that it's all their fault, they deserve it anyway, feminism is never responsible for any single bit of it, and more feminism will be good for them.
"fact, it relates a negative thing, a disease, to the coddled population and its lifestyle"
Agreed. My first thought on reading the article was that half of epidemiology would be censored by these criteria.
In addition, "identify content that potentially undermines the equal dignity and rights of humans of all races/ethnicities" sounds like it would censor any study that evaluated the effectiveness of a quarantine, since a quarantine inherently limits the rights of those people quarantined. Do we _really_ want to hide evaluation of which quarantines worked and which failed?
You have significantly mistaken the point of the article. It said nothing about holding any research to any higher standard of proof -- indeed, no issue of reliability or testing thereof is discussed anywhere in the article. The article begins by assuming that the research publication to be considered is factually based, and contains no error or fabrication -- that is, it is otherwise suitable for publication in Nature.
I mean, that would make sense, right? Why would Nature ever say "Hey guys, ordinarily we might publish articles that contain some pretty iffy data and suspect observations, but in certain cases we won't for the following reasons, we'll want to have some extra verification then." It would call their existing publication model into deep question, for them to suggest that they would *ever* publish *anything* that they weren't persuaded was objectively true and well supported by its data.[1]
What the article addresses is two things:
(1) The style of presentation of the research. They lay out conclusions that should not be asserted, and styles of discussion that should not happen, arguing that these statements and ways of framing discussion can do harm that outweighs the value of the knowledge gained by reading the paper (presumably because of the weight of "science published in Nature" behind the forbidden language).
(2) Whether the research, even if factually based, and even if certain conclusions or implications are readily apparent from the observational data, should nevertheless be denied publication, again because the social harm exceeds the social value of discovering some new facts or other.
It is, in short, an argument that some truths are too dangerous to publish, and some ways of speaking about the truth, or discussing it, are also too dangerous to publish. It says nothing about any issue of reliability.
----------------
[1] They may well be wrong about that, and future discoveries may prove as much, but that happens and everyone understands that, and it doesn't change the basic fact that no journal ever publishes anything the editors don't think *at the time of submission* is true to the best of everyone's ability to know.
It was a good step when PNAS stopped claiming to publish the "highest quality scientific research", although people could still quibble over whether that's honestly the goal they're attempting to achieve.
https://statmodeling.stat.columbia.edu/2017/10/04/breaking-pnas-changes-slogan/
It is relevant that it really is possible to prove anything, "statistically," if you have enough time and money. Run any study N times and only report the results of the attempts which support your position. If that were the intent of Nature's decision, then it would be clearly defensible.
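That "run it N times, report only the winners" mechanism is easy to demonstrate with a toy simulation (a hypothetical sketch of selective reporting in general, not anything from Nature's policy or the article; the `fake_study` function and its significance threshold are my own invention):

```python
import random

def fake_study(n=30):
    """One 'study' on pure noise: both groups are drawn from the same
    distribution, so any detected 'effect' is spurious."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5          # standard error of the difference in means
    return abs(diff) > 2 * se    # roughly 5% of null studies pass this bar

if __name__ == "__main__":
    random.seed(0)
    results = [fake_study() for _ in range(100)]
    # Selective reporting: publish only the "significant" attempts
    # and quietly shelve the rest.
    print(f"{sum(results)} of 100 null studies look 'significant'")
```

With a roughly 5% false-positive threshold, about one run in twenty on pure noise will look "significant", so a lab with enough time and money always has something to report, even when there is no effect at all.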
If the argument here is “social science is all made up anyway, so we might as well just insist on made-up stories we like”, then I don’t accept the premise. “The overwhelming majority of social science is junk” seems very overblown. Even if it were true, Nature should be publishing the stuff that isn’t junk, not making the problem worse.
Well, for one, they should have higher standards for the research, and not allow crappy, ideologically confirming research; also, their conception of what is "pro-social" is likely to be completely ideologically skewed in one direction and quite narrow, creating a major temptation to suppress even good, true research on ideological grounds.
> the overwhelming majority of social science is junk
I don't know if that generalization applies. It was really concentrated in psychology, specifically social psychology.
That might be partially because other disciplines in the social sciences *don't even have* quantitative studies that could fail to replicate.
True, but that also varies by discipline.
I think the issue is how broad and ideological a lot of the definitions are.
>produces an antisocial conclusion
A research finding that widespread emotional abuse would lead to better X is antisocial. A finding that "maybe women aren't as good at throwing" (and even tamer stuff than that has absolutely been suppressed) is hardly antisocial.
And I absolutely do think it trickles into the hard sciences a bit. Especially with regard to anything that touches on biology/medicine.
It's close to the opposite of newspeak, actually. It's the original meaning of the word, before it became more commonly used as a synonym of "unsociable".
https://www.merriam-webster.com/dictionary/antisocial#:~:text=First%20Known%20Use%20of%20antisocial
Having an antonym of "pro-social" is a useful construct; I see it used here and there in rat-circles, including the comments here.
>>> I don't think it's unreasonable on the whole
Hmmm. Not my take, so could you unpack that a bit? What makes this a 'reasonable' move? What's the 'unreasonable' step they could have taken, but didn't? What are the safeguards to prevent misapplication?
Sorry, second question - in your answer, you seemed to say that you agreed with the new guidelines' definition of "antisocial". Is this true, and if so, could you expand on that?
It's reasonable in the sense that I don't think they're lying about their aims - there is definitely human psychology research that has had a negative effect on specific groups, and on society as a whole (i.e. antisocial, as opposed to pro-social), and it's not a bad idea to at least make an effort to avoid that. Specifically, there's a rich history of junk science being used to justify antisocial and discriminatory ideologies - phrenology and Nazi race science are some hopefully uncontroversial examples of that, though they're obviously more extreme than what's being discussed here.
>Specifically, there's a rich history of junk science being used to justify antisocial and discriminatory ideologies
Yes, but this is exactly the problem. Nazi race science was a case of an ideologically-censored scientific establishment dutifully rattling off "facts" that legitimated the designs of those in power.
You can't know whether this current censorious paradigm will wind up as discredited as that one.
(It's hard to point to what's wrong in current understanding for the very reason that it's current, but to give some examples of modern policies causing harm to a group that may or may not be counterbalanced by their theoretical benefit: affirmative action has significant overhead and denies jobs to those apparently most qualified; transition therapy is generally sterilising.)
I would predict with relatively high confidence that even the least-scrupulous portions of wokeism will not be regarded as poorly as Nazism. Contrary to what either party says, the US is not actually a totalitarian state in any sense that you could meaningfully make that comparison.
Do you think Nature's approach is more or less likely to take wokeism in a bad direction (that is, more poorly regarded in the future)?
I did say "discredited", not "reviled". SJ isn't over yet, though.
Can you give me an example of research that has a negative effect, either on specific groups or on society as a whole?
By which I mean, please indicate the journal article or book that had the negative effect. People keep referencing 'Nazi race science' but what effect do those articles have today, and what is the recent research that is of concern?
Any specific examples I dig up are likely to be controversial, and risk starting a discussion on HBD, which Scott has indicated is a topic we should try to avoid unless necessary. To that effect, "The Bell Curve" is the first thing that comes to mind, which I don't want to discuss for that exact reason.
My point is that it's *reasonable* to not want that sort of thing to be published unless it's beyond reproach, even if some here will disagree about whether that is *correct*.
Slight tangent, but every time these sorts of topics come up, it reminds me of Eliezer's suggestion (in jest?) in HPMOR that some ideas are dangerous enough to society that they are best kept inside the "conspiracy of science", where only those well-versed in the methods of rationality have access to them.
"The Bell Curve" is only tangentially about ancestry; its major message is that intelligence has a large genetic component, and that is a fact that should be known.
"Let us pray it is not true.... but if it is, let us hope it does not become generally known"
- the wife of the Archbishop of Canterbury, on learning of Darwin's theory of evolution, and considering the implications thereof.
> My point is that it's *reasonable* to not want that sort of thing to be published unless it's beyond reproach, even if some here will disagree about whether that is *correct*.
Doesn't this create a chicken-and-egg problem? The scientific journals will not publish it, until it is scientifically proven. Of course it is not scientifically proven -- no scientific journal has published it yet!
I have no idea where you are coming from on this. The premises (note 1) of TBC *are* beyond reproach, and rejection of the facts published in that book has done nothing but harm to US (and Western) society.
I really expected better than this. I really thought you had something which had done actual harm in mind, and not just something 'politically incorrect'.
If that's all you have - that we should not discuss things with unpleasant implications - or if your stance is that only 'the right people' should be discussing the implications of scientific research, then holy cow, man, are you ever in the wrong century and in the wrong crowd, and yes, those guidelines in Nature are exactly what people are concerned they are.
Note 1: TBC notes that intelligence of an individual is shaped by both environment and genetics, and that, like other genetically influenced human traits - height, disease rates, bone density - there are variations which can be detected at the group level. What to do about this, and how to assess the human worth of people who are smarter or dumber than ourselves, is left as an exercise for the reader.
Apparently, many fail at this exercise, and never intended to treat those people less gifted than themselves as fully human.
I'd say that dissemination of marketing techniques is probably antisocial insofar as marketing is negative-sum.
(Don't think Nature had this in mind, though.)
*deletes reference to other Nature approved antisocial research, on the grounds of not starting *all* the flame wars*
Likely not.
Note that this is for "Nature Human Behavior". Nature has hundreds of journals: https://www.nature.com/siteindex.
Related thread from the chief editor: https://twitter.com/KoustaStavroula/status/1562034181894455296
I get why it raised eyebrows, but I am not sure how important this particular journal is considered to be, how common such changes are in the life of a journal, and how common they are in journals in general.
It’s a pretty important journal because it includes much of the social sciences in its remit - also genetics. (I’ve published there.) Yes, the new rules seem pretty shocking.
Meh. To the extent it further formalizes what is already a big problem in science, it isn't good.
Part of me feels like academia is slowly being eaten from within by a modern theology. Maybe there will be some retrenchment, but I increasingly despair that in the long run the universities will not remain the preeminent place to understand truth about the world. Which is sad.
I wonder if we are slowly moving to a situation where we will need a new model, or where the university system will eventually need a massive defrockment like the monasteries once did. The reports from my friends who are academics are not encouraging.
About half are basically on board with this kind of thinking ("I am not interested in the facts if they conflict with my political ideology", or an outright refusal to admit the facts could conflict with it). And the other half tend to think it "isn't a problem", but if you ask them whether they would feel comfortable publishing findings that are politically charged, they quickly re-evaluate and say "no, gee, maybe it is a problem". And these are people with tenure.
Or they say, "I would only publish that if I could get some co-authors from the right demographics to deflect the heat", etc. Which is just so alien to me. These are people who generally were hard core "truth or bust" people who seem to have had their time in academia erode that rather than reinforce it.
Maybe it is not a problem if you are teaching intro Calc or whatever (though I suspect it causes issues even there all told), but definitely the ones who work in social/behavioral modeling just view whole swathes of hypotheses as off limits.
It is like this sketch: https://www.youtube.com/watch?v=owI7DOeO_yg
But instead of the issues they are unwilling to look into being "kill all the poor", they are incredibly tame things that shouldn't be controversial: "Do gender roles in abstract sexually reproducing agents increase fitness?", or "Do systems for punishing cheating improve or degrade test result quality on X tests?".
I'm interested more in learning more about pharmacology, and a lot of intro books I'm seeing on amazon are geared towards clinicians rather than interested laypeople (ie, people like me). Would any of you all have any book recommendations?
It wouldn’t hurt to pick up a PDR
Surf Thread
I took a look. I see why they need to come here to solicit feedback. Their feedback page is broken! Also (at least on Android and the Brave browser), their mobile site is broken! It only lets you enter text when you turn on desktop mode. My advice is to hire testers.
That's one bug I'd never find - I do as little as possible on mobile, especially when it involves entering text. Keyboards and large screens are just so much easier to deal with.
Hobby horse aside, I find it difficult to imagine how a web site could help people make friends, let alone monetize doing so. Acquaintances, sure. Friends in the sense of "someone who follows my blog and/or whose blog I follow" - sure. But some largish percent (well greater than 50%) report they can't see a person as having human feelings without meeting them face to face - and behave accordingly to people they experience as text on a screen. Maybe video interaction might enable a larger proportion to experience each other as actual people. But that's still a large way away from becoming actual friends.
Maybe my initial take on the site was wrong, but I saw it as intending mainly to facilitate IRL friendships. Is that not what you saw?
Looked at the site using a desktop browser. Didn't find any way to give feedback.
Did find that their implicit definition of friend is something like my definition of acquaintance.
I think the idea is that by organising people into groups around shared interests, Surf is going to drive in-person meetups that will lead to the formation of new friendships.
How did you think it was supposed to work?
If I'd expected anything, it would be something like a dating site, but for friends rather than romantic partners. Mostly though, I didn't know, and feel like I still don't.
I replicated this: I can't enter text in the text box on an Android. The keypad doesn't come up.
Would there be a substantial long-term impact on culture if somehow it gets established beyond reasonable doubt that someone other than Shakespeare wrote everything that's traditionally attributed to him? Or likewise for any other household name? Do people care about such "questions" for about the same reason that they gossip about celebrities, and it's just as trivial?
it wasn't Shakespeare, it was the other guy from Stratford called Shakespeare.
(also no)
I am reminded of the old "history of the world according to student bloopers" document which used to go around, back in the days when memes were in text form. It claimed "Shakespeare's plays were not written by Shakespeare but by another man of the same name".
Works better for Homer, since practically the only thing we know about him is that he wrote the Iliad and the Odyssey.
snap
It was a known fact for hundreds of years that the plays attributed to "William Shakespeare" were written by him. This was known to his friends and family, to the players he worked with every day, to the theatre management, to the printers who prepared his plays for publication when the Company allowed that, and to the other playwrights of the day with whom he socialised, and in a couple of cases collaborated. The idea that "somebody else" wrote the plays is very modern, and was partly a consequence of the decadent phase of Romanticism (lone, neglected poet only recognised after his death) and partly simple snobbery (no commoner like Shakespeare, from a provincial town whose father was a small businessman can ever be our great national poet.)
If somehow the contrary could be proved, it would up-end all literary history, all literary criticism, and all dramatic tradition, and reveal a literary conspiracy unmatched in the annals of history. How, for example, would things have worked in practice when Richard Burbage says one day, just before Hamlet is acted for the first time, "Will, that speech is a bit long: can you shorten it by a third and take some of the difficult language out?" And Shakespeare goes rushing off to consult with Sir Francis Bacon, Queen Elizabeth, the ghost of Christopher Marlowe, or any of the other dozens of writers who have been proposed. Battalions of critics and historians kept busy for decades, popular books by the hundreds ...
He was also illiterate, unlike all other actors, so it was even harder for Shakespeare. The real difficulty was when he was writing the later plays and had to consult with the Earl of Oxford who was dead at the time, and John Fletcher, who wasn't.
A Shakespeare by any other name would write as sweetly.
Most of the interpretive framing around Shakespeare the man is junk. Having a different name to hang all that hopeless projection on wouldn’t change a thing.
Yes, I would consider this essentially trivial, the sort of academic question that most obsesses lesser minds, like "Was Homer actually a conspiracy?" or "Who was the *historical* Jesus/who *actually* wrote the Gospels?"
I'd say that the historical Jesus thing is somewhat more interesting, for anthropological and history of religion-type reasons. Whereas Shakespeare is only relevant insofar as he wrote the works attributed to Shakespeare, arguments that rest on the accuracy of Biblical descriptions are an important part of Christian apologetics.
It isn’t, because it’s a field that exists solely on the pomposity of textual critics.
If there was any evidence that wasn’t hot air, it would be interesting. Pretending that you can slice and dice the texts to a More Historical version is so much academic make-believe, as rigorous as a seance and as empirical as faith healing.
I doubt there'd be any major impact on culture, because most people don't care that much. For most people who believe the "it's not really Shakespeare" theory, it's just a bit of trivia they pull out to sound smart at parties.
It doesn't appear unlikely that the body of work we attribute to Shakespeare is actually the work of several Shakespeares, quite possibly in different eras.
It is accepted in biblical scholarship that most, if not all, of the gospels were composed by quite a few writers -- although Q is thought to be the source of two of them. The evolution, as it were, of these narratives has been studied through centuries, so it's not at all unreasonable that Shakespeare's or even Chaucer's works were serially collective projects.
Yeah, 4 writers.
And even if that were true, why would it follow that Shakespeare collaborated on most of his plays? Except for the few where we know he did, he largely wrote alone. They certainly weren't written over "centuries" - the First Folio is generally considered canonical. This doesn't stop different interpretations for the stage, or literal Bowdlerized versions later on.
There are lots of sources for Shakespeare and Chaucer, which are well known by the specialists. There are no known earlier versions of Shakespeare and Chaucer by different authors. If there were, it would be huge news.

Growth mindset, Wyclif's Dust! Imagine the vast number of new academic jobs created by the "Shakespeare Collective" Studies Departments! Everybody could put in their contender for 'who wrote Shakespeare?' and the beauty of it is, nobody need be wrong!
From Kit Marlowe to Lizzie herself, anyone and everyone could and can be part of the Collective! And that's only the beginning - imagine the increase in gender and queer studies when we have more than one white guy to write papers about!
As for Chaucer, now come on - are you really maintaining that some customs official could be the Father of English Poetry? 😉
By Lizzie you mean Lizzo, right?
Why the heck not? Black female representation! The *true* Dark Lady of the sonnets!
"Would there be a substantial long-term impact on culture if somehow it gets established beyond reasonable doubt that someone other than Shakespeare wrote everything that's traditionally attributed to him? Or likewise for any other household name?"
Unless that 'someone' already has an identity I don't see how it could. The definition of Shakespeare for most people is, "The guy who wrote those plays and sonnets."
If it turns out that Shakespeare was really Cervantes, then things get more interesting.
Wait until it turns out that Shakespeare was actually a black transwoman.
I’m curious what you and your readers think of the casting controversies in the Rings of Power series. I wrote a piece explaining why fans might not like it without necessarily being “racist”.
https://sinistradelendaest.substack.com/p/opposition-to-black-characters-in?r=qz5p0&utm_medium=ios
Haven't been watching Rings of Power, but generally:
If there's a high-profile casting controversy about a movie or TV show, and the controversy is that they "whitewashed" an ethnic character by casting an A-list white actor, then it *might* be a good movie or show that just wimped out and chose A-list marketability over authenticity once they found that e.g. Will Smith wasn't available. Or it might be crap. Fortunately, in that case you can probably get fair warning from the reviews.
If there's a high-profile casting controversy about a movie or TV show, and the controversy is that they recast originally-white characters as colored, or originally-male characters as female, it should be presumed crap until proven otherwise. There will always be grumbling in the dark corners of the internet when that happens, but it only becomes a high-profile controversy if the producers signal-boost it with their response. Which they do because it preemptively discredits *other* criticism of the production, and gains them unearned favorable reviews because almost nobody in the mainstream press is willing to risk offering an unfavorable one.
My beef with the casting is more general than that. And the doubling down on "if you criticise the show, it's because you're a racist" didn't endear them to me. For all the talk about Diversity and Inclusion, we've got what?
One (1) Black Dwarf, an invented character and not a main one, surrounded (so far as I've seen) by white Dwarves.
One (1) mixed-race Elf (the actor is Puerto Rican, so Black Hispanic, although I'm probably getting the fine nuances of US racial classification wrong) amongst, you guessed it, majority white Elves. And not a major part, although he probably will be prominent in the sub-plot about the 'forbidden romance' (which I don't see going anywhere) and the Adar Orc-father bit.
The Harfoots. Oh, let me get started on the Harfoots. Apparently none of them have ever discovered the use of a basin of water to wash their faces. And they've all got cod-Irish Hollywood diddley-eye accents, so thanks Payne and McKay for keeping alive the hoary old "pig in the parlour Irish" stereotype. I feel a Dylan Moran clip coming on:
https://www.youtube.com/watch?v=b29nhQojAAI
Why couldn't they let Lenny Henry keep his real accent, it's a perfectly fine accent? Why not let all the Harfoots keep their accents, it would fit with their multiracial nomadic tribe thing. But no, they must be "Faix and begorrah, us have only each other, so we do, bejabers".
Celebrimbor is too old. The actor may be perfectly fine, but he's too old for the character. Celebrimbor is Galadriel's first cousin once removed, and if we're getting Young Piss And Vinegar Galadriel (and apparently we are, like it or lump it), he should be equally young or younger. The only reason I can see for this characterisation is that they're going for "all the old and/or white guys are wrong, Galadriel is right and the only one who is right AT ALL TIMES" storyline. Oh, and that they've never read the books, but I think that three episodes in, that's apparent.
Gil-galad is allowed to look like an Elf, which is something. There have been suggestions online that it's a rights thing, that Warner Bros studio has its lawyers on a leash, slavering like wargs at the merest hint that the visuals of the movies will be copied by the TV show, so they can't make their Elves look like Peter Jackson's Elves.
But the responses in the media about how Elves live so long, they'll change over the years, aren't really satisfactory. So I suppose Elrond and Celebrimbor are going through their teen rebel phase? "You're not the boss of me, I'm going to cut my hair short like an Edan!"
The one place they *could* legitimately have cast full of brown-skinned people, which would not have contradicted canon, was their invented Southlands village of Tirharad (after all, if you're going to plonk it down with both Harad and Khand on the borders, and populate it with descendants of the Men who fought on Morgoth's side, well duh, right?) It would make the modern political references to racism and prejudice even more pointed, to have an occupying force of Westerners watching over brown-skinned people who legitimately felt aggrieved that they were being placed under suspicion for the sins of their ancestors.
But that wouldn't have permitted a white guy to be racist to a black Elf, so we get Tirharad pretty much all-white, except for Invented Female Character To Be In Forbidden Romance, where the actress is Iranian (does that count as not-white? To me she would be white, but if we're going by 'Middle-eastern is not white' then okay) and her son, who is slightly more not-white (the actor's father is from Indonesia).
(As an aside, I'm going to say here that I think Halbrand is not Sauron, that he *is* some 'King of the Southlands' and that he's Bronwyn's missing husband and Theo's father, which is why Theo's blood activated the black sword because of the whole 'bloodline in the service of Morgoth' thing).
Númenor is also multi-cultural, but that doesn't stick out so badly since most everyone is just a spear-carrier rhubarbing away in the background, except when called on to beat up Halbrand. Most of the important characters are white except for Tar-Míriel, and I honestly don't mind that too much. At least she's human playing a human character, and at least they put some effort into making her look like a queen. I'm much more annoyed about the alterations to her character, turning her into a Queen-Regent and eventually some kind of Warrior Queen, which is going to be tough to explain how she gets the throne usurped out from under her by Pharazon, but eh. I think I see the shape of the plot they're going for here, and if only they could write a decent script (instead of pseudo-profound bollocks about sinking rocks), then the stakes would seem appropriately high - that the monarchy *is* under threat by the King's Men and that the mind of the people *has* turned against friendship with the Elves, so that a popular (or populist?) uprising headed by Pharazon would mean the spectre of civil war (and that she might not want to fight her own people, or even that she can't trust her army *would* all follow her).
I think that's about it: the rest of it is that I have a feeling Meteor Man might be Saruman, not Gandalf, but I wouldn't put it past these chuckleheads to have him be Gandalf.
Mostly it's that the pacing is terrible: we're three hours in and not very much has happened. The scriptwriters can't write decent dialogue to save their lives. And it's both too dependent on lore for the casual audience (how are they supposed to know about Valinor and Morgoth and who Elendil is and all the rest of it, given that the plot jumps around the map from one character to the next without establishing anything?) and too divergent for people who are familiar with the canon (e.g. the likes of me complaining about what they did to Finrod, what they did to Celebrimbor, what they did to Elrond - 'you can't attend the council because you're not an elf-lord'? Bitch, his father is literally THE MORNING AND EVENING STAR) and most of all what they did to Galadriel. They have about five episodes left in this first season; if they want a season two, they'd better give her some self-awareness and character-growth sharpish, because right now she's an unpleasant brat who grimaces when faced with an argument based on reason and logic.
https://www.youtube.com/watch?v=v-XWIqm-DzY
Elendil is likeable, though, how did they manage that? Or miss that, rather, to have one character in the entire set of scenes on Númenor who wasn't something you wanted to stab in the face? And his invented daughter, who is there to give "female energy" to his household, apparently. So she makes sure there are plenty of throw pillows and clean shirts and potted plants and scented candles, mmm?
"Celebrimbor is Galadriel's first cousin once removed, and... he should be equally young or younger."
I haven't seen the show but want to point out "Once Removed" is a mark of generations; the child of your first cousin is your first cousin once removed, and you are theirs. So Celebrimbor should either be old enough to be Galadriel's father, or young enough to be her son.
The latter, since his grandfather is Galadriel's uncle, and his father is her first cousin. How much younger is hard to pin down, because there aren't any definite birth dates stated for either of them; they were both born in Valinor during the Years of the Trees.
I think I understand the casting for the storyline they *seem* to be pushing but really Celebrimbor *should* be young and ambitious and more easily taken in by Annatar than Elrond and Galadriel and Gil-galad who all suspect and reject him.
Given how long elves live, generations are not a very reliable indicator of relative age. It’s quite possible for your father to have sired you a few hundred years after his nephew had a few kids. Even for humans, being significantly younger than your first cousin’s child is not impossible, even if it’s rare.
(I’m not claiming this happens for this particular pair of characters, either in the show or the books, just that it’s eminently possible.)
Heck, Galadriel could in principle just have a baby sometime after LotR ends, and it'll be younger than all the (hundreds?) of generations that happened during the last age or so that are nominally lower on the genealogical tree than it.
Here's one simple possible explanation: in the show's version of middle earth, hobbit skin colour inheritance works more like eye colour does in the real world, so that it's possible for two parents to have a child with a completely different skin tone.
This is a far smaller divergence from reality than magic existing. We accept that magic exists in the show universe even though it doesn't really make sense because it makes the show better. Similarly, allowing hobbits to be of different races reduces the restrictions on who can be cast and so increases the expected quality of the casting, on average making the show better.
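The eye-colour analogy can be made concrete with a toy single-locus model. To be clear, this is purely illustrative - nothing in the show or the books specifies hobbit genetics - and real skin-tone inheritance in humans is polygenic, not a single gene. But a minimal sketch, assuming one gene with a dominant 'D' (dark) allele and a recessive 'l' (light) allele, shows how two same-toned parents can produce a differently-toned child about a quarter of the time:

```python
import random

random.seed(0)

# Toy model (illustration only): hobbit skin tone controlled by one gene
# with dominant allele 'D' (dark) and recessive allele 'l' (light).
# Two dark-skinned carrier parents (D,l) can produce a light-skinned
# (l,l) child.

def child_genotype(parent_a, parent_b):
    """Each parent passes one randomly chosen allele to the child."""
    return (random.choice(parent_a), random.choice(parent_b))

def phenotype(genotype):
    return "dark" if "D" in genotype else "light"

parents = (("D", "l"), ("D", "l"))  # both dark-skinned carriers

trials = 100_000
light_children = sum(
    phenotype(child_genotype(*parents)) == "light" for _ in range(trials)
)
print(f"light-skinned children: {light_children / trials:.1%}")  # ~25%
```

This is just the classic Punnett-square result (Dl x Dl gives a 1-in-4 chance of ll), simulated rather than tabulated; the point is only that "child looks unlike both parents" requires no divergence from ordinary genetics at all.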
Quota casting is generally going to drive down the quality of casting not increase it.
Yep - in particular, having the very stringent quota that *100%* of characters who were white in JRRT's imagination must be played by white actors would tend to drive down quality.
Yeah but we all know that isn't how the quotas work.
It's literally what OP was proposing. (Well, almost. I got the impression OP would have been satisfied so long as 100% of the hobbits were played by actors of the same race, regardless of what that race was. But that's still very stringent.)
I believe it's also how casting was done in the original LOTR film trilogy.
"Similarly, allowing hobbits to be of different races reduces the restrictions on who can be cast and so increases the expected quality of the casting, on average making the show better."
Except who are our two main viewpoint Harfoot characters? White Poppy and White Nori. Nori has a black stepmother, and the leader of the tribe is black, but that's our main characters; the black leader and then our two white female stand-ins for Frodo and Sam.
To be blunt, the only actor I think was cast for acting ability and not merely ticking the "we have reached our quota of non-white casting" boxes is Lenny Henry. I can believe he was cast because his colour means nothing when it comes to the character. The rest of them, I can't get over all the fa-la-la about the diversity and modernisation and representation chat. So the black Dwarf is not cast because she's the best actress for the part, but because she's a black actress, and so on.
I got a bit miffed about having dwarves with high amounts of melanin as they are supposed to spend much of their lives underground, so why would that ever evolve?
It seems entirely reasonable for elves though.
There might be a misconception here wrt actual humans. High melanin did not evolve in response to intense sunshine. Instead, it is the base state for humans - normal humans are dark-skinned. Low melanin evolved for humans who migrated to high latitudes where sunshine was insufficient to generate vitamin D3, which is essential for life.
Fair enough, but there is no reason at all why it would be the base state for a cave-dwelling humanoid subspecies (dwarves).
I wouldn't care about this if they wrote in a line explaining she came from a different Dwarven stronghold or had more visible black Dwarves around her. The problem is that one single solitary black character in a sea of otherwise white ones does make that character stand out, and if there is no explanation for it in-world, then it breaks immersion.
Let Dísa come from the Blacklocks who originated in the East. Intermarriage between two royal houses of the Dwarves. There you go, guys, when you are doing all your publicity material, just slip this in as a reason why she has different skin tone to her husband. But no, it's all "first black dwarf!" and then "that's racist!" in response to criticism.
https://tolkiengateway.net/wiki/Blacklocks
agreed! but also, why not?
Because melanin production evolved in our African plains-dwelling ancestors as hominids lost more and more of their body hair. Our closest ape relatives have thick, dark hair and white skin. Looking at hairless chimps can be instructive (https://www.bbc.com/news/uk-england-leicestershire-36924808.amp). Also they are ripped AF, which is sort of fascinating to look at.
The point is that Homo didn't start producing melanin until it needed it, and I don't think that dwarves would either.
I suppose it depends on which age of middle earth some proto-hominid moved from the savannah or forest into caves to eventually become dwarves.
The diverse casting has mostly been applied to extras and show-only invented characters. I basically lump it into the same category as "Numenorean beards": if you go by the lore, then all of the Numenorean characters, Aragorn, Boromir, and Faramir should have no facial hair at all. LOTR and ROP do give some of them beards, and it doesn't really matter because whether or not the adaptations feature bearded Numenoreans has no real impact on either its core themes or even really the characters themselves.
"The diverse casting has mostly been applied to extras and show-only invented characters."
That is what annoys me most about the whole thing. Amazon are pushing back on criticism very heavily with the "oh, so you're all a bunch of racists, eh?" and yet, for all their talk of 'representation', the black characters are minor ones, invented out of whole cloth.
So why not a black Galadriel, then? After all, if the "you accept dragons and wizards, why can't you accept a black Elf" argument is valid, then it's equally valid to cast a black actress as Galadriel. Major, major character played by a black person, huge representation, big important heroic role, not just shoved into a corner as a minor part or as a villain, right?
Because even they know it wouldn't effin' fly, and saying objections were racist would be laughed out of it.
I would totally watch a drama in which *all* of the Elves were cast as black, and all the Men as lily white, and they had to figure out how to work together in the struggle against Morgoth and/or Sauron. That would put a fascinating gloss on the inherent tribalism and mutual suspicion that is (only lightly) mentioned in Tolkien's original work, that arose from very different lifestyles. I think this could be done very well -- you could have enlightened members of both species who worked to overcome mistrust, you could have assholes who made it worse, you could have a lot of people just trying to muddle along.
It would certainly be a radical departure in some ways from canon in style, but maybe not that much in substance in a certain sense -- I think Tolkien did (albeit fairly lightly) consider the theme of difficulty in cooperation and the dangers of tribalism in his work. It would be -- well, could be, in skilled hands that I admit would not be super likely to actually handle it -- an extension that is not a faithful recreation of the original but also didn't spit in its eye. It also wouldn't reek of bullshit tokenism -- there would be a powerful narrative *reason* for making race-based casting decisions.
I think one takeaway I have had from a variety of projects is that generally if the casting and writing is concerned about representation, it often isn't as concerned about actually being, you know, good.
IDK it is an effective marketing strategy these days, plus free PR when you can spend time calling out a few "twitter racists", so it is hard to blame them.
But often in, say, games, if someone is selling you on a game with "It has a female protagonist", the game tends to be below average. Because if the game was good, they would just sell you on that.
It does seem kind of weird that a movement that is so concerned with "appropriation" is also very excited about appropriating whatever it can.
Why aren't hobbits and elves played by hobbits and elves? Black or white, humans are the wrong race to play those characters.
The meal riders for the Hobbits are too expensive.
Ha!
Good joke, but *do* hobbits actually eat more than humans, or does it just seem like a lot because hobbits are smaller?
Hobbits must have a fantastically inefficient digestive system. I wonder what their body temperature is.
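There's actually a back-of-envelope way to take the "do hobbits eat more?" question seriously. As a rough sketch - assuming hobbits are geometrically scaled-down humans (my numbers: ~3'6" vs ~5'9", 70 kg human) and that Kleiber's law (metabolic rate scales with mass^0.75) applies to them, neither of which is canon:

```python
# Toy estimate: does a hobbit need more food per kilogram than a human?
# Assumptions (mine, not Tolkien's): geometric scaling of body size, and
# Kleiber's 3/4-power law for metabolic rate.

human_height_m = 1.75
hobbit_height_m = 1.07   # roughly 3'6"
human_mass_kg = 70.0

# Geometric scaling: mass goes with the cube of height.
scale = hobbit_height_m / human_height_m
hobbit_mass_kg = human_mass_kg * scale ** 3

# Kleiber's law: total metabolic rate ~ mass^(3/4).
relative_metabolic_rate = (hobbit_mass_kg / human_mass_kg) ** 0.75

# Food requirement per kilogram of body weight, relative to a human.
per_kg_requirement = relative_metabolic_rate / (hobbit_mass_kg / human_mass_kg)

print(f"hobbit mass: {hobbit_mass_kg:.1f} kg")
print(f"total food need vs human: {relative_metabolic_rate:.2f}x")
print(f"food need per kg vs human: {per_kg_requirement:.2f}x")
```

On these assumptions a hobbit masses about 16 kg, needs only about a third of a human's total calories, but needs roughly 45% more food per kilogram of body weight - which is the same reason small mammals eat constantly. So six meals a day of hobbit-sized portions could be perfectly honest metabolism rather than an inefficient digestive system.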
IIRC they have two breakfasts before elevenses, after rising at about 9:30.
I get up a little earlier, but I do try to have at least 2 breakfasts before morning tea. Mind you, I'm only 5 and a half foot tall and may have a lot of hobbit in my genes.
Hobbits & elves are even worse to work with on a set than children or animals.
What does deeply disappoint me is that the Elves don't behave like Elves, Galadriel is Batman because apparently that's how a "strong woman" acts these days, the plot is way off canon, and everyone is making dumb decisions. I guess we need r!Rings of Power.
That sort of "strong female character" is very much early 2000's-era Joss Whedon or Quentin Tarantino. People who actually care about female representation have moved on to "flawed female character" as a more useful test.
The name of your Substack makes me think you're not a completely impartial observer on this matter.
To respond more directly to your piece, you say "If black hobbits, elves, and dwarves exist... It requires disbelieving in evolution and approaching the series with the mind of a biblical creationist."
But biblical creationism isn't far off from canon. Eru Iluvatar created Elves and then Men (some of whom became [evolved into?] Hobbits). Aule created Dwarves. Morgoth magically transformed Elves into Orcs. Other sentient creatures have more complicated origins. So we have both creation and possibly evolution in play, as well as magical transformations and other forces. Any of those could explain differences in skin tone.
Assuming that Men had time to evolve into Hobbits, I'd imagine they also had time to make some long migrations, evolve skin colors accordingly (much quicker than speciation), and migrate on back.
Furthermore, Hobbits had three ancestral strains: Harfoots (depicted in The Rings of Power), Stoors, and Fallohides. Harfoots had the darkest skin and Fallohides had the lightest skin, canonically. Assuming that LOTR-era Shire Hobbits represented an interbreeding of all three, it's to be expected that Harfoots would have darker skin than Shire Hobbits.
I was just about to respond along these lines, but you did it better than I would have.
Arguments from evolution just don't do it for me inside fantasy settings because there's no guarantee that any of the ordinary genetic principles hold. Maybe regression to the mean doesn't happen because a deity had decreed secret complex dance moves in the great sarabande of alleles. Or something. Otherwise, you'll have to explain the biology and energy-economy of dragonfire (and flight, etc., etc.) instead of falling back (as OP does) on 'mythology familiar to Northern Europeans', as if that were somehow more dispositive than 'racial demographics familiar to North Americans'. Having it both ways is asking a little too much.
(I actually liked the analogy of the Lego-breathing dragon, but I think it overreaches - in terms of explanatory burden, it would correspond better to a cyborg hobbit.)
I responded over there. I'm an American but I get the impression you're overgeneralizing from the US to the present-day UK.
I'm not bothered by the casting choices. The main thought it brings up for me is: if there is racial diversity within a single village or city of a rapidly reproducing species (like humans), I want to know the backstory. Was there a recent merger of groups that hasn't had time to homogenize yet? Or is there longstanding prejudice that prevents them from intermarrying? How does that play into the worldbuilding and storytelling? If that backstory is totally ignored, that's disappointing.
With the dwarves, it seems straightforward. We only really see three of the seven clans, but we know they communicate and trade with each other. Plus IIRC from the Silmarillion Durin's Folk are basically a mix of dwarves from other clans since he awoke alone.
Hobbits are harder, but we don't know how many groups of Hobbit-like creatures are wandering around or how frequently they interact. Plus the Hobbits have always been deliberately anachronistic in Tolkien's works - they feel like a slice of the 19th century British countryside dropped into High Fantasy, and even the Harfoots kind of feel that way.
With Arondir, it's basically "elves reproduce slowly, he's probably from the Teleri group that was largest and pretty spread out before they kind of come together again later, and there's diversity among elves. They might -mostly- look fair-skinned and dark-haired with no beards, but you do get odd exceptions like Nerdanel's father from the Silmarillion who was both red-haired and bearded."
Excellent article, though I don't like even the indulgence of using the term "racist" like this uncritically, because it reinforces the belief that "racist" is a meaningful and valid word/concept.
When you say "it's not racist", you're making it seem like "racist" is a meaningful thing that somebody can be, and that it would indeed be bad if those critical of the show were indeed being "racist". You're clever enough to know this isn't true, but when you use their language like this I think you've already lost.
There are no principles at play here other than black nationalism and corporate virtue signalling. These people really just do not care at all about any of this nuance. These are the same people who lost their minds over Scarlett Johansson being in Ghost in the Shell. The correct response to accusations of racism is *not* to sincerely proclaim that "no, it's not racist!" any more than an analogous response would be right in the case of being called an infidel by a Muslim or a counterrevolutionary by a communist. If you agree to the terms of their ideology, then you lose.
Edit: Sorry, finished editing this after you'd already responded, but the substance is still the same.
How is the term “racist” not a meaningful concept? It seems like in instances where someone holds prejudice against others purely based on race, it would be a useful thing to have the word to describe them as such.
Also, what is your proposed solution to allegations of racism? If someone called me an infidel or a communist, I could choose not to engage, but the fact is that I could also engage and perfectly explain why I am not, in fact, either of those. What makes “racist” any different in the validity of responding to it?
The problem with "racist" isn't that it has no meaning, but rather that it has a hundred mutually contradictory ones. A person can be racist for knowing crime statistics or for burning a cross in some black dude's backyard or for any number of intermediate points between. Which makes it pretty hard to use the word to communicate meaningfully.
Because they control the language, and will change the meaning of these words in a way that maximally advantages them in the discussion. The moment you think you've pinned down a consistent definition, they will just change it so your argument no longer works. It is not a coherent concept because its meaning is constantly shifting to serve the interests of the ruling class.
For example, let's say you're trying to argue that something is racist. You show that it fits some dictionary definitions of racism, and even some of the ways the word is used in common parlance. But let's say they like this thing, and they think it is a good thing. This cannot stand for them, because their most fundamental axiom is that "things that are racist are bad." So they will just change the definition of racism, via things like literally changing the dictionary definition, mass social media campaigns, censorship of the old usage of the term, etc., so that the thing that they like is no longer "racist".
>>> proposed solution to allegations of racism
Same as allegations of being a boor, or being fat, or stupid, drowning puppies, or having no taste in clothing.
They are all insults meant not to communicate information on the face of them, but to either spark conflict or label one as outgroup.
Someone who cared for you would not say such things. Everyone else should be ignored until they learn better manners.
Disagree - if A tells B "you are too fat and it's lowering your life expectancy, you should lose weight" then often B should give this serious consideration, even if A said it in a mean way.
I think they are talking about the "broader use of "racist" that has become so popular. Similar to say righties calling Clinton or Obama a "communist". Does it make sense for them to take time to explain they are not communists? Maybe, maybe not.
In some ways even edifying the attempted slur with a response gives it power.
If your response to "You're racist!" is "No I'm not," you're conceding a huge amount of ground. The underlying premise of that accusation is "The world is divided up into good people and bad people. Anti-racists are the good people, and racists are the bad people." The response "No I'm not" concedes all of that and is basically an act of begging, "Yes, I fully accept your way of framing the world, but please believe me, I really am one of the good people." But of course this will never work, because the people you are begging are the same people you just conceded the right to frame the world to, and of course they get to decide who the good and bad people are.
The premise is that racism is bad
He's saying that denying being racist is also tacitly acknowledging that people fall into either simple racist or non-racist groups.
That's what I'm saying. "You're racist" can be directly translated as "You're one of the bad ones," the response to which should be to deny this classification system on the whole, not to engage in the losing battle of arguing that you are one of the good ones.
Yes, you’re right. I tried to use quotation marks to indicate the nebulousness of the term, but I think using it is unavoidable when you’re trying to reach the other side.
I'm not buying that Amazon could have avoided or significantly lessened the controversy by only releasing non-confrontational images of the nonwhite characters.
Of course online media were going to report on the grognards and trolls - the controversy was clearly going to be a pretty good source of clicks. Not sure how Amazon could have prevented this reporting.
Right. Evidence in favor of this theory is that pretty much no one had a problem with diverse casting in the past ~20 or so years when it wasn't done for performative reasons and didn't have as much explicit wokeness in the content itself. The negative reactions seen recently are not just "diversity -> bad"; they're "diversity + explicit woke content -> evokes a mental image of the people who are pushing this stuff on us -> bad."
E.g. there are tons of universally well-liked movies with female protagonists, but these are usually the ones where the protagonist just happens to be a woman and it is otherwise a normal movie, not the ones where the whole story is about how they have it so hard because they're a woman.
"not the ones where the whole story is about how they have it so hard because they're a woman"
Oh gosh, three episodes in and this Galadriel is a thundering bitch. The only time she smiles is when she's having that slo-mo horsey ride (a Youtube review said "Maybe this is the key to her whole character, that when she was twelve, her dad didn't give her a pony"). Otherwise she is needlessly confrontational to everyone. When Halbrand, the ragged guy pulled off a raft in the middle of the ocean, can manage to be polite and diplomatic in the court of Númenor, it may be the writers hinting that he is not just the ordinary guy he pretends to be, but it just comes across as basic common sense not to piss off the powerful people who have you at their mercy.
That Galadriel, after her recital of her own titles, can't manage to be civil for five minutes is mind-boggling and infuriating. She demands everything, scowls when she doesn't get her way, and resorts to threats of theft and murder when everyone doesn't fall down at her feet. Throw her back in the Sea, Elendil, and let her swim home! With any luck, she'll be eaten by the Sea Worm and poison it to death, so two problems solved!
Or even, 'story about X who had it so hard (because X)' will appeal to some folks, but 'X has it so hard (because X)' is not necessarily a storyline with universal resonance.
> E.g. there are tons of universally well-liked movies with female protagonists, but these are usually the ones where the protagonist just happens to be a woman and it is otherwise a normal movie
Not saying you're wrong, but can you give some examples?
Terminator.
In terms of the original, mostly, but I feel obliged to point out that in Terminator 2 Sarah Connor goes on a deranged (as in, portrayed-as-deranged) rant about how men are evil.
"Fucking men like you built the hydrogen bomb, men like you thought it up. You think you're so creative. You don't know what it's like to really create something... to create a life. To feel it growing inside you. All you know how to create is death, and destruction."
This is definitely a rant that wouldn't work coming from a male character.
Of course, T2's not exactly a female-protagonist movie; Sarah's credited #2, but she's the least important of the main trio (unlike T1, where she's credited #3 but is definitely the main character).
The 'men like you' bit could easily come from a man not 'like that'.
Hmmmm. I agree that Terminator wasn't a female-led film in the annoying "we've got a WOMAN in the lead, WHADDAYA THINK ABOUT THAT, BIGOTS?!" way, but I think her femininity is more than incidental - it drives a lot of her response to the Terminator. You can probably read the film as an allegory for domestic violence or something. Also, she's not the star!
Sarah Connor is a very popular movie character that men have no difficulty at all believing in and enjoying. That's the point.
It disproves the idea that men are beastly and sexist and that is why they don't like watching modern films with unappealing female characters. The problem is the character, not the audience.
Alien.
Thanks! Alien is an especially interesting case here, because the script was intentionally written to be gender-neutral: https://en.wikipedia.org/wiki/Alien_(film)#Cast
I agree that this tactic is the actually annoying thing. Diversity only becomes annoying when it becomes performative.
For Scott's next book review contest, would it be more convenient to use a wiki where each review gets its own page?
Reviewers would be previewing their own formatting (using anonymous accounts), so there would be no more surprises about how it looks on Substack. Readers would find reviews using the random button in the sidebar. Finalists would be listed on the main page.
Possible problems: Does each image need to be uploaded? Will participation decline? Would Scott find it inconvenient to keep tabs on the wiki and Substack?
Edit: Forgot to mention what I imagine would be more convenient about the voting - voting on the page of the review itself, rather than navigating elsewhere.
Or everybody uses their Substack accounts.
I would appreciate this very much, at least for the first round. Reading a giant Google doc is really unpleasant/annoying. Anything that would load better and remember its place better on a phone would be an improvement.
Someone actually built a nice website that would give you a random review this year, that was cool (I unfortunately can't find the link right now).
I like the idea. I find large Google Docs more tedious to navigate than Wikis. I think the "Find random article" would also work better than Scott reminding people to read things randomly.
In terms of editing, the submissions could still be the way they are, and only one (or a handful) of the organizers could take care of uploading them into the Wiki, with no edits afterwards.
Would be cool to also upload all past reviews into the same one.
Would this be in lieu of actually publishing the reviews on the substack one week at a time? If so, seems like that would really stifle discussion (as everyone would be reading/commenting on all of them at once), and maybe more importantly (at least to Scott) wouldn't really be creating "content" for the substack.
Plus people would no longer be able to read directly from their emails, which seems like at least a minor disadvantage.
Back in April, Scott released them all at once in a few gargantuan Google Docs, so we already were reading many of them at once, right? I agree that commenting on finalists all at once sounds inferior to starting comments one at a time, when each finalist gets released by email.
The advantage of commenting on a wiki is faster page-loading, and use of formatting, e.g. #links to particular passages in the review. The advantage of commenting here is no need to create another account. The latter might win out.
Arguably outweighing the inconveniences of doing something new is the creation of an easily searchable repository which gains 100-200 quality-rated longform reviews per year, for as long as Scott wants to keep going.
He could also set up a few categories for important issues, so future readers could easily find e.g. all years' healthcare reviews together.
Yeah, when Scott released finalists a few people would have already read them in the "preliminary round", but the point is that most people were reading it at the same time.
And I guess, more importantly, commenting at the same time: I think these sort of discussions have a shelf-life: if I write a comment and someone replies an hour later, I'm way more likely to reply than if I get the reply a month later.
This is obviously a dumb question because if it wasn't you'd have already mentioned it, but can't anyone edit a wiki at any time? And so you'd risk changes being made to your review? (As a rule the ACX commentariat is one I'd trust not to do that maliciously though)
It's a good question. An administrator could protect the page at the same time as they make it visible (which is after the reviewer says they are done). Or a bot could automatically revert any edits not by the reviewer or an administrator.
Edit history is visible to all.
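The bot-revert idea mentioned above can be sketched as plain logic over a page's revision history. This is a hypothetical illustration - the names, types, and policy here are invented for the sake of the sketch, not any real wiki engine's API:

```python
from dataclasses import dataclass

# Hypothetical sketch of the proposed policy: on a review's page, only
# the (anonymous) reviewer and the contest administrators may edit, and
# anything else gets rolled back to the last trusted revision.

@dataclass
class Revision:
    rev_id: int
    author: str
    content: str

def last_trusted_revision(history, reviewer, admins):
    """Return the most recent revision made by the reviewer or an admin.

    `history` is ordered oldest-first, as a wiki's edit log would be.
    """
    trusted = {reviewer} | set(admins)
    for rev in reversed(history):
        if rev.author in trusted:
            return rev
    return None

def revert_if_needed(history, reviewer, admins):
    """If the newest revision is untrusted, return the revision to restore."""
    latest = history[-1]
    if latest.author == reviewer or latest.author in admins:
        return None  # latest edit is trusted; nothing to do
    return last_trusted_revision(history, reviewer, admins)

# Example run with made-up usernames:
history = [
    Revision(1, "anon-reviewer-17", "Initial review text."),
    Revision(2, "anon-reviewer-17", "Review text, typo fixed."),
    Revision(3, "drive-by-editor", "Vandalised text."),
]
restore = revert_if_needed(history, "anon-reviewer-17", admins={"site-admin"})
print(restore.rev_id)  # 2
```

A real implementation on MediaWiki would just wrap this decision rule around the API's revision listing and undo facilities, but the policy itself is only a few lines either way.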
My latest article, in which every piece of artwork is made by AI, got a ton of negative feedback from my followers. Like "go jump off a bridge" bad. This may be because I am more in the writing community rather than the rationalist one, but I was wondering if others who use DALL-E or Midjourney in their writing are getting blowback for putting artists out of business.
For reference:
https://shortstory.substack.com/p/every-piece-of-artwork-in-this-article
One of my kiddies recently told me that they wanted to be a graphic designer when they grow up. Having had a go with Midjourney, and observing the pace of improvements in this area, I do wonder if there will be any sort of viable career in this area in a decade or so.
For someone who has just left art school, or is trying to earn money as an artist, it must be very concerning indeed.
On the other hand, I have been using Github Copilot for a while, and find that it is an excellent tool but very unlikely to replace any actual coders.
I personally think that there will be either the same or even more graphic design work in the future, it will just look dramatically different than today's graphic design work.
If I had to guess, it will involve prompt crafting, as well as stitching together and editing AI model images. Graphic designer productivity will go way up, which will mean prices will go way down, but also that far more projects will use graphic designers than currently do.
Overall, corporate art will get better as local businesses/ads start to have the polish and quality of the large corporate space.
My favorite analogy for this: do we think there are more or fewer professional photographers than there used to be professional portrait painters pre-photography?
How many people pre-photography hired a portrait painter for their wedding, versus how many now hire a photographer (and often a videographer too, these days)?
The details of what doing this kind of work looks like will almost certainly change drastically, but I think that the number of people doing it will not change much.
The question is then: will more graphic design *work* entail more graphic design *workers*? Cf. the "death" of American manufacturing, when more is actually being produced than ever before, just with far fewer than the peak number of manufacturing workers.
I can't see anyone telling you to jump off a bridge, but two of the comments do say "this is depressing".
You didn't see the private messages. Jumping off a bridge is a euphemism for much stronger language.
Oh dear. That's completely unwarranted and I'm sorry you had to deal with that. I thought your post was very good.
Philistine here, I'm sure, but I thought the pictures were gorgeous.
While you certainly don't deserve to be told to jump off a bridge, I think you need to consider how the written content of your article looks from the perspective of a visual artist. Since you say that you're in the writing community, I'll try to mirror your ideas to writing (and specifically short stories):
1. Everybody will use it in the next two years. The process of making short stories will be deceptively easy. You give the AI the first few lines of the story and a general theme. Then the AI uses others' fanfiction works and short stories to create a new story. It will get quicker and cheaper over time.
2. <insert same thing that you said about legal protection but for characters and plots>
3. You'll no longer need to browse the erotica section of Amazon or AO3. You don't need to hope someone else shares your kinks. You can tell the AI what you want included or excluded.
4. Original content for blogs, newspapers, etc will accelerate. Longer books will be produced faster. Details help provide clarity and are also just important for understanding. People will rely much less on copywriters and freelance writers because new stories can be generated in minutes. They won't be gone. Original, well-written research papers will always have a place in the world.
5. AI will create ideas for you. <insert same ideas, but characters and settings instead of mediums and subjects>. Here's a short story about one of the Africans caught in the Atlantic slave trade, which is much more evocative than I thought possible.
6. AI isn't perfect - yet. First, it has trouble with dialogue that sounds realistic. It also struggles with exact continuity. Creative writers are going to be okay for now, as getting the precise flow of time can be tricky. Lastly, proper grammar is tough. This is easily overcome with Grammarly, and isn't a major barrier.
Final thought: there will be displacement and job loss in the creative writing industry (especially in short stories where you have less time to mess up continuity!) We will have many more stories, but all writers must level up their game.
---
My comparisons aren't perfect, but I think I highlighted the biggest issue: ignoring the effort *and thought* that goes into making visual or literary art. I imagine you don't like being told that an AI could totally replace you and all of your short story writers while maintaining 90% of the quality. Or that you shouldn't expect to be paid as much because you're easily replaceable by something that is likely feeding off of your own work without credit. Or that you would still be useful, but only in specific cases like writing instruction manuals for power tools.
To be clear, I think you're right in a lot of ways! I know that many company blogs will appreciate not needing to pay for a stock photo license when they can type a few words in to get something that will only be glanced at anyway. But I think that you missed the reason why visual artists were mad at your article.
Edit: also a disclaimer, I'm not a visual artist by trade but I am trying to start it as a hobby.
Thank you for this excellent comment. Sadly, your examples as they relate to writing might also be coming true much sooner than many people think.
I don't think that AI will actually take over either visual or literary art. In addition to what Machine Interface said about the effort to get small corrections compared to a human, there's also value derived from the knowledge that there's a person behind the story or image. You can learn a lot about a person from their art.
Have a look at Yuumei's art (https://www.yuumeiart.com/). You can see a recurring theme of nature: bathtubs full of flowers, musical instruments turned into koi ponds, cities being overgrown by vibrant grasses and leaves. You can tell without a word that she probably cares for the environment and loves nature. A step over to her blog confirms it: she ran a donation campaign for several climate and wildlife charities such as Rainforest Trust and Ocean Conservancy. An AI would have no such personality or story beyond a few words. I'm not much of a reader, so forgive my generic example for writing: you can see Harper Lee's desire to comment on race and prejudice in To Kill a Mockingbird.
As I said in my earlier comment, I fully believe that certain jobs like stock photography (writing example: corporate blogs? advertising?) will be replaced. Those seem like fairly soulless jobs (sorry stock photographers! I'm sure you're great people!) But art and writing to *tell* a story and not just look nice or be interesting will still have a place because it's about connecting with people.
This is good, also anonymous people online are mostly monsters and so they say crazy things.
So we have had twenty-six United Nations Climate Change Conferences, and as far as I can tell, there is lots of talk about "projected degrees of temperature rise", but no goal for "projected peak CO2 concentration".
Why is that?
The IPCC does have predictions for that - the one I found with a quick Google says that CO2 concentration will be between 550 and 970 ppm by 2100, depending on our policy choices. I suspect there's less talk of it because temperature is usually the outcome we care about, although CO2 does have some direct effects like ocean acidification.
And increased crop yield. Also greening of the Earth.
Temperature rise is probably a more useful metric in some ways. I would guess that it has an influence on things like ocean water levels (melting ice caps) and general quality of life around the world (places without A/C aren't prepared for higher temperatures and living inside of an oven seems dangerous).
Is gravity quantized or continuous? Since it's a function of mass and distance, I guess it depends on whether mass is quantized. I think there are no particles with mass less than the neutrino's, so maybe mass is quantized and the smallest quantum of mass is the neutrino mass?
But since gravity is inversely related to distance, it can be made arbitrarily small by increasing the distance, so if gravity is quantized it's a weird kind where there's no smallest quantum unit.
Well, this is not my field, but I think most of us think it's rather a choice between "Is the correct description of dynamics in our universe a lot like QFT or not?" If the answer is "yes" then gravity *must* be a quantum field, because QFT doesn't admit the possibility that some fields might be classical, some others quantum. Either QM is a correct description of our universe's dynamics, or it is not, I don't think there's a lot of taste for some kind of "sometimes" or "in some areas" answer. The fact that people have a hard time figuring out a quantum field theory of gravity that looks a lot like, say, QED, is one of the reasons to think maybe QFT isn't quite right. But on the other hand I don't think anyone is even trying to come up with a classical equivalent to QED, so if you asked most people I think they'd probably say gravity will end up with some theory that has a "quantum" feel to it.
Rest mass is certainly quantized, in the sense that for any given field you only get excitations that are...well, quantum. 1 electron, 2 electrons, et cetera, but never 1.4335578 electrons. Same with all other particles. (Photons are a weird case because they have a rest mass of zero.) But it feels like you mean rest mass per se, something independent of particle identity, e.g. *all* particles of *any* fields must have masses that are multiples of some even more fundamental quantum of mass. I dunno if that's an aspect of any current theory.
I also don't know if spacetime itself would need to be quantized in a quantum theory of gravity, in the sense that Points A and B cannot be chosen arbitrarily close -- this is not my field, as I said. Usually quantization happens in the amplitude of excitations of the quantum field, e.g. if the field for gravity is the metric tensor then maybe spacetime can curve away from flat in only tiny discontinuous jumps. If I had to speculate wildly on what this means, I would guess it means the location of events in spacetime cannot be nailed down precisely; there would always be some indeterminacy.
Oh boy, I love thinking about what could be responsible for MoND (Modified Newtonian Dynamics... see the Triton Station blog.) I keep going back to some low energy quantum state of the universe. And if there is such a thing as a graviton, then its lowest energy state is something like a particle in a box the size of the universe... which gives it a frequency of ~ one over the age of the universe. Now besides not knowing if gravitons are real, I also have no idea how to calculate their energy given their wavelength, but if the energy is also related through Planck's constant (as with photons), then this is a very small amount of energy... but still non-zero and quantized.
All the above could be BS too.
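For what it's worth, the graviton-in-a-box number above is easy to put on paper. A minimal back-of-envelope sketch, assuming (purely for illustration, as the comment does) that the quantum's energy follows E = h·f as for photons, with f ≈ one over the age of the universe:

```python
# Back-of-envelope: energy of a hypothetical quantum whose frequency is
# ~ 1 / (age of the universe), assuming E = h*f holds as it does for photons.
h = 6.626e-34                        # Planck's constant, J*s
age_universe_s = 13.8e9 * 3.156e7    # ~13.8 billion years in seconds (~4.4e17 s)
f = 1 / age_universe_s               # frequency ~ 2.3e-18 Hz
E = h * f                            # ~1.5e-51 J: nonzero, but absurdly tiny
print(f"f = {f:.2e} Hz, E = {E:.2e} J")
```

So the claimed energy really is "very small but non-zero": roughly 10^-51 joules, some 32 orders of magnitude below even a CMB photon.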
Interesting - the Hubble Parameter (AKA Hubble Constant) has a value almost equal to one over the age of the universe. And yes, it is a frequency.
But what does that mean?
Gravity does do this one weird trick that we've only ever seen quantum fields do before: https://bigthink.com/starts-with-a-bang/quantum-gravity/
So yeah, maybe it's quantized, maybe not.
If you assume it's quantized and do the same maths you do for the other forces, it works out just fine at normal energies. But at very short distances (or black holes) the maths gives infinite numbers of infinite terms, which means either it's not quantized, or something else in addition must be going on.
https://www.quantamagazine.org/why-gravity-is-not-like-the-other-forces-20200615/
"All models are wrong, but some are useful". Quantum gravity might be wrong, but it still behaves like it's right, at least some of the time.
Wait, the Aharonov-Bohm effect says nothing about the quantisation of gravity, as it assumes a classical potential. And it doesn't really say anything about the observable consequences of the potential, as we already know the potential arises from approximations to the Einstein field equations. It isn't "real" beyond that. So how does that Big Think piece say anything about quantisation?
There's no particular reason to believe that distance is quantized.
Planck length is a thing. Wikipedia says - "It is possible that the Planck length is the shortest physically measurable distance..."
That's a factoid, in the sense that it's commonly believed but there's no real reason to believe that it's true.
The lowest energy photon is something like Planck's constant divided by the age of the universe.
That's not the least energetic photon possible, and in fact you could detect one with less than half the energy by simply(!) having an antenna that spans the entire observable universe, though?
Does having an antenna the size of the observable universe even make sense? I mean, even assuming you can magically make enough material just appear/assemble/whatever in the correct configuration, what *is* the correct configuration in an expanding universe? What does it mean to detect a photon with the wavelength of a few billion light-years? When do you detect it? What does it mean that the universe became a few orders of magnitude larger during one wavelength? When (and how) do you even determine if your antenna is the right length?
Having some expertise in the field, I will try to point out some misconceptions:
- Gravity is not a "function of mass and distance". Mass, or, rather, energy/momentum/pressure do warp spacetime, at least classically, but it's not a straightforward relation, curved spacetime does not require mass or energy.
- Mass is unlikely to be quantized, and one argument against it is that the usual quantization limit, "Planck... something" fails for mass, since Planck mass is about 20 micrograms, which is rather large.
- Gravity is inversely related to distance only in the classical Newtonian approximation, which does not hold in general. For example, it does not hold for black holes.
- As Larry pointed out, "quantization" does not necessarily mean the existence of a smallest possible unit of something, though sometimes it does, like with electric charge.
- Gravity may well not be a fundamental force, and so may not have a quantum limit at all, but rather be emergent from, say, the Hilbert space of quantum mechanics when the number of states gets stupidly large.
- We just don't know much about gravity at distances below a few millimeters. Well, we know that particle collisions in LHC and whatever we observe in nature don't seem to create either stable or insta-evaporating black holes, so that limits the size of extra dimensions, if any.
We can detect relatively easily that light is quantised - for example we can show that if we shine light of a particular colour on a sensor and dim it enough, we will eventually find that instead of a continuous signal getting weaker and weaker, we will eventually detect a photon of a fixed energy now and again.
We can't do this experiment with gravity. The reason is that gravity as a force is extremely weak. The only reason we know of it is that gravitational charge goes in only one direction, and like charges attract rather than repel. So you can get a static accumulation of gravity the size of a planet, or larger. You can't easily get a static accumulation of positive charge bigger than the nucleus of a large atom, and that has to be held together by the strong nuclear force or it will explode (the energy of an atom bomb, though not a hydrogen bomb, basically comes from the compressed electrostatic force breaking out). [On the more peaceful side, it's what makes hydrogen bombs hard to ignite.]
But to detect photons you need electromagnetism, which happens when electric charges oscillate at high velocity. Because there are positive and negative charges tied to objects of very small mass, it's easy to make them oscillate, and the resulting waves in the electromagnetic field are strong and easy to detect - for example light. Look closely enough at the light, and you will see the photons.
The corresponding effect in the gravitational field is called gravitomagnetism. The waves and other features exist and have been detected, but to get gravitational waves powerful enough to detect you need something like two neutron stars in close orbit. We can't make detectable gravitational waves on Earth, while it's easy to make light.
As well as being quanta of a weak force, the gravitons we might detect are very low frequency. There's no equipment we can currently conceive of building that would demonstrate their existence. The main reason for believing in them is a belief in the consistency of the fundamental forces of nature.
I don't think a field being quantized means that there is a smallest increment of that field's strength. A field being quantized means that its effect is communicated in discrete units called quanta.
For instance, the photon is the quantum of the electromagnetic field, but I don't think there is necessarily a "least energetic photon possible" or something like that (you could talk about photons with arbitrarily large wavelengths, and thus arbitrarily low energies).
So in the sense you are asking, gravity is probably continuous, even if we eventually describe it using a quantized field rather than a classical theory, as we do currently.
Of course, the current theory is a classical theory anyway, so in the current description of gravity everything is continuous.
Or at least that's what I remember from my undergrad physics studies.
AIUI, there's a "least energetic photon detectable" due to the finite size of a buildable antenna (even theoretically, due to the event horizon produced by the accelerating expansion of the universe).
This is very interesting and I hope someone can answer this. Perhaps one explanation is that distance itself is also quantized?
I’m organizing several conferences for entrepreneurs, scientists and innovators in Próspera, the startup city.
Scholarships for flights and accommodation options are available.
More here: https://infinitafund.com/scholarship
And here:
Prospera Healthtech Summit, September 23-25, 2022: https://infinitafund.com/healthtech2022
Prospera Edtech Summit, October 28-30, 2022: https://infinitafund.com/edtech2022
Prospera Fintech Summit, November 18-20, 2022: https://infinitafund.com/fintech2022
I was recently thinking about something that I did not have a good grasp of and felt this community would provide helpful commentary on.
Many of the nerdy communities I participate in joke they self-select for intelligence. SSC/EA/basketball analytics discussion groups etc. At the same time, from my observation, all of these groups have very minimal East Asian representation.
Is there writing on this issue? Have people already theorized why this is?
I suspect cultural factors + a smaller number of people who grew up in the West in families comfortable enough to let their children waste tons of time on the internet and not feel academic pressure. I really don't know though.
My immediate thought is that going from the East Asian languages to English is really hard, and the hardest discussions to learn to follow are nebulous concepts like what people who self-select for intelligence are going to talk about. I suspect there aren't many native English speakers in the intelligence circles on their end either.
So is the idea that these circles should mainly be European second-language English speakers?
Not exactly this, but related:
https://slatestarcodex.com/2015/02/11/black-people-less-likely/
East Asian culture tends to be more practically minded. Intelligence in and of itself is not that valuable if it's not making you money. Spending lots of time online having intelligent discussions normally doesn't lead to money.
Maybe not directly, but it keeps the brain cells ticking over and considered posts are practice for improving reasoning and communication skills and articulacy. Both of those results, it seems to me, indirectly tend to improve money making skills, or success anyway, in jobs requiring mental effort and in particular interviews for the same.
I quite strongly disagree. Most jobs are very specific. Even reading papers/books for work is in 99% of cases an irrelevant luxury... and you're talking about random intellectual chitchat on the internets.
In my mind this community selects on intelligence and a certain impracticality/lack of focus/whatever your name is for an addiction to random intellectual rabbit holes. As a result, from my perspective at least, it's pretty obvious that on average members of this community underperform on success conditional on intelligence - my Chinese friends are probably less smart than my rat friends, but there is a lot more Goldman/Citadel etc there! (though admittedly the observation is based on NY rats more than the valley; I could imagine that with valley culture these over-intellectualism attitudes are less of a negative)
You all may enjoy my interview of the brilliant engineer and blogger Austin Vernon.
We discuss how energy superabundance will change the world, why software hasn't increased total factor productivity, how Starship can be turned into a kinetic weapon, why nuclear is overrated, blockchains, batteries, flying cars, finding alpha, Toyota Production System, & much more.
https://www.dwarkeshpatel.com/p/austin-vernon
That guy's blog is fantastic. Tying into the bit about nuclear, he had a very good blog post on this:
https://austinvernon.site/blog/nuclearcomeback.html
That was excellent. Thanks.
yes, we discuss it in a lot of detail in the podcast
Hi Dwarkesh,
I've noticed over the last year, you've seemingly gone from a normal young person to an internet micro-celebrity (I mean this warmly; I hope you take it as such).
I was curious how this has impacted your life - both in terms of how you view yourself (and the paths you want to pursue) and your social life. Does your IRL network know about your success/treat you differently?
It hasn't really impacted my social life at all; you and the people you know IRL get used to it pretty fast and go back to your normal relationship dynamics (thank God).
But I definitely do have a lot more confidence and ambition than I did a year ago, and that definitely affects what paths I end up pursuing.
Well, let's look at that Letterboxd link:
"General Nanisca as she trains the next generation of recruits and readies them for battle against an enemy determined to destroy their way of life"
And what is the way of life General Nanisca is fighting to protect against the wicked enemy? Constant warfare, so bad that the male population has dropped enough the kingdom of Dahomey *needed* to recruit female soldiers, widespread slavery, and using slaves as human sacrifice.
That's a *little* different than the blurb would lead you to believe. I think some of the negative reviews may be based on "this is not the true history" and not review bombing.
Speaking of which, Amazon owns IMDb and admitted they were fiddling about with reviews for Rings of Power. So any reviews that were too negative were deleted:
https://www.reddit.com/r/lotr/comments/x46870/imdb_have_deleted_all_the_negative_reviews_for/
This includes honest reviews (I have seen even generally positive reviews that go 'visuals are awesome, writing is poor and pace is terribly slow') as well as any trolls. So it's not as simple as it's being made out to be.
I've done a *lot* of complaining about Rings of Power. I wanted to like it, but I can't as it is simply too far removed from genuine Tolkien lore. What they've done is create Generic TV Fantasy Show and just slap on the names "Elrond" and "Galadriel" and "Celebrimbor" on certain characters.
The most recent, episode four, is a doozy in that regard. I looked up some reviews online to see if it was worth watching, or what was skippable in it (there's a lot of skippable content so far) and oh man. I couldn't believe the first one I saw, so I looked up a couple others, and it was true.
Even a generally favourable review thought this point was clumsily done:
https://www.youtube.com/watch?v=W3BwFsN_ERE
Before I start ranting, let me give them one good point here - the Adar character is interesting and had me wildly guessing: could this possibly be Maeglin? But I doubt the show would go there.
Let me set it up. Why do the Númenoreans pre-Downfall hate and resent the Elves?
Tolkien:
"The Númenóreans …became thus in appearance, and even in powers of mind, hardly distinguishable from the Elves – but they remained mortal, even though rewarded by a triple, or more than a triple, span of years. Their reward is their undoing – or the means of their temptation. Their long life aids their achievements in art and wisdom, but breeds a possessive attitude to these things, and desire awakes for more time for their enjoyment. Foreseeing this in part, the gods laid a Ban on the Númenóreans from the beginning: they must never sail to Eressëa, nor westward out of sight of their own land. In all other directions they could go as they would. They must not set foot on 'immortal' lands, and so become enamoured of an immortality (within the world), which was against their law, the special doom or gift of Ilúvatar (God), and which their nature could not in fact endure.
There are three phases in their fall from grace. First acquiescence, obedience that is free and willing, though without complete understanding. Then for long they obey unwillingly, murmuring more and more openly. Finally they rebel – and a rift appears between the King's men and rebels, and the small minority of persecuted Faithful."
The show: (transcript courtesy of this site: https://tvshowtranscripts.ourboard.org/viewtopic.php?f=1484&t=56471)
Tamar: She summoned the Elf to court. Just this morning. Elf's mate attacks four guildsmen, and Míriel has her up for tea?
Guildsman: Probably she called the Elf in to punish her. Tamar: Or to ask her for orders. And while the Elf whispers poison in our Queen's ear, who's speaking for us?
(Guildsmen arguing, clamoring ) ( lively chatter )
Tamar: Elf ships on our shore? Elf workers taking your trades? Workers who don't sleep, don't tire, don't age.
Crowd: No!
Tamar: I say, the Queen's either blind or an Elf lover. Just like her father. Elf lover!
Crowd chanting: Elf lover! Elf lover! Elf lover! Elf lover!
(And then Pharazon shows up to calm the crowd and take the opportunity to do some populist speechifying about 'Númenor for Númenoreans' and handing out free drink).
Yikes. Tolkien - the resentment is based on fear of death versus Show - dey took er jerbs.
Do you really wonder it's getting negative reviews and not simply from trolls and racists?
This is really a fallacious point because it doesn't consider the overall sample size of all ratings and the demographic mix, as well as control for the effects of top-down intervention (Rotten Tomatoes was recently said to suppress negative votes on Rings of Power or something like that). I can just as easily look at the strongly positive distribution of Letterboxd and the fake-seeming reviews to conclude it's *this* site which is astroturfing and should be ignored as an outlier.
Yes, exactly. Extremes happen all over reality depending on what you measure and how you sample and countless other choices. The number of starving people by nationality was a very extreme distribution in 1945; the number of killed combatants by gender in nearly all wars is very extreme. The percentage of planets that have life is, famously, an extremely extreme distribution, so extreme it has a paradox named after it.
God doesn't exist, but if they do I don't think they have a particular fondness for the normal distribution, it's just a useful tool that happens to describe lots of situations well. But so is newtonian mechanics.
Okay, I had a look at one review from the 99% positive critic's reviews over on Rotten Tomatoes, let's hear it for The Curvy Critics.
Her synopsis of what the movie is all about:
"The Woman King brings us into the year 1823. Orphaned at birth and raised by an abusive guardian who seeks only to marry her off for money, young Nawi petitions for entry into the Agojie, led by the single-minded Nanisca . To defend their people against the oppressive and heavily armed Oyo Empire, the Agojie places their candidates through intense training with Nawi rising to the cream of the crop as an outstanding, ferocious soldier. As the Agojie prepare for the fight of their lives against both the Oyo and the Portuguese slave traders with whom they are in league, long-buried secrets come to light and harrowing stories of personal sacrifice arise, which prove to only strengthen the bonds between these unstoppable warrior women."
Oooh, those wicked Portuguese trying to enslave the free people of Dahomey, right? Let me look up who these "oppressive Oyo Empire" is. So, were they Wicked Slavers? The answer is "yes, but"
https://www.worldhistory.org/Oyo_Empire/
"The Oyo Empire, with its capital at Old Oyo near the Niger River, prospered on regional trade and became a central facilitator in moving slaves from Africa's interior to the coast and waiting European sailing ships. The trade in humanity was so large that this part of Africa became known simply as the 'Slave Coast'. The Oyo eventually succumbed to the expanding Islamic states to the north, and by the mid-19th century CE, the empire had disintegrated into small rival chiefdoms.
...By the 18th century CE half of the slaves taken from Africa came from the southern coast of West Africa, and the area controlled by the Oyo Empire, the Kingdom of Dahomey (c. 1600 - c. 1904 CE, modern Benin), and the Kingdom of Benin - the Bight of Benin - came to be widely known as simply the 'Slave Coast' (the 'Gold Coast', another lucrative trade hub, was further to the west). There were two main reasons why the slave trade centred here: firstly it was one of the most densely populated areas of Africa reachable by the Europeans, and secondly, the Oyo Empire, and to an even greater extent the Kingdom of Dahomey, provided the necessary command infrastructures to organize the movement of slaves from the interior to the coast. In return, the Oyo received European goods which they could use themselves or trade with neighbouring states."
'To an even greater extent the Kingdom of Dahomey'? And Dahomey itself wanted to establish links with the Portuguese:
https://en.wikipedia.org/wiki/Dahomey#Portugal
"Dahomey sent at least five embassies to Portugal and Brazil during the years of 1750, 1795, 1805, 1811 and 1818, with the goal of negotiating the terms of the Atlantic slave trade. These missions created an official correspondence between the kings of Dahomey and the kings of Portugal, and gifts were exchanged between them. The Portuguese Crown paid for the travel and accommodation expenses of Dahomey's ambassadors, who traveled between Lisbon and Salvador, Bahia. The embassies of 1805 and 1811 brought letters from King Adandozan, who had imprisoned Portuguese subjects in the Dahomean capital of Abomey and requested for Portugal to trade exclusively at Ouidah. Portugal promised to answer to his demands if he released the prisoners."
But those wicked Portuguese were carrying off slaves to Europe, right?
https://www.worldhistory.org/Kingdom_of_Benin/
"The Europeans were interested in beads, cotton cloth, ivory, and slaves, which they could then trade on to other West African peoples in exchange for what they prized most of all: gold and pepper (the only two goods in demand in Europe). West African tribes sought, too, the fine cotton cloth of India, glass beads, and cowrie shells which the Portuguese brought to Africa."
Say it ain't so, NYC Movie Guru!
"In 1800s Africa, General Nanisca (Viola Davis), trains the Agojie, a group of all-female warriors, to defend the Kingdom of Dahomey from the nefarious Oyo general, Oba Ade (Jimmy Odukoya), who's kidnapping and enslaving the women of Dahomey. Izogie (Lashana Lynch) and Nawi (Thuso Mbedu), who develops a romance with Malik (Jayme Lawson), are also among the warriors of Agojie. John Boyega plays Ghezo, the King of Dahomey."
Oh good, glad to see they clear up all that nasty propaganda about it being a struggle between the Oyo and the Dahomeans for access to the coast, trade (including slaves) with Europeans, and gaining territory. Thelma Adams (another Rotten Tomatoes positive critic) over at AARP Movies For Grownups sets us straight on what it's really about:
"While the movie’s treatment is surprisingly conventional, the tale of women empowered to own their own bodies couldn’t be timelier."
Unless, of course, you're one of the women enslaved by the Dahomeans to be sold on, but let's not mention the war, hmmmm?
I don't know, this seems like assuming an awful lot of things about how people vote and how movies induce positive/negative feelings and how those feelings translate into votes and how marketing works and etc....
Fundamentally, I don't *see* why the vast majority of people shouldn't either love something very very much or hate it very very much. I don't watch a lot of movies (really barely at all), but I read, and when I see a Goodreads page for a particularly divisive book (not because of woke stuff, I keep very far away from those), the ratings *are* indeed either all 4/4.5-star glowing reviews, or 1/2-star damning reviews. It happens.
Regarding your last remark, this... seems like the opposite effect of not gaming the system. I have heard about voting-system-shenanigans before in the context of the tech forum HackerNews, and it does the opposite for me, I trust HN's upvotes and rankings *less* because of it. It's impossible to define what a fair "anti-gamable" voting system looks like, even without the internet's anonymity on top ruining things. Even the braindead one-person-one-vote privileges the majority, which isn't necessarily optimal in media of all things, and is trivially gamable with the internet. Everything else is just downhill from here.
The best possible thing would seem to be: delay as many choices as you possibly can and delegate them to the user, gather all the votes you can, then let the user filter and re-arrange and ignore and amplify as they like, being careful not to privilege any default over any other. But of course, tools are only good because they limit choice, and not everybody wants to be a data scientist in order to know if a 2-hour movie is worth watching. The next best thing is to try lots and lots of combinations of rules and filters and choose the "best" according to an intuitively-normative metric, like (say) total profit of the movie or post-view satisfaction of viewers who *provably* watched it. Offer all of those filters on the raw data, each tagged with the metric it optimizes.
"Regarding your last remark, this... seems like the opposite effect of not gaming the system."
It's an all-female, and more importantly, all-black movie set in Historical Africa about All-Black Female Warriors fighting evil European slavers. What critic is going to give that a negative review, be they working for legacy mainstream media or Just Some Person with their own Youtube channel? It would be asking to have your head cut off and put on a spike as a warning to traitors.
Okay? And why do you imagine people don't give such media positive reviews for ideological reasons?
She-Hulk is absolute garbage, but large numbers of people are lapping it up because something something women!
As Freddie DeBoer points out, the reason these types of media include social justice themes is precisely because it's easier than engaging in good writing, and the producers can just hide behind cries of 'racism/sexism' when people point out how bad the writing is.
Oh, and let's also consider the MASSIVE factor here that movies with explicit right-wing themes simply do not get made any more, so there's no possible way for 'right-wing' movies to be review-bombed; of course close to 100% of review-bombing is going to be done by anti-leftists.
"the show is mediocre and forgettable (so basically a normal Marvel product), but certainly not "absolute garbage"."
I haven't seen it and have no intention of watching it, despite Tatiana Maslany being a talented actress, because of the trailer clips I saw.
In one episode, She-Hulk twerks with Megan Thee Stallion. Funny, I seem to recall some articles back in the day about how white women twerking was cultural appropriation and they shouldn't do it. I suppose it's okay, though, if you're One Of The Good Ones?
And seemingly the recent episode was "She-Hulk buys a suit"? Thrill-a-minute TV right there!
"If you get called a racist for loudly proclaiming that The Rings of Power is the worst show ever *specifically* because of the casting choices, then frankly you walked into it with both feet, and the producers of that show have all the incentives to exploit your knee-jerk reaction for their marketing."
Casting choices that make no sense except as "tick off the diversity boxes". Two minutes of backstory as to where Dísa and Arondir come from, that's all that is needed, but the show can't provide it. So Arondir is fairly conspicuously Lone Black Elf in the show so far, same with Dísa (I think I saw another black Dwarf in one of the crowd scenes in Khazad-dum, but I can't be absolutely certain).
That's not "these characters are established as part of the world", that's "we went all-in in looking for media cool points about how Diverse And Representative we were".
I don't care about Tar-Míriel's casting since she's fairly appropriate for the role (though I am chuckling about when we get to see her father and he's white; Pharazon her cousin is white; Invented Son of Pharazon is white, and Elendil, cadet branch of the royal family is white. Mommy must also have been black but since she's presumably dead, we don't have to bother with casting a third black woman in our progressive, representative, Tolkien-for-the-modern-world show).
Ditto with the Southland villagers - they're all too white, where the show could legitimately have cast black and brown actors. But that would have deprived us of the scene where White Young Guy is Racist to Black Elf, and we can't have that, can we?
I wouldn't give a damn if they had cast the entire show, from Galadriel on down, as black/brown if the showrunners could only write decent dialogue, but they can't. That's the worst of it - it's not Tolkien, it's Generic Fantasy with Tolkien names slapped on.
Last night I stumbled across this very long, very funny (and very mean to the Welsh which is not fair) review on Youtube. Warning for language, I guess:
https://www.youtube.com/watch?v=vT6bIea7YMo
>But that's all the reason I need; I have no sympathy either for hysterical, paranoid right-wingers who give 1 ratings to movies and series they haven't watched just because they perceive them to be "woke"
You know what they say, no smoke without fire. Hysterical right-wingers can only exist because of the hysterical wokies who jump up and down like excited dogs (but way less cute or useful) whenever a giant corp throws them a bone. Those human-shaped things exist, are more influential and amplified (because what corporation doesn't like free dicksucking and bootlicking?), and the hysterical right-wingers are probably doing us all a favor by exerting a braking effect, though not nearly enough to cause a visible difference, it seems.
Oh I agree that anger is not the correct response, not the obvious loud type at least, since that just further vindicates the wokies. Snarky contempt and relentless mocking and transgression seem to be the things that are really devastatingly effective against the religion. It's an especially fragile religion after all, with lots of touchy-feelies. But framing the reaction to something as the problem while ignoring the decade-long action that spurred it seems to be disingenuousness of the first order.
>Marvel is so "woke", all its movies are financed and materially supported by the DoD
Citation needed; why would the Department of Defense finance cringe superhero movies (except possibly Iron Man, because it shows US military action in a favorable light), or for that matter any movies at all except promotional documentaries and "historical" movies about the US's enemies?
Also this is irrelevant. I don't get why being useful to the US military is somehow an argument against a thing being woke; wokeness is precisely defined as being so wrapped up in the wokie's own petty irrelevant fantasy world that they care for no one and nothing else, so of course wokies are very useful idiots to all sorts of organizations, it comes with the territory. When your face gets bloody red at something as an automatic reflex, all sorts of people will figure out how to narrate the world to you so as to exploit that. The oft-cited video about progressives screaming at Occupy Wall Street protestors about who gets to talk before whom comes to mind. Wokeness is a religion, and all non-personal religions are extremely useful as mob command-and-control.
I'm really struggling with the "No Smoke Without Fire" -> "Witch Hunts, Lynch Mobs" connection. I can see why this could be the case if I had meant by the phrase something like "any and every accusation is evidence against someone," but this is really not what I meant. What I meant is: in feedback loops, everyone in the loop is equally at fault, except the loop-starter, who is more at fault than everyone else.
Being offended is a feedback loop: somebody gets offended, those who offend them now learn that there is a thing that offends them and do it more, which restarts the loop. It makes no sense here to blame only the offended and not the ones offending them for fun and profit. In fact, in the particular case of movies and media, you should blame the invading aggressors who started the whole thing, namely the wokies. You could certainly blame the offended as well - like I said, they're not even doing the best thing by their own metrics - but most of the blame should be assigned to the ones who started the loop.
I'm not ignorant of how the US military funds and endorses movies of certain kinds, I said as much in my comment after all. I know that e.g. Transformers and movies like it are heavily funded and given access to military bases in return for sucking dicks back. Top Gun drove enlistments in the US Navy up by blah%, I know all that. What I specifically asked for is why Marvel movies are relevant for the military, except possibly Iron Man and maybe Captain America. Yes, I mostly didn't watch any of those movies except Iron Man, I despise superhero universes even from before their Awokening. No, I don't think it's unreasonable for me to demand sources of you and not google myself; why don't *you* do the googling, it's your claim after all.
Regardless, my point wasn't really about whether or not the DoD funds cringe movies; my point is that being woke is *aligned* with being useful to organizations like the CIA and the DoD, not evidence against it. The CIA has a disgustingly hilarious woke promotional video after all (https://www.youtube.com/watch?v=X55JPbAMc9g); I was just baffled at the apparent contradiction that you seem to think the DoD's endorsement of a woke movie is.
All you have to do is remove the 1 and 10 ratings from the stats (most likely those voting 1 and 10 are the more ideological voters), then calculate the stats again. Rings of Power, for instance, averages 6.7 if you remove the 10s and 1s.
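The trimming procedure described here can be sketched in a few lines; the ratings list below is made-up illustrative data, not real Rings of Power numbers:

```python
def trimmed_mean(ratings, low=1, high=10):
    """Average ratings after discarding the extreme scores (the 1s and 10s),
    on the theory that the extremes are mostly ideological votes."""
    kept = [r for r in ratings if low < r < high]
    if not kept:
        raise ValueError("no ratings left after trimming")
    return sum(kept) / len(kept)

# Made-up bimodal distribution: hype-voters, review-bombers, and a middle.
ratings = [10, 10, 10, 1, 1, 1, 6, 7, 7, 8, 5, 6]
print(trimmed_mean(ratings))  # -> 6.5, the average of the middle votes
```

Whether this actually de-biases anything rests on the assumption that ideological voters cluster at the endpoints, which is plausible but unproven; it also throws away any sincere 1s and 10s along with the brigaded ones.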
People are really just turning off their brains in answering this stuff. "Oh we've seen it all before and it all goes away".
No, policies change; the radical of the previous generation becomes the norm of the next generation through institutional dominance, and the left win the battle, move forward, and fight the war on another front. Things settled down on the race front compared to their peak in the 1960s because civil rights were achieved. A fundamental reordering of American society occurred, and the media/schools eventually raised enough kids to support the new order that it stopped being something there were meaningfully two sides on. It didn't just 'go away'.
Gay stuff, for example, isn't as big of a focus any more because gay marriage got passed and corporations are all-in on being pro-gay. Now race and transgender stuff is centre stage, but it's entirely unclear how or why these things will go away, because there isn't a neat set of policies that can be put in place to 'fix' these issues. And the race stuff in particular seems to take over all other issues, even those not culture-war related, making it somewhat resistant to being displaced.
There really is a huge divide amongst people, and any prediction that this will rapidly diminish is an extremely radical prediction that requires strong evidence. The only reasonable hypothesis I can think of is that left-wing institutional dominance is becoming so great, and younger generations are being so thoroughly indoctrinated (and I intend this term descriptively rather than purely pejoratively) into the left-wing side of the culture war, that what happened with e.g. segregation in the past (where there's no longer any kind of political force behind anybody remotely opposed to legal desegregation policies) will happen with all other culture issues today, especially with increased non-white immigration. Even though conservatives have more kids, having conservative parents often isn't enough to compete with overwhelming institutional dominance, and in any case the kids may vote Republican but abandon right-wing culture war issues (the way that nearly no conservatives today support laws against interracial marriage), which I think is something that obscures the extent to which the left have truly dominated the culture wars (as does left-wing rhetoric about how people with more progressive cultural views than the Democrats of 100 years ago are "white supremacists").
I think as whites become a minority and the Democrats become emboldened in their 'racial equity' policies we'll actually see greater polarization, and a legitimate (peaceful) secession attempt of the most conservative states wouldn't surprise me at some stage. If DeSantis or some other 'Trump with half a brain' figure becomes president in the future, I think we'll really see the left become increasingly hostile towards sharing a country with conservatives. Or perhaps, by the time things get to this stage, there will have been enough immigration that conservatives just simply lose every fight and have to put up with what the Democrats want.
I think a lot of Democrats are absolutely convinced that racial equity policies will eventually achieve their stated goals and then all of these problems will dissolve (and/or non-whites will therefore have enough power to literally or figuratively destroy opposition to equity policies/ideology). But this is extremely unlikely, which means left-wing predictions in this regard should be heavily discounted.
Obviously, AI is a wildcard here and all bets are off if/when it advances to the point that society is fundamentally reorganized. But until something radical like that happens, there's no good reason to think things will fundamentally change. The idea that all of this will just fade away is mindlessly optimistic in a way that the evidence does not justify.
Otherwise, I'm humble enough to admit I don't have any idea what will happen and anyone being sure of the outcome (other than appeals to societal reordering due to technology advances) is probably full of it.
My fantasy is that we'll reach a point where someone says something nasty in classic culture war fashion, and people roll their eyes and say, "Oh, that's so 2016".
Fantasy indeed.
Why on earth would they end?
A wave of reaction and quiet sweeping of past excesses under the rug, and a total shitstorm after a decade once designer babies / AI rights / other sci fi issue brings out the worst in people again.
In fact, the current trans rights shitfest seems like a prelude to a proper transhumanism / morphological freedom shitfest, along with perhaps a cognitive and neurochemical freedom shitfest that's long overdue as the previous consensus of the "war on drugs" is losing support.
Are the "Culture Wars" even a real thing, or just a boogeyman that Very Online people get needlessly worked up about? If you look at political polls, people on both sides tend to care most about issues like tax policy, gas and food and rent prices, infrastructure and education spending, and environmental protections (not in the sense of "we need to stop climate change," which is more of an activist concern, but "we need to make sure our local area has clean air and water and green spaces"). In other words, a lot of dry, boring, technical matters of fiscal and administrative policy that are nonetheless important because they have a direct effect on their lives. Abortion might be the sole exception, since that's a cultural issue that a great deal of people seem very concerned about, but outside of that, I'd be surprised if even 10% of the population cared about, thought about, or even knew about most of these "Culture War" disputes that get so much attention on social media.
Do I need to remind you that less than two years ago, nationwide riots were raging because a black person was killed by police?
And it's absolutely irrelevant if the average person cares about this stuff; what matters is what is happening at the most powerful institutions in the country, and yes, they overwhelmingly care about this stuff and it's not going away.
Question: What's the difference between a social phenomenon in which people believe they are in conflict, and a social phenomenon in which people are in conflict?
I think it might be helpful to see if you can identify any past culture wars that have ended, and say how they ended. Did the culture war from the 1960s end? I think there's a sense that the right won in the 1980s, but there's also a sense that they never ended and that the current culture wars are somehow the same ones. Are there older conflicts that you would count as culture wars?
I'm not going to venture a prediction about how a conflict ends until I can come up with some description of what it would even mean to end!
There used to be literal war between Catholics and Protestants in Europe.
I think the left clearly won the original culture war. The average young person can't even imagine the average American today supporting segregation, or the US explicitly having whites-only immigration policies (they'll call things by these names, but in some big-conspiracy sense, not in the sense in which these things originally existed). They won so overwhelmingly that what was radical in the early 60s is the absolute norm today (in some cases even considered reactionary) and is literally not even debated any more. You can say this is a good thing, but what's not up for debate is that the left won without question. (But remember, this is the culture war, and the failures of socialism are a different matter.)
I can remember when a boss chasing a secretary around a desk was a staple of lightweight humor.
It was a long time between the youth movement (basically, postwar babies getting their driver's licenses between 1963 and 1967) that Hollywood merchandized as The Sixties and Ronald Reagan. I hope we don't have to bounce off the funhouse walls for another decade; the woke, genderist diktats and pronouncements have gone well beyond tiresome.
I think they're a lot like missing children on milk cartons. You get a period of very high visibility and Something Must Be Done public angst, and then it sort of fades away as people get tired of the complexity, bored with the sturm und drang, or just find some other squirrel to chase.
A lot of things have faded away, I would say. We don't get worked up about demon rum and temperance the way we did in the 1900s and 1910s. We don't get worked up about how firm on Communism one is, or Who Lost China?, like we did in the 50s or early 60s. Nobody has given a damn about states' rights except fitfully since approximately 1860. Environmentalism morphed from pollution in the 70s to climate change in the 00s and 10s, so it's a little weird that it came back after a period of quiet in the 80s and 90s.
On the other hand, physiognomy-based disenfranchisement and discrimination started out with the blacks in the 50s and 60s, moved to the women in the 70s and maybe 80s, then disappeared for a while -- we actually thought we'd conquered racism for a decade or so there -- and has come back with a vengeance, except that blacks seem to need to share the stage with the transsexuals right now.
I don't really get any sense of "end" in any final-resolution kind of way, the way one can point to an end of the Second World War. I don't even get a sense of cycles and pendula. It feels more or less chaotic, like a bunch of meme stock bubbles exploding and shrinking away.
Agreed. Looking back just a few years ago, MGTOW/MRA/feminism etc. was all over the place. Nothing really changed or resolved; the arguments just lost energy and got supplanted.
Nobody will win or lose the culture wars. Things will probably continue to get more polarized as things get worse. Right vs left will continue butting heads until the societal problems get bad enough that gendered bathrooms and team mascots no longer seem like important topics. And whoever wins that mess will be random and situational.
As an aside, I'm not sure what you meant by your meme stock analogy - they aren't random.
Nothing changed? Do you know how many people lost their jobs/careers (rightly or wrongly) because of metoo?
And why would you bring up MRA/MGTOW? These were fringe movements that had no institutional support whatsoever. They unquestionably lost the culture war.
The culture war is still ongoing, it's just that race and transgender stuff has taken centre stage.
>And why would you bring up MRA/MGTOW? These were fringe movements that had no institutional support whatsoever. They unquestionably lost the culture war.
I know this is a dead thread and a new one is out there and you probably won't see this, but just in case you have email notifications on . . .
I think this is overstated. MRA, as I understand it, was at least partly originally formed around the very deep imbalance in how the courts handled divorces, especially in regards to children. I've had a couple of friends go through that and made sympathetic noises along those lines and been told that in point of fact, the old 80s and early 90s model where the dad gets screwed and the mom gets everything has really changed in the courts, and very much for the better.
I think this is actually another issue of real rightwing progress, though I only have anecdote for it.
I think this is the wrong question. There will always be some issues we're arguing about, so the culture wars won't end. Rather, the issues will settle out somehow or another, and we'll change the subject.
To see what I mean, think back.
Prohibition, interracial marriage, the death penalty, concealed carry, and gay marriage were all live culture war issues (pick your own list if you prefer). All of them are pretty much settled. The culture war didn't end, it moved on. It will again.
So, rather than ask about the current culture war, you'd have to ask about particular current issues.
I'll throw out my takes, but you may not even agree with the issues I have here:
Trans stuff will be state by state. It will be status quo in lefty states, and serious restrictions (bathroom laws, possibly pronouns required to comport with birth certificates, probably any sort of medical treatments restricted to 18 and up, possibly any psych treatment as well) in righty ones. We'll get there in the next couple of years, argue loudly about it for a few years and then move on. There won't be a national consensus but we'll stop arguing about it.
History/CRT stuff will go like this too, I think, and resolve relatively soon. My state and states like Florida will teach history the way it was taught back when I was a kid. The Civil War was fought over states' rights, the real villains were the abolitionists and the fire-eaters, who forced a war that didn't otherwise have to happen, slavery was bad to be sure, but then no real further discussion. Lots of focus on good men on both sides and honor and all that. Essentially no coverage of Reconstruction or the Redemption; American history picks back up being interesting with Teddy Roosevelt and then makes for WW1. Lefty states will keep in the new stuff focusing on slavery, cover things like the Cornerstone Speech and the Confederate constitutions, talk about R. E. Lee taking slaves during the Gettysburg campaign and the degree to which black regiments made up Grant's later forces, and cover Reconstruction in a positive light and treat the Redemption as a successful armed coup. And *then* skip to Teddy Roosevelt and WW1. In 10 years, it will simply be known that Blue and Red America teach somewhat different history, that we aren't doing anything about it, and therefore we'll stop talking about it. Oh, also, DEI trainings will stop in Red states.
Guns have moved on from concealed to constitutional carry, which I expect to be won in the next decade or so. You'll see occasional bitching about this after mass shootings (as you do now) but no one will run on restricting guns past the primaries, and no serious restrictions will be passed or even be a part of a general election campaign.
I'm really not sure with abortion. It will be live for a long while, I think. The left might get firmer control of the federal gov't and try to legislate nationally (which I think gets struck down). The right might get the same thing (which maybe gets struck down, but I'm less sure). Some states are going to pass travel restrictions and test the courts. I think those are going to hold. I think we're going to start seeing criminal penalties against the women not just the doctors, but I really feel unsure here. This one has been a cornerstone of the culture war for fifty years, and might stick around another 50.
I'm probably missing something, but that's what I've got right now. Notice that I think that three of the four are pretty much over by 2032. No idea what replaces them.
>the Civil War was fought over states' rights, the real villains were the abolitionists and the fire-eaters, who forced a war that didn't otherwise have to happen, slavery was bad to be sure, but then no real further discussion.
When/where was this if you don't mind me asking? I was schooled in a very white part of Minnesota in the 80s and 90s and this is a radically less progressive view of the war than we were taught then. And we had huge units on the reconstruction in multiple years.
In fact I would say the majority of our American History education was Revolutionary War > Run up to Civil War > Civil War > Aftermath of Civil War > Women's Suffrage > 60s Civil Rights > Return to Revolutionary War.
In basically a cycle that lasted all through 3rd grade through 12th.
Things like WWI, WWII, and Vietnam were just sort of glossed over, and the civil war was by far the biggest topic.
I went to public school in Georgia in the 90s/2000s and was taught a very Lost Cause-adjacent version. Slavery was a factor in the civil war, but it was mostly about states' rights, and the North started it. Carpetbaggers were focused on as villains more than slaveowners.
Southeastern Missouri in the 80s.
For whatever it is worth, when a good but not great student at my school went from Minnesota to Missouri in the mid 90s they immediately moved her up two grades.
I thought that was kind of sad.
Wow.
Same, I was educated in North Carolina public school and nothing close to the "lost cause" narrative ever came up.
I was taught, in a Georgia public school, that the War was only a little bit about slavery.
Good to hear. I figured with stuff like that Texas textbook a few years back that the Red states and especially the old South were hanging in with the old settlement. Do you think that's going to change? I don't really see how you can teach the civil war or the 60s civil rights movement without falling afoul of the new anti-CRT laws, for example.
Of course, of particular concern for me and mine is what the state (still Missouri) does with regard to influencing teaching in the city of St Louis. So far things seem fine on that front.
I don’t think you need to run afoul of CRT laws at all to discuss the civil rights movement. The civil rights movement is practically antithetical to the CRT movement.
The law that Missouri proposed and (I think) didn't pass this last winter forbade teaching anything that "identifies people or groups of people, entities, or institutions in the United States as inherently, immutably, or systemically sexist, racist, biased, privileged, or oppressed".
If you can't teach that the Jim Crow south was racist, then you flat out can't teach about it. Or the three fifths compromise. Or the armed forces up to WWII. Or the segregated unions (which still exist in my city). I don't see how you teach about a lot of history if you can't acknowledge how racist the institutions and people of the time were.
Which, again, isn't currently a problem, since as nearly as I can tell this language was removed and I think the final bill failed. But I'm sure we'll get something sooner or later, and it wouldn't be at all surprising given Jeff City if what we get ends up de jure outlawing the teaching of history, in the three felonies a day sense.
Not so much end as go in cycles. I think the current ones are running down, but what the new ones will be I have no clue (furrydom? God alone knows). I also think if we are running into a depression or crash or "hey, this winter we will all freeze in the dark", that is going to put the kibosh on a lot of current culture wars stuff and let it die off. We'll have a lot more to worry about than Piss Protests https://www.vice.com/en/article/jgpj5y/pissed-off-trannies-ehrc-protest if the lights are not being turned on and there is no heating.
No one living today can possibly know this answer, honestly. Personally I am sure only that (a) it won't be anytime soon, and (b) it won't be approximately a re-run of past culture-war cycles.
The winning side will become sclerotic, the losing side will evolve to become hip and subversive, the winning side will see its gains turn to ash as its new generation moves to the next fight, the losing side will start making gains, and then it starts all over again until the nomadic hordes come in.
They won't?
Or, to the extent they do, it will be through shifting terms of debate/discussion in a manner which is only clear decades later (and maybe not even then: did the 'Political Correctness' culture war end when it mutated into the 'Wokeness' culture war? Did the 'Gay Rights' culture war end, or mutate into the various 'Trans/Gender Rights' debates?)
This is one where I am humble enough to think I have no idea. Though I do think they get worse before they get significantly better. We haven't hit the "bottom" in the market yet.
It looks to me like, in the US, the right is increasingly embracing the "punk rock" role: the right is becoming the counterculture, the left is becoming the authority figure waggling its finger at those who misbehave. The right is the rebel coalition, the left is the empire.
Given the history of countercultures, on the specific issues where the right is pushing back against the left's authoritarianism, I expect the right to win. This particular fight has been swinging back and forth for decades now, even if, in retrospect, it is sometimes difficult to figure out who was who. Approximately: 30s-40s, leftist authoritarianism (Prohibition); 50s-60s, right authoritarianism (Nuclear family); 70s-80s, left authoritarianism (equal-access media laws tearing down religious radio stations); 90s-00s, right authoritarianism (anti-atheism); 10s-20s, left authoritarianism (many names).
And over the next thirty years, the right will, in its swing to ascendancy, forget how these things go, and become the authoritarian figure once more, waggling its finger, as it loses its coalition to the left, which once more becomes the ragtag band of rebels fighting The Man.
It helps that an overwhelming silent majority of non-extremely-online people are more right than left, by the current definition. The left has jumped the shark and is evaporatively cooling itself into irrelevance.
If this were really true, most elections would be won by right-wing candidates by overwhelming margins.
Nice capsule summary of the decades! Basically agreed, that seems like a good description of how the political winds have shifted.
Cue Billy Joel:
https://www.youtube.com/watch?v=eFTLKWw542g
See also: https://www.youtube.com/watch?v=M2iNLt_hUZg
Love that song!
Was Prohibition really left-wing? Seems comparable to the modern war on drugs which most would describe as more right-wing than left-wing.
Left and right aren't hard lines; stuff moves between them. Environmentalism, for example, has historically been a right-wing issue; it is, after all, fundamentally an enterprise in conserving things the way they are.
Pay attention to your confusion on this matter, because it's part of the propaganda in the water supply: You have been taught to believe that the left and right each represent some kind of coherent ideology.
No. Look around: The modern left defends corporate products on the basis of their adherence to ultimately superficial identity tagging, and the modern right criticizes the corporate nanny state and the military-industrial complex and the use of anti-terrorism laws to pursue domestic ideological groups. These dominant ideologies are diametrically opposed to the dominant ideologies in the left-right schism of twenty years ago.
Yet there is a pretense, a rationalization, that the wild fluctuations in actual policy positions all represent a coherent internal ideology - and this ideology happens to define both why you support the side you support (Democrats support civil rights / Republicans support civil rights), and why you oppose the side you oppose (Democrats are racists / Republicans are racists). The ideologies aren't even distinct - they both claim the same values for themselves, and decry the same evils in their enemies - they just change the flimsy cardboard rationalizations they use to claim that whatever bag of policies their constituents happen to support match their ideological virtues, and whatever bag of policies their opponents' constituents happen to support match their ideological evils.
It was part of first wave feminism, and you're going to have a hard time mapping any version of feminism to "right-wing".
Note that the contemporary "war on drugs" doesn't propose to touch alcohol or caffeine, and is politically distinct from the war on tobacco. The WoD is fundamentally conservative - these mind-altering drugs have been part of human culture forever, and we have adapted to them, but *those* mind-altering drugs are new and unknown and scary. Keep things the way they have traditionally been, the way we know works.
Prohibition was progressive - based on the eternal progressive belief that they can make people better.
Don't think being endorsed by feminists makes something left-wing. E.g. there's a branch of feminism which is against BDSM and against legal prostitution - positions which I think most would describe as more right-wing than left-wing.
Caffeine is close to unique in USA culture. It is the one mind-altering drug that employers routinely provide to employees (yeah, the Air Force fed meth to its pilots at one point - there are a few rare exceptions). I think one could reasonably call caffeine an occupational drug rather than a recreational drug.
I think it was a tangle that doesn't map well to the modern political framework. Lots went into support for it. Protestant churches seeking to moralize man, progressives believing that it would civilize and improve society, anti-immigrant types who associated drinking with foreigners like the Germans and (especially) the Irish, women's rights advocates who saw it as a means to curb domestic violence, and lots of other things all over our modern political spectrum.
Tricky question - what do you mean by “the culture wars?”
At a high enough level the answer is pretty obviously "never," since cultural conflict won't ever just "end," so I'm sure you mean it in regard to some specific issues.
On the other hand, we seem to be fighting about damn near every issue in the US right now - we even suck in non-cultural matters like pandemic response and make them cultural. So I think a good grasp of what specific cultural tension points we're talking about is needed to really offer up a worthwhile opinion.
>On the other hand, we seem to be fighting about damn near every issue in the US right now
It seems that way, but that's only because there are areas where one side or the other won so completely that there's no longer a conversation. (I'm going to make a post to this effect in a bit.)
We're not arguing over interracial marriage. We're not arguing over the death penalty. We're not arguing over alcohol prohibition. We're not arguing over conscription. There's a lot of stuff we're just not arguing about anymore, because one side won, one side lost, and now we're doing something else.
Interestingly we are still arguing over the death penalty, it's just that we're not arguing very hard over it. This is an interesting example of an issue that never really got comprehensively settled one way or the other (in the US), people just seem to have got bored of arguing about it.
Did people genuinely care less about the issue, or is it just that the media has found other issues to stir the pot on? I feel like a death penalty debate could easily be stirred up again if CNN put its mind to it -- live coverage outside the prison every time an inmate was being executed, a few hours a day of talking heads debating it, emotive interviews with the condemned man's mother, and pretty soon you could get everyone hot and bothered about the death penalty again.
>Interestingly we are still arguing over the death penalty, it's just that we're not arguing very hard over it.
Right. This is what I mean. I'm sure there are still passionate advocates, but it's basically a settled majority/minority position and it doesn't come up in mainstream debates. The current things will mostly (IMO) go that way too.
Interestingly, I looked up opinion polling on the death penalty and support has dropped from 80/20 in favor all the way down to 57/43 in favor over the last thirty years, so maybe it'll make a comeback as a national issue when trans stuff or CRT stuff leave the national stage.
"This is an interesting example of an issue that never really got comprehensively settled one way or the other (in the US),"
The Supreme Court has several rulings massively restricting the use of the death penalty, it's quite settled.
54% of the population is in favour of the death penalty, 43% against. That's one of the least settled issues out there.
It's interesting to look at this plot showing support for the death penalty https://news.gallup.com/poll/1606/death-penalty.aspx against this plot showing actual number of executions https://en.wikipedia.org/wiki/Capital_punishment_in_the_United_States#/media/File:Usa-executions.svg -- the death penalty became heavily unpopular in the 1960s, which coincided with a massive decrease. Executions halted entirely in 1967, and then in 1972 the Supreme Court struck the death penalty down, making it more popular again. In 1976 the Court reinstated it, executions resumed in 1977, and both its popularity and the actual number of executions continued to climb until the 1990s, when popularity and executions both started to decrease again.
I wonder if the death penalty in the US would have gone away much sooner if the Supreme Court hadn't tried to ban it.
Abortion was quite settled until activist right-wing justices overturned it. Why wouldn't the same justices also overturn restrictions on the death penalty?
Abortion wasn't settled because it was (and is) a source of huge argumentative energy every election cycle. The death penalty is . . . not. We do it, we mostly approve of it, and while there's a minority that's very interested in talking about it, they're small and have no impact in even creating a national debate. That's what I mean. Abortion has never been settled in the same sense.
People will get bored and something else takes its place. I don't think salience shifts owing to resolution. There's still room to raise the stakes in the same areas.
Probably (~80%) one or more of:
1. US civil war II
2. WWIII
3. Other GCR/X-risk
Will probably (again ~80%) be over before 2050.
In the case of #1 probably some sort of progressive pyrrhic victory. In the case of #2 conservatives win. In the case of #3 no-one wins.
None of those options seem anywhere close to 80% likely. In fact, I'd give them all under 1% odds of happening within the next century.
Leaving aside the X-Risk stuff (since that probably involves a difference in opinion on technology rather than politics), what makes you think that either a new American Civil War or a new World War will happen anytime soon?
Great-power wars are pretty common if you look over history. We only really have one data point suggesting nuclear weapons changed this ("there wasn't a great-power war in the 60s-80s"), and my suspicion is that this wasn't *just* nuclear weapons but also had to do with the national/governmental character of the USA and USSR (also we came damned close).
Re: boogaloo - it won't happen absent a constitutional crisis, certainly, but there are a few obvious candidates for one of those (a serious challenge to the SCOTUS by the other branches whether by impeachment/packing/(especially) refusal to obey judgement; a repeat of Bush v. Gore with a hated candidate like Hillary/Trump winning the litigation; hard hit on debt ceiling defunding the police and military; Article V convention occurring).
#2's doing most of the heavy lifting in those numbers; I think it's ~60% likely by 2050. But some of that is from "#1 or #3 looks like it's happening and the PRC overestimates how much shit it can get away with in the chaos" (that's why I said "one or more" originally).
Wait...what Great Power war has happened since the 80s? Or for that matter since 1945? And 1945 to 2022 is quite a long stretch of history.
One hopes you aren't counting proxy wars, which have (1) occurred throughout history, and (2) almost definitionally exclude existential struggles (since if the Powers concerned were willing to begin one of those, they wouldn't be fighting by proxy).
The Korean War involved great-power forces shooting at each other directly, so I only count the Long Peace from 1953. I agree that proxy wars don't count, though.
I say "60s to 80s" because "too soon after the last war" and "hegemonic stability" are known exceptions to "great-power wars are common". 50s were too soon after WWII, 90s/00s/kind of 10s were unipolar.