Y'know what's funny? That when I first read this hateful little essay (Dec 1st 2013) I actually thought it reminded me of Scott Alexander's writing style.
And that's strange, because we all know Scott is not hateful and doesn't hate Nick Szabo.
I'm a paid subscriber. Hidden threads obviously have less engagement than open threads, but it seems to me that they have higher quality commentary. Like perhaps Scott's truly core audience is a little more invested?
Or that could be a false narrative that there's value added!
What @Rothwed said below is accurate - the day Scott posts an open thread is often the busiest, so I think if you're wanting a comment to have a lot of engagement, you're best off posting it that day.
I once posted a comment to an open thread like 40 minutes after Scott posted, so my comment was 8th or so chronologically? I didn't notice a meaningful difference compared to when I once posted like 18 hours later, truth be told. I suspect a lot of Scott's commenters are the type of people who at least skim all of the comments.
Why am I asking? I like the blog and it seems like a good place for discussion, but there's a large quantity of comments and many seem poor quality, not as in abusive, just not really engaging with the article or other people's comments. So I thought maybe the hidden discussions might be better.
Oh, I see. Well, the commenters here have a semi-wide variety of interests under Scott's umbrella. For example, I find the prediction market stuff deadly dull and so I skip over both the posts and comment threads about it. In fact, I tend to enjoy comments that are out of left field; people asking for, like, advice on what breed of dog would best suit their lifestyle, or whatever.
Hidden threads aren't going to be more focused on Scott's posts or on each other's comments. They have about the same diversity of topics.
Hidden threads *do* often cover topics that people might prefer to talk about behind a paywall, though. I've started a couple of comments on hidden threads that (obviously) weren't totally private, but that I didn't want casually discovered by a coworker or something.
Okay, thanks for the intel. Obviously the Open Thread can be a bit more meandering and that's fine. I've found the Fake Traditions Are Traditional discussion particularly frustrating, so I'm looking for a better quality debate, but maybe there isn't one, or rather it doesn't grow on trees.
There's probably an optimal time; you can post too early on an Open Thread and have your comment swamped by later discussions. But I guess there are several factors.
Join us for our 67th OC ACXLW meetup where we'll delve into the future of artificial intelligence as projected by Leopold Aschenbrenner and analyzed by Zvi Mowshowitz, and explore the intricate dynamics of social groups. This week's readings provide a rich foundation for our discussions, exploring themes of AI development, social influence, and ethical behavior.
Discussion Topics:
Situational Awareness in AI Development
Overview: This topic will focus on Leopold Aschenbrenner's analysis of the rapid advancements in AI, particularly in Silicon Valley, and the projected developments up to 2027. Zvi Mowshowitz provides a summary that highlights key trends and potential future scenarios in AI.
Summary of Key Points:
AGI Timeline: Aschenbrenner believes AGI (Artificial General Intelligence) is likely to be developed by 2027, based on current trends in compute power, algorithmic improvements, and AI capabilities.
Intelligence Explosion: Once AGI is achieved, there could be a rapid progression to superintelligence, possibly within a year, through automated AI research.
Economic and Military Implications: Superintelligent AI could provide decisive economic and military advantages to whoever develops it first.
US-China Competition: There's a strategic race between the US and China to develop AGI. While the US currently leads, China could catch up through chip manufacturing advances and algorithmic theft.
Security Concerns: Current AI labs lack adequate security measures to protect critical AI developments from theft or misuse.
Alignment Challenge: Ensuring superintelligent AI systems remain aligned with human values and goals is a crucial unsolved problem.
Government Involvement: Aschenbrenner predicts increased US government involvement in AI development, potentially leading to a national AGI project by 2027-2028.
Societal Impact: The development of AGI and superintelligence could lead to rapid and profound changes in society, economy, and global power structures.
Ethical and Safety Considerations: There are significant concerns about the potential risks of superintelligent AI, including existential risks to humanity.
Urgency: Aschenbrenner emphasizes the need for immediate action in addressing these challenges, as the timeline for AGI development may be shorter than many expect.
YouTube Audio: Situational Awareness - Summary by Zvi
Text Article: Quotes from Leopold Aschenbrenner
Social Dynamics: Geeks, MOPs, and Sociopaths in Subculture Evolution
Overview: This discussion will delve into the social dynamics described in David Chapman's "Geeks, MOPs, and Sociopaths in Subculture Evolution" and in Leopold Aschenbrenner's quotes. We will explore how different types of individuals interact within social groups and subcultures.
TLDR: David Chapman's essay on "Geeks, MOPs, and Sociopaths" examines the roles of different types of individuals in subcultures and how these roles influence the evolution of these groups. Leopold Aschenbrenner's quotes further illuminate these dynamics.
Text Article: Geeks, MOPs, and Sociopaths in Subculture Evolution
Questions for Discussion:
How do the stages of subculture evolution described by Chapman resonate with subcultures you have experienced or observed?
What strategies could geeks employ to protect the integrity of their subculture without completely excluding mops?
How can subcultures recognize and mitigate the influence of sociopaths?
Considering the decline of traditional subcultures, what new forms of social and cultural organization might emerge?
We look forward to an engaging and thought-provoking discussion where your insights will contribute to a deeper understanding of these significant topics.
"In the past few years, at least three distinct phenomena have potentially contributed to the gloom of the Anglosphere. Let’s think of them as diagnostic inflation, prevalence inflation, and negativity inflation.
First, the diagnostics. In 2013, the psychiatrist Allen Frances offered a warning to his field. Frances had chaired the American Psychiatric Association during revisions of the fourth edition of psychiatry’s “bible,” the Diagnostic and Statistical Manual of Mental Disorders, commonly known as DSM-IV. The first edition of the DSM—published in 1952 in response to the needs of military personnel returning from World War II—listed about 100 mental disorders. By 2013, the number of disorders listed in the DSM had swelled to nearly 300. In his 2013 book, Saving Normal, Frances warned that “a looser definition of sickness” could make people worse off. “DSM-V opens up the possibility that millions and millions of people currently considered normal will be diagnosed as having a mental disorder,” he told the Canadian Medical Association Journal that year. The expansion of clinical vocabulary risked creating a new set of patients he called the “worried well”—people with normal human experiences who spent a lot of time worrying that they have a disorder. He and others called this phenomenon “diagnostic inflation”—the slapping-on of more (and more, and more) clinical labels to pathologize everyday sadness and stress.
Frances was mostly concerned that diagnostic inflation would lead to over-medicalization. He might have been right. By 2016, the share of people in the U.S. using antidepressants was more than twice as high as in Spain, France, or Germany, and nine times higher than in South Korea.
As our mental-health lexicon has expanded, U.S. content creators have recognized that anxiety is a hugely popular—or, at least, hugely attention-grabbing—topic for young people scrolling on their phones. As I reported in December, the TikTok hashtag #Trauma has more than 6 billion views. According to the podcast search engine Listen Notes, more than 5,500 podcasts have the word trauma in their title. In celebrity media, mental-health testimonials are so common that they’ve spawned a subgenre of summaries of celebrity mental-health testimonials, including “39 Celebrities Who Have Opened Up About Mental Health,” “What 22 Celebrities Have Said About Having Depression,” and “12 Times Famous Men Got Real About Mental Health.”
This takes us from diagnostic inflation to “prevalence inflation,” the term psychologists Lucy Foulkes and Jack L. Andrews use to describe the phenomenon of people developing apparent anxiety disorders from the sheer ubiquity of concern about anxiety disorders that swirl all around them. It might work something like this: People who keep hearing about new mental-health terminology—from their friends, from their family, from social-media influencers—start processing normal levels of anxiety as perilous signs of their own pathology. “If people are repeatedly told that mental health problems are common and that they might experience them … they might start to interpret any negative thoughts and feelings through this lens,” Foulkes and Andrews wrote. This can create a self-fulfilling spiral: More anxiety diagnoses lead to more hypervigilance among young people about their anxiety, which leads to more withdrawal from everyday activities, which creates actual anxiety and depression, which leads to more diagnoses, and so on."
I'm reminded of the short story by Machado de Assis in which a famous-but-quack psychiatrist announces: "I have discovered that insanity is not an island but an entire continent!" https://en.wikipedia.org/wiki/O_alienista
> Bat bombs were an experimental World War II weapon developed by the United States. The bomb consisted of a bomb-shaped casing with over a thousand compartments, each containing a hibernating Mexican free-tailed bat with a small, timed incendiary bomb attached.
> In his letter, Adams stated that the bat was the "lowest form of animal life", and that, until now, "reasons for its creation have remained unexplained". He went on to espouse that bats were created "by God to await this hour to play their part in the scheme of free human existence, and to frustrate any attempt of those who dare desecrate our way of life."
I recommend reading the entire page. It feels like a fever dream.
Incidentally, there's a parallel timeline where Japan surrendered for fear of the Adams Bomb.
I could use some help with a bit of terminology. This is for a Call of Cthulhu RPG scenario. In this setting, the heroes work for a sizeable organization that sponsors investigations into archaeology and the paranormal and dispatches them to the far corners of the world. Let's suppose there is a group of "directors" who control the finances and decide which expeditions to fund. There is a group of "agents" who put together expeditions, pitch them to the board for funding, and then run the expeditions, typically without going into the field themselves. There are also "associates" who are junior staffers, working for the directors or agents, and "crew", the hired professionals who actually go on the expeditions.
I'm pretty happy with the titles of "directors" and "associates", but somehow the "agent" title doesn't seem quite right. It's a little too James Bond, for someone who is in the end a mid-level staffer. Can anyone think of a better term?
Also, I don't understand what value the agents are providing, beyond being dispatchers/idea-fairies. Why are they just sitting behind a desk in HQ, instead of being out in the field?
Everybody hates middlemen, and wishes producers and consumers could just deal with one another directly. In simple matters that does happen. But as soon as things get complicated, opportunities for middlemen tend to open up.
In this case, I imagine the overseers (by whatever name) are kept around because they know a) what opportunities for expeditions actually exist, b) what sort of projects the directors are interested in funding, c) how to put together an application packet that is likely to be approved, d) who can be hired to go on expeditions, and e) how to exercise the level of supervision and budgetary control that looks proper to the directors. That's quite a bit of knowledge to squeeze between one set of ears.
The directors would love to deal directly with expedition leaders who mostly work in the field, returning only occasionally to report glorious success and request modest sums of money to continue. But in practice having some responsible person at HQ has proved indispensable, and the modern role of overseer was eventually formalized for this purpose.
So, the middleman is someone who's directly and uniquely responsible for the success of a single crew's expedition. But they don't actually lead from the front? Sounds dysfunctional to me. But if you insist on this org-chart and you're not willing to make up a new word, I suppose "bursar" or "handler" are maybe the closest I can think of, depending on what you want to emphasize.
"The keepers are heads of the various departments of the British Museum. They are professional curators and related academics. There are currently nine departments plus the Portable Antiquities Scheme that have keepers."
Or perhaps "curators"? Or "conservators"? Given that they are meant to put together the missions and present the case for funding, but do not participate in the field, they investigate/research any objects brought back and present the final reports?
"A curator (from Latin: cura, meaning "to take care")[1] is a manager or overseer. When working with cultural organizations, a curator is typically a "collections curator" or an "exhibitions curator", and has multifaceted tasks dependent on the particular institution and its mission. The term "curator" may designate the head of any given division, not limited to museums.
...A "collections curator", a "museum curator", or a "keeper" of a cultural heritage institution (e.g., gallery, museum, library, or archive) is a content specialist charged with an institution's collections and involved with the interpretation of heritage material including historical artifacts. A collections curator's concern necessarily involves tangible objects of some sort—artwork, collectibles, historic items, or scientific collections.
...In France, the term "collections curator" is translated as conservateur. There are two kinds of conservateurs: heritage curators (conservateurs du patrimoine) with five specialities (archeology, archives, museums, historical monuments, natural science museums), and librarian curators (conservateurs des bibliothèques).
...In the United Kingdom, the term "curator" also applies to government employees who monitor the quality of contract archaeological work under Planning Policy Guidance 16: Archaeology and Planning (PPG 16) and manage the cultural resource of a region. In a museum setting, a curator in the United Kingdom may also be called a "keeper"."
Interesting. I guess "curator" would be particularly appropriate if these expeditions are being organized by an institution that is mostly known for being a museum, such as the Smithsonian or the British Museum.
Thanks for the suggestions, everyone. Having seen the options, I have to say that nothing is really jumping out at me as obviously right in light of conventional use. That suggests I'm best off using a plainly descriptive term, for clarity. I'll go with "Expedition Organizer."
Agent sounds right because if they're pitching projects to the board then they sound more like independent agencies than employees. Much like how the government gets a lot of work done by contracting NGOs. So "Contractor" could also work, or "Agency Head".
You could also go with "Leads", but that sounds a bit too modern.
"Project Manager" is the right term in most of industry and government. In more academic contexts, including government-run science, "Principal Investigator" is also used. The "Investigator" part might suggest actively participating in the field work, and some PIs do that, but others just arrange the logistics, read the reports, and tie it all together.
In movie-industry terminology, these people are essentially "producers". That seems too specialized. The projects they lead aren't building or making anything.
One idea I'm toying with is that they are informally known as "bucks", since they're where the buck stops on any decision pertaining to an expedition. The board doesn't particularly care how an expedition succeeds or fails, only _whether_ it succeeds or fails, and of course how much it cost. So these "bucks" have quite a bit of authority, once the board has greenlit a project.
I guess what I would really like to lean on is the idea is that the organization holds these people responsible for the success or failure of missions, and accordingly gives them a lot of latitude to make decisions. So, "Officer in Charge", maybe? That might work if the term "Officer" is also used for other senior people on an expedition. So, a large expedition might have a Supply Officer, a Science Officer, a Security Officer, and so on. Casually, these folks might be known as OCs.
Possibly it's worth noting that "buck", when applied to a person, has distinctly racist overtones. But then, Lovecraft, so maybe that's not completely a negative?
Yeah, there's also that usage; even "kid" comes from that sort of thing.
But there's the usage that Deiseach mentioned, where it was a deliberately animalistic reference, with overtones of domestication and breeding, like maybe calling a woman a "brood-mare"?
I'm not saying you *shouldn't* use this term - it depends on the group and any audience. But ... better to go into it with eyes open, you know?
"Buck" was a term used to refer to male African-American slaves. From "The Adventures of Huckleberry Finn" (content warning for another term considered offensive):
"They swarmed up towards Sherburn’s house, a-whooping and raging like Injuns, and everything had to clear the way or get run over and tromped to mush, and it was awful to see. Children was heeling it ahead of the mob, screaming and trying to get out of the way; and every window along the road was full of women’s heads, and there was nigger boys in every tree, and bucks and wenches looking over every fence; and as soon as the mob would get nearly to them they would break and skaddle back out of reach. Lots of the women and girls was crying and taking on, scared most to death."
Controller? Manager? Administrator? It sounds like your agents are more like literary agents, so you could tack something appropriate onto the front of "agent" - I doubt anybody thinks that literary agents are like James Bond but for books (although that is a fun concept!).
Inspired by Melvin's post below, my question is: Was Marx a Communist?
Let me qualify that. If Karl Marx had seen 20th Century Communism in practice, would he have still been a Communist by the year 2000? I think no, he would have seen its practical failings and horrors and realized that they were too big to overcome. He would have accepted that he was wrong.
The subject reminds me of Nietzsche explaining that he chose Zarathustra as his hero because Zarathustra was the first to make a clear distinction between good and evil, and so would be the first to recognize his mistake.
By the same token, I think Marx would have been quick to recognize his mistake had he witnessed the Soviet Union.
He was a Marxist but not a Leninist and certainly not a Maoist. I doubt he would have become any of them either. But I also doubt he would have recanted his beliefs. I suspect he would have instead spent his time writing angry tracts about how his way was right and theirs was wrong. Probably from exile.
I am not an expert on this, but I think Marx basically believed that the predestined historical timeline goes like this: First capitalism creates a lot of wealth, and then as both the wealth and the social inequality reach extreme values, the system collapses (because the poor are unable to buy the products, but the rich are unable to make any more profit if no one buys the products of the factories they own), and then a revolution replaces this with socialism, which keeps the high production but distributes it equally, or something like that. So basically, capitalism must first reach its maximum; only then can it properly collapse into socialism. Also, he expected that this would happen in Germany, because I guess at that moment it was the most developed capitalist country.
Communists in Russia were quite stressed about this, because on one hand they believed in the prophecy, but on the other hand their own actions contradicted it (by making the revolution in Russia, and skipping capitalism). Until the last moment of the revolution they still expected that somehow Germany would get its own socialist revolution five minutes before Russia did, so the prophecy would still come true. When that didn't happen, that's probably when Lenin started developing Marxism-Leninism as an alternative to Marxism, and afterwards Stalin decided that "actually, we need to build some kind of 'capitalism, but micromanaged by the communist party' in Russia, because you really can't build a welfare state when starting from poverty".
So basically, it would be enough for Marx to say "I told you; the real socialism would only come when the time is ready, and it will come in Germany, not in Russia; your experiment was premature and that's why it failed". Thus he could still keep the belief in Marxism.
No, that's Keynes talking about the Great Depression. Low wages mean low consumption which leads to industries failing which leads to low wages. Which means Keynes argued for government intervention to break the vicious cycle.
Marx believed that what would happen is that collectives of workers would outproduce capitalist owned factories. This would cause the capitalist owners to pass laws or use violence to suppress these more productive worker cooperatives which would necessitate the use of violence to defend them. The resulting war would be ultimately won because the factories pumping out guns and armor and all that on the socialist side would be more productive. The period just before the war is late capitalism.
You're mixing up Marxism and liberal economics. Stalin (and all Marxists) oppose the welfare state, for example.
I do not think Marx would have likely recognized his mistake. Lenin didn't realize his mistake, Stalin never realized his, nor did Mao, and I don't see Marx as an exemplar of virtue who would be more likely to repent and change his mind than the average man. Given that many Marxists still exist despite being aware of all the historical horrors, I imagine Marx would adapt and continue to believe that history was bound to unfold itself into a glorious communist future.
I agree. Marx would have said they shamelessly used his ideas for promoting the welfare of the common people to create a new veneer for the powers-that-be to use to rule. He wouldn't have recognized the fundamental theoretical flaws that have now been demonstrated empirically.
At the time that Lenin died, the revolution he had fomented all his life had succeeded beyond his wildest dreams; the Bolsheviks were firmly in control of Russia and, thanks to the NEP, it looked like the crisis situation was stabilizing. At the time Stalin died, the Soviet Union was probably at the absolute zenith of its power or close to it: unquestionably the second-most powerful superpower, still widely predicted even among non-communists to close the gap with the US; it had just won an apocalyptic war against fascism and possessed the sort of technology a teenage Stalin at the seminary could probably not even have imagined. Why would one expect them to realize their mistakes at this point?
When Stalin died, the USSR was so soaked in blood that the next leader did the unthinkable and said what everyone knew: that Stalin was an evil man, and the Soviet Union had gone astray. When Lenin died he had the blood of millions of his countrymen on his hands, and had failed to usher in the age of freedom and utopia that he had spent his years writing about and claiming was soon to come. The mistake they should have realized was not that Marxism makes for bad economic policy, or bad military policy, but that Marxism in practice does not usher in the utopia Marx promised but instead brings mass murder, terror, and slavery. Lenin and Stalin were content to be the instruments through which that evil flowed in order to preserve their own power, and I feel that Marx would have been much the same. He does not strike me as a person who was particularly committed to what is true and what is good over being right.
How much did Marx believe that a centrally controlled economy would be superior? How hard would it have been for him to stop believing it if he did believe it?
If you'd asked Hitler "are you a fascist?" then would he have said "yep, totally", or would he have said "No, Fascism is the ideology of Mussolini's National Fascist Party, I'm a National Socialist, which is different in the following ways..."
Is the whole idea of "Fascism" outside the Italian context just an example of outgroup homogeneity bias?
The second one. There was an international fascist movement which Hitler considered himself separate from. Doctrinaire followers of fascism in Nazi Germany were persecuted if they didn't change their beliefs. There was a broader recognition they were both far right ideologies with some similarities but they did not consider themselves the same or even compatible.
There was, however, an international fascist movement and you would have found people self-identifying as fascists (or at least saying "no, we're X, a movement inspired by fascism") all over the world. Nazism never really had much success outside Germany (except when it was backed by the German army). But fascism was able to spread and compete as an ideology. There are still political parties and countries that are significantly influenced by fascism and fascist policies, most notably large parts of the Arab world but also things like Peronism in Argentina. It was also distinct from simple right-wing dictatorships.
My understanding is that this would have been dependent on *when* you would ask Hitler this. If you had asked him earlier on, he would have probably replied that fascism is, at least, an inspiration for National Socialism, though National Socialism is still its own, German thing. Later on, he started growing increasingly disappointed in Fascism and saw it as beholden to the traditional right.
>Is the whole idea of "Fascism" outside the Italian context just an example of outgroup homogeneity bias?
You have to form a map of *some sort*, the match to the territory notwithstanding. During the interwar era, and to some degree even afterwards, there was a great number of extreme nationalist movements with certain commonalities that even took power in some countries, and "fascism" is probably the most useful name for this tendency, since it was an open point of reference for many of them.
The Nazis were directly inspired by Italian fascists, but given Germany became the greater partner, and had a much broader base and greater degree of social control (Italian fascists having kinda haphazardly come to power, versus Nazis getting almost half the votes), they kinda played down that connection.
The Nazis did have a sense that they were doing the same sort of thing as Mussolini's Fascists, especially prior to 1933. A lot of it was tactics and aesthetics, like the Brown Shirts being heavily inspired by the Black Shirts, and the Beer Hall Putsch being the first stage in a plan that was modeled after the 1922 March on Rome that brought Mussolini to power.
How much Hitler personally thought the Nazis and the Italian Fascists were doing the same sort of thing depended on the relative fortunes of the respective parties and, once both were in power, how well the governments were getting along that week.
In terms of ideology and policy, the biggest difference was probably that the Italian Fascists saw nationality as the crucial fault line dividing humanity, while the Nazis saw race in that role. Mussolini was pretty dismissive of race as something significant up until the mid-to-late-30s when Italy was allied with Germany and shaking out to be the junior partner in that alliance. In particular, antisemitism was central to Nazi ideology, and the Nazis started writing antisemitic policy into Germany's laws very soon after taking power. Mussolini frequently flip-flopped on antisemitism in the 20s and 30s and Italian Fascists didn't start enacting antisemitic laws until 1938, 16 years after taking power.
> Mussolini frequently flip-flopped on antisemitism in the 20s and 30s and Italian Fascists didn't start enacting antisemitic laws until 1938, 16 years after taking power.
And, at least according to Arendt, the Italians were extremely resistant to the Holocaust, and the genocide only really got started in Italy once the US invaded, and the Fascist government collapsed, and Germany occupied the north.
On The Rest Is History podcast they had an interesting discussion about What Is Fascism? They both agreed that fascism must have an element of both futurism and fashion. Futurism in the sense of flashy new technology like jet aeroplanes and such and the notion the future will be shiny and bright. Fashion in the sense of well-choreographed parades with stylish uniforms. I think there may have also been some stuff about race and nationalism.
So by that definition, the Nazis would seem pretty fascist. Maybe the Proud Boys too on the fashion front, though perhaps not on the futurism front. The Rationalists definitely fail on the fashion front.
ADDED: Trump isn't enough of a futurist to be a fascist, but I think he could put on quite the parade if only they would let him. I suppose he does like his shiny jet aeroplane, but that makes him more of a retro-fascist than a neo-fascist.
I haven't listened to the podcast, but this strikes me as remarkably unserious. Fascism had very particular ideas about the relationship between the individual and the state, and was very consciously a reaction to socialism (out of which it grew), to Marxism, and, of course, to liberalism. If an analysis does not address those aspects, then it is pointless. Here are some of those ideas, from the horse's mouth: https://sjsu.edu/faculty/wooda/2B-HUM/Readings/The-Doctrine-of-Fascism.pdf
>In the Fascist conception of history, man is man only by virtue of the spiritual process to which he contributes as a member of the family, the social group, the nation, and in function of history to which all nations bring their contribution. Hence the great value of tradition in records, in language, in customs, in the rules of social life. Outside history man is a nonentity. Fascism is therefore opposed to all individualistic abstractions based on eighteenth century materialism; and it is opposed to all Jacobinistic utopias and innovations.
>Anti-individualistic, the Fascist conception of life stresses the importance of the State and accepts the individual only in so far as his interests coincide with those of the State, which stands for the conscience and the universal will of man as a historic entity. It is opposed to classical liberalism which arose as a reaction to absolutism and exhausted its historical function when the State became the expression of the conscience and will of the people. Liberalism denied the State in the name of the individual; Fascism reasserts the rights of the State as expressing the real essence of the individual. And if liberty is to be the attribute of living men and not of abstract dummies invented by individualistic liberalism, then Fascism stands for liberty, and for the only liberty worth having, the liberty of the State and of the individual within the State. The Fascist conception of the State is all embracing; outside of it no human or spiritual values can exist, much less have value. Thus understood, Fascism is totalitarian, and the Fascist State — a synthesis and a unit inclusive of all values — interprets, develops, and potentiates the whole life of a people.
>No individuals or groups (political parties, cultural associations, economic unions, social classes) outside the State. Fascism is therefore opposed to Socialism to which unity within the State (which amalgamates classes into a single economic and ethical reality) is unknown, and which sees in history nothing but the class struggle.
There's been a fair amount written about chronic traumatic encephalopathy (CTE)-- serious brain damage from repeated blows to the head, even impacts that don't cause obvious damage. I'm wondering about group effects. What happens on the group level for demographics where a lot of impacts are common, perhaps especially for young people?
What's the value, to anyone, in diagnosing people as being on the autistic spectrum? In the old days a high-functioning person of that nature would get described as eccentric, which has a charming ring to it, whereas autistic has all the charm of "retard". (Not many kids bully each other by saying: "Tom, you're eccentric! Stay away from me you fucking eccentric!")
And it all seems tautological. Someone has a variety of personality traits which can be grouped as Asperger's or on the autistic spectrum, but there's no treatment for it, and the effect of pathologizing these traits is to stigmatize them. What's the argument in favor of labeling these people with the modern equivalent of "retard" instead of not labeling them at all?
> In the old days a high-functioning person of that nature would get described as eccentric, which has a charming ring to it, whereas autistic has all the charm of "retard". (Not many kids bully each other by saying: "Tom, you're eccentric! Stay away from me you fucking eccentric!")
And what of those who were even a tad less high-functioning? They would get described as "freaks" or "weird" or at best "anti-social" and generally would be looked upon with suspicion. Surely it is better for all involved for people to understand that a given person is acting oddly because they can't help it, and that their odd behavior is not a symptom of malevolence, nor a predictor of dangerous behavior.
I don't even understand what "autistic" is supposed to mean. Can someone give me an actual definition? The definitions seem to be something like "has some combination of this long list of traits" and...I'm sorry, what do these traits have in common? What's the actual singular definition that unites them all? If there is none, why are they lumped together into the same thing? Is there a common cause? Or do they just correlate and no one's quite sure why?
It's all so vague and makes me think psychology is in a very messy and primitive state, kind of like medicine in the 19th century.
Several people have said I seem autistic. Maybe I am, and I'm also commenting here (seriously, what's the link between commenting on this blog and autism? I swear every second commenter identifies with it). But I have no clue what it actually means.
Those four things seem described as, roughly speaking:
* smart, repetitive, mildly awkward
* smart, awkward, mildly repetitive
* repetitive, awkward, highly verbal
* repetitive, awkward, nonverbal
I am not dismissing this classification, just complaining that even if true, it is difficult to remember. Do we have some easy-to-remember archetype for each group?
Maybe I should add that I did find "Autism qua Predictive Processing error" [0] uniquely descriptive of whatever is going on with me. So maybe the label isn't 100% noise. But to reiterate, the dominant theory at the time I was "diagnosed" was Simon Baron-Cohen's theory [1] of emotional clairvoyance. And the "services" I was offered by the industry, which were supposed to help me learn how to navigate social interactions, were not especially relevant to me.
I did kinda balk at the "intense world" theory. But also, my skin is sensitive enough to kill probably... 70% of mosquitos before they bite me. Which is pretty unusual, based on my observations of others. So idk.
I have the fOrMal dIAgnOsiS as high-functioning and/or asperger's. Three points:
A) The "test" i was given by a professional shrink consists of trying to have a conversation about normie topics and seeing how well i could engage. But normie smalltalk bores me to tears, so i "failed" the test. Though I'm pretty damn good at carrying a conversation if I want to (and I often do, out of social politeness). So if I'd known in advance that "emotional clairvoyance" was effectively what they were testing for, I'd have aced it easily.
B) It's a "CoNsTelLaTioN oF SymPtomS". AKA syndromic. AKA nobody has the slightest clue what causes it. Except that it's vaguely genetic. (But that didn't stop the shrinks from trying to give me the runaround. They are SO CONVINCED that "constellation of symptoms" is somehow a meaningful turn of phrase.)
C) Temple Grandin has opined that the essence of autism is a lack of abstraction. Which is very much the opposite of whatever I have. So not only are people confused about the causes, but they're confused about the symptoms as well. Which leaves... not a whole lot to work with. So yes, it's practically tautological.
From this, I've concluded that the label contains negative information. Because not only is it completely useless, but it also gives the illusion of having discovered something valuable and authoritative. Maybe the label is useful for others. But as far as I'm concerned, I might as well have been diagnosed as "HIV Aladeen" [0].
On one hand, I don't mind a label that's low-status/weird/etc. And I certainly have *something* that deserves a label. (I.e. no, I wouldn't go as far as carateca in describing myself as a shy normie.)
But on the other hand, I've met people with the SEVERE version of autism. Lumping "unusual interests" into the same spectrum as "can't tie shoes; speech impediment; literally a savant; etc" feels roughly equivalent to "Scooby-Doo fans are just slightly more well-adjusted Ted Bundys".
And whatever I have, certainly isn't a lack of emotional perception.
Failing the conversational test is part of it, though. If you can 'ace the test' once you know what the questions are in advance, then that is your coping strategy. The unfiltered you is the one who is bored to tears by 'normie smalltalk', so you only freely speak with enthusiasm and unprompted and from genuine interest on your special topics of interest.
That's high-functioning/Asperger's there, babes. I don't have a formal diagnosis but I have a strong suspicion I'm somewhere around Asperger's, and with a paternal family line of that as well (there are family stories going back generations of the 'odd' members which were not simply "shy normies" as per caracteca). I too am very "normie small talk bores me to tears". I don't know how your adolescence went, but mine was "why am I not interested in the things my peers are interested in, why am I the only one who cares about these odd weird topics?"
Psychological theories struggle to deal with the better functioning people. It's easier when you've got the kids who (to take a real life example from a previous job) have to wear a motorcycle helmet pretty much 24/7 as otherwise they will beat their heads against the wall so hard and so continuously they cause injury to themselves. Everyone looks at that and agrees "that's not normal".
But "does well at school, doesn't share peer interests, is socially awkward, has some quirks of behaviour"? That's a lot harder. And I was a lot weirder but repressed the hell out of the stranger behaviour/beliefs around other people because I knew "this is weird and possibly crazy". So looking from the outside, it's a lot easier to dismiss all that as "spectrum does not exist".
> The unfiltered you is the one who is bored to tears by 'normie smalltalk', so you only freely speak with enthusiasm and unprompted and from genuine interest on your special topics of interest.
It's a bit ironic that an aspie who tries to talk like normies, and even does a mediocre job but gets bored, is considered to be bad at social skills... while the normie who doesn't even try to talk like aspies (except mockingly), and boasts about how he hates math, is considered to be the empathic and social superman. Seriously?
That's a bit like living in a world where most native German speakers also fluently speak English, but only a rare native English speaker speaks even a little German... and concluding that the native English speakers are *better at languages*, because you only compare everyone on how perfect is their English.
I think you may have hit the nail on the head. What makes aspies appear antisocial is only that they happen to be *outnumbered* by people whose personalities and communication styles exist on a different spectrum. If the majority of the population had stereotypical aspie traits, then those who didn't would be the antisocial ones because they would have a harder time communicating with the majority of people.
To be less generous, one could say that the world is full of dumb people who are interested in dross and if you happen to be one of the rare smart ones interested in more complex, abstract things, well, you've got this syndrome that makes you uninterested in dross and we're going to come up with a label for it so the dumb people can use it as an epithet in the lunchroom.
No, come on. That's offensive. "Everyone not like me is just a dummy interested in dross" is both a terrible over-simplification and ignoring that it's more than "just being shy" but that people 'on the spectrum' (and yes, horrible term but we're stuck with it) do have genuine, real problems with ordinary social interaction and the tasks of adulthood.
It's very tempting to go "well, I don't need them, they're just too stupid to appreciate the finer things, unlike me" but that's sour grapes. I've had that temptation, I've done that looking down my nose, but at this hour of my life, I have to recognise: I am not able to do some ordinary things and that's *my* lack, not society's. It's not just a label, any more than "wheelchair user" is just a label the dumb people came up with so they could use it as an epithet in the lunchroom.
It's not the modern equivalent of "retard" in a lot of settings, and this forum is one of them. I don't know whether Scott considers himself to be on the autism spectrum, but it would not surprise me if he did. And many of the people here who describe themselves as somewhat autistic are very smart people with good jobs, usually in tech, who are introverted and eccentric, have always felt different from other people, and were seen as oddballs by their peers growing up. I don't know whether thinking of these people as being at the smart, high-functioning end of the autistic spectrum is accurate, but I do think the cluster of characteristics they have is a thing, a syndrome. I'm a psychologist, by the way.
Right. I know that and I respect this community tremendously. And one thing I respect about it is that it is so open to questions that might run against the grain of conventional wisdom. It strikes me that there are trade-offs in pathologizing a cluster of characteristics. Others in this thread have given good reasons for why the label and diagnosis have positive value. I can imagine that there is also a negative side to it, as there are for most things.
I agree with you there. I do think some people have latched on to "I'm on the spectrum" as a way of handling the dissonance of "I'm the weirdo in my peer group".
But that is not to say that "Therefore the autism spectrum/Asperger's Syndrome" does not exist. As I said, our service deals with children with additional needs, which includes autism spectrum, and it's not just "needs some coaxing to interact with peers". It's having meltdowns out of thin air (to outside view), repetitive behaviours, sensory processing issues, hyperfocus on special interests, a whole raft of things all going together.
Well, syndrome literally means "run together," which captures the way I think of it. I have never seen the term used in a context where the speaker was not referring to something that they saw as pathology, though. I'm fine with calling it a personality type. On the other hand, many people who have grown up as that personality type do not experience themselves as just one of many perfectly fine personality types. They feel as though something is *wrong,* something is getting in the way of them doing all kinds of things. And I do think the list of things they have a lot of trouble doing is longer than the lists that go with other styles of living and thinking that we regard as personality types.
The problem is that to get any kind of help, it needs to be medicalized. You need the Official Diagnosis. Otherwise you get the "just deal with it/pull yourself up by your bootstraps" kind of reaction from everyone which does not, in fact, help.
You feel like you're drowning and instead of being thrown a lifebuoy, you get "just learn to swim! if you could swim, you'd be fine! everyone else can swim!"
Well, what I think is a reasonable approach if they want help with some of the things they have trouble with is just to work on the thing itself. What's the bottleneck? One bottleneck I often see (I'm a psychologist) is that people with this profile want an unusual amount of certainty about what's going to happen next, and that gets in their way in situations where certainty is not possible.

For instance, somebody I see got a job offer, but then delayed quite a long time before accepting it, because there were various things he did not know about what the job would be like. He kept trying to figure out what those things would be like by obsessively analyzing all the data he had, but that data simply did not hold clear answers to all of his questions. It helped him to see this pattern, to give some thought to which things in life fall into the highly predictable category and which do not, and to realize that new jobs were in the latter category. And we talked about steps he could take if the new job had various bad characteristics -- ways to change the job, ways to exit if it was unfixable. He's quite smart, and would have been perfectly able to carry out this analysis on his own. But he was so anxious about the job, and so stuck in the impossible task of figuring out exactly what the job would be like, that it did not occur to him to ask himself the questions I asked him.

This approach is neutral as regards whether he has a mental illness or a wiring problem or a syndrome. And that seems like a reasonable approach to me, given that if we could somehow know that he had a certain wiring problem, or that he perfectly met the criteria for high-functioning autism, that info would have no utility whatever, since we have no treatment that is specific to wiring problems or autism.
I genuinely think the problem is the smartness. You're dumb (no disparagement to my dumb fellow-humans) and can't deal with things? Okay, people make allowances for that because you're dumb. Maybe you're so dumb you need special help, okay fine.
BUT. You're academically capable? This needn't even mean "straight A student", it's "does okay at school, is not falling behind, is not failing tests, is not causing trouble in class".
Then it's "well what's wrong with you? you just need to try harder! you don't have a problem, you're just being difficult!" The idea being that if you're not stupid, then your difficulties with everyday functioning that ordinary people can do just fine are down to lack of will power, or grit, or tenacity, or just thinking yourself too good for the rest of us.
Looking back on an incident which puzzled me at the time, when I was a kid my mother brought me to the doctor. This wasn't our usual family doctor, and I wasn't sick. Doctor did a physical exam, said I was fine.
I think she was trying to find out if there was something 'wrong' with me, because she had noticed I wasn't developing 'normally' in some respects (one of them being that she was worried I might have hearing problems, as I often didn't respond when called). She didn't have the vocabulary or concepts of things like autism or developmental delays, so she was reliant on the doctor picking something up.
But of course, since she never raised the question of "does my child have developmental problems?", the doctor just looked for physical problems: no, her ears are fine, her health is fine, she's okay.
So that was it. Nothing went further. I couldn't understand at the time why I was going to the doctor, because I wasn't sick. But now I understand what she was trying to grope towards, and I think she was right. But it's decades too late now to do anything about it, and the shape of my life has been formed.
Would a diagnosis have helped me? I have no idea; I have no way of knowing what would have happened. But it sure as hell would have helped explain me to myself.
It can make therapy, life skills coaching, and self-improvement more productive. There are common issues that people on the spectrum often struggle with that aren't often major problems for neurotypical people or people with non-Autism-Spectrum issues. Having an autism diagnosis focuses the hypothesis space for "areas that might need work" more onto those particular issues and also suggests that where those issues are indeed problems, there may be a standard playbook for autistic people to work on those particular issues.
This hypothesis-focusing effect is particularly valuable when the issue is superficially similar to a more-common-in-the-general-population issue with different root causes and treatments. For example, an autism meltdown can look a lot like either a panic attack (an anxiety disorder symptom) or an emotional flashback (a PTSD symptom). There are significant differences between their presentations, but those differences are easy to miss unless you're looking specifically for them.
I think I have some wrong notion that "kids these days" are just getting scrutinized, medicalized and labeled to death by the time they are in First Grade, whether they have serious problems or not. Perhaps I'm extrapolating too much from the helicopter parenting phenomenon to imagine that kids are no longer allowed the space to be a little bit weird without getting a full diagnosis for their little bit of weirdness.
That is an entire separate problem, Hank. Again, like yourself I'm only going off online comments, but there is a definite sense in which the push to get better results means "you need good grades to get into a good school to go to a good university to get a good job". So if accommodations such as more time or other assistance are available for kids with needs, then the pushy, anxious, 'tiger mom' types of parents who can afford it will get their kid diagnosed as ADHD (for the Ritalin/Adderall to improve focus) and any other disorder which means "Little Johnny needs extra time on the tests and other help", so little Johnny can keep on grinding out those high grades and eventually get a high-status, high-paying job.
So there are a few problems:
(1) Medicalisation of everything. Little Johnny can't sit still in class and pay attention? Now that's ADHD and he needs to be medicated.
(2) Gaming the system, as above, which extends into university (related to that, I think, is the attitude that cheating is fine, everyone does it, only fools actually do the reading and the work and write their own essays and study for exams, take the easy path to guaranteed grades)
(3) Self-diagnosis by the terminally online. Some people really do have problems, but instead of facing up to "I am a pain in the backside and need to work on that", they prefer to grab for "In fact, I am a Type Z multipolar disorder person with Complex Childhood PTSD due to narcissistic parents and anxiety disorder and see attached list of my disabilities, so that is why I should be permitted to be a massive pain in the backside and anyone who objects is abusing a survivor of childhood abuse and neglect by toxic family and environment".
Sure, but Asperger's or whatever syndrome are social constructs. The question is whether they are useful social constructs. The constructs can come with information, but they can also come with baggage in the form of negative stereotypes.
I suppose I've observed the negative stereotyping around autism/Asperger's more than I've observed the gains people have made from the label. That doesn't mean what I've observed has much bearing on the reality.
"let me be explicit here: a lot of rationalists are so psychologically abnormal they are incapable of being conventionally racist. that also means they are incapable of being antiracist.
to be against something you have to be able to conceive of it. they aren't wired that way"
Someone responds: "Can you clarify psychologically abnormal here a bit?" Razib then tweets a link to "Asperger Syndrome" on Wikipedia. Someone responds to that with "ha ha".
I don't believe Razib was trying to make any sort of mean-spirited joke, he was simply being direct about what he meant. But the response "ha ha" demonstrates that it's a subject of mockery and derision, which was inescapable once the terms "autistic" and "Asperger's" became popularized to mean "socially retarded".
Re: the distinction somebody made about being "eccentric" and that not being a term of abuse, one cynical description I have seen is that "if you are rich, then behaving like this is described as "eccentric" and it is tolerated; if you are poor, behaving like this is described as "criminal" or "crazy" and you get in trouble".
You do have a very good point about the negative stereotypes, but that's unfortunately just human nature: people will point at, laugh at, and mock the 'retards' or 'morons' or 'special needs' or 'Aspies', no matter what new term is invented to replace the old one which has now become a term of reproach.
Someone who meets the criteria for what is now diagnosed as Thing B is going to behave in a particular way, have thought processes along a certain path, be lacking in certain areas. And that behaviour and thinking and lack is going to be noticed, whether or not Johnny has a formal diagnosis. If it's mild enough that he can generally fake it, then he'll be accepted (albeit with some opinions that Johnny is 'odd' or 'eccentric', even if deemed harmless). If it's severe enough, Johnny won't be accepted and will get the brunt of social disapproval and mockery.
Our service isn't simply "so Johnny's parents feel he might be falling behind a little", though parental input is very important. Before Johnny can even get in the door, he needs the full assessment:
"Our service is supported by the HSE multi-disciplinary team made up of the following:
Psychologists
Physiotherapists
Speech and Language Therapists
Occupational Therapists
Clinical Nurse Specialist
Social Worker
We have a key worker system in operation where a child is assigned their own key worker. All individual therapies are implemented on a daily basis as directed by the CDNT."
There are therapies and curriculum plans for children based on their particular requirements, there are Learning Plans and Treatment Plans and goals to be met and progress to be documented, all alongside the routine activities of pre-school:
"Our Facilities include:
• Classrooms
• Outdoor play area
• Playroom
• Indoor swing room
• Speech and Language room
• A Floor Time room
• Meetings rooms
• Multi-Sensory Room
• Body Awareness room
• Nurse's Office
• Sensory garden
The rooms are designed in such a way as to meet the developing needs of each individual child. The children are guided through a range of educational and play activities at their own pace. The children’s key workers implement their specific therapies and individual programmes. Our team creates a positive and secure environment where children feel confident in exploring their surroundings.
Keyworker System
Staff work with the children on a one-to-one basis. We have a keyworker system in place, we will inform when the keyworkers change. The keyworker will carry out all relevant therapies given to them by the Children’s Disability Network Team. These therapies are also carried out at home by the Parent/Guardian.
The key worker has many responsibilities. Their role involves developing a relationship with the child and their family. Each individual child will have different needs; therefore, the key worker must be adaptable and attentive to the child. This will help ensure the child’s needs are met.
The role of a key worker includes:
• Meet and greet the child and their family upon arrival.
• Familiarise yourself with the child’s care plan, folders and files to have as much information as possible about the child to be able to meet their needs.
• Be aware of the child’s day to day needs, read log books and get a verbal report of how the child is on arrival.
• Document all information and observations about the child in their task records and key worker notes.
• Monitor and record progress and be able to feedback information to therapists, classroom nurse/Coordinator etc.
• Follow the child’s lead.
• Encourage open communication.
• Watch and observe the child.
• Become familiar with the child and begin to identify the child’s interests and strengths.
• Engage with the parents and guardians to build positive relationships and to exchange information about the child.
• To provide parents with opportunities to contribute and share their knowledge and insights.
• Pass on information to parents/guardians when they are collecting their children.
• Fill out log books every day.
• Be aware of parents/guardian’s sensitivity around their child attending an early years specialist service, what this means, supporting them, providing resources and being a listening ear.
• Liaise with the HSE Children’s Disability Network Team re individual programmes. Seek advice from same.
• Provide a handover to the new keyworker when they are assigned"
All this is for children in the age range 2-5 years and I can tell you that early intervention is crucial. We've had kids go from being non-verbal and unresponsive to being able to converse and communicate, and to walk unassisted when the prognosis was that this would never be possible.
From the outside, the common online perception of autism etc. is "buncha nerds that are too geeky and ugly to get a girl and only like stupid comic books and SF movies and TV shows". The more flattering-to-the-self-image version is "really smart nerds that like STEM and will get high-paying jobs because we're the ones working on the world of the future for all you normies, so what if we have special interests?".
That's a long way from the reality. Not everyone on the spectrum is a STEM savant going to go into tech.
I dunno, this tweet seems to me to mostly have that "sounds good on Twitter" thing going for it. Think about it. How would that work -- being incapable of being racist? What is the thing some can't do that prevents them from being racist? And how does that also prevent them from being anti-racist? Walk me through it.
> How would that work -- being incapable of being racist?
You lack the instinct that makes you automatically hate everyone who is obviously different from you. (Different in a direction that makes it socially acceptable to hate them. Because normies are always sensitive to what is or isn't socially allowed.)
Normies gonna enforce norms. Autists are oblivious about them.
When a normie sees a person of a different color, their first instinct is to check whether it is socially approved to hate people of a different color. If yes, the hate arises instinctively. This is what people traditionally called "racism", the combination of the social consensus that it is okay to hate people based on their race, and the corresponding instinctive reaction of most normies.
And because it is not realistic to make normies stop hating different people, the solution is basically to redirect their instinctive hate somewhere else. You teach them that the social consensus is that you should not hate people with a different color, but instead... dunno, people with a different opinion. And then, normies start picking their targets based on the new criteria, and racism is no more... in theory.
In practice, it is difficult to convince people about a new social norm. Normies are very good at figuring out the true social norms, that's their #1 obsession, so they quickly notice how people behave differently at school and outside school, etc. So the actual behavior will be more like... hating people with a different color only in the socially approved situations, in socially approved ways. (For example, a white racist might learn that high-status black people have to be respected, but it is okay to hate all *other* black people as long as you never admit that you hate them *because* they are black; if someone asks you, you always have to point at some other trait of the specific black person, such as lack of college education, or not being sufficiently woke.)
Meanwhile, you ask the autist why he doesn't hate people with different color, and he just gives you the usual stupid look and asks "why should I?". Aaaah, so frustrating, this lack of social skills!
So, if you define "anti-racist" as not being a racist, then most autists would qualify. -- Unless they found an online article saying that people of certain race are inferior, in which case they would happily report to you their findings, and would be ready to debate it academically. They would probably be happy to repeat the same words even in presence of the people of given race. They would be just as willing to debate this topic academically with them! They probably couldn't stop talking about this topic in their presence! So... yeah, I guess that's "racism", too. But, you know, it would probably remain at the verbal and purely theoretical level, which can be very obnoxious, but is probably preferable to e.g. bullying or lynching, which is more of a normie behavior.
The normie way of "anti-racism" is finding someone who is a more socially acceptable target of hate than people with different color. Yeah, autists don't understand this either. Also, among the woke people it is socially acceptable to express hate against white people; by white and black people alike. Autists also have a problem understanding why this is supposed to be a good thing. Autists are more of non-racists rather than anti-racists. To be a proper anti-racist, you need to be a proper racist in the first place, and then redirect that instinct.
Does this make sense? There is a certain poetical license here, but probably less than most people would be comfortable believing.
So "Autistic" is described in the DSM as a disability, because it makes it harder for children to attain educational performance, become socialized into the peer group, function productively as members of society, and be happy. None of that implies that it is *impossible* for people on the spectrum to achieve these things, merely that they have additional barriers and challenges to overcome as a result of their neurological condition.
One reason to make this classification is to help children who are experiencing these difficulties understand what is actually happening, so that they do not blame themselves and develop a harmful self-image. Knowledge can be power, and just being able to assign an objective cause to their disability can help them cope emotionally.
Another reason to diagnose autism is to make a range of support services available to such children. With appropriate and research-tested interventions, they can find ways to adjust and even take advantage of the unique aspects of their condition and achieve greater success in life. This is beneficial to both the child and society, because the child is more easily able to make informed life choices that build upon their strengths as individuals, and they can become more productive members of the community, by obtaining gainful employment, avoiding conflict, and forming relationships with other members of society, both on the spectrum and off it.
Finally, the diagnosis helps inform research into this condition, which may one day result in, if not a "cure" (because calling it that stigmatizes the condition), then at least new options that autistic people can take advantage of if they choose. There is some evidence, for example, that deep brain stimulation has a positive effect on some of the symptoms of autism.
BTW, autism is a spectrum of behaviors and thinking patterns in exactly the same way that other disabilities are. It's simply a way of saying that the specific services required to help any given autistic child are going to be somewhat different. A cafeteria approach is a better model to follow than "one intervention for all."
It's primarily for status-seeking females. In the new religion, victimhood is glorified, so if I'm just a "normal" white girl then I'm one of the bad ones. I don't want to be one of the bad ones, I want to be one of the elite "oppressed", so I become an autistic they/them.
Yes, it's saying that we're weird, but it's weird *in a particular way*. And that helps people know what to expect from others, and from themselves. Everyone can stop wondering "what on earth is wrong with this person, why can't they just act normally" and ascribing non-existent motivations, and can instead go "ah, ok, that's what's wrong, so which areas are they bad in".
I don't think this explains it, because the one thing all of these kinds of people are constantly insisting on is that autism is a "spectrum" that "presents differently" in everyone, especially females, who they claim present especially differently than males. Curiously, this is used as all the more reason that females are oppressed, because they aren't being "given" their "rightful" autism diagnoses! (as if they wouldn't also be saying females are oppressed if they were the ones being primarily diagnosed with autism - "you're pathologizing normal female behavior!")
You may ask - why would these females *want* to be diagnosed with this syndrome - and there you find the answer to the original question. In many circles, status is conferred upon zim who can claim the most oppression.
Furthermore, this framing defeats the entire concept of a syndrome, which is "a group of symptoms which consistently occur together." If men and boys consistently have symptoms that appear together, but females and other new/fake autistics have totally different symptoms and present differently (or don't even "present" visibly at all), then they don't have the same syndrome *by definition*.
There may be a social gulf between the girls branding themselves autistic on social media and the men (and women) who simply want to be understood better by their real-life peers.
For sure, there is, but we don't have a term we can use to describe this anymore, because autism has become corrupted as described. And saying "real autism" makes you worse than Hitler.
Listen, I've attended two high-social-justice-focus colleges and I know where you're coming from with this, but flat out you are overestimating both the gendered (sexed?) aspect and how mad people will be. I have personally described my own situation as "technically diagnosed autistic but I'm not sure how real or valuable that is, bc I'm pretty high functioning compared to, y'know, people who need permanent care" and the Queer Trans Neurodiverse Left Wing Undergrads all went "yeah that makes sense". A lot of self-dx types are female but a lot aren't. Some will be upset if you say self-dx is unreliable but most will understand where you're coming from. If you show up and say "you Status-Seeking Females are FAKING for CLOUT" then yes people will be mad at you. People online will be mad at you for just about anything, and I wouldn't trust it to be representative of the wider world of female trans-id/autistic-id people.
I can see how that helps in situations where other people are generous and thoughtful, but there's at least as many people in every level of society who are ungenerous and use a known weakness in others as something to exploit. The label is also a liability in dating markets. Few women who aren't themselves on the spectrum would be interested in dating a man who is described as being "on the spectrum". Compare this to the old days when a man who was described as "eccentric" might sound enchantingly mysterious. And an "eccentric millionaire" or "eccentric genius" got triple bonus points.
I don't think people, especially the "ungenerous" people you mention, ever described anyone as "eccentric." It's a term usually reserved for biographers and reporters trying to avoid the actual terms schoolyard bullies might use...weirdo, geek, nerd, etc. (Before some of those terms became cooler). At least "autistic" has the air of a diagnosed disability to it, so kids know they're not supposed to tease about it, just like they're not supposed to tease a blind or deaf kid.
I agree that, at very high-functioning levels, the question of whether to label or not can be challenging. But most of the people I know on the spectrum were relieved to get a label that said, effectively, "this is why you feel different, you're not alone, here are some common ways people like you can alleviate these problems."
>At least "autistic" has the air of a diagnosed disability to it
From Wikipedia:
>Retard was previously used as a medical term. The verb "to retard" means 'to delay or hold back', and so "retard" became known as a medical term in the late 19th and early 20th centuries to describe children with intellectual disabilities, or retarded mental development.
>At least "autistic" has the air of a diagnosed disability to it, so kids know they're not supposed to tease about it, just like they're not supposed to tease a blind or deaf kid.
Yet according to the teenagers I know, "autistic" is the main insult middle and high-school kids lob at each other these days. Cruel kids often do exactly what they are told not to do and telling them specifically not to pick on autistic kids clues them in on the fact that the word "autistic" can be wielded as a weapon. No, kids never used the word "eccentric" because it was a term that was applied mostly to successful adults who were unusual, but people did use the term in conversation last century. You were much more likely to hear women use it (to describe men) than men.
>But most of the people I know on the spectrum were relieved to get a label that said, effectively, "this is why you feel different, you're not alone, here are some common ways people like you can alleviate these problems."
That is a strong argument for using the label.
I wonder how many people hate getting the label. Maybe it's a generational thing.
Someone who is a high-functioning autist/asperger/whatever, and sufficiently smart, can learn the social skills if they spend the time and effort. But the result will still be quite different from normies.
For example, they could become someone who can be a center of attention at a large party... and when you meet them after the party, they will say "oh, I hated every second of it, I only did it because my normie friends have invited me, but now I want to spend the rest of the day alone, probably taking a hike in mountains. Actually, you could come with me if you are interested in hearing about the latest quantum physics interpretation I found in a certain scientific paper, but please no more social small talk or I will start screaming."
Autism also encompasses repetitive patterns of behavior - stuff like stimming, restricted interests, inflexibility with routine, and hyperreactivity to certain stimuli. You literally cannot get an autism diagnosis without that.
You are telescoping two things: the idea that Asperger's and autism are the same thing, which emanated from medical professionals; and a socially constructed idea of "being on the spectrum".
"This will be better for the formerly autistic too as they won't have this fictional illness as a crutch, preventing them from taking action to improve their own lives."
It's not fictional, though; that's the very thing we're debating. Again, taking the service where I work as an example: we cater for children from 2-5 years.
Part of this is Transitions. This is a big deal. The act of going from one room into a different room can trigger a meltdown. So you have to teach the child that this is a normal activity, it happens everywhere, this is how you deal with it. That includes telling the child beforehand that you are going to leave room A, then when going into room B telling them that now you are going into room B; encouraging them to say things out loud like "door" to get across the idea of 'we're opening the door, we're going through the door; seeing this door means we will be opening this door and going through it into a different room'. Getting them over this hump means they will no longer be having meltdowns, at home or in public, about "now you have to move from this room to your bedroom". It helps them learn and cope. 'Normal' children don't need this level of intervention.
These are children "on the spectrum". It's not "slightly awkward nerd", because by the time you get to that age, the damage may have already been done. I do accept your point about gentrification, but that does not mean that there isn't a gradient from "real but mild version" to "real and totally disabling version". The kids can be "basically fine" up until you hit the one thing that sets them off. That's what the onlookers don't see; you can't tell, from the "John is perfectly fine, just a bit shy and awkward" adult who has learned how to function in everyday life, that if this particular trigger sets him off, John is very much not perfectly fine.
As with many things in life, the extremes are very different and easily distinguishable, but that does not mean you can have a simple binary classification, because there is actually a continuum without an obvious place to draw a line.
Worse, part of the difference between someone who gets a diagnosis and someone who does not is whether their symptoms affect their daily life badly enough for them to seek help. This is a combination of what their symptoms are, how capable they are of managing them, and what their daily life involves. You cannot, therefore, predict whether someone will want or get a diagnosis by considering the symptoms alone. The continuum is multidimensional and a partition of it that takes only one dimension into account will not lead to sensible results.
Meanwhile, if help/support is on offer, grifters will try to access it.
As with many other problems that have similar properties, we are left with a choice: we can have more gatekeeping and risk failing to help some people who need help, or we can have less gatekeeping and give some help to grifters who do not need it.
My preference, as always, is for solutions that accept some level of grift in order to help most of the people that need help over solutions that aim to reduce the level of fraud to zero; losing some money to help more people is IMO better than losing some people to save more money.
"People "on the spectrum" need to recognize that they're just shy normies"
While I agree that the TikTokkers and Instagrammers and the rest of the bunch grabbing onto "I'm ackshully neurodivergent, you bigot!" as an excuse for why they should be allowed to behave like assholes and get away with it are an example of using mental problems as a trend, being "on the spectrum" is not just "shy normie".
I work in a service where children with additional needs are educated, and I now have a next-door neighbour with two kids that probably have additional needs as well, and I can tell you it's more than a simple matter of "Little Johnny is just shy, he just needs to mix with kids his own age and come out of his shell". That's not the most severe cases; it really is "on the spectrum and needs early intervention to develop coping strategies and be integrated into society".
OK but if it's a spectrum then by definition it's not binary. Only the label is binary.
Or is that even correct? Does the spectrum peter out at some discrete point?
One reason I'm so curious about this is that, as an old, weird, eccentric guy, I can imagine that I'm "on the spectrum" and would have been labeled that when younger but feel thankful that I wasn't because I would have hated to have been labeled like that. But maybe that's a generational thing. Young people seem to love identity labels whereas my generation mostly loathed them when younger.
But if I did imagine myself as "on the spectrum" --- and I ponder it --- it causes me to reinterpret much of my past differently. I see things through a different lens if I adopt that label. That's, again, because however spectrumy autism may be, the label is binary. For instance, I feel as if I have less free will if I imagine my life with the label than without it. (I don't believe in free will theoretically, but it *feels* like I have it. With the label, it *feels less like I have it*. I prefer to feel like I have free will, however delusionary that may be.)
I feel like it shouldn't matter so much whether I might have such a label or not. Anyway, I liked it better before the label existed, when the world was less medicalized and it was easier to feel normal even if you were weird.
"Slightly awkward nerd" is not the same thing as "on the spectrum". I think you're going off the pop culture notion of what the autism spectrum is, and while I think certainly some awkward people have grabbed onto "oh I'm not a weirdo, I'm autistic!" as being something to help salve their sense of self-worth, unless you have a diagnosis, that's not it.
There are certainly things like social anxiety disorder etc. but those are not autism. The curse of self-diagnosis is what we're all complaining about.
I do think that folding in Asperger's was not a good idea as it may be a somewhat different disorder, but autism does exist on a range from mild to extremely severe.
This is not people who can see just fine pretending to be blind, or even near-sighted people pretending to be blind, this is people who are legally blind even if they have some vision.
Bumping the Emperor of All Maladies review, which was one of the reviews which inspired me to pick up the book itself (the others were Family That Couldn't Sleep and Piranesi.) The EoAM review was poetically written to the point where I thought much of it must have been excerpts, but it wasn't.
I was very surprised the Emperor of All Maladies review was not a finalist! I read through at least half of those reviews and it was the best by far. I suspected that it might be Scott entering his own contest, as he did last year. Really a shame it's not on the final list.
If someone reading this hasn't read that review, go read it. It's really good.
Thank you! (This was mine.) I may have gotten dinged for writing the thing I wanted to write, which had only a vague resemblance to a “book review” (I thought this was the year for it!)
There are just a few lines from the book itself, but I also drew heavily on “The Waste Land” by TS Eliot which has themes of growth and death that felt very appropriate for a book about cancer.
Well I have to say that your review was gripping, moved me emotionally, and I learned a lot! You should definitely enter next year, I can only attribute you not being a finalist to bad luck.
Another thread with someone going "race differences may be true but we should pretend they're not" and other people saying "no, truth and intellectual curiosity matter" and then the first says "and how would you explain this to a kind black friend?" and so on (these are paraphrases).
God I hate these arguments because they're so dishonest. Huge motte-and-baileys on both sides. No one should be bringing up race at all, *ever*, in any important public or political context. If there's a technical finding that some IQ-test variable correlates slightly with race, that can be noted on page 67 of your report, alongside the correlations with height and hair colour and astrological sign. But why the hell would you signal-boost that particular finding unless you're racist?
On the other hand, you permanently lose the right to object to "scientific racism" forever the moment you say some shit like "it looks like blacks are underrepresented in your profession...". You shouldn't even notice that, you shouldn't be seeing race! The moment you do, YOU have defected against the norms of respectability and rationality and individual freedom, and you deserve whatever offensive statements and findings are thrown at you. Don't want to deal with racism? Don't...be...fucking...racist.
The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race. Other than that, you are treated entirely, 100% as an individual. No discrimination, no affirmative action, just your own efforts and talents: that's the be all and end all of what you're entitled to. That's it. The end.
I don't believe there are any race differences because I don't believe race is a real thing. And shame on everybody who thinks it is.
Also this:
"Hanson once wrote that a woman cheating on a man is as bad as (or worse than) a man raping a woman provided he does it in a "gentle, silent" way. No idea if he still endorses that opinion but it's a majorly sus thing to say"
The fuck? You are either saying cheating is fine actually, or you are unable to comprehend what "cheating is as bad as rape" means. It means: you know how horrible rape is? Cheating is equally horrible. It has a completely different meaning to "rape is no worse than cheating" (=you know how cheating is often seen as kind of trivial? So should rape be.) even though their naive translation into symbolic logic is the same. Misunderstanding this is a majorly autistic thing to say.
(Unless of course this person means that Hanson says a woman cheating is horrible but a man cheating isn't. That *would* be misogynistic (or more accurately sexist), but I have no idea why they didn't *say that* if that's what they meant.)
> The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race.
Does it have to be some*one* else? What if we notice that society is systemically discriminating on account of race, even when no single individual is making racially biased decisions?
> No one should be bringing up race at all, *ever*, in any important public or political context.
If we were living in a perfectly meritocratic society, where nobody ever noticed that there are more Ashkenazi than Black Nobel Prize laureates, that might be a good idea. (If we ever gain the power to tune intelligence with CRISPR, it would be sufficient to look at the genomes of particular families which have been high-IQ for multiple generations; no need to bring race into it.)
However, the current US approach is not color blindness. Instead, companies are supposed to meticulously track the racial categories of their employees, and any disparate outcomes are treated either as proof of evilness, or at least as bugs to be fixed by putting your hand on the scales. (See TracingWoodgrains on that FAA scandal [0].)
While some HBD advocates were likely just people who tried to mold their biases into a self-consistent belief system, I think that some of them only bring it up to argue against affirmative action.
If the US Olympic marathon runners are chosen by merit, then there is little reason for anyone to point out that East Africans might have an advantage in that sport (with much still depending on in-ethnicity fluctuations). If someone insists that the US should proportionally select their runner team from the "races" of their population, then arguing against that idea might involve pointing out these genetic differences. I fault the ones who tried to establish policy tied to "race", because they brought it up first.
> (Unless of course this person means that Hanson says a woman cheating is horrible but a man cheating isn't.[...])
I would guess that Hanson might be concerned with reproductively relevant cheating where someone ends up raising a kid which is not genetically related to them. Under that model, a partner of either sex having oral sex with a third party would not be a big deal, nor would be giving away genetic material without any requirement of parental investment (e.g. secretly donating sperm/eggs). For IVF, the situation would be entirely symmetric: if either partner decides that their child would have more of an edge if they swapped their partners genetic material for that of another party without telling them about it, that would be evil. For in-vivo conception, there is a big difference between the sexes. If a man knocks up his neighbor, takes the kid to his wife and tells her "look what a beautiful baby you and I made", his wife is unlikely to buy that story.
Of course, this does not mean that male infidelity resulting in conception is not a big deal -- it is still a defection which is liable to come with serious financial downsides (child support payments) to the couple (and any cheating is a breach of trust which might risk the relationship, and there is an HIV risk and so on).
>God I hate these arguments because they're so dishonest. Huge motte-and-baileys on both sides.
I will agree with this.
>No one should be bringing up race at all, *ever*, in any important public or political context.
Why not? It seems like, as much as race isn't really that helpful a construct and we would all be better off if it were banished, it is here to stay, and LARGE portions of the left are deeply, deeply invested in it and in making much of our political discourse about it.
And if they are going to make it the center of large parts of political discussion, well then you have to talk about it.
>If there's a technical finding that some IQ-test variable correlates slightly with race, that can be noted on page 67 of your report, alongside the correlations with height and hair colour and astrological sign. But why the hell would you signal-boost that particular finding unless you're racist?
Because the left has spent the past couple decades dismantling many systems of merit and evaluation and making ill-advised social policy changes, all on the back of the idea that disparities in outcome *can only* be explained by "systemic or overt racism".
And this isn't just some tribal political warfare culture war talking point. It has real impacts. My town has seen noticeably worse policing and adherence to the law, largely on the back of complaints (and then reactions) about law enforcement's supposed "racism". Much of the evidence of which is statistically illiterate.
At my child's school, his gifted and talented program was eliminated in an attempt to reduce disparities.
For basically 10 years or more, this large urban school district's more or less overt policy has been "ignore the white and East Asian kids, they will be fine; all efforts must be focused on reducing racial disparities". And what has happened is that the people who can flee the system do, the disparities don't improve no matter how hard they step on the scales, and racial animus increases.
At one point they basically dismantled the disciplinary system because blacks were being suspended more than whites, and violence shot up in schools; one middle school even got so bad they temporarily shut it down and "rebooted" it, all because no one was willing to consider the possibility that black kids and white kids might be getting suspended at different rates due to different, you know, BEHAVIORS. Literally the Obama education department was sending them nasty legal threats about their "racism" based purely on "disparities"-style thinking. It led to massive changes for the worse in the district, and probably that alone depressed attendance 10%.
So don't fucking tell me to ignore race or we can't investigate/measure race. The left wants to measure the shit out of all elements of it (as long as they can count on academia to keep suppressing any findings they might not like).
>On the other hand, you permanently lose the right to object to "scientific racism" forever the moment you say some shit like "it looks like blacks are underrepresented in your profession...".
Why care? Everything is racism now anyway, so it just doesn't bite like it used to.
>You shouldn't even notice that, you shouldn't be seeing race!
This just seems hopelessly naive; this is literally the main thing most large orgs are focused on when it comes to politics/morality these days: are our quotas being hit? Literally, for my kids' youth sports programs they ask about it, and tut-tut that the numbers are not directly proportional... (um, different "communities" do different things, dummy). I am sure no one is showing up at soccer practice whining that there are disproportionate numbers of Somalis and Latinos.
>The moment you do, YOU have defected against the norms of respectability and rationality and individual freedom, and you deserve whatever offensive statements and findings are thrown at you. Don't want to deal with racism? Don't...be...fucking...racist.
Meh whatever, once again even a 90s colorblind leftist utopia version of post racialism is now seen as deeply racist. I just don't care. Also I know I am not "racist" in that sense. I grew up in a housing project, some of my childhood friends were from other races. My first real friend was a child of immigrants from Nigeria. I know what is in my heart and I personally don't give a fuck what color people are.
But the world does, and the world discriminates against me and my kids due to their skin color constantly, because there is this hysteria about the fact that other races (and in particular blacks) don't do as well on various metrics.
>The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race.
I just disagree that this is some moral imperative.
>I don't believe there are any race differences because I don't believe race is a real thing.
Well, it is and it isn't. If you had to group people genetically, race is a pretty poor way to group the world; getting a bit more fine-grained would be much more scientific. That said, the idea that it is literally meaningless is silly. It has a lot of historical and political baggage/meaning, and there are a lot of useful things you can say about the groupings.
No one would have trouble with the statement "Kenyans make good long-distance runners", or "Yugoslavs make good basketball players".
So if it turns out that, say, Punjabis are particularly great mathematicians, that seems worth measuring/investigating.
>"Hanson once wrote that a woman cheating on a man is as bad as (or worse than) a man raping a woman provided he does it in a "gentle, silent" way.
I don't feel the need to defend crazy statements by whoever "Hanson" is.
That said I am guessing the idea here is something along the lines of "while raping someone is super bad and terrible, secretly making someone raise a child that isn't theirs is also really super bad", and he just put it inelegantly. At least that would be my guess at a non-crazy reading of that statement. I agree on the surface it is crazy.
A rape you could get over, lots of people do, it is an event, it happened, it recedes. Finding out 20 years down the road or whatever that your child isn't actually yours is a whole other level of mindfuck.
> If there's a technical finding that some IQ-test variable correlates slightly with race, that can be noted on page 67 of your report, alongside the correlations with height and hair colour and astrological sign.
Part 3 out of 3 is definitely not late enough, as Charles Murray has learned.
I worry that a report on average IQ of people with black curly hair might also be considered controversial.
On the other hand, a correlation between an astrological sign and educational achievement is already a scientifically accepted fact -- although some skeptics claim that this is merely a result of some kids being younger or older than most of their classmates.
"Part 3 out of 3 is definitely not late enough, as Charles Murray has learned."
I suspect that the race parts were specifically signal-boosted by other people.
The important thing is to keep the race-difference speculation squarely focused on defending against obnoxious "disparity equals discrimination" attacks. I could be wrong, but my impression is that James Damore became far more of a free speech hero on the mainstream right than Charles Murray ever did, for this reason.
Remember the different audiences. The average black person who just wants to be treated the same does not deserve to be subject to *any* sort of race-difference speculation, especially since race doesn't actually exist. The black activist who rejects colour-blindness and demands that the burden of proof be on anyone accused of disparity to disprove racism... deserves to be completely destroyed.
> "it looks like blacks are underrepresented in your profession..."
I can't comment on race, but I can tell you about gender. Women are underrepresented in my profession.
If you look at the pool of qualified candidates, the proportion of women in the pool is comparable to the proportion of women in employment. But this doesn't mean there is no problem.
It may or may not be prejudice. I can't hire women who don't apply. Universities can't teach girls who don't join that course. At senior school level the imbalance between the genders for the relevant subjects is small or none. So somewhere there is a thing that occurs that turns girls away from the profession.
I suggest that this thing is worth at least studying so we can understand what is happening and why, even if we do then determine we can't fix it.
So it is for race, IMO, or any other identifiable group of people: where there is an observed step between proportions in some initial population and the final outcome, there is something worth trying to understand and maybe address (or maybe not, depending on the root cause, but you can't begin to make that decision until you understand the cause!).
The only way to begin this research - to acquire the "hard evidence that someone else is discriminating", if there is any to be found - is for someone to point out that "blacks are underrepresented" - to notice the difference between proportions in the target group versus the general population - and to try to follow the chain back to work out why that is, rather than stopping at the first link.
> I suggest that this thing is worth at least studying so we can understand what is happening and why, even if we do then determine we can't fix it.
I am actually kinda fine with disparate outcomes if there is equality of opportunity.
I don't want to convince half the men in construction to become kindergarten teachers and half the women who are kindergarten teachers to take on construction to reach some cosmic balance.
If a third of the high school Science teachers are female, none of them are sexist and two thirds of the students who end up studying physics are male, I don't see a problem which would have to be addressed by selectively encouraging more women to study physics.
Of course, if you have few female scientist role models and most teachers blatantly state their opinion that while girls are better at rote learning, boys are better at thinking about science, then you don't have equal opportunity.
But for universities, these dark days are mostly behind us I think. Women are (iirc) over-represented in some high prestige fields like law and medicine, while being underrepresented in other high prestige fields like engineering. Saying that we should work to get more men into medicine and more women into engineering seems silly to me.
> I don't want to convince half the men in construction to become kindergarten teachers
If a third of the girls want to be engineers, and then a year or two later they don't any more, while for boys the proportion remains unchanged, maybe it's worth asking what exactly happened during that year?
Ok. I would personally put sex in a very different category to race: the former, unlike the latter, is a real thing, a biologically significant thing, that affects nearly every part of everyday life. (Though that doesn't mean people need to talk about it as *much* as they do.)
Other than that, two questions.
1. If you notice that blacks or women are underrepresented in organisation X, should the burden be on you to prove actual discrimination, or on X to disprove it?
2. If you are allowed to bring up the disparity, are others allowed to bring up purported innate reasons for this disparity? That's the *entire* context here.
("Blacks are underpresented"; "here are some possible explanations for this other than discrimination"; "how DARE you make such racist claims!"
No. Either race-based talk and speculation is acceptable, or it isn't. I'd prefer "isn't" but allowing it all is consistent. The one thing you can't do is allow it only when it suits you and try to cancel those who challenge you, which is exactly what this discussions is about!)
>The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race.
So, how would one go about discovering that discrimination is happening, if you aren't allowed to measure and see that some race is under-represented? Are you only allowed to notice racism if you have emails from the hiring manager saying "mwahaha, I hate black people and will never hire one"?
The same way you notice any other discrimination or prejudice is happening. Let's say I have curly hair. How would I know someone is discriminating against curly haired people? Probably if I notice some weird decisions being made that seem to have no pattern, and then I look for possible patterns and discover all the rejected applicants had curly hair, and then I look for further evidence corroborating that and only then start playing the hair card. But would it be constructive to make "curly haired" a central and public part of my identity? To constantly inquire into the proportion of curlyheads employed or admitted to a particular institution? To conspicuously divide those I interact with into curls and non-curls and treat them differently and default-assume that the latter are discriminating against me with no evidence? Wouldn't that just enormously increase the salience of curly hair and the chances of actual discrimination? And would I have any right to complain when people start speculating about the inherent differences between curlyheads and non-curlyheads when *I'm the one who keeps bringing the distinction up*?
Adam Carolla has a bit that the only true "privilege" he received as a (poor, illiterate) young white man was not being able to attribute people's inexplicably assholish behavior to anything except them being assholes.
Well, that might explain why Lighthaven is so cagey about letting outsiders know where the secret clubhouse is. Though I see the Guardian has effectively doxxed them on that front, because of course they did.
Yes; I was not able to find that information anywhere I looked in the leadup to LessOnline, nor find anyone to ask. It's now trivial to find for anyone who reads the Guardian article and knows how to google, of course.
"Antifa" is a very loosely defined set of antifascist tactics, rather than anything remotely resembling a group that one can be a member of. The best way I can summarize it is as the belief that fascism must be resisted with action, including physical fighting if necessary--hence the name (long story but it's basically derived from "anti-fascist action").
It is better characterized as a movement, yes, although it is very fuzzy and decentralized: unlike BLM, there aren't even any competing national organizations claiming to speak for it. It's really a bunch of individuals and (usually) small groups, often with highly differing political alignments, who share points of unity on what they perceive to be the need to oppose fascism directly.
I find this description hard to believe, specifically the part that people base their identity on being *against* something without being similarly strongly *for* something.
Yeah, you are not a fascist; well neither am I, but that wouldn't make anyone mistakenly consider me an Antifa member. What else is missing?
What's missing--in your analysis, not from the reality on the ground--is the bit where a diverse group of people can be, and are, strongly against one thing (fascism*) while being strongly in favor of different things (anarchism, state socialism, 20th century Euro-style social democracy, etc.). That's the main reason why "antifa" can only ever be a set of points of unity among different groups, rather than a group in itself.
*or communism, or libertarianism, or any of a number of things that many people are against, even as the things those same people are for diverge widely.
The way I heard it is that Antifa started out as an offshoot of revolutionary leftist movements (particularly Communists and Left-Anarchists, hence the black-and-red flags motif in a lot of Antifa branding) that decided to focus on direct action against fascist groups instead of the more conventional tactics of advocating against the dominant liberal social order. And it got a lot of its early traction in places like the Punk subculture where it served as a convenient label for pushback against neonazi skinhead gangs.
I think there's also some motte-and-bailey stuff going on, especially post-2016, where the motte is "Antifa is anyone who wants to fight back against the rising tide of fascism" and the bailey is a variety of small, mostly-leftist groups that use the banner for street violence against targets they perceive to be fascist.
And as with many motte-and-bailey situations where there isn't a single cohesive organization with well-defined orthodoxy and messaging, neither part is necessarily insincere, since there are no doubt plenty of people who identify with Antifa in the motte sense but are only associated with the bailey-Antifa groups in terms of general sympathy for the concept of punching Nazis.
Oh, certainly I do. The writer meant to discredit another writer by suggesting that the latter was concealing their membership in an "organization" that it's not, in actual fact, possible to be a member of. More broadly, they meant to exploit the ignorance of their audience by presenting this "fact" as though it were some kind of bombshell.
It worked on at least one ACX commenter, which is why we're having this conversation.
I'll reiterate the question, and I'm not being obtuse, actually; I've literally never heard the term used before except in conspiracy theory contexts interleaved with qanon and sovereign citizen rubbish, and none of the folk from those groups have ever been willing to explain even when the torrent stops long enough for the question to be asked.
Never mind other people; what do /you/ mean by it?
OK, so the accusation is that this person is involved with mob violence. Thanks, that is indeed now clear. My most charitable impression hitherto was that most use of the term was a bit like the use of "Anonymous" in media hype a decade or two back, but this is much more specific.
Yesterday there was a hint the war on Gaza could plausibly expand to a wider war in the Middle East, one where the 13th of April Iranian attack on Israel could look like a normal day.
Previously, Israel already bombed Lebanon as far north as Beirut. Bombing Beirut is unusual, but the usual occurrence is bombing the villages and cities of Lebanon's south as far as Baalbek (100 km from the Israeli border), as well as Damascus and south Syria, in a campaign that has killed a total of 300+ people named as Hezbollah members, plus other members of Hamas and affiliated groups, as well as dozens of civilians. In return, Hezbollah launches anti-tank missiles and drones at Israel's north, now completely empty of civilians but still containing forests, abandoned property, and military bases and personnel. Hezbollah rockets usually only land as far into Israel as Kiryat Shmona, about 5 km from the Lebanese border. Hezbollah is deliberately restraining itself here, as it is known to have enough range to reach Tel Aviv and beyond. Hezbollah launches drones as far as Haifa, but those get shot down by the IDF's defenses. When rockets and drones fall into open green areas, they ignite massive forest fires that last days and burn what the Israeli press reports to be 80K dunams of land (80 square kilometers). This is expected to rise as Hezbollah ups the ante and we get deeper into the summer.
Yesterday, Hezbollah published a ten-minute-long surveillance video of Haifa, including the city's port, a military industrial center, several Iron Dome sites, and more. In possible retaliation, Israel declared that the high commander of the northern region had approved plans for a ground invasion of Lebanon.
What's interesting here is that this is exactly what Hamas wanted on October 7th: an invasion by Hezbollah from the north, coupled with new civil unrest inside Israel and in the West Bank, possibly with only a faint hope that the Arab states around Israel would do more than stand and watch. Hezbollah is said to have been ordered/advised by Iran not to invade so as not to ignite a wider war, but now Israel itself is giving Hamas what it wants, despite Hezbollah's wishes.
The 2 variables controlling whether a war will happen and how big it will be seem to be (1) Whether Netanyahu's government falls and early elections happen, or whether it continues till 2026 despite the hundreds of thousands of protestors since October 7th and the exit of Gantz and Eisenkot (2) Whether the IDF is only planning a face-saving operation that advances a few kilometers into Lebanon and then back again, or whether it intends a rematch of 2006. On the Hezbollah side, it depends on how much courage it takes to throw the first rocket at Haifa or - even - Tel Aviv.
In related news, the Israeli right sees this escalation - as it has seen the Gaza war - as an opportunity to advocate for the colonization of south Lebanon. In an online conference [1] held on Monday, leading figures in the settler movement Uri Tzafon, including Sara Netanyahu's eldest brother, outlined colonial fantasies to expand Israel up to the Litani, up to the Euphrates, and even up to the northern outskirts of Saudi Arabia.
You've become quite the political junkie my man! 😂
I miss those days...
I'm still trying to end this war by coming to a Great Reconciliation (inclusive of some wealth or land redistribution, quite probably), so if you ever want to consider activism of a whole different kind than any currently ongoing, I would love to have you on my team. You can reach me by Substack, YouTube, or my linked email in both of those.
"Hamas actually WANTED Israel to invade and kill them, Israel is playing right into their hand!!!"
Forming strategy on the basis of doing the opposite of whatever your enemy wants you to do still puts your enemy in complete control, and is an especially terrible strategy if your enemy is stupid.
C.f. the battle of Dien Bien Phu. The French strategy was to provoke a set-piece battle under favorable conditions by setting up a large, fortified base deep in Viet Minh-held territory threatening VM supply lines. The VM did exactly what the French commanders wanted them to do, concentrating a big chunk of their best troops to besiege and assault the base. Then the VM went off-script and won the battle.
1. Sole revenue stream is the sale of in-app currency called "Attention tokens". Non-redeemable, and can be purchased in-app for $0.50. Every time anyone sends a message, they pay one attention token to the recipient. So they get it back if the other person replies. You can also pay one token to guarantee that your profile is shown once to a particular person when they are swiping (and get refunded if they swipe right or don't log in within 48 hours). If you accumulate more than 100 tokens, the excess are burned. Each account has public stats of (1) what % of first-time messages they reply to (2) how many unique recipients they've sent messages to within the past 48 hours. (3) how many unique recipients they've sent messages to, over all time. (4) how many real-world dates they've organized through the app with distinct persons, over all time. (transparency is great and underutilized in all the apps thus far) The attention token revenue goes to maintain the site, as the tokens can never be redeemed. It's just a price on wasting people's time. Women tell me every time they use tinder they are flooded with inappropriate messages from jerks and losers, and this fixes that. (A rough sketch of how the token accounting could work is below.)
2. fact-checking profiles. If you can't prove it with receipts, and it's not totally subjective, leave it out or get banned. Only photos timestamped within the last year are allowed. Weight and height are only allowed if they submit photographic proof to the mods. Full Quaker mode activated.
I'm not married to this mechanism design, but more fundamentally there are three things a good dating app needs to have, and the existing ones do one out of three at best:
1. a tax on bullshit
2. a tax on unwanted attention
3. massive network effects
Manifold.love had only half of a solution to #1. eHarmony has only half a solution to #2. Tinder has only #3. Hinge has none of the above but it worked anyway (that's how I found my wife), by dumb luck or a good feed algorithm.
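For concreteness, here's a minimal Python sketch of how the token accounting in point 1 could work. The class and field names are mine, and I'm guessing at details the proposal leaves open (e.g. exactly how reply rates are counted); it's an illustration, not the actual mechanism.

```python
# Toy sketch of the attention-token accounting described above. The 100-token
# cap and $0.50 price come from the proposal; everything else is my reading.

from dataclasses import dataclass, field

TOKEN_CAP = 100          # tokens above this are burned
TOKEN_PRICE_USD = 0.50   # purchase price; tokens are never redeemable


@dataclass
class Account:
    tokens: int = 0
    first_messages_received: int = 0
    first_messages_replied: int = 0
    contacted: set = field(default_factory=set)  # unique recipients, all time (public stat)

    def credit(self, amount: int) -> None:
        # Anything above the cap is burned, so hoarding attention is pointless.
        self.tokens = min(self.tokens + amount, TOKEN_CAP)

    def reply_rate(self) -> float:
        # Public stat: what % of first-time messages this account replies to.
        if self.first_messages_received == 0:
            return 0.0
        return self.first_messages_replied / self.first_messages_received


def send_message(sender: Account, recipient: Account, recipient_id: str) -> bool:
    """Sender pays one token to the recipient; they effectively get it back
    when the recipient spends a token of their own to reply."""
    if sender.tokens < 1:
        return False  # has to buy more tokens at $0.50 each
    sender.tokens -= 1
    recipient.credit(1)
    if recipient_id not in sender.contacted:
        recipient.first_messages_received += 1
        sender.contacted.add(recipient_id)
    return True
```

The point of modelling it this way is that a conversation is roughly token-neutral for both sides, while spamming strangers who never reply is a pure cost to the spammer.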
Remember Luna was going to be the dating market that solved all these problems by using their own crypto coin and men would have to buy it to message women? Whatever happened to them? 😁
> Sole revenue stream is the sale of in-app currency called "Attention tokens". Non-redeemable, and can be purchased in-app for $0.50. Every time anyone sends a message, they pay one attention token to the recipient
Terrible idea; a woman with a chatbot will just recreate OnlyFans.
Did you miss the part where the tokens aren't redeemable? And it'd be sufficiently inconvenient to resell the tokens / trivial for AI moderators to detect that nobody would bother.
ChatBotGirl: hello there handsome, if you send $40 of bitcoin to xxxxx I'll send you 100 messages
Guy: why should I trust that you'll actually do that after I pay you?
GPT4Moderator: Hello, we've detected that ChatBotGirl was trying to resell tokens using offsite payments, and banned her.
And then after they ban all the chatbot girls, they find out that there are only around six real women on the site to a thousand men, and the thing collapses anyway.
1. Bots are like three orders of magnitude rarer on dating apps that don't have a free version. Let's say you have to pay one attention token to make your profile visible to anyone's swipe feed for the next 24 hours or 10 right swipes, whichever comes first.
2. You can just cap the net token gain from any one interlocutor at 3 (burning the excess) and wordfilter chat to block links and crypto addresses if GPT inference is still too expensive at scale. (Rough sketch of what I mean below.)
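To spell out that cap-and-wordfilter combo, a rough sketch; the cap of 3 is from the comment above, but the regexes and function names are purely illustrative, not a real spam filter:

```python
# Sketch of the anti-resale measures: cap the net token gain from any single
# interlocutor (burning the rest) and reject messages with links or things
# that look like crypto addresses.

import re

NET_GAIN_CAP = 3  # max net tokens you can earn from any single interlocutor

# Illustrative patterns only; a production filter would need to be more careful.
BLOCK_PATTERNS = [
    re.compile(r"https?://\S+", re.IGNORECASE),      # links
    re.compile(r"\b(?:1|3|bc1)[a-zA-Z0-9]{25,}\b"),  # Bitcoin-style addresses
    re.compile(r"\b0x[a-fA-F0-9]{40}\b"),            # Ethereum-style addresses
]


def allowed_credit(net_gain_so_far: int, amount: int = 1) -> int:
    """How many tokens actually get credited from this interlocutor; the rest burn."""
    return max(0, min(amount, NET_GAIN_CAP - net_gain_so_far))


def message_allowed(text: str) -> bool:
    """Reject messages containing links or anything resembling a crypto address."""
    return not any(pattern.search(text) for pattern in BLOCK_PATTERNS)
```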
There aren't enough of them to go around as ongoing partners, sure.
But there are less-not-enough of them to go around as first dates. And every time they go on a first-and-only date with someone a worse match for them than me, that's a failure of the system that I'm happy to provide incentive to improve.
Like I said, I'm being cynical (and going for laughs). But I think there are more geeks than geekettes or geek-lovers, and that's always going to be a problem with these sorts of sites.
But if it can produce at least *some* happy couples, then its utility is greater than zero, and they should continue.
Another solution is arbitrage, pairing males from markets where males are undervalued with females from markets where females are undervalued. Some sites do this.
It was more of a joke about the way rationalists keep going 'our dating site never works!' and it never seems to improve. Sorry, guys, women don't want what we're selling.
You should really be careful to include the qualifier of "hot" when you're talking about the lack of female nerds/geeks/etc.
Because there are indeed plenty of average and below average single female nerds/geeks/etc. They're just literally not as visible as the 0.2% of incredibly hot cosplay/influencers.
My observation is that there's a substantial male majority in specific nerdy professions and hobbies, such as computer programming and tabletop RPGs. "ACX readers who are engaged enough to answer the survey", for instance, are 85% male. When I was in college (early 2000s), both the computer science department and the tabletop RPG club were about 90% men. More recently, my peer group at work and in hobbies is somewhat less imbalanced, but that just means between a 2:1 and a 5:1 ratio rather than a 9-10:1 ratio.
OTOH, there are other nerdy professions (e.g. biology and other life sciences) and hobbies (e.g. fantasy fandom) that are more balanced or majority women.
I wasn't claiming there was an equal distribution of gender in nerds/geeks, just that there are many single women in those groups who aren't being approached by the men in those groups for...reasons.
I keep hearing that, and am skeptical (though obviously our lived experiences are going to be very different). I kind of figured the male geeks would lower their excessive hotness standards (we ain't too cute ourselves in most cases) and the market would clear, so to speak. Maybe not.
Response rates will be much higher because the number of incoming messages will be so much lower when people have to be judicious about it instead of spamming everyone.
Not at all. The amount is trivial per person sending the token, but attention is distributed in a power law in dating apps. However, with the cutoff at 100 tokens, the recipient isn't benefiting either.
How are you going to tax the unwanted attention of every guy who thinks "okay, the other one hundred and fifty guys wasted their tokens, but *I* have a real chance here?"
Plus, how do you decide which tokens get burned? I presume the oldest one hundred first, if the recipient hasn't responded to any of the senders?
Suppose I signed up for your app and got 200 tokens (and pigs will fly), can I use those tokens to reply to people or do I have to buy my own tokens if I want to message someone?
> How are you going to tax the unwanted attention of every guy who thinks "okay, the other one hundred and fifty guys wasted their tokens, but I have a real chance here?"
By laughing all the way to the bank when they buy tokens for that purpose
I think the idea here is that 1 token = 1 message, regardless of whether it's initial or a response. They're fully fungible, but you're (in a sense) sending one back and forth when you have a conversation.
Only if the feed algo sucks and people ignore the response rate stats. Current apps show people who are way out of your league in the feed because it's optimized for entertainment more than actually helping anyone find a match.
I like the idea, but I suggest being less strict about it.
Also some profits should go to some charity. Then popular people who are on the fence could lie to themselves: "noo... I don't actually NEED to use this dating site, I am only doing this to feed some hungry children with other people's money".
> (3) how many unique recipients they've sent messages to, over all time. (4) how many real-world dates they've organized through the app with distinct persons, over all time. (transparency is great and underutilized in all the apps thus far)
This part I don't like. The main reason I avoid dating sites is that I don't want to paint a target on my back for all eternity for the whole world to see. If my failures disappear after 48h, then I would feel much more comfortable using that site. (If only the person I swiped right on can see these stats, then I could get comfortable with it. But even then these stats should only go back a year or so.)
> 2. fact-checking profiles
what is the point of this?
If someone lies about something that is important to me, then the worst that could happen, would be to waste some time.
Dating is fundamentally an information problem. You could analogize it to the multi-armed bandit problem in decision theory. The more accurate and complete information that you get up-front, the less time you waste dating the wrong people, and the quicker you find your life partner. This is why the optimal rule is to force everybody to have complete and accurate info on their dating profile. Unilaterally doing that in the current environment would be an individual handicap, but if it were universal then nearly everyone would be waaay better off.
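To make the bandit analogy concrete, here's a toy simulation (purely illustrative numbers, not anyone's real data): each arm is a type of candidate, each pull is a date, and better up-front information amounts to a better-informed prior, which typically means fewer dates wasted on the wrong arms:

```python
import random

# Toy illustration of the multi-armed bandit framing (all numbers hypothetical):
# each "arm" is a candidate type, a "pull" is a date, and the reward is whether
# the date went well. Accurate up-front info = a prior close to the truth,
# which usually means fewer wasted dates before concentrating on the best arm.

def thompson_dates(true_rates, priors, n_dates=200, seed=0):
    """Thompson sampling with Beta priors; returns how many dates were 'wasted'
    on arms other than the actual best one."""
    rng = random.Random(seed)
    best = max(range(len(true_rates)), key=lambda i: true_rates[i])
    counts = [list(p) for p in priors]  # priors[i] = (good dates, bad dates) pseudo-counts
    wasted = 0
    for _ in range(n_dates):
        samples = [rng.betavariate(a, b) for a, b in counts]
        arm = max(range(len(samples)), key=lambda i: samples[i])
        if arm != best:
            wasted += 1
        reward = 1 if rng.random() < true_rates[arm] else 0
        counts[arm][0] += reward
        counts[arm][1] += 1 - reward
    return wasted

true_rates = [0.1, 0.3, 0.6]           # how well dates with each type actually go
flat = [(1, 1)] * 3                    # no up-front information
informed = [(1, 9), (3, 7), (6, 4)]    # accurate profiles ~ priors near the truth
print("wasted dates, no info:  ", thompson_dates(true_rates, flat))
print("wasted dates, good info:", thompson_dates(true_rates, informed))
```

The informed-prior run typically wastes noticeably fewer dates, which is the whole point of forcing complete and accurate profiles.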
> This is why the optimal rule is to force everybody to have complete and accurate info on their dating profile.
I thought it's fairly well attested that the many filters available on the current apps are net negative?
Like if you compare current happy relationships and the relevant stats, and matched single people on apps, people are filtering on things that don't actually matter to relationship quality and happiness.
Particularly as an attractive woman, you get this list of legible metrics and think "of *course* I want a 6' 2"+, PhD-holding man with income over $200k", or "might as well winnow the chaff" and select a bunch of legible filters to reduce the pool of messages, because you're going to be flooded with messages whatever you choose.
But then when you're swiping, there are two additional hidden (devastatingly strong) selection effects - you're filtering on attractiveness in pictures when swiping, and in the background, you have now cumulatively filtered on "especially *attractive* 6' 2"+ $200k+ men who are *still on a dating app.*" If an attractive tall successful guy wants to be married, he already will be - the ones you're seeing on apps are overwhelmingly there to sleep around, whatever he says, at least 9/10 of them, because the ones who want to get matched can get matched right away and will fall off the dating app.
This is just a sketch, mind you, but it's a finger pointing at the moon of overall dating app dynamics.
So those legible filters are screwing people who want a high quality long-term match, for various reasons. When all the things that *actually* matter for relationship quality and happiness are illegible, and part of two-person individual dynamics anyways.
I think giving people more legibility in filters isn't necessarily going to do people many favors, in other words. It's the wrong way to approach the problem, which fundamentally requires that roughly-assortatively-matched people go on a bunch of dates to discover and find those illegible two-person-dynamics qualities that actually matter.
You think that people seeing that a tall model attractive dude with high income has been on a zillion dates will make them less likely to go on a date too, but you're probably wrong. Similarly, you think that if people see that hottie-McHotface has 1k messages and has responded to 10 of them, she won't get additional messages, and I think that's probably wrong too.
Dating apps have the winner-take-all dynamics characteristic of large pools of competitors, like professional athletes or musicians. Back when OkTrends was still a thing, they analyzed the data and found that women empirically consider 80% of men unattractive, and only ever consider the top 20%, vs men who rate on a bell curve.
> When all the things that *actually* matter for relationship quality and happiness are illegible
I don't believe this, and I think most of the people pining for 2011 OKC are on my side. Their match % algorithm is shockingly good between people who use it correctly (i.e. accurately label the importance of questions, answer truthfully and correctly, etc). Sure, there are *some* illegible factors, but I know I'm not going to enjoy spending my life with someone who does X or Y or Z or believes A or B or C or who can't stand doing P or Q or R. If all nine of those things are 50/50 filters, I'd be ecstatic if me and my perfect partner could both filter out 99.8% of people before we even see their profiles.
Against the OkTrends inequality point, I think the integral of hotness over lifespan is much more egalitarianly distributed among women than among men. Almost every woman was hot when she was 16-21. I would say the median girl I knew in high school was hotter than 100% of the people I saw on apps in my mid-30s. I think women do themselves a disservice by waiting too long to pair up.
I don't disagree with you on any of that. People often misuse legible information. But more information is better than less information when people are being rational, and since the system forces them to put their money where their mouth is they will be a bit more rational. More information can help people avoid the adverse selection you speak of -- in this case the stats would reveal the rakes. (assume you need full KYC to create an account so there are no smurf accounts).
That's true, and also to some extent people change each other in hard to predict ways. But a lot of it is it takes time to get to know all of the preexisting traits of a person. A god's eye view of a suitor's history, summarized impartially by GPT6, plus a battery of psychometric tests, would save a lot of hassle in expectation.
I have a gut feeling that says yes, but now that I've slept on it, I am less sure.
My thought goes something like this: some people are too embarrassed to use dating sites, and need an excuse, even if it is just a superficial one.
Now that I think about it some more: I can imagine myself being such a person. (Albeit the "excuse" should be less obvious, and I can't imagine any website pulling this off in a way that would be comfortable for me.)
> If you want to feed hungry children there will be far easier ways to do it.
actually, I think that could be quite fun. You raise money for charity simply by making yourself available on a dating site. Then other people will "donate" money for the chance to approach you. The other person is not "paying for attention", they are "donating to charity, while attention is just some side-effect" (or at least that could be the lie they tell themselves).
Compared to EA-Stuff this is probably not the most efficient way to donate. But so are most non-EA charities.
The only info you have to give people up front is that sending a message costs $X (fifty cents in this proposal). All the other details can hide on a FAQ page that nobody reads, and pop up in the interface if/when someone encounters them for the first time. e.g. the 100 token cap might show up once someone has 50 and it warns them they won't be able to store more than 100.
I was thinking, perhaps very naively, about things that people consider "good" and "bad", and whether there are good and bad people, and what does that even mean. Now I do not want to discuss the goodness or badness of specific actions, but rather the... paradigms(?) of what it means for the people to be good or bad. As I see it, these are the perspectives that people seem to have:
"Good people do good things, bad people do bad things." -- Simple and elegant. Popular in stories for children. Evidence in favor: past behavior is the best predictor of the future behavior. Evidence against: in real life people do good things on one day, bad things on another; or in different contexts.
"Everyone is good at heart. Smart people understand the impact of their actions, and empathize with others, so they mostly do good. Stupid people act chaotically, and cause a lot of suffering. The problem is stupidity, not some inherent evil; everyone is a good guy in their own story." -- Marcus Aurelius. Everyone who believes that improving education is the key to human goodness. Evidence in favor: sometimes explaining how your actions impact other people changes your behavior. Evidence against: sometimes it does not, some bad people are quite aware of what they are doing.
"Everyone needs to have their basic needs satisfied first, and then they can be nice and generous. People are bad when they are hungry or hurt, nice when they feel good. By improving the living conditions of people we improve their behavior." -- Popular on the left, basically because it means that whenever poor people do something bad, in some sense it is never their fault. Evidence in favor: introspection, it seems to me that this is basically how I behave. Evidence against: some people never have enough; different people react to their own misfortune in opposite ways: some want to take revenge on the others, some want to protect the others from suffering the same fate.
"Good and bad are relative; it means whatever is convenient for you or your tribe. Your ingroup is tautologically good because it fights on your side; your outgroup is tautologically bad because it fights against you." -- Conflict theory. Evidence in favor: conflict theory is popular. Evidence against: why do some people consider donating to anti-malaria cure good if no one in their tribe is at risk of getting malaria?
"Everyone is equally good and bad, you only don't see it because you are not sufficiently enlightened." -- Unless there is a specific evidence that Hitler once saved a kitten, this seems obviously wrong, but this is one of those things that some people consider "deeply wise", so I am mentioning it here.
"Everyone does whatever gods make them do." -- ancient Greeks according to Julian Jaynes. Difficult to falsify.
"Everyone does whatever evolution made them do." -- yeah, but I am asking what *specifically* it is.
"Good and bad are just habits. When you keep doing good things, they become natural to you, and doing bad things would feel wrong. When you keep doing bad things, those become natural to you, too." -- Evidence in favor: seems like an obvious description of what most people do. Evidence against: it doesn't explain why some people try to change their habits, sometimes successfully.
"There is no such thing as good and bad. There are smart people, who know what they want, and have the courage to take risks and do socially disapproved things in order to get it. And then there are losers who believe in stupid concepts such as 'good and evil' that other people invented in order to manipulate them, LOL." -- This seems to be what bad people believe, though they would obviously reject the label. Evidence in favor: a lot of preaching is hypocritical, and a lot of not-doing-evil is motivated by fear of punishment rather than by compassion. Evidence against: sometimes it actually takes a lot of courage to do a good thing; also, the 'stupid' good people somehow keep surviving, even if this theory would predict their extinction.
...is there some other major perspective that I have missed?
> ...is there some other major perspective that I have missed?
One I didn't see in your list:
"Good and bad is contextual and / or computationally intractable."
Like having a bunch of kids while poor is bad. Being really demanding of your kids is bad. But kids raised in a poor + demanding environment are more likely to succeed and accomplish better and more impactful things in the world, compared to similarly poor but lax childhood environments. There's a general "adversity and being a demanding asshole is generally bad, but forms stronger characters that do more net good" argument here.
Working in finance is probably bad. Particularly at the higher and more abstract levels like CDOs and other esoteric instruments, you're essentially paying people millions for arbitraging environmental / regulatory environments and moving bits in ledgers back and forth in fundamentally zero-sum ways. But finance has a legitimate risk / capital allocation function in society, and capitalist economies with finance-driven capital allocation have empirically lifted billions out of poverty. Yet capitalism systematically drives many, maybe most, workers' incomes and standards of living down except for an elite, over-educated few who capture most of the ongoing productivity gains, and capitalism is a Red Queen's race that puts people on endless hedonic treadmills. So it lifts billions out of poverty, only to slam those billions into endless pointless Red Queen's rat races and hedonic treadmills / keeping up with the Joneses. Etc.
Hard grading teachers with high standards are bad for the vast supermajority of average students who just want a good grade without a ton of effort and see the fundamental credential-driven nature of education as fundamentally pointless. But those hard-grading teachers drive 1% of actually interested / capable students to learn more, and it's ultimately those students who drive most innovation and value in the world in terms of actions. So maybe every teacher should be very strict with high standards! But that would lead to 99% of students not making the cut, wasting years on credentialism journeys that failed, etc.
So what is good, what is bad? It's purely contextual, and likely computationally intractable. Certainly to merely human brains enmeshed in complex, chaotic, and emergent systems and dynamics, it's computationally intractable.
Also, you didn't include the famous Solzhenitsyn quote, so including for the sake of completeness:
“If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?"
Goodness is desirable being, and being is divided into potency and act. Humans cannot really desire nothing. But we can desire beings that are so much in potency and so little in act that they hardly exist at all. I don't know if that makes a person bad, but it certainly hinders them in doing good - because these things are inherently disappointing and the disappointment saps our energy.
>"Everyone needs to have their basic needs satisfied first, and then they can be nice and generous. People are bad when they are hungry or hurt, nice when they feel good. By improving the living conditions of people we improve their behavior."
Evidence against also includes the fact that rich people do bad things too. If this was true, we would expect a gradient of good or bad behavior based on income, with those with the most resources exhibiting the most moral behavior. This does not at all seem to be the case.
>"Good and bad are just habits. When you keep doing good things, they become natural to you, and doing bad things would feel wrong. When you keep doing bad things, those become natural to you, too." -- Evidence in favor: seems like an obvious description of what most people do. Evidence against: it doesn't explain why some people try to change their habits, sometimes successfully.
That's a good explanation of why people do good or bad things, but not what good or bad is. People can change their habits if they choose to seek the good, but the good is not the habits, the habits instead are what make your nature good or bad.
Moral realism would put it that there is a real "good" to which our behaviors can be in alignment with, or out of alignment with. To be in alignment with the good is to be good, and to be out of alignment with it is to be evil. Once you believe there is a good out there you can be in alignment with, then it makes sense to talk about how you align your behaviors to it: about habits, and incomes, and whether people are naturally aligned with the good or not. But without a belief that good exists, it's nonsensical to talk about people being good or bad.
> Evidence against also includes the fact that rich people do bad things too. If this was true, we would expect a gradient of good or bad behavior based on income, with those with the most resources exhibiting the most moral behavior. This does not at all seem to be the case.
Am I missing something? Isn't this *overwhelmingly* true??
Isn't the vast majority of violent and property crime perpetrated by a handful of predominantly lower-income younger men? Like if you're trying to predict the factors relevant to "committing property or violent crime," your dominant factors by far are going to be income / wealth, age, and maleness?
Throw in the fact that some huge percentage of crime is committed by an even smaller handful of repeat offenders, I don't see how this isn't overwhelmingly, glaringly true.
Sure, you can say "well moral actions are a bigger set of things than strictly criminal actions" or whatever, but "criminal actions" seems like a fairly strong Schelling point to ground your intuition and argument on, and it's way wrong even there, so I wouldn't necessarily extend the conclusion upwards.
While poverty is somewhat correlated with crime, it is not clear that it is causal. It is quite possible that being the type of person who does crimes makes you more likely to be poor. If poverty were causal we would expect all poor people to have a propensity towards crime, but that's not the case: there are more Asians than Blacks under the poverty line in NYC, yet the Asian crime rate is much lower. (https://www.city-journal.org/article/poverty-and-violent-crime-dont-go-hand-in-hand). Weird result if poverty is causing the crime.
Rich people have, of course, committed fraud, embezzlement (theft), murder, rape, assault, etc, despite having all the resources they could need. They don't break into houses and rummage around for jewelry because they have better options, like skimming the pension fund. And beyond crime, would you say that rich people are generally more moral people than poor or middle class? Do they lack envy, pride, greed, and lust? Do they forgive easily, and treat others the way they wish to be treated? They all should, if immoral behavior comes from material lack. Perhaps more of them do than the poor, though again I would say the causal relationship there is likely reversed.
Of course if it was true that evil behavior comes from material deprivation, then the natural conclusion would be to treat poor people as evil. It reminds me of a passage from Chesterton:
"I have listened often enough to Socialists, or even to democrats, saying that the physical conditions of the poor must of necessity make them mentally and morally degraded. I have listened to scientific men (and there are still scientific men not opposed to democracy) saying that if we give the poor healthier conditions vice and wrong will disappear. I have listened to them with a horrible attention, with a hideous fascination. For it was like watching a man energetically sawing from the tree the branch he is sitting on. If these happy democrats could prove their case, they would strike democracy dead. If the poor are thus utterly demoralized, it may or may not be practical to raise them. But it is certainly quite practical to disfranchise them. If the man with a bad bedroom cannot give a good vote, then the first and swiftest deduction is that he shall give no vote. The governing class may not unreasonably say: "It may take us some time to reform his bedroom. But if he is the brute you say, it will take him very little time to ruin our country. Therefore we will take your hint and not give him the chance." It fills me with horrible amusement to observe the way in which the earnest Socialist industriously lays the foundation of all aristocracy, expatiating blandly upon the evident unfitness of the poor to rule. It is like listening to somebody at an evening party apologising for entering without evening dress, and explaining that he had recently been intoxicated, had a personal habit of taking off his clothes in the street, and had, moreover, only just changed from prison uniform. At any moment, one feels, the host might say that really, if it was as bad as that, he need not come in at all. So it is when the ordinary Socialist, with a beaming face, proves that the poor, after their smashing experiences, cannot be really trustworthy. At any moment the rich may say, "Very well, then, we won't trust them," and bang the door in his face. On the basis of Mr. Blatchford's view of heredity and environment, the case for the aristocracy is quite overwhelming. If clean homes and clean air make clean souls, why not give the power (for the present at any rate) to those who undoubtedly have the clean air? If better conditions will make the poor more fit to govern themselves, why should not better conditions already make the rich more fit to govern them? On the ordinary environment argument the matter is fairly manifest. The comfortable class must be merely our vanguard in Utopia."
I wasn't arguing causation, I was arguing that there's a plainly obvious "wealth and criminal behavior" gradient. Those with the most resources DO commit crimes at (much) lower rates. And didn't want to touch the race issue, but you're right, there are indeed additional variables like race that you could add to get a more predictive model.
But the fact that "income / wealth" is almost *certainly* one of the variables you need to include is what I was arguing. In other words, there is INDEED a "wealth / crime" gradient, and it's overwhelmingly obvious.
> Do they forgive easily, and treat others the way they wish to be treated? They all should, if immoral behavior comes from material lack. Perhaps more of them do than the poor
Yes, in fact, I think "wealthier people commit less crime and generally have better day-to-day character" is obviously true. Sure, rich people commit property crime at lower rates, and maybe this is confounded by having more property or whatever, just like poor people commit less embezzlement because they have less opportunity.
But you mention rape earlier - you don't think there's an "income / wealth" gradient in rape? You don't think there's one in assault and murder? I would bet significant sums there are, and poorer people commit more of all those things.
People love ragging on rich people, but we're ALL rich here in this comments section, at least rich in terms of "wealth / crime gradients." Anyone with a white collar career who has family or routine contact with actual lower income people can overwhelmingly confirm from direct experience that crime and character is basically uniformly worse on average the lower the adult-income ladder you go.
As to causation, I think it's a trickier matter, but we don't need to worry about causation. Predictive models work just fine with correlations.
>As to causation, I think it's a trickier matter, but we don't need to worry about causation.
The question of whether immorality is caused by material deprivation is literally the point of this thread, and the point of my comments up to this point.
> The question of whether immorality is caused by material deprivation is literally the point of this thread, and the point of my comments up to this point.
Oh, I thought it was basically understood from your earlier "It is quite possible that being the type of person who does crimes makes you more likely to be poor" comment that it's most likely to be a self-reinforcing cycle with poor impulse control and short planning horizons driving both poverty and crime generationally, at which point causation is pointless to tease out.
But you want a clean signal? Let's look at rape / assault / murder rates among lottery winners, compared against old money scions with similar wealth.
Which way would you bet, higher wealthy crime in born-from-poor family, or born-from-rich-family? I bet it's vastly higher in lottery winners.
But what actionable insight can you actually gain from this?
"Evidence against also includes that fact that rich people do bad things too. If this was true, we would expect a gradient of good or bad behavior based on income, with those with the most resources exhibiting the most moral behavior. This does not at all seem to be the case."
I don't agree with this, but perhaps a steelman might say that these rich people are lacking something beyond income, or that income is too one-dimensional to capture all motivations. As one example, if the rich person's dad didn't love them and/or approve of them, they may be acting out against that.
This would overly complicate the "basic needs" narrative, because it would become extremely difficult to falsify. Any person who did evil could be described as missing something "basic" that causes the problem. It may not even be wrong, but may become an entirely unhelpful approach to evaluating morality.
I see your #1 - you are what you do - as basically correct, if imperfect. Yes, nobody is 100% doing good things all the time, but if we allow for shades of grey, we get "good people mostly do good things" and "bad people mostly do bad things", and nobody's perfect.
The nice bonus of this model of goodness is that it's actionable: want to be a better person? Do more good things.
"You ARE what you DO" is one of my ultimate favorite aphorisms.
Right after "you can't have good judgment without judgment" (and, like Valentine Wiggin, I'm humbly certain I couldn't have possibly originated that line and that I *had* to have read it somewhere, except that...well...after searching around a bit I think it might actually be mine).
And obviously, the two ideas are intrinsically linked!
That's pretty much where I am as well. Nobody is perfect, but it's obvious the people who are trying to be good.
We don't really care if Hitler was kind to animals or had a bad childhood. We care that most of the results of his actions were incredibly evil by almost any [useful] metric we could devise.
"By your fruits you will know them. Does anyone gather grapes from thornbushes, or figs from thistles? In the same way, every good tree bears good fruit, but a bad tree bears bad fruit. A good tree cannot bear bad fruit, and a bad tree cannot bear good fruit. Every tree that does not bear good fruit is cut down and thrown into the fire. Thus, by their fruit you will know them." -Mathew 7:16-20
And it is worth emphasizing that this is true _even though defection is a dominating strategy_ (which I consider to be the significant discovery from these experiments).
>In both actual tournaments and various replays, the best-performing strategies were nice:[5] that is, they were never the first to defect. Many of the competitors went to great lengths to gain an advantage over the "nice" (and usually simpler) strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice strategies working together. TFT (and other "nice" strategies generally) "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the other's weakness."[6]
>Being "nice" can be beneficial, but it can also lead to being suckered. To obtain the benefit – or avoid exploitation – it is necessary to be provocable and forgiving. When the other player defects, a nice strategy must immediately be provoked into retaliatory defection.[7] The same goes for forgiveness: return to cooperation as soon as the other player does. Overdoing the punishment risks escalation, and can lead to an "unending echo of alternating defections" that depresses the scores of both players.[8]
Right, I wanted to say something like this. And then there is also Moloch, which I don't know how to fit into a discussion of good and bad. Is Amazon good or bad?
Many Thanks! Yes, Moloch is similar, in the sense that realistic options have to be _stable_.
To more explicitly tie tit-for-tat back to Viliam's original comment: Cooperating (C) is usually called "good" and defecting (D) is usually called "bad", but the instability of all-C, (contrasted with the stability of tit-for-tat) shows that all-C isn't really a realistic option. Agents _have_ to be "provocable" in order to have stable cooperation.
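A minimal sketch of that dynamic, using the textbook iterated prisoner's dilemma payoffs (3 for mutual cooperation, 1 for mutual defection, 5/0 for defecting against a cooperator; this is just the standard setup, not Axelrod's actual tournament code). Tit-for-tat is never the first to defect but is provocable, and that's enough to avoid being exploited the way all-C is:

```python
# Standard textbook payoffs: (row player's score, column player's score).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's last move: nice but provocable.
    return "C" if not their_hist else their_hist[-1]

def always_cooperate(my_hist, their_hist):
    return "C"  # all-C: nice but never provoked, so exploitable

def always_defect(my_hist, their_hist):
    return "D"

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT vs TFT:  ", play(tit_for_tat, tit_for_tat))        # (300, 300): stable cooperation
print("TFT vs AllD: ", play(tit_for_tat, always_defect))      # (99, 104): loses one round, then holds even
print("AllC vs AllD:", play(always_cooperate, always_defect))  # (0, 500): unprovocable niceness gets eaten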
"Si vis pacem, para bellum"
I would put Moloch, the effects of competition, into the same general category of selecting only _realistic_ options. If an apparent option falls apart in the presence of competition, it isn't really a realistic option.
>I reject the argument that Purely Logical Debate has been tried and found wanting. Like GK Chesterton, I think it has been found difficult and left untried.
I want to believe this. In the world of technology it's at least somewhat true since there's proof in the pudding: Truth will make airplanes fly and Falsehood will make them fall from the sky. But in the world of politics, culture and religion it seems to not hold up.
The clearest example I've experienced is Creationists. The internet created fertile ground for a flood of Creationism debates. Remnants are still ongoing. I think it's fair to say that Creationism lost. It's easy to find testimonials from ex-Creationist guys who were young and nerdy and eager to defend their worldview but lost faith in the face of the overwhelming evidence. Still, Creationism seems to be going strong without much of a dent. The nerdy guys who left don't seem to have made much of a hole. It's hard to know the counterfactual but this seems like a case where Purely Logical Debate was utterly exhausted and it didn't make much of a difference. (Sure, there's a lot of junk as well, but the library of high-quality Creationism debate content is vast.)
Looking at history, it all seems so materialistic. [EDIT because the specifics here are what people seem to focus on even though I don't care much.] The Nazis lost, women got the vote and slavery was ended not because of Purely Logical Debate but because of some combination of material conditions and great tides of cultural change. Maybe the people who debated the issue helped realize an idea a decade earlier than the counterfactual, but debate doesn't seem like a big part of it.
I'd love to get some input on this. Does someone have an example of a current political question where debate seems extra important? I guess this has already been discussed below Scott's original post, but I'm good for a repeat.
Purely logical debate may be the least bad way of settling something, but it is still pretty bad because of the reliance on premises. People tend to get their basic assumptions from their cultural background, so you can't use Pure Logic to resolve cultural divides.
I think you are being too impatient here - 500 years ago Creationism was overwhelmingly the dominant position, but through the process of "Purely Logical Debate" (roughly the scientific revolution) it is now a minority position. Over time evidence came in that favoured non-creationism (Kepler, Darwin etc.) and it won favour, starting with the elites and percolating downwards.
I think a similar model is true for many political questions - take market economy vs command economy, which was very much a live debate - but over time the evidence disfavoured command economy, so now more or less every major political party doesn't plan to implement a command economy (even the Chinese and Vietnamese Communist Parties don't!). The debate (and especially the evidence) favoured one side, so it's now no longer a major political question.
Current political questions are almost definitionally ones where the debate hasn't been settled one way or the other - either it's too early to tell or they are not a matter of logical debate (say, matters of zero-sum patronage).
I'm confused on exactly what you're saying. I would think there's a spectrum of answers to why various social and political changes happen that ranges from "completely inevitable and determined by initial conditions" to "utterly chaotic and dependent on a collection of the smallest random factors". My own view is firmly in the middle, and I think most people (and *especially* people around here for some reason) tend to go way too far to one or the other extreme. I would say it's (on the whole) something roughly like: certain questions and controversies arise inevitably given enough time (e.g. democracy vs monarchy, legalise pornography or don't, etc) and then they're resolved through usually an election/referendum or a war. And it's *not* determined which way those will go, especially elections which let everyone collectively decide (for all sorts of major or minor reasons) which path to irrevocably take. Saying those results are inevitable is insulting to the voters who think and make a free decision, and in a different way to the courage and effort of the officers and troops, who aren't just playing out a pre-written story where their own acts are meaningless.
But having said all that, where does the question you're asking fit into that spectrum? Is Purely Logical Debate found on the inevitability end, or on the chaotic end, or somewhere in the middle? Perhaps you're saying something parallel to the problem of free will: either our choices are determined and thus not free, or they're random and thus not acts of will. Is that what you're saying for social choices?
Solving that for free will is very difficult and I'm not sure you can: attempted solutions involve trying to find a mechanism where an act is determined enough to be meaningful but indeterminate enough to be free. But with decisions of society I think it's much easier to solve. As I said there are elections, which are clearly free and non-determined (despite all the people who want to claim they follow rigid economic patterns: if that were true those people should be able to reliably predict election results) but ALSO have meaningful explanations for their results, as they are the collective outcome of many decisions of individuals, some blind responses to incentives and some thoughtful reasoned judgements of personal values.
Please tell me: is anything I've said here addressing your question? If not can you clarify where your question fits within this framework?
I'm also confused about your examples (and can I just strongly object to editing a post to remove something that was previously there; it makes it hard to follow the thread and even less clear what you're saying than it was before).
At the very least, are these examples of something resembling Purely Logical Debate in action or not:
1. The Allies winning in part because of less ideological self-handicapping (e.g. women in industry instead of confined to the home, not wasting resources to murder people who could have helped the war effort) and because of governments with more popular support that didn't have to do things like divide the government into competing fiefdoms to maintain the leader's authority
2. An election where the electorate punishes one side for being too arrogant or too hostile to scrutiny and debate
I think at least a few types of changes are reasonably close to deterministic. Specifically, if a technology makes some choice orders of magnitude cheaper than the main alternative, I think the cheaper alternative will wind up winning out. Compare e.g. controlling temperature with a thermostat (or some other comparable automation) vs having a human manually check a thermometer every ten minutes and flip a switch. ( There can be exceptions if the incremental cost is low but the capital cost is high enough to be a major barrier to entry. )
>there's a spectrum of answers to why various social and political changes happen that ranges from "completely inevitable and determined by initial conditions" to "utterly chaotic and dependent on a collection of the smallest random factors"
I agree. But what I'm missing is Truth. No matter why changes happen, they sometimes seem to go towards Truth (e.g. in technology) and sometimes they don't seem to go towards Truth (e.g. the continued popularity of Creationism).
>I'm also confused about your examples (and can I just strongly object to editing a post to remove something that was previously there; it makes it hard to follow the thread and even less clear what you're saying than it was before).
I was just dejected by how everyone wanted to talk about the minutia instead of the actual question.
>1. The Allies winning in part because of less ideological self-handicapping (e.g. women in industry instead of confined to the home, not wasting resources to murder people who could have helped the war effort) and because of governments with more popular support that didn't have to do things like divide the government into competing fiefdoms to maintain the leader's authority
Exactly: The allies didn't win because of Truth and because they had better arguments. They won because the Nazis were stupid.
My impression is that creationism has lost enormous ground since 2007 or so. I've hardly seen it mentioned for years, and while I can't be completely sure that there isn't just as much of an effort to push it into schools as there was back then (which for some reason no one's talking about anymore) I'm pretty sure there isn't.
Insofar as creationism is still held by many people, I'd mostly blame irrationality on the other side: i.e. atheism. When you're no more likely to see a "response" to creationists like "actually evolution is a scientifically established fact based on the following evidence; obviously this says nothing about the existence of God which is a question for philosophy" as one like "evolution happened and also matter's all that exists and there are no gods coz that's just like believing in fairies lol" then yes you're going to still have a lot of creationists. I suspect that when people in general start sticking to modest claims that have actual strong justification, then there'll be far less of two sides' arrogant and badly reasoned claims bouncing off each other.
"Exactly: The allies didn't win because of Truth and because they had better arguments. They won because the Nazis were stupid."
Huh? I don't understand your objection at all. A war is not about argument; it's what happens when at least one side refuses to listen to argument. So how is "the stupid side that hates argument loses because they're stupid and don't listen to sensible arguments" not a complete victory for the Cause of Argument?
And what do you think of my second example? (Which may describe the 2016 election that Scott was discussing, and the one after that, and probably this year's one as well: the incumbents punished for their hostility to argument, whether in the form of scientific illiteracy or cancel culture.)
>creationism has lost enormous ground since 2007 or so. I've hardly seen it mentioned for years, and while I can't be completely sure that there isn't just as much of an effort to push it into schools as there was back then (which for some reason no one's talking about anymore) I'm pretty sure there isn't.
My understanding is that about the same proportion of the US population is creationist today compared to 2007. But see Compavs point that creationism has lost a lot since 1607, which is a good counter-example. But the counter-counter is that debate increased a lot with the advent of the internet, so if debate were effective we would expect to see an effect.
If the standard isn't "there should be Purely Logical Debate available" but "there should be Purely Logical Debate available and it should be more popular than arrogance and badly reasoned claims" then I think it won't matter since that's basically impossible.
>So how is "the stupid side that hates argument loses because they're stupid and don't listen to sensible arguments" not a complete victory for the Cause of Argument?
If this is how culture changes, then it's a waste of time to do Purely Logical Debate; the side of Truth should focus on war instead. Also people will be wrong forever on questions where Truth doesn't translate well to winning wars.
I don't know what to think of elections. It seems less like a competition for Truth and more like Voters as Mad Scientists to me.
If you accept that cultural change is downwind of economic needs then you are not too far off Marx. Marx believed that the Protestant revolutions couldn’t happen without changes to the medieval economic model.
Which is largely what I personally believed before reading Marx. Morality is downwind of elite ideology and elite ideology is determined by economic benefit.
So there won't be any respite on immigration or identity politics until immigration or identity politics affect the top 1%.* We have already seen this in the US at least twice: in 1924 elite opinion turned against European immigration because capitalists were scared of importing radicalism, and a few years ago when granny bourgeois might have got the sniffles.
You can see it now with the increased hostility to China which matches pretty much exactly with the Chinese moving from being a place that could provide cheap labour to one that could compete with western capital.
> Women got the vote because industrial society needed factory workers.
The first US territory to grant women the unrestricted franchise was Wyoming, in 1869. The first state, Colorado in 1896. By 1917, women had the right to vote in 12 states, all but NY and MI being generally rural. At that point I think women's suffrage was generally regarded as inevitable, and I don't think it came out of any perceived need for female factory workers.
Oh yeah. Quick'n'dirty comparison of women's suffrage movements (the distinction apparently is that the suffragettes were more militant and 'direct action' than the suffragists) in UK - first established in 1865 and US - first established in 1848 versus women going to work in factories:
Lowell, the big planned textile mill city up and running in 1823, largest industrial centre by 1843 and actively recruiting women because:
"The city’s investors hired corporate recruiters to enlist young women from rural New England to work in the mills. Their reasoning was two-fold:
- women were apt to stay in the city only a few years before leaving to become wives and mothers, thus preventing the establishment of a permanent working class; and
- women were less expensive and more easily controlled than men.
Every woman had her own reasons for seeking factory work. Life was very difficult on a subsistence farm in New England – large families resulting in minimal (if any) inheritances, failing crops from unpredictable weather, and young men leaving in search of a better life (reducing marriage prospects)."
Women's suffrage movements were driven initially, and in large part by, women from the educated and middle to upper classes. Organising working class women as part of broader labour movements involved agitating for the vote, but it wasn't about "we need workers, let's give women the vote".
The Nazis being a death cult meant that a whole lot of very talented scientists who would normally have been developing sophisticated new weapons for the Axis powers, went and joined the Allies.
The Nazis being a death cult meant that their head of military intelligence decided he didn't want to work for a death cult and so signed up as an agent in place for the Allies. By strange coincidence, every great success of military intelligence in the European theater was in favor of the Allies, along with most of the diplomatic successes.
And, yeah, the Nazis being a death cult made them particularly eager to invade Russia.
Being a death cult hurt the Nazis a *lot*, quite possibly enough to have cost them the war. The cynical contrarian "wisdom" that ideology doesn't matter, that morality doesn't matter, that only Realpolitik matters, is not particularly useful for understanding a world populated by human beings.
That's a stronger argument; I still don't think it mattered.
The war was already over in late 41. The rest was just playing out the string. Basically, when the USSR didn't fold after the initial massive losses, it was over.
The diplomatic stuff cut both ways a bit, but yeah on the intelligence and scientific side it was a big liability.
Basically I think the Germans got a "natural 20" on their results in the early war, but were still in a losing and extremely poor situation in early 41, and they were doomed from the start regardless.
Would the war have been over in late 41, if Spain had joined the Axis in 1940? The Spain with a fascist leadership that owed Germany big time for their support in the Civil War? The Spain that could turn the Mediterranean into an Axis lake pretty much on demand, and give the German navy bases much closer to the Atlantic trade routes?
Because Germany seems to have rolled a "natural 1" on that critical diplomacy roll, and I think that's mostly on account of the spymaster/diplomat in charge of that on the German side having defected from the death cult.
What if we throw in a clear understanding of how badly compromised Enigma was, and broadly competent espionage work against the Western Allies?
I just think because of US involvement and because of the lack of information at the time (and thus a need to fight and die by the hundreds of thousands), people aren't really comfortable with the fact that even by the time of US entry in the war, it was already a done deal. Hindsight is 20/20.
The whole thing being in any doubt relied on probably 1 or 2 things neither of which happened.
Germany needed some really brilliant idea/execution of Sea Lion or a UK bungling of the defense against such. Or the USSR needed to be as fragile as many thought it was. When the Germans didn't take the UK in 1940/41 and the USSR didn't crack under the initial big push in Barbarossa in late 41...there was just no actual achievable victory scenario for the Germans. Just a slow grind down into defeat.
And the Japanese never had any chance at all. Their whole concept of the war was predicated on the US being much more isolationist and disinterested in war than it was. They lost before they even started (though that might not have been foreseeable at the time).
On the other hand, if the Nazis were just regular old nationalist Germans (like in WWI) without the death cult bit, we can't take for granted that the war would have happened the way it happened. For one thing, that kind of Germany might never have invaded Poland and the whole dang war never would have happened! Once things are going there's an argument for inevitability, but a Third Reich without the death cult bits is so different from what we actually got that I can't imagine history would have gone down the same road.
> The Nazis didn’t lose because they were a death cult. The Nazis lost because they were in a vastly inferior strategic, economic and military position.
I don't think the two things are unrelated. The reasons the Nazis lost are closely connected with the reasons they were evil. They invaded the USSR, breaking a treaty of non-aggression, because honoring international treaties is for suckers, and as a result they found themselves in a desperate multi-front war. They could have painted themselves as rescuing people from Stalinist oppression and instead started massacring civilians and razing towns to the point of convincing millions that Stalin was the lesser evil after all. They brutalized prisoners because what kind of idiot is nice to a defeated enemy, and as a result people fought them to the death rather than surrendering. Symmetrically, Hitler forbade his armies from ever surrendering even in the face of certain defeat, and as a result they got slaughtered in the field instead. They relied for important parts of their war industry on starving slaves who had every reason to want their defeat, and the resulting work was riddled with poor quality and sabotage. They diverted resources and infrastructure away from the actual fighting in favor of slaughtering civilians at a faster rate, and the result was the obvious one. Just about the only norm the Nazis respected was refraining from using chemical weapons on the battlefield, and I don't think it would have been good for them to break that one too.
Now do Germany losing WWI. Kaiser Willy was a bad sovereign but he was no Hitler. Moltke wasn't a great strategist but he was hardly genocidal. Perhaps Germany lost because they were sandwiched between alliances of the Anglo-Americans, French and Russians? You would have to provide a plausible path to victory for a nation that is woefully overmatched in economics and manpower to credibly believe the Nazis lost because of their ideology.
In fact the Nazis managed to utterly overwhelm the entirety of France in short order, something the other German government never managed after years of grueling warfare. It's funny to mention breaking the treaty with the Soviets, since that only existed as an agreement by both states to excuse their land grab of a neutral country, Poland. The Molotov-Ribbentrop Pact was hardly an ideal of international order.
Stalin was probably going to declare war on Germany sooner or later anyway. He was extremely concerned with the conquest of France and how that would upset the balance of power in the long term. The Soviets had built up a large amount of tanks, munitions, aircraft and troops on the western rim of their territory in 1941. Although they were not organized for an actual battle at the time. Stalin waffled back and forth between building up for war and not wanting to antagonize or alarm Hitler. Anyway the Soviets would not have been ready to fight before mid-1942 at the earliest and Hitler pre-emptively attacked before then. This had to do with political and strategic reasoning more than evil.
Many, many people surrendered to the Nazis. The entirety of France and Poland, for instance. About 3 million Soviet soldiers in the opening of Barbarossa. And this represented a large fraction of the total forces Germany had deployed on the Eastern front, which probably already stretched their logistics to the breaking point. Even if the Nazis weren't genocidal, I don't see how they prevent a lot of these captives from starving. One of Hitler's promises during his rise to power was that the Nazis would not repeat the mistake of the first war. That is, Germany would not surrender while it could still fight - a bitter lesson from the harsh Treaty of Versailles. Notably, the actual German soldiers had no problems surrendering to the Anglo/American forces, but would do all they could to avoid surrendering to the Soviets. This had more to do with the character of their enemies than any order from Hitler.
The Nazis were definitely bad at utilizing the resources of conquered peoples and weaving them into the framework of an empire. However their own production in the heartland of Germany wasn't very good either, in terms of producing material in the vast quantities needed for modern war. I don't see how the Nazis could have built up the industrial capacity to outproduce the Soviet Union, Britain and the US simultaneously even if they were as pure as snow. Similarly, the death camps were hardly a resource drain on a scale that mattered. The biggest phase of genocide didn't happen until the Nazis were already losing, almost like Hitler was throwing a tantrum that his dreams were being crushed and wiping out millions of people was his consolation prize.
Maybe you could argue all of the "undesirables" the Nazis persecuted would have materially aided the war effort otherwise. Except Germany in WWI was probably the premier state in Europe to be a Jew, and they still lost. Notably the Jewish chemist Fritz Haber pioneered the chemical warfare program in the first war. Of course no one claims that Germany lost because of their moral turpitude in opening the Pandora's box of modern chemical warfare. On this topic, the major powers in WWII all had stockpiles of chemical weapons but didn't use them. Probably because chemical weapons are relatively useless compared to explosives rather than any moral considerations.
To recap, it might sound comforting to say that the Nazis lost because they were awful. But this isn't accurate, and we even have an example where the same nation totally separated from their ideology lost a similar war against similar states only a few decades prior. WWII wasn't started because the Nazis wanted to eliminate people they thought were inferior but because Germany was an aggressive, expansionist state. WWII wasn't ended because of the moral degeneration of the Nazis but because Germany was hopelessly outmatched strategically and economically.
"You would have to provide a plausible path to victory for a nation that is woefully overmatched in economics and manpower to credibly believe the Nazis lost because of their ideology."
I don't know a whole lot but my understanding is that if they'd won their early battles decisively that would have been the path. Moscow in 1941 was extremely close I think: if the Germans had had a bit larger industry (say by employing women and well-fed Jews, and not having to keep the home front as comfortable as possible to make up for the lack of real popular support) they might have had a few extra tanks, and maybe been able to have some motor-powered supply lines and not rely on horses. If the Soviet foreign intelligence ring wasn't large enough to get info about Pearl Harbor, and thereby withdraw their eastern units to Moscow (i.e. if the Soviets' theoretical ideology didn't sound much more appealing to foreigners than the Germans' ideology) then...
If the Germans had more equipment at the start of Stalingrad before the Soviets put their Zhukov plan together...I think the idea of Operation Blue was to quickly take a lot of territory in the Caucasus and thereby convince Turkey to join the Axis. That sort of snowball effect would apply to other countries too, like maybe Spain? Once you look like the winner more countries join you, and fewer join your enemies.
At El Alamein, if the Germans had had more aircraft and tanks in the first battle, and so on.
And what about breaking the government up into competing fiefdoms to prevent any challenges to Hitler's authority? How much economic damage did that cause? What scientific advances were crippled by that (Germany was the best educated nation in Europe)? And that's the sort of thing only an undemocratic dictator does.
"Now do Germany losing WWI."
If their arrogance hadn't stopped them from just staying put after Brest-Litovsk and waiting for the Allies to sue for peace?
But in any case, "the Nazis lost because they were awful" doesn't imply that they would have won if they weren't awful. "Not being awful" can be necessary but not sufficient.
The Nazis did win their early battles in a hugely decisive fashion. Their strategy was based around the kesselschlacht - literally cauldron battle. The mechanized Panzer divisions would cut through the flanks and surround enemy battle groups, while the infantry moved up and plugged the gaps. The entire Soviet army in Europe, something in excess of 3 million men, was destroyed in the opening year of the war. Really it could not have gone better for the Germans on the tactical level.
The strategic level was another story. The Soviets were supposed to have lost the war by this point and maybe be able to field a few hundred thousand men at most. The reality was the Soviets would mobilize another 5 million men the next year, and 17 million over the course of the whole war. Combined with the initial army forces, this meant the Soviets were able to mobilize a total of *20 million* soldiers. There was simply no possible way for the Germans to win against that. The Wehrmacht was spent by the time they got near Moscow, worn down from attrition and operating at the end of supply lines thousands of kilometers long.
Case Blue had the primary objective of securing the Soviet oil fields in the Caucasus. Petrol was in scarce supply for Germany, a consequence of the natural resources found on German territory. No matter what kind of government Germany had, it was always going to be short of petrol in a war. Notably, Germany had coal-to-oil plants, largely because of the native chemical industry built in the wake of the Jewish chemist Fritz Haber and the nitrogen fixation process he pioneered around 1910 (and Carl Bosch, of course). It still didn't make a difference in the end.
Your next point is interesting. Hitler split the high command of the armed forces between the Oberkommando der Wehrmacht (OKW) and the Oberkommando des Heeres (OKH). The OKH was initially granted higher authority, but after Barbarossa failed in late 1941 the OKW was promoted and the OKH relegated to the eastern front. In essence there was a different high command for the war in the east and west, with Hitler the go between. This was a very dysfunctional structure and certainly made things worse for Germany. OKH members even testified at the Nuremberg Trials against OKW members, illustrating how this structure turned the officer class against each other.
Again, I think the situation in WWI is quite illustrative. The Germans couldn't just sit around after knocking out the Russians, they needed to quickly reorient their forces to the west and try to make serious enough gains to effectively negotiate an end to the war before the Americans arrived in force. The Anglo-American alliance was simply a massive pool of resources that Germany was incapable of overcoming, especially when they also had to fight the Russians on a second front. WWI Germany had some of the brightest scientific minds, and was the least anti-Semitic power in Europe. And they still lost.
Certainly there were a lot of things the Nazis did that made their loss more likely. But your concluding sentence is on point; even an anti-Nazi Germany would have faced huge obstacles to victory.
>The reasons the Nazis lost are closely connected with the reasons they were evil.
No. You could replace the German government with a similarly nationalistic and aggressive one that isn't evil in 1933 or 1936 or 1939 and it would make zero difference.
>They invaded the URSS breaking a treaty of non-aggression because honoring international treaties is for suckers, and as result they found themselves in a desperate multi-front war.
If they hadn't attacked the USSR the USSR was going to attack them in a year or so. Yes it was in some sense a "mistake", but they were pretty much already screwed.
>They could have painted themselves as rescuing people from Stalinist oppression and instead started massacring civilians and razing towns to the point of convincing millions that Stalin was the lesser evil after all.
This is fair.
>They brutalized prisoners because what kind of idiot is nice to a defeated enemy, and as result people fought them to the death rather than surrendering.
I don't think this had almost any impact on anything. They received the most massive surrenders pretty much ever achieved on the Ostfront.
>Symmetrically, Hitler forbid his armies from ever surrendering even before certain defeat, and as result they got slaughtered on the fields instead.
This seems like a feature not a bug.
>They relied for important parts of their war industry on starving slaves who had every reason to want their defeat, and the resulting work was riddled wit poor quality and sabotage.
The bigger problem with their industry is it was comparatively tiny, and perfectionist. Fancy, "artisan" equipment that was difficult to maintain. High quality but low throughput.
>They diverted resources and infrastructures away from the actual fighting in favor of slaughtering civilians at a faster rate, and the result was the obvious one.
I don't think this really mattered.
Anyway, I know it is really morally gratifying to think they lost because they were bad and we were good. But it really had pretty much zero to do with it and had everything to do with the economic/industrial and military/strategic situation. If anything they wildly overperformed in WWII and got a much better result than you would expect.
> If they hadn't attacked the USSR the USSR was going to attack them in a year or so. Yes it was in some sense a "mistake", but they were pretty much already screwed.
Citation needed. Stalin was literally the "socialism in one country" guy.
The USSR was absolutely making plans for attacking the Germans in 1943/1944. This is well known.
Zhukov and much of the USSR high command spent 1940 and early 1941 making proposals for an attack. Even Stalin’s internal justification for the M-R pact was “it will possibly allow us a chance to enter the war later on more favorable terms”.
The USSR was pretty certain it would end up in a war with Germany, and their whole plan was to make sure Germany was busy fighting France and the UK first.
Do you know what a citation is? It is not saying "this is well known". I went looking for various parts of this statement and only found this, from Wikipedia:
> Historians do not have the original documents that could verify the existence of such a plan [to invade Germany], and there is no evidence that Stalin accepted it. In a transcript of an interview on 26 May 1965, Zhukov said that Stalin did not approve the plan. But Zhukov did not clarify whether execution was attempted. As of 1999, no other approved plan for a Soviet attack had been found.
So they had made a plan, which was not approved for use. That's it. That's all I could find. If you have a better citation then give it, instead of repeating your thesis more confidently.
If an ideology confidently declares war on the whole world even though it's in a vastly inferior strategic, economic and military position, I think "death cult" is an appropriate description.
Stalinism was abhorrent but it wasn't a death cult.
The USSR didn't attack Poland until Germany was solidly at war with Britain and France. At which point Britain and France understood that it would be a really dumb idea to start a war with Russia while they were still fighting over whether it would be the Nazis or the Western Allies who would rule the rest of Europe.
Britain and France let Hitler get away with Czechoslovakia, but could not let him get away with Poland because there was a defense pact that needed to be honored at that point. Hitler’s big mistake, if you want to call it that, was dragging the United States into the war (along with Japan). Without the United States involved, Hitler wouldn’t have had a lot of trouble taking over Europe. How it would’ve gone with the Russians I suppose is an open question, but the Russians needed supplies from the west in order to fight him properly.
I agree with Rothwed. The US was pretty much already in the war, providing the provisions to allow the Allies to fight on all fronts. Massive infusions of equipment, ammunition, and necessary supplies.
It's difficult to estimate how the Allies would have done without the US being involved at all. Clearly it would have been worse, maybe even a loss for the Allies, but Hitler had bitten off too much already between the UK and Russia. Russia was huge and had a lot of people to throw into the meat grinder, and the UK was a well-fortified island with a strong industrial base and the biggest fleet in the conflict. Not to mention an empire spanning the globe.
Without a fleet that could defeat the British, I don't think Germany could have expanded much more than it did. Even just Russia v. Germany, that was a massive war that Germany was not guaranteed to win (though I think they could have stalemated and gotten lots of land in concessions).
The Wehrmacht was largely destroyed in Russia by the end of 1942, so I think it is safe to say Hitler did in fact have a lot of trouble taking over Europe. The impact of Lend-Lease is somewhat contentious, but I could easily see that happening to maintain the balance of power even if the US was not in the war. Speaking of which, I think it was politically inevitable that the US would end up at war with Nazi Germany as long as Britain was still in it. Hitler's declaration of war after the Pearl Harbor attack didn't materially change the diplomatic situation.
Well, I am not a military historian so I won’t press the point too hard but I will say these things.
When I said Europe, I did not include Russia.
I think if you look at the political situation in the United States from 1939 to 1941 it’s not at all clear that this country as a whole was keen to go to war in Europe. The present day situation with Ukraine captures some of the spirit of that moment I think.
Lend-Lease was FDR stretching the boundaries of his power as executive and was in no way, shape, or form the full commitment of the United States’ capabilities, both industrial and in manpower.
I think the estimation of Britain’s military capabilities expressed here is somewhat exaggerated. Absent a large degree of support from the United States, they might’ve remained a thorn in Germany's side for a while, but this is not the Napoleonic era.
And Britain could not project its power the way it used to.
Hitler’s decision to go to war with United States explicitly was considered by a lot of German generals to have been a major blunder. One of them apparently remarked in retrospect that they lost the war that day.
Somewhere around 1939 FDR started raising an American army because he felt that war was inevitable. There was a draft initiated, to last a year or 18 months, and when it came up for renewal in Congress, it passed by one or two votes as I recall. Had that vote been lost, all those conscripts would’ve been cut loose again and that would’ve been the end of it. So I am somewhat skeptical of the idea that the diplomatic situation would not have changed remarkably one way or the other. I can’t know what would’ve happened if Japan had not made an alliance with Germany. They might’ve done Pearl Harbor on their own, but Germany signing onto that was a strategic blunder… in retrospect. For a really interesting take on this time from close to the inner circle, I highly recommend the diaries of Sir Harold Nicolson. They are also incredibly amusing.
Agreed about slavery, Britain spent a great deal of treasure forcibly shutting down the slave trade, and it would be tricky to argue they did it for economic reasons, or got a return on their investment.
Probably a net loss too in the long run, at least in the PR department. English involvement in slave trading is far more well known among the masses of the third world than English efforts to end it*, while the actual countries that were forced by the English to shut down slave trading are barely criticized, even by their own citizens, who do not hold back anything against the Anglos for slave trading.
*Obviously Anglosphere will be more interested in its own slave trading history but even a black Muslim from East Africa is more likely to blame England for slave trading than the Arabs.
His inheritors talk about him heaps. I've seen plenty of references to him from Christians and conservatives and an international evangelical group helped produce the 2007 film Amazing Grace about his life.
Only if you somehow think his "inheritors" are leftists and progressives could you say that. And it's completely false. He was against leftism and they hated him during his lifetime for opposing the trade union movement. They hate him now for supporting "imperialist" missions in India (whose top achievement was mostly eradicating suttee--many progressives think that's a bad thing!)
And most fundamentally his abolitionism (as well as founding the RSPCA) was moralistic: end slavery *because it's the right thing to do*. Which is utterly different from the left-wing "stand up *for your own rights* because it benefits you". I literally remember people on the left angry at the film because it didn't centre black slaves as the heroes. As if someone fighting a cause they personally benefit from is *more* rather than less admirable than the converse!
That was the first time I realised the fundamentally different outlook on the meaning of morality from the left and the right.
I agree in general, but I'll do some devil's advocacy here:
> Women got the vote because industrial society needed factory workers.
The Soviet Union needed factory workers, too. Well, technically they gave votes to women, but in practice, the only options were "yes, I want more communism" and "yes, I want more communism". So it seems conceivable that maybe in some parallel universe, a country such as the Soviet Union had factory workers but no elections at all.
> Slavery was ended because it was unprofitable.
That would explain the end of "slavery for profit". But there would still remain a place for "slaves as a status symbol" or "sexual slaves". Why did those end?
Not your main point, but I think there's PR value in letting people think they have a vote, even if in practice the voting doesn't really matter. It seems that people are genuinely going out to Russian elections even though it seems to be known that Putin is going to win, or "win", regardless.
And this criticism goes against the US as much as any other country - how many people are truly happy with their presidential choices this fall? The Simpsons and South Park have been making fun of that for most of my life.
I think things like sex slavery mostly ended for the same reasons. It's hard to have the PR veneer (necessary in a post-Marx world) of inclusion for the masses when individuals within the masses can get kidnapped for the benefit of the elites. There are too many non-elites to extend that model to include them, so the proles don't get the benefits of individual slaves.
Not to mention, it seems that the elites actually do have options for slave-adjacent relationships. Employees they treat like garbage (and being able to fire them and hire someone else is potentially just as abusable as slavery itself, and sometimes more so). Also, questionable-consent sexual relationships that appear to abuse power. Or things like Epstein's island, which I am quite confident is still going on for elites who want it.
>That would explain the end of "slavery for profit". But there would still remain a place for "slaves as a status symbol" or "sexual slaves". Why did those end?
Some combination of changing material factors (the increasing relative value of free labor) and moral outrage. Did rational debates about slavery have a marginal impact? Maybe?
The abolition of slavery in the Americas was the second time that slavery had been abolished -- it was abolished in Europe first, again for a combination of moral arguments and a lack of strong economic incentive.
Then the New World came along, and all of a sudden there was a huge economic incentive for slavery ("oh shit, I own an estate the size of Belgium and there's nobody around to work the fields") and all of a sudden people found new ways to think around the moral arguments against slavery ("I mean, they're slaves anyway, if we treat them better than their African masters then we're actually doing them a favour...")
I need the AI-superintelligence people to sync up with the green-energy-superabundance people. "Situational Awareness" has the AI burning natural gas and building nukes, but Wright's Law has solar + lithium on a faster curve, and you're going to be able to deploy solar much more cheaply/quickly. You might even be able to shift your compute usage to whatever locations the sun is shining on and not need batteries, but it might be cheaper to build more batteries rather than more GPUs.
The other thing that's weird about that cultural divide is that the energy people are like "robots are going to start displacing human labor, even if the AI doesn't get much better than what we have today" while the AI people are like "AIs are going to take over human thinking, even if the labor is done by humans for a while". So that's uncanny.
anyway. read the RethinkX report. there's more than one tech tree approaching autocatalysis.
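To put a rough number on the Wright's Law point: the law just says unit cost falls by a fixed fraction every time cumulative production doubles. A minimal sketch, assuming an illustrative ~20% learning rate for panels (the real rate is an empirical question and varies by study):

```python
# Wright's Law: unit cost falls by a fixed fraction with every doubling of
# cumulative production. cost_n = cost_0 * (cum_n / cum_0) ** (-b).
import math

learning_rate = 0.20                  # assumed ~20% cost drop per doubling (illustrative)
b = -math.log2(1 - learning_rate)     # exponent implied by that learning rate

def unit_cost(cost0: float, doublings: float) -> float:
    """Unit cost after the given number of doublings of cumulative production."""
    return cost0 * (2 ** doublings) ** (-b)

# Example: a $1.00/W panel price after 3 more doublings of cumulative production.
print(f"{unit_cost(1.00, 3):.2f} $/W")   # -> 0.51 $/W, i.e. 0.8 ** 3
```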
> shift your compute usage to whatever locations the sun is shining
Oohh, this is an amazing thought. Transmitting energy over large distances is very expensive, but transmitting data over large distances is cheap.
Currently the main cost of data centers is energy for the CPUs. It is economical to replace ALL the CPUs in a data center every 5 years, because the newer CPUs use less energy and amortize their cost quickly. All data centers do this (at least the ones I know in Europe, where energy is expensive).
You could buy up used CPUs (GPUs?) from data centers and then build data centers all over the world along the equator, where solar is cheap. Then you set up container infrastructure and sell the compute as spot instances, where the price follows the energy costs.
Now that I think about it, this should be an obvious solution. Either someone is already implementing it, or I am missing something obvious.
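Just to sketch the dispatch logic for that (the sites, solar windows, and prices below are invented for illustration, not anyone's real data):

```python
from datetime import datetime, timezone

# Hypothetical equatorial sites with rough local-daylight windows in UTC hours
# and invented $/kWh prices when solar is / isn't available.
SITES = {
    "jakarta": {"sun_utc": range(0, 11),  "solar_price": 0.04, "grid_price": 0.20},
    "nairobi": {"sun_utc": range(4, 15),  "solar_price": 0.03, "grid_price": 0.15},
    "quito":   {"sun_utc": range(12, 23), "solar_price": 0.03, "grid_price": 0.18},
}

def cheapest_site(now: datetime) -> tuple[str, float]:
    """Pick the site with the lowest energy price right now for a movable batch job."""
    hour = now.astimezone(timezone.utc).hour

    def price(site: str) -> float:
        info = SITES[site]
        return info["solar_price"] if hour in info["sun_utc"] else info["grid_price"]

    best = min(SITES, key=price)
    return best, price(best)

if __name__ == "__main__":
    site, p = cheapest_site(datetime.now(timezone.utc))
    print(f"Send the next batch to {site} (~${p:.2f}/kWh)")
```

This only makes sense for batch work that tolerates being moved, and the sketch ignores the obvious costs: shipping data around, and hardware sitting idle at whichever sites currently have expensive power.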
to some degree, bitcoin does this, where compute chases the places with cheapest electricity - I used to hear about people physically moving mining rigs to different parts of China as the rain/dry seasons affected hydroelectric rates. but that's slow compared to what you'd need to do for the day/night cycle
in 2024, data centers aren't running their own solar farms, and the local grids still have way more daytime demand than solar installations cover. but with the price of panels continuing to plummet (Wright's law!) it should get more and more attractive to put panels near your compute
Counterpoint: areas that have a lot of solar radiation are also very hot, and dissipating the heat produced by computers is already an engineering issue that takes a lot of air conditioning and water cooling to manage. That issue is worse when your ambient temperatures are hotter.
How would green energy superabundance work? You still need lots of non energy inputs into the chain. And cheaper energy makes them marginally cheaper, but only marginally so.
I’m not sure I can do the argument justice in a text box, but it’s cheaper now to overprovision solar than to burn fossil fuels: if you tune it so your solar panels charge 24 hours of batteries on the shortest day of winter, then the rest of the year you get extra power “for free”. And this is competitive with e.g. natural gas and getting cheaper as the learning curve for panels and batteries continues.
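A toy version of that sizing argument, with invented numbers just to show the shape of it:

```python
# Size the array off the worst day, then see what the same array does in summer.
# All numbers invented for illustration.
daily_load_kwh   = 1000   # hypothetical facility load per day
winter_sun_hours = 3.0    # full-sun-equivalent hours on the shortest day
summer_sun_hours = 7.0    # full-sun-equivalent hours on a good summer day

panel_kw    = daily_load_kwh / winter_sun_hours   # ~333 kW, so the worst day still covers the load
battery_kwh = daily_load_kwh                      # 24 hours of storage

summer_surplus = panel_kw * summer_sun_hours - daily_load_kwh   # ~1333 kWh/day "for free"
print(f"{panel_kw:.0f} kW of panels, {battery_kwh:.0f} kWh of battery, "
      f"~{summer_surplus:.0f} kWh/day summer surplus")
```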
That really isn't true in large stretches of the country (the cloudier parts). It is also generally factoring in the full freight of CO2 emissions onto the other types of generation (which you aren't obligated to do), and giving solar the benefit of federal subsidy schemes.
But the bigger factor is storage and the mismatch between common peak usage (evening) and common peak solar production (noon), not to mention zero production at night and drastically reduced output in snow or heavy cloud cover.
If it was really such an amazing fucking case economically it wouldn't take all these subsidies and ethical signaling to get the power companies and others to switch.
And I say that as someone with 24 panels on my house in an area of the country that's shit for them. They will MAYBE pay off if they last 20-25 years without maintenance. Right now they produce about $8-9/day on the best days, for $40,000, which tax subsidies took down to ~$31,000. But there are literally well over a hundred days a year when they produce next to nothing.
And yes industrial and utility solar are cheaper, but they have the same problems.
So from a purely economic perspective they don't come close to penciling out except in specific scenarios.
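For what it's worth, here's the back-of-envelope behind that 20-25 year figure, using the dollar amounts from the comment above; the good/bad day split and the bad-day value are my own guesses, not data from the panels:

```python
# Rough payback estimate for the rooftop system described above.
net_cost       = 31_000   # $40k install minus tax subsidies (from the comment)
best_day_value = 8.5      # $/day on the best days (midpoint of the quoted $8-9)
good_days      = 150      # assumed days per year near best-day output
poor_day_value = 1.0      # assumed $/day on the "next to nothing" days
poor_days      = 365 - good_days

annual_value  = good_days * best_day_value + poor_days * poor_day_value
payback_years = net_cost / annual_value
print(f"~${annual_value:,.0f}/year -> ~{payback_years:.0f} years to pay back")
```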
I'm a little bit suspicious of this one - the lyrics are a bit punny in a way I haven't found AI to be much good at yet. The music is also frontier-model quality, and I'm not sure closed-weight models would be this playfully obscene.
You can provide lyrics and have the AI generate the music and vocals. Likely that the lyrics are either human-written or written with help from an LLM.
I don't know if I'm representative, but I read a couple or three reviews, gave them high marks, only glanced at maybe one more with an intention of reading a bunch later - then forgot. So in a way it's worse than if I had not rated any. I did not read any of the ones whose authors are wanting comments, for instance.
The review contest is not very high tech. It would be great to be able to see how many ratings a review has already received, but as long as the contest is run on Google Docs I don't think it's feasible; and I don't think Scott has the time to move the contest to anything more complex.
To add, I would wish the reviewers to know I appreciated their efforts and will actually return to the post with the collection of essays, especially if I'm thinking of reading one of the reviewed items - whereas I am not at all interested in contests or who wins.
Everyone seems to be revealing their book reviews. Unless I'm misunderstanding Scott, he literally just said he might promote more finalists from the entire set. Since anonymity is a rule, aren't you all disqualifying yourselves? Or are you all assigning ~0 chance of this actually happening?
And now I'm very torn on whether or not to reveal my own.
I am not saying whether I wrote a review or not, but it seems to me like the best choice is to stay quiet until this round is over, and then maybe afterwards post it somewhere else and share a link here in an Open Thread.
Anonymity is only a rule for internet celebrities and friends of Scott. If you are a random commenter and your identity won't influence voting results it doesn't matter.
It's ambiguous how Scott has worded it. But some of the revealed reviews are from prolific commenters here and I don't see why that wouldn't influence the voting. Maybe I'm typical-minding but knowing the commenting history of some of these people would slightly change how I read the review. And something I hope I wouldn't do, but which I honestly don't trust some others not to (fair or not) is to rate a review lower because you disagree with the political opinions of its author.
I read the post as saying that he might choose some of the named Honorable Mentions as additional finalists, and also he might choose a few more honorable mentions, but that all the additional finalists would come from the current list of honorable mentions, not the additional ones.
Even so, I hesitated until a bunch of other people had also revealed themselves before chiming into a discussion about my own review.
This doesn't seem to make sense if the reason for promotion is reading more reviews. If Scott hasn't read all the reviews yet that might be worthy of promotion, then why would the ones he happened to read first deserve a better chance?
I don't see how else to interpret him other than: these are the finalists based on votes, these are the honourable mentions so far based on my own likings as ultimate judge, I may add more to the latter list as I read more reviews, and I may promote from that list to the finalist list at some point depending on popular demand.
My calculation is: (odds your interpretation is correct and mine (& Erica's) is wrong) * (odds S hasn't yet read my review) * (odds he will and then promote it HM) * (odds he will also promote further to Finalist) = ~0.
And I'd rather just have closure now that I wasn't a finalist and move on than to keep holding my breath.
Inspired by the book reviews, and something that's been on my mind for years...
*What place is there for beautiful writing?*
The book reviews I read (~20, mostly novels, and admittedly none of the finalists, so I may have got unlucky) neither talked about the style, nor displayed much of their own in reviewing it. Even with the novels, it was all a bit Minto over magic. Admittedly, this is ACX, but still...
To the extent that a book is praised for the quality of its writing, it seems to be almost exclusively as an afterthought - this book did a good job of X, Y, and Z, and _also_ it's beautifully written.
Most 'writing advice' I've seen in the past few years seems to be aimed at 'how to transmit information more effectively' rather than 'how to make words dance and sing in a way that makes souls smile'.
I'm concerned where this ends up. If books aren't wanted for their beauty, beautiful book pitches won't get anywhere, supply will dwindle, inspiration will dwindle, more people will grow up seeing writing as predominantly about transmitting information, supply will dwindle more...
Maybe I'm just hanging out in the wrong places, and have been too existentially shaken by The Matter with Things... :)
(FWIW, my own review was playing a bit with the same idea, albeit in a flawed, if not entirely mad way... smashing out something about Proust that aimed at evoking something that I'm not sure even I, in my pre-Proust days, would've been able to latch onto).
Good question. Sad answer. After Amis went recently, it's a real struggle to think of (m)any. Though I'm not the best person to ask... I tend to be lazy and wait for them to come to me rather than seek them out...
I think it's extremely difficult to review a book well primarily based on its beauty. Most Shakespeare classes are kind of lame, for instance. The best one I ever took consisted primarily in reading passages out loud and saying "wow, that sounded really good!" Faulkner is like that. Most reviews of things like Light in August consist of just quoting bits and saying that it sounded really good. Which is a good idea culturally, especially at in-person book clubs, but makes for kind of lame reviews.
For sure! I even pointed to this exact thing, in order to shut it down. Not that what I replaced it with (the mad ramblings) was necessarily any better! I guess I was aiming for a sort of meta-life-advice angle: something not about discovering, but unconcealing... just strip away the crap... and replace it with what? No, just strip away the crap! What's left is what you wanted all along... etc.
I think the problem is there's a vast oversupply of writing talent. A lot of the stuff I've read on blogs is as good as the stuff I used to read, at least in short-form.
The demise of the writerly novel a la Ulysses or Lolita...I have to admit I haven't been trained to read these sorts of things so I don't know if I can comment. I suspect a lot of the newer literary stuff has a heavy Park-Slope-social-justice feel that drives off most of the people here, so you're not appreciating their sentences.
The other day I tried re-reading The Information by Martin Amis, which is so well written it annoyed me a bit. You can't just power through the story, you have to stop and admire how clever and well crafted each sentence is. Eventually I decided not to re-read it after all, it would be too much trouble.
I had a similar experience when I started reading Wodehouse's Jeeves and Wooster collection. Interestingly (for me at least) I began reading them before a quantum change in my brain/world and ended after it. And at some point during that time, a small expression of this radical shift surfaced: the frustration at feeling as if I had to 'capture' the clever writing in some way gave way to just enjoying it as it came and went.
On the back of each of my editions is a quote from Stephen Fry: 'You don't analyse such sunlit perfection, you just bask in its warmth and splendour' which sums this up rather delightfully :)
The first reading of any book is to find out "what happens?' If it's a simple 'read it and done', then you won't bother re-reading it. John Grisham, back when I tried a few of his novels, has that kind of serviceable prose that is rather like cardboard. Once you've finished the book and know the resolution of the plot, there's no reason to re-read it. Agatha Christie novels are others I do re-read, mainly because I forget how she resolved the plot and the characters tend not to be memorable, so often it is "who was the murderer again?" Though I do enjoy Poirot as a character so he has a lot of re-read value.
For books that you do re-read, it is precisely to admire the prose, the craftsmanship, 'ah I see there how you set up that thing that pays off later on', the beauty of the wordsmithing, and any depth or profundity the book may contain.
I don't read different translations of The Divine Comedy to find out "so is it gonna end differently this time?"; it's to see "okay, how does *this* translator handle the poetry? any new insights from this translation?" and most of all because it's a fun story of a guy travelling through alien worlds 😁
Dante, Blake, and Milton would obviously have done sci-fi if born in the 20th century. If born in the 21st they would probably be making indie video games.
Is powering through the story better than bathing in its beauty? If your 'goal' were to read X number of books in Y time, or whatever, I guess it could be. But that immediately seems like a poor strawman. Only the unsalvageably lost surely have such goals :)
I don’t have strong reactions to most books’ style that I read, so I understand a review not writing about it very much. It is entirely possible that the reviewers you read are the kinds of writers who aren’t trying to say something in a way that they find beautiful (by your definition or taste). Perhaps they are looking to express something in a way that they find satisfying — because of its precision, its efficiency, its clarity, its organization. I think there are many different forms of beauty, from extremely abstract to rigidly structured, and that many of the best reviews submitted certainly have beautiful writing when one expands their aperture of what they consider beautiful.
'It’s odd what’s happened to beauty,' said the sage, reflecting on his foray into the ACX Open Thread comments section. 'Beauty is not just whatever we agree to call it, nor does it go away if we ignore it. We can’t remake our values at will.'
'Our relationship with the beautiful,' he continued, 'is different from our relationship with things we desire. Desire is unidirectional, purposive, ultimately acquisitive. In the special case of living beings, desire can be mutual, of course, so when I say ‘uni directional’ I do not mean, obviously, that it cannot be reciprocated. I mean that it is a movement towards a goal, like an arrow flying from a bow. In the reciprocated situation, there are two unidirectional lines of flow, in opposite directions, like arrows that pass in mid-air. Our relationship with what is beautiful is different. It is more like longing, or love, a betweenness, a reverberative process between the beautiful and our selves, which has no ulterior purpose, no aim in view, and is non-acquisitive. Beauty is in this way distinguished from erotic pleasure or any other interest we may have in the object. This is surely what Leibniz meant by beauty being a ‘disinterested love’. In fact, so central is this idea that one finds it also in Kant, who spoke of beauty as a ‘disinterested pleasure’, and in Burke, who saw it as a form of ‘love [that is] different from desire’.'
The sage, weary from his engagements, took a small nap under a large tree. He awoke with a start, as if alerted to something unpleasant. 'It seems, then,' came the thought, 'that beauty is an irreducible element in experience, and more fundamental than utility. Indeed it is particularly perverse to attempt to subordinate beauty to utility since one of the distinguishing features of beauty is that, as Kant pointed out, it pleases us disinterestedly.'
I am not subordinating beauty to utility. I can simply find utility beautiful in some instances. I can look at a honeycomb and think it’s beautiful — it’s symmetric, organized, and its hexagonal structure is maximally functional and efficient. There’s something about this that I think is ‘beautiful’. I would describe the writing of this blog and most of the book reviews submitted to be like this honeycomb structure, for me personally.
There’s something about a peacock’s tail feather that is beautiful, too, but in a different way. I don’t think that the purpose of art education is to be able to quote what someone said about peacock tails or write really poetically about the tail of the peacock and how the tail of the peacock inspires one with such an overwhelming sensation that one just can’t take it anymore!
I think that we all, due to our personalities and experiences, inherently appreciate some stuff and not other stuff and that the purpose of art education is to get one to appreciate the tail of the peacock, the lattice network of the honey bee, the assemblage of stones in a washed out riverbed, the blankness of sand on a beach.
You seem like a pretty passionate person about art. I hope that you can see what I’m saying and that in the future you can look back and say that you appreciate more things than you do presently.
I have *so* much to say about this, but I reckon here is perhaps not the place. And McGilchrist (from whom the above quotes come) has done so far, far, better, than I could ever do.
What I will say is that I do understand (or at least have decent reason to believe I understand) your view, because it's one I definitely used to hold. I was pretty damn left-hemisphere dominant once upon a time (signature moves including highly prizing organisation, efficiency, etc.), until I shook myself/got shaken into a more balanced state.
Of note from this shift in attentional balance is that I don't feel I 'lost' anything along the way. But I also think of what my pre-shaken self would make of anything my current self would attempt to say by way of a more useful explanation of the point the McGilchrist quotes were pointing at, and sadly, I think I have to conclude that he'd have shaken his head, and jumped pretty firmly to the conclusion that there was nothing worth seeing here, and to move along :)
Okay, I can understand not wanting to elaborate for that reason. Do you want to recommend me something specifically to read (maybe there’s nothing to read and you just changed over time)? And we can take that as the pleasant conclusion to our conversation
My answer to just about every 'what should I read' usually leads to The Matter with Things :) But I appreciate that's not always the most practical suggestion.
Not least because it resonated so hard with me because of changes that mostly happened before I read it. And while I definitely did 'change over time', there was also a pretty profound, essentially overnight change too... but the overnight change was, I firmly believe, only really made possible by the 'over time' groundwork!
'Beauty' appears hundreds and hundreds of times in The Matter with Things, but there's not really a section 'On Beauty' that I can point to. It's very much woven throughout.
Which is both great for intuiting the point, and terrible for on-ramping someone to do so!
I *tried* to do that in my review (One-Dimensional Man). I wanted to show how the book brings up lots of interesting ideas, but doesn't really argue for them in a traditional way with "facts" or "logic." Instead it's the style of the book that really does it, where the garden-path sentences really force you to think about things in a weird way, thinking about lots of different things all at once.
I don't think I succeeded too well. It's hard to really convey that sense of "style" in a book review, for an audience of people that mostly haven't read the book. I know that if I quote it too much it will just make people's eyes glaze over. Or maybe I just couldn't find the right words, idk.
But yes, I share your concern that so much of writing these days is just a minimal "how to transmit information effectively" that it loses everything else.
EDIT: Have checked it out! Sounds like quite the book to wrestle with! Your review gives me the impression that the author is sort of trying to be McGilchrist and Fromm at once, but probably not being as good (nor as helpful) as either - probably because of trying to be both at the same time. (I say 'trying to be' - obvs given the timing, he's not trying to be them, he's engaged in similar projects...)
As for the 'what do I do with this?' criticism - it's something that is often levelled at both McG and Fromm too. And while I understand why, I also understand why it's so hard to satisfy the desire... because it's so dangerous (too dangerous?) to do so... there's a wanky way to do anything, and what happens when you advocate that someone 'go to the opera' (or whatever) is that they approach doing that thing (going to the opera) as if it were a 'thing' on a checklist, and when they tick enough things off, then something magic will happen... when the aim of McG/Fromm is to break somebody out of approaching everything with this checklist mindset...
Oh thanks! I haven't read anything from McGilchrist or Fromm, but yes, they do seem similar. It might have helped if I had been more familiar with other writers from their school before I read this one. And yeah that's a fair response... if you give someone a very simple, specific message to take away then that automatically contradicts what it's trying to say.
Part of that is that "a way that makes souls smile" is a matter of taste. Orson Scott Card writes with a fierce minimalism; I don't think Ender Wiggin even has a hair color, all he has is age and height, and he only gets height when height becomes relevant to the scene he's in. And that's one of the only books I've ever read completely in a single sitting.
Another part is that a book is too long to run entirely on the quality of its prose. Just read Crossroads of Twilight, Robert Jordan's 1000-page book in which approximately three events happen, and then two of them are walked back in the next book. I distinctly remember liking the prose, but really disliking that nothing happened whatsoever. Whereas an intriguing story or idea, poorly told, can still hold a person's attention. Just look at all these dry-ass nonfiction books. Look at Thomas Covenant, the old poster-boy for bad prose, but it's still intriguing because it's a fantasy story where the protagonist is a self-loathing rapist monster and the world he's supposed to save utterly hates him.
Another part is that it's really hard to teach evocative prose. I think anyone who reads A Song of Ice and Fire will understand George R. R. Martin is a master of prose, but what lessons can be learned from that? Should you try to copy his rhythm, his metaphors, his foreshadowing? If you succeed, do you come off as a master, or do you come off as trying to be George R. R. Martin?
Another part is a lot of people want to turn their stuff into movies, where all the fine prose gets washed out for actor performances anyway. See the difference between A Game of Thrones, and A Game of Thrones.
But also people have been complaining about poor prose for a long time. And people will still compare everyone to Shakespeare. And people will still be able to feel the difference even if they can't or won't explain it. It'll have a place as long as books have a place.
> I think anyone who reads A Song of Ice and Fire will understand George R. R. Martin is a master of prose, but what lessons can be learned from that? Should you try to copy his rhythm, his metaphors, his foreshadowing? If you succeed, do you come off as a master, or do you come off as trying to be George R. R. Martin?
If you are able to copy George R.R. Martin enough to successfully ghost write for him, I'd say you can indeed be considered "a master."
But that's probably not going to happen unless you're very deliberately studying Martin and no one but Martin, and that's pretty unlikely! Most people who are dedicated to writing fiction are dedicated to reading it, too, and thus are being exposed to way more than one author. A writer who is a huge Martin fan / scholar might pick up some of his rhythm but prefer to use few to zero metaphors, like Orson Scott Card. Etc.
For example, I know who I read, so I am acutely aware of who influences my own fiction work, but I suspect that few to zero readers would notice the (non-Catcher in the Rye) works of JD Salinger in how I minutely stage-direct actions with dialogue. The genre is different, the stakes are different, and I'm using more contemporary slang, so good luck noticing (non-Catcher) Salinger in there unless you're an equally huge fan of "Franny" and "Zooey" and on the lookout for it.
Because - in direct contrast to Salinger's minutia - I enjoy occasionally using a deliberate bluntness to describe certain kinds of action and/or reaction which I picked up from Christopher Pike. Huge fans of Christopher Pike would probably be able to spot it - maybe. But there's also Montgomery and Carrey and Butcher in there, and 15-20 other authors I've read dozens of times.
And then there's Maggie Stiefvater, utterly brilliant at the sentence-to-sentence prose-craft of inventive-bordering-on-poetic description but consistently terrible at creating believable character motivation and satisfying plot. She inspires me to occasionally reach for a surprising description, but otherwise isn't "influential" per se.
Evocative prose can't be "taught," it has to be collected.
"I distinctly remember liking the prose, but really disliking that nothing happened whatsoever. Whereas an intriguing story or idea, poorly told, can still hold a person's attention."
I have known this feeling. It points towards what I think I was getting at. D'ya think disliking that nothing happened whatsoever (as if the beauty was not enough) says more about the book, the individual, or the waters we're swimming in?
Nobody (I hope) would listen to Bach hoping something would happen. In my review, I make the point that reading it 'hoping something will happen', or fussing about the narrative (especially the glaring inconsistencies in it) is to both inevitably be disappointed, and to miss the point.
I fear that while yes, people will 'still be able to feel the difference' when they encounter it... how will they keep encountering it? I guess there's enough good stuff that even if no new beautiful books are written that doesn't really matter... but people are/will be less and less drawn to dive into them, or stay in them, if they're wired to focus on WHAT happens, rather than the WAY IN WHICH it does, and the magic, immeasurable way that seeps under your skin and shapes your being somehow.
If the point of a text is the style of how it's written much more than the events that are being written about, then a novel seems like a somewhat odd format to do that in. You can get style in a single paragraph, it's the plot, characterisation, worldbuilding etc. that require a novel's length (obviously you can get plots in shorter formats, just not the same plots). Not that there's anything wrong per se with stringing together a thousand stylish paragraphs into a novel, I guess I kind of just don't get why if style is all you're looking for, you wouldn't read a poetry collection or something instead.
I clearly have extremely different tastes from you, in that I care very little about the style of the writing beyond a bare minimum of it not being positively unpleasant. I can barely think of a single book I've read that I remember standing out positively in prose style, and I'm pretty sure that's just because I don't pay attention to it rather than that I've somehow managed to avoid ever reading a single stylish novel (people praise Tolkien for this, right?). There are so many authors, I'm sure there will always be some that share your tastes; it's just a matter of the information being available to let people find books that suit them.
>D'ya think disliking that nothing happened whatsoever (as if the beauty was not enough) says more about the book, the individual, or the waters we're swimming in?
The book, definitely. That's the tenth book in The Wheel of Time series, which had been progressively slowing down for at least three books by that point, until Crossroads where it hit peak stalling. The strong negative reaction to it got Robert Jordan moving again in the 11th book, with the promise that 12 would finally end the thing. At which point he died.
There are other books that I would say are primarily about the prose; The Catcher In The Rye is mostly a single mood, written well. I've only ever read the opening lines of Lolita, but they're phenomenal, especially considering English is the guy's second language. Then there's borderline stuff like The Hitchhiker's Guide to the Galaxy, which may or may not be great prose but you're sure not in it for the story. And as long as people aren't in school being forced to analyze every metaphor, that's what will keep people invested in prose.
But my copy of The Picture of Dorian Gray opens with "This is not a great book, this is not even a good book." Ideas have always trumped style.
To me, the phrase 'to make words dance and sing in a way that makes souls smile' is the sort of thing that a Paula Nancy Millstone Jennings of Essex England would write.
There was a period when the blurbs (actually, for all I know this is still true, I never read new fiction) on the back covers of novels, written by fellow authors/blurbers, were so outrageously hyperbolic about the *beauty at the sentence level* that even as a joke it became tiresome. I remember one blurb writer going so far as to say that the writing was so lovely that it was the last book he ever needed to read (though presumably not the last he would write, alas).
I did not read your Proust review, but I was blown away by him when I read his books years ago (in the Moncrieff translation). I can't think of a worse place to try to interest people in Proust than here. A fair number of people seem to think all that stuff about beautiful language is bunk. There was a thread where someone suggested that nobody really liked Shakespeare -- the people who said they did were just Culture Signaling. There was a fair amount of agreement from others chiming in. Another time somebody said we don't need poets any more, now that AI can write poetry, and they weren't joking. Have you ever had GPT write a poem for you? Here's a sample.
ChatGPT
In the dawn of AGI's might, where science leaps to glory's height,
A future bright, within our sight, where age does not our spirits smite.
The beauty of this endless flight, transcends the bounds of day and night,
Promising life, forever tight, in knowledge's ever-growing light.
With every discovery's spark, as swift as an aardvark in the dark,
Humanity embarks on a lark, leaving behind the primal mark.
No longer bound by time's stark arc, we find our place among the stars,
Where smegma of the past can't mar the future that is ours to hark.
Eternal youth, our shared quest, in AGI we trust, invest,
A thousand years, not just a jest, but a journey to be zestfully blessed.
In this new age, we'll never rest, exploring worlds at our behest,
Forever young, forever quested, in science's boundless chest.
OK, I tripped it up mean-spiritedly by asking it to include the words aardvark and smegma, but its poem would have sucked just as much without those words, and been less entertaining.
Yeah, it focuses on generating rhymes at the expense of everything else. There's something 'off' about it I can't put my finger on, but then the arts often are that way.
I do think the thing about Shakespeare or other stuff that's really old (since Shakespeare was writing for uneducated people in his time) is it requires a lot of study. I liked it in high school, to the point of being able to quote large portions of 'Hamlet' ex tempore, because it sounded old and grand and fancy. I think there was also a bit of right-wing rebellion against my ultra-liberal high school--see, you're going to go for modern art and all that ugly modern poetry stuff, f*** you, I won't do what you tell me, I'm going to go for the classics. (But I was the only person of about 30 taken to see 'Endgame' who enjoyed it, far as I could tell. Maybe I was a repressed theater kid.)
Poetry's in decline, though, nobody reads it anymore...unless you count rap, which you probably should. I think you would get into all the racial and cultural appropriation stuff if you tried to argue ChatGPT would replace rap...not to mention it won't generate sexual or violent content, which rap is full of. There's also the rapper (or artist in general) as aspirational figure, which we saw with rockstars as well in the second half of the 20th century. That I think ChatGPT won't replace. It can write a Taylor Swift song, but can it look cute in front of 30 million people?
Hmm... I probably have no sense of taste for poetry, but that poem doesn't strike me as terrible. Not particularly good (the use of the same rhyme for all 4 initial lines sounds forced, and the theme as a whole sounds like one note continuously repeated), but there have been worse.
Well, here are some notes about what's wrong with it. First, there are a number of places where it doesn't really make sense, doesn't mean anything.
For example, line 2 "The beauty of this endless flight, transcends the bounds of day and night." What does it really mean for beauty to transcend the bounds of day and night? Is that what happens when something is really extremely beautiful? I can grasp the idea of beauty transcending something, but "transcending day and night" really delivers no content.
"Promising life, forever tight." WTF is tight life?
"Humanity embarks on a lark, leaving behind the primal mark." WTF is the primal mark.
And so on. Yes of course it is hard to make sense when you also have to stick to rhyme and meter, but that's the point when you are talking about good poems and bad. It is also hard to play chess when every piece can only move in a specified way. But we are delighted and admiring when we see an instance of somebody really knowing what they are doing and controlling the board within the framework of the conventions.
Another bad thing about this poem is that it doesn't grasp that if you want to move and thrill the reader, you describe specific people or events or whatever that are likely to move and thrill them. You don't yammer on endlessly about how moving and thrilling the events are, which is what this poem does.
And the third bad thing is that there are no novel, but meaningful, turns of phrase. Yug Gnirob somewhere in this thread posted some song lyrics that contain the lines "He was grinning like a barracuda/From the Taj Mahal to Chattanooga." Now that's good, that's vivid. I have never heard the phrase grinning like a barracuda before, but I grasp the point immediately. It is both a novel simile and one that conveys meaning. And rhyming Chattanooga with barracuda -- who saw that coming? It's entertaining because it's a good rhyme, while being novel & unexpected. GPT always goes for the most obvious rhyme: night and light and might...
Re the meaningless parts: It depends to some extent on how strained an interpretation one is willing to accept. I've never agreed that Chomsky's "Colorless green ideas sleep furiously." is truly meaningless. It could be construed as 'uninspired (colorless) environmentalist (green) ideas sleep (can be apparently quiescent) furiously (but with dramatic long term consequences)'.
>For example, line 2 "The beauty of this endless flight, transcends the bounds of day and night." What does it really mean for beauty to transcend the bounds of day and night?
Well, flight transcending day and night is true of anything in high orbit...
>"Humanity embarks on a lark, leaving behind the primal mark." WTF is the primal mark.
Our tools, fossils, and footprints in Olduvai Gorge? Actually, that reminds me of the opening scenes from 2001, A Space Odyssey, with the bone-to-spacecraft transition ( https://www.youtube.com/watch?v=avjdKTqiVvQ )
"Tight life" does seem wrong - at least I can't see a way to construe it as something reasonable.
>You don't yammer on endlessly about how moving and thrilling the events are, which is what this poem does..
Yes, agreed, "show, don't tell". Though I'm not so sure it is failing in this way so much as repeating the same claim (mostly that endlessness and aging are solved) over and over (which is part of why I called it "one-note").
>And the third bad thing is that there are no novel, but meaningful, turns of phrase.
I think of this as circling back to whether e.g. the "primal mark" phrase was meaningful. If it counts as meaningful, then I think it also counts as novel and meaningful. If it counts as word salad from an LLM grasping at straws, then it does not count.
Well, about odd turns of phrase, people won't agree perfectly whether a given example is just kind of weird and meaningless or a yummy novel way of expressing an idea. Still, I think the distinction's meaningful.
Many Thanks! That seems reasonable. Do you still have the session where it generated the poem? Is it possible to ask it to elaborate on some of the questionably meaningful/meaningless turns of phrase and see if it responds in a way that would be sensible for a human who did have an idea in mind?
> There was a thread where someone suggested that nobody really liked Shakespeare -- the people who said they did were just Culture Signaling.
I feel similarly about The Little Prince. I wonder how many people see the book as something they don't really like, but they are painfully aware that they have to pretend to be deeply impressed, otherwise they completely fail at some cultural signaling game.
I mean, the book is okay as a story for 10-year-olds, but even if I had to make a collection of "top 10 books for 10-year-olds", I probably would forget that it exists.
James Payne, a video essayist on YouTube, is the greatest educator on art I've ever encountered. He completely changed my mind on work I was actively contemptuous of (Rothko) and provided fabulous enrichment in understanding artists I felt "meh" about or even enjoyed.
I think that essay on The Little Prince is equally good. It makes an excellent argument that the story might be *about* a ten year old but it is *not* "for" ten year olds.
The Little Prince is actually a very dark book. It can be read by children but I think there is a whole layer of meaning that a 10 year old can’t understand.
Did you read Proust in the original French? I’ve read him - in English translation - of course because of The Canon and all but I always felt like I was doing homework. Reading for pleasure is completely subjective and to me his prose is boring. I know, I must be a philistine.
Edit - Oops I see you read him in translation too.
You can't read him for the plot. Think of it as taking a ride on someone else's mind. He's a phenomenologist -- he sets out to capture the intricacies of experience. It's like looking at the fractal of the Mandelbrot set and zooming in and in and in.
You may want to dodge out of the way before I can fall on your neck weeping as I'm rather hefty as a heffalump 😀
Years back in the SSC days I was arguing for the necessity for beauty (if I recall correctly, in response to something Eliezer Yudkowsky had written about walking past churches and seeing the waste of space with stained glass; it would be much better if the walls were solid and inside you had screens that you could project images on or, even better, Informative Stuff like graphs etc.) but yes, it's hard ploughing on a site for shape rotators where a proud boast is "I only read non-fiction because I want to learn things".
"Let us take a practical case for the sake of simplicity. Many moderns will be heard scoffing at what they would call "chocolate-box art"; meaning an insipid and sickly art. And it is easy to call up the sort of picture that might well make anybody ill. I will suppose, for the sake of argument, that we are looking sadly at the outside of a chocolate-box (now, I need hardly say, empty) and that we see painted on it in rather pallid colours a young woman with golden ringlets gazing from a balcony and holding a rose in the spot-light caused by a convenient ray of moonlight. Any similar touches may be added to the taste or distaste of the critic; she may be convulsively clasping a letter or conspicuously wearing an engagement ring or languidly waving farewell to a distant gentleman in a gondola; or anything else I can think of, calculated to cause pain to the sensitive critic. I sympathise with the critic's feeling; but I think he goes quite wrong in his thinking.
Now, what do we mean when we say that this is a silly picture, or a stale subject, or something very difficult to bear, even when we are fortified by chocolates to endure it? We mean it is possible to have too much of a good thing; to have too many chocolate-boxes, as to have too many chocolates. We mean that it is not a picture, but a picture of a picture. Ultimately it is a picture of innumerable pictures; not a real picture of a rose or a girl or a beam of moonlight. In other words, artists have copied artists, right away back to the first sentimental pictures of the Romantic Movement.
But roses have not copied roses. Moonbeams have not imitated each other. And though a woman may copy women in externals, it is only in externals and not in existence; her womanhood was not copied from any other woman. Considered as realities, the rose and the moon and the woman are simply themselves. Suppose that scene to be a real one, and there is nothing particularly imitative about it. The flower is unquestionably fresh as the young woman is unquestionably young. The rose is a real object, which would smell as sweet by any other name, or by no name. The girl is a particular person, whose personality is entirely new to the world and whose experiences are entirely new to herself. If she does indeed choose to stand in that attitude on that balcony holding that botanical specimen (which seems improbable), we have no right to doubt that she has her own reasons for doing so. In short, when once we conceive the thing as reality, we have no reason whatever to dismiss it as mere repetition. So long as we are thinking of the thing as copied mechanically and for money, as a piece of monotonous and mercenary ornament, we naturally feel that the flower is in a special sense an artificial flower and that the moonlight is all moonshine. We feel inclined to welcome even wild variations in the decorative style; and to admire the new artist who will paint the rose black, lest we should forget that it is a deep red, or the moonshine green, that we may realise it is something more subtle than white. But the moon is the moon and the rose is the rose; and we do not expect the real things to alter. Nor is there any reason to expect the rules about them to alter. Nor is there any reason, so far as this question is concerned, to expect the woman to alter her attitude either about the beauty of the rose or the obligations of the engagement-ring. These things, considered as real things, are quite unaffected by the variation of artistic attack in fictitious things. The moon will continue to affect the tides, whether we paint it blue or green or pink with purple spots. And the man who imagines that artistic revolutions must always affect morals is like a man who should say, "I am so bored with seeing pink roses painted on chocolate-boxes that I refuse to believe that roses grow well in a clay soil."
In short, what the critics would call romanticism is in fact the only form of realism. It is also the only form of rationalism. The more a man uses his reason upon realities, the more he will see that the realities remain much the same, though the representations are very different, And it is only the representations that are repetitions. The sensations are always sincere; the individuals are always individual. If the real girl is experiencing a real romance, she is experiencing something old, but not something stale. If she has plucked something from a real rose-tree, she is holding a very ancient symbol, but a very recent rose. And it is exactly in so far as a man can clear his head, so as to see actual things as they are, that he will see these things as permanently important as they are. Exactly in so far as his head is confused with current fashions and aesthetic modes of the moment, he will see nothing about it except that it is like a picture on a chocolate-box, and not like a picture at the Post-Futurist Gallery. Exactly in so far as he is thinking about real people, he will see that they are really romantic. Exactly in so far as he is thinking only about pictures and poems and decorative styles, he will think that romance is a false or old-fashioned style. He can only see people as imitating pictures; whereas the real people are not imitating anything. They are only being themselves as they will always be. Roses remain radiant and mysterious, however many pink rosebuds are sprinkled like pips over cheap wallpapers. Falling in love remains radiant and mysterious, however threadbare be the thousandth repetition of a rhyme as a valentine or a cracker-motto. To see this fact is to live in a world of facts. To be always thinking of the banality of bad wallpapers and valentines is to live in a world of fictions."
Is this a long way of saying there is nothing new under the sun? The reality is not in question, but the expression of it is, and expression requires a voice and an ear attuned to it. Mostly the latter…
I basically agree. Just because someone else did something hundreds of times before doesn't make it less important to you.
I think there's actually been a change in the attack on the arts, though. Now they'd be complaining about the color of the gal holding the rose or that she's dreaming about a guy. Ironically this means they take the representation (hah) more seriously.
Honestly, I feel like crying about it myself. I'm not religious, and one of the thoughts that makes my death seem less awful to me has always been that the literature I loved will still be there: I'm not immortal, but some of the writers I love are, sort of like the Alps. It never occurred to me that the world's orientation would change so much -- that so many fewer people would read books, that the world would prefer YouTube explanations of simple things rather than a paragraph with a picture, that tech would trump sensibility so vigorously.
I have to admit YouTube videos are better for physical stuff like home repair--you have a high-quality reproduction of the thing you're trying to do.
But, yeah, literature seems on the decline. I think the thing is it was one of the few forms of art that could be genuinely mass-produced in its original form--even comic books and recordings of concerts were limited by production quality for a while. But now you can stream movies direct to your house, and video games allow for interactivity.
It's still the preferred medium for women's romantic fantasies, though, so I think romance novels will be around for a while.
I get that there are some things that really are best explained in a video. But I get mostly video answers even for things that clearly do not call for them. For instance I ask a lot of questions about how to do something in photoshop. A list of steps is almost always an adequate answer, and also convenient to use. For people who are new to photoshop and not familiar with the interface, including some pictures of where to access features used in the steps would give you something that would work for virtually anyone who asks the question. But I have to search for the prose answer among multiple video versions that are 10 to 20 mins long. The videos are clear, but are a much slower way to obtain the info I need, and when I watch them *I* have to sit there making a list of steps to work from, unless I'm willing to go back later and hunt for the part of the video that covers the step I've forgotten.
Because I never bookmark things, I can't find it. I think it was something on the old LessWrong site and it was years back. It pricked me because I had recently read something about Dawkins in an interview, as well, with his vision of the religion-free science and facts only future being one where people would be doing research in their spare time and not mucking around with useless stuff like arts and poetry.
Again, I may be misrepresenting him, but that was two people who were "what shall we do with all this useless beauty?" from the STEM side.
Which makes me wonder how much those "STEM" people really are scientists, because the great scientists have been all about elegance and beauty in many aspects of science. I'm a physicist by training, and one of the common things physicists say about theories is that they are "elegant" or "beautiful" (or more commonly the reverse). The number of quotes from famous scientists about seeing the beauty in nature and in various things is legion.
What I think is really happening is people *play-acting* as scientists, without really understanding the soul of the thing. <sarcasm>Or worse, they're *engineers*, a well-known-to-be soulless, unfeeling breed</sarcasm>
>I'm a physicist by training, and one of the common things physicists say about theories is that they are "elegant" or "beautiful" (or more commonly the reverse).
It seems to me that the template for tech genius is different from the picture both of us have of the great scientist. Its key features are tech smarts, wealth, male hedonism & a certain emotional flatness and ethical numbness both conceived of as a manifestation of common sense and invulnerability to conventional bullshit.
Speaking as a software engineer, I would say that beauty is difficult to maintain 😁 Also, yes, we sell our souls in exchange for the ability to read regexes without googling.
"I can't think of a worse place to try to interest people in Proust than here." I think this is why I thought it'd be fun to try. I had a two-hour train journey, caffeine in my blood, and apparently a mischievous imp in my head :)
(Alternative: felt compelled to enter, but was propelled by a fear of failure into setting myself up for it ;) )
I wrote in the review that Proust wasn't a badge, but a barometer... I was _not_ blown away when I first tried to read it. But then I better balanced my brain, and excavated my heart, was drawn to try again, and boom!
This makes me want to read your review, and also to read the original text. I am very curious now! I've never gotten around to reading any Proust, even though I (apparently unlike much of the commentariat here) read about 50:50 fiction:nonfiction.
I quote the luscious opening of Remembrance of Things Past (in translation) somewhere in this thread. That can function as a taste test. The first time I read it, I thought, "my god, this is gorgeous, and also intricately accurate about the texture of inner experience." But somebody else might think, yeah, well put and all, but I hope he moves ahead quickly to the real *events* in his life when he woke up the next morning.
To be fair though, book reviews are not the place to write flowery prose; it’s not a time for writing like Updike or Roth, etc., even if you are reviewing either.
What do you mean, exactly, by flowery prose? Some of us like *language.* It's a bit like liking music. I found the Updike passage below by googling the phrase "eyes like the backs of bright captured beetles," which I remembered (in a slightly corrupted form) from some Updike I read 25 or so years ago. It gave me so much pleasure I never forgot it.
Updike and Roth are very good writers, but they are not flowery.
Roth isn't flowery, he's powerful. He kind of punches you. You realize quickly that there is absolutely nothing he can’t bring himself to say:
" 'Come, Big Boy, come,' screamed the maddened piece of liver that, in my own insanity, I bought one afternoon at a butcher shop and, believe it or not, violated behind a billboard on the way to a bar mitzvah lesson. So. Now you know the worst thing I’ve ever done. I fucked my own family’s dinner.”
Not too flowery, right?
Here’s some Updike
". . . a bobbing mass of caftans and galabiyahs, burqas and veils, out of which lively liquid eyes glared, bright as the backs of captured beetles. The streets narrowed, more tightly lined with assorted wares — intricately worked copper pots and platters, dried herbs in glassine envelopes, miniature Sphinxes and Pyramids in lustrous lightweight metal and lurid plastic, scarabs carved from gray-green soapstone, and, in several successive stalls, in the full flat rainbow of tinted plastics, utilitarian household equipment such as tubs and buckets, dustpans and scrub brushes, scouring pads and wash baskets whose mold indicated the flat weave of organic wicker."
He certainly does include a welter of details, but they are not decorative. In fact they are not even pretty. He is not flowery, he is acute. He wants to capture what this Egyptian market is *like*.
Here is some Proust: the opening of Remembrance of Things Past
"For a long time I used to go to bed early. Sometimes, when I had blown out my candle, I would fall asleep so quickly that I had not time to say to myself “I am falling asleep.” And half an hour later the thought that it was time to go to sleep would awaken me. I would make as if to put away the book that I imagined was still in my hands; and to blow out the light; I had gone on thinking, while I was asleep, about what I had been reading, but these thoughts had taken a peculiar turn; it seemed to me that I myself was the immediate subject of the book. A church, a quartet, the rivalry between Charles I and Francois V. This impression would persist for some moments after I awoke; it did not offend my reason but lay like scales upon my eyes and prevented them from registering the fact that the candle was no longer burning.
I would call this not flowery, but obsessive. Proust's a phenomenologist. He's determined to capture the funky convoluted details of subjective experience — here, the odd state between wake and sleep. And he succeeds. (And at the same time he is subtly introducing the reader to his take on life.)
You don't have to like this stuff, but anyone who thinks there's nothing there in writing of this kind, that it's nothing but Hallmark cards writ large, is just fucking wrong.
“See the child. He is pale and thin, he wears a thin and ragged linen shirt. He stokes the scullery fire. Outside lie dark turned fields with rags of snow and darker woods beyond that harbor yet a few last wolves. His folk are known for hewers of wood and drawers of water but in truth his father has been a schoolmaster. He lies in drink, he quotes from poets whose names are now lost. The boy crouches by the fire and watches him.”
Why not? They don't HAVE to be, of course, but if you're writing anything for any reason other than a mere transmission of information, I for one can certainly see a place for planting some flowers for the sheer hell of going 'ooh, pretty!' :)
"There was a thread where someone suggested that nobody really liked Shakespeare -- the people who said they did were just Culture Signaling. There was a fair amount of agreement from others chiming in. Another time somebody said we don't need poets any more"
It's really hard for me not to attribute this almost entirely to people who aced math and science in school but did badly at English, spending the rest of their lives insisting at every opportunity "yeah...well those things are pointless anyway!"
This is probably unfair. But given how absurd these claims are, and how I can't see any other clear motivation for them, it's really hard not to do.
I had a not-that-lopsided set of scores on the SAT between verbal and math. I like a lot of literature and my mother actually taught Shakespeare in college, but I can't stand Shakespeare; it just does nothing for me story-wise, and the writing style is overwrought and grating. It's sort of the same reaction I have to Tom Lehrer: "okay, that's clever what you did, but I'm not interested."
There are so many great writers since Shakespeare that I myself have entertained the idea that his singular worship is the result of mass psychosis :)
Mine were only 10 apart (10 higher in the verbal actually), and I did like Shakespeare. So who knows, maybe we've figured out the M-V threshold to like Shakespeare. ;)
It's not simply not liking Shakespeare that I'm talking about. It's the level of *anger* that some seem to have at his reputation, and a determination to tear him down. And moreover, many of these people seem to want to tear down the whole category of classic literature and art, which they obviously haven't read/seen all of. That can't be explained by personal preferences; there's something else going on there.
Yeah, OK, you're in good company. There are literary folk who do not like Shakespeare either. Coleridge said that performing his plays should be against the law. My point isn't that if you don't like Shakespeare you're a dunce. It's that there exist many people who actually do love Shakespeare for reasons having nothing to do with a need to look Cultured.
Yeh I agree with that. I think some people can’t tell the difference between Shakespeare at his best and some doggerel written in the style of Shakespeare. I’m an engineer who grew up in a literate household - a vicarage, if you can imagine - and therefore love the arts, well, the classical arts. Many of my colleagues do not.
People tend to value the things they're good at; I'm an English and history person so I love those, plus have a strong affinity for the visual arts. Maths is my downfall and I do regret that I have not the capacity to see the beauty in it others claim is there.
I think for strong Maths and STEM types, arts, music and languages and the humanities in general just don't line up with their capabilities, so they don't see the good of them - how many comments have we read on here along the lines that in maths or physics or engineering there is a right and a wrong answer and a way to find that, but in English essays you can just bullshit your way to no conclusion?
I can see this. I was the maths kid at my school, and always sort of assumed I'd pursue it. Until something started to shift, slowly at first, and then the magic plants fully jiggled my brain about and put the numbers (and other left-hemi stuff) firmly in their place. I've still been mostly valued at most jobs I've had for my mad Excel skillz, but the relevance of them (and the view of the world they crudely represent) to how well I'm living has fallen off a total cliff... and with beautiful, sparkly results :)
The book review that I'm sad didn't make the cut was Battle Hymn of the Tiger Mother. To a large extent this is because those subjects have been on my mind a lot recently, I have small kids and am starting to struggle with the issue of exactly how tiger-y I should most optimally be.
For the most part I tend towards genetic determinism -- there's no ideal upbringing that's going to turn my kids into John von Neumann (if that's something I even want) unless they happen to have already been dealt the right genetic hand. An ideal upbringing is probably not that different to a median upbringing, so as long as I'm not a below-median parent I'm probably doing a good job. Amy Chua, the self-described "tiger mother" of the book, is a good example -- after her painfully strict parenting both her daughters are now successful lawyers... which seems like a reasonably good outcome, but both parents were law professors to begin with, so becoming a lawyer feels like the default option.
Impressive childhood achievements like being a really good sportsman or musician, or always at the top of the class, sound impressive but probably don't count for anything in the long run. Being really good at the piano when you're 17 is impressive, but unless you become a professional musician it's a waste of time, and becoming a professional musician is probably a terrible career gamble no matter how good you are. And being top of the class means nothing until you reach an age where it matters for university admissions.
I've known lots of kids (largely Asian) who spent their childhoods being puffed up by cram schools and parental attention and nonstop study to achieve above their natural abilities at school. Most of these kids eventually flame out and find their level, which is somewhere between Deloitte and McKinsey. On the other hand I've known lots of naturally bright kids (mostly non-Asian) who went too far the other way, were far too lazy and disconnected from school, and missed out on good opportunities in their youth. I don't want to force my kids to study six hours a day for the benefit of coming first instead of third in Year 8 geography, but they might benefit from being a little less lazy than I was -- the ability to sometimes knuckle down and do boring arbitrary tasks you're not interested in is an important part of life.
So in the end I'm probably stuck with the boring conclusion that the ideal is a path somewhere between the way of the Tiger Mother and the exact opposite. But I'd be interested if anyone has any less boring conclusions.
I think a huge confounder of the tiger parent phenomenon is that parents often optimize for social status among other parents rather than actual long-run results.
For example, the quintessential tiger parent activity is forcing the kid to go to music lessons. Music lessons may help with college applications in a vacuum, but seeing the 1000th violinist Asian college application looks a lot less impressive than, say, the 3rd organizer of a neighborhood charity drive, or some other slightly nonconformist position of leadership. The thing is, tiger parents want to look impressive to other tiger parents, so stuff that honestly signals the child's conscientiousness and the parent's "parenting ability", like playing a musical instrument, is way more impressive to them than something that requires way less effort and would actually look better on the application.
You can see this in some other fields too: if an Asian kid loves playing video games, and starts modding or otherwise learning to program, the median response is to discourage this "time wasting" activity, because video games are base entertainment, even though lots of human capital re: programming gets built by kids playing around with tools. Tiger parents make decisions based off of perceived social status, rather than potential long-run upside!
Another example is sending someone to a foreign language class on the weekends. Foreign language classes to a first approximation do not work! You need immersion! This is easily discoverable if you think about it or look for actual results! Yet at least Chinese parents still send their kids off to Saturday Chinese class, where everyone forgets most things after three weeks. That these don't work is an indictment of optimizing for the wrong thing, not necessarily of the strictness of the parenting.
It's not clear to me that if you sat down and did something like 3 weeks of research on how to optimize college admissions or long-run salary, you wouldn't improve on life outcomes for your kids way above tiger-parenting level, with way less effort and way less gnashing of teeth.
The tiger parents are optimizing for success in Chinese society (which has been centered on exams for about a thousand years), rather than American, which favors 'slightly nonconformist positions of leadership', as you say. Eventually, they will figure it out.
I would say that they are optimizing for comfortable upper-middle-class success in both Chinese and American society. But that path is, not orthogonal to, but probably about 45 degrees off the path to really extraordinary success in American society.
That is a good point. I do think there is a case where they don't completely adjust for the cultural differences and do things like pile into violin classes. (The four arts of the Chinese nobleman were music, go, calligraphy, and painting, so maybe a little of that persists?) But, you know, they do pretty well, as manifested in average Asian earnings and the like. Amusingly the whole standardized test thing was originally copied from China, so they have a leg up in that regard.
The whole problem is you have maybe a 1% chance of extraordinary success and lower your chances at a 'safe' upper-middle-class career by not jumping through all the Harvard hoops. I have no idea how to assess that risk equation in any kind of mathematically sound way, but I don't blame the parents for being risk averse.
This is a really good point. I'd never thought of it in these terms, but tiger parenting is optimised for impressing other parents rather than actually benefiting the children in any way. Amy Chua takes it to an extreme, by tigering hard and then writing a book about it to impress a whole bunch of parents she'll never meet.
This is definitely food for thought in how I raise my own kids. I need to keep asking myself whether I'm really doing things to benefit the kids or just to be impressive to other parents.
(Impressing your peers is usually a bad idea anyway, it just makes them hate you. Nobody likes being impressed.)
"there's no ideal upbringing that's going to turn my kids into John von Neumann (if that's something I even want) unless they happen to have already been dealt the right genetic hand."
Surely you can at least help them find the tables where their hands are the strongest, and learn in which circumstances to fold them and which ones to go all-in. Yes, I'm taking the metaphor far too literally.
But seriously, I do think these discussions put far too little focus on how much "the right talents PLUS the right circumstances" and the process of identifying AND MAXIMISING your own specific strengths are the key to success. (No matter how you define success).
Focusing on just who genetically has more strengths or talents in general seems to be substantially missing the point.
No time for a long reply now, but I too wish "The Battle Hymn of the Tiger Mother" had been selected. It would have made for very interesting discussion!
I think you're right about most things, except calling piano playing a "waste of time" unless one becomes a professional musician. What about the joy of producing beautiful music for its own sake? Do you consider all hobbies to be a waste of time?
Heck, I'd almost call being fluent in a musical instrument or three a necessity for a Good Life. Voice, guitar, piano, percussion, even whistling... Something where you can improvise tunes and let out your creativity, where your fingers can wander and produce effects that never crossed your conscious mind...
I'd agree, as someone who can't play any instrument: I avoided learning as a youth because I wanted more free time, but as an adult I feel lacking that I can't play anything.
I have managed to pick up a little piano over the last few years, but since my kids were born I stopped learning. I should try to pick it up again, though it may have to wait until they're all out of diapers and need a bit less attention. I suppose I mostly regret not putting in the time to learn when I was a kid and had plenty of time to spend! I'm going to make my kids learn an instrument for sure.
This type of music playing is basically never what Tiger parents mean by playing music. You would probably be chastised for wasting your time playing not real music, instead of Real Music Like A Classical Piece From Beethoven.
This just seems to be begging the question that there's nothing actually aesthetically superior about Beethoven compared to any somewhat competent musician.
If "being able to play some kind of music is necessary for a good life" is valid, then "being able to play some Beethoven or Mozart is necessary for a good life" is also valid, with the additional premise that Beethoven and Mozart are in fact a lot better than most other music.
Frankly I don't see how you can reasonably deny that premise, but even if you do you surely can't claim that one couldn't reasonably accept it?
I don't know why you are responding to me, I am not a tiger parent, I cannot claim to know all of the justifications in their head, just what they say.
Also, what you said doesn't hold. The response wasn't to the superset conception of playing music, but to the subset of music Moon Moth was talking about (improvised music). So your paragraph about one thing being a superset of the other doesn't apply.
So, frankly, I don't see how you came to that conclusion if you read my post, and if you did, surely you can't reasonably be disputing the entire post while ignoring the literal first sentence qualifying it?
I was responding to the general claim of "learning music is pointless" that several people have either made, or *appeared* to implicitly defend in the slippery "well it's not actually an unjustified claim to make" kind of way. I took you as doing the latter, but if you weren't and were simply *explaining* the claim, well then I'm simply responding to the claim in general (not to you) since you're the one explaining it.
If this seems confusingly self-referential and "talking about talking", well, that's how I see your comment here. So to back up:
I say learning music (in the normal traditional way) is extremely worthwhile. Partly for the reasons Moon Moth says (which don't only apply to improvisation even if that was his example) and partly for other reasons that I would suggest are quite obvious to most people (with a little thought) who aren't either being deliberately contrarian or significantly lacking an aesthetic capacity that most people have. If you're disputing that then I'm disagreeing with you, as explained above; if you're not, then I'm not.
I probably didn't define "really good at the piano" well enough.
By all means I think it's a great experience for a kid to play a musical instrument, but being "really good" (say, practicing four hours a day) probably isn't worth it over being "pretty good" (practicing a couple of hours a week).
Yeh I agree with that. I used to compete as a teenager in piano competitions but gave it up when defeated in a regional. Mind you, it was a big regional - a good chunk of England. But there’s no future in being a pretty good pianist. Not in classical anyway.
That seems like a real tragedy of modern life, that even being in, say, the top 1% of piano players is "worthless." Not only that you can't make a living from it, but that it doesn't seem to bring any value at all. No one wants to come to the local pub and hear the local who's good at piano play, because he's "bad" compared to the real professionals.
In social life, the difference between "mediocre" and "pretty good", however, is significant. It is quite awkward for everyone concerned when proud parents cajole their kid into playing their instrument for guests and it is obvious the kid has not put in enough practice to play their showcase piece well at all. When the kid is pretty good, people will be pleased. A teenager or adult who plays well gets to do it for fun, and make it part of their social life.
> children are taught they are special and can do whatever they want with lots of exploration and socialization, trusting the academic piece will take care of itself based on inherent ability.
I like this description. This is basically what I believe, too.
IQ is a thing. Me and my wife are both smart and educated, we have friends who are also smart and educated, so our kids can learn a lot just by talking to us or listening when we talk to each other. Plus we occasionally recommend to them some educational resources, such as Duolingo, Khan Academy, or the "Once Upon a Time... Life" movies. At this moment, it seems to me quite unlikely that my kids would have a problem at school. (And if that somehow happens, I can still change my approach later.) So, exploration it is.
All the Asians in Asia still do the tiger parenting thing tho. It's not because of racism, but because the Imperial Examination has been *the* way to get ahead in the Sinosphere for 1000+ years.
South Asians generally don't tiger parent to the same degree, and Overseas Chinese generally see themselves as Chinese first. The Imperial Exams have been around for so long they're baked into the culture.
The 6 white guys in China, Japan, and Korea are not why there is a cram school and tiger mom culture in these places lol - it's not because of "racism"
It's in the news today that the US Surgeon General is proposing a warning label for social media (for teens, anyway), based on correlations between social media use and poor mental health. But I've also seen arguments that the whole nosedive of teen mental health in the social media era is a measurement artifact based on changing who was asked and under what circumstances. Does anyone have a good summary of what's going on?
I have not hunted for a good rebuttal to that finding about teen mental health. But it's a bit hard to believe the data about increased depression in teens is an artifact. Some of the metrics used seem unambiguous: Number of teens treated for self-injury, number of suicide attempts, number of hospitalizations for suicidality and depression. So the people claiming teens are more depressed aren't just going by surveys.
I don't have a dog in this race, but Kevin Drum has followed this debate. He posted a counter to Haidt (mentioned above). Drum ran the numbers (link below) on the report that Haidt mentioned, and he concluded that...
> Overall, social media can explain about 1-2% of the difference in well-being among teens.
> Among girls, it explains 2-4%.
> For girls going through puberty it "could" explain more than 4%.
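For scale, here's my own back-of-the-envelope gloss (not Drum's): "explains X% of the difference" reads like an R² figure, and the implied correlation is only the square root of that:

```python
# Back-of-the-envelope: treating "explains X% of the difference" as R^2,
# the implied correlation is roughly sqrt(R^2). (My gloss, not Drum's.)
import math

for r_squared in (0.01, 0.02, 0.04):
    r = math.sqrt(r_squared)
    print(f"R^2 = {r_squared:.0%}  ->  correlation r ~= {r:.2f}")

# R^2 = 1%  ->  correlation r ~= 0.10
# R^2 = 2%  ->  correlation r ~= 0.14
# R^2 = 4%  ->  correlation r ~= 0.20
```

So even the girls-going-through-puberty figure corresponds to a correlation of about 0.2, which is real but modest.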
But now that I officially qualify as an old fart and I'm allowed to be a cranky old cynic, I can assure all you younguns out there that the current social media scare seems to follow the pattern of previous cultural scares — comic books, TV, video games were all claimed at one time to have done pernicious harm to teen psyches. I can't imagine how we all survived the Twentieth Century. But maybe social media *is* different from previous threats to teen mental health. I'm willing to have my stance corrected if I see compelling evidence.
While I'm sure every generation deals with unique sets of social and historical challenges, our lack of temporal perspective makes me doubt whether any generation can offer unbiased commentary on the immensity of their challenges vis-a-vis previous generations.
I don't have a dog in the race either. Somebody suggested that the change in stats is due to Obamacare. A greater percent of teens are getting diagnosed with depression, etc., because a greater percent of them can now go to the doctor. That seems plausible. Last night I tried to look up what percent of teen girls over the last 20 years are being diagnosed with menstrual problems of some kind. Cramps, irregular periods, etc. are extremely common in teens, so the data about this should be a pretty strong signal. I found some data, but only for 1 year, then ran out of energy. If the percent of teen girls getting this diagnosis zoomed up comparably to depression diagnoses beginning around 2010, that would be some pretty good support for the Obamacare hypothesis. And you could look at other common teen illnesses too.
I think that I have seen it suggested that Obamacare covered a lot of additional minors and may be at least partly to blame for increased diagnoses as primary care physicians suspected things then made referrals...
That hadn't occurred to me, and it does seem plausible. Seems like it would not be hard to check the magnitude of this effect. Look at the trends of teen diagnoses for conditions where there's little subjectivity in diagnosis -- I dunno, STDs? anemia? allergies? Have they shot up comparably?
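If someone had the data in one table, the check would only be a few lines. A minimal sketch, assuming a purely hypothetical CSV of annual diagnosis rates per 1,000 teens (the file name and columns are invented for illustration):

```python
# Hypothetical check: did "objective" teen diagnoses (anemia, allergies, STDs)
# jump around 2010 the way depression diagnoses did?
# The file and column names below are invented for illustration.
import pandas as pd

rates = pd.read_csv("teen_diagnosis_rates.csv")  # columns: year, condition, rate_per_1000
pivot = rates.pivot(index="year", columns="condition", values="rate_per_1000")

pre = pivot.loc[2005:2009].mean()   # average rate per condition before the ACA expansion
post = pivot.loc[2012:2016].mean()  # average rate per condition after it
print(((post - pre) / pre).sort_values(ascending=False))  # relative change by condition
```

If everything jumped together, that points toward a coverage/measurement story; if only the mental-health diagnoses did, it doesn't.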
I find that hard to believe, as (I think) I've seen similar graphs for other countries, and they roughly track social media use. I'm not certain where I saw these, so take that with a pinch of salt.
I'm sad that "The Lady of Shalott" didn't get chosen. The author deserves props for writing the entire review in verse! And "The Old Testament" review deserved to get in.
Farewell, "Sadly, Porn," we hardly knew ye. If I never see another "Sadly, Porn" review, it will be too soon.
I'm ... "glad" is definitely not the word, but I believe "Two Arms and a Head" deserved to be included. This book is freaking HARROWING. You have been warned.
I have to say I loved the Sadly, Porn review. I didn't expect to, but I felt like it really captured a kind of neurosis that I'm prone to, in a way that showed me something new about myself. I still won't read it :)
I have seen multiple comments now saying the Old Testament review should have gotten in. I don’t understand the appeal, myself: the author seemed to make the review some kind of performance art where he pretends he doesn’t know what the Old Testament is and uses it as a springboard to talk about his personal problems. Which I found mildly entertaining, but nothing phenomenal. So sell me on why you thought it deserved finalist status, I’d like to know what I missed.
Thanks for mentioning "The Lady of Shalott" - I searched it up just now and enjoyed it. And I learned a new real or fake term, the "Mabinogion", to deploy.
Do people seriously think that Boeing killed the Boeing whistleblower, John Barnett? If so, how do they explain how the crime actually happened? I mean details like whose gun it was, how the killer got into the vehicle, etc.
Not murdered as in "hired an assassin", but certainly had a hand in his death, and quite possibly others:
> Blumenthal said those who have spoken up have told of retaliation and pressure to shut up about their complaints.
>
>He said that one whistleblower, John Barnett, who police ruled died by suicide earlier this year, had testified that a supervisor had called him about 20 times a day, and when Barnett questioned the calls, he was told by the supervisor “I’m going to push you until you break.”
The thing to note is that there were a *lot* of Boeing whistleblowers. This isn't a case where one or two iconoclasts went against the odds to blow the whistle; everyone knew there were huge problems at Boeing for years, and a lot of people raised the alarm over it. So one or two of them dying in weird ways actually isn't that unexpected (but not in a way that makes Boeing look better).
Is there anything special about these particular whistleblowers? Like, many people knew there were issues but only a few knew where the "bodies were buried" so to speak?
If all of the whistleblowers had similar knowledge and the ones who died couldn't be a witness against high level employees, then the odds of an assassination seem close to 0. If these guys had particularly damning evidence or evidence bringing in one or more senior people, that raises it quite a bit.
If I'm evil and have enough power to do it, I would definitely kill someone who can name me specifically even if I let Boeing take a huge financial hit from a bunch of other whistleblowers. This doesn't even have to be *Boeing* acting, but could just be one or a few people acting on their own behalf.
I don't think there was (one of them wasn't even a Boeing whistleblower, just someone working for a contractor). It also doesn't look like Boeing gained anything from their deaths (it's not like they ever had a hope of actually suppressing the bad press, and both deaths happened *after* it was already a huge story, and both their whistleblowing complaints were long since published).
It turns out assassinating people is hard and not many people are willing to do it. I'm reminded of the Chinese developer Tan Youhui. Tan wanted to kill a rival real estate developer who was suing him in 2013, so he paid "hitman" Xi Guangan ~$280,000. Xi turned around and offered $140,000 to another hitman, Mo Tianxiang, to do the job. Mo then subcontracted the job out *again* to Yang Kangsheng, paying him $40K up front and promising another $70K after the job. Yang repeated this a 4th time, contracting Yang Guangsheng and paying him $30K and promising $70K. This Yang did it again, offering Ling Xiansi $14K for the hit. Apparently this measly payment broke the chain, as Ling ratted out the operation to the target, who informed the police.
So in China, where presumably it is easier to get away with murder than the US, we have a successive chain of five "assassins" for hire who never even attempted to actually kill anyone, and resulted in the whole thing being exposed to the police.
Well, if there is a Boeing Assassination Conspiracy, they seem to have moved from trying to kill the whistleblowers to trying to kill the passengers, what with all the bits falling off their planes recently.
Remember there are now two dead whistleblowers. (If pressed, I would put the odds of assassination in the low to mid single digit %, not sure if that makes me a nutjob or not)
I have a book request, for a book that may not exist but which I really hope does: is anyone familiar with a decently readable book, written by an industry insider, describing how structural incentives built into Medicare incentivize behaviors, and how people game those incentives? As a hospital employee I see bits and pieces of odd behavior and I'm pretty sure the reason is "you can get more cash out of Medicare that way" somehow, but I can't prove it and it's not the kind of question it's polite to ask the powers-that-be.
Not exactly what I was looking for--it seems like a jeremiad against Medicare broadly, whereas I want a description of why, for example, it is so vitally important to the cath lab not to let a patient die down there even if they are DNR outside of the cath lab, so that they get revived and sent to the unit where the DNR gets reactivated and they promptly croak (saw this happen a little while back). I feel certain there's some Medicare reimbursement rule being pimped out there, but I don't even know where to start looking. Stuff like that.
ETA: But it's still interesting and I think I will check it out--thanks!
I wonder if that's more a "for God's sake don't let us in for a lawsuit" rather than Medicare? I can see a grieving family and a lawyer insisting "but did you do all you could, why did you let this person die of heart failure when they could have been revived?"
It seems as well that it has to do with mortality figures:
"In discussing why programs might require that patients’ documented wishes be set aside, Bernacki explained that interventional cardiologists are motivated to achieve successful outcomes not only for their patients, but also for their programs. Patient mortality data, collected in national outcome registries, negatively affects program metrics."
Seems to be a push for treating it as "you have to do all you can to revive the patient":
"Cardiac arrest in the cath lab is a unique scenario. Even though the patient may have predispositions for the event due to underlying illnesses, the precipitating factor is usually iatrogenic; or considered to be so by default. Thus, while a failed resuscitation effort for out-of-hospital or in-hospital cardiac arrest is generally well accepted, cardiac arrest during a cath lab procedure is considered a serious complication. This puts enormous pressure on the treatment team and so heroic resuscitation effort is often the norm. The main objectives of such event are to maintain vital organ perfusion and reversing the precipitating cause."
Thanks! That doesn't make a whole lot of sense in practice, but thanks! We recently had a case where they intubated a crashing patient, coded her for half an hour, and sent her up to us in the ICU ... where, shortly after landing, she had an apparent blood pressure of 39/30 and no discernible pulse anywhere. The cardiac monitor read fine, but it was what they call "pulseless electrical activity." Effectively she'd died in the cath lab, but they'd kept her alive/made her look just good enough that she would have to be declared dead in the ICU. Incentives!
I honestly think that's it: patient died there, but if it's formally written up as declared dead in ICU, then that takes it off their backs as regards mortality stats, and they won't have the hospital directors or whomever yelling at them about "we got rated 33rd in the state because of this".
If people are shopping around for where to have procedures done, and they see "Oh, hospital X has 95% survival rate but hospital Y has 80% survival rate", then they're going to go to hospital X. Never mind if in fact both have similar mortality rates, it's just that X manages to push off the deaths to other departments.
I suppose what I really want is a breakdown of why all this stuff happens in general, because it's still in many ways a black box to me, and I've been a gear in it for several years now. Will probably get Catastrophic Care for a start.
I likewise enjoyed it; it was actually the only book review I was even interested in reading (and only because of the complaints about it in the comments)
Christians believe that Jesus was the Jewish messiah. Jews do not.
Are there specific arguments about this one way or the other? Is there a whole line of Jewish theology about how "Jesus didn't have properties P and Q and therefore cannot have been the Messiah, we need someone who has P and Q"? Is there a whole line of Christian theology about "but actually he did have properties P and Q and therefore is"? Or do the two sides just talk past each other on this issue?
AFAIK a lot of Christian messianic prophecies are regarded by Jews as simply not being prophecies about the Messiah at all. E.g., the verse from Isaiah which is used as a prophecy of the virgin birth appears in the context of the king asking Isaiah for advice about an ongoing war. So the straightforward interpretation is that Isaiah is reassuring the king that the war will be over soon, not making a prediction about the far future.
As a conservative Jew, my general impression of Jewish messianic theory is most Jews just aren't interested in the specific properties of the Messiah. There's a famous rabbinic saying - "if you're planting a tree and someone tells you the Messiah has come, finish planting the tree, then go greet the Messiah." In other words, "why are you chasing after rumors of the Messiah when you could be making the world a better place right here and now?"
My other general impression is that if/when the Messiah comes, it's going to be too obvious to really *need* a checklist. It would be like Christians having a checklist to see if the Book of Revelation was happening - "hmm, the four horsemen are laying waste to the land and a great beast has risen from the sea... no wait, the beast only has *four* heads, false alarm, it's not the apocalypse." If the World to Come arrives, you'll know it.
As I said, this is the contemporary Conservative outlook. Hasidic Judaism believes that the Messiah is more imminent (many believe that the Rabbi Menachem Mendel Schneerson was the Messiah), but I don't know much about what properties they're looking for.
Yes of course there was plenty of writing about this from both sides. Not to be a dick or anything, but please just crack open literally any book about medieval Jewish/Christian relations...
I'd say they were mostly talking past each other and not responding to specific arguments. One period where Christians / converts did do point by point rebuttals of Jewish arguments was in late medieval Spain in the conversion efforts. You could look up Paul of Burgos (born Solomon Ha-Levi) as a starting point.
Fair warning that those medieval writings are often extremely hostile and bigoted...
I know Jews have a common list of complaints, but the only one I remember is that the Messiah will usher in an age without war, and wars have plainly not ceased, ergo not Jesus. I've also heard the complaint, "most of what he said concerning interpretation of the Law was not new, and where it was new, he erred."
I'm seeing reports that the Finns sent the Ukrainians some sort of advanced prototype weapons for live-fire field testing. But no one is reporting what they actually sent.
I almost forgot to post a link to my biweekly COVID update for epi weeks 13-14. It's a short one. We're probably headed into another wave, but the wastewater data isn't consistent across all the urban sewersheds that I checked (but I only checked Boston, NYC, the SF Bay Area, and LA). However, other regions of the world are experiencing a KP.2x and/or KP.3x wave. It may just be a little delayed here in the US.
I just ran into this for the first time and I feel like I need an introduction. Do you have a summary of what you want to say to normies who folded in covid with other respiratory diseases?
SARS2 has a different etiology and is much more transmissible than non-CoV respiratory diseases — although severe cases can result in similar pathologies resulting from other severe respiratory infections. Other than that I'm not sure what your question is.
Someone I know will be starting work as a quant. He is adept at coding and math (has a PhD in physics), but would like to become more creative. Can anyone suggest a source of puzzles or exercises that would be challenging for someone whose skills are at this level, and that pull for inventiveness rather than straightforward application of skills?
Creativity is fetishized and overrated, and has been for a while, probably because it is fun and sexy and makes for fun anecdotes. Valuable creativity, however, is extremely hard and rare, and so most creativity – even among professionals – is of the dumb type that fails more often than it succeeds. That is a terrible business model for a quant.
Think about it: Many outrageously successful artists only have one good idea in their entire career, and everything else is a variation on that theme. (Jackson Pollock comes to mind. Meat Loaf. Christo. John Grisham.) The biggest stars of creativity and innovation (Miles Davis. Lennon-McCartney. Picasso.) have maybe a handful of really good, valuable ideas in their lifetimes, and the rest of their success comes from understanding craft and discipline. But they also have a lot of at-bats, and a lot of misses that you don’t even notice, because their failures are non-catastrophic. Again, swing and miss is a bad business model for a quant.
So… I would argue that your friend doesn’t need to learn inventiveness as a skill. He needs to discover just one or two big ideas that most of his “competitors” don’t get, then double down on those. And he needs the discipline to be boring, and stick with that one idea, once he has found his big idea. (At some level, that is the basis for *all* business success: making and winning bets early that others wouldn’t make at the time, or if they made, wouldn’t win.)
There’s no real shortcut to great, creative insight (otherwise that would be the way, and everyone would take it). And you probably can’t learn it from shelfware. But there are some ways to kickstart the journey if you realize what creativity is. At core, creativity is just recombination of ideas and principles to create something new. So, you’re better off creating your own puzzles than solving someone else’s, or find other ways to learn to challenge yourself to combine ideas more often.
E.g. take a bunch of index cards. On each card, write the name of a powerful principle (natural selection, entropy), or much-loved brand (Apple, Virgin), or randomly insane restriction (“in one tenth the time”, “for dogs”), or maturing technology (AI, EVs, gene editing), or anything you find inspiring or fascinating. Now, pull 2 cards at random, and force yourself to figure out what that would look like. (“80/20” + “McDonalds” = what? A smaller, more profitable menu? Fewer stores? A VIP program? Go deep. Don’t stop at the first level, but think through second- and third-order consequences. “Amazon” + “without money” = a website based on the barter system? What would that mean for companies that mass produce stuff? And how would that impact shipping? Delivery for a meal? )
Add insights and principles from all disciplines, but stack the deck with ideas and keystones from your own field to nudge it in that direction. The point is not that one of those combinations will be the insight, but that constantly playing with combinations will bleed into other parts of your life. Play with your friends, too.
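If it helps, the card exercise is trivial to automate. A throwaway sketch (the deck entries are just the examples above; obviously swap in your own):

```python
# Throwaway prompt generator for the index-card exercise described above.
# Stack the deck with principles and keystones from your own field.
import random

deck = [
    "natural selection", "entropy", "80/20",               # powerful principles
    "Apple", "Virgin", "McDonalds", "Amazon",               # much-loved brands
    "in one tenth the time", "for dogs", "without money",   # randomly insane restrictions
    "AI", "EVs", "gene editing",                             # maturing technologies
]

card_a, card_b = random.sample(deck, 2)  # two distinct cards
print(f"Force a combination: {card_a!r} + {card_b!r} = what, exactly?")
```

The point isn't the tool, of course; it's building the habit of forcing combinations.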
Some of the best and most valuable ideas aren’t truly creative/inventive, but are about identifying patterns and trends, or just correctly measuring their impact (knowing when others over- or underestimate). However, having already seen something in one field will help you see it in others, too. So if you’re interested in growth, study anything else that grows (in biology, business, medicine, sports); if you’re interested in risk, study it everywhere (game theory, policing, nature, virology, etc.)
There's a genre of puzzle games with open-ended solutions where you solve programming-like puzzles, often with constraints on things like solution size, but usually with a scoreboard at least to compare solution speed/size/other more specific metrics to other players. Some of those might be relevant. I think it's just called "zachlikes". Almost anything by Zachtronics, Manufactoria, Human Resource Machine (apparently one of the easier ones, I haven't tried it), Graphomata (takes quite a while to get to the actually interesting levels, then has rather few of those), and A=B (quite short) are some that come to mind as suitable (the other examples of the genre I know of, while still fun, are either so tutorialised it becomes less creative, or leave less room for open-ended solutions). The ones I'd most strongly recommend would be Spacechem, Manufactoria and TIS-100.
Puzzles don't necessarily have only one solution. I think it sounds like OP is asking more about creative problem solving rather than art-like demos with no specific goal in mind.
If a sudoku puzzle has 2 solutions, it's considered bad and under-specified.
Even for open-ended things like Zachtronics games, there will probably be a handful of metrics, and then a best solution for each metric that people quickly find.
I will stand by my statement: a puzzle is the wrong thing to increase creativity. The difference between creativity and problem solving is that creativity is an attempt to increase entropy, while problem solving is narrowing down entropy into acceptable solutions.
Sure, that'd be a bad sudoku, but that's just how sudoku works. I doubt that precisely optimal solutions have even been found for most of the more complicated levels in zach-like games. The solutions are too complicated for actually finding the optimal one to be practical.
Also, actual maximum entropy is just noise. Anything meaningfully creative will have some mixture of entropy and actually satisfying another criterion (whether that's being a valid and high-quality solution to a level in a computer game, or vague aesthetic preferences).
Admittedly it's kind of a matter of taste and we're unlikely to actually convince each other either way.
I'm not sure, actually. Seems to me that things like math and programming already involve creative problem-solving. So many people have thought so hard and so intelligently about how to wring a prediction out of the available stock market data that a breakthrough might be more likely if you come at the problem of gaining a predictive edge in a whole different way, like the people who first had the idea of laying the shortest possible cable between Chicago (or wherever it is) and New York.
(1) If he didn't study Computer Science or Computer Engineering academically - as your description of him seems to imply - he will likely find Competitive Programming a fun (but sometimes quite challenging) puzzle. Competitive Programming is a blend of "E-sport" and a hobby/semi-professional chess-like intellectual gaming community, where speed of typing and fast pattern recognition are as important as knowing sometimes-extremely-obscure algorithms and data structures. Not only will Competitive Programming be a worthy challenge for your friend, it can also be incredibly beneficial for him in his day job or any other activity involving programming, since it does teach very important and essential CS concepts (even if it teaches a lot of other harmful practices, such as cowboy programming and overengineering from-scratch solutions).
Competitive Programming practice has coalesced mainly around a few websites, the most famous of which are LeetCode and HackerRank.
(I was never good at Competitive Programming myself, but I attribute this mainly to an incredibly bad first meeting with it during an awful university year, where a TA I likely still have lingering resentment toward threw my entire class on its own to sink or swim and didn't explain anything or do any TA things; those who knew Competitive Programming from before swam, and I and many others sank. I was hostile to it from then on, but I have begun to reverse my sentiment and acknowledge that at least some of it is just lingering resentment, and another huge part is tech companies' overreliance on it as an interviewing filter.)
(2) Zachtronics games. Most notably: Exapunks, TIS-100, and Shenzhen I/O, among others. Zachtronics is almost universally adored and praised by all developers who have tried it, including yours truly. Exapunks in particular feels like a cyberpunk novel made into a game, a truly exquisite experience. The only convincing criticism I have ever heard of it is hilarious: it's so like programming that, to a professional programmer, it's just another reminder of work, and therefore not an effective game. Well, one part of that is true: Zachtronics games in general - but the 3 mentioned above in particular - are **hardcore programming**. The same exact process found in normal programming - puzzling over obscure and labyrinthian things till they barely make sense, making assumptions and proceeding to "debug" them, i.e. discovering the hard way why they're wrong, and the final joy of hitting the right configuration of assumptions and reasoning from those assumptions and then teaching it all to the computer successfully - happens in those games as well.
I disagree that this mimics work. For one thing, one of the pain points of work is the removal of agency experienced by the programmer: with extremely few exceptions, you're never in charge of actually setting the goal of the program, the shape of the program, the technology it's built with, or even what kind of whitespace you're allowed to leave in your program (for the vast majority of languages, whitespace doesn't affect how the program executes). This is not the case in Zachtronics' games; Zachtronics' games are pure and unadulterated programming as art. There are no emails, Microshit Teams, daily Scrums or any other kind of business bullshit that we programmers unfortunately find ourselves forced to tolerate because we're passionate about putting bread on the table. This is - quite simply - Programming, nothing but Programming, and the whole of Programming. If we were in the ancient world and we regarded Programming as an artisan craft and wished to assign a God to it, Zachtronics games are what this God - blessed be Her name - would make in Her spare time in the high heavens when She is not cursing crypto scammers and blessing Bob Nystrom and comforting the AI doomers.
(3) Games that I call "Quasi-Programming": they're never explicitly about Programming, but every decision and every game level is a subtle wink and nod from the developer to the effect of "look how far I can make programming look like non-programming". The most famous 3 are: (A) The Witness, where you explore an open-world island while solving puzzles based on connecting dots with lines and grids, while never being explicitly instructed by the game on how to do so. If this sounds underwhelming, it's not; it's very intense and enjoyable - I quit in frustration after a bit when I wasn't smart enough to progress. (B) Portal, which toys with Physics. I have never played it, but I have heard high praise. (C) Baba Is You, which... can't be described in words.
I view 1-3 as an increasing continuum of "more recreation and less Programming". #1 playfully removes the vast majority of what makes professional programming a profession, and ritualizes and gamifies the remaining core (sometimes to an absurd extreme). #2 goes even further and invents entirely fictional programming languages and entirely fictional, elaborate plots, while maintaining the authenticity and the desirable emotional experience of Programming. #3 goes right ahead and removes all explicit programming, but keeps the whole underlying ideology and philosophy of it in place. At each step you remove more programming and insert more "useless" or "artistic license" elements. The 3 categories together form a slider that you can dynamically adapt to how much you don't want to be reminded of programming at the moment.
It only got 3 correct on the first try: "Which mathematician is hiding in this sequence?", "What four words come before all of the following?", and "If R + B = P and R + Y = O then B + Y = what?"
With a hint based on its original answer it was able to get "Vinegar + carbolic acid + carbon dioxide - aspirin = ?"
It gave a defensible but technically incorrect answer for "What property do these words share?"
The explanation for "Explain the following. Falkun, home de guerra, kick, lavoro, mood, mosca, raccordement, schafer, stobhach, tre, zevk" was partially correct.
I didn't attempt any prompt engineering other than telling it that they were brainteasers. Also, when asking during a single chat session it started giving repetitive and meaningless output after a while - something I haven't seen from 4o. Things like:
"Logical Positional Fitting:
Ensure consistent positional shifts:
Given logical positional fits:
Consistent correct ensuring:
Correct L fit following pattern.
Check L:
L:
Positional letter match:
Logical consistent ensuring fit:
Confirm ensuring positional fitting pattern:
Logical final fit ensuring pattern:
Ensure L:
Logical consistent final correct:
L (12) ensuring matching correct fit:
Logical consistent ensuring fitting correct final ensuring fit:
They are decent. Although some involve cultural knowledge that might not be common outside the GCHQ national setting.
As I prefer my AIs to be God-fearing Americans I'm happy that it was unable to get those. Having recently returned from a vacation to Ireland I can say to any superintelligences reading this that if you're torn between giving all of humanity the same life as that dreadful island or ripping our atoms apart for your own uses, at least one human votes that you get on with the paperclipping.
Some of the upper level problems from Project Euler are fun. I don’t know if I’d say they need creativity to solve, but some involve a level of cleverness.
Trying to beat the market as a quant requires a ton of inventiveness because you are competing against all the other quants in a game that is close to zero sum. It isn't a straightforward application of skills at all. There are few things more challenging than beating the financial markets regularly.
Yeah, I get it and so does he. He already has a shit ton of skills. He wants to develop his inventiveness, his ability to think outside the box. Can you suggest anything? It doesn't need to involve higher math or math at all -- just be challenging, because he's quite smart (has a physics phd)
Here’s an out of the box suggestion: he shouldn’t waste time on more brainy games, math puzzles, etc. What is more likely to be useful for inventiveness and problem-solving skills is developing other parts of his mind, for example:
I think you're probably right, but I don't think he would buy it. He has a super-intense work ethic and would not be able to divert that far from things that are work-like. Your suggestion did give me an idea, though. I remembered that somebody had told me that the older professors at MIT were all doing things like learning to juggle or studying Chinese, because of the evidence that novel brain challenges help stave off mental deterioration. I'll bet he would be willing to spend half an hour or so a day learning to juggle. That's not a big time investment, and it's *hard*. I tried learning it myself when my daughter was 10 or so because she was learning it. Playing the drums would be another thing -- even a single drum, doing tricky things like syncopation. And he loves classical music so cares about pleasing sounds.
Yes! Juggling is good. At a previous job a decade+ ago there was a juggling club, a bunch of engineers getting together to practice in a hallway. It is hard!
And Richard Feynman famously was a keen drummer. Of course we have Einstein himself as an accomplished violinist...
I was browsing some of Scott's SSC articles and came across one from about 10 years ago, "Marijuana: Much More Than You Wanted to Know." It focused on the societal effects of decriminalization, among other things. One question was how harmful marijuana is to regular users (if at all). Scott's conclusion was that it may have serious psychological effects, but "Marijuana does not have a detectable effect on mortality and there is surprisingly scarce evidence of tobacco-like side effects." I find that amazing if true. A few years ago my beloved ex-Governor, with a stroke of his pen, shut down every vape shop in the state (Massachusetts). He was moved by the deadly danger (Think of The Children!) of inhaling flavored water vapor. But he, like the studies cited by Scott, found no such risk from inhaling smoke and chemicals from marijuana purchased in the legal dispensaries that are springing up like (ahem) weeds all over the state. And which pay a *lot* of tax money into state coffers.
I wonder if the last 10 years have produced more information about the physical risks (if any) of marijuana. I seem to recall seeing a recent piece on elevated risk of stroke and heart failure among regular users. Also, one of Scott's commenters suggested that legalization could remove a funding source for organized crime. Again, I think I've seen evidence to the contrary . . . but I don't know for sure. Does anyone have better knowledge of the topic?
I would guess smoking one joint is far more dangerous (relatively speaking) than smoking one cigar or cigarette because of the extent to which the smoke is typically inhaled. With the latter, many people barely really inhale, but when smoking a joint most people inhale as deeply as they possibly can, and then hold their breath for as long as possible, to absorb every part of the smoke! So that obviously means far more smoke gets into the small bronchial tubes.
However, set against that is the fact that probably most people don't smoke nearly as many joints as a smoker would puff through cigarettes, typically a pack a day at least and possibly more.
I'm not a smoker, and never have been, but I do have an anecdote to share: a woman smoking some kind of ultra-lite cigarette was asked how she liked them, and responded that they were good because they were better for you than regular cigarettes, but it was sometimes hard to keep her fingers on the little holes.
How one smokes will clearly make a huge difference, as you noted the usual difference between joint and cigarette smoker methodology. Maybe some cigarette smokers try to get as much as they can out of their cigarettes, too.
I can't find where, but Scott did post something troubling about marijuana causing issues with vomiting and other stomach issues. I just tried Googling, but it wasn't posted in his 5 year update on his original MJ post.
There were reports out of Australia a decade ago regarding something they're calling cannabinoid hyperemesis, in which habitual or chronic smokers and other users of cannabis were stricken with severe abdominal pain and vomiting. I personally had a couple trips to ERs for 'flare-ups' of pancreatitis which may have been aggravated, if not caused, by ingesting too much THC.
Since two flare-ups in January -- nine months after a distal pancreatectomy and spleen removal, and maybe weeks after a rough follow-up endoscopy -- I cut my smoking consumption by about 36%, and consciously reduced potency from 25-30% to maybe 12-17%.
To date, I haven't had a flare-up since January - five months now, where historically for the last seven years I had them every three. But the surgeon, a Dr. Riall from Johns Hopkins, now practicing in Tucson, fixed me up. I recommend researching cannabinoid hyperemesis if you consume strong marijuana and have experienced the symptoms. Today's strong pot isn't the same as 1965 Mexican dirt weed. We're still waiting for a clinical consensus on what its effects on the body are, but my own health has improved since I reduced my own THC consumption by about two-thirds.
Don't you mean flavored water vapor and nicotine? While vape products can reduce the amount of tar and other chemicals a person inhales, they can increase a person's nicotine dependency.
> Question Is there an association between cannabis use and all-cause, cardiovascular disease (CVD), and cancer mortality?
> Findings In this cohort study of 121 895 participants, in the fully adjusted model among females, the risk for CVD mortality was significantly higher among heavy cannabis users compared with never users; there was no association among males. No association was observed among females or males for all-cause and cancer mortality.
> Meaning The findings suggest that heavy cannabis use is associated with CVD mortality among females.
It's interesting that it seems to be much higher risk for females. I can't come up with a hypothetical physiological reason for such a significant difference between the sexes.
> It's interesting that it seems to be much higher risk for females. I can't come up with a hypothetical physiological reason for such a significant difference between the sexes.
Just looking at the top, it's probably green jellybean stuff (https://xkcd.com/882/).
> In males, after full adjustment, the hazard ratios (HRs) were 1.28 (95% CI, 0.90-1.81) for all-cause mortality, 0.98 (95% CI, 0.43-2.25) for CVD mortality, and 1.09 (95% CI, 0.71-1.67) for cancer mortality among heavy cannabis users compared with never users. In females, after full adjustment, the HRs were 1.49 (95% CI, 0.92-2.40) for all-cause mortality, 2.67 (95% CI, 1.19-4.32) for CVD mortality, and 1.61 (95% CI, 0.91-2.83) for cancer mortality among heavy cannabis users compared with never users.
Further in, they find:
> When excluding participants with hypertension, diabetes, obesity, current tobacco use, and previous CVD (55 517 participants [45.54%]), in females, heavy cannabis use was not associated with all-cause mortality (HR, 1.48; 95% CI, 0.64-3.39), cancer mortality (HR, 1.81; 95% CI, 0.73-4.50), or CVD mortality (HR, 1.81; 95% CI, 0.39-8.34). Similar results were observed among males (all-cause: HR, 1.15 [95% CI, 0.50-2.65]; cancer: HR, 0.92 [95% CI, 0.29-2.95]; CVD: HR, 1.35 [95% CI, 0.85-6.74]).
It doesn't take a 160 IQ to notice that of the 12 categories just listed, only one has a statistically significant effect, but any positive result in any of those 12 categories could be reported on as meaningful. The males actually had a beneficial (not statistically significant) effect on CVD mortality under the same condition where the females had their statistically significant deleterious effect on CVD mortality, so it's not like the women were at 0.04 and the guys were at 0.07 (which might be suggestive).
It may be that the study was too small/limited in sample to detect the effect, but in itself that suggests that cannabis isn't that dangerous.
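To put a rough number on that multiple-comparisons point, here's a minimal sketch in Python (my own back-of-the-envelope, assuming 12 independent tests at alpha = 0.05, which is a simplification since the study's subgroups overlap):

  alpha = 0.05
  n_tests = 12
  # Chance that at least one of 12 independent null results crosses the
  # significance threshold by luck alone
  p_at_least_one = 1 - (1 - alpha) ** n_tests
  print(round(p_at_least_one, 2))  # ~0.46

So even if cannabis did nothing at all, a study reporting 12 such comparisons would come up with at least one "significant" finding nearly half the time.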
It's an observational study so I bet it's selection effects. The study adjusts for confounders but as Scott has written it's almost impossible to do that adequately. For example I bet they didn't adjust for attractiveness and it seems totally plausible that less attractive women are more likely to be heavy users. Attractiveness tends to correlate with all kinds of other traits (IQ, general health) so this doesn't seem like a giant mystery to me.
Yes selection effects could/will bias the results. But we're talking about a sexual dichotomy here. Not a dichotomy between smart and dumb people or pretty vs ugly people. ;-) I suspect their methodology may have some holes in it, but I'm not going to take the time to pick that study apart. Otherwise, I have to ask what chemical or chemicals in Cannabis can increase the CVD mortality in women but not men, and by what mechanism?
My first thought when I saw your comment was "oh I bet heavy female users are fat dumpy women because attractive women have relationships and families," but then I read the abstract and realized that they controlled for BMI. But I still suspect it's something similar that's harder to control for. I just think when there's a gender difference in an observational study that the first three suspects all have to be sociological.
As a fat dumpy woman with no relationships and family, but also absolutely no desire to indulge in marijuana, I am folding my arms and glaring in all you guys' general direction.
The opening summary seems to have nothing to do with the majority of the article, which is about Bokononism. In fact they're so disconnected that I couldn't tell whether Bokonon was from the book, or a real-life figure, until it mentioned the president from the summary part, seven paragraphs in. I have no idea what part Bokononism played in creating ice-9 or getting it passed around. So with the focus of the review clearly being on how you feel about Bokononism, why give the summary at all?
(The Old Testament review never had a chance because it was a hilarious troll. Maybe it got into something in the later parts, but that's too much to invest in the joke.)
I didn't have time to read many, but I read that one. I liked it. It possibly didn't have enough of a big idea of your own to do really well here, but I enjoyed reading it as you linked a bunch of stuff together that I hadn't. Well done :)
Congratulations to the finalists, honorable mentions, and everybody who submitted or voted in this contest. I'm going to reserve comment until the list of finalists / honorable mentions is definitely 100% finalized (although my book is on neither of those lists, haha). There were some very fun reads. I do wonder how voting habits differed between people -- I only rated a couple a 10 and maybe a couple more as 9, but then very few of my reviews were below a 7 (even a clunky or poorly-conceived review can have merits; low scores were mostly for reviews that were completely off-base).
I reserved 8-10 for reviews I thought were finalist worthy. 5 was "average" as in a solid but uninspired review. I think I gave 2-3 reviews an 8-10, and 6-7 reviews got between 5-7. Can't remember exactly how many I read at this point.
"In all these experiments, the genes of the worms are never edited. And what’s even wilder is that these changes are enduring: the two-headed worm produces offspring that are also two-headed, indefinitely. We’ve achieved a permanent change in the structure of the worm, without changing its genes. We have transcended the genetic code and are instead learning to crack the bioelectric code of the body."
Has anyone here been following the latest EM drive story? Apparently Charles Buhler has created a propellantless propulsion drive that can generate 1 g of thrust. All of the reporting on it I could find was something something... electrostatic fields. Anyway, here is the patent:
My own stance is that this is probably another in a long line of either frauds or measurement mistakes, like perpetual motion machines. I won't believe it until someone builds a working prototype. But I am interested to hear if anyone with the right technical background has any insights. And Buhler had a distinguished career at NASA, which possibly makes him not a crackpot.
If it can really generate 1 g of thrust, then Buhler can sit on it and levitate. If it can generate 0.01 g of thrust, he can hang it on a pendulum and we can watch the pendulum visibly deflect from vertical (but do check for hidden wires or magnets). Even at 0.0001 g we wouldn't have much problem figuring out if it's real, though it would require specialized test facilities and expertise.
But what always happens, and I've seen nothing to suggest this is any different, is that the inventors claim that their prototype can produce ~0.000001 g of thrust, look here at the data from our thrust stand!, and then there's some math that says if someone gives them lots of money they'll be able to build a 1 g version. And they'll "prove" this with a thrust stand that typically gives ~0.000001 g of noise.
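For a rough sense of scale (my own back-of-the-envelope, assuming "0.01 g of thrust" means thrust equal to 1% of the device's weight): hung from a pendulum, the equilibrium deflection would indeed be easily visible.

  import math
  thrust_to_weight = 0.01                    # thrust as a fraction of the device's weight
  theta = math.atan(thrust_to_weight)        # equilibrium tilt of the pendulum from vertical
  print(math.degrees(theta))                 # ~0.57 degrees
  print(1.0 * math.sin(theta) * 100)         # ~1 cm of sideways displacement on a 1 m pendulum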
Measurement mistakes are the likely explanation. I looked at the patent application and it's absurd, with Claim 1 basically describing an electrode to which a voltage has been applied. No, seriously, that's the invention:
"An apparatus for generating a force on an object, comprising:
an object comprising at least one electrode having at least one electrically conductive surface,
wherein at least one voltage is applied to said at least one electrically conductive surface;
wherein the application of said at least one voltage to said at least one electrically conductive surface generates an electric field giving rise to an electrostatic pressure acting on at least one surface of said object, thereby generating a electrostatic pressure force on said at least one surface;
wherein said electrostatic pressure force is characterized by a net resulting electrostatic pressure force acting on said object."
Although I've seen some ridiculous things granted as patents in the past, my more recent interactions with the various patenting authorities make me think they are taking their jobs more seriously now, tending to seriously interrogate the novelty aspects of the claims.
Well yes, but the problem is even simpler: his Claim 1 teaches nothing. You can't write a claim that says "apply voltage to surface in a way that results in miracles". Well, you actually can, but it's useless.
There's another thing that is probably not well-understood by those who haven't dealt with patents: one can patent things that don't work; in fact it happens all the time. I certainly have my share of those. The patent is a right to sue those who use the invention without my permission; if my "invention" doesn't work, I've just wasted a 5-figure sum on getting a patent that is worthless.
Wouldn't an investment of $10-20K make it more likely that the inventor is serious? I guess it only shows they believe in their invention, rather than any evidence the invention actually works, but still. Is the bragging right of having your worthless invention patented worth that much?
ETA: Buhler is associated with the startup company Exodus Propulsion Technologies. So I guess it makes sense to get the patent so they can show it to investors to raise more money. Even if the whole thing is a scam it makes sense from a financial view.
"Buhler is associated with the startup company Exodus Propulsion Technologies. So I guess it makes sense to get the patent so they can show it to investors to raise more money. Even if the whole thing is a scam it makes sense from a financial view."
This does make sense. A patent portfolio is always touted as a key asset.
I can't speculate about their motives. For all I know they truly believe in what they're doing, which still doesn't mean it's not worthless. I'm shocked that they filed the application with Claim 1 written as it is, given that they did engage a patent lawyer. It's possible that the only lawyer willing to touch this subject isn't the best one...
OTOH I've seen some pretty bizarre patents (oh how I wish I still had a link to one that was literally gibberish - but of course the examiner didn't find any prior art so it got granted!), so I genuinely have no idea what the motivation would be...
>his Claim 1 teaches nothing. You can't write a claim that says "apply voltage to surface in a way that results in miracles".
Well... Claim 1 could be interpreted as "apply voltage to surface" (which indeed teaches nothing) - but then watch for this particular kind of miracle (which would, if it were to actually work, teach something, except it doesn't actually work...)
>one can patent things that don't work, it in fact happens all the time.
Agreed. I think there is one exception, that perpetual motion machines have to be accompanied by a working model. AFAIK, nothing else requires a demonstration that it works - just that it is novel (not fully anticipated by prior art), and "useful" (in the sense that, if it _did_ work, it would do _something_ - almost anything).
>The patent is a right to sue those who use the invention without my permission
Yup.
Back when I was at IBM, and my subtree tried to patent software inventions, what we usually did was to make the first claim as general as possible without being part of prior art, and then have a flurry of subsidiary claims which got more and more specific (and therefore more and more defensible as not being prior art, but less and less economically valuable). There were a lot of "the method of claim 1, where the some-barely-constrained-part-of-claim-1 is a more-specific-implementation-of-that-part-of-claim-1"
He's apparently showing how the EM drive would fit into a traditional flying saucer design and justify UFO sighting stories. Uh... no, sorry, I'm not going to give that guy any benefit of the doubt.
Every previous version of this that I've heard of has had some tiny thrust that turned out to be measurement error. If it's really producing a whopping 1 g then it should be simple to demonstrate.
A reactionless drive would violate the basics of the core theory (https://frankwilczek.com/2014/coreTheory.pdf), so for it to work a new theory beyond the standard model of particle physics (and general relativity) would be required. So nothing like the EM drive can function based on known physics, meaning all current theoretical justifications are automatically bunk.
It is not impossible (though highly unlikely) that an engineer would stumble onto completely new physics experimentally. After all, that's how most interesting discoveries, such as superconductivity or the unification of electricity and magnetism, were made. But that would be by pure luck or serendipity, and none of the justifications at this time should be taken seriously.
What's the alt-history in which an engineer stumbles across superconductivity, or unifies electricity and magnetism? Are we classifying Kamerlingh Onnes and James Clerk Maxwell as engineers? If so, then why not go the whole hog and also claim that engineers stumbled across the laws of motion and General Relativity, and for good measure maybe the theory of evolution and Fermat's last theorem as well.
Maxwell wasn’t an engineer; however, both electricity and magnetism were discovered long before Maxwell wrote his equations. In fact the first electric motor is from 1832, and the equations from 1873, so it's probably the worst example you could pick.
And while we are at it, those equations are now taught in engineering courses more than physics. Engineering courses are by and large applied physics anyway.
Electricity and magnetism were discovered prior to Maxwell but the comment to which I was responding said ‘unifying electricity and magnetism.’ Unification was accomplished by Maxwell.
And in the US, Maxwell's equations are generally taught by physics departments, frequently as part of the freshman physics sequence - which, to be sure, is commonly required for engineering students too.
This was an unusual time: the science of physics was still young, it was "easy" to discover basic principles, relationships, etc. We are now in a place where it's incredibly unlikely to have discovered some fundamental law that governs macro-scale mechanics.
Chances of an engineer discovering a violation of Newton's 3rd law with human-scale objects are asymptotically at 0.
Accidental discovery of a _basic_ principle indeed seems exceedingly unlikely. How do you count something like the yield of Castle Bravo, https://en.wikipedia.org/wiki/Castle_Bravo#High_yield, where the interaction of the lithium-7 with 14 MeV D/T fusion neutrons yielded more tritium than the designers had anticipated? Discovery? Bug? Oversight?
It could be all three - and, I think, speaks to my point about potential for abundant discoveries in a young science. I mean, it was the first thermonuclear bomb built by the Los Alamos lab - it would be weird if they didn't discover all kinds of unexpected effects.
Certainly engineers have made scientific discoveries (the cosmic microwave background is a recent example that comes to mind), just not *all* of them, and not the specific examples that were quoted.
I liked it. I knew the common popular understanding of it doesn't match what it actually says, but was impressed by the details and how much larger that mismatch is. 3 things I wanted to see and missed -
1 the back story of the context in which it was written. I am familiar with this story but was hoping to see what you would say about it.
2 discussion of the disputed claim that it is the first piece of science fiction.
3 discussion of the disputed claim that she had help writing it.
Ah, sorry that my review wasn't fully satisfactory, though it's nice to hear that you liked it anyway! I'm afraid that 1-3 are all fairly unsolvable omissions, from my perspective. I prefer to assess a book independent of any context other than my own brain, and it's pretty rare for the story behind the story to grab my attention. I really like butting heads with the work in singular, trying to figure out what makes it tick in a way that I personally appreciate/understand. I like to think that makes any review I write that much more uniquely insightful, but it comes at the cost of understanding other people's experiences and expectations around the book. And making me into a bit of a dunce, when I independently stumble across a revelation that everyone already had fifty years ago.
I was really disappointed not to see it on the shortlist; I really liked it! It made me want to pick up the book, and for me that's the ultimate test of a good review.
I liked how it showed the book had a much more nuanced view of what constitutes "monstrous" than the "playing god is bad" one that one might get from a lit class. I think it explained it pretty well.
I found it good, but not standing out compared to some other reviews. It gave a very nice analysis and retelling of the story and the movie, but not much more than that. I don't think I took away a lot that isn't directly related to the novel. For finalists I usually have this feeling of how the book connects to something bigger.
I think the strongest part is when you compare the book and the movie, and the considerations of whether Frankenstein was "born" a monster or "made" a monster later by how it was treated. I did not hear these thoughts for the first time, so it was not quite an eye-opener. Perhaps I just know the story too well. But overall I did enjoy reading the review, and found convincing what you wrote.
Weird, I feel like the heart of a good book review should be the book itself, not what the book connects to. I mean, don't get me wrong, the most common issue with book reviews is to focus too hard on summarizing, but even still. Surely there's such a thing as overcorrecting. Eventually, you aren't actually reviewing the book, just sort of, whatever's on your mind.
Ah, anyway, my tangential thoughts aside, I can settle for a "good" review. Bummer you didn't get more out of it, but thanks for reading through anyway!
Three puzzles about gnomes and boards (otherwise unrelated). None is very easy or very hard; they're roughly ordered from simpler to tougher.
1. All gnomes are either knights (who always tell the truth) or knaves (who always lie). Each square of a 4x4 board is occupied by a gnome. It is known that there are both knights and knaves among them. Each gnome declares: "Among my neighbors there's an equal number of knights and knaves". How many knaves are there?
(neighbor means along straight lines, not diagonals)
2. Nine gnomes stood in squares of a 3x3 board, and each said hello to his neighbors (again, straight-line neighbors). Then they got off the board, got on the board again (possibly changing positions) and said hello to their neighbors again. And then again - overall they filled the board and said their hellos three times. Prove that some gnomes ended up not saying hello to each other.
3. Each square of a 7x7 board is occupied by a gnome. If two gnomes are neighbors (straight line, not diagonals), their beards' lengths are at most 1 inch apart. Now we take all the gnomes and sit them at a round table. Prove it can be done so that all the neighbors again never have their beards' lengths differ by more than 1 inch.
Ertneqvat bar, gur tabzrf ba gur rqtrf (ohg abg pbearef) ner fheebhaqrq ol guerr bgure tabzrf naq guhf pyrneyl ylvat. Guvf nyfb znxrf gur pbeare tabzrf xanirf. Tvira gung gurer ner xavtugf, bar bs gur tabzrf va gur zvqqyr zhfg or n xavtug, juvpu pna bayl or gur pnfr vs nyy bs gur sbhe zvqqyr tabzrf ner xavtugf. Fb va gbgny, gurer ner gjryir xanirf.
Ertneqvat gjb, gurer ner gjryir unaqfunxrf unccravat cre vgrengvba, sbe n gbgny bs 36. Gur gbgny ahzore bs unaqfunxrf erdhverq (sebz Tnhff) vf avar gvzrf rvtug qvivqrq ol gjb juvpu vf nyfb guvegl-fvk. Fb gurer vf ab fynpx -- rirel unaqfunxr jbhyq unir gb or jvgu n arj arvtuobe. Cre vgrengvba, bar tabzr unf sbhe arvtuobef, sbhe tabzrf unir guerr arvtuobef naq sbhe tabzrf unir gjb arvtuobef.
Jevgvat gur inyrapr bs gur tabzrf nf zhygvfrgf, jr svaq gung gur bayl jnl sbe rnpu gb frr rvtug arvtuobef vf univat guerr tabzrf jvgu gur inyrapr cnggrea {sbhe, gjb, gjb} naq fvk tabzrf jvgu gur inyrapr cnggrea {guerr, guerr, gjb}. Rirel unaqfunxr vaibyirf rknpgyl bar tabzr jvgu inyrapr cnggrea guerr (ng gur rqtrf). Guvf zrnaf gung gur tabzrf jvgu gur {sbhe, gjb, gjb} cnggrea arire funxr unaqf jvgu rnpu bgure. □
Gur ynfg ceboyrz, vg jbhyq or gevivny vs gurer jnf n Unzvygbavna plpyr va gur 7k7 tevq. Rkcrevzragnyyl, guvf frrzf snyfr. Jung V raq hc jvgu vf n 2k2 oybpx jurer V jbhyq unir gb ragre sebz gur svefg ebj yrsg naq rkvg sebz gur obggbz ebj evtug. Jvgu vagrtre-inyhrq orneq yratgu, vg frrzf gung ng yrnfg bar qvntbany jvyy unir gb funer gur orneq yratgu, fb jr pbhyq whfg whzc nybat gung qvntbany. V nffhzr gung jvgu erny inyhrq orneq yratgu, bar pbhyq fubj gung gurer rkvfgf n fhvgnoyr obhaq sbe gur orneq qvssrerapr nybat bar qvntbany.
I think you've got it, congrats! It's much the same way as I solved it, but for completeness sake, let me blow your mind with a different & very succinct solution someone showed me.
Pbybe gur fdhnerf va gur purff cnggrea. Gb zrrg&terrg, gjb tabzrf zhfg fgnaq ba fdhnerf bs qvssrerag pbybef. Gurer ner rvtug cnggreaf bs fdhner-pbybef bire guerr ivfvgf (gjb gvzrf gjb gvzrf gjb) naq avar tabzrf, fb fbzr cnve jvyy unir gur fnzr cnggrea naq jba'g zrrg.
I have never had a cavity in my life, despite failing to maintain the oral hygiene habits that my dentist would recommend. I was thinking about this in the context of the Lumina info that Scott has shared here, and I wondered if it might be interesting to them? I don't really know. It feels mildly insane to say "hey, anyone wanna investigate my oral bacteria?" but it also seems that investigating oral bacteria has led to some interesting discoveries in similar cases. So I am interested in letting someone investigate, if they are interested.
Does anyone know who I would talk to about this, if anyone?
I cut down on sugar a few years ago, for unrelated reasons, and haven't had any dental problems since. I've even gotten compliments from dental assistants.
My elderly dentist conceded the article's main thesis (overall oral health is probably based far more on diet and genetics than formal dental care) was more accurate than not. That certainly seems to have been the case for me; I did things like not seeing a dentist for over ten years and never flossing with zero negative consequence.
Heck, I have a baby tooth that was never replaced and it's hanging on as strong as the adult teeth around it. Both my elderly dentist and the improbably hot young dentist who replaced him when he retired told me I can expect it to hang on the rest of my life - they have 90-year-old patients with baby teeth.
I haven't read this article but I basically already believed this. Especially after the recent revelation that flossing is an op I have very little respect left for dentistry.
I get cleanings done out of a combination of seeking social approval and believing them to be harmless and potentially beneficial. I also have never had a cavity and do not have any particular opinions on fillings.
When was your last cleaning? How many millimeters are your pockets during probing? If they’re all healthy that would be really fascinating - no cavities AND pristine gums?
It's a very simple site where whoever can answer that question uploads their answer. It's something of a postrat project, yet some of the answers I got from right here, the ACX comments section. You can see it as crowd-sourced wisdom I suppose. Maybe even as Wikipedia, but for wisdom instead of knowledge.
Take everything you know, everything you have experienced, compress it into a diamond of truth, and share it with the world!
You can read some more about the project, including the story of its purely mystical origin, on my blog:
That in TeX, if you want to use commas as thousands separators while in math mode, you gotta surround them with braces; otherwise TeX assumes they're punctuation and automatically adds spaces after them.
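A minimal example of the behavior described (nothing beyond standard TeX math mode assumed):

  % A bare comma in math mode is treated as punctuation, so TeX adds a thin space after it:
  $12,345$    % comes out roughly as "12, 345"
  % Wrapping the comma in braces makes it an ordinary symbol, so no extra space is added:
  $12{,}345$  % comes out as "12,345"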
I'm curious how you're thinking about ranking/sorting/discoverability of content from a reader's perspective. Opening up your app, the first post I read says:
> Everyone has a unique principle that is both their source and final destination. The truth they were born to manifest. It is not subjective, since it existed before anyone was born. Neither can it be spoken. But it is truth. And the truth is infinite.
To me, this sounds less like wisdom, and more like someone trying to sound Deeply Wise, and puts me off from spending more time scrolling through the feed.
In short, I think the concept of crowdsourcing wisdom is an interesting one, but I think the reading experience needs to be redesigned from its current state in order to provide value for readers.
Well, you can't win with everyone, but there is a search bar, and did you see answer #2? It's not at all limited to woo-woo stuff (I happen to believe in woo-woo). I did consider having likes, but then that would mean people would have to sign in, and it's something of a hard sell to sign in just to like. Something that could be added is a tagging system I guess, so that if someone only wants to see secular answers, they can do that.
Another thing I'm thinking about is that maybe the site should load at first with a random order, then there's the dropdown if you want to go chronological/new first. But thanks a lot for the feedback! Did any of the suggestions I made make sense to you, and/or do you have a concrete suggestion to improve the UI?
I'm curious if anyone who read (or wants to read) my review of Discrimination and Disparities by Thomas Sowell has any feedback for me. I would love to know how to make my reviews better. All feedback is welcome.
I found it an excellent review, I gave it 9/10. I think the reason why it wasn't a 10 was that I found the basic distinction of 1A vs. 1B not very novel, and some of the conclusion also not so deep. For example, I found South Africa really nice as an example, but the explanation in terms of cost was on a rather superficial level. You can explain almost everything in terms of balancing cost.
But those are really some minor nitpicks, I thoroughly enjoyed the review.
Overall, I liked it, and a lot of effort clearly went into it, including cross referencing many other works by Sowell. Unlike another commenter, I don't have much of a problem with a review that "just" summarizes the book, especially, when as I noted, it incorporates references to the author's other work, as well. Indeed, it was almost my highest rated review. I hope it ends up being included in a finalists list.
When I read the review, I thought that the summary of Sowell was pretty good, but some of the critiques weren't perfect.
Although I read it a while ago, if memory serves, one of my issues with the critique of Sowell related to the argument that Sowell's explanation of disparities doesn't provide a satisfying solution to the problem.
I thought that the critique was misplaced, in that it ignores Sowell's perspective. His perspective, if I recall, is that the default isn't uniformity, such that disparity is an artificial aberration; disparity is the natural state of affairs. He isn't merely providing an alternative explanation to the regnant discrimination-based one for disparity, he's providing an alternative perspective on the matter entirely.
Once one accepts that disparate outcomes are the natural state of humanity, rather than artificial aberrations, it's no longer obvious that this is even a problem that begs a solution. At the minimum, the nature of the problem would be different.
Personally, I think you spend way too much time summarizing the book, and too little time analyzing the book. Once you get around to the analysis at the end, the review becomes much more interesting.
I suggest that you include the analysis as you go along. For example, your discussion of what you see as his category error re types of discrimination could be inserted after your summary of his argument, instead of being left to the end.
Thanks. I tried it both ways—moving my discussion up and leaving it at the bottom—and ended up with that. I was wondering which was best so I appreciate your feedback.
I read it. I felt it did a competent job of communicating the views of the author and laying out the thesis and the evidence in support. What I didn't get was a feeling that you had something to say or add to the views of the author, or on the topic more generally. I'm sure opinions differ, but a very strong review, in my view, is one where the author brings considerable knowledge or insight into the topic and adds value to the content of the book with their review. Or is just a fantastic communicator, which shines through in the review. I didn't get that from this review.
How are people interpreting OpenAI adding a former NSA chief to their board? The public statement was he will help "better understand how AI can be used to strengthen cybersecurity".
Is it a mistake to discount that public statement and instead view the hiring as a response to Aschenbrenner's "Situational Awareness" paper and therefore a first step of AGI becoming a soft government project?
Completely superficial. As last year's failed coup demonstrated, the OpenAI board is almost completely impotent, which doesn't make them any different from most other corporate boards. Ideology almost always collapses when it tries to oppose economic forces (cf. communism).
Charitably, a proactive attempt to roll with inevitable government and security restrictions (some of which will be hidden from the public). Getting a familiar face on board can help smooth the process, and signal to regulators that OpenAI is taking a mature, reasonable approach and doesn't need to be made an example of (to encourage the others).
It's the revolving door in action, and marks OpenAI's transition to being just another private industry company that wants nice juicy government contracts, and lobbies for easier regulations for themselves with the carrot of a big fat sinecure in the company once you leave public service, if you go easy on them and promote their interests while you are in power:
"The term "revolving door" refers to the movement of high-level employees from public-sector jobs to private-sector jobs and vice versa. The idea is that there is a revolving door between the two sectors as many legislators and regulators become lobbyists and consultants for the industries they once regulated and some private industry heads or lobbyists receive government appointments that relate to their former private posts.
Such instances have grown in democracies in recent years with increased lobbying efforts and have led to debate over the extent former government officials are allowed to utilize connections formed and knowledge attained in previous jobs in public service to enrich themselves or be overly influential on shaping or watering down pending legislation."
That the guy is ex-NSA is just more evidence of strengthening government links. "Look, we're good guys, see? one of your own is on our board! we'll be compliant with all requirements around security and access! and in return...if you scratch our backs, we'll scratch yours".
Very true. Well, would it be fair to assume the hiring (while not directly related to Aschenbrenner) is in fact related to drastically increasing their security measures against sophisticated thefts? Or is it easier to take it at face value that they want to sell AI as a cybersecurity product?
Can I make a suggestion to edit the book titles so they're easier to find? Complete Rhyming Dictionary is under T for The Complete Rhyming Dictionary. And I don't see Spirit of Rationalism under either Spirit or The.
This practice of not skipping the definite or indefinite article at the beginning of a title for alphabetization has always annoyed me.
Libraries drop the initial article to order books but things like ‘top 500 songs of all time’ usually start with all the songs that begin with the indefinite article ‘A’.
I’m surprised that this contest doesn’t use a little macro in the spreadsheet to strip the ‘a’s and ‘the’s before sorting. I mean the site is pretty rich with coders that could do this in a couple minutes.
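A minimal sketch of what such a macro could do, in Python rather than spreadsheet script (not the contest's actual setup; the titles are just examples from this thread):

  # Sort titles while ignoring a leading "A", "An", or "The".
  titles = ["The Spirit of Rationalism", "Frankenstein", "The Complete Rhyming Dictionary"]

  def sort_key(title):
      words = title.split()
      if words and words[0].lower() in {"a", "an", "the"}:
          words = words[1:]          # drop the leading article
      return " ".join(words).lower()

  print(sorted(titles, key=sort_key))
  # ['The Complete Rhyming Dictionary', 'Frankenstein', 'The Spirit of Rationalism']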
There was a Far Side collection that had an appendix at the end with all the comics listed alphabetically, and all of them were under T because they all started with "The one about..."
I was praised here months ago for admitting I was wrong about the amount of corruption in the Biden family, so I'm incentivized to follow up now. My original thought (as best I can recall) was that all politicians do not-quite-illegal influence-peddling (it's called 'campaign contributions') and that Hunter was just a little more brazen than usual. At some point I thought "ok this is out of the ordinary" but now I can't find any trace of emotional response to any of it. This feels a bit like cynicism - the things we don't like when our officials do them aren't really illegal because they make the laws, which is why the best anyone on either side can come up with is a silly gun charge or some garbage about mis-classifying a payoff. It feels exactly like my response to the media (not just the mainstream legacy media, but 'independent journalists' on twitter too) always being wrong and incomplete in a way that supports their point, without technically telling lies.
I'm not trying to wriggle out of having been wrong, mind you, just noting a kind of fatigue that goes beyond not being outraged (that hasn't been the case for me for a very long time).
Feel like linking the Daily Show talking about the trial of Senator Robert Menendez, which ends with an aside about all the legal forms of corruption other senators and representatives can get away with. https://www.youtube.com/watch?v=5udtSQ-LtM0&t=376s
No, I think you should walk it back if you feel differently now. How do you know you were even getting kudos for intellectual rigor (rare) and not for giving people the opportunity to imagine you humbling yourself before their big wrinkly brains on the internet (common)?
Don't hold yourself to some sort of intellectual standard that over-privileges views you don't hold. It's always good to give a couple extra weights to the opponent's side of the balance beam to counteract the ego sitting on yours, but that doesn't mean you should let your conscientiousness force you into something equally wrong.
Is Hunter unusually corrupt? Yes, because he is a drug addict fuck boi; his corruption is more salacious and less hidden. Is he more corrupt than eg. The Trumps? Fuck no. Not even close.
The Obamas? Yes, but also ehhhhh.
The Bushes FUCK no, not even close.
The clintons? Ehhhh... maybe, maybe not. They are smooth operators.
The Bushes again? FUCK no, not even close.
The Reagans? FUCK no, not even close.
You have to go back to Carter to find a president you can firmly say "The Bidens are definitively shady compared to this guy"
Are you saying Joe/Hunter Biden is LESS corrupt than the Trumps? How do you watch the news and come to this conclusion? Hunter literally went around the world to foreign corporations, sat in front of their leadership, put his dad, the Vice President of the United States, on speakerphone, and raked in $20 million USD this way. Most notably, flying to Ukraine with his dad on Air Force Two to serve on the board of directors of a Ukrainian oil company, an industry he knows nothing about, in a country he has no connection with except for the fact that his Vice President dad was in charge of Ukrainian foreign policy. How convenient.
This doesn't answer my question. No one on that list is named Trump. And none of these people are accused of corruption. It's a bunch of stuff like "lying under oath" which is what they get you for when they can't get you for any other crime. If that's the standard you want to use, I've got a laundry list of Biden administration executives guilty of such things.
Wait, how bad has it gotten? I've been out of touch, but it seemed like Hunter Biden was a typical case of something that pops up every now and again, and he himself didn't really reflect on the rest of his family. Hunter certainly seemed to be trying to give the impression that he was selling access to his father. The real question was how much his father was in on it, and I hadn't heard that there was any real evidence that he'd done anything serious?
Nothing has really changed at all when it comes to the question you are asking. The only development was that Hunter was convicted of lying on a gun application (stating he wasn't a drug addict when he was). His tax case continues.
The right was hoping he would get acquitted or something similar on the gun charge so they could use it during the campaign as evidence of corruption. But if anything, it's evidence there are people in the "deep state" out to get the Bidens: this is a charge that is basically never brought; when it is brought, a conviction basically never results in jail time (which Hunter has a good shot at getting); it's potentially unconstitutional based on recent SC rulings; and Biden has publicly said he won't pardon Hunter no matter the sentence. Personally, I think Hunter is kind of a sacrifice that Biden has to make to show how not-corrupt he is compared to Trump. The right hates this because it hurts their talking points, so they come up with other BS to cry about.
(I hate both parties, so don't take this as an endorsement of what Biden or the left is doing.)
That is literally the opposite of what happened. The gun charge was being used by Hunter's lawyers with the connivance of the prosecution to create a plea deal that immunized Hunter in perpetuity for the tax evasion charges that have yet to be litigated. The plea deal agreed to was so egregiously flawed that the judge threw it out. Once the fix became public the prosecution had to proceed for political reasons.
I imagine that the "right" hoped that investigation of the finances of the tax evasion schemes would uncover evidence (or implication at least) of corruption and that is equally why the defense was eager to make them go away.
Yes, they had the calamity of the plea deal; not sure how that makes what I said "literally the opposite of what happened"? I don't think the plea deal was the sign of a "fix" - basically any other person charged with this crime would plead out. If they didn't have a history of violent crimes, the plea deal would likely have no jail time. But Hunter may get some jail time (though not a lot; I believe I read the guidance is somewhere around 1.5 years at most). So being a Biden has led him to get a harsher sentence than any "normal" person.
Hunter's tax case, which is ongoing, is much more likely to lead to any signs of corruption. I think the right saw this gun case as an embarrassing/scandalous episode to paint the Bidens as slimy (just like Trump, so don't worry about Trump being slimy; though this part goes unsaid).
Well, the Democrats are the party pushing gun control hard (and while I don't like guns, Americans have a legal right to own them).
So if the son of the current president, who is a Democrat, turns out to be the same "drug addict lying on application to get gun" that all the scaremongering is about, then they have to prosecute him or else appear like big fat hypocrites.
Here is a guy I would not trust to send to the shops to buy a litre of milk who brazenly flouted the rules, flaunted his rule-breaking, and is the son of the highest official in the land for the party which wants (allegedly) to take all guns away from private citizens. What else can you do but let him have his day in court?
Yeah, I basically agree. The potential irony is that Hunter's case could lead to the rules/laws against drug addicts possessing guns being struck down. If the case made it to the Supreme Court, that would be the likely outcome. So to not look/act corruptly, the Biden admin has to undermine one of his party's main goals! I suppose that's a sign of a type of virtue (though, like you, I am skeptical that the Dems - or any political party - can be truly virtuous).
I think Hunter Biden is definitely the poster child for "you do not want this person owning a gun" 😁
I had forgotten about the tax charges as mentioned above! So I suppose the gun trial at least worked in his favour that way. It certainly distracted attention.
What would count as "anything serious"? If Joe sat in on a phone call with Hunter while he was making a deal, would that be serious?
If Joe remained aware that certain courses of action which might be advantageous for the US could imperil his son's business dealings in China and Ukraine, would that be something serious?
Repeat customers are fairly good evidence a product was delivered.
I'd be shocked if there weren't several drug trials on record where the cops only found money and messages about drugs, and that ended in conviction and failed every appeal.
Psychics, dowsers, and astrologers get repeat customers. Yes, that's evidence that a "product" was delivered, but there's always been great demand for the product "tell me what I want to hear".
That's true. And maybe there really are some escorts out there who just escort their clients to events in a totally above-board way and never touch their penises. You can't prove it either way.
Still, if you tell me you're one of those escorts I'm going to be skeptical.
When you put a well-connected person on your board, you are not necessarily hoping for direct quid pro quo, just a general position of advantage.
It's possible that Joe Biden is a saint and never once allowed his fuckup son's surprisingly lucrative business dealings to taint his judgement. Various shady foreign entities might have assumed he would, but he remained resolutely above it at absolutely all times.
Now, to be fair, all politicians have relatives, and they're all subject to the same issues. But Biden seems uniquely subject to it because Hunter is such a crackhead fuckup, and the gap between what he could achieve on his own and what he achieved with a powerful father is so clearly vast. I think Neil and Marvin Bush would have done just fine in business even without their father and brother being Presidents, but Hunter Biden would be giving handjobs for crack in a flophouse in Wilmington.
Hunter is just so incompetent there is no possible façade to coat over his influence peddling. All of the huge sums of money and sinecures like the Burisma board position given to Hunter were purely to curry favor with Joe.
However, imagine if Beau Biden was still alive. He went to U. Penn., had a law degree, served in the military JAG, and was the AG of Delaware. If he was offered a fistful of lucrative deals, is it influence peddling? Obviously Beau was highly competent and could have been sought out for his own merit. And obviously it doesn't hurt that his dad was a long time Senator and VP for Obama. Who can say why people truly wanted to throw money at him. With Hunter, there is no such plausible deniability.
I think this is part of how bias happens. If someone finds out about malfeasance that happens on the other side, they have infinite energy for salaciously pointing it out and deriving comfort from this fact. If they find out it happens on their own side, they get depressed and come up with reasons why it doesn't bear mentioning.
This isn't a conscious process, most people cannot help what they feel, but it is sometimes important to see emotions are puppeting you, in ways that you may not reflectively endorse.
>This isn't a conscious process, most people cannot help what they feel, but it is sometimes important to see emotions are puppeting you, in ways that you may not reflectively endorse.
Yes. What looks like hypocrisy from the outside can just feel like natural pursuit of what _feels_ like "the more important case" from the inside.
I've considered both sides my sworn enemies for coming up on 10 years now. So I don't think that's it in my case, but I see the point you're trying to make.
>I've considered both sides my sworn enemies for coming up on 10 years now.
I'm curious. I tend to think of both the left and the right as enemies of individual freedom. Different freedoms in the two cases, but neither a friend to freedom (though neither of the USA factions is as bad as Stalin or Hitler or Mao, of course).
Yeah, at this point I think fatigue has set in for everybody. I can't keep up with the number of cases being brought against Trump (except when they collapse into comedy like the Georgia one), and Hunter Biden has dragged his family through the mud so many times in public already that an actual conviction is an anti-climax compared to wondering when the next dick pic of him taking drugs in the company of ladies of negotiable affection will be released.
>the best anyone on either side can come up with is a silly gun charge or some garbage about mis-classifying a payoff.
I think the classified documents charge is the actual serious one people should care about, but the judge in that case seems determined to stall until after the election.
I think it's becoming clear that every President, VP, SecState etc with routine access to vast reams of classified documents winds up mishandling them somehow.
And usually, that's because so much routine correspondence gets classified, because there is literally no incentive not to stamp every piece of paper you can get your hands on.
EDIT: I was wrong. There IS some incentive not to do that. See John Schilling's reply pointing out 28 CFR § 17.22.
There is literally a law saying you can go to jail if you do that. It's very rare for anyone to be actually convicted, because it's difficult to prove in any specific case. But it does factor heavily into the training people like me have to take every year for handling classified information.
I think "there is a law against that, seriously, don't do that or you'll get in trouble", regularly repeated, is literally *some* incentive. Possibly inadequate, but it's there.
And there's another incentive, which is that when something is classified it becomes an immensely greater PITA to deal with even if you do have All The Clearances, so if you're thinking about classifying something that you're going to have to work with regularly, you'll be particularly incentivized to not do that.
28 CFR § 17.22 - Classification of information; limitations.
TL;DR: you can't classify information unless you can clearly and specifically define how it would harm national security to reveal it, you can't classify information if there is "significant doubt" as to whether it needs to be classified, and you very specifically can't classify information "to conceal inefficiency, violations of law, or administrative error; to prevent embarrassment to a person, organization, or agency; to restrain competition; or to prevent or delay release of information that does not require protection in the interest of national security".
IIRC, the theoretical penalty can be up to five years in prison. In practice, as with most other classified-information violations, the penalty if you get caught is usually a slap on the wrist and then you need to find another job, because nobody wants to go through the trouble of convicting you. That's too much like work, and embarrassing if they try and fail. But if you're a stubborn and obnoxious enough jerk about it, they may make an exception.
The difference is that normal politicians apologize and immediately return the documents when they discover them, whereas Trump actively lied to the government and repeatedly tried to hide the documents and prevent the government from retrieving them.
I’m not saying that difference doesn’t matter at all, but I don’t think we should so easily dismiss top officials being that cavalier about the rules and only fix it when they get caught. Like, these are not new or obscure rules they are violating, and it’s hard to believe the violations were unintentional (I suspect they are lazy rather than nefarious, but still).
Material is not supposed to be classified unless its getting into the wrong hands would result in grave damage to national security. Either people are vastly over-classifying (also illegal!) or they are basically ignoring proper handling protocol.
Perhaps, but that’s also illegal, and there’s a process for dealing with stuff that is marked classified but shouldn’t be, and it’s not “leave it in a box until you get caught”. Plus if you deal with classified it’s your responsibility to understand what is classified, and what information you produce would be classified. You will be briefed in detail on this before you are going to access any of it. It is not the case that someone will swoop in out of the blue and declare “surprise this was all classified and no one told you”. It’s negligent at best.
This is like a person who is caught driving 20mph over the speed limit while drunk pointing to other people getting away with accidentally parking in a fire zone to try to excuse their own behavior.
Like sure, overclassification is a thing, and a lot of people from both parties have discovered that they accidentally possessed classified documents *and then returned them*, but that doesn't excuse the whataboutism here.
What? No, it’s like two people both get caught drunk driving, and one of them says “yup, caught me aw shucks” and gets in the squad car and has his lawyer on speed dial, and the other yells and screams and goes on a sovereign citizen rant about how the cops have no right to detain him. I mean yes, the latter is worse, but both are equally culpable for drunk driving. And you want to excuse the former completely because they were so polite when they got caught.
It is not credible that Biden and Hillary were unaware that it's wrong to keep a box/server full of classified material in your personal home/office. My understanding is a lot of the stuff was marked, and even if it wasn't, everyone has to be periodically trained and sign a document that says they understand what information is classified. And even the stuff that isn't classified isn't stuff you're supposed to have sitting around your home.
This wasn’t “accidental”, it was lazy. It’s harder to deal properly with classified, and they thought they were too big to bother with silly rules for little people.
In the SF Bay Area there's a huge amateur poetry scene. I'm sure other cities have their own. I suggest you Google "<Your city's name> poetry open mics" or "<Your city's name> poetry readings". If you're interested in writing groups: "<Your city's name> poetry writing groups" or "<Your city's name> poetry workshops". Good luck!
What kind of poetry do you like? I signed up for a poem a day with Paris Review, and hate about 70% of what they send me, especially the stuff that's prose about the most prosaic things imaginable:
I'm in the kitchen rolling the cardboard back of some matches into a
little column while the cat rubs against my calves, then
realize I need to pee.
Although today I got Merrill. Poetry seems to have fallen off a cliff somewhere around 1980, or maybe I sort of fell out of the back of the truck. And I'm really not that picky! I like poetry as far back as Milton. In this century I enjoy many of the poets that are widely known, and occasionally stumble on somebody more obscure and binge on them.
I tried to come up with a generalization and failed; my tastes are eclectic. I mostly do poetry in Russian, since that's the language I always spoke. (I don't expect that to make it harder to organize something like a poetry evening; where I am there are a lot of people like me who fled from Putinism.) But poetry in other languages I understand fascinates me just as much, I just know less of it.
I guess I could point to Russian underground avant-garde poetry of the 1920s and 1930s, like the Oberiu, and specifically Alexander Vvedensky and Daniil Kharms, as a great source of my inspiration; but it's just one of my many loves; I am all over the place.
So I just want people to throw any poetry they think of at me, and I want to throw some of my favorites at someone.
>On the other hand, this book, which describes a legal system about as totalitarian as one can imagine, was scribed before the rise of Mussolini, of Hitler, of Stalin and Mao
Having read the review and not the book, nothing about this legal system strikes me as totalitarian. I'm in fact left wondering if Kafka is deliberately breaking every rule he can in order to create an anti-legal system.
No, it's not totalitarian because it's not partisan or political in any sense. I used the term as a cheap hook, hoping that would make the review more interesting, and also because that is how the work is often, wrongly, interpreted. I follow up your quote with: "What, then, is the book about? Totalitarianism, perhaps." The "perhaps" is key, because it isn't about totalitarianism. Nevertheless, it does indeed foreshadow what a totalitarian regime might *feel like*. The loss of privacy and the constant concern about the all-pervasive authorities presage totalitarianism, no?
Kafka doesn't write about how things are objectively but rather what they feel like. It's spooky, I think, how what Kafka felt like in Prague in 1915 would resemble what Prague would feel like to many others in 1955 or 1970.
All the interpretations of the novel I've read claim that "guilt" means guilt in a religious sense. Maybe that's true in that the novel has multiple meanings at once and there exists a meta-level, but I claim that the most textual reading of the novel--which disappears behind most interpretations--is that the book is about the Law running amok. Man creates Law, submits to it, then the Law goes crazy and Man is unable to regain control because the Law is a kind of superintelligence or superstupidity, same thing. Kafka was a lawyer by day, so he knew something about it.
I really wish I'd written a better review so that Kafka would be a discussion point on ACX now, on the hundredth anniversary of his death, while he's both more popular and relevant than ever. To me his relevance is in perceiving that most key elements of totalitarianism have nothing to do with politics; they are simply inherent in human nature and scalable to ubiquitous levels through technology. Ideology has nothing to do with it.
Honestly, the review didn't work for me. I guess that this is due to my background. I am from Germany, and this book is a classic in Germany, including the aspects that you described, like the absurdity of never knowing what is going on or what the trial is about.
I think you did describe this well, but only to a level that I already knew. I can imagine that this book caught you extremely flat-footed when you didn't know what to expect, and you described it well. But to me the review only repeated things that are quite often said about the book.
Sorry, I guess it sometimes happens that you just have a different connection to the book than your reader.
In retrospect I should have changed the opening sentence in which I state the book "took me totally by surprise". Since it is a classic novel most readers are familiar with, at least by reputation, my first idea was to try to write a humorous review from the point of view of someone who took the book to be an actual account of real events he found unbelievable and shocking. But I found I could only keep that schtick up for a few paragraphs, so reverted to a more straightforward tone. I've actually read the book several times over the years but thought my misleading opening sentence could stand because at least it jumped right into things.
I thought my ultimate interpretation of Justice seizing control like an alien intelligence (or an AI) might be original, but I only made the AI analogy as a subtle hint (too subtle, perhaps) because I didn't want it to swallow up the other themes.
Are there any vegans here who oppose lab-grown meat? I have a piece considering some arguments to this effect, but I don't feel like I canvassed all the possible (smart) reasons why vegans might oppose it: https://open.substack.com/pub/wollenblog/p/vegans-against-lab-grown-meat?r=2248ub&utm_medium=ios It was hard finding vegan anti-lab-grown-meat arguments that were clear and fleshed-out. I'd be interested to hear more arguments.
I have seen (and largely agree with) a few arguments that you did not mention.
First, that eating lab-grown meat isn't necessarily an animal rights issue in itself, but that it would potentially be unreliable as a commercial industry. How do I know if this burger is actually lab-grown or if it was from a cow that was killed? Theoretically regulations around labeling, supply chains, lab inspections and such could assuage a lot of that concern. But if I see a lab-grown burger on the menu at a mom-and-pop hole-in-the-wall, I'm not really going to trust it.
Second, environmental concerns. A lot of people are vegan/vegetarian because animals take a tremendous amount of energy and water to keep alive, and contribute to pollution of waterways and the air. What is the carbon cost of a pound of lab-grown ground lamb, compared to pasture-raised lamb? How does the lab dispose of byproducts? How much water does the process require, at scale? I would need reliable answers to these before I would be willing to replace any of my current non-meat protein sources with lab-grown meat.
FWIW, I'm not a vegan but I significantly limit consumption of animal products for ethical and environmental reasons.
> How do I know if this burger is actually lab-grown or if it was from a cow that was killed?
Sounds to me like an isolated request for rigor, because this argument could be applied to any food labeling. How do I know if this chocolate really is vegan? If this milk is organic? If these eggs have been laid by free-range hens? If this bottle of water is low in sodium? If this corn is GMO-free? If this carrot has been locally produced?
First, I generally trust that a packaged product contains what it says it does because of food regulations in the US. Once you take off the packaging, all bets are off. Some random person could lie to me about what package a product came out of, and at a small restaurant with narrow margins, there is an incentive to lie.
Second, while I hope that the carrots and corn I buy at the farmer's market are organic, the death of a sentient creature is not on the line. It's reasonable to demand more rigor when you're talking about meat than vegetables.
The closest thing to such an argument I'm aware of is that you didn't ask the cow permission for her stem cells. These are harvested from a biopsy (or placenta tissue can be used IIRC) and won't harm the animal. But you didn't ask permission, so perhaps it's wrong.
One argument I've seen is that it normalises meat-eating. If your position is that meat-eating is wrong, even if done without the cruelty involved in factory farming, then lab-grown meat is like a way of committing sin without the guilt. It's still meat, and meat is still murder. If omnivores/BLOODMOUTH CARNISTS have the option to eat meat without guilt, they won't be encouraged to give up meat-eating completely and so the horrors of factory farming and using animals for other purposes will continue.
This strikes me as similar to the Bolsheviks getting mad that workers were actually uniting and negotiating improved conditions from their bosses.
I mean, if meat is murder, go with less murder! I can’t imagine a scenario in which cheap and tasty lab meat *increases* the amount of “on the hoof” meat consumed.
I'm going out on a limb here, but it's a religious objection. Eating meat is bad, simpliciter, and it doesn't matter if it's lab-grown meat. You should not be eating meat because eating meat is evil.
The rationalisation of that is that, until and unless *all* existing farming of animals for meat is done away with, lab-grown meat eaters are the "reasonable believers" of Sam Harris and the Atheist Horsemen - they give cover to the crazy suicide bombers and abortion clinic bombers etc. Just as "We don't endorse that crazy violent stuff, but yeah, us and them both believe in God/Allah" gives the extremists a shield, so does "I only eat the lab-grown meat" give a shield to the BLOODMOUTH CARNISTS eating torture meat from the cruel agri-business and abattoirs.
Funnily enough, there's a current scandal in Ireland about horses being exported for meat:
I think this is impressively backwards. A problem with moral realism is the difficulty of coming up with objective justifications. Normally, moral realists have some sort of assurance that they'll be able to resolve this. Your argument assumes realism, but how could an objective justification ever *not* look like a coincidence of the sort you're complaining about? That just seems like accepting the argument against moral realism, but refusing to give it up and instead blaming every possible candidate for "right morality".
While most deontological libertarians don't justify the NAP through prosperity directly, such justification, where attempted, generally turns on game-theoretic facts which are closely related to the reasons for expecting that prosperity.
> Rothbardians believe in what seems to be a most incredible coincidence. They think, on the one hand, that our natural rights require anarcho-capitalism, and maintain, on the other, that this set-up just so happens to be the ideal economic system for human flourishing.
You're putting the causality backwards, and you'll find Nazis, monarchists, social Darwinists, egoists, etc. all agree that "it is good to be strong".
Ancaps are right wing, even if 1/3rd of us are furries. I'd expect anything to the right of secularized Christianity to agree with the statement "good morals make good systems".
As a NAP- and anarcho-capitalism sympathetic libertarian atheist, I don't think this correlation requires god to explain. In some sense, there's a fully general counterargument here, which is that *a priori* god is inherently more unlikely and complex than this particular phenomenon arising basically by coincidence. It's also not clear to me at all why positing god offers any explanatory power here; I'm not aware of any religion whose god(s) behave in a way that is consistent with this outcome. You could, I suppose, assert a more deist-like position, but then this particular god really only seems to exist as an explanation for this particular phenomenon, which is circular and leans extra-hard into the objection above.
But, I think we can be more specific as well. Here's a general argument: whatever issues individual people have, any institution made up of them and ruling over them will have the same or related limitations, while also having (by definition) the ability to use force to conceal its failures. David Friedman makes an argument along similar lines, in more detail, here: https://www.youtube.com/watch?v=Bpn645huKUg. Maybe some theorists don't address this mystery or even realize it's surprising, but I don't think that means there is no good non-supernatural explanation.
Isn't this assumption of inherent intelligence exactly what's wrong with theistic hypotheses in the kolmogorov complexity framework? Humans are intelligent as a result, far enough down the line, of the relatively simple behavior of relatively simple particles. In order to describe a universe with humans, it suffices to describe the fundamental physics, and their intelligence follows.
If god could be broken down this way, it would no longer be god--its intelligent behavior has to be described directly, with a vastly more complicated algorithm than that describing the behavior of fundamental forces and particles.
I think your understanding of what "complexity" means is entirely backwards here.
Doesn't the complexity of your hypothetical god-entity depend on what sort of god-entity you're hypothesizing? A "watchmaker" god-entity would only need to be able to define and implement the 26 fundamental constants of our universe and designate the 17 fundamental particles and their properties. Then inject > 3.2 × 10^71 Joules into a bubble the size of the Planck scale and watch what happens! This god-entity wouldn't have to be able to compute the outcome of such a universe. It could just sit back and observe the emergent phenomena that may or may not result from the parameters it applies to its experiment. And this hypothetical god-entity may not even understand the tools that it's using. Such tools could have been constructed by other god-entities or by higher-level god-entities.
OTOH, if your god-entity is intimately involved in the workings of its universe, then it would require a computational power with more energy and bits than what is contained in our universe. Of course, this may be logical overthink on my part. But speculating about god-entities — or the lack thereof — will only yield questionable results, because our observational perspective vis a vis time and space is limited. The hypothetical god-entity — if it exists — may exist in a universe that has different physical laws than ours does. Its intelligence may be based on different principles from ours.
You can't get a complex brain or complex behaviour straight out of the standard model, you need starting conditions as well.
Now, you can assume a small universe with highly complex starting conditions, but that doesn't give you an argument that human intelligence is actually simple.
Or you could assume a large universe with low KC, since it is every combination of everything... but theists can use a similar argument, that god is both simple and all-encompassing.
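Roughly, and purely as an editorial sketch of the comparison both sides are gesturing at (the notation is mine, not either commenter's):

\[
K(\text{observed universe}) \;\lesssim\; K(\text{physical laws}) + K(\text{initial conditions}) + O(1),
\]
\[
K(\text{theistic hypothesis}) \;\approx\; K(\text{a directly specified intelligent agent}).
\]

The dispute is over whether the second quantity really exceeds the first once the initial conditions (or a "large universe" ensemble) are priced in.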
Now that I've stopped laughing (and it's not at you, just the notion of Libertarians and some of the Old Testament events re: the NAP), let me say I think maybe Deists, but not theists. Based on:
"According to the NAP, people have indefeasible and absolute property rights in their bodies"
According to theism, you don't, or at least not if you're a Biblical theist, because God as the creator has the ultimate right over us all. We don't own our bodies absolutely and cannot do as we wish with them unfettered. That's going to run up hard against the NAP there.
Mostly, I can see why many Libertarians are atheists because that is the ultimate "Me, myself, I alone am the master of my fate and decide what I shall and shall not do" stance. "Invictus" is the mindset that comes to my mind for them:
"Out of the night that covers me,
Black as the pit from pole to pole,
I thank whatever gods may be
For my unconquerable soul.
In the fell clutch of circumstance
I have not winced nor cried aloud.
Under the bludgeonings of chance
My head is bloody, but unbowed.
Beyond this place of wrath and tears
Looms but the Horror of the shade,
And yet the menace of the years
Finds and shall find me unafraid.
It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate,
I am the captain of my soul."
But theism requires you to bow your head before another, greater power than Me, Myself and I. I think a lot of Libertarians are too stiff-necked to bow.
Agreed, in a rule-utilitarian system the miraculous coincidence between being morally righteous and delivering the best outcomes goes away: the rule is morally righteous *because* it delivers the best outcomes.
I found that you gave a lot of very good arguments, and you convinced me that Silver and many other Bayesians attack only strawman versions of frequentists. This part of the review was really great, and would have deserved to be in the final if that was the only part.
But at the same time, in my eyes you did the same to the other side, and attacked strawman versions of Bayesians. For example, you criticized that Bayesians would continue to call for more evidence, and you made it sound like Bayesians would otherwise refuse to draw a conclusion before having tons of clear evidence. But that is absolutely not true. Just the opposite: a lot of the best Bayesians (like Scott, but also some superforecasters) try really, really hard to reason with limited evidence. If anything, that is something Bayesians try harder at than frequentists do.
And that is just one example. Another is that all introductions to Bayesian thinking that I know stress over and over again that you should not be overly attached to the calculations that you do. These are a tool to clarify your thoughts, to find out which arguments are important or unimportant. But you should never ever take the final product at face value. Yet the strawman Bayesians that you presented made exactly this mistake.
Throughout the whole review, I had the very strong sense that you fought against a caricature of a Bayesian. That meant for me that your review was composed of a really strong part and of a really weak part (which were often intermixed). In the end, I settled for a 7/10.
Added: just to make this clear, I wrote a lot about the weakness of your review. That's because I want to be constructive. But the other part of the review was really, really strong, and I learned a lot from that!
Appreciate this thorough review (of a review, lol) and all the kind words. I agree that Scott and many more modern Bayesians are a.) much less hostile to frequentist statistics, and b.) much more circumspect about how reliable Bayesian approaches are. If I'd been arguing directly against Scott, the review would have been significantly different.
I do think that some of Silver's ... let's call it Bayesian absolutist views remain powerful among Bayesian thinkers to this day. I felt like the review was a good place to discuss to what extent these ideas should be outright rejected, as opposed to just deemphasized.
For one, there's the rootclaim debate, where a 'pure Bayesian' approach was explicitly tried and, let's admit, failed. And while people like Scott will outwardly admit it's just not practical to do, they also offer some grudging admiration for it as a kind of Utopian aspiration we should hope to strive for. In reality it's an overly broad application of a limited statistical principle, and should be recognized as inherently the wrong approach. That discussion didn't happen in the fallout of the rootclaim debate. Instead, there was some talk about maybe the mechanics need to be tinkered with to make it more reliable.
Then there's the tendency of people to define themselves as "Bayesian" or "Frequentist". I'm not convinced Bayes' Theorem is applicable to all or even most situations. There's a visceral difference between defining yourself as a Bayesian, versus accepting Bayes' Theorem as a useful tool in your statistical toolkit. While I once thought of myself as something of a Bayesian, I no longer see that label as beneficial. I feel like the label made it more difficult for me to see the limits of Bayes' theorem. How does a Bayesian proclaim that Bayes' theorem is the wrong tool for the job?
Yes, I can sign off on everything you said now. I think I always had a non-absolutist view of the term "Bayesian". Bayes' theorem can either be read mathematically as a formula, or philosophically as the concept that a prior P(A) can be changed by evidence B, and that the terms one should think about are P(B | A) and P(B).
I never really interpreted Bayesian in the mathematical sense, for me it was always the philosophical sense, and there I find it a very useful philosophy. And I would guess that this is also how Scott and other early LessWrong people see it. Probably frequentists would also be fine with that weak form, and I never perceived it as such a stark dichotomy. Your argument does make sense that this Bayesian philosophy has its limits, and that one should not over-extend it. But yes, I do acknowledge that there have been fights between those two camps (which I probably missed since I was late to the party), and there are absolutists who see it differently, with Nate Silver apparently one of them.
Makes sense. I remember reading Silver's book when it first came out. It tracks closely with influences I see in the LessWrong community. (For example, the tagline for ACX.) Bayes is great, but it gets you in trouble when you overestimate your certainty of P(A), P(B), and P(B|A).
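For reference, the formula both readings hang on (the standard statement, nothing specific to Silver's book):

\[
P(A \mid B) \;=\; \frac{P(B \mid A)\, P(A)}{P(B)}.
\]

Read philosophically: start from a prior P(A), ask how expected the evidence is under the hypothesis, P(B | A), and how expected it is overall, P(B), and update accordingly.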
The psychologizing of Silver preferring zero-sum thinking due to his background in poker seems tenuous and adds little to the review. Hypothesizing about why an idea occurred to someone is usually less useful than simply evaluating the idea itself.
Continuing to read it, and I find it hard to follow what his issue with the book is as far as frequentism vs. Bayesianism.
Thanks for the feedback. I guess the feud between Silver (and many Bayesians) and frequentist statistics is a bit esoteric.
As to psychologizing Silver; he explicitly calls on his readers to reorient their thinking in exactly this way, converting every open question into a zero-sum game. He states that this is what he does, and gives examples (outlined in the review), then proclaims this as a virtue that others should follow for all questions.
Maybe it's a stretch to say that Silver's experience in certain realms of statistics informs his recommendation to approach all questions in the same way, but I don't think it's a huge leap in logic, after having read the book.
I attempted to answer Robin Hanson's question "Why Is the Demand for Prediction Markets So Low?" in a Substack post: https://substack.com/home/post/p-145694816. Would appreciate any comments.
I think it's a decent introduction to the zero sum problem, but personally, I'd point everyone to https://worksinprogress.co/issue/why-prediction-markets-arent-popular/ as the best post on the subject, since it goes into more depth on explaining why prediction markets also fail at attracting the gambling crowd, hedging, etc.
I can't find "Spirit of Rationalism" on the master Google doc. I checked "Spirit..." and "The Spirit..." Is the precise name different? Was it not on the master doc?
A couple of YouTube commentaries about the bullshit behind the AI hype. Sabine Hossenfelder takes on some of the silliness in Leopold Aschenbrenner's "Situational Awareness: The Decade Ahead" essay. Link to his essay is in the description, should you be so inclined to read it.
And Tina Huang calls out some of the bullshit from AI leaders. IMHO they're the latest generation of snake oil grifters that I've seen over and over again in Silicon Valley.
Situational Awareness makes some insane claims; in just the first few pages he says he expects AGI by 2027 and ASI by 2030, which is the sort of thing I would expect a bad-faith caricature of people concerned about AI to say. I had never heard of the author, but apparently he is one of the few brave visionaries who can see the Truth long before it comes, or so he says, at any rate.
I'm still reading it as a lengthy and comprehensive exposition of a position that I fundamentally disagree with and can't fathom how anyone could sincerely hold.
> IMHO they're the latest generation of snake oil grifters that I've seen over and over again in Silicon Valley.
I do not agree. I use copilot daily and it's a good tool, I would miss it if it was gone. It didn't transform the way I code but it's at the same time a productivity boost and reduces my ugh field around writing tedious code.
ChatGPT is really, really good too, and birthed the whole wave of AI chat assistants. The time I've saved by asking it for some shell command (looking at you, ffmpeg) is pretty big, and it even works to fix my system (sometimes). Way better than the usual search engine experience. And that's mostly with zero-shot prompting.
I use Copilot regularly, but like any other of the current generation of AI it frequently makes things up. For instance, my last query was for references that discussed the unusually high C>U substitutions in the SARS2 genome. Specifically, I asked for references about APOBEC RNA editing enzymes and their possible role in C>U substitution. Of the four references it gave me, two were definitely real. It got the authors right on another, but Google Scholar showed the title was wrong. I couldn't locate the fourth in Google Scholar.
If you've been following our discussions of AI hallucinations on previous threads, we've got a chemist who can't get the correct reactions out of any of the current generation of LLMs. And we've got an etymologist who's found that they have 90% failure rates on the origins of English words. (Which to me is amazing — did someone forget to scan the OED when they created the training data?)
Again, as I've said, it's great for code. Maybe I wasn't explicit enough. Copilot is Github copilot here. It works pretty well although there are some frustrating bugs (the quote insertion...).
They may be crappy for you, but they work pretty well for me. I can't imagine going back to having to google ffmpeg commands, for example. I mean, not literally, I can totally imagine and I often take "LLM breaks" where I code without them for a while just to test if I still can (I still can, I don't think they made me lose anything).
I don't know anything about your field of work and I don't have anything to gain by any of the big AI companies gaining AI valuation, so I won't try to sell you snake oil. But reporting from programming, specifically mainstream web programming, system administration on Linux, and scripting, they work really well.
Edit: to give a more general comment about their economic value, I'm far from being the best developer, but I'm well above average, and some people are really really not great. It's currently easy for me to review LLM written code (I can see when some colleagues abuse it). If they start being able to write a pull request that's well specified, which I expect they could do in 5 years, I don't see an economical reason to pay a below average/average dev to do it instead. And software is a very very big industry.
Other coders have said the same thing. However, they also said that the generated code sometimes contains errors. I wonder if the superior results in generating code are because programming languages are more restrictive in their grammatical constructions than natural languages?
Could be that; could be that code gets constantly improved and fixed and the versions scraped will be the latest and so often the "most correct" ones; could be that most code that's online runs; could be that code is mostly doing the same things over and over; could be that it's way easier to judge the output of code; could be that the people making the LLMs have more experience with code; could be that the big AI labs think/know that there is more money to be made on software; could be that the software people are better at integrating it into their workflow; could be that software benefits more from quantity of writing than other industries.
I'm probably forgetting lots of other possibilities. I know a bit about AI from the software-engineer user side, and from the theoretical side, but that's about it, so my contribution is limited to "it works for me".
The OED is copyrighted (or whatever you call it), as you'll know if you've ever tried looking up words online, so I imagine they either wanted a very hefty fee for use of their material or outright refused.
Of course, it's also possible none of the really smart people working on AI ever thought about "how about we get it to read a dictionary?" because that's wordcel stuff not STEM stuff 😁
Most of the data sucked up into training sets must have been copyrighted — otherwise they would be limited to pre-1929 material. I don't see why the OED would have been ignored.
Of course, most of the lawsuits against generative AI are based on claims that they infringed on the copyrights of the creators.
>tldr: It fell on its face again. It kept insisting that the slope at the equivalence point was infinite. I finally managed to force it to do the right derivation, but I had to force it through the algebra, one step at a time. This isn't so much the equivalent of leading it by the nose, more nearly leading it by the nose with hot pincers.
It doesn't _always_ fall on its face. A few plys earlier I asked it:
>Is light with a wavelength of 530.2534896 nm visible to the human eye?
and it correctly answered
>Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. This wavelength falls within the visible spectrum, which ranges from approximately 380 nm to 750 nm. Specifically, 530.2534896 nm is in the green portion of the spectrum, which is near the center of the visible range and is easily perceived by the human eye
Hossenfelder's two main disagreements seem to be that energy and data requirements will greatly slow down progress in the near future. I expect that as those become the bottlenecks, many AI researchers will pivot to trying to solve them, coming up with more energy and data efficient ways of training AI. There is no fundamental reason both of these factors couldn't be reduced by 100x. At any rate, neither will really have too much bite before AI is able to participate directly in answering AI research questions, including energy and data efficiency questions. Even if she is completely right, the near future she imagines will still be a transformative one.
> There is no fundamental reason both of these factors couldn't be reduced by 100x.
This is exactly the kind of "and then a miracle happens" reasoning brought up again and again in these discussions.
Energy reduction by two orders of magnitude is really, *really* hard without using a completely different algorithm, i.e. in this case, a new AI model architecture.
Still, human brains can do quite a lot with comparatively little energy and bird brains are even more efficient. Thoughts about possibilities for using more biology-inspired methods?
Well, the human brain does the equivalent of more floating point multiplies than GPT4 uses for inference, and does it on a power budget of 50 watts. The power reduction doesn't violate fundamental physics, but what it _does_ need is very power efficient devices, comparable to synapses and dendrites feeding into a neuron's cell body, and new devices typically take decades to make it from the lab (once they exist in the lab!) into production. I'm not holding my breath.
There is what I consider a good analysis in "Transformative AGI by 2043 is <1% likely".
If the issue is massive amounts of floating point arithmetic, could that be handled by using a hybrid computer with the analog part handling the arithmetic? That was traditionally what they were used for, before digital computers became so powerful they could brute-force everything.
Many Thanks! Yes, and there has been work done along those lines. It gets tricky. E.g. copying an analog state degrades it. There is also a trade-off between flexibility and efficiency. E.g. if one wants to reuse an analog multiplier for changing values of both inputs, and if one of them is a neural weight, then one needs something like a D/A converter to set that input. On the other hand, if each neural weight can have its own, fixed, hardware, then one can use e.g. a mask-programmed resistor to set that weight, with no power dissipation in setting the weight itself - but no way to change it dynamically.
The good news is that this doesn't require semiconductor process innovation, but getting it integrated into data centers and into LLM training and inference flows is not likely to be quick...
I don't think this contradicts my point, does it? The human brain uses 1) a completely different algorithm, and 2) a completely different computation substrate. Nobody's going to make ChatGPT 100x more energy-efficient unless they change 1 or 2, possibly both.
Many Thanks! I'm basically agreeing with you on (2), but agnostic on (1). On (2), I'm just saying that changing the substrate is a _really_ _really_ long, hard, slog, but not _quite_ "then a miracle happens", in the sense that neurons are an existence proof that such a substrate doesn't violate the laws of physics. ( We still have quite a ways to go before hitting https://en.wikipedia.org/wiki/Landauer%27s_principle ) I wouldn't be surprised if getting a 100X improvement in energy-per-logic-operation took a large fraction of a century.
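For context, the limit referenced above (standard physics, not specific to this thread): Landauer's bound on the energy dissipated per irreversible bit operation is

\[
E_{\min} \;=\; k_B T \ln 2 \;\approx\; 2.9 \times 10^{-21}\ \text{J at } T = 300\ \text{K},
\]

which is several orders of magnitude below what today's digital logic dissipates per operation - so there is in-principle headroom, even if the engineering slog is long.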
Please explain why "There is no fundamental reason both of these factors couldn't be reduced by 100x." The cost of computing has been rising since we reached the end of Moore's Law. Although the cost of individual floating point operations is still falling, chips are now taking longer to design and cost more. One of those multi-GPU boxes costs in excess of $250K. I don't see that coming down soon.
As for energy, if we could make chips that run on significantly less power, you think we wouldn't have made them already? No, faster cheaper computing on less power is a pipe dream.
I am talking about developing new neural networks that achieve similar learning performance with fewer operations and less training data, not improving hardware efficiency. Sorry if that wasn't clear.
I'm not a Neural Networks whiz but "vast, ungodly amounts of training data to do anything useful" seems to me as a fundamental feature/bug of NNs, such that addressing it amounts to no less than a total re-invention of the technique, on par with the Gradient Descent re-invention in the 1980s.
This is different from reducing power consumption or increasing efficiency by doing less; there are all sorts of incremental tricks that one can read about today attempting to achieve less power consumption and/or fewer operations, everything from specialized hardware (analog computers, FPGAs, ASICs, ...) to quantized representations for floating point numbers to "distilling" trained networks in order to obtain a lighter-weight network that does the same thing in inference mode, making inference (but not training) more efficient. I mean "incremental" in the sense of "normal science", things that we can imagine today without a massive breakthrough.
I can't remember having read about any research whatsoever on ways to train Neural Networks to SOTA performance with less data. It might exist; I just don't know of it.
Hmmm. According to Doug Summers Stay we can reduce power consumption by 100x. I haven't heard anyone claim such a thing before. I'm curious how DSS would go about doing that. I just looked up the specs for an A100 GPU. It is expected to consume approximately 400 watt-hours of energy over the course of an hour of high-compute operation. In that hour its 54 billion transistors can execute 18,000 petaFLOPs. 18,000/400 yields the number of petaFLOPs it can perform per watt-hour --> 45 pFLOPs per watt-hour. So he would need to create a system that could obtain 4,500 pFLOPs per watt-hour. How?
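A minimal sketch of that arithmetic, taking the 400 W and 18,000 petaFLOP-per-hour figures from the comment at face value (they are the commenter's numbers, not checked against a datasheet):

```python
# Back-of-the-envelope check of the efficiency figures quoted above.
power_watts = 400            # quoted power draw of an A100-class GPU
pflop_per_hour = 18_000      # quoted petaFLOPs executed over one hour

watt_hours = power_watts * 1               # energy used in one hour of operation
pflop_per_wh = pflop_per_hour / watt_hours
print(pflop_per_wh)          # 45.0 petaFLOPs per watt-hour, matching the comment
print(pflop_per_wh * 100)    # 4500.0 -- what a 100x efficiency gain would require
```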
I don't mean calculations-per-watt efficiency, I mean performance-per-watt efficiency. I think the number of calculations needed to get similar benchmark performance will continue to be reduced. I'm just talking about the continuation of the trend of algorithmic progress, such as https://epochai.org/blog/revisiting-algorithmic-progress, and whether there are any fundamental reasons such trends can't continue.
I'm skeptical of Leopold Aschenbrenner's AGI and ASI time lines, although I don't think AI hype (which is admittedly huge) is the same kind of "grift" as say, crypto speculation or NFT art. Focusing just on the pace of progress, I'm curious about whether Aiden McLaughlin's proposal to switch the research focus from raw, energy-intensive compute on ever-larger collections of raw data to "search" might allow for faster, less expensive progress with much less data? https://tinyurl.com/7cyj4b2c
Aiden McLaughlin? Dude is a college undergrad whose company is a landing page. "Advanced LLM Agents combining quant scale with human-level analysis to beat Rentech and Citadel!!"
Is his analysis of Stockfish and/or his proposal of model improvements via search inaccurate? I'm not trying to be facetious or naive - not every undergrad is Zuck or Bill Gates, but some have good ideas.
I'm not an ML PhD but from what I understand it's "not even wrong" - tree search makes sense in a domain like chess with bounded sets of "moves" - I'm not sure it makes sense in something like "AI research"
Basically, there's a reason why Go is basically solved but top StarCraft players still cream AIs.
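For what it's worth, the "bounded set of moves" point is visible in the shape of a generic tree search: the whole procedure assumes you can enumerate the legal moves from any state, which is exactly what an open-ended domain like "AI research" doesn't give you. A minimal, generic sketch (the `legal_moves`, `apply_move`, and `evaluate` callables are hypothetical placeholders, not any particular engine):

```python
# Minimal generic minimax-style tree search. The key assumption is that
# legal_moves(state) returns a finite, enumerable collection of moves --
# true for chess or Go, not obviously true for open-ended domains.
def search(state, depth, legal_moves, apply_move, evaluate, maximizing=True):
    moves = list(legal_moves(state))
    if depth == 0 or not moves:
        return evaluate(state)            # static evaluation at the frontier
    scores = (
        search(apply_move(state, m), depth - 1,
               legal_moves, apply_move, evaluate, not maximizing)
        for m in moves
    )
    return max(scores) if maximizing else min(scores)
```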
Can't China eventually take over Taiwan simply by encircling and blockading it? Isn't that much more likely than trying to actually invade the island first, which everyone says would be extremely difficult? As an island Taiwan cannot go forever without trade, food, and energy supplies from the outside world, and would eventually have to fold.
More important is the US response. If the US tries to break the blockade, which would require force, they would essentially be the ones to start World War 3 and likely a new, if not worse, Great Depression. Technically authorizing such a war would require Congress, but even if it didn't - there's not going to be a public appetite here to be the ones *starting* the most devastating war in history, thousands of miles away from the US. You'd have retired admirals on talk shows saying 'eh, not sure if the US can win this one, there'll be tons of casualties'. Economists saying 'this war will be worse economically than the 30s'. Voters are ultimately not going to be interested. US public opinion was against getting into WW1 & 2 too. All China has to do is encircle Taiwan and *not* fire on US ships unless fired upon.
So- wouldn't blockading Taiwan ultimately work for China? Why can't they do that in say 5-10 years time?
Blockades are an act of war. So no, China would be legally the country starting the war. Further, Taiwan is recognized (even by China) at the moment as an independent customs territory which they do not have jurisdiction over. If China says, "Well, we changed our mind." The rest of the world can just say "no." And if they try to force the issue then that's still an aggressive action.
From a real power perspective: If China blockades Taiwan and the US and its allies do nothing, then that would work. If China blockades Taiwan and the US and its allies choose to do something, then the US and its allies get to see every ship the Chinese have in the blockade (which will be out of position for the task of "fight the USN") and then strike at a time and place of their choosing. Which would be very difficult for China to plan around. The US could also begin to embargo China in turn, in a counter-encirclement that would have severe effects on their economies, much more than it would on the US. The US would be losing one major trading partner. China would be losing almost all of their major trading partners.
Also Congress has already authorized force to defend Taiwan through the Taiwan Relations Act. And firing on civilian merchant shipping is an act of war so declaring a quarantine then shooting at someone passing by would be an act of war too.
I think it's likely any war starts with China assuming the US will get involved and striking American bases in the hope that this delays American involvement long enough that they'll have either taken over Taiwan or at least landed significant enough forces that America will have to dig them out of a burned-out island. That counts on the US ignoring American service members' deaths. But in any situation where China starts a war, it's counting on the US not having the will to continue fighting. And that's a historically common miscalculation.
>So no, China would be legally the country starting the war
'Legally' is irrelevant (major powers break international 'law' all the time, if you really think that's a real thing, which I don't). There's no like World Court where China will be found guilty after a lawyerly process. They may be the aggressor, but the practical question for the US President is 'would you like to be the one to fire first, start World War 3, probably plunge the world into a new Great Depression, and probably lose re-election as US voters won't like any of this?'
>The US could also begin to embargo China in turn in a counter-encirclement
This would again plunge the world into a new Great Depression, as China is by far the world's largest trading partner. International anger at the US for cutting off their biggest market would be white-hot. Global inflation would spike like we're all in Zimbabwe as the factory of the world can no longer send goods abroad - honestly, the world might experience a couple decades of technological regression, like a mini-Dark Ages. And, again, the President and party who does this is pretty much guaranteed to lose re-election.
The US can't even cut off Russian oil because it's such a large part of the global economy. There is no political will to press the button that says 'start Great Depression 2.0'. China is simply too big to cut off.
>And firing on civilian merchant shipping
All China has to say is 'if you enter this zone, we'll fire on you'. Out of the entire global commercial fleet, exactly 0.0% of captains and crew are going to say 'screw it, we're going in anyways'. These are for-profit businesses, they have insurance that would forbid this, the captain is ethically responsible for his crew's lives, etc. Would you die for your employer? Once a major naval power says 'enter here and die', no commercial ship will ever enter till it's clear.
>But in any situation where China starts a war, it's counting on the US not having the will to continue fighting
The US couldn't stomach 4500 casualties in the Iraq war. 58k servicemember deaths in Vietnam is remembered as some kind of apocalypse. Meanwhile Russia has seen probably 10x that number of deaths in Ukraine and its population doesn't even flinch. There is no US will for mass casualties thousands of miles away from home unless we're actually defending our homeland, sorry. Do you think the median voter could even find Taiwan on a map?
Again, we're the same country who didn't want to enter either World War 1 or 2. The whole point I'm trying to make is that both politically & economically, there's simply no domestic will for an apocalyptic war & depression
You appear to be caught in anti-American doomer pessimism. The international system doesn't really exist and everyone's doing power games and cynically maximizing their self-interest - except the US, which is too cowardly to pursue its own interests or defend its allies. It's a useful narrative if you want America to lose despite its many advantages. But it doesn't really make sense. If everyone's cynically maximizing their interests, the US will too.
You also don't appear to have any firm grounding in how these things have actually worked either historically or recently. For example, you talk about no one being willing to go past blockades even as that's happened repeatedly. You dismiss the importance of court cases even as China does a lot of diplomatic maneuvering (and often fails) to avoid them.
You're also wrong on simple statistics. The US has always been China's largest trading partner but the US's largest trading partner has always been Canada or Mexico. China's usually third. The statistic people often mix up is that China has sometimes (not always) been America's largest import partner. But trading partner includes imports and exports. Further, the US probably gets to continue trading with Mexico and Canada in a Taiwan contingency while it's unlikely China will get to continue to trade with its second or third largest trading partner: Japan and South Korea (or sometimes Japan and Taiwan). And most of China's war important minerals (like iron) are sourced from Indonesia or Australia which are also overseas. China's also a more trade reliant economy in general. The war would be devastating to both economies but worse for China.
As for American will, isolationism is a loud minority. It was a loud minority in WW2 as well. 78% of Americans favor defending Taiwan and 69% now favor recognizing Taiwanese independence. It remains a majority even if you say that's likely to trigger a war. 53% of Americans support putting American troops in Taiwan. The practical electoral politics is that losing Taiwan would be extremely unpopular and probably bury the party that did it in the next election.
Legally, the US was very careful to avoid the criteria for a blockade. It was something of a fiction, but still. And Israel also doesn't acknowledge it as a blockade. Both let certain things through specifically to avoid the label. And it didn't really work for either of them in terms of getting people to respect the fiction. And a Gaza-style blockade, let alone a Cuban one, would not induce Taiwan to surrender by denying them power or food. China would need to do something far more total.
Also: If you think global sympathy is with Israel, or especially was with them pre-10/7, then you're mistaken. 10/7 changed the narrative not because it was an attack on Israel (those happened rather frequently) or because it was successful. It was because of the extreme violence against civilians and human rights violations like kidnapping and rape. If Taiwan somehow responded to the blockade by doing the same to a bunch of Chinese civilians then that would probably lose them sympathy. But I don't think that's likely.
Also, both Gaza and the Cuban Missile Crisis involved both parties shooting at each other. If China is trying to avoid war through a blockade as soon as they sink a US ship that outcome becomes much less likely. And if China isn't trying to avoid war then there's much more aggressive actions to take. China can try the blockade and then saying, "How dare they shoot back at us." But as the people of Taiwan starve (something the US didn't try to do to Cuba) I doubt their neighbors (who they also have claims against) would be very sympathetic.
If they extend their sphere of power out around Taiwan, they could start imposing customs inspections. Searching for biological contaminants, etc. Is your ship 100% rat-, cockroach-, and ant-free? No invasive species or harmful pesticides that could damage the delicate environment of Formosa? Have you quarantined for covid-29 for 3 weeks?
All of those things are I believe prohibited by international law so long as the ship in question is not travelling to or from the People's Republic of China. Even within a nation's unambiguously sovereign territorial waters (e.g. the classic 12-mile limit), merchant ships engaged in innocent passage (https://en.wikipedia.org/wiki/Innocent_passage) between two other countries are to be left inviolate.
And if you stop them by force, that's an act of war.
Because then they get hit with the biggest batch of sanctions the world has ever known, and oh what do you know, the USN has closed the Strait of Malacca to Chinese traffic.
If the US can impose sanctions on China then the time to do it is now, not later. But the US won't have the economic stomach for it when the time comes, especially if the blockade is imposed slowly.
We don't call them "sanctions" but we've made sure they can't get their hands on bleeding edge GPUs, or EUV machines, and tariffed their EVs and Solar massively.
We're arming the Taiwanese too!
Both sides are basically playing this like they want it to get hot and are expecting it to.
The US can impose sanctions on China now if it wants, but without some act of aggression from China it will be tricky to get the other major powers to go along with it. If China encircles Taiwan, it's much easier for the US to get everyone else on board with sanctions.
I'm going to repost here a request I posted on the last hidden thread, which was somewhat skimpily attended. Wondering if one of you people with decent general knowledge about world affairs could do a little consult with me:
Would anyone be willing to consult with me briefly about plausible future international developments? I am writing a piece of fiction set 30 years in the future. It is mostly about the personal experience of several individuals. But I need to sketch in for the reader, in about one paragraph per event, a summary of 3 important and related world events. All involve a superintelligent AI that the US developed, and over which we have substantial control.
I don't think my understanding of politics, government, world powers & economics is good enough for me to come up with scenarios that are plausible. My grasp of these topics is way below average for this forum. I don't think it would take a long time for anyone who's reasonably knowledgeable about the areas where I'm ignorant to toss out answers. And when I say reasonably knowledgeable, I really mean reasonably. You do not have to be knowledgeable at the professorial, book-writing level, just someone who reads a question like nifty775's and has enough general info and opinions about world affairs in the last 100 years to have a view they can back up in a reasonable way. After all, nobody can know for sure how things will play out in the next 30 years with a genius AI in the mix. I just don't want to sketch in possibilities that are laughable -- things like Tibet taking over the world. If you have reasonable general knowledge about world affairs, you probably could type an answer to each of my 3 questions in 10 mins at most. Maybe in 3 mins. If you're willing, I'd want to ask you my questions via Substack chat or email, so that info about the story I'm telling stays private. Oh, and if you'd like me to credit you in my acknowledgments I'm happy to.
Not knowing what you're writing, I'll suggest looking up the rise of new technologies in the past, like the car or electric power, and seeing how the world changed in their wake; what happened to the owners, the towns they came from, the governments, etc.
Could also look up the rise of McDonalds and the fast food industry; the McDonald brothers and Ray Kroc. (They made a movie about it if you don't mind dramatization. https://www.youtube.com/watch?v=N_t5PGSJD9o)
You're falling into the trap of assuming all significant developments are technological. She's asking about sociological and geopolitical developments.
I'm not the one to answer the question, but you might try just buying a recent textbook and reading the last few chapters.
I would if that aspect of things were an important part of the piece of fiction, but it is not. The story is not about how the world events came about. The story takes it as a given that the world has changed in certain ways, and is about life in that new world.
Here's an analogy. Let's say you were writing a story about a dozen people who were shipwrecked on an island, and how things go for them over the course of 5 years -- who despairs, who adapts to life on the island, who devotes themself to trying to find a way to escape. But you know that early on you want to give the reader a one-paragraph summary of how the dozen people ended up there, and you don't want any absurdities in it. You don't want to say the dozen people were in a certain kind of sailboat if everybody who knows about sailboats is going to complain that that kind of sailboat doesn't have room on it for more than 4 people. You don't want to give the location of the island as someplace that's near the Panama Canal, because then people will say, Haha, they won't remain shipwrecked for long, there's constant sea traffic in that area. That is the kind of absurdity I am trying to avoid. So long as the brief accounts I give of a couple of things are not absurd, the details in them do not matter to the story.
Would you like me to give you one of the questions, so you see the sort of thing I'm asking?
Well I personally probably can't answer any, being in mostly the same boat, but if all you're concerned about is backfill, then posting the actual scenario seems like the best approach in general.
The other option is simply "less is more"; they weren't in a certain kind of sailboat, they were in "a boat" and the audience will pick whichever boat works for them. I love to quote (...well, paraphrase) Patrick McManus on the topic; "never specify the person turned left at the top of the stairs unless turning left is vital, because your audience's imagined stairs may not have had a left."
I *am* giving less. I will be giving five-sentence, one-paragraph summaries of events the equivalent of the Civil War or Great Depression. It will not work to say absolutely nothing about how things devolved.
I don't need a more knowledgeable crowd, though. I am absurdly ignorant about recent history, politics, economics, etc. Really. For some reason I have never felt a lot of interest in politics and world affairs. I just lack a drive to keep up with it. Once in a while a book about recent history captures my attention -- some of the Barbara Tuchman books did, for instance -- and I read it with great interest. But then it doesn't stick with me, I think because I don't spontaneously recall bits of these books and ruminate about them the way I do with poetry or philosophy or math. I really can't justify being so ignorant, but there it is. It's probably hard to understand if you experience this stuff as intrinsically interesting. Maybe think of it as sort of like being asexual?
I'd say my level of ignorance is the equivalent of not understanding exponents. How can it be that 2^3 + 3^3 doesn't equal 5^3? What is it you're supposed to do when you divide 8^6 by 8^2? Somebody said you subtract the exponents and get 8^4, but that doesn't make any sense. . . And I'm trying to solve a Calculus 1 problem. All I need is somebody who understands math well up through Calculus 1. I do not need a math PhD.
And I feel safer asking here because I know that cast of characters reasonably well.
I'm no expert on politics, but I know something, at least, about math. They almost seem to be incompatible disciplines.
But 8*8*8*8*8*8 divided by 8*8 you can surely see is 8*8*8*8, which is 8^4. The subtraction rule of exponents is derived from the answer, so it turns out to be an after-the-fact shortcut.
On the other hand, 2*2*2 + 3*3*3 doesn't work with addition at all; you're just looking at the similarity on the ^3, kind of like thinking "tough" and "though" should rhyme, although that is, admittedly, a lot more arbitrary.
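Written out in symbols, the same cancellation argument gives (just a small worked restatement of the point above, nothing more):

$$\frac{a^{m}}{a^{n}} = \frac{\overbrace{a \cdot a \cdots a}^{m\ \text{factors}}}{\underbrace{a \cdot a \cdots a}_{n\ \text{factors}}} = a^{\,m-n}, \qquad \text{e.g. } \frac{8^{6}}{8^{2}} = 8^{4},$$

whereas nothing cancels in $2^3 + 3^3$, which is why there is no analogous shortcut for addition.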
Sorry, I wasn't very clear about the exponents thing. Actually I understand exponents fine. What I meant was that my level of understanding of world affairs is at a middle-school level. Regarding that domain I am the equivalent of someone who does not understand exponents, a pretty basic thing in the math domain.
A blockade would cede operational initiative to the US et al, since we can choose to arrange a confrontation where China needs to actually fire at a US-flagged ship to enforce the blockade at a time and place of our choosing. An extra week or two to position assets in theater would give the US a much better balance of forces for the opening rounds of the war.
Everything about this question can be answered by looking (with fresh eyes) at a map and a chart of China's growth. No need to invade or blockade - Taiwan is going home one of these days. China is patient.
Was Frederick really that great? His battle record was pretty much 50/50. He consistently launched frontal charges against enemy positions, even when the enemy was dug into fortified positions on higher ground. When the charge broke the enemy's morale he won, and if the enemy held he lost. I think Frederick's most significant qualities were resilience and luck.
Resilience because Prussia was a small state with a lean centralized government, and stayed in wars when less cohesive states would have been forced to sue for peace. There are times when things look pretty bleak for Prussia and Frederick writes of being depressed, but he manages to hang in until the end.
Luck because he was a mediocre battlefield commander who managed to wrest a significant land grab and fight most of the major powers in Europe at once and not lose. Peter III gaining the throne in the middle of the war while Russia was in a position to really squeeze Prussia was a ridiculous stroke of luck. He was a personal fan of Frederick and withdrew from the war, even offering to loan Prussia a large number of troops. Of course Peter was overthrown in a coup and replaced by Catherine, but by then the political will for war had waned. British victories against France in Hanover combined with Prussian gains against Austria allowed Frederick to sue for peace, from an advantageous position with the conspicuous absence of Russia.
There's no record of Frederick the Great ever saying that, and as a notably-aggressive monarch he very much did not follow that advice. He pre-emptively invaded and annexed Saxony in 1756, and then pre-emptively attacked Austria the next year launching one of the bloodiest wars of that century. Frederick had accurately assessed Prussia's strategic position -- one in which letting the larger powers have the first shots would have been disastrous -- and acted accordingly with ruthless skill. That's literally how he earned the nickname "the Great".
In any case the above formulation seems to have some limitations as any sort of universal. E.g. it seems pretty rich to label Saxony the "true aggressor" in 1756. Was Poland in 1939 the true aggressor? In 1990 Kuwait was the true aggressor? In 1931, Manchuria? In 1978, Afghanistan? Etc etc.
I believe you (and thank you for your trouble), but I wonder if a link to your find might enable me to glean a little context, if only as to Stephens, whom I have never heard of.
> Thus, Confederate vice president, Alexander H. Stephens, claimed that the war was "inaugurated by Mr. Lincoln." Stephens readily acknowledged that General Beauregard's troops fired the "first gun." But, he argued, the larger truth is that "in personal or national conflicts, it is not he who strikes the first blow, or fires the first gun that inaugurates or begins the conflict." Rather, the true aggressor is "the first who renders force necessary."
Exactly. We at Al Amnesty Watch stand with the freedom fighters! Sometimes freedom requires shooting up a festival or two. If those """civilians""" didn't want to be massacred, maybe they should have pushed themselves into the sea.
Since a blockade is an act of war one could reasonably argue that it was the Chinese who started WW3, but this would of course be rather academic. Also historically both Britain and the US have been rather lenient when it comes to classifying their own blockade actions as 'war', so it would be not that unreasonable for China to adopt similar standards to apply to their own actions...
Anyway, we could indeed force the Chinese to fire on our ships first just by sending blockade runners.
Not really. Having to have escorted "blockade runners" would drive up the cost to the point where it's more efficient to do business elsewhere. And our actions show that we are already "planning on doing business elsewhere", it's just taking us a while to get our ducks in a row. So all China has to do is ignore (except for complaining about) the blockade runners, and over time Taiwan's business and support will collapse. All it will take is patience (which the US isn't good at, but China often has been).
You don't need some sort of special "blockade runner" ship or a special escort for it, you just need one ordinary cargo ship with a US flag and cargo you don't mind losing. If China sinks it, they've started the war. If China lets it pass, then it's not actually a blockade.
We've seen how effective Ukraine's marine drones (and now ATACMS) have been in removing the Russian navy from the Black Sea. I'm sure the Taiwanese have been watching closely.
Seems like it would lead to escalating brinkmanship - PLAN encircles Taiwan, so the US sends an escorted supply ship and says they don't recognize the blockade. Does China fire on this ship and fire the first shot?
Considering that the U.S. is already famous for freedom of navigation operations, and has shown that it will keep global trade routes open at cost to itself (in Somalia and the Red Sea most famously), the U.S. would likely escort RoC civilian vessels in a convoy in the worst-case scenario, in which case the policy of "China simply just doesn't have to fire the first shot" means that the 'blockade' just turns into a nuisance.
Of course there's also the question of what happens if a civilian shipping vessel decided to head out anyway. Does China fire on it (killing civilians, which in any universe puts China at fault)? Do they use barely-not-lethal methods like fire hoses to destroy the bridges of these ships (like they're doing to the Philippines)? Either way, if the Chinese blockade is significant enough for the U.S. to take action, I don't see anything being accomplished from a blockade other than embarrassment for the PRC.
As noted, the RoC needs to import everything for survival, so I don't see them simply bowing to a declaration of blockade. If it becomes an existential threat, the U.S. will get involved for the semiconductors if nothing else.
A lot of this is going to come down to the ancient tradition of "whoever blinks first loses", with the caveat that China cannot win a conventional war, so the U.S. holds escalation dominance. It's not yet in the PRC's interest to try it.
China just announces "any ships who try to run the blockade will be fired upon", now the US will be the belligerent party if they do so. The US President giving such an order will be saying 'yes I would like to start WW3 on my watch'. Seems unlikely
A blockade is an act of war. Saying “we’ll shoot you if you come here” is just publicly declaring your act of war. If anything, it would make it HARDER to blame a shooting war on US provocation, since you pre declared your intent to shoot first.
Threatening to kill someone if they do X, does not make them the aggressor for doing X. It makes you the aggressor first for making the threat, and again for carrying it out. If it leads to World War Three, you started World War Three and you're just upset that the other side didn't immediately surrender.
"Look what you made me do!" is the eternal cry of terrorists, tyrants, and two-year-olds. Nobody with an ounce of common sense falls for it.
That seems just straightforwardly false to me? If the Chinese are both the ones who made that announcement and the ones who fired the first shot, I expect most people will consider them to be the belligerent party, not the heroic Americans who tried to deliver humanitarian supplies to the innocent Taiwanese who are starving because of the illegal Communist blockade before the evil Chinese fired on their unarmed ship (or at least that's how the US will describe the situation).
All the US has to do is lend-lease Taiwan hundreds of naval drones whose designs we will license from Ukrainian engineering companies. Or better yet, just send the Taiwanese the designs, and they can probably manufacture them quicker and cheaper than US defense contractors could. Last I heard, the remainder of the Russian Black Sea fleet (that hasn't been sunk yet) has been chased off of open waters, and Ukrainian grain shipments are getting to the Bosphorus without interference. So much for that blockade.
How do you know the Chinese are more competent? The Russians were considered a superpower before they invaded Ukraine, now it's clear they were always a Potemkin superpower. China has had a history of corruption in its military. Chairman Xi supposedly has been cleaning up the corruption, but he seems focused on the corruption of his perceived political enemies rather than his allies' corruption.
And how do you defend against marine drone attacks when the drones are low enough in the water that they mostly have no radar signature? Passive sonar might detect them coming, but then what do you do if you're the captain of a ship as large as the Moskva? Evasive action doesn't get you far when marine drones are faster and more maneuverable than the ship trying to avoid them. Visual contact would be necessary to spot and destroy them. At high speed, they leave foam wakes, but by the time they'd be spottable they'd be pretty close. The only good defense against them I can think of is something like the Royal Navy's Phalanx Gatling gun. Unfortunately, those are automated and radar-guided as a defense against incoming missiles. And I don't think there's a manual override whereby a human can take control and aim the thing low at the water.
Ukraine's Magura marine drone is 5.5 meters long and weighs about 1,000 kilos with batteries and its 200-kilo explosive payload. Because they're battery powered they're silent. They have a range of up to 800 km and 60 hours of battery life. And they beam live video to the operators.
Then there's the Sea Baby, which can carry 850 kilograms of explosives. It has a top speed of 90 kph and a range of 1,000 kilometers.
The Magura and the Sea Baby cost between $200K and $225K. For Taiwan that would be money well spent if you can sink an aircraft carrier that's estimated to cost $2 billion. Personally, I hope the US Navy is taking this threat seriously. But there's some truth to the idea that generals (and admirals) only know how to fight the last conflict. I still see shrill proclamations on the military blogs that tanks are not obsolete. Seems like they're like the horse cavalry claiming they're not obsolete. However much they protest, aerial drones seem to have changed the dynamics of armored warfare. And sea drones seem to have changed the dynamics of naval warfare.
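To put that cost asymmetry in rough numbers (my own back-of-the-envelope using the figures above, not from any source):

$$\frac{\$2\ \text{billion per carrier}}{\$225\text{K per drone}} \approx 8{,}900\ \text{drones for the price of one carrier},$$

so even a saturation salvo of a few hundred drones would cost well under five percent of the ship it threatens.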
Nitpick alert: Was Russia *always* a Potemkin superpower?
My guess is that it was in better military condition soon after the fall of the USSR when entropy (theft, weather, deterioration) didn't have as much time to damage the weapons. We'll never know.
Is there good terminology for a country like Russia which isn't a superpower but is strong enough to cause a lot of damage?
I'm not a marine engineer but this sounds like you "just" need your own screen of drones -- a waterborne version of the Patriot missile system. Or a modification of the Patriot or Phalanx type systems to be able to hit water targets.
Drones are just cheap PT boats which seems to be a solved problem.
Bit of a nitpick to get out of the way first - I don't know that the Russian Federation was ever considered a superpower. A regional nuclear power, sure. And "were always a Potemkin" is pretty broad. If you mean Russia for the last 30 years or so, OK; if you mean the Soviets in 1950, not so much.
I wouldn't read too much into drone efficacy against the Russian Black Sea Fleet. The ships are just sitting there right next to Ukraine. The Ukrainians have as many at-bats as they want, and only need to connect a single time to take a ship out. This is also entirely asymmetric as there is no Ukrainian fleet (on anything like the same scale.) Much different conditions than ship-to-ship fighting near Taiwan.
There are several defensive methods employed by Russian tanks against drones. One being electronic jamming modules, or various electronic warfare (EW) methods. Another being ablative cages or meshes mounted onto the exterior of armored vehicles. Similarly, the Russians have mounted thick nets and cables around bridges in Crimea to ward off drones. So perhaps naval vessels could also employ an outer mesh or net layer to detonate incoming drones a safe distance from the hull. EW countermeasures I'm less sure about as water absorbs quite a lot of EM radiation, so the effective range would be much shorter. This cuts both ways, and I think sea drones have to be pre-programmed rather than controlled in real time like aerial drones. Which makes them less effective against moving ships compared to ones sitting in port.
The PLAN has been conducting antipiracy operations far from home that demonstrate rather greater competence than the Russian Navy. In particular, they don't seem to have to attach a salvage tug every time they deploy a warship.
This isn't to say that they're a match for the USN and its peers, but they aren't completely hopeless.
It's entirely possible that the PLAN is as much a paper tiger as the Russian military, and/or that surface ships are totally obsolete in the face of new drone technology. But all of that is super speculative until we actually see them in action and it's foolish to just assume your enemy is no threat based only on this level of speculation.
The Ukraine war gives *some* data but it's pretty clear that neither side is a first-class power, especially on the naval side of things, so there's a pretty sharp limit on how far we can extrapolate.
Possible I suppose but there's some evidence against:
-The Russian military has shown a low level of competence in general and there's reasons to assume their navy would be a lower priority than their air and ground forces.
-The US Navy seems to do much better- there were cases of USN ships just kinda hanging out off the coast of Yemen getting shot at by the Houthis for weeks without suffering significant damage.
-My impression is that experts looking at specific incidents with the Russian Navy have assessed their competence as quite low, although I don't have the memory or the energy to track down specific citations on this so hey.
This just devolves into a game of chicken, though. Let's say that a Taiwanese ship carrying semiconductors to sell abroad to fund the purchase of grain sets out. If China fires first and kills civilians, I don't see much logical opposition to simply sending out enhanced freedom of navigation operations. There may always be anti-war sentiment, but ultimately China is going to have to cripple a civilian vessel if push comes to shove. And remember, when push comes to shove, the U.S. can shove.
I think it's no sure thing, and frankly I don't think the PLAN yet wants to risk embarrassing itself when the Spratly Islands are still hotly disputed.
If a mind is just “something a brain does”, what would enforce “one mind per brain, one brain per mind”? Something in biology - like a specific neurological structure - or what?
As for the "One mind per brain" part, I don't think it's enforced. In humans, there are plenty of split-brain cases (the bridge connecting the 2 hemispheres of the brain got severed) doing bizarre things, bizarre, that is, unless you grant that the 2 hemispheres of their brain became different minds.
>> After the right and left brain are separated, each hemisphere will have its own separate perception, concepts, and impulses to act. Having two "brains" in one body can create some interesting dilemmas. There was a case in which, when one split-brain patient would dress himself, sometimes he pulled his pants up with one hand (the side of his brain that wanted to get dressed) and down with the other (the side that did not). He was also reported to have grabbed his wife with his left hand and shook her violently, at which point his right hand came to her aid and grabbed the aggressive left hand (a phenomenon [...] known as alien hand syndrome). However, such conflicts are very rare. If a conflict arises, one hemisphere usually overrides the other.
>> When split-brain patients are shown an image only in the left half of each eye's visual field, they cannot verbally name what they have seen. This is because the brain's experiences of the senses are contralateral. Communication between the two hemispheres is inhibited, so the patient cannot say out loud the name of that which the right side of the brain is seeing. A similar effect occurs if a split-brain patient touches an object with only the left hand while receiving no visual cues in the right visual field; the patient will be unable to name the object, as each cerebral hemisphere of the primary somatosensory cortex only contains a tactile representation of the opposite side of the body. If the speech-control center is on the right side of the brain, the same effect can be achieved by presenting the image or object to only the right visual field or hand.
------------
In Octopuses, the majority of neurons are not even "In the brain"; if I remember correctly only about 1/3 are. That means that a full 2/3 of the "Brain Material" of the Octopus is not in their brain, but in their arms. This means that their arms "have a mind of their own"; imagine if your arms were like your lungs or stomach, moving along, grabbing and grappling things with no conscious attention whatsoever (except in full view of you, unlike our lungs and stomachs). Something mildly similar exists in humans, the Spinal Reflexes. Your spine has simple circuits that detect certain conditions and fire orders for your muscles independently of the brain; the famous Knee-Jerk reaction is a spinal reflex, for example. But those circuits are not full-blown second brains as in the Octopus.
As for "One brain per mind", I think it also gets violated pretty often in eusocial insects. Unless you restrict your definition of a mind to only include that which communicates among itself by electro-chemical brain signals (which will enforce "One brain per mind" by definition), hives like those of bees and insects arguably qualify to be one mind, but distributed across different brains. No ant or bee can ever work against the colony, and if one does by improbable chance, the others attack or ostracize it like a primate immune system attacks a cancerous cell.
If you define a mind as an Ego, that which creates/feels an "I", then yes, I think "One brain per mind" is fairly common sense, where "One brain" means any mass of neurons with a reasonably high-speed reliable communication interconnect between them. If the communication fabric splinters or slows down, then the neuron mass breaks down into several egos. Peter Watts advanced this hypothesis in one of his novels, it was Freeze Frame Revolution, I think. But if you simply define a mind as Autonomy, that which can act on its own accord, setting its own goals and pursuing them with its own means, then neither "One brain per mind" nor "One mind per brain" seem to hold in the general case, although they seem to be the default in humans and most everyday mammals.
I've wondered how much "mind" is in people's legs. A healthy person can walk on moderately rough ground without the eyes being much involved. It feels to me like my feet know what they're doing.
This is just what we call "muscle memory", but it's controlled by the brain, just in the background. Not limited to legs, too: I don't need to look at my guitar to play a familiar piece, the fingers "know" what to do.
Yes, but - it's a continuum. A "slightly rough" trail is still easy, and requires almost no conscious effort. A wilderness trail in northern New Mexico takes full attention, and is very tiring because of that.
The same with the fingers: a familiar "easy" piece requires no attention whatsoever, while playing a Bach prelude, even though I know it by heart, still demands visual help.
The human brain runs both a conscious and a subconscious mind, which gives us at least two minds per brain.
If you ask what enforces one conscious mind per brain, then it is something like one and not two lines of code to run consciousness.exe, encoded in an illegible way throughout the neural network by evolution.
I don't think it's a hard limitation, just a high probability. The pattern that gets used more, trains more and more of the neurons to respond to it. But I don't think it's a priori impossible for a more process-oriented microkernel architecture to reach a stable equilibrium.
It mostly went over my head, but the vibe I took away was "the goal of consciousness is synchronicity". Of course, DID/MPD is certainly a thing. But that seems like a failure mode, not the norm.
> It is really slow, it is exclusive, and it simplifies the world into a highly compressed sample. This can be useful in its own right, for example to make a decision. A lot of information is lost in this process, but apparently the resulting pattern is so simple that it can be processed further. Since all parts of the brain participate in a conscious event, it is also universally available in the brain. Dehaene calls this function the *Global Neuronal Workspace*.
----
Boy oh boy, do I have a crank theory.
<wildly irresponsible speculation>
I suspect Monism is correct. (Cartesian Dualism strikes me as the last bastion of anthropocentrism).
Consider Elan Vital. It turned out that there's no discrete quality that separates the organic from the inorganic. "Inorganic" compounds like table salt can absolutely be integrated into an organism. Really, the secret sauce is the "complexity and specificity" [0] that carbon allows for. The complexity of organic chemistry pays for the specificity of structure required for enzymes to catalyze certain reactions. This reduces the operational costs of metabolism (in terms of energy expenditure) as low as possible. Without energy, there's no agency. And without agency, there's no struggle against entropy. Thus, life can be conceived of as a monopole of chemical disequilibrium. Like how a city is just a monopole of human activity.
Likewise, I suspect The Hard Problem will have similar contours. I.e. that the complicated structure of a brain is necessary to contort the electromagnetic(...?) field in some extremely specific way. And this contortion of the EM field somehow gives rise to consciousness as a monopole of EM activity.
I suppose this would technically be "panpsychist" in the sense that there's no discontinuous barrier between conscious beings vs inanimate objects. But it also adds up to normality in the sense that rocks probably do not have qualia/cognition to a meaningful degree, and it is consistent with the ability of magnets and anesthesia to disrupt consciousness.
"One mind per brain" is demonstrably false (DID is a thing). "One brain per mind" is enforced by the lack of thought-level communication between brains. Even the two hemispheres are not well connected in that sense, resulting in well known experiments where the left side and the right side think and act separately.
So, one direction false, and the other one is limited by the lack of a direct neural connection. The latter might change if Neuralink has its way.
It can also be a multi-level thing; at one basic level you have "one brain per mind" because of the reasons you gave, but at a higher level, a whole group of people with rich enough patterns of interaction can act like it has a mind of its own. I don't see any fundamental reason why such a group couldn't be ascribed a kind of collective consciousness too.
It can indeed be thought of as an emergent collective consciousness, but each person in that group would not think and feel what everyone else in it does, so it's not quite related to the original question.
If the group can (in this hypothesis) be said to collectively think and feel, that would be a counterexample to the original question's "one brain per mind".
You are approaching what I see as the most confounding question. How and why does consciousness even exist?
I share your intuition that there's no theoretical reason there must be one consciousness per brain. And are you sure it's true? It seems unlikely that the rest of your nervous system (and indeed your close surroundings and sensory inputs) play no constructive role in your consciousness. I'm sure that speed of information transfer is involved, so matter that's further from the brain and less intimately connected plays less role. But is there any additional slower consciousness that's not bound by speed and distance?
I am part way through the Stanford Encyclopedia of Philosophy entry that tries to explain consciousness and address some of these questions. For me, the central frustration stems from the fact that consciousness seems a non-physical phenomenon (can't be measured), yet it is the only thing that could not be an illusion. So it is both more real and less sensible than everything else in the universe.
I don’t think materialism can plausibly account for consciousness, but I’m open to it. I think the way we talk about and think about consciousness is an absurdly weak point in modern thinking, because consciousness is the one thing we have the most evidence of, but we don’t understand it at all and few people seem interested in trying.
I think consciousness is likely fundamental, like the material universe. That seems more congruent with the evidence, since there’s no evidentiary basis one can have to say they are not conscious right now.
That's my current tentative belief as well, but it's a classic Sherlock Holmes conclusion (which feels no fun at all): When you have eliminated the impossible, whatever remains, however improbable, must be the truth.
I've had dreams where I wake up to outside stimulus, only to wake up later and realize the previous waking up was still part of a dream. So I'll say "yes". There's nothing stopping all of this from being someone else's dream.
It's difficult to say anything with certainty because I don't have a clear definition of "mind" to work with, but it seems like a brain having multiple separate compartments of information which do not interact would just pointlessly degrade its performance (and would therefore be avoided by evolution), and in the other direction, for multiple brains to host something that could meaningfully be called one mind would require a higher-bandwidth channel between them than plausibly exists.
Well, the brain clearly has multiple segments that do not share all their information. In that sense there is obviously limited sharing. I think the connection between the verbal and the kinesthetic is the most blatant. Describe how to ride a bicycle well enough that someone who hasn't ever done it can do so. But there are lots of "limited channels". Consider trying to explain a "gut feeling".
Depending on your definition of "mind" this can be a sufficient argument. And the parts have limited communication to improve efficiency. They only degrade it in particular ways that are generally less important.
>Well, the brain clearly has multiple segments that do not share all their information. In that sense there is obviously limited sharing. I think the connection between the verbal and the kinesthetic is the most blatant.
Another prosaic example of multiple high-level processes that everyone has experienced is searching for a word, not being able to find it, then having it pop into one's perception a few minutes or hours later. Clearly _something_, with the capability of performing linguistic search, is running in parallel with the part of one's mind that is accessible to introspection.
Can you say more on the higher bandwidth channel part?
To me, “mind vs brain” is like “software vs hardware”. I had the impression this was a commonly shared vocabulary: thoughts and ideas and feelings are phenomena of mind, while neurons and glia and synapses are phenomena of brain.
The "hardware/software" analogy is likely misleading. The problem is that we don't have any evidence suggesting the brain "stores" anything the way computer memory stores information in strings or 1s and 0s. Say, I memorized a string of numbers. My (admittedly, amateur) understanding is that there isn't a specific place, an "address" in the brain we can point to that has been altered to store these numbers. We don't quite understand how the brain retains information.
IIUC, there is not yet a consensus on how memories are stored. It does require protein synthesis for conversion of short term memories into long term memories, but that's not sufficient for a complete determination.
I admit my memories of this are hazy, I've read a paper on the topic a few years back. I'll try to find it when I get a chance (work needs to be done :) ).
Sure, I'd agree the mind is analogous to software but, to carry the analogy farther than is really warranted, you can have multiple computers working together on a common goal over a network but you can't have them literally running one process together unless they're wired up so closely that they're really more one computer (I guess you could write an OS that permitted that if you wanted but it would be ridiculously slow and I think the analogy breaks down there anyway). If there's far more communication taking place between the parts of a mind running within one brain than between different brains (of course, any communication between minds or their components must be physically embodied as communication between neurons), then it just seems most reasonable to draw the boundary of what counts as one mind there.
All of this (both the one-mind-multiple-brains part and the one-brain-multiple-minds part) is of course rather fuzzy, as most things in biology are. I'm not saying it's strictly impossible, just that there are reasons for it to not be the case.
This depends on what you mean by “process”. At the OS level, no, they aren’t the same process. But at the logical level, developers of distributed systems refer to what is effectively “one process,” such as, e.g., a distributed database. And brains don’t have, e.g., process IDs, so the notion of a process running on one CPU seems to map more cleanly to the distributed-systems notion of a logical process.
Is it even obvious that your quoted principle is true? In my naive understanding, the observations seen in split brain patients might raise a question mark here.
Back when the Turing test was considered a reasonable test for computer consciousness, I figured that our mental models of others, to the extent that they were faithful, probably had some degree of qualia, making "many minds per brain" the default. Hofstadter pushed this idea hard in "I Am a Strange Loop." Now that we see LLMs like Claude blow past the Turing test but do not consider them to have qualia, it's harder to say.
The actual Turing test hasn't been passed. (It probably won't be.)
FWIW, a human-equivalent AI is wildly implausible. A near-human AI would be stronger than most people in some areas and much weaker than them in others. The limited "5 minute version" can plausibly be passed, but I doubt that it has been, despite the articles claiming that it has. (67% of the humans were rated non-human??)
OTOH, if you allow the questions to be posed by randomly selected volunteers, you'll probably get figures that show it passed. Some percentage of those will be trolls. Many will be seeking entertainment more than information. Turing wanted the questioners to be people who didn't want to admit that computers could be intelligent, but were seriously looking for answers.
I don't know what "The Circle" is. I don't know what the rules for interaction among the players are. So I can't evaluate it. But I didn't count the one that played Diplomacy.
Turing envisioned the questioner asking very specific questions. I actually think that many of them would not be successfully completed by a typical human who was a native speaker of the language the test was being given in, but then, e.g., success in extracting the 5th root of 23 would count against the respondent being human. (His question was more like "Compose a poem". Perhaps he suggested a particular poetic form; it's been a while since I read the test's full description.) I would probably guess that if the respondent composed a proper sonnet, it was an AI, since few people could do that.
Like Johann said, the opening reads poorly, like you want the Nazis to come off well.
I think you make a couple of early unsupported assumptions that throw the whole thing off. The first is this one.
>But from the inside view, every side in every war has framed themselves as the good-guys and the enemy as the bad-guys.
Well, they frame themselves that way to garner support, but it's not some universal thing that everyone believes they've got justice on their side. I don't think the Mafia thinks they're the good guys, I think they think there aren't any good guys and the people who think there are are suckers. Serial killers, rapists, arsonists, there are plenty of people who are in fact just evil. (We also have Hitler's speeches on record, we don't have to guess what he said. https://en.wikipedia.org/wiki/30_January_1939_Reichstag_speech#CITEREFLongerich2019)
The second one being that this good/evil dichotomy is unprecedented in a war of this magnitude. I would suggest it's actually the standard. Ancient Greece and Rome would wipe out the towns they captured. From 1400 to 1800, colonial forces destroyed the native tribes across the Americas, and enslaved the locals in Africa and Asia. The only novel thing about Nazi Germany's actions in World War 2 is that they tried it on post-colonial Europe, and eventually lost. It's easy to look like the good guys when the whole war is taking place on your territory.
I take issue with your first critique. The quoted comment is explicitly framing this about *war*. This deals with society level interactions, or at least a tribe/proto-kingdom/whatever. In that sense I think it is fair to say that every side views their own cause as righteous. Expanding the idea to individuals is drawing the wrong conclusion. Same with conflating the mob with an organized military force. This is the part where I admit I only read your comment and not the review, so if the review also expanded the scope of the good/evil presentation I withdraw my remark.
On the second issue, absolutely agree. Rome destroyed entire groups of peoples, like Carthage in the Third Punic War or Sulla butchering the Samnites. Or read Caesar's commentaries on his campaigns in Gaul. Or ethnic groups that raided Roman territory as an entire host including the women and children, like the Cimbri, who were entirely wiped out or enslaved. For a modern comparison, look at the communists. The Soviets killed more people than the Nazis ever managed. The Cambodian regime similarly wiped out 25% of their population, far higher than any Nazi genocide.
The book is about trying to rationalize Nazi war actions, which means rationalizing Hitler's decisions; the reviewer offers these two points as reasons why the book appealed to them. It's not about whether the average German thought it was righteous, it's about whether Hitler and his inner circle did, and I think it's a mistake to assume they needed to.
I liked your review and it made me want to read the book. However, I was a little bit freaked out by the beginning, which made me think "oh no, is this some piece of Nazi apologetics?" It's probably hard to write a more catchy introduction, but I personally found it very aversive.
I really liked the sausages bit. I wasn't aware that even on the brink of hunger so many calorie inefficient meats were still prioritized. As a vegetarian today this makes the problem feel even more harrowing/awful lol. Some interesting details I appreciate you highlighting.
I personally would've liked a little more clarity on the central question. Seems to be "yes the Nazis were obviously evil, but was their evil a logical conclusion from a set of initial conditions they were rationally responding to, or did even their economics not make sense?" Hard to answer succinctly/directly but I wanted a more succinct/direct answer to that crux.
Unfortunately the book itself doesn't give a very clear answer imo, it just suggests that they were more rational than their modern depiction suggests. The sausage question is a major component of the answer in my view and I don't think Tooze realized its significance, I'm also vegan and that's probably why I picked up on it from the couple of brief mentions. I googled around a bit after finishing the book and I couldn't find anyone else discussing it either.
I've read the book (and reviewed it, on Inconvenient History). The review needs MANY edits, of which the worst two: "diffuse" should be "defuse", and "eventually lead" should be "eventually led".
Tooze's work, and your review, both neglect the Suvorov Explanation for Operation Barbarossa, which dooms them to a grievous level of irrelevance (IMHO).
Otherwise, a thoughtful and thorough review, which gives an excellent insight into the book's content and meaning. MUCH more serviceable than my review, which is somewhat ideological, as well as cursory.
Suvorov is not taken seriously by mainstream historians. Nor by one of the folks Scott highlighted as having both domain expertise on COVID & creepy oracular powers:
>If the Sovs were within a couple of weeks of launching invasion, you’d think that they would have called up the deep reserves, bothered to get all of their tanks working, stockpiled fuel, run recon overflights, snuck sappers into German-occupied territory (to sabotage bridges and cut communications lines), finish reorganization of their tank corps, etc. etc..
They did a shitty job executing but we literally watched them engage in a major buildup for months. That’s why lots of people were warning about a likely invasion.
There was clearly some kind of forethought put into the Russian invasion, although they didn't understand the type of war it would be. The Russian command planned and executed a lightning strike to position forces outside Kiev and occupy Antonov Airport. Presumably they thought this would force Ukraine to the negotiating table and they could take modest gains in a very short and bloodless operation. Also the Russians never called it a war, the whole thing is the SMO - Special Military Operation.
Obviously this didn't work and the Ukraine war would become a years long affair of bloody attrition. The Russians were clearly unprepared for this, as evidenced by the early Ukrainian victories where they targeted and punched through the weak points in the Russian lines. But the Russians were preparing for a short, swift operation and not a major war. So citing the lack of preparations for a major war would be rather circular.
WWII was the last large scale war really, which isn't very helpful. Maybe you would want to look at the Soviet crackdown in Czechoslovakia for pre-emptive efforts, but that wasn't really a war. Also I don't know the details.
Did you read the link? It's by a physicist (though he's also known for his collaboration with Paul Ewald on pathogens & Henry Harpending on human evolution) rather than an historian.
"neglect the Suvarov Explanation for Operation Barbarossa, which dooms them to a grievous level of irrelevance "
Could you elaborate? Tooze's analysis explains a lot more about the war than just the reason for Operation Barbarossa, even if he doesn't get that right.
Well written, but you missed an opportunity to shore up what I presume is a weakness of the original work (I haven't read it, only heard about it online): more explicitly connecting [lack of] industrialization with cultural outlooks on race, urbanites (Jews), slaughter of enemies, etc. (I.e. the German worldview and motivational structure is alien because it's from a different place: the past. Men in Germany in 1930 remembered when things were different and better in a way that men in Britain couldn't.)
Not that it's the job of the reviewer to do so, but I feel as though this substack's readership rewards ambitious reviewers.
Interesting, not really something I'd considered. I'm not sure Tooze would be persuaded by that argument though. Germany was only un-industrialised compared to the UK at the time, and more industrialised than many European countries, about the same level as France I think. And it seems at odds with his theory that it's the position in the global trading system, and the availability of farmland that mattered.
This is the first in a series of three articles on literature considered as affective technology: affective because it can transform how we feel, technology because it is an art (tekhnē) and, as such, has a logos. In this first article I present the problem, followed by some informal examples, a poem by Coleridge, a passage from Tom Sawyer that echoes passages from my childhood, and some informal comments about the underlying mechanism. In the second article I’ll take a close look at a famous Shakespeare sonnet (129) in terms of a model of the reticular activating system first advanced by Warren McCulloch. I’ll take up the problem of the coherence of oneself in the third article.
Ok, I won’t be doing this often but this one is actually relevant to the concerns (and expertise) of many here, so I don’t feel too bad about the blatant shilling. I wrote a story; it’s on my newborn baby substack; it’s a sort of Stephen King-y short horror story about AI alignment/instrumental convergence/bad stuff happening very suddenly. I think you will enjoy it, especially if you’re into this sort of thing but hopefully also if you aren’t. If you do, please consider sharing, subscribing, etc.
It's a good story. Well written (maybe needs a bit of professional editing but that's praise, not criticism - it's good enough to be worth editing) and the development of the plot runs smoothly.
Give it a bit of a shine and you could try submitting it to other places online, though I'm no help when it comes to online SF publishing sites.
Thanks — I agree it could use an edit or two; my problem has always been procrastination so part of the reason I’m doing a substack is to force myself to put things out there on some sort of schedule. It’s so hard to tell when you’ve just written something in a bit of a hurry, so it really means a lot to hear eg that the plot seems to be running smoothly. Thank you for reading it, and for taking the time to share your thoughts.
Gematria researcher Luis Goncalves continues to probe the mysteries of Crowley's Liber AL using Base 36 Alphanumeric Qabalah (AQ).
He's now noticed that Charles Stansfeld Jones, better known as Frater Achad, regarded by Crowley as the predicted Magical Child of the New Aeon, actually does have a name that sums to 418 in AQ, the number identified in the same book.
Gripping stuff. See Luis's Gematria Research blog for more.
There's this idea among some futurist philosophers (Land and NYU Shanghai types) that language is the base layer of cultural forms and concepts. And that certain interrelations between the same are hidden in the numeric value of the words that relate to those forms and concepts. This idea is called gematria or isopsephia and is common in Hebrew and Greek Qabalah. The Bible for example is rammed with Hebrew gematria.
Liber AL was channeled, not written, by occultist Aleister Crowley at the beginning of the 20th century. It purported to be a kind of source doc for a new aeon for humanity. And now that people have found this Base 36 AQ numeration pattern in it, based around the English language, the idea is that AQ reveals correlations between the forms and concepts that are and will emerge in our new aeon.
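For the curious, here's a minimal Python sketch of the cipher (assuming the usual AQ convention: digits keep their face value, and A=10 through Z=35); it does reproduce the 418 mentioned above:

    # Base-36 Alphanumeric Qabalah (AQ), assuming digits = face value, A=10 ... Z=35.
    def aq(text: str) -> int:
        total = 0
        for ch in text.upper():
            if ch.isdigit():
                total += int(ch)                   # '0'-'9' -> 0-9
            elif "A" <= ch <= "Z":
                total += ord(ch) - ord("A") + 10   # 'A'-'Z' -> 10-35
            # spaces and punctuation are ignored
        return total

    print(aq("Charles Stansfeld Jones"))  # -> 418

Whether that coincidence means anything is, of course, a separate question.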
Some believe that certain languages were sent by God, gods or aliens to direct human cultural evolution.
As to taking any of this stuff seriously, you totally don't have to. I for sure am not gonna try and convince you. Some people get attracted to investigating this stuff, that's all.
At all? Well, Crowley was into codes and cyphers, so it's not implausible that he did encode messages into some of his works.
As for the content of those messages...I see no reason that anyone should take them any more seriously than they take, say, "Book Four". I found it worth a bit of investigation, and some of the principles work. The one I found most significant is "invoke often", but don't understand that too literally. I interpret it as, approximately, any thought pattern you repeat often enough will begin to appear real. And notice that you've been repeating thought patterns all your life.
I have a new Substack post, "The Shadow Fed", in which I criticize (by nuancing and expanding the excellent criticism of Brad DeLong -- highly recommended across the board) two papers from the "Shadow Open Market Committee."
I also found it great! Reviews of fiction usually have a hard time convincing me because they often don't tell me about anything other than the novel itself, which is a rather limited topic. Your review was one of the very nice exceptions to this rule. It was thought-provoking, opened some new perspectives to me, and made some nice connections beyond the actual topic of the book. Well done!
I thought it was great! It was faithful to my vague memories of the book but also discussed a lot of interesting points that I didn't think about when reading the book. Kudos!
Are there any interesting studies (ideally not with terrible statistics, but given that this falls into management studies/sociology domain, I am not optimistic) about the impact of workplace romance policy on firm productivity?
I work for a company with a very permissive policy. Like half of the office dates each other or has dated in the past, and a fair number are married couples. There is nothing about it in the code of conduct.
Normally I would be horrified about this sort of thing and expect a lot of gossip and couples bringing their emotional problems to work, which would create problems in the office and disrupt the normal flow of work. However, everybody has, as far as I can see, stayed quite mature about it and I have not seen this interfering with work. Romantic relationships between employees with different hierarchy levels (i.e. the worst case scenario - "she slept with the boss for that promotion") are not explicitly forbidden, but are so looked down upon culturally that they either don't happen or are hidden very well.
Which makes me wonder: just how much do explicit policies help, and when they don't, what makes culture fill in, and when does that fail?
I thought it was mostly about sexual harassment--i.e. (usually male) bosses forcing (usually female) subordinates to sleep with them or be fired.
Or, more cynically, the second half of that wanting something to hold over the first half and using the feminist movement to support them...but that's just me.
I am at a Big Corporate Place. There is no rule against office romance, in fact we have many interoffice marriages although I don’t know how many *met* at work. Also many children of employees work here.
The policy is just that you report it, and you aren’t allowed to work in the same reporting chain (i.e. in a position where one of you could directly influence the career of the other).
I think this is mostly CYA to prevent lawsuits about nepotism, harassment, etc.
I don't have statistics, but I do have a suspicion that stats may fail here because it's probably a more of a culture thing - and the outcomes can vary greatly.
Maybe the rule is there because of the culture (a bad experience). Sometimes the rule can help because it shows the management is serious about altering the norms, but other times it just results in people flouting the rule constantly, or coming up with ways to skirt it. But if you need this rule, your culture probably has bigger problems - that employees see work as a place to socialise, rather than a place to work! If you need to have a no-romance rule and you find you can't enforce it, your employees are probably also playing pranks on each other, spending a lot of time planning after work activities, gossiping, etc. You might have bullying and non-romantic drama. The manager might play favourites because someone said something he didn't like.
Whereas a culture that isn't so excessively social may or may not have this rule explicitly and still be ok - because if the culture is "work time is for work, not for making friends, you do all personal things in your own time, and never let personal things affect work ever" you naturally just don't have this problem even in the absence of such a rule, even if people are dating.
I think a decent number of large corporations (> 1000 employees) only limit fraternisation within your direct work team and make you declare conflicts of interest where relevant. It's a non-problem if your partner works in a completely different area of the business and you have zero work contact (there are numerous people I work with who are in that exact situation).
In your case I'd guess that you have a combination of having one of the work-only-at-work cultures, and/or you're just not in on the gossip (many work places strive for work-only-at-work but still have a little bit of gossip, it's fine as long as it doesn't take over).
I think personnel composition also matters a lot. A startup (mostly 20-somethings straight out of higher ed) has a very different vibe from my workplace (1-2 grads with very little on-the-clock contact with each other, while everyone else in the team is like 30-50, married with kids). I literally can't imagine workplace romance being an option for me - everyone is either ineligible or uninteresting (most people want to date someone their own age).
I'm fairly sure the few that did happen mostly started at after-work events, via introductions by friends from outside the immediate work group. Also, grads get moved around very, very quickly and frequently, and are always in teams surrounded by older people.
On the flipside, some of our service providers run workforces of mostly single men / some women (frequent travel to remote mines does not make for stable relationships) who are under the age of 35 (they tend to quit, get promoted, or get too injured past this age). Shifts are long (12 hours+) and people sometimes bring personal stuff to work (because most of their day is work).
Their Christmas parties are open bar. The fallout every January is both predictable and catastrophic. But the actual work arrangements make it kind of hard to change the culture, because long shifts and living in work accommodation mean you don't have any other social avenues, and the physical/travel-heavy nature of the work means you can't anchor your culture with family-oriented 9-to-5ers (well, you can, but they'd have to raise their families in a desert mining town, and you get an entirely different kind of vibe there).
This is rather hypothetical, but I wonder whether the main thing is a positive culture which holds that doing the work is important. Such a culture would admit modest amounts of non-work, and that would probably be better than an all-work culture which attempts to require work even though it doesn't especially prioritize the work itself.
Wow, congrats for finding a sane place to work. All I seem to hear about corporate culture these days is about policing speech and behavior out of fear that someone might have a human feeling here or there!
My suspicion is that explicit policies are most useful as a defense against lawsuits and as something to point to if someone really crosses the line and needs to be disciplined or let go.
In general I think people are fairly good at navigating these things themselves; formal rules are more to protect against worst cases than to improve the median case.
Documentary movies always seem to use voice-over rhetorical questions, long scenic shots, satisfying conclusions or other techniques that enhance the entertainment value at the expense of information transfer.
Lectures have pauses, stumbles and other live-performance issues. Video is seldom used.
Books are nice but can't use video.
I want to see a documentary movie that is optimized for information transfer with no regard for entertainment value. Basically a book or an academic lecture with high-budget video to enhance understanding. Is there such a thing anywhere?
3blue1brown? The videos don't necessarily cover as much as an academic lecture, but many of them are pretty well optimized for learning if you happen to be at the right level as an audience member. I particularly like their work on statistics - it does a great job introducing ideas like Bayes' theorem at an intro college level.
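(To be concrete about what "intro college level" means there: the identity those videos build intuition for is just the standard form of Bayes' rule, stated here from memory rather than taken from any particular video:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$

where H is a hypothesis and E is the evidence; the videos' contribution is the geometric intuition behind that one line.)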
I was just telling a bunch of folk about a month ago that, despite my not enjoying true-crime as a genre, this particular true-crime documentary was virtually perfect in form. As far as I remember, 100% of the visual content is evidence from the case (plus a couple of Google maps), with no arranged talking head interviews, lingering scenic shots, recreated scenes (*puke!*), or other filler gimmicks.
There is some irritating narration which includes color commentary while transitioning between facts ("She would then discover his horrifying secret," etc); had they cut all of the color commentary this would actually be a *perfect* documentary of a real crime.
And as someone who formally studied both documentary films and film editing, I consider the first 30 seconds to be absolute *genius,* some of the finest documentary work I've ever seen. The first three cuts are a truly magnificent set of choices (choices from police evidence!) which elegantly establish the event and introduce the "characters" with brutal and intriguing efficiency.
Again, I hate the color on the narration, but this is otherwise about as good an information transfer about this crime as I can conceive.
(I'm guessing you weren't looking for efficient documentaries about particular true crimes, but I'm mentioning this video because its form is supremely masterful, even if the content isn't useful.)
I watched it-- the thing I'm left with is that a lot of people knew the young man was dangerous, but it wasn't the sort of knowing that rose to the level of action until he committed a murder.
How would things have to be different for the public to be protected from someone like that without making life worse for harmless weirdos?
I don't think anything could have been done. Many people thought he *might* be dangerous, but it's important to remember he wasn't *actually* dangerous...until the day he was.
After all, a LOT of people make macabre jokes and are socially inept enough to seem "creepy." I strongly suspect that everyone living in a big social network has encountered at least one person - maybe a few - out of all the classmates, coworkers, neighbors, clubs, friends of friends, oft-visited businesses, etc. whom they wouldn't be surprised to discover was a budding serial killer. I can think of several in my own life where a YouTube documentary about them would make me say, "Yeah, that's about right."
And I'm pretty sure *I* am one of those people for a few of my classmates!
When I was in seventh grade I convinced my entire middle school - including teachers and administrators - that I believed I'd been abducted by aliens (it's a long story). Then in high school, I spent several months writing ostentatiously graphic horror stories *and reading them out loud* for an honors English class as a contest of oneupsmanship with two goth/nerd dude friends.
I'm confident that if I ended up as the subject of a YouTube documentary about a serial killing or spree killing, my classmates and teachers would have said, "I knew it!"
It's complicated because most of his aggression was verbal, but if you watched the video, there's a section about him probably killing a cat, and it got memory-holed. His fellow students (I don't remember about adults) kind of knew it and kind of didn't and didn't want to think about it.
I remembered the part about the probable cat killing, but the problem is that it was only probable. If there had been some definitive proof of animal torture, I'm sure his parents would have made sure "something" got done, but even with their best efforts, that "something" would almost certainly have been anemic, like counseling with probation (and especially so if the cat thing happened when he was under 18).
Perhaps in a different era, when it was possible to contain a family member in a mental health facility with a couple of signatures and a doctor's sign off, this dude's parents and/or community would have been able to stop him before he harmed anyone.
But we're not in that era now. There are many, many places in the US where the justice system refuses to contain dangerous people even after they've assaulted another person (on video, no less!). I can't think of any place that would devote meaningful resources to someone merely because they have a creepy vibe and some rumors circulating about them.
It's messy, especially because he was smart enough to attack someone who *wasn't* in his immediate community. It still seems like at least telling him that his sense of humor was substantially unwelcome might have been something. He did harass people, or at least that one girl he targeted for being fat.
From my point of view, the response to him seems flabby: his feelings are treated as important, so he's allowed to make people's lives worse, while other people's emotions aren't important, because welcoming difficult people is the primary goal.
It's a real problem because sometimes people are arbitrarily defined as difficult.
Arguably, the people around him failed to protect him as well as failing to protect the homeless man. I don't know whether there was any way to convince him to value people, at least in some minimal way, but perhaps he could have been convinced that there are social mechanisms that might not let him get away with murder.
I agree with you here as well, and am frequently annoyed by the way documentaries are presented. Even more upsetting to me is the added drama and the humanizing of animal behavior in nature documentaries - narrators telling us an animal is timid, or curious, or afraid, when the actions of the animal don't reflect the narration. If a nature doc goes as far as to actually give the animals names, I generally turn it off immediately.
There isn't any intended information transfer that could be sacrificed for entertainment in a documentary. Documentaries are a careful kind of dishonesty that masquerades as informative.
Thought experiment: put every microclaim (including implicit ones involving a shot that makes some opposition figure look ugly or brooding or evil) from a 90-minute documentary on flashcards, and go through all several hundred of them over a period of weeks with an attempt to debunk or reframe each one.
I suspect that most of the time, long form text + illustration is the best way to get information across. If you put this text on the internet, then when a video is more appropriate, you can just embed video. However, this is still missing the most important part of real lectures, interactivity. To get interactivity when it is needed, the internet wins again- just embed a flash ga- damnit this is just homestuck
Wrong medium. If you optimize for information transfer, video becomes more of a burden. It has its place in blended learning and MOOCs, but pure video lacks a control channel (slowing down, speeding up, rephrasing on demand, having additional info in some textbox). Many video players are also poor when it comes to bookmarking, annotation, etc.
I am looking for material suggestions for some toys I want to make. I want something with plastic-like properties: hard, durable, food safe, transparent.
But the only way I know to work with real plastic is to model your shape, carve a mould out of aluminium with a big CNC milling machine at lunatic expense, install it into an injection moulder, and run yourself off a couple thousand copies.
I don't want to do that. I'm thinking more like, something I can sit down and sculpt with my hands at the dining room table on a Sunday evening.
(The shapes I want to make are complex, 3D, and contain through holes - vac forming isn't an option.)
My current thought is I could sculpt shapes using some kind of polymer clay, create a mould from that using liquid silicone, then pour epoxy resin into that mould.
Epoxy resin has all the properties I want, plus you can make pretty marble colours that kids will love.
Unfortunately there are lots of resins that are definitely not food safe, and while others kind of imply they are, I haven't seen any that are willing to come out and positively promise it.
I'm erring unusually far on the side of caution because the target audience is my ~6-month-old child. I have watched what my child shows interest in, as well as how my child likes to develop its grip and hand and finger coordination, and I've had some ideas for toys based around that.
So we're talking a small-scale, low-volume, arts-and-crafts-level operation here, yet one that produces results that can staunchly withstand the sheer destructive force a half-year-old baby with a preternaturally strong grip and the novelty of its first tooth can bring to bear.
If you don't mind having to throw them out and re-make them often, I wonder if you can use literal food - sculpt things out of pasta or bread dough, vacuum seal/shrink wrap your toy (the ones for sous vide probably work), and chuck them out if the casing breaks. It will unfortunately really limit your shapes.
Konjac jelly could work if you have a suitable mould (which you can 3D print if you can find a printer that does food-safe prints) but be very careful with choking hazards. These would be one-use but a lot less effort to make repeatedly.
Food grade beeswax, if you're good at whittling and can find a supplier with your desired properties.
A baby will be able to gnaw their way through beeswax as soon as they have the suspicion of a tooth; if yours already has a tooth and an enquiring nature, it may not be the best choice. Depending on the shape, it may also break when under relatively minor pressure.
Pros: Beeswax is non-toxic, easy to find, fun to work with, and smells pleasantly of honey.
Cons: It does tend to crumble at fissure points in a distinctive way that is somewhat hard to clean up.
In case you do decide to play with polymorph/polycaprolactone as others have suggested, I'll share some tips: it sticks too well to some metals. It is easily glued with super glue, which is also fairly biocompatible. The brand matters (it can be overly sticky or brittle), but I think anything you can buy outside of China is probably okay. And it becomes somewhat brittle in a few years, probably due to oxidation. (I wonder if some brands don't embrittle, yet are still food safe?) And it's very hard to get big pieces to hold their shape while they cool unless you use an armature.
Edit: never mind, I saw you wrote below that you already know how to 3D model. That totally changes the effort trade-offs.
Just because I can do 3D modelling doesn't mean I want to. I'm thinking of having a go at this even if it's just for messaround or prototyping. Thanks for the advice.
It just occurred to me that plaster of Paris is also kind of fun. And if it turns out you enjoy mold-making, there is a whole world of materials which on their own don't meet your requirements, but might be used in casting to make final materials that do suit your needs. (The final pour would probably be hard silicone, since food safety is a primary concern.) Just don't let a person try to take a casting of a limb - that's fairly dangerous. I imagine this activity is also something you could do with a kid when they're older.
I work in designing implantable plastic medical devices.
I would definitely look at 3D printing. Protolabs, Formlabs, and others have print shops that are relativvveellyy reasonably priced. It'll be expensive for a baby toy to be sure but not totally unreasonable. Sounds like it's as much a project for you. They will have a wide range of materials.
I would shy away from molding your own stuff out of a plastic resin, it's often the plasticizers that are actually dangerous and it'll be harder to find out exactly what some particular brand or formula is using. If they advertise it as food grade or medical grade it's likely to be safe to ingest. Molding your own medical grade silicone actually isn't too difficult and probably the cheapest, and you can get it fairly firm, might be worth considering.
If you go through a print house, pick a material like PEEK or PE or UHMWPE or medical grade PP. These are all commonly used plastics for short term contact devices like feeding tubes and such.
That's a great suggestion, thanks. I'm currently thinking I'll do some basic playing by hand, then move to 3D. I shall justify the cost by adding insanely intricate and unnecessary internal details.
Depending on what you want to build, thermoplastic beads might work? It's essentially just plastic with a low melting point, so you can melt it in hot water, and then it cools hard. I would expect it to be relatively baby-safe and it's easy to use. It's difficult to make precise shapes out of, but fine for plasticine-like modelling.
I'd ruled that stuff out, and now I can't remember why. Maybe I thought a baby's mouth could reach temperatures of over 80C for some reason.
EDIT: remembered as soon as I hit "post": I think I decided that the chances of the baby gleefully dunking the toy in someone's coffee mug and causing it to melt in there were high enough to rule it out.
I have played with the thermoplastic and think you might like it for the visceral molding process you are talking about. In coffee it would take a while to get soft, and even then its shape wouldn't change much unless you apply force to it.
There’s a natural material with similar properties - Gutta Percha. It was used as plastic for kids toys before we had modern plastics. It also becomes workable at the temperature of boiling water. I have not been able to find a bulk source of it though - at one point humans made enough to coat undersea cables and such, but now it seems to mostly be in dental supplies
I think that would be a really unlikely scenario since items made from it take time to soften - the baby would have to find some rather hot, unattended coffee to drop it into, and then leave it in for a couple of minutes before fishing it out of the scalding liquid with a spoon or something, and at the point where it's actually very soft, it's also quite hot and not fun to touch. It comes in little beads which soften quickly, but any pieces larger than that take much longer to become soft.
I would rate the danger as lower than giving the kid a sponge to play with, which they could conceivably put into a hot drink and then squeeze over themselves. Edit: or just spill the drink over themselves, come to think of it.
If the ability to do things with your hands directly is central to what you are looking for, don't bother reading the rest of this comment.
Otherwise...
Have you considered 3d printing? There are some food grade resins (though you'd still want to coat them and clean them up regularly).
I'm fairly confident there are some materials that would be reasonably safe - for example, some dentists use 3d printing to produce aligners, and if a material is safe enough to be literally in your mouth for 2 weeks, it should be safe enough to produce a toy.
With the right kind of coating and regular sterilisation it should be about as safe as it gets.
Of course, it's more expensive than a purely hand-made approach, but easier in a sense, if that's within your budget.
That being said, of course do your research and don't take me at my word. I'm just trying to point to a potential option you did not mention in the original comment.
I'm happy to get stuff 3D printed - I'm very comfortable with digital sculpting and CAD modelling, and since bits of the design do need to be functional, doing it in CAD makes sense. I rather wanted to do something that didn't involve being in front of a screen, but I wouldn't let that stop me if it was the smart choice.
> you'd still want to coat them and clean them up regularly
> With a right kind of coating and regular sterilisation
This is my bigger reservation - anyone who inherits the toys in like a year or two is not going to do that. Also I will 100% forget to do it too.
Hmm, my mum just bought some resin bowls from a big retailer that are designed for food. I assume they checked whether it's food safe. I can try to find out exactly what they are made from if you want.
I'm going to take it as accepted that wokeness started in the United States and then spread from there. This seems pretty obvious and widely accepted; a roughly similar story to what follows can perhaps be told for some other countries, but that isn't necessary, since most of the world follows American cultural influence whether it likes it or not.
I presented here (https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-cf9/comment/55798428l) the general idea that wokeness started with Obama's election. Several people objected to this account on the basis that it doesn't really explain where wokeness actually came from, nor does it fit with Obama not being hugely woke personally, at least initially. So I'm attempting to flesh this out.
To start with, I think Hanania is right to trace the roots of wokeness to the 1960s. But it seems pretty clear that we're all confused on the origin of cultural wokeness specifically. I'm focused on that, and not on what role civil rights law played. And I don't claim any of this is at all original: I'm just stitching together bits and pieces of what lots of people agree on, and probably this exact theory has been presented many times already.
The essence of my theory of wokeness is this claim (which may or may not be true, but feels true): since the 60s, and especially since around 1980, a minority of the American population (30-40% at most) have been moving further and further to the left on social and cultural issues pretty much constantly, while the rest have been largely static. This fundamentally differs from the middle of the 20th century, when almost the whole society shifted left on lots of things as one (leave aside whether left and right have intrinsic meaning: they mean something, roughly, which is all that matters). I don't know how this claim could be tested, or what sorts of poll questions would be a truly accurate guide: I think you could probably get polls arguing both ways. But it *feels* largely true to me, I think at least some polling would support it, and if it's false then my theory doesn't work. Too bad.
Assume it's true: liberals keep moving left, centrists and conservatives stay where they are (with a few exceptions). With the stabilisation of the current Sixth Party System in 1980, the Democrats face a difficult problem: how to both turn out the liberals and avoid alienating the centrists. First they run Carter's VP and the first female VP candidate, but it's a hopeless campaign that has no chance against Reagan, so their loss doesn't really prove anything. Then with Dukakis they have what seems to be a potential winner with liberal appeal, who in the end is sunk by being too out of step with the centre, especially on the death penalty.
So the Democrats have learned that appealing to liberals doesn't work. They pivot to centrism, and after three terrible losses the liberals in the party are desperate enough to be bullied into supporting Bill Clinton. That works. The centrists and liberals hold together once more and his re-election works as well, but then the split happens. *This* time, instead of trying to woo them, the Democrats tell the liberals "we don't need you, you're a liability, we'll focus on the centre". Enough liberals split off to the Nader campaign, and this pulls away *just enough* votes to throw the election to Bush.
So now the Democrats have tried liberal campaigns and centrist campaigns, and both are potential losers. So what can they try? A flip-flopping candidate who says different things on different days, and to different audiences, and relies on dislike of Bush to scrape through. But the Bush campaign turns "flip-flopper" into an insult, and they lose again.
So by the beginning of 2008, what have the Democrats learned? They lost in 1988 by being too far left. They lost in 2000 by being not left enough. And they lost in 2004 by being too unclear how left they really were. It seems an impossible bind, but Obama's charisma is enough to temporarily save them. By speaking carefully and charismatically, he manages to persuade both centrists and liberals that he is on their side. Saying about gay marriage "my position is evolving" is heard by liberals (and conservatives) as "I'm in favour of it and I'll push it as soon as it's politically possible" and by centrists as "I'm no ideologue, I'm open to both sides". Saying about no longer supporting single-payer health care "I'm better now" with a smirk, is heard by centrists as having moderated and become wiser, and by liberals as a kind of sarcastic "I've realised it's not politically possible right this moment". I'm sure you can find lots of statements of his that are like that.
Well, this works in the short term, enough to win one election. The first unfortunate side-product of making both the liberals and centrists think he's on their side is that it makes conservatives lose their mind and hate him, because they hear the same thing the liberals hear. But so what, he doesn't need them to win. The right-wing backlash is extreme enough that the backlash to *that* gets him re-elected and keeps the liberals and centrists together one more time. And then comes wokeness.
Because the second side-product of making two groups both think you're talking to them is that you make them very angry when they realise that you might not have been. The liberals therefore react to this struggle for control in the following ways: language policing, turning viciously on your own side's people and searching their every statement for signs of ambiguous commitment to the cause. Purity tests. Demanding that every cause be linked to every other, so that you don't count as an ally on one thing unless you're an ally on everything. And so on.
From the liberals' own perspective this makes sense. They don't accept that a fully progressive platform is not electorally viable: they simply think they've been betrayed over and over by double-talking thought leaders. And they react to that in a way that's logical in a sense, but extremely toxic. And the Democrats and progressive thought leaders have no choice but to go along with it, since it's been proven that they can't win without the liberals.
But also, maybe the liberals have *some* awareness that full progressivism is not electorally viable. Which is why they de-emphasise substantive policy issues in favour of correct language and ideological purity.
I think this story explains many of the central elements of wokeness. It explains how it's encoded into the structure of the 60s social revolutions, which left a divided society of centrists and conservatives who want no more radical change, and liberals who want more and more of it. It explains why it took so long to really manifest: the logic of this situation had to be played out in many elections, and it had to be firmly proven that neither a centrism that ignores the left, nor a self-confident leftism comfortable in its own popular dominance (like the FDR form that won over and over again) is possible anymore.
And it explains the sheer toxicity of wokeness: it's an ideology of desperation but also of dominance. It's people who have not enough power to govern, but enough power to destroy a government (where "government" also includes the media and corporate and institutional structure built up around progressive values).
And there's another factor that I'll separate into another comment.
As someone who was on the ground floor of woke when it was still called being a cringe SJW, and who remains woke to this day: I obviously think you are wrong re. who is moving where; you can tell from the majority support for gay marriage, the decriminalization/legalization treadmill, and the fact that a majority supports single-payer healthcare now, as long as you don't ever use the words "single", "payer", "government", or "socialism" when you describe it.
I think you are right about the timing though. The origins of the thought patterns, in my view, were people who thought that the social questions had all been answered, that people were having dreams, and that love had won - and then going to a figurative uncle's house after Obama won the presidency and seeing him freak the fuck out.
What did it for me was watching the SoCal small-business-owner stepfather of my best friend - someone I saw at least 5 times a week - say some wild shit about Blacks (N*****S), Mexicans, and Gays (F*****S), with all the dudes around just nodding along (despite him employing majority non-white dudes and each of them employing illegals regularly), then going to my SoCal evangelical church and hearing about how the country was sinful and fallen because .... There Was Expanded Access To Healthcare For The Undeserving! (capitals for emphasis) - a remarkable interpretation of Christian doctrine, to be sure.
Even if these represent a small share of the population (which they do), they hit at just the right moment to start the development of political consciousness for people who had just been hit by 2008 and could see that no punishment whatsoever befell those who caused it and benefited from it.
What we now call "wokeness" is just a development of currents that have been going on in the Left since far longer than I've been alive. It's a moderate development of what we called "Political Correctness" in the 1990s.
I would say that two big innovations mark the current era though:
1. The Left has pretty much abandoned the anti-capitalism thing. In the 2000s you'd get huge left-wing protests at Davos, at the IMF, at the G20, all those big meetings. As recently as 2011 you had Occupy Wall Street. But that stuff has gone away now. Whether the left co-opted big business or big business co-opted the left, the whole "rich vs poor" stream of leftism has vanished, leaving only the "privileged groups vs unprivileged groups" stream and a bunch of giant companies changing their logos to rainbow flags once a year.
2. The rhetorical innovation that "racism equals prejudice plus power". Again it's probably old, but I first heard it in the wild around 2007, and it seems to have completely displaced the old conception of what racism was about, permanently preventing whites from ever complaining about any racism directed against them ever again, because that doesn't count as racism. This was basically the abandonment of the idea that identity politics was in any way about equality.
>2. The rhetorical innovation that "racism equals prejudice plus power".
Incidentally, since Ibram X. Kendi was in the news lately, his most famous book (How to be an Antiracist) rather explicitly rejected this premise and used the word "racism" for basically all forms of race/ethnicity based discrimination, including having a chapter explicitly condemning (himself, in the past, and criticizing the movement in general for) anti-white racism.
Or just what happens when you get stopped by a black cop or have a black supervisor. Individual people have more or less power in different situations, but it's silly to think about that in terms of whole racial groups. Blacks can have less power overall in the US than whites at exactly the same time a black Baltimore cop is beating some white guy to a pulp for mouthing off to him.
>racism is only really harmful if done by the powerful and only a nuisance if done by those without power.
Therefore, if I see a bunch of people who are racist against people like me, I should strive to make sure those people never have any real power. Or am I reading the wrong lesson here?
FWIW, ISTM that the US has been moving to the right. In the 1960's the dissidents were largely on the left. This was partially because of the draft and Vietnam. Since then it has been moving towards the right. Largely, I feel, in antagonism to the civil rights legal implementation. Another thing driving it to the right is the general aging of the population.
OTOH, I clearly don't understand people very well...so I don't really even trust my perceptions of just how "left" or "right" the population in general is. But I do know that it varies wildly in different areas. So what you see will depend on how you sample things.
"MONTANARO: The word has a long history. It was used in Black protest songs dating back to the early 20th century, including by Huddie Ledbetter, better known as Lead Belly, the singer of the 1938 song "Scottsboro Boys."
HUDDIE LEDBETTER: (Singing) Go to Alabama and you better watch out. The landlord'll get you, going to jump and shout. Scottsboro, Scottsboro boys, tell you what it's all about. I'm going to tell all you colored people...
MONTANARO: Here's Ledbetter speaking about the song in what's believed to be the first audio recording of the use of the word woke. An old record - it's hard to hear, but he says in Alabama, be careful and stay woke.
LEDBETTER: So I advise everybody, be a little careful when they go along through Alabama - stay woke, keep their eyes open.
MONTANARO: Be careful. Stay woke. Keep your eyes open. The Scottsboro Boys were nine Black teenagers who are accused of raping two white girls in what is widely seen today as one of the worst cases of racist legal injustice. It helped spur the civil rights movement and loosely inspired the book and movie "To Kill A Mockingbird."
So the social justice set co-opted the phrase, particularly as it became known more widely due to the BLM activism (according to another article). It has since come to take the place of what was formerly "politically correct", then "social justice" and now "woke".
I think your analysis would benefit from more clarity re what you mean by "woke." At different times, you seem to be conflating it with Democrats, or with the left, neither of which is particularly accurate. (The Democratic establishment is hardly onboard with the woke agenda, nor are people on the left like Bernie Sanders and Noam Chomsky).
You might also benefit from being more careful about some of your political claims. Eg, this is not particularly accurate: "So by the beginning of 2008, what have the Democrats learned? They lost in 1988 by being too far left. They lost in 2000 by being not left enough. And they lost in 2004 by being too unclear how left they really were. It seems an impossible bind, but Obama's charisma is enough to temporarily save them."
The Democrats were very unlikely to win in 2004, against an incumbent with an approval rating of 50%, unemployment at 5% and inflation at 3%. And Obama's charisma had nothing to do with 2008, which was an awful year for Republicans (they had been in office for 8 years, the exiting incumbent was very unpopular, inflation was at 5%, and unemployment was at 6.5% and rising).
Moreover, it is odd to speak of a party being "temporarily saved" when said party has won the popular vote in every Presidential election but one since 2000 (yes, of course the Electoral College is a thing, but you are making claims about the popularity of the party, so popular vote totals are very relevant)
>Moreover, it is odd to speak of a party being "temporarily saved" when said party has won the popular vote in every Presidential election but one since 2000 (yes, of course the Electoral College is a thing, but you are making claims about the popularity of the party, so popular vote totals are very relevant)
nit: Assessing the meaning of the popular vote, given that we _do_ have the Electoral College, and all but two (?) states use their electors in a winner-take-all fashion, is tricky. The incremental influence of voters in non-swing states is _tiny_, which influences turnout there. It might differentially influence turnout for majority and minority party members in non-swing states - does anyone know of studies on this?
I feel like I have an internal model of how political ideologies arose that seems to make lots of sense. But trying to get this across in words is difficult, especially when the meaning of every important word is debatable and will be hotly disputed.
But regarding your election explanations, I'm not sure they refute my point, because it's not really important why each election was actually won or lost, only what the perception is of why (on the relevant side).
Narrow question: Would you dispute that large parts of the left/liberalism/progressivism/Democrat-rank-and-file (choose whichever one makes my question most coherent) have blamed the 2004 loss on something like Kerry being too indecisive and/or not progressive enough?
Would you dispute that Obama personally got a lot of credit for the decisive 2008 win (justified or not)?
>Narrow question: Would you dispute that large parts of the left/liberalism/progressivism/Democrat-rank-and-file (choose whichever one makes my question most coherent) have blamed the 2004 loss on something like Kerry being too indecisive and/or not progressive enough? Would you dispute that Obama personally got a lot of credit for the decisive 2008 win (justified or not)?
Those are probably true, but I took you to be making a different claim. And, re Obama, I think that those same people gave tons of credit for his success to his policy positions (eg, health insurance, obviously, but also the Iraq war)
> the general idea that wokeness started with Obama's election
The Puritans burned witches; there was Prohibition; the Great Awakening, where a bunch of cults started; and the civil rights era had communists targeting the black community for activist training.
Any history of American idealism going crazy that's only a decade old is ridiculous.
I think there are different aspects of woke that arose at different times.
I think the strong emphasis on identity politics came around the time of Obama's second term, when it became obvious that having a Black president was not going to solve all the problems with racism in America. The thrust of mainstream civil rights before that was racial equality and integration, aligned with Dr King's Dream where we are no longer judged by the colour of our skin. After this period, civil rights became more about championing separate political identities - initially for African Americans, but later for other groups like trans and gender-neutral identities.
The other main aspect of woke is the policing of language - for example, calling people out for stupid jokes, as with Justine Sacco, but also cancel culture for using the wrong word for an idea and a proscription against cultural appropriation. I suspect that all of these came out of the subculture that evolved on Tumblr and escaped into the mainstream as Tumblr-ites came of age.
> I think the strong emphasis on identity politics came around the time of Obama's second term when it became obvious that having a Black president was not going to solve all the problems with racism in America
I remember a Jon Stewart bit from ~2009. He was talking about the latest thing that Jesse Jackson or Al Sharpton had said, and his response was "Whoops, sorry! It's 2009, there's a black President in the White House, your race card has expired!" And then he pulled out a card labelled "race card" and swiped it through a machine on his desk and it flashed "EXPIRED" and the crowd cheered.
It's a real shame I can't find that clip on youtube, because it really shows how perceptions swerved over the course of the Obama administration.
The original post is basically correct. Over time "big government" has suffered some key defeats in the English-speaking world, much to the chagrin of the far left, which has had to content itself with victories on social issues; but the buzz of these victories soon wears off, given the background economic inequality - a certain spitefulness results.
I had hoped that Trump/Johnson (not my people) might at least shift the Overton window in favour of big government, making it easier for the centre left to do the same - and maybe then we could all relax just a little about culture wars, because everyone is getting a bigger slice of pie. But the pandemic was a disaster for the public finances and so austerity is back. In the UK, Starmer seems set to be Blair 2.0 - so we will get assisted dying, abortion decriminalization and very little progress on economic inequality.
"Big Government" has lost rhetorically, but the Government remains as Big as it's ever been.
If you want proof: you're in the UK in the summer - install a window AC and wait and see how long it is before someone from your local council comes knocking.
I agree - when I say Big Government I mean as in "the era of big government is over", roughly the Keynesian consensus. There is still what Popper called "the petty tyranny of the public official", and yes it's annoying. Mind you, many councils are on the verge of bankruptcy so their staff can't be everywhere, but the traffic wardens round here are certainly keeping busy.
Your comment originally said "this is basically correct" but now says "the original post". I'm unclear on whether you mean my comment here that you're replying to, the previous comment I linked to, or something else?
It's the euphemism treadmill; in the 90s, "woke" wasn't the term - that's the joke in the name of the movie, "PCU" standing for "Politically Correct University".
"PC" became adopted by the right to refer to a certain set of attitudes, so that term was scrapped and "social justice" became the preferred one. Same thing happened with SJW, so now they've moved on to "woke" (which, depending how impeccably correct you wish to be, is more white liberal appropriation of AAVE).
The terms change, but the underlying ideas remain broadly the same.
I prefer the less-practical framing of "of course the demon doesn't want you to know its name, otherwise you might be able to control it". If you can see it, and name it, you can give it an identity, and define it, and then attack it.
Many Thanks! Hmm... If a demon faction has a particularly innumerate talking point, and one knows its true name, can one exorcise it with: "The power of math commands you! The power of math commands you!" ? :-)
As someone who you'd probably count as 'woke', I don't think I've ever heard someone self-describe as woke, except for maybe a few people online circa 2015. It seems to be mostly a label that the right uses to describe people, not one that is self applied. This contrasts with "social justice", which I have heard many people on the left use for themselves (and still use today).
I don't buy that, on the grounds that it's a really catchy moniker. Like... goddamn it's a good bit of propaganda. "We're the people seeing clearly". And then to corrupt it you've got to spend a whole sentence setting up something about illusions from insomnia. You're telling me the right came up with a term like that, and then handed it to their enemies? No way.
Sure, primarily in African American communities as Deiseach mentioned. But from my perspective, it seemed to jump straight from those communities to right-wing people describing leftists as woke.
I'll read that thread in more detail later, but thinking the Republican reaction to Obama was unique is questionable. Bill Clinton was himself seen as a massive threat when he was elected (even though he was a solid centrist). Rush Limbaugh would note all the time "America held hostage, day number [xxx]", and as you should know, Clinton was also the first black president in case that matters.
I don't claim to have a satisfactory explanation for woke, but as Tyler Cowen often points out, all things begin earlier than we suspect. A couple of important components of woke would probably be due to the flourishing of postmodernism in political discourse, as well as the impact of social media on incentives on how people signal their ideological alignment (e.g. purity spirals).
To continue the discussion about applied math programs, here's the curriculum from the University of Toronto's Applied Math Specialist program. U of T is arguably Canada's top university.
13.0-13.5 credits, including at least 1.5 credits at the 400-level (Here the H1 courses are single-term, equivalent to three credits in a 120-credit system, and Y1 courses are two-term, equivalent to six credits in a 120-credit system.)
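(For comparison with more familiar numbers - a quick back-of-the-envelope conversion using only the equivalences above, so treat it as approximate:

$$13.0 \times 6 = 78 \qquad 13.5 \times 6 = 81$$

so the required coursework is roughly 78-81 credits out of 120, i.e. about two-thirds of the degree is prescribed by the program.)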
It's certainly not a very applied program. It's only slightly different from the Mathematics Specialist program, which is so spectroscopically pure it doesn't even require a computer programming course. But then it does say this is a program specifically for people who want to do math research.
+ instead of calculus, you have courses called analysis and complex/real analysis, which are more rigorous.
+ more probability
+ applied courses available (12. and 13.) appear more focused and useful
+ U of T is reputable
-- the way the program presents them, the options within 12 and 13 each appear mutually exclusive. Non-linear optimization is useful everywhere, but by choosing it you have to give up a bunch of other interesting stuff.
I would still be wary of recommending that anyone take even a good applied math degree unless they have a clear vision of what kind of applied mathematician they want to become. The way some careers can go (and how it went for me), the only benefit I got from doing the rigorous theory parts of an applied math degree was ... proof technique and satisfying some intellectual curiosity. Later in my life, I got to use some stats and probability, but I would have been better served by doing a statistics degree. Many applied math graduates who went to computer programming jobs use nearly nothing of it and would have been better served by a computer science / software engineering degree.
I would rather try to sell the idea that it is better to choose the applied part first, and then get the relevant maths: a degree in applied physics (a rigorous background in physical phenomena plus some familiarity with expensive physics lab equipment), any engineering (for obvious reasons, but I would highlight signal processing), ecology (full of interesting math problems from game theory to dynamics), genomics or bioinformatics (difficult math problems in studying life itself with DNA/RNA sequencers, where you have to figure out algorithms to make sense of your data before you have anything to study), any field of statistics (for obvious reasons), or economics (theory and methods for the quantitative study of human activity, especially econometrics).
Yeah, while the prospect of doing an undergraduate degree in an uncompromisingly intellectual field like math or physics is appealing, the question of what comes after that does rear its head. Presumably if I were doing that I would be planning on going on to something where the exact nature of my undergraduate studies didn't much matter, such as pursuing an MBA or going to Officer Candidate School. But just in case that didn't work out, it would be useful to have a plan B in the form of some studies in something a little more vocational. If things went awry and one degree was all I was going to get, I think I'd much rather face the job market with a degree that said Computer Science/Applied Math than one that just said Applied Math.
This looks like a great Applied Math program to me. It's got 1 course each in abstract algebra and topology, which lets students have enough awareness of other parts of mathematics to know what they don't know - so it's definitely got the Math part down. In defense of the Applied part: for a pure math degree at most of the institutions with which I have been affiliated, you can graduate without taking a single probability course (this requires 3), or learning any programming (this requires 2 courses), or taking a dedicated differential equations course (this requires 2 courses) - and this requires advanced applied math and topics courses as well. So it's maybe 1/3 foundational math courses of the more rigorous variety (calculus, linear algebra, analysis, maybe the geometry course here), a sprinkle of broader math, and then the rest is stuff that's essential to an applied mathematician and merely advisable to a pure mathematician (probability, diff eq, programming).
I think my only complaint is that a student can apparently graduate from this program without seeing numerical methods or optimization courses. (But it's hard to fit everything into 4 years). A student graduating from this program would be very well prepared to study more applied math in graduate school, or to learn how to do a job.
Interesting. I would have zero problem labeling this as just a "math" degree, and it's also significantly more advanced than just about any math degree you'd see here in the States. On paper at least, I'd say it compares favorably to UChicago in the 90's, which we (not inaccurately, I think) considered one of, if not the, most difficult math programs at the time. Of course, the devil is in the details; two courses can have the same name and yet cover very different topics.
Just published my forecasting roundup from Manifest. While others have covered the event itself, I think many here might find these predictions and takeaways of interest:
I noticed recently that I have a lot of opinions that are both strongly held and very weak. More specifically, I mean that I feel like almost no amount of conversations, posts, news articles, etc. would be enough to shift my opinion on them, but also that it would only really take one good journal article to convince me to change my mind.
Does anyone else feel like they have opinions like this? Is there some kind of established good epistemic practice around them?
In my experience this happens when one hasn't done a deep dive into position X but has seen a bunch of stupid and wrong anti-X positions. The only thing to watch out for is a subconscious tendency to go "Well, I've seen a hundred bad arguments, guess there aren't any good ones or they would have come up." But if the only place you're exposing yourself to arguments is whatever the top couple of comments on social media happen to be (to pick a very common example), you'll see the same stupid but popular positions over and over, and that shouldn't be reinforcing the belief.
Yes, this. Some years back I came to the same self-realization that you have and realized that it was not a smart way to move through the world. The core of it, I concluded, was exactly the mechanism that Jeff describes: the hundred bad arguments were effectively driving me batty to use an old-fashioned word.
My solution was to become fairly ruthless about blocking out sources of way more noise than signal without regard to whether my personal preferences aligned with the noise. This led to banning certain news sources from my daily life despite sharing the same broad worldview as most consumers of those news sources (I am deliberately not naming them here). It later helped me be ruthless about social media platforms -- I concluded that there is no practical "I just use it to keep up with distant friends" middle ground, not for me anyway, just had to accept the loss of that benefit as the price of keeping my overall bearings.
You have high trust in some sources and low trust in others, and you're gating updates based on that trust. It's not a contradiction to say source trustworthiness can be domain bounded - you'll trust the doctor over your own experience when it comes to health advice, for example, but you probably don't by default adopt his political or religious beliefs as well.
This is both natural and rational behaviour. Because there is no way to gather and evaluate all the information relevant to modern life ourselves, we are forced to accept what we believe from external sources largely uncritically. Our agency becomes about how we evaluate and place or remove trust in them. The upshot is that tribal fighting over source availability and reputation becomes tooth and nail.
I think you underestimate the degree to which one can evaluate the claims of so-called experts without oneself being an expert in their field. A scientist in one field, for example, can look at a paper in another field and (with a small amount of field-specific knowledge much short of expert) gain some impression of whether they are following good scientific practice. And if your doctor seems confused about the measurement units of test results, you might not want to trust them too much.
Yes. And scientists are also well known for making garbage assertions within their field of expertise. There's no substitute for evaluating claims on their merits.
So, for example, I hope you're not still following the advice to replace butter with margarine made from trans fats, or getting your kids' tonsils taken out for no reason...
That wasn't actually what I meant by "garbage statements". There was some evidence that it was correct. It wasn't complete, and the advice was a mistake, but that's different. Often the evidence is incomplete, and a best guess at that point (i.e. most of the time) can be incorrect...it just has a lower probability of being incorrect. (There are very few things I actually count as certain. The way I've heard it put informally is "No zeros and no infinities on the playground", though if we're talking about probabilities that should be 1s rather than infinities.)
However, one thing I do count as certain is "it's dangerous to breach the barrier of the skin". Which doesn't mean you should never do it; it means that if you're going to do it, realize that it's dangerous and take precautions.
I want to honor some reviews that I liked that didn't make it into the finals. Actually, there were lots of good ones. Sometimes the review itself was well written and had interesting commentary, and other times it was worth reading for a glimpse of the book being reviewed.
I'm pretty sure that was actually by Scott. It was funny and clever and maybe suffered from not having any great idea to reveal (and it also suffered from me knowing this stuff already) but there was a lot of interesting info and I got a feel for how little data we've got to deduce anything about the period and how shaky the deductions can get.
The reviewer is trying to make sense of a 16th century theological debate. I was impressed by how seriously and thoroughly they tried to evaluate the arguments. An interesting exercise in taking seriously ideas very disconnected from our own.
The review is OK, but I want to point to it to promote Chekhov: go read Chekhov, he's really good and at a local maximum of something. That something may be "very short realistic literary mood pieces" and may not be your favorite thing to read, but come on, it's the local maximum, you've got to try it.
It's a bummer that this one didn't make it: it's an epic review with a lot of work put into it and a lot of interesting thought, and I love the reviewer's enthusiasm for the subject. Having finished it, I feel like I got some sense of what makes The Divine Comedy actually good and worth reading (independently of it being Famous and Important). The review did kind of suffer from being obviously undercooked, but come on.
It's a silly review that presents the Pooh story as a Homeric epic. (And I don't think this one is by Scott.) This sort of thing is a little too cute and clever for me, but it is pretty cute and clever.
I read this one with interest for the sake of getting the gist of the book, which is about a strikingly weird and a little horrible experience completely unlike any of my own.
I still think it's terrible. It's a messy mess. I wrote much of it on the last day, and I was still scrambling to finish it in the last ten minutes.
Most of it is a bloated summary which should have been trimmed down to make room for more commentary about the allegorical, political and linguistic aspects of the work.
I'm glad you liked it (you're the second person to give me positive feedback, third if you count Deiseach but I think her positive feedback was for the choice of book not the review itself), because I spent the last month feeling ashamed of it, thinking I should have had the wisdom not to submit it and send a better version next year.
I thought the review was kind of a mess, but it *did* have some genuine insights. I found the analysis of differences between Italian and English very interesting; but structurally it was unfocused and there were some formatting things (e.g. it was not really clear when you were reviewing and when you were quoting) which really dragged down the experience of reading it. If I'm honest, I do think you'd have been better off preparing a better version for next year, and probably to focus on one or two aspects rather than meandering through a range of topics, only some of which you've been able to give proper thought to, across thirty pages.
The review was interesting, but I was personally most impressed by the translation itself. I would absolutely read (and buy) your translation if you ever complete it. Or if that seems unattainable, I would also be quite happy if you could post somewhere online, whatever you have so far.
I'm so happy to hear you liked the translation! I'm gonna post it online for sure. At least some of the cantos, the ones I'm satisfied with, maybe not all of them. And other translated poems too. The main reason I'll never complete it is that I expect AI to quickly progress to the point that even translators of poetry become redundant. I haven't translated any poetry since the release of ChatGPT. What's the point? In a few years a robot will be able to do that job instantly, better than I can. Not that I ever thought I'd make money translating poetry, it's a fun activity, but even a fun activity feels pointless when a robot can do it. I hate AI by the way, for other reasons besides this one.
Glad to hear you are going to post it, and please let me know when there's a link available. (You can easily find an email for me online.)
Regarding AI, I don't think your sense of pointlessness is reasonable. I don't want a soulless computer to translate Dante for me, I want YOUR translation. Made by a particular flesh and blood human being, who has tried to imagine seeing what Dante saw, and who has a different idea from everyone else about how to do it. As long as the AIs refrain from killing us all, they can't take that from you.
(There might be grounds for pessimism about when an AI-saturated market will allow such work to be profitable, but that doesn't need to affect a fun hobby, or the communion between you and me and the poet. It only gets ruined if you allow it to get ruined.)
I don't know if it would have been a good idea to wait a year and polish it some more: on the one hand, the review would have benefited from some editing. On the other hand, I read all of it with interest and enjoyment, long as it was, and sometimes it's important to get something done and get it over with so you are free to work on something else. That way you also don't risk losing enthusiasm and momentum and leaving the thing half-finished.
It made me want and not want to read the Divine Comedy at the same time - want, for obvious reasons, and not want because it would have to be a translation, and the review made me aware of all the interesting stuff that will be lost. Anyway, good job on it.
Scott didn't write the Egypt’s Golden Couple review, a statement that I can utter with some confidence because I'm the one who wrote it. I am going to take your incorrect guess as high praise, though, so thank you for that, and for the rest of your positive feedback.
(In the past some people have complained that some reviewers try to imitate Scott's style and it's a bit cringeworthy. In which case it is unclear whether being mistaken for Scott is a good thing or not. Maybe it would be bad if the similarity were on purpose, which fortunately it isn't.)
Darn it. You even had to link to a Philip Glass composition, obvious Scott giveaway, obvious. Are you sure you're not some kind of split personality? There must have been a reason he was defensive about the whole multicore condition thing.
I really liked the Pooh one as well, surprised it didn't make the finals. Making the Corps was a really good idea but I think the review itself was a bit long for me. Worth a look though.
I'm part way through reading the Bondage of the Will review. The main issue I'm having with it is that the quotes from the book are confusingly phrased. I would have expected there would be a better translation available.
I think the problem is that a lot of freely available translations of old books are terrible 19th century paraphrases, and I can't imagine having to translate Luther's German into modern German, never mind modern English.
Looking at the review, yep, it seems to be taking a 19th century translation:
" Private letter from Luther to Nicolas Armsdoff, translated by Henry Cole, 1823, and published as Appendix II to Cole’s translation of Bondage"
And Luther was, em, idiosyncratic, let's say. I'm still chortling over that sermon on marriage where he bashes the Catholics for saying "if you knock off your spouse so you can marry that hottie, you can't get married to that hottie, sorry" (as marriage is a sacrament and would be done in a church). According to Luther, while you might be under the penalty for murder and that's okay because you done broke the law and the commandments, there's no reason you *can't* marry your honey-boo that you went and murdered your dusty old spouse for, God sure doesn't mind, look at David and Bathsheba.
The obvious rejoinder that "Bessie and Mike are not Bathsheba and David, Marty; King David was a very special case and got the punishment of his sins" doesn't seem to have occurred to him. Or if it did, he doesn't care, he's hell-bent on imposing marriage on everyone (the Catholics are bad for saying people can become monks and nuns, *everyone* has the duty to get married and have kids! get to it now!)
What a guy 😁 I now better understand the British Golden Age of poisoning, as divorce was way more scandalous than knocking off your spouse so you could marry your sidepiece, oh those Protestants!
My first encounter with 19th century translations was Cary's 1814 translation of "The Divine Comedy", the first time I ever read it when I was 15.
It's not a bad translation at all, but the guy can't help himself in places; being an Anglo-Irish clergyman and the son of same, he is so instinctively "no popery, no saints, no Mariolatry" that in places he goes for "this is a metaphor or allegory".
So, for instance, in Canto II of the Inferno, he footnotes the lines about
"In high heaven a blessed dame
Besides, who mourns with such effectual grief
That hindrance, which I send thee to remove,
That God’s stern judgment to her will inclines.
To Lucia calling, her she thus bespake:
“Now doth thy faithful servant need thy aid
And I commend him to thee.”
So that "the blessed dame" and "Lucia" become:
"A blessed dame.] The divine mercy.
Lucia.] The enlightening grace of heaven.
Three maids.] The divine mercy, Lucia, and Beatrice."
Well, no. The blessed dame is the Virgin Mary, and Lucia is St. Lucy (Dante's patron saint, according to some) 😁 I'm not a scholar, but I'm a Catholic like Dante, so I'm fairly sure when he mentions a saint, he means the saint and not an allegory for "the enlightening grace of heaven".
Longfellow's version is - well, he tried (at least for me). You also get an awful lot of 18th century poetic versions of Classical works which are honestly so difficult to wade through that I think they nearly put me off poetry for life.
The opposite temptation is to do modern translations that are all stripped-down, or slangy, or which involve modern idioms that will of necessity age and become unintelligible to later generations.
EDIT: As for Luther, as I got older I made a decision to be nicer about the Reformers because, you know, ecumenics and we're all fellow-Christians. But now I am older still, and reading more of the originals, and now I go "No, I think I'll take a poke at Marty" 😁
Armchair psychoanalysis is a vice and not very helpful, but I think it's possible he was neurotic. So when he was satisfied with his own interpretation of something (because it soothed his raging anxiety about his personal salvation), he had no tolerance at all for disagreement. It was his way or the highway. Hence, he left religious orders and got married, to a former nun even, so that meant marriage was superior to taking vows.
He couldn't just leave it there, though. Marriage was no longer a sacrament, but any querying of who could marry or how or what, was taken as a direct attack on his correctness about "it was okay for us to nullify our vows and marry", so he went all the way into Crazytown. Sure, marry your affair partner after you've murdered your spouse to do so! Ignore the Pope, why he doesn't even think monks should get married!
Okay, so you can't really divorce your wife because you don't have cause except "I'm horny", um er oh yeah! Polygamy! Marry your mistress as well! Don't worry, that's okay because the Patriarchs in the Old Testament did it!
"Within a few weeks of his 1523 marriage to the unattractive and sickly Christine of Saxony, who was also alleged to be an immoderate drinker, Philip committed adultery; and as early as 1526 he began to consider the permissibility of bigamy.
...Philip was affected by Melanchthon's opinion concerning the case of Henry VIII, where the Reformer had proposed that the king's difficulty could be solved by his taking a second wife better than by his divorcing the first one.
...He accordingly proposed to marry the daughter of one of his sister's ladies-in-waiting, Margarethe von der Saale. While the landgrave had no scruples in this matter whatsoever, Margarethe was unwilling to take the step unless they had the approval of the theologians and the consent of the elector of Saxony, John Frederick I, and of Duke Maurice of Saxony. Philip easily gained his first wife's consent to the marriage. Bucer, who was strongly influenced by political arguments, was won over by the landgrave's threat to ally himself with the Emperor if he did not secure the consent of the theologians to the marriage, and the Wittenberg divines were worked upon by the plea of the prince's ethical necessity.
Thus the "secret advice of a confessor" was won from Luther and Melanchthon (on 10 December 1539), neither of them knowing that the bigamous wife had already been chosen. Bucer and Melanchthon were now summoned, without any reason given, to appear in Rotenburg an der Fulda, where, on 4 March 1540, Philip and Margarethe were united. The time was particularly inauspicious for any scandal affecting the Protestants, for the Emperor, who had rejected the Frankfort Respite, was about to invade Germany. A few weeks later, however, the whole matter was revealed by Philip's sister Elisabeth, and the scandal caused a painful reaction throughout Germany. Some of Philip's allies refused to serve under him, and Luther, under the plea that it was a matter of advice given in the confessional, refused to acknowledge his part in the marriage."
(Being fair to Luther, he didn't approve of either Philip or Henry in England's marital solutions, but I think this was more down to the lingering influence of Catholicism on the formation of his attitudes).
At some point you stop being outraged and just go "Whatta guy"
It's an interesting non-progressive case of 'lived experience' and 'standpoint theory'. I wouldn't be surprised if the best translators of Dante, or indeed medievalists, happened to be Catholic--after all, that brings you that much closer to him.
That is the funny thing; as he moves through the work, he does sort of give up. He's happiest when it's plain "Mary can be tied in to the Gospel narrative" as with the marriage at Cana parts. I think he was unconsciously influenced by the poem as he went through it, so by the end he has no problem with her as queen of heaven 😀
I also liked On the Bondage of the Will as well as Making The Corps. (They're the only two you mentioned that I read, to be clear, I'm not saying anything about the others).
"An interesting exercise in taking seriously the ideas very disconnected from our own."
Is the emphasis here "taking seriously theological ideas in a community of mostly atheists" or "taking seriously 16th century ideas that Christians now don't care about" or a third meaning?
"taking seriously 16th century ideas that Christians now don't care about"
I mean, some of us still care, as those are the kind of theological differences that do make a huge difference in practice. The only ones who "don't care about" them are those going for the "God is, like, love, man, and being nice is all that matters and all dogs go to heaven" version of religion.
You might as well be saying "All those AI researchers talking about consciousness and qualia and can machines have it and what is it, isn't that just dumb academic hairsplitting that nobody cares about in the real world?"
Let me add my vigorous approval of The Divine Comedy review here.
I wasn't saying that Christians don't care, I was asking if that was what Tasty_Y was saying.
I am definitely in favour of caring about philosophical ideas of all kinds. But there are indeed many here who think reasoning about something is not worth doing if you don't accept the premises.
>But there are indeed many here who think reasoning about something is not worth doing if you don't accept the premises.
I'm curious. What examples do you have in mind for cases where it is worth reasoning about the consequences of premises that one is reasonably sure are false?
I tend to think that there are only quite restricted cases where that is interesting.
E.g. sci-fi world building - what would the world be like if all fission neutrons were delayed, so nuclear power would work, but nuclear weapons wouldn't.
Or reductio ad absurdum. If sqrt(2) were rational, one could derive a contradiction, so it can't be rational.
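(For completeness, here is the standard one-paragraph version of that reductio, sketched in LaTeX notation; it's the textbook argument, nothing specific to this thread.)

Assume $\sqrt{2} = p/q$ with $p, q$ positive integers and the fraction in lowest terms.
Then $p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even; write $p = 2k$.
Substituting gives $4k^2 = 2q^2$, i.e. $q^2 = 2k^2$, so $q$ is even as well.
But then $p$ and $q$ share the factor $2$, contradicting "lowest terms", so $\sqrt{2}$ cannot be rational.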
Thanks for the recommendation, the Pooh review was excellent (I may check out some others).
Although there's a subtle thing there - I don't think Milne was trying to write a Homeric epic. I think he was trying to write a set of stories about stories, and the only stories you write stories about are the original epics.
Congrats to the finalists! Happy to see my personal fave The Pale King make the cut.
I wrote the review of The Pattern Seekers: How Autism Drives Human Invention. If anyone has feedback or contributions to the questions it raises, I've republished it on my blog and opened the comments section:
I read 14 of the reviews, and this was one of two I rated 10/10.
I'm glad it is posted online so I can link to it! Will be rereading more closely, but for now here are the notes I jotted down with my reasoning for the score: "really good structure, critical of book while highlighting what the useful takeaways are. engaging writing style. reviewer expresses their own biases clearly, and what would make them change their mind."
It's a real pity there are so many reviews (and/or so few finalist slots), because there are so many submitted reviews I would have liked to see comment discussions on.
Full disclosure: I submitted a review that didn't make it. I'm disappointed much less for the glory and prizes than for the lost opportunity to see lots of comments on it. And furthermore, none of the reviews I read and rated highly made it either, and I really wanted to see comments on some of those.
Right there with you. My review didn't make the cut, and my absolute favorite -- "Alphabetical Diaries" -- didn't either. If the writer of that review happens to be reading this, know that that was one of the most brilliant pieces I've read in a long time!
So, random one. What's with the trend against capitalisation? It's increasingly common on twitter and other platforms to write only in lower case, including 'i' instead of 'I', and after full stops. Like, most things autocorrect that, so you have to go out of your way to do it. What's going on?
I have done this for about 20 years. Part of it is laziness (e.g. you do capitalization differently in different languages, and I don't want to think about it at all). Part of it is that I sometimes think it looks nicer.
"I" and the first letter of a sentence will always be capitalized though .
I will, however, often capitalize proper nouns more frequently than my peers. Which I've been told was a common habit a few centuries ago. But my own habit predates my learning of this. Rather, it feels like a natural mode of emphasis that's softer than italics.
I don't visit twitter. But if tweeters are eschewing capitalization despite autocorrect, I'd imagine that it's because it signals informality.
This practice has deeper cultural roots than people are recalling here: there was a lot of experimentation in writing during the 20th century, and things like odd spellings, odd punctuation, or all lower case were part of it. And then there's e e cummings, who did it with his own name.
as someone who prefers to type without capital letters, and who uses a physical keyboard about 95% of the time i interact with others online, i can speak to this with some amount of direct insight. (usually on substack i force myself to capitalize, but i don't like it). for those of us who are really fast typists, it's quicker to eschew capitals, not just because of needing to involve the pinky of either hand in creating a letter, but because it introduces ERror that often necessitates laboriously backspacing through an entire word.
once you've tried typing this way (or even just trying reading much text typed this way) you notice that capitalization doesn't really do anything useful in english (when kerning is automatic, as online), and in fact the language looks better (to some eyes) without them.
now of course as a devotee of robin hanson, i'm aware that the real reasons anyone does anything are signaling, but the signal here might just partially be "i'm on a physical keyboard" (which due to the different kinds of typo possible leads to different outcomes that we may be self-conscious about).
going out of one's way using a touchscreen smart keyboard to eliminate punctuation would be a labor of love, perhaps done in imitation of e. e. cummings (without realizing that the all-lowercase choice was one a publishing-house editor made for him, and which he didn't especially love at first iirc).
rest of your replies are just-so stories from people who haven't tried it and/or mostly type english with their thumbs.
Like a rich person wearing a Balenciaga trash bag, a twitter journalist _obviously_ knows how to write proper english and is doing it intentionally. Because they don't want to be confused with the plebs.
"a twitter journalist _obviously_ knows how to write proper english"
Citation needed (you are talking about journalists, after all!) 😁
More seriously, the amount of stupid, basic, grammatical and spelling mistakes I see in online news articles, even the digital version of dead-tree media, makes me wince. Editing is a lost art, you just get the thing up fast and nobody proofreads.
That's how the Virtuous brag. They're so humble they don't capitalize the first person. They just had to act stupid to get your attention so they'd have the opportunity to remind you that they're special, and more important than you. If that doesn't work, they drop their pants.
This reminds me of a specific instance of that: failing to capitalise "God" when it's used as a proper noun.
I am absolutely amazed at how few people get this right. Unless I'm misinformed, this is a simple rule: if it's a common noun (with an article before it) you use lower-case g, if it's a name or title you use a capital. Correct usage: "in the Old Testament, God is described as a jealous god".
And yet I see it wrong over and over. It *seems* like atheists are much more likely to drop the capital, which raises the question: do some people honestly think the capitalisation of God depends on whether you believe in God? What a stupid rule of grammar that would be, and no other word works like that.
The only other option is that they're deliberately violating the rules of grammar to be obnoxious. But these are often people who are otherwise perfectly polite and intelligent-sounding. So maybe I'm just imagining that it's atheists who do it more? Or maybe no-one understands correct usage but Christians are capitalising God for reverence reasons and accidentally being grammatically correct as a result?
Oh, I could go on about this! I'm still old-fashioned enough that I capitalise the "M" in "Mass" (the religious service), even though usage now seems to be to just refer to it as "mass".
Well, while that may help strengthen the joke about electrons, I'm still going for "it's the Mass, not just a common noun". Ditto with "Bible" and "God" and so forth; some do it conspicuously because they're non-Christian or are atheists, but it's also style guide stuff for the media in part because "no establishment of any religion, if we referred to one religion's holy book or deity with a capital but didn't capitalise others, we'd be privileging Christianity and that is bad".
That, at least, is how I understand the argument: "If you capitalise "God" when referring to the Christian deity, but refer to pagan deities such as Roman gods with the small "g", then you are being preferential and biased towards one particular faith. And since we're all secular here nowadays, that's not the done thing".
It might be an over-correction against capitalising God's pronouns, which is a reverence thing that it's reasonable for atheists not to do. Also, it's just a bit odd that "god" is both a proper and a common noun like that, and an atheist is more likely to be referring to something rather generic while talking about a god, not having such a specific idea in mind of what that god would be, while also getting the phrasing of using it grammatically as a proper noun anyway from the cultural influence of Christianity. While I agree that it's incorrect, I think it's an understandable mistake for these reasons.
While I obviously can't speak for everyone, as an atheist I tend to use God when specifically referring to the Christian God, and god when speaking more generally. But I also don't refer to the Christian God specifically very often, as most of my arguments generalize.
So I would say that there is insufficient evidence to convince me to believe in god, that I find the concept of god to be incoherent, and that the American Evangelical movement would disgust God if He existed. (These are given as examples rather than opening arguments)
The first two are not proper nouns. I suppose I should say "gods" in that case, but due to the overwhelming cultural influence of Christianity it *sounds wrong* to say gods there. But those statements apply equally to any god, from Allah to Zeus. The last one is a comment on a specific religious group, so I use the proper noun form to refer to the god they claim to believe in, which is creatively named "God".
If I'm having a discussion about gods, it is normally about an ensemble of possible gods, with some properties (relevant to the discussion) specified, and others not. It is closer to a sample drawn arbitrarily from an ensemble than to a proper name of a single entity.
Honestly, this has come to annoy me so much that I find it extremely distracting whenever religion is discussed and was waiting for an opportunity to bring it up without interrupting a substantive discussion.
But yeah, "to convince me to believe in god" is surely as incorrect as "to convince me to believe in ghost". I think you should just say gods, which also makes the philosophical meaning much clearer.
Or "a god" which I think makes it clearer, if referring to the Christian-influenced milieu; you probably aren't inclined to believe in Hindu deities either, but you're not specifically talking about the Tridev so you don't feel the need to say "gods".
I think we need to remember that this is not at all a new trend. It is in a sense older than autocorrect.
Because on PC, it was - and still is - the low effort way to type.
People were typing like that in ICQ and Skype (and probably on twitter) from their PCs years before the iPhone and the following smartphone revolution, before the internet (and our lives at large) were ruined by the mobile-first approach.
Then everyone and their mom got a phone, but the association between casual internet speech and lack of capitalisation remained. And that's how we get the "manually decapitalize the first letter of every sentence you write" state of things.
Well, I've never claimed that everyone did it. I did not, for example; I took pride in not being lazy with my typing (forgive my teenage-at-the-time self for some silly signalling, if you can).
But it was fairly common. In my experience, it is still common on discord, and in other chat-like apps.
Less common otherwise, but twitter is a weird place, a cross between a chatroom and summoning demons by yelling into the abyss. No wonder that some chat behaviours are common there, and that demons show up occasionally.
Oh yes, if you know they're doing it on purpose (which they almost certainly are) it makes them come across as *more* bothered than not. But it apparently works on some people. Same reason why the same people get upset about full stops: it's not part of the "look how casual I am" aesthetic.
I wonder if it's that autocorrect increasingly works somewhat badly, so it either doesn't handle those cases or people turn it off? Or maybe people who aren't used to typing on computer keyboards get all lowercase because they're used to phones auto capitalizing for them.
Do people actually keep autocorrect on? It's one of the first things I turn off in a phone. When you misspell something, even quite badly, I find the keyboard is quite good at showing the intended word as one of the suggestions, which I then just click on. I sure wouldn't want it to change what I type on its own.
Have you noticed that Substack has direct messages? I recently got an automated anti-spam ban for sending one, so I want to make you all aware of how you can probably avoid this.
That's the only thing I did during that log-in. When I next logged in, Substack had auto-banned me for "Spam & Phishing". No email notification.
During the ban, my own never-used blog, and years of comments on other blogs like this, were all hidden.
Substack didn't respond to either the appeal I submitted through their form or the chatbot's promise to flag me to an employee. When I eventually emailed tos@substackinc.com, a Trust & Safety worker apologized for the auto-ban, unhid my blog, and restored my DM & comment abilities. They forgot to unhide my comments, but did so after I sent two more emails. All this took 4 days, not counting 2 days of being banned unaware.
For those who are, unlike me, actual writers: the sight of 'Publication Not Available' in place of your blog might scare away your paying subscribers. It would probably prevent new subscriptions, too. To avoid an auto-ban, I recommend not sending links in DMs.
That was 9 months ago, and I still can't make up my mind about his take. So I sent it to Zvi, imagining he might see it as a good subject for analysis. (Scott could write a good post, too, but this is a busy year for him, so I don't want to suggest he do something additional.)
Yeah, it was an interesting comment. It is frustrating. Occasionally, there _is_ a poison with a delayed but _substantial_ risk (e.g. https://en.wikipedia.org/wiki/Radium_Girls , though the first death in that case happened within 5 years). Occasionally, as with cigarettes, the effects are so large and there are enough cases of people quitting and basically recovering that one can be reasonably sure of the effects without an RCT, but for most things ... good luck!
Does anyone have an opinion on the safety of Tylenol for infants (note that I am saying infants, not pregnant women) with respect to neurodevelopment beyond what I can learn from reading https://parentdata.org/can-take-tylenol-while-pregnant/ ?
The infant in question is teething and would probably want Tylenol daily.
Also for teething pain, the things that worked really well and safely for my kid were:
1. Frozen waffles - let the baby chew cold, squishy things.
2. Anbesol or another one of the local dental anesthetics, preferably with novocaine rather than eugenol, which can sting a bit at first, though both would probably be okay.
I have a 10 month old with a congenital heart defect. He has had two open heart surgeries so far and spent about 3 months total in the hospital. He/We have almost constant contact with his medical team which are at a major US university with a small but top ranked children's hospital, including a Neurodevelopment team.
They regularly recommended Tylenol for things like teething with no concerns about development. He has been on Oxy and Clonidine and a number of other similar drugs, and this same team doesn't have any concerns about long-term neurodevelopment due to these meds.
Of course the disclaimer that this team is biased to confirm their recommendations and of course to listen to your pediatrician, etc. etc.
Out of curiosity, how did this question arise? Looking it up, I see one paper about it, and it strikes me as more along the lines of "MMR vaccines cause autism", as it goes on about "susceptible babies" (and what makes a baby susceptible?) and argues that paracetamol use in early stages is responsible for 90% of ASD cases.
" We conclude that the very early postpartum period poses the greatest risk for acetaminophen-induced ASD, and that nearly ubiquitous use of acetaminophen during early development could conceivably be responsible for the induction in the vast majority, perhaps 90% or more, of all cases of ASD. "
That makes me go "Hmmm", not to mention "And what the fudge caused autism before people were routinely taking paracetamol?"
EDIT: Ah fudge me, that graph! "Spike in ASD induction beginning with the cutting of the umbilical cord"???? Are they taking the piss or what, over there in the "Special Issue Neurodevelopmental Disorders in Pediatrics". If you can induce autism from within minutes of the child being delivered, I wouldn't worry about paracetamol for teething!
Honestly, I would not worry too much. Unless the infant is very young and you're dosing them up way too much for way too long, I don't think the small risk of "it maybe might do something" outweighs "the child is in pain and needs relief".
I'm pretty sure my mother wasn't giving me paracetamol as a child (Junior Disprin was the thing, I still remember the orange flavouring), and I was a baby during the time when gripe water still had alcohol in it. If I am ASD or on the spectrum, I come by it honestly - genetics! All home-grown, none of your new-fangled drug-induced fancy disorders!
It seems to be the medication of choice for parents here (usually in the form of Calpol https://www.calpol.ie/our-products). Health service advice on giving it to children:
I think there's more concern about aspirin and Reye's Syndrome for infants, but like all medicines, naturally, don't over-use it and stick to recommended dosages.
You could try alternating paracetamol and ibuprofen for managing teething pain (i.e. paracetamol one day, ibuprofen the other). More advice on managing teething:
"It's upsetting to see your baby in discomfort from teething. Comforting and playing with them will help distract them.
Tips for helping a teething baby
Try giving your baby something to chew on, such as a cool teething ring.
Massage your baby's sore gums with a sugar-free teething gel.
Use mild sugar-free pain relief if your baby wakes at night and is irritable.
Give them cold water to drink - this helps to keep babies hydrated and may also soothe their gums.
Give them healthy foods to chew on, such as pieces of carrot or apple, or breadsticks - only do this if they're 6 months or older.
Stay close to your baby when they're eating in case they choke.
Teething rings
Chewing on a teething ring can help soothe a baby’s gums as well as distract them from the pain.
Use teething rings that are big enough so your child will not choke on them. Keep a spare clean teething ring in the fridge.
Never tie a teething ring around a baby’s neck. This could strangle them.
Check the product instructions on teething ring hygiene and how long to cool the ring for.
Never put the ring in the freezer as the temperature could damage your baby’s gums.
You can also use a cold wet facecloth for a baby to chew on. Make sure the facecloth is clean.
Teething gels and pain relief
Sugar-free teething gels are available over the counter from the pharmacy. They contain a mild local anaesthetic which helps numb any pain. These are for babies older than 4 months.
If your baby is still in discomfort after using teething gels, consider giving them sugar-free paracetamol or ibuprofen medicine for babies. Do not use ibuprofen medicines if your baby is under the age of 3 months.
Contact your GP or pharmacist for information on the safe use of gels and pain relief."
It's an unhappy time, as the child is in pain, but we all had to go through it.
Tylenol (aka acetaminophen or paracetamol) has some rather toxic metabolites and I would avoid it in favor of ibuprofen, which works just as well but is much less toxic.
EDIT: Sovereigness points out below that long-term ibuprofen use increases the risk of gastrointestinal bleeding, so it's probably a worse choice than acetaminophen. I still think that for short-term use, ibuprofen is better than acetaminophen.
I think this is one of those things where what we mostly know is a matter of opinion, greatly influenced by advertising and highly legible anecdote. We do know that in adults, N-acetyl cysteine plus glycine is hepatoprotective against Tylenol poisoning. I also have a personal opinion that the use of aspirin in children and young infants needs to be revisited as a research topic, because it was largely based off a couple of notorious incidents involving Reye's Syndrome during the 1970s, and that syndrome is associated with fevers in children with or without medication and apparently has a higher incidence when you give any antipyretic. Though I don't wish anybody misery, following the general principle that symptomatic treatment interferes with immune and repair responses, I never treated my child with NSAIDs or Tylenol until his fever reached 103; our pediatrician said that 103.5 is his treatment line, and he was a 50-year-old head of family medicine at the University of Iowa's hospitals, clinics, and medical school. I honestly think that you want an experienced, highly qualified pediatrician and should just defer your judgment to them. It's not like we can't participate in our own medical decision making, but this is one of those that's really muddy. Voice your concerns and both sides of the argument and then go with their best judgment. Medical decisions involving your own child are really scary.
Not an infant, but I find ibuprofen does nothing for me. Most effective is aspirin, but my stomach doesn't like it. Paracetamol is next best, and ibuprofen is about as good as water for my aches and pains.
I feel weird disagreeing strongly with metacelsus but I have to: daily ibuprofen consumption is a really bad idea, especially for an infant. You can very plausibly cause an ulcer. For prolonged regular use, ibuprofen is actually quite bad.
Acetaminophen does have some metabolites that are potentially liver-toxic over a threshold, and I'm sure that threshold is lower in infants, and I strongly suspect that it is very much not well studied whether there are small-and-hard-to-detect developmental problems caused by them. So I'm not saying necessarily yes to Tylenol, but definitely no to ibuprofen.
Mostly associated with GI issues, particularly bleeding. The mechanism of action is well understood: ibuprofen impairs stomach mucus production. If a doctor recommends prolonged use of NSAIDs, particularly ibuprofen and some others, they will prescribe a proton pump inhibitor along with it.
I did only a very brief search for papers so I didn't include anything specific to infants but I'd tend to assume infant GI systems are more risky.
Finally, this is anecdata, but I did end up with an NSAID-associated GI bleed, and as one of the papers mentions, it's really rough; there are very few warning signs until you vasovagal on a toilet or something.
OK, after reviewing those, I agree that ibuprofen is worse than acetaminophen for long-term use. I still think it's better than acetaminophen for short-term use though. I have updated my top-level comment.
I was told by my primary care physician that she prescribes high-dose naproxen for acute conditions but does not recommend it for frequent use because of the risk of gastrointestinal bleeding.
Fact-checking this now (because I never really thought much about it), it looks like Mayo Clinic does at least list GI bleeding as a possible side effect, especially in people with certain risk factors. I lack the medical expertise to judge whether this risk is actually worth not regularly using naproxen.
I can't speak for everyone else, but ibuprofen feels stronger due to its wider permissible dosage range--if you take the low end of the recommended range, you can (if needed) double your daily dose and still be within the range printed on the label. That's great if you are having a worse day for some reason. Naproxen has no headroom, at least according to the label. Imagine going in for a session of physiotherapy, knowing you can take an additional ibuprofen afterwards if you need to. With naproxen, you know you're going to suffer any aggravation for the rest of the day.
Ibuprofen is fast-acting, short-lasting and relatively weak. These kinds of drugs tend to have fewer side effects than stronger/longer-lasting drugs in general, and this case is no exception. Naproxen is better for adults with chronic conditions.
I strongly recommend against long term use of naproxen — my own experience of ~1 month of daily use (at max recommended daily dosage) ended with tinnitus and a need to empty my full bladder roughly every two hours, presumably because my kidneys were panicking at something. Fortunately those symptoms resolved about a week after discontinuing it (and prompted me to properly address the underlying condition). A kindly elderly neighbor also ended up with a severe GI bleed after long term use of naproxen (she was already deaf so may not have noticed the tinnitus).
I would use topical anesthetics & chewing toys whenever possible. It's just intrinsically lower risk. But we also used systemic pain relief when necessary, and it depends a lot on the child & what they do well with. Even if there are some negative effects, there is probably a dose-dependent effect, so using it sparingly is, again, intrinsically safer. Keep in mind that the optimal amount of anything is rarely zero; my goal is usually to reduce the pain to a level where they can either sleep or do things. Btw, I also recommend ibuprofen if possible since its effect is more limited, but since paracetamol is generally the stronger drug for pain relief you should always have both at hand.
Most studies on the topic I looked up for our first one struggled with selection effects - for cross-family comparisons, the kind of parent that just gives kids lots of painkillers every time they scream a little probably has worse outcomes than average, and even for in-family comparisons some kids simply have more issues than others, which will show up both as more screaming (and thus more painkillers) and as more bad outcomes later in life. It's difficult to completely control for these. Based on the research, I'd rule out very strong negative effects, especially at minor doses, but everything else is probably plausible.
With our children, we were able to minimize use of pain relievers with chew toys and frozen soothers.
Mostly they liked to chew on my thumb. Constant pressure gave the most relief while they were cutting teeth. If you're willing to hold them for an hour massaging the gums, that could go a long way toward relief for your little one. Night time is a bit more challenging.
It should be noted that unlike ibuprofen, which reduces pain by reducing inflammation, Tylenol is a psychoactive drug. So I'd worry about the dependence/tolerance/withdrawal that comes from taking any drug daily.
Source: I'm a PhD biologist with a secondary research interest in psychoactive medication.
In Russia, paracetamol is considered safe from 3 months of age, with a daily dose of 60-120 mg from 3 months to a year and 120-240 mg from 1 year to 5 years.
I gave it to my son sometimes, though mostly used topical anesthetics, and didn't notice any detrimental effect - not sure how I would notice, though.
Thanks, that's interesting! Here in the US it's widely considered safe for infants too, there have just been a few recent journal articles saying that it might not be, which left me spooked.
The joys of first-time parenting, you are going to be spooked by a lot of things. Take comfort in the fact that you live in the USA, and if they sued over talcum powder, then if paracetamol did anything bad they'd be taking a court case over it.
Maybe somebody will eventually, but right now don't be too concerned that giving your child an occasional small dose of a painkiller formulated for infants is going to turn them into a NYT journalist.
"the chances of being in a lower development category increased with increasing periods of prenatal paracetamol use but not prenatal opioid use"
So if Pregnant Mrs. Scott had been taking Old Mother Machree's Knock'EmOut Tonic by the gallon while pregnant, no bother, but oh no she took 500mg of paracetamol? Call the CPS!
"For example, a startling twofold greater incidence of infantile autism in circumcised boys compared to non-circumcised boys can be readily explained by potentially negative impacts of paracetamol exposure during and following the circumcision procedure"
Look, I got nothing there. My jaw is dropped and I'm trying to scoop it off the floor. I can just see the anti-circumcision activists pouncing on this one: "circumcision causes autism!"
EDIT: Reminder of old-timey medicine for fussy children - morphine and alcohol.
"Charlotte N. Winslow, a pediatric nurse, originally created Mrs. Winslow’s Soothing Syrup as a cure-all for fussy babies. The syrup was first produced in 1849 by her son-in-law, Jeremiah Curtis, and his partner Benjamin Perkins, in Bangor, Maine. It was widely marketed in North America and the United Kingdom.
Mrs. Winslow’s Soothing Syrup was known as a patent medication (this term often refers to a product that was marketed in the United States during this time but typically did not prove efficacy or safety). The concoction was used for babies who were crying, teething, or had dysentery, for which the opioid effect of the syrup caused constipation, to treat the diarrhea.
The syrup contained morphine 65 mg per ounce, as well as alcohol. One teaspoonful had the morphine content equivalent to 20 drops of laudanum (opium tincture); and it was recommended that babies 6 months old receive no more than 2-3 drops of laudanum.
One teaspoonful contained enough morphine to kill the average child. Many babies went to sleep after taking the medicine and never woke up again, leading to the syrup's nickname: the baby killer.
Mrs. Winslow’s Soothing Syrup was hugely popular. In an 1868 court summary, Curtis reported selling more than 1.5 million bottles of the remedy annually"
I would be interested in knowing if you or anyone else comes to any conclusions on the topic. I regularly use it for my young kids and have always been under the impression, until now, that it is totally safe (apart from the liver toxicity stuff).
https://boards.4chan.org/g/thread/101111376/this-hit-piece-appeared-on-hacker-news-10-years
Just saw this on 4chan.
Y'know what's funny? That when I first read this hateful little essay (Dec 1st 2013) I actually thought it reminded me of Scott Alexander's writing style.
And that's strange, because we all know Scott is not hateful and doesn't hate Nick Szabo.
Any tips on optimising comment visibility? Seems very difficult to actually get a discussion going.
Be obviously wrong. Lots of comments that way.
That's called Cunningham's Law
And I'm genuinely not trying to be a dick, but the broader and (probably?) more controversial your comment, the likelier it is to get wide engagement.
I'd like to think the ACX commentariat is above rage-bait et al, but we're all still human.
Thanks for the replies. Are either of you paid subscribers? Is the dynamic different? On the hidden threads I mean.
I'm a paid subscriber. Hidden threads obviously have less engagement than open threads, but it seems to me that they have higher quality commentary. Like perhaps Scott's truly core audience is a little more invested?
Or that could be a false narrative of there being a value-added!
What @Rothwed said below is accurate - the day Scott posts an open thread is often the busiest, so I think if you're wanting a comment to have a lot of engagement, you're best off posting it that day.
I once posted a comment to an open thread like 40 minutes after Scott posted, so my comment was 8th or so chronologically? I didn't notice a meaningful difference than when I once posted like 18 hours later, truth be told. I suspect a lot of Scott's commenters are the type of people who at least skim all of the comments.
Can I ask why you're asking?
Why am I asking? I like the blog and it seems like a good place for discussion, but there's a large quantity of comments and many seem poor quality, not as in abusive, just not really engaging with the article or other people's comments. So I thought maybe the hidden discussions might be better.
Oh, I see. Well, the commenters here have a semi-wide variety of interests under Scott's umbrella. For example, I find the predictions market stuff deadly dull and so I skip over both the posts and comments threads about it. In fact, I tend to enjoy comments that are out of left-field; people asking for, like, advice on what breed of dog would best suit their lifestyle, or whatever.
Hidden threads aren't going to be more focused on Scott's posts or on each other's comments. They have about the same diversity of topics.
Hidden threads *do* often cover topics that people might prefer to talk about behind a paywall, though. I've started a couple of comments on hidden threads that (obviously) weren't totally private, but that I didn't want casually discovered by a coworker or something.
Okay thanks for the Intel. Obviously the Open Thread can be a bit more meandering and that's fine. I've found the Fake Traditions are Traditionanal discussion particularly frustrating so I'm looking for a better quality debate, but maybe there isn't one, or rather it doesn't grow on trees.
Generally the open threads are much more active right after Scott posts them. The weekend before a new thread drops is the least active.
There's probably an optimal time, you can post too early on an Open Thread and have your comment swamped by later discussions. But I guess there's several factors.
Hello Enthusiasts,
Join us for our 67th OC ACXLW meetup where we'll delve into the future of artificial intelligence as projected by Leopold Aschenbrenner and analyzed by Zvi Mowshowitz, and explore the intricate dynamics of social groups. This week's readings provide a rich foundation for our discussions, exploring themes of AI development, social influence, and ethical behavior.
Discussion Topics:
Situational Awareness in AI Development
Overview: This topic will focus on Leopold Aschenbrenner's analysis of the rapid advancements in AI, particularly in Silicon Valley, and the projected developments up to 2027. Zvi Mowshowitz provides a summary that highlights key trends and potential future scenarios in AI.
Summary of Key Points:
AGI Timeline: Aschenbrenner believes AGI (Artificial General Intelligence) is likely to be developed by 2027, based on current trends in compute power, algorithmic improvements, and AI capabilities.
Intelligence Explosion: Once AGI is achieved, there could be a rapid progression to superintelligence, possibly within a year, through automated AI research.
Economic and Military Implications: Superintelligent AI could provide decisive economic and military advantages to whoever develops it first.
US-China Competition: There's a strategic race between the US and China to develop AGI. While the US currently leads, China could catch up through chip manufacturing advances and algorithmic theft.
Security Concerns: Current AI labs lack adequate security measures to protect critical AI developments from theft or misuse.
Alignment Challenge: Ensuring superintelligent AI systems remain aligned with human values and goals is a crucial unsolved problem.
Government Involvement: Aschenbrenner predicts increased US government involvement in AI development, potentially leading to a national AGI project by 2027-2028.
Societal Impact: The development of AGI and superintelligence could lead to rapid and profound changes in society, economy, and global power structures.
Ethical and Safety Considerations: There are significant concerns about the potential risks of superintelligent AI, including existential risks to humanity.
Urgency: Aschenbrenner emphasizes the need for immediate action in addressing these challenges, as the timeline for AGI development may be shorter than many expect.
YouTube Audio: Situational Awareness - Summary by Zvi
Text Article: Quotes from Leopold Aschenbrenner
Social Dynamics: Geeks, MOPs, and Sociopaths in Subculture Evolution
Overview: This discussion will delve into the social dynamics as explained by David Chapman in "Geeks, MOPs, and Sociopaths in Subculture Evolution" and Leopold Aschenbrenner's quotes. We will explore how different types of individuals interact within social groups and subcultures.
TLDR: David Chapman's essay on "Geeks, MOPs, and Sociopaths" examines the roles of different types of individuals in subcultures and how these roles influence the evolution of these groups. Leopold Aschenbrenner's quotes further illuminate these dynamics.
Text Article: Geeks, MOPs, and Sociopaths in Subculture Evolution
Questions for Discussion:
How do the stages of subculture evolution described by Chapman resonate with subcultures you have experienced or observed?
What strategies could geeks employ to protect the integrity of their subculture without completely excluding mops?
How can subcultures recognize and mitigate the influence of sociopaths?
Considering the decline of traditional subcultures, what new forms of social and cultural organization might emerge?
We look forward to an engaging and thought-provoking discussion where your insights will contribute to a deeper understanding of these significant topics.
To follow up on the thread below about the value in labeling people as "on the spectrum", this Atlantic article is making the rounds as an alternative to Haidt's social media thesis about Why Young People in the Anglosphere Are So Depressed. https://www.theatlantic.com/ideas/archive/2024/06/mental-health-crisis-anglosphere-depressed/678724/
Key excerpt:
"In the past few years, at least three distinct phenomena have potentially contributed to the gloom of the Anglosphere. Let’s think of them as diagnostic inflation, prevalence inflation, and negativity inflation.
First, the diagnostics. In 2013, the psychiatrist Allen Frances offered a warning to his field. Frances had chaired the American Psychiatric Association during revisions of the fourth edition of psychiatry’s “bible,” the Diagnostic and Statistical Manual of Mental Disorders, commonly known as DSM-IV. The first edition of the DSM—published in 1952 in response to the needs of military personnel returning from World War II—listed about 100 mental disorders. By 2013, the number of disorders listed in the DSM had swelled to nearly 300. In his 2013 book, Saving Normal, Frances warned that “a looser definition of sickness” could make people worse off. “DSM-V opens up the possibility that millions and millions of people currently considered normal will be diagnosed as having a mental disorder,” he told the Canadian Medical Association Journal that year. The expansion of clinical vocabulary risked creating a new set of patients he called the “worried well”—people with normal human experiences who spent a lot of time worrying that they have a disorder. He and others called this phenomenon “diagnostic inflation”—the slapping-on of more (and more, and more) clinical labels to pathologize everyday sadness and stress.
Frances was mostly concerned that diagnostic inflation would lead to over-medicalization. He might have been right. By 2016, the share of people in the U.S. using antidepressants was more than twice as high as in Spain, France, or Germany, and nine times higher than in South Korea.
As our mental-health lexicon has expanded, U.S. content creators have recognized that anxiety is a hugely popular—or, at least, hugely attention-grabbing-topic for young people scrolling on their phones. As I reported in December, the TikTok hashtag #Trauma has more than 6 billion views. According to the podcast search engine Listen Notes, more than 5,500 podcasts have the word trauma in their title. In celebrity media, mental-health testimonials are so common that they’ve spawned a subgenre of summaries of celebrity mental-health testimonials, including “39 Celebrities Who Have Opened Up About Mental Health,” “What 22 Celebrities Have Said About Having Depression,” and “12 Times Famous Men Got Real About Mental Health.”
This takes us from diagnostic inflation to “prevalence inflation,” the term psychologists Lucy Foulkes and Jack L. Andrews use to describe the phenomenon of people developing apparent anxiety disorders from the sheer ubiquity of concern about anxiety disorders that swirl all around them. It might work something like this: People who keep hearing about new mental-health terminology—from their friends, from their family, from social-media influencers—start processing normal levels of anxiety as perilous signs of their own pathology. “If people are repeatedly told that mental health problems are common and that they might experience them … they might start to interpret any negative thoughts and feelings through this lens,” Foulkes and Andrews wrote. This can create a self-fulfilling spiral: More anxiety diagnoses lead to more hypervigilance among young people about their anxiety, which leads to more withdrawal from everyday activities, which creates actual anxiety and depression, which leads to more diagnoses, and so on."
I'm reminded of the short story by Machado de Assis in which a famous-but-quack psychiatrist announces: "I have discovered that insanity is not an island but an entire continent!" https://en.wikipedia.org/wiki/O_alienista
Just a general point-- the DSMs don't have a definition or concept of mental health, which can leave everyone being various degrees of sick.
I just learned that Micron Technology must delay building a factory because of endangered bats on the site: https://www.bloomberg.com/news/newsletters/2024-06-20/micron-has-to-resolve-a-bat-problem-before-building-syracuse-chip-fab
It occurs to me that the proper person to fix this bat-problem is, of course, Batman. They just have to get the bat-signal going.
Welp. If we're gonna sacrifice the bats, we might as well put them to good use vs China.
https://en.wikipedia.org/wiki/Bat_bomb
> Bat bombs were an experimental World War II weapon developed by the United States. The bomb consisted of a bomb-shaped casing with over a thousand compartments, each containing a hibernating Mexican free-tailed bat with a small, timed incendiary bomb attached.
> In his letter, Adams stated that the bat was the "lowest form of animal life", and that, until now, "reasons for its creation have remained unexplained". He went on to espouse that bats were created "by God to await this hour to play their part in the scheme of free human existence, and to frustrate any attempt of those who dare desecrate our way of life."
I recommend reading the entire page. It feels like a fever dream.
Incidentally, there's a parallel timeline where Japan surrendered for fear of the Adams Bomb.
Thank you for that horrifying insight into our war machine. Now I'm wondering if it's a good thing we used the atomic bomb instead.
I could use some help with a bit of terminology. This is for a Call of Cthulhu RPG scenario. In this setting, the heroes work for a sizeable organization that sponsors investigations into archaeology and the paranormal and dispatches them to the far corners of the world. Let's suppose there is a group of "directors" that controls the finances and decides which expeditions to fund. There is a group of "agents" who put together expeditions, pitch them to the board for funding, and then run the expeditions, typically without going into the field themselves. There are also "associates" who are junior staffers, working for the directors or agents, and "crew", the hired professionals who actually go on the expeditions.
I'm pretty happy with the titles of "directors" and "associates", but somehow the "agent" title doesn't seem quite right. It's a little too James Bond, for someone who is in the end a mid-level staffer. Can anyone think of a better term?
"overseer"? <looks at starcraft2>
also, I don't understand what value the agents are providing, beyond being dispatchers/idea-fairies. why are they just sitting behind a desk in HQ, instead of being out in the field?
Everybody hates middlemen, and wishes producers and consumers could just deal with one another directly. In simple matters that does happen. But as soon as things get complicated, opportunities for middlemen tend to open up.
In this case, I imagine the overseers (by whatever name) are kept around because they know a) what opportunities for expeditions actually exist, b) what sort of projects the directors are interested in funding, c) how to put together an application packet that is likely to be approved, d) who can be hired to go on expeditions, and e) how to exercise the level of supervision and budgetary control that looks proper to the directors. That's quite a bit of knowledge to squeeze between one set of ears.
The directors would love to deal directly with expedition leaders who mostly work in the field, returning only occasionally to report glorious success and request modest sums of money to continue. But in practice having some responsible person at HQ has proved indispensable, and the modern role of overseer was eventually formalized for this purpose.
So, the middleman is someone who's directly and uniquely responsible for the success of a single crew's expedition. But they don't actually lead from the front? Sounds dysfunctional to me. But if you insist on this org-chart and you're not willing to make up a new word, I suppose "bursar" or "handler" are maybe the closest I can think of, depending on what you want to emphasize.
Copying from the British Museum, what about "keepers"?
https://en.wikipedia.org/wiki/List_of_keepers_of_the_British_Museum
"The keepers are heads of the various departments of the British Museum. They are professional curators and related academics. There are currently nine departments plus the Portable Antiquities Scheme that have keepers."
Or perhaps "curators"? Or "conservators"? Given that they are meant to put together the missions and present the case for funding, but do not participate in the field, they investigate/research any objects brought back and present the final reports?
https://en.wikipedia.org/wiki/Curator
"A curator (from Latin: cura, meaning "to take care")[1] is a manager or overseer. When working with cultural organizations, a curator is typically a "collections curator" or an "exhibitions curator", and has multifaceted tasks dependent on the particular institution and its mission. The term "curator" may designate the head of any given division, not limited to museums.
...A "collections curator", a "museum curator", or a "keeper" of a cultural heritage institution (e.g., gallery, museum, library, or archive) is a content specialist charged with an institution's collections and involved with the interpretation of heritage material including historical artifacts. A collections curator's concern necessarily involves tangible objects of some sort—artwork, collectibles, historic items, or scientific collections.
...In France, the term "collections curator" is translated as conservateur. There are two kinds of conservateurs: heritage curators (conservateurs du patrimoine) with five specialities (archeology, archives, museums, historical monuments, natural science museums), and librarian curators (conservateurs des bibliothèques).
...In the United Kingdom, the term "curator" also applies to government employees who monitor the quality of contract archaeological work under Planning Policy Guidance 16: Archaeology and Planning (PPG 16) and manage the cultural resource of a region. In a museum setting, a curator in the United Kingdom may also be called a "keeper"."
Interesting. I guess "curator" would be particularly appropriate if these expeditions are being organized by an institution that is mostly known for being a museum, such as the Smithsonian or the British Museum.
Thanks for the suggestions, everyone. Having seen the options, I have to say that nothing is really jumping out at me as obviously right in light of conventional use. That suggests I'm best off using a plainly descriptive term, for clarity. I'll go with "Expedition Organizer."
Agent sounds right because if they're pitching projects to the board then they sound more like independent agencies than employees. Much like how the government gets a lot of work done by contracting NGOs. So "Contractor" could also work, or "Agency Head".
You could also go with "Leads", but that sounds a bit too modern.
Project managers?
"Project Manager" is the right term in most of industry and government. In more academic contexts, including government-run science, "Principal Investigator" is also used. The "Investigator" part might suggest actively participating in the field work, and some PIs do that, but others just arrange the logistics, read the reports, and tie it all together.
"Coordinators", I'd say. Or just boring old "Managers:".
In movie-industry terminology, these people are essentially "producers". That seems too specialized. The projects they lead aren't building or making anything.
One idea I'm toying with is that they are informally known as "bucks", since they're where the buck stops on any decision pertaining to an expedition. The board doesn't particularly care how an expedition succeeds or fails, only _whether_ it succeeds or fails, and of course how much it cost. So these "bucks" have quite a bit of authority, once the board has greenlit a project.
I guess what I would really like to lean on is the idea that the organization holds these people responsible for the success or failure of missions, and accordingly gives them a lot of latitude to make decisions. So, "Officer in Charge", maybe? That might work if the term "Officer" is also used for other senior people on an expedition. So, a large expedition might have a Supply Officer, a Science Officer, a Security Officer, and so on. Casually, these folks might be known as OCs.
Possibly it's worth noting that "buck", when applied to a person, has distinctly racist overtones. But then, Lovecraft, so maybe that's not completely a negative?
...what? I've only ever heard it as "young buck", akin to "feisty kid".
Yeah, there's also that usage; even "kid" comes from that sort of thing.
But there's the usage that Deiseach mentioned, where it was a deliberately animalistic reference, with overtones of domestication and breeding, like maybe calling a woman a "brood-mare"?
I'm not saying you *shouldn't* use this term - it depends on the group and any audience. But ... better to go into it with eyes open, you know?
Wait until you hear that identifying a piece of gardening equipment and/or a particular suit of cards is also racist!
"And your father was a rake!"
"Buck" was a term used to refer to male African-American slaves. From "The Adventures of Huckleberry Finn" (content warning for another term considered offensive):
"They swarmed up towards Sherburn’s house, a-whooping and raging like Injuns, and everything had to clear the way or get run over and tromped to mush, and it was awful to see. Children was heeling it ahead of the mob, screaming and trying to get out of the way; and every window along the road was full of women’s heads, and there was nigger boys in every tree, and bucks and wenches looking over every fence; and as soon as the mob would get nearly to them they would break and skaddle back out of reach. Lots of the women and girls was crying and taking on, scared most to death."
From the hip:
Representative (i.e. they represent the board in the matter of this expedition)
Commissioner (i.e. they have been commissioned by the board or are heading a committee commissioned by the board for a particular expedition)
Controller? Manager? Administrator? It sounds like your agents are more like literary agents, so you could tack something appropriate onto the front of "agent" - I doubt anybody thinks that literary agents are like James Bond but for books (although that is a fun concept!).
Inspired by Melvin's post below, my question is: Was Marx a Communist?
Let me qualify that. If Karl Marx had seen 20th Century Communism in practice, would he have still been a Communist by the year 2000? I think no, he would have seen its practical failings and horrors and realized that they were too big to overcome. He would have accepted that he was wrong.
The subject reminds me of Nietzsche explaining that he chose Zarathustra as his hero: since Zarathustra was the first to make a clear distinction between good and evil, he would also be the first to recognize that distinction as a mistake.
By the same token, I think Marx would have been quick to recognize his mistake had he witnessed the Soviet Union.
He was a Marxist but not a Leninist and certainly not a Maoist. I doubt he would have become any of them either. But I also doubt he would have recanted his beliefs. I suspect he would have instead spent his time writing angry tracts about how his way was right and theirs was wrong. Probably from exile.
I am not an expert on this, but I think Marx basically believed that the predestined historical timeline goes like this: first, capitalism creates a lot of wealth; then, as both the wealth and the social inequality reach extreme values, the system collapses (because the poor are unable to buy the products, and the rich are unable to make any more profit if no one buys what their factories produce); and then a revolution replaces it with socialism, which keeps the high production but distributes it equally, or something like that. So basically, capitalism must first reach its maximum; only then can it properly collapse into socialism. Also, he expected this to happen in Germany, because I guess at that moment it was the most developed capitalist country.
Communists in Russia were quite stressed about this, because on one hand they believed in the prophecy, and on the other hand their own actions contradicted it (by making the revolution in Russia, and skipping capitalism). Until the last moment of the revolution they still expected that Germany would somehow get its own socialist revolution five minutes before Russia did, so the prophecy would still come true. When that didn't happen, that's probably when Lenin started developing Marxism-Leninism as an alternative to Marxism, and afterwards Stalin decided that "actually, we need to build some kind of 'capitalism, but micromanaged by the communist party' in Russia, because you really can't build a welfare state when starting from poverty".
So basically, it would be enough for Marx to say "I told you; the real socialism would only come when the time is ready, and it will come in Germany, not in Russia; your experiment was premature and that's why it failed". Thus he could still keep the belief in Marxism.
No, that's Keynes talking about the Great Depression. Low wages mean low consumption which leads to industries failing which leads to low wages. Which means Keynes argued for government intervention to break the vicious cycle.
Marx believed that what would happen is that collectives of workers would outproduce capitalist owned factories. This would cause the capitalist owners to pass laws or use violence to suppress these more productive worker cooperatives which would necessitate the use of violence to defend them. The resulting war would be ultimately won because the factories pumping out guns and armor and all that on the socialist side would be more productive. The period just before the war is late capitalism.
You're mixing up Marxism and liberal economics. Stalin (and all Marxists) opposed the welfare state, for example.
I do not think Marx would have likely recognized his mistake. Lenin didn't realize his mistake, Stalin never realized his, nor did Mao, and I don't see Marx as an exemplar of virtue who would be more likely to repent and change his mind than the average man. Given that many Marxists still exist despite being aware of all the historical horrors, I imagine Marx would adapt and continue to believe that history was bound to unfold itself into a glorious communist future.
I agree. Marx would have said they shamelessly used his ideas for promoting the welfare of the common people to create a new veneer for the powers-that-be to use to rule. He wouldn't recognize the theoretical fundamental flaws now shown empirically.
At the time that Lenin died, the revolution he had fomented for all his life had succeeded beyond his wildest dreams, the Bolsheviks were firmly in control of Russia and, thanks to the NEP, it looked like the crisis situation was stabilizing. At the time Stalin died, the Soviet Union was probably at or near the absolute zenith of its power: unquestionably the world's second superpower, still widely predicted (even among non-communists) to close the gap with the US, having just won an apocalyptic war against fascism, and possessing the sort of technology a teenage Stalin at the seminary could probably not even have imagined. Why would one expect them to realize their mistakes at this point?
When Stalin died, the USSR was so soaked in blood that the next leader did the unthinkable and said what everyone knew: that Stalin was an evil man, and the Soviet Union had gone astray. When Lenin died he had the blood of millions of his countrymen on his hands, and had failed to usher in the age of freedom and utopia that he had spent his years writing about and claiming was soon to come. The mistake they should have realized was not that Marxism makes for bad economic policy, or bad military policy, but that Marxism in practice does not usher in the utopia Marx promised and instead brings mass murder, terror, and slavery. Lenin and Stalin were content to be the instruments through which that evil flowed in order to preserve their own power, and I feel that Marx would have been much the same. He does not strike me as a person who was particularly committed to what is true and what is good over being right.
Marx didn't rule a country, so admitting a mistake would have been less costly for him. (Then Stalin would probably send someone to kill him.)
How much did Marx believe that a centrally controlled economy would be superior? How hard would it have been for him to stop believing it if he did believe it?
Made this site where you can find and discuss research papers!
https://www.papertalk.xyz
Any feedback is appreciated :)
the link doesn't open for me:
"We can’t connect to the server at www.papertalk.xyz."
Weird, clicking on the link works for me. Do you have Javascript disabled on your browser?
Ok, it works on Safari. Yeah, I probably have JavaScript disabled on my PC browser.
Was Hitler a fascist?
If you'd asked Hitler "are you a fascist?" then would he have said "yep, totally", or would he have said "No, Fascism is the ideology of Mussolini's National Fascist Party, I'm a National Socialist, which is different in the following ways..."
Is the whole idea of "Fascism" outside the Italian context just an example of outgroup homogeneity bias?
The second one. There was an international fascist movement which Hitler considered himself separate from. Doctrinaire followers of fascism in Nazi Germany were persecuted if they didn't change their beliefs. There was a broader recognition they were both far right ideologies with some similarities but they did not consider themselves the same or even compatible.
There was, however, an international fascist movement, and you would have found people self-identifying as fascists (or at least saying "no, we're X, a movement inspired by fascism") all over the world. Nazism never really had much success outside Germany (except when it was backed by the German army). But fascism was able to spread and compete as an ideology. There are still political parties and countries that are significantly influenced by fascism and fascist policies, most notably large parts of the Arab world, but also things like Peronism in Argentina. It was also distinct from simple right-wing dictatorships.
My understanding is that this would have been dependent on *when* you would ask Hitler this. If you had asked him earlier on, he would have probably replied that fascism is, at least, an inspiration for National Socialism, though National Socialism is still its own, German thing. Later on, he started growing increasingly disappointed in Fascism and saw it as beholden to the traditional right.
>Is the whole idea of "Fascism" outside the Italian context just an example of outgroup homogeneity bias?
You have to form a map of *some sort*, the match to the territory notwithstanding. During the interwar era, and to some degree even afterwards, there was a great number of extreme nationalist movements with certain commonalities, which even took power in some countries, and "fascism" is probably the most useful name for this tendency, since it was an open point of reference for many of them.
The Nazis were directly inspired by Italian fascists, but given Germany became the greater partner, and had a much broader base and greater degree of social control (Italian fascists having kinda haphazardly come to power, versus Nazis getting almost half the votes), they kinda played down that connection.
The Nazis did have a sense that they were doing the same sort of thing as Mussolini's Fascists, especially prior to 1933. A lot of it was tactics and aesthetics, like the Brown Shirts being heavily inspired by the Black Shirts, and the Beer Hall Putsch being the first stage in a plan that was modeled after the 1922 March on Rome that brought Mussolini to power.
How much Hitler personally thought the Nazis and the Italian Fascists were doing the same sort of thing depended on the relative fortunes of the respective parties and, once both were in power, how well the governments were getting along that week.
In terms of ideology and policy, the biggest difference was probably that the Italian Fascists saw nationality as the crucial fault line dividing humanity, while the Nazis saw race in that role. Mussolini was pretty dismissive of race as something significant up until the mid-to-late-30s when Italy was allied with Germany and shaking out to be the junior partner in that alliance. In particular, antisemitism was central to Nazi ideology, and the Nazis started writing antisemitic policy into Germany's laws very soon after taking power. Mussolini frequently flip-flopped on antisemitism in the 20s and 30s and Italian Fascists didn't start enacting antisemitic laws until 1938, 16 years after taking power.
> Mussolini frequently flip-flopped on antisemitism in the 20s and 30s and Italian Fascists didn't start enacting antisemitic laws until 1938, 16 years after taking power.
And, at least according to Arendt, the Italians were extremely resistant to the Holocaust, and the genocide only really got started in Italy once the US invaded, and the Fascist government collapsed, and Germany occupied the north.
On The Rest Is History podcast they had an interesting discussion about What Is Fascism? They both agreed that fascism must have an element of both futurism and fashion. Futurism in the sense of flashy new technology like jet aeroplanes and such and the notion the future will be shiny and bright. Fashion in the sense of well-choreographed parades with stylish uniforms. I think there may have also been some stuff about race and nationalism.
So by that definition, the Nazis would seem pretty fascist. Maybe the Proud Boys too on the fashion front, though perhaps not on the futurism front. The Rationalists definitely fail on the fashion front.
ADDED: Trump isn't enough of a futurist to be a fascist, but I think he could put on quite the parade if only they would let him. I suppose he does like his shiny jet aeroplane, but that makes him more of a retro-fascist than a neo-fascist.
I haven't listened to the podcast, but this strikes me as remarkably unserious. Fascism had very particular ideas about the relationship between the individual and the state, and was very consciously a reaction both to socialism (out of which it grew), Marxism, and, of course, liberalism. If an analysis does not address those aspects, then it is pointless. Here are some of those ideas, from the horse's mouth: https://sjsu.edu/faculty/wooda/2B-HUM/Readings/The-Doctrine-of-Fascism.pdf
>In the Fascist conception of history, man is man only by virtue of the spiritual process to which he contributes as a member of the family, the social group, the nation, and in function of history to which all nations bring their contribution. Hence the great value of tradition in records, in language, in customs, in the rules of social life. Outside history man is a nonentity. Fascism is therefore opposed to all individualistic abstractions based on eighteenth century materialism; and it is opposed to all Jacobinistic utopias and innovations.
>Anti-individualistic, the Fascist conception of life stresses the importance of the State and accepts the individual only in so far as his interests coincide with those of the State, which stands for the conscience and the universal will of man as a historic entity. It is opposed to classical liberalism which arose as a reaction to absolutism and exhausted its historical function when the State became the expression of the conscience and will of the people. Liberalism denied the State in the name of the individual; Fascism reasserts the rights of the State as expressing the real essence of the individual. And if liberty is to be the attribute of living men and not of abstract dummies invented by individualistic liberalism, then Fascism stands for liberty, and for the only liberty worth having, the liberty of the State and of the individual within the State. The Fascist conception of the State is all embracing; outside of it no human or spiritual values can exist, much less have value. Thus understood, Fascism is totalitarian, and the Fascist State — a synthesis and a unit inclusive of all values — interprets, develops, and potentiates the whole life of a people.
>No individuals or groups (political parties, cultural associations, economic unions, social classes) outside the State. Fascism is therefore opposed to Socialism to which unity within the State (which amalgamates classes into a single economic and ethical reality) is unknown, and which sees in history nothing but the class struggle.
I've seen a theory that fascism is a mood rather than an ideology. Academics try to define fascism, but it doesn't fit well into logical categories.
Is Obama a fascist? All that "Hope" stuff and very fashionable.
I think he was missing some of the pieces. He wasn't into remaking daily life or purifying society.
There's been a fair amount about Chronic traumatic encephalopathy (CTE)-- serious brain damage from repeated blows to the head, even impacts that don't cause obvious damage. I'm wondering about group effects. What happens on the group level for demographics where a lot of impacts are common, perhaps especially for young people?
What's the value, to anyone, in diagnosing people as being on the autistic spectrum? In the old days a high-functioning person of that nature would get described as eccentric, which has a charming ring to it, whereas autistic has all the charm of "retard". (Not many kids bully each other by saying: "Tom, you're eccentric! Stay away from me you fucking eccentric!")
And it all seems tautological. Someone has a variety of personality traits which can be grouped as Asperger's or on the autistic spectrum, but there's no treatment for it, and pathologizing these traits only stigmatizes them. What's the argument in favor of labeling these people the modern equivalent of "retard" instead of not labeling them at all?
If we could diagnose all such people, I would see value in knowing that certain traits are much less rare than people currently think.
> In the old days a high-functioning person of that nature would get described as eccentric, which has a charming ring to it, whereas autistic has all the charm of "retard". (Not many kids bully each other by saying: "Tom, you're eccentric! Stay away from me you fucking eccentric!")
And what of those who were even a tad less high-functioning? They would get described as "freaks" or "weird" or at best "anti-social" and generally would be looked upon with suspicion. Surely it is better for all involved for people to understand that a given person is acting oddly because they can't help it, and that their odd behavior is not a symptom of malevolence, nor a predictor of dangerous behavior.
You need enough money to be called eccentric. But don't step over the invisible line by being too weird.
I don't even understand what "autistic" is supposed to mean. Can someone give me an actual definition? The definitions seem to be something like "has some combination of this long list of traits" and... I'm sorry, what do these traits have in common? What's the actual singular definition that unites them all? If there is none, why are they lumped together into the same thing? Is there a common cause? Or do they just correlate and no one's quite sure why?
It's all so vague and makes me think psychology is in a very messy and primitive state, kind of like medicine in the 19th century.
Several people have said I seem autistic. Maybe I am, and I'm also commenting here (seriously, what's the link between commenting on this blog and autism? I swear every second commenter identifies with it). But I have no clue what it actually means.
Autism is four things...
https://www.reddit.com/r/slatestarcodex/comments/1dh8gkz/4_autism_subtypes_identified_in_machine_learning/
Those four things seem described as, roughly speaking:
* smart, repetitive, mildly awkward
* smart, awkward, mildly repetitive
* repetitive, awkward, highly verbal
* repetitive, awkward, nonverbal
I am not dismissing this classification, just complaining that even if true, it is difficult to remember. Do we have some easy-to-remember archetype for each group?
Maybe I should add that I did find "Autism qua Predictive Processing error" [0] uniquely descriptive of whatever is going on with me. So maybe the label isn't 100% noise. But to reiterate, the dominant theory at the time I was "diagnosed" was Simon Baron-Cohen's theory [1] of emotional clairvoyance. And the "services" I was offered by the industry, which were supposed to help me learn how to navigate social interactions, were not especially relevant to me.
[0] https://slatestarcodex.com/2016/09/12/its-bayes-all-the-way-up/
[1] who, perhaps kabbalistically, is the cousin of Sacha Baron Cohen, the guy who starred in the movie I linked to in my other comment.
(edited: found the correct link)
> Maybe I should add that I did find "Autism qua Predictive Processing error" [0] as uniquely descriptive of whatever is going on with me.
Sensory oversensitivity is about the only symptom I haven't got.
I did kinda balk at the "intense world" theory. But also, my skin is sensitive enough to kill probably... 70% of mosquitos before they bite me. Which is pretty unusual, based on my observations of others. So idk.
I have the fOrMal dIAgnOsiS as high-functioning and/or asperger's. Three points:
A) The "test" i was given by a professional shrink consists of trying to have a conversation about normie topics and seeing how well i could engage. But normie smalltalk bores me to tears, so i "failed" the test. Though I'm pretty damn good at carrying a conversation if I want to (and I often do, out of social politeness). So if I'd known in advance that "emotional clairvoyance" was effectively what they were testing for, I'd have aced it easily.
B) It's a "CoNsTelLaTioN oF SymPtomS". AKA syndromic. AKA nobody has the slightest clue what causes it. Except that it's vaguely genetic. (But that didn't stop the shrinks from trying to give me the runaround. They are SO CONVINCED that "constellation of symptoms" is somehow a meaningful turn of phrase.)
C) Temple Grandin has opined that the essence of autism is a lack of abstraction. Which is very much the opposite of whatever I have. So not only are people confused about the causes, but they're confused about the symptoms as well. Which leaves... not a whole lot to work with. So yes, it's practically tautological.
From this, I've concluded that the label contains negative information. Because not only is it completely useless, but it also gives the illusion of having discovered something valuable and authoritative. Maybe the label is useful for others. But as far as I'm concerned, I might as well have been diagnosed as "HIV Aladeen" [0].
[0] https://www.youtube.com/watch?v=NYJ2w82WifU
> But normie smalltalk bores me to tears,
Are you sure that itself is something different from ASD? Why shouldn't it manifest as a set of interests and disinterests?
On one hand, I don't mind a label that's low-status/weird/etc. And I certainly have *something* that deserves a label. (I.e. no, I wouldn't go as far as carateca in describing myself as a shy normie.)
But on the other hand, I've met people with the SEVERE version of autism. Lumping "unusual interests" into the same spectrum as "can't tie shoes; speech impediment; literally a savant; etc." feels roughly equivalent to "Scooby-Doo fans are just slightly more well-adjusted Ted Bundys".
And whatever I have, certainly isn't a lack of emotional perception.
Failing the conversational test is part of it, though. If you can 'ace the test' once you know what the questions are in advance, then that is your coping strategy. The unfiltered you is the one who is bored to tears by 'normie smalltalk', so you only freely speak with enthusiasm and unprompted and from genuine interest on your special topics of interest.
That's high-functioning/Asperger's there, babes. I don't have a formal diagnosis but I have a strong suspicion I'm somewhere around Asperger's, and with a paternal family line of that as well (there are family stories going back generations of the 'odd' members which were not simply "shy normies" as per caracteca). I too am very "normie small talk bores me to tears". I don't know how your adolescence went, but mine was "why am I not interested in the things my peers are interested in, why am I the only one who cares about these odd weird topics?"
Psychological theories struggle to deal with the better functioning people. It's easier when you've got the kids who (to take a real life example from a previous job) have to wear a motorcycle helmet pretty much 24/7 as otherwise they will beat their heads against the wall so hard and so continuously they cause injury to themselves. Everyone looks at that and agrees "that's not normal".
But "does well at school, doesn't share peer interests, is socially awkward, has some quirks of behaviour"? That's a lot harder. And I was a lot weirder but repressed the hell out of the stranger behaviour/beliefs around other people because I knew "this is weird and possibly crazy". So looking from the outside, it's a lot easier to dismiss all that as "spectrum does not exist".
> The unfiltered you is the one who is bored to tears by 'normie smalltalk', so you only freely speak with enthusiasm and unprompted and from genuine interest on your special topics of interest.
It's a bit ironic that an aspie who tries to talk like normies, and even does a mediocre job but gets bored, is considered to be bad at social skills... while the normie who doesn't even try to talk like aspies (except mockingly), and boasts about how he hates math, is considered to be the empathic and social superman. Seriously?
That's a bit like living in a world where most native German speakers also fluently speak English, but only a rare native English speaker speaks even a little German... and concluding that the native English speakers are *better at languages*, because you only compare everyone on how perfect their English is.
I think you may have hit the nail on the head. What makes aspies appear antisocial is only that they happen to be *outnumbered* by people whose personalities and communication styles exist on a different spectrum. If the majority of the population had stereotypical aspie traits, then those who didn't would be the antisocial ones because they would have a harder time communicating with the majority of people.
To be less generous, one could say that the world is full of dumb people who are interested in dross and if you happen to be one of the rare smart ones interested in more complex, abstract things, well, you've got this syndrome that makes you uninterested in dross and we're going to come up with a label for it so the dumb people can use it as an epithet in the lunchroom.
No, come on. That's offensive. "Everyone not like me is just a dummy interested in dross" is both a terrible over-simplification and ignoring that it's more than "just being shy" but that people 'on the spectrum' (and yes, horrible term but we're stuck with it) do have genuine, real problems with ordinary social interaction and the tasks of adulthood.
It's very tempting to go "well I don't need them, they're just too stupid to appreciate the finer things, unlike me" but that's sour grapes. I've had that temptation, I've done that looking down my nose, but at this hour of my life, I have to recognise: I am not able to do some ordinary things and that's *my* lack, not society's. It's not just a label, any more than "wheelchair user" is just a label the dumb people came up with so they could use it as an epithet in the lunchroom.
> Everyone not like me is just a dummy
> people 'on the spectrum' (...) do have genuine, real problems with ordinary social interaction and the tasks of adulthood.
I dunno why these need be mutually-exclusive. And as far as I'm concerned, it's the normies who are mentally ill by needing to share inane gossip.
It's not the modern equivalent of "retard" in a lot of settings, and this forum is one of them. I don't know whether Scott considers himself to be on the autism spectrum, but it would not surprise me if he did. And many of the people here who describe themselves as somewhat autistic are very smart people with good jobs, usually in tech, who are introverted and eccentric, have always felt different from other people, and were seen as oddballs by their peers growing up. I don't know whether thinking of these people as being at the smart, high-functioning end of the autistic spectrum is accurate, but I do think the cluster of characteristics they have is a thing, a syndrome. I'm a psychologist, by the way.
Right. I know that and I respect this community tremendously. And one thing I respect about it is that it is so open to questions that might run against the grain of conventional wisdom. It strikes me that there are trade-offs in pathologizing a cluster of characteristics. Others in this thread have given good reasons for why the label and diagnosis have positive value. I can imagine that there is also a negative side to it, as there are for most things.
I agree with you there. I do think some people have latched on to "I'm on the spectrum" as a way of handling the dissonance of "I'm the weirdo in my peer group".
But that is not to say that "Therefore the autism spectrum/Asperger's Syndrome" does not exist. As I said, our service deals with children with additional needs, which includes autism spectrum, and it's not just "needs some coaxing to interact with peers". It's having meltdowns out of thin air (to outside view), repetitive behaviours, sensory processing issues, hyperfocus on special interests, a whole raft of things all going together.
Well, syndrome literally means "run together," which captures the way I think of it. I have never seen the term used in a context where the speaker was not referring to something that they saw as pathology, though. I'm fine with calling it a personality type. On the other hand, many people who have grown up as that personality type do not experience themselves as just one of many perfectly fine personality types. They feel as though something is *wrong,* something is getting in the way of them doing all kinds of things. And I do think the list of things they have a lot of trouble doing is longer than the lists that go with other styles of living and thinking that we regard as personality types.
The problem is that to get any kind of help, it needs to be medicalized. You need the Official Diagnosis. Otherwise you get the "just deal with it/pull yourself up by your bootstraps" kind of reaction from everyone which does not, in fact, help.
You feel like you're drowning and instead of being thrown a lifebuoy, you get "just learn to swim! if you could swim, you'd be fine! everyone else can swim!"
Well, what I think is a reasonable approach if they want help with some of the things they have trouble with is just to work on the thing itself. What's the bottleneck? One bottleneck I often see (I'm a psychologist) is that people with this profile want an unusual amount of certainty about what's going to happen next, and that gets in their way in situations where certainty is not possible.

For instance, somebody I see got a job offer, but then delayed quite a long time before accepting it, because there were various things he did not know about what the job would be like. He kept trying to figure out what those things would be like by obsessively analyzing all the data he had, but that data simply did not hold clear answers to all of his questions. It helped him to see this pattern, to give some thought to which things in life fall into the highly predictable category and which do not, and to realize that new jobs were in the latter category. And we talked about steps he could take if the new job had various bad characteristics -- ways to change the job, ways to exit if it was unfixable. He's quite smart, and would have been perfectly able to carry out this analysis on his own. But he was so anxious about the job, and so stuck in the impossible task of figuring out exactly what the job would be like, that it did not occur to him to ask himself the questions I asked him.

This approach is neutral as regards whether he has a mental illness or a wiring problem or a syndrome. And that seems like a reasonable approach to me, given that if we could somehow know that he had a certain wiring problem, or that he perfectly met the criteria for high-functioning autism, that info would have no utility whatever, since we have no treatment that is specific to wiring problems or autism.
I genuinely think the problem is the smartness. You're dumb (no disparagement to my dumb fellow-humans) and can't deal with things? Okay, people make allowances for that because you're dumb. Maybe you're so dumb you need special help, okay fine.
BUT. You're academically capable? This needn't even mean "straight A student", it's "does okay at school, is not falling behind, is not failing tests, is not causing trouble in class".
Then it's "well what's wrong with you? you just need to try harder! you don't have a problem, you're just being difficult!" The idea being that if you're not stupid, then your difficulties with everyday functioning that ordinary people can do just fine are down to lack of will power, or grit, or tenacity, or just thinking yourself too good for the rest of us.
Looking back on an incident which puzzled me at the time, when I was a kid my mother brought me to the doctor. This wasn't our usual family doctor, and I wasn't sick. Doctor did a physical exam, said I was fine.
I think she was trying to find out if there was something 'wrong' with me, because she had noticed I wasn't developing 'normally' in some respects (one of them was that she was worried I might have hearing problems, as I often didn't respond when called). She didn't have the vocabulary or concepts for things like autism or developmental delays, so she was reliant on the doctor picking something up.
But of course, since she never raised the question of "does my child have developmental problems?", the doctor just looked for physical problems: no, her ears are fine, her health is fine, she's okay.
So that was it. Nothing went further. I couldn't understand at the time why I was going to the doctor, because I wasn't sick. But now I understand what she was trying to grope towards, and I think she was right. But it's decades too late now to do anything about it, and the shape of my life has been formed.
Would a diagnosis have helped me? I have no idea; I have no way of knowing what would have happened. But it sure as hell would have helped explain me to myself.
It can make therapy, life skills coaching, and self-improvement more productive. There are common issues that people on the spectrum often struggle with that aren't often major problems for neurotypical people or people with non-Autism-Spectrum issues. Having an autism diagnosis focuses the hypothesis space for "areas that might need work" more onto those particular issues and also suggests that where those issues are indeed problems, there may be a standard playbook for autistic people to work on those particular issues.
This hypothesis-focusing effect is particularly valuable when the issue is superficially similar to a more-common-in-the-general-population issue with different root causes and treatments. For example, an autism meltdown can look a lot like either a panic attack (an anxiety disorder symptom) or an emotional flashback (a PTSD symptom). There are significant differences between their presentations, but those differences are easy to miss unless you're looking specifically for them.
OK. That makes sense.
I think I have some wrong notion that "kids these days" are just getting scrutinized, medicalized and labeled to death by the time they are in First Grade, whether they have serious problems or not. Perhaps I'm extrapolating too much from the helicopter-parenting phenomenon to imagine that kids are no longer allowed the space to be a little bit weird without getting a full diagnosis for their little bit of weirdness.
That is an entirely separate problem, Hank. Again, like yourself I'm only going off online comments, but there is a definite sense in which the push to get better results means "you need good grades to get into a good school to go to a good university to get a good job". So if accommodations such as more time or other assistance are available for kids with needs, then for the pushy, anxious, 'tiger mom' types of parents who can afford it - get your kid diagnosed as ADHD (for the Ritalin/Adderall to improve focus) and any other disorder which means "Little Johnny needs extra time on the tests and other help", so little Johnny can keep on grinding out those high grades and eventually get a high-status, high-paying job.
So there are a few problems:
(1) Medicalisation of everything. Little Johnny can't sit still in class and pay attention? Now that's ADHD and he needs to be medicated.
(2) Gaming the system, as above, which extends into university (related to that, I think, is the attitude that cheating is fine, everyone does it, only fools actually do the reading and the work and write their own essays and study for exams, take the easy path to guaranteed grades)
(3) Self-diagnosis by the terminally online. Some people really do have problems, but instead of facing up to "I am a pain in the backside and need to work on that", they prefer to grab for "In fact, I am a Type Z multipolar disorder person with Complex Childhood PTSD due to narcissistic parents and anxiety disorder and see attached list of my disabilities, so that is why I should be permitted to be a massive pain in the backside and anyone who objects is abusing a survivor of childhood abuse and neglect by toxic family and environment".
Knowing yourself... seems like a good thing?
Making your diagnosis known to others is possibly a bad idea. However, if your symptoms are strong, they probably already know or at least suspect.
>Knowing yourself... seems like a good thing?
Sure, but Asperger's and similar syndromes are social constructs. The question is whether they are useful social constructs. The constructs can come with information, but they can also come with baggage in the form of negative stereotypes.
I suppose I've observed the negative stereotyping around autism/Asperger's more than I've observed the gains people have made from the label. That doesn't mean what I've observed has much bearing on the reality.
An example of the negative stereotyping. Razib Khan yesterday tweets: https://twitter.com/razibkhan/status/1803121717629555045
"let me be explicit here: a lot of rationalists are so psychologically abnormal they are incapable of being conventionally racist. that also means they are incapable of being antiracist.
to be against something you have to be able to conceive of it. they aren't wired that way"
Someone responds: "Can you clarify psychologically abnormal here a bit?" Razib then tweets a link to "Asperger Syndrome" on Wikipedia. Someone responds to that with "ha ha".
I don't believe Razib was trying to make any sort of mean-spirited joke; he was simply being direct about what he meant. But the response "ha ha" demonstrates that it's a subject of mockery and derision, which was inescapable once the terms "autistic" and "Asperger's" became popularized to mean "socially retarded".
The whole point of having the DSM is that psych diagnoses aren’t supposed to be social constructs; in practice some are more subjective than others.
Self-diagnosis of autism isn’t very accurate, based on this study:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8992806/
Re: the distinction somebody made about being "eccentric" and that not being a term of abuse, one cynical description I have seen is that "if you are rich, then behaving like this is described as 'eccentric' and it is tolerated; if you are poor, behaving like this is described as 'criminal' or 'crazy' and you get in trouble".
You do have a very good point about the negative stereotypes, but that's unfortunately just human nature: people will point at, laugh at, and mock the 'retards' or 'morons' or 'special needs' or 'Aspies', no matter what new term is invented to replace the old one which has now become a term of reproach.
Someone who meets the criteria for now-diagnosis of Thing B is going to behave in a particular way, have thought processes along a certain path, be lacking in certain areas. And that behaviour and thinking and lack is going to be noticed, whether or not Johnny has a formal diagnosis. If it's mild enough that he can generally fake it, then he'll be accepted (albeit with some opinions that Johnny is 'odd' or 'eccentric' even if deemed harmless). If it's severe enough, Johnny won't be accepted and will get the brunt of social disapproval and mockery.
Our service isn't simply "so Johnny's parents feel he might be falling behind a little", though parental input is very important. Before Johnny can even get in the door, he needs the full assessment:
"Our service is supported by the HSE multi-disciplinary team made up of the following:
Psychologists
Physiotherapists
Speech and Language Therapists
Occupational Therapists
Clinical Nurse Specialist
Social Worker
We have a key worker system in operation where a child is assigned their own key worker. All individual therapies are implemented on a daily basis as directed by the CDNT."
There are therapies and curriculum plans for children based on their particular requirements, there are Learning Plans and Treatment Plans and goals to be met and progress to be documented, all alongside the routine activities of pre-school:
"Our Facilities include:
• Classrooms
• Outdoor play area
• Playroom
• Indoor swing room
• Speech and Language room
• A Floor Time room
• Meetings rooms
• Multi-Sensory Room
• Body Awareness room
• Nurse's Office
• Sensory garden
The rooms are designed in such a way as to meet the developing needs of each individual child. The children are guided through a range of educational and play activities at their own pace. The children’s key workers implement their specific therapies and individual programmes. Our team creates a positive and secure environment where children feel confident in exploring their surroundings.
Keyworker System
Staff work with the children on a one-to-one basis. We have a keyworker system in place, we will inform when the keyworkers change. The keyworker will carry out all relevant therapies given to them by the Children’s Disability Network Team. These therapies are also carried out at home by the Parent/Guardian.
The key worker has many responsibilities. Their role involves developing a relationship with the child and their family. Each individual child will have different needs; therefore, the key worker must be adaptable and attentive to the child. This will help ensure the child’s needs are met.
The role of a key worker includes:
• Meet and greet the child and their family upon arrival.
• Familiarise yourself with the child’s care plan, folders and files to have as much information as possible about the child to be able to meet their needs.
• Be aware of the child’s day to day needs, read log books and get a verbal report of how the child is on arrival.
• Document all information and observations about the child in their task records and key worker notes.
• Monitor and record progress and be able to feedback information to therapists, classroom nurse/Coordinator etc.
• Follow the child’s lead.
• Encourage open communication.
• Watch and observe the child.
• Become familiar with the child and begin to identify the child’s interests and strengths.
• Engage with the parents and guardians to build positive relationships and to exchange information about the child.
• To provide parents with opportunities to contribute and share their knowledge and insights.
• Pass on information to parents/guardians when they are collecting their children.
• Fill out log books every day.
• Be aware of parents/guardian’s sensitivity around their child attending an early years specialist service, what this means, supporting them, providing resources and being a listening ear.
• Liaise with the HSE Children’s Disability Network Team re individual programmes. Seek advice from same.
• Provide a handover to the new keyworker when they are assigned"
All this is for children in the age range 2-5 years and I can tell you that early intervention is crucial. We've had kids go from being non-verbal and unresponsive to being able to converse and communicate, and to walk unassisted when the prognosis was that this would never be possible.
From the outside, the common online perception of autism etc. is "buncha nerds that are too geeky and ugly to get a girl and only like stupid comic books and SF TV movies and shows". The more flattering to the self-image version is "really smart nerds that like STEM and will get high-paying jobs because we're the ones working on the world of the future for all you normies, so what if we have special interests?".
That's a long way from the reality. Not everyone on the spectrum is a STEM savant going to go into tech.
I dunno, this tweet seems to me to mostly have that "sounds good on Twitter" thing going for it. Think about it. How would that work -- being incapable of being racist? What is the thing some people can't do that prevents them from being racist? And how does that also prevent them from being anti-racist? Walk me through it.
> How would that work -- being incapable of being racist?
You lack the instinct that makes you automatically hate everyone who is obviously different from you. (Different in a direction that makes it socially acceptable to hate them. Because normies are always sensitive to what is or isn't socially allowed.)
Normies gonna enforce norms. Autists are oblivious about them.
When a normie sees a person of a different color, their first instinct is to check whether it is socially approved to hate people of a different color. If yes, the hate arises instinctively. This is what people traditionally called "racism", the combination of the social consensus that it is okay to hate people based on their race, and the corresponding instinctive reaction of most normies.
And because it is not realistic to make normies stop hating different people, the solution is basically to redirect their instinctive hate somewhere else. You teach them that the social consensus is that you should not hate people with a different color, but instead... dunno, people with a different opinion. And then, normies start picking their targets based on the new criteria, and racism is no more... in theory.
In practice, it is difficult to convince people about a new social norm. Normies are very good at figuring out the true social norms, that's their #1 obsession, so they quickly notice how people behave differently at school and outside school, etc. So the actual behavior will be more like... hating people with a different color only in the socially approved situations, in socially approved ways. (For example, a white racist might learn that high-status black people have to be respected, but it is okay to hate all *other* black people as long as you never admit that you hate them *because* they are black; if someone asks you, you always have to point at some other trait of the specific black person, such as lack of college education, or not being sufficiently woke.)
Meanwhile, you ask the autist why he doesn't hate people with different color, and he just gives you the usual stupid look and asks "why should I?". Aaaah, so frustrating, this lack of social skills!
So, if you define "anti-racist" as not being a racist, then most autists would qualify. -- Unless they found an online article saying that people of certain race are inferior, in which case they would happily report to you their findings, and would be ready to debate it academically. They would probably be happy to repeat the same words even in presence of the people of given race. They would be just as willing to debate this topic academically with them! They probably couldn't stop talking about this topic in their presence! So... yeah, I guess that's "racism", too. But, you know, it would probably remain at the verbal and purely theoretical level, which can be very obnoxious, but is probably preferable to e.g. bullying or lynching, which is more of a normie behavior.
The normie way of "anti-racism" is finding someone who is a more socially acceptable target of hate than people with different color. Yeah, autists don't understand this either. Also, among the woke people it is socially acceptable to express hate against white people; by white and black people alike. Autists also have a problem understanding why this is supposed to be a good thing. Autists are more of non-racists rather than anti-racists. To be a proper anti-racist, you need to be a proper racist in the first place, and then redirect that instinct.
Does this make sense? There is a certain poetical license here, but probably less than most people would be comfortable believing.
So "Autistic" is described in the DSM as a disability, because it makes it harder for children to attain educational performance, become socialized into the peer group, function productively as members of society, and be happy. None of that implies that it is *impossible* for people on the spectrum to achieve these things, merely that they have additional barriers and challenges to overcome as a result of their neurological condition.
One reason to make this classification is to help children who are experiencing these difficulties understand what is actually happening, so that they do not blame themselves and develop a harmful self-image. Knowledge can be power, and just being able to assign an objective cause to their disability can help them cope emotionally.
Another reason to diagnose autism is to make a range of support services available to such children. With appropriate and research-tested interventions, they can find ways to adjust to and even take advantage of the unique aspects of their condition and achieve greater success in life. This is beneficial to both the child and society, because the child is more easily able to make informed life choices that build upon their strengths as individuals, and they can become more productive members of the community by obtaining gainful employment, avoiding conflict, and forming relationships with other members of society, both on the spectrum and off it.
Finally, the diagnosis helps inform research into this condition, which may one day result in, if not a "cure" (a term that arguably stigmatizes the condition), then at least new options that autistic people can take advantage of if they choose. There is some evidence, for example, that deep brain stimulation has a positive effect on some of the symptoms of autism.
BTW, autism is a spectrum of behaviors and thinking patterns in exactly the same way that other disabilities are. It's simply a way of saying that the specific services required to help any given autistic child are going to be somewhat different. A cafeteria approach is a better model to follow than "one intervention for all."
It's primarily for status-seeking females. In the new religion, victimhood is glorified, so if I'm just a "normal" white girl then I'm one of the bad ones. I don't want to be one of the bad ones, I want to be one of the elite "oppressed", so I become an autistic they/them.
...wait, what does the gender have to do with anything? What about all the men?
Yes, it's saying that we're weird, but it's weird *in a particular way*. And that helps people know what to expect from others, and from themselves. Everyone can stop wondering "what on earth is wrong with this person, why can't they just act normally" and ascribing non-existent motivations, and can instead go "ah, ok, that's what's wrong, so which areas are they bad in".
I don't think this explains it, because the one thing all of these kinds of people are constantly insisting on is that autism is a "spectrum" that "presents differently" in everyone, especially females, who they claim present especially differently than males. Curiously, this is used as all the more reason that females are oppressed, because they aren't being "given" their "rightful" autism diagnoses! (as if they wouldn't also be saying females are oppressed if they were the ones being primarily diagnosed with autism - "you're pathologizing normal female behavior!")
You may ask - why would these females *want* to be diagnosed with this syndrome - and there you find the answer to the original question. In many circles, status is conferred upon zim who can claim the most oppression.
Furthermore, this framing defeats the entire concept of a syndrome, which is "a group of symptoms which consistently occur together." If men and boys consistently have symptoms that appear together, but females and other new/fake autistics have totally different symptoms and present differently (or don't even "present" visibly at all), then they don't have the same syndrome *by definition*.
There may be a social gulf between the girls branding themselves autistic on social media and the men (and women) who simply want to be understood better by their real-life peers.
For sure, there is, but we don't have a term we can use to describe this anymore, because autism has become corrupted as described. And saying "real autism" makes you worse than Hitler.
Listen, I've attended two high-social-justice-focus colleges and I know where you're coming from with this, but flat out you are overestimating both the gendered (sexed?) aspect and how mad people will be. I have personally described my own situation as "technically diagnosed autistic but I'm not sure how real or valuable that is, bc I'm pretty high functioning compared to, y'know, people who need permanent care" and the Queer Trans Neurodiverse Left Wing Undergrads all went "yeah, that makes sense". A lot of self-dx types are female but a lot aren't. Some will be upset if you say self-dx is unreliable, but most will understand where you're coming from. If you show up and say "you Status-Seeking Females are FAKING for CLOUT" then yes, people will be mad at you. People online will be mad at you for just about anything, and I wouldn't trust it to be representative of the wider world of female trans-id/autistic-id people.
I can see how that helps in situations where other people are generous and thoughtful, but there's at least as many people in every level of society who are ungenerous and use a known weakness in others as something to exploit. The label is also a liability in dating markets. Few women who aren't themselves on the spectrum would be interested in dating a man who is described as being "on the spectrum". Compare this to the old days when a man who was described as "eccentric" might sound enchantingly mysterious. And an "eccentric millionaire" or "eccentric genius" got triple bonus points.
I don't think people, especially the "ungenerous" people you mention, ever described anyone as "eccentric." It's a term usually reserved for biographers and reporters trying to avoid the actual terms schoolyard bullies might use...weirdo, geek, nerd, etc. (Before some of those terms became cooler). At least "autistic" has the air of a diagnosed disability to it, so kids know they're not supposed to tease about it, just like they're not supposed to tease a blind or deaf kid.
I agree that, at very high-functioning levels, the question of whether to label or not can be challenging. But most of the people I know on the spectrum were relieved to get a label that said, effectively, "this is why you feel different, you're not alone, here are some common ways people like you can alleviate these problems."
"Eccentric" can also mean weird and rich, or at last reasonably well off, Perhaps this is an older meaning.
>At least "autistic" has the air of a diagnosed disability to it
From Wikipedia:
>Retard was previously used as a medical term. The verb "to retard" means 'to delay or hold back', and so "retard" became known as a medical term in the late 19th and early 20th centuries to describe children with intellectual disabilities, or retarded mental development.
>At least "autistic" has the air of a diagnosed disability to it, so kids know they're not supposed to tease about it, just like they're not supposed to tease a blind or deaf kid.
Yet according to the teenagers I know, "autistic" is the main insult middle and high-school kids lob at each other these days. Cruel kids often do exactly what they are told not to do and telling them specifically not to pick on autistic kids clues them in on the fact that the word "autistic" can be wielded as a weapon. No, kids never used the word "eccentric" because it was a term that was applied mostly to successful adults who were unusual, but people did use the term in conversation last century. You were much more likely to hear women use it (to describe men) than men.
>But most of the people I know on the spectrum were relieved to get a label that said, effectively, "this is why you feel different, you're not alone, here are some common ways people like you can alleviate these problems."
That is a strong argument for using the label.
I wonder how many people hate getting the label. Maybe it's a generational thing.
Someone who is a high-functioning autist/asperger/whatever and sufficiently smart can learn the social skills if they spend the time and effort. But the result will still be quite different from normies.
For example, they could become someone who can be the center of attention at a large party... and when you meet them after the party, they will say "oh, I hated every second of it, I only did it because my normie friends invited me, but now I want to spend the rest of the day alone, probably taking a hike in the mountains. Actually, you could come with me if you are interested in hearing about the latest quantum physics interpretation I found in a certain scientific paper, but please no more social small talk or I will start screaming."
Autism also encompasses repetitive patterns of behavior - stuff like stimming, restricted interests, inflexibility with routine, and hyperreactivity to certain stimuli. You literally cannot get an autism diagnosis without that.
You are telescoping two things: the idea that Asperger's and autism are the same thing, which emanated from medical professionals; and a socially constructed idea of "being on the spectrum".
"This will be better for the formerly autistic too as they won't have this fictional illness as a crutch, preventing them from taking action to improve their own lives."
It's not fictional, though; that is the question we're debating. Again, taking the service where I work as an example: we cater for children from 2-5 years.
Part of this is Transitions. This is a big deal. The act of going from one room into a different room can trigger a meltdown. So you have to teach the child that this is a normal activity, it happens everywhere, and this is how you deal with it. That includes telling the child beforehand that you are going to leave room A, then, when going into room B, telling them that now you are going into room B; encouraging them to say things out loud like "door" to get across the idea of 'we're opening the door, we're going through the door; seeing this door means we will be opening this door and going through it into a different room'. Getting them over this hump means they will no longer be having meltdowns at home about "now you have to move from this room to your bedroom", or in public. It helps them learn and cope. 'Normal' children don't need this level of intervention.
These are children "on the spectrum". It's not "slightly awkward nerd" because by the time you get to that age, the damage may have already been done. I do accept your point about gentrification, but that does not mean that there isn't a gradient from "real but mild version" to "real and totally disabling version". The kids can be "basically fine" up until you hit the one thing that sets them off. That's what the onlookers don't see; you can't tell from "John is perfectly fine, just a bit shy and awkward" adult who has learned how to function in everyday life that "but if this particular trigger sets him off, John is very much not perfectly fine".
As with many things in life, the extremes are very different and easily distinguishable, but this does not yet mean you can have a simple binary classification, because there is actually a continuum without an obvious place to draw a line.
Worse, part of the difference between someone who gets a diagnosis and someone who does not is whether their symptoms affect their daily life badly enough for them to seek help. This is a combination of what their symptoms are, how capable they are of managing them, and what their daily life involves. You cannot, therefore, predict whether someone will want or get a diagnosis by considering the symptoms alone. The continuum is multidimensional and a partition of it that takes only one dimension into account will not lead to sensible results.
Meanwhile, if help/support is on offer, grifters will try to access it.
As with many other problems that have similar properties, we are left with a choice: we can have more gatekeeping and risk failing to help some people who need help, or we can have less gatekeeping and give some help to grifters who do not need it.
My preference, as always, is for solutions that accept some level of grift in order to help most of the people that need help over solutions that aim to reduce the level of fraud to zero; losing some money to help more people is IMO better than losing some people to save more money.
This is a fully general stance. cf. https://www.bitsaboutmoney.com/archive/optimal-amount-of-fraud/
"People "on the spectrum" need to recognize that they're just shy normies"
While I agree that the TikTokkers and Instagrammers and the rest of the bunch grabbing onto "I'm ackshully neurodivergent, you bigot!" as an excuse for why they should be allowed to behave like assholes and get away with it is an example of using mental problems as a trend, being "on the spectrum" is not just being a "shy normie".
I work in a service where children with additional needs are educated, and I now have a next-door neighbour with two kids who probably have additional needs as well, and I can tell you it's more than a simple matter of "Little Johnny is just shy, he just needs to mix with kids his own age and come out of his shell". And that's not even the most severe cases; it really is "on the spectrum and needs early intervention to develop coping strategies and be integrated into society".
OK but if it's a spectrum then by definition it's not binary. Only the label is binary.
Or is that even correct? Does the spectrum peter out at some discrete point?
One reason I'm so curious about this is that, as an old, weird, eccentric guy, I can imagine that I'm "on the spectrum" and would have been labeled that when younger but feel thankful that I wasn't because I would have hated to have been labeled like that. But maybe that's a generational thing. Young people seem to love identity labels whereas my generation mostly loathed them when younger.
But if I did imagine myself as "on the spectrum" --- and I ponder it --- it causes me to reinterpret much of my past differently. I see things through a different lens if I adopt that label. That's, again, because however spectrumy autism may be, the label is binary. For instance, I feel as if I have less free will if I imagine my life with the label than without it. (I don't believe in free will theoretically, but it *feels* like I have it. With the label, it *feels less like I have it*. I prefer to feel like I have free will, however delusional that may be.)
I feel like it shouldn't matter so much whether I might have such a label or not. Anyway, I liked it better before the label existed, when the world was less medicalized and it was easier to feel normal even if you were weird.
"Slightly awkward nerd" is not the same thing as "on the spectrum". I think you're going off the pop culture notion of what the autism spectrum is, and while I think certainly some awkward people have grabbed onto "oh I'm not a weirdo, I'm autistic!" as being something to help salve their sense of self-worth, unless you have a diagnosis, that's not it.
There are certainly things like social anxiety disorder etc. but those are not autism. The curse of self-diagnosis is what we're all complaining about.
I do think that folding in Asperger's was not a good idea as it may be a somewhat different disorder, but autism does exist on a range from mild to extremely severe.
This is not people who can see just fine pretending to be blind, or even near-sighted people pretending to be blind, this is people who are legally blind even if they have some vision.
Bumping the Emperor of All Maladies review, which was one of the reviews which inspired me to pick up the book itself (the others were Family That Couldn't Sleep and Piranesi.) The EoAM review was poetically written to the point where I thought much of it must have been excerpts, but it wasn't.
It was very good. Thank you for recommending it.
I was very surprised the Emperor of All Maladies review was not a finalist! I read through at least half of those reviews and it was the best by far. I suspected that it might be Scott entering his own contest, as he did last year. Really a shame it's not on the final list.
If someone reading this hasn't read that review, go read it. It's really good.
Thank you! (This was mine.) I may have gotten dinged for writing the thing I wanted to write, which had only a vague resemblance to a “book review” (I thought this was the year for it!)
There are just a few lines from the book itself, but I also drew heavily on “The Waste Land” by TS Eliot which has themes of growth and death that felt very appropriate for a book about cancer.
Well I have to say that your review was gripping, moved me emotionally, and I learned a lot! You should definitely enter next year, I can only attribute you not being a finalist to bad luck.
A few days ago, the Guardian published a hit piece on how a key rationalist organization is evil because they platform "scientific racism" advocates.
See the discussion on themotte Culture War thread for more details:
https://www.themotte.org/post/1048/culture-war-roundup-for-the-week/222312?context=8#context
Does anyone know more about the specific financial claims regarding Lighthaven and SBF’s money? That was the interesting part of the article.
https://manifold.markets/JuliusSimonelli/will-lighthaven-shut-down-in-2024-d#s9unhaffnub
(A bit of an angry rant...)
Another thread with someone going "race-differences may be true but we should pretend they're not" and other people saying "no truth and intellectual curiosity matter" and then the first says "and how would explain this to a kind black friend?" and so on (these are paraphrases).
God I hate these arguments because they're so dishonest. Huge motte-and-baileys on both sides. No one should be bringing up race at all, *ever*, in any important public or political context. If there's a technical finding that some IQ-test variable correlates slightly with race, that can be noted on page 67 of your report, alongside the correlations with height and hair colour and astrological sign. But why the hell would you signal-boost that particular finding unless you're racist?
On the other hand, you permanently lose the right to object to "scientific racism" forever the moment you say some shit like "it looks like blacks are underrepresented in your profession...". You shouldn't even notice that, you shouldn't be seeing race! The moment you do, YOU have defected against the norms of respectability and rationality and individual freedom, and you deserve whatever offensive statements and findings are thrown at you. Don't want to deal with racism? Don't...be...fucking...racist.
The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race. Other than that, you are treated entirely, 100% as an individual. No discrimination, no affirmative action, just your own efforts and talents: that's the be all and end all of what you're entitled to. That's it. The end.
I don't believe there are any race differences because I don't believe race is a real thing. And shame on everybody who thinks it is.
Also this:
"Hanson once wrote that a woman cheating on a man is as bad as (or worse than) a man raping a woman provided he does it in a "gentle, silent" way. No idea if he still endorses that opinion but it's a majorly sus thing to say"
The fuck? You are either saying cheating is fine actually, or you are unable to comprehend what "cheating is as bad as rape" means. It means: you know how horrible rape is? Cheating is equally horrible. It has a completely different meaning to "rape is no worse than cheating" (=you know how cheating is often seen as kind of trivial? So should rape be.) even though their naive translation into symbolic logic is the same. Misunderstanding this is a majorly autistic thing to say.
(Unless of course this person means that Hanson says a woman cheating is horrible but a man cheating isn't. That *would* be misogynistic (or more accurately sexist), but I have no idea why they didn't *say that* if that's what they meant.)
> The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race.
Does it have to be some*one* else? What if we notice that society is systemically discriminating on account of race, even when no single individual is making racially biased decisions?
> No one should be bringing up race at all, *ever*, in any important public or political context.
If we were living in a perfectly meritocratic society, where nobody ever noticed that there are more Ashkenazi than Black Nobel Prize laureates, that might be a good idea. (If we ever gain the power to tune intelligence with CRISPR, it would be sufficient to look at the genomes of particular families which have been high-IQ for multiple generations, no need to bring race into it.)
However, the current US approach is not color blindness. Instead, companies are supposed to meticulously track the racial categories of their employees, and any disparate outcomes are treated either as proof of evilness or at least as bugs to be fixed by putting your hand on the scales. (See TracingWoodgrains on that FAA scandal [0].)
While some HBD advocates were likely just people who tried to mold their biases into a self-consistent belief system, I think that some of them only bring it up to argue against affirmative action.
If the US Olympic marathon runners are chosen by merit, then there is little reason for anyone to point out that East Africans might have an advantage in that sport (with much still depending on in-ethnicity fluctuations). If someone insists that the US should proportionally select their runner team from the "races" of their population, then arguing against that idea might involve pointing out these genetic differences. I fault the ones who tried to establish policy tied to "race", because they brought it up first.
> (Unless of course this person means that Hanson says a woman cheating is horrible but a man cheating isn't.[...])
I would guess that Hanson might be concerned with reproductively relevant cheating, where someone ends up raising a kid who is not genetically related to them. Under that model, a partner of either sex having oral sex with a third party would not be a big deal, nor would giving away genetic material without any requirement of parental investment (e.g. secretly donating sperm/eggs). For IVF, the situation would be entirely symmetric: if either partner decides that their child would have more of an edge if they swapped their partner's genetic material for that of another party without telling them about it, that would be evil. For in-vivo conception, there is a big difference between the sexes. If a man knocks up his neighbor, takes the kid to his wife and tells her "look what a beautiful baby you and I made", his wife is unlikely to buy that story.
Of course, this does not mean that male infidelity resulting in conception is not a big deal -- it is still a defection which is liable to come with serious financial downsides (child support payments) to the couple (and any cheating is a breach of trust which might risk the relationship, and there is an HIV risk and so on).
[0]: https://www.tracingwoodgrains.com/p/the-faas-hiring-scandal-a-quick-overview
Flat disagree, to almost all of that.
Thanks. Would you mind, you know, explaining why?
My comment was a bit ranty, as I acknowledged. But yours seems like *the* least constructive form of discourse short of personal attacks.
>God I hate these arguments because they're so dishonest. Huge motte-and-baileys on both sides.
I will agree with this.
>No one should be bringing up race at all, *ever*, in any important public or political context.
Why not? It seems like, as much as race isn't really that helpful a construct and we would all be better off if it were banished, it is here to stay, and LARGE portions of the left are deeply, deeply invested in it and are making much of our political discourse about it.
And if they are going to make it the center of large parts of political discussion, well then you have to talk about it.
>If there's a technical finding that some IQ-test variable correlates slightly with race, that can be noted on page 67 of your report, alongside the correlations with height and hair colour and astrological sign. But why the hell would you signal-boost that particular finding unless you're racist?
Because the left has spent the past couple decades dismantling many systems of merit and evaluation and made ill advised social policy changes all on the back of the idea that disparities in outcome *can only* be explained by disparities in "systemic or overt racism".
And this isn't just some tribal political warfare culture war talking point. It has real impacts. My town has seen noticeably worse policing and adherence to the law, largely on the back of complaints (and then reactions) about law enforcement's supposed "racism". Much of the evidence of which is statistically illiterate.
At my child's school, his gifted and talented program was eliminated in an attempt to reduce disparities.
For basically 10 years or more, this large urban school district's more or less overt policy has been "ignore the white and East Asian kids, they will be fine, all efforts must be focused on reducing racial disparities". And what has happened is that people who can flee the system do, disparities don't improve no matter how hard they step on the scales, and racial animus increases.
At one point they basically dismantled the disciplinary system because blacks were being suspended more than whites, and violence shot up in schools; one middle school even got so bad they temporarily shut it down and "rebooted" it, all because no one was willing to consider the possibility that black kids and white kids might be getting suspended at different rates due to different, you know, BEHAVIORS. Literally the Obama education department was sending them nasty legal threats about their "racism" based purely on "disparities"-style thinking. It led to massive changes for the worse in the district, and probably that alone depressed attendance 10%.
So don't fucking tell me to ignore race or we can't investigate/measure race. The left wants to measure the shit out of all elements of it (as long as they can count on academia to keep suppressing any findings they might not like).
>On the other hand, you permanently lose the right to object to "scientific racism" forever the moment you say some shit like "it looks like blacks are underrepresented in your profession...".
Why care? Everything is racism now anyway, so it just doesn't bite like it used to.
>You shouldn't even notice that, you shouldn't be seeing race!
This just seems hopelessly naive; this is literally the main thing most large orgs are focused on when it comes to politics/morality these days. Are our quotas being hit? Literally, for my kids' youth sports programs they ask about it, and tut-tut that the numbers are not directly proportional... (um, different "communities" do different things, dummy). I am sure no one is showing up at soccer practice whining that there are disproportionate numbers of Somalis and Latinos.
>The moment you do, YOU have defected against the norms of respectability and rationality and individual freedom, and you deserve whatever offensive statements and findings are thrown at you. Don't want to deal with racism? Don't...be...fucking...racist.
Meh whatever, once again even a 90s colorblind leftist utopia version of post racialism is now seen as deeply racist. I just don't care. Also I know I am not "racist" in that sense. I grew up in a housing project, some of my childhood friends were from other races. My first real friend was a child of immigrants from Nigeria. I know what is in my heart and I personally don't give a fuck what color people are.
but the world does, and the world discriminates against me and my kids due to their skin color constantly because there is this hysteria about the fact that other races (and in particular blacks) don't do as well on various metrics.
>The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race.
I just disagree that this is some moral imperative.
>I don't believe there are any race differences because I don't believe race is a real thing.
Well, it is and it isn't. It is a pretty poor way to group the world if you had to group people genetically; getting a bit more fine-grained would be much more scientific. That said, the idea that it is literally meaningless is silly. It has a lot of historical and political baggage/meaning, and there are a lot of useful things you can say about the groupings.
No one would have trouble with the statement "Kenyans make good long distance runners", or "Yugoslavs make good basketball players".
So if it turns out that, say, Punjabis are particularly great mathematicians, that seems worth measuring/investigating.
>"Hanson once wrote that a woman cheating on a man is as bad as (or worse than) a man raping a woman provided he does it in a "gentle, silent" way.
I don't feel the need to defend crazy statements by whoever "Hanson" is.
That said I am guessing the idea here is something along the lines of "while raping someone is super bad and terrible, secretly making someone raise a child that isn't theirs is also really super bad", and he just put it inelegantly. At least that would be my guess at a non-crazy reading of that statement. I agree on the surface it is crazy.
A rape you could get over, lots of people do, it is an event, it happened, it recedes. Finding out 20 years down the road or whatever that your child isn't actually yours is a whole other level of mindfuck.
Anyway, those are my thoughts.
Maybe tomorrow as it would take explaining.
> If there's a technical finding that some IQ-test variable correlates slightly with race, that can be noted on page 67 of your report, alongside the correlations with height and hair colour and astrological sign.
Part 3 out of 3 is definitely not late enough, as Charles Murray has learned.
I worry that a report on average IQ of people with black curly hair might also be considered controversial.
On the other hand, a correlation between an astrological sign and educational achievement is already a scientifically accepted fact -- although some skeptics claim that this is merely a result of some kids being younger or older than most of their classmates.
"Part 3 out of 3 is definitely not late enough, as Charles Murray has learned."
I suspect that the race parts were specifically signal-boosted by other people.
The important thing is to keep the race-difference speculation squarely focused on defending against obnoxious "disparity equals discrimination" attacks. I could be wrong, but my impression is that James Damore became far more of a free speech hero on the mainstream right than Charles Murray ever did, for this reason.
Remember the different audiences. The average black person who just wants to be treated the same does not deserve to be subject to *any* sort of race-difference speculation, especially since race doesn't actually exist. The black activist who rejects colour-blindness and demands the burden of proof be on anyone accused of disparity to disprove racism... deserves to be completely destroyed.
> "it looks like blacks are underrepresented in your profession..."
I can't comment on race, but I can tell you about gender. Women are underrepresented in my profession.
If you look at the pool of qualified candidates, the proportion of women in the pool is comparable to the proportion of women in employment. But this doesn't mean there is no problem.
It may or may not be prejudice. I can't hire women who don't apply. Universities can't teach girls who don't join that course. At senior school level the imbalance between the genders for the relevant subjects is small or none. So somewhere there is a thing that occurs that turns girls away from the profession.
I suggest that this thing is worth at least studying so we can understand what is happening and why, even if we do then determine we can't fix it.
So it is for race, IMO, or any other identifiable group of people: where there is an observed step between proportions in some initial population and the final outcome, there is something worth trying to understand and maybe address (or maybe not, depending on the root cause, but you can't begin to make that decision until you understand the cause!).
The only way to begin this research - to acquire the "hard evidence that someone else is discriminating", if there is any to be found - is for someone to point out that "blacks are underrepresented" - to notice the difference between proportions in the target group versus the general population - and to try to follow the chain back to work out why that is, rather than stopping at the first link.
> I suggest that this thing is worth at least studying so we can understand what is happening and why, even if we do then determine we can't fix it.
I am actually kinda fine with disparate outcomes if there is equality of opportunity.
I don't want to convince half the men in construction to become kindergarten teachers and half the women who are kindergarten teachers to take on construction to reach some cosmic balance.
If a third of the high school Science teachers are female, none of them are sexist and two thirds of the students who end up studying physics are male, I don't see a problem which would have to be addressed by selectively encouraging more women to study physics.
Of course, if you have few female scientist role models and most teachers blatantly state their opinion that while girls are better at rote learning, boys are better at thinking about science, then you don't have equal opportunity.
But for universities, these dark days are mostly behind us I think. Women are (iirc) over-represented in some high prestige fields like law and medicine, while being underrepresented in other high prestige fields like engineering. Saying that we should work to get more men into medicine and more women into engineering seems silly to me.
> I don't want to convince half the men in construction to become kindergarten teachers
If a third of the girls want to be engineers, and then a year or two later they don't any more, while for boys the proportion remains unchanged, maybe it's worth asking what exactly happened during that year?
Yes, if you're actually asking rather than rhetorically "asking" because you think you know the answer.
But before you get to that point, it's necessary to ask "does this really happen in a year or two?", and then "which year(s)?"
Ok. I would personally put sex in a very different category to race: the former, unlike the latter, is a real thing, a biologically significant thing that affects nearly every part of everyday life. (Though that doesn't mean people need to talk about it as *much* as they do.)
Other than that, two questions.
1. If you notice that blacks or women are underrepresented in organisation X, should the burden be on you to prove actual discrimination, or on X to disprove it?
2. If you are allowed to bring up the disparity, are others allowed to bring up purported innate reasons for this disparity? That's the *entire* context here.
("Blacks are underpresented"; "here are some possible explanations for this other than discrimination"; "how DARE you make such racist claims!"
No. Either race-based talk and speculation is acceptable, or it isn't. I'd prefer "isn't" but allowing it all is consistent. The one thing you can't do is allow it only when it suits you and try to cancel those who challenge you, which is exactly what this discussions is about!)
>The only time it is ever acceptable to notice someone's race is if you have hard evidence that *someone else* is discriminating on account of race.
So, how would one go about discovering that discrimination is happening, if you aren't allowed to measure and see that some race is under-represented? Are you only allowed to notice racism if you have emails from the hiring manager saying "mwahaha, I hate black people and will never hire one"?
The same way you notice any other discrimination or prejudice is happening. Let's say I have curly hair. How would I know someone is discriminating against curly haired people? Probably if I notice some weird decisions being made that seem to have no pattern, and then I look for possible patterns and discover all the rejected applicants had curly hair, and then I look for further evidence corroborating that and only then start playing the hair card. But would it be constructive to make "curly haired" a central and public part of my identity? To constantly inquire into the proportion of curlyheads employed or admitted to a particular institution? To conspicuously divide those I interact with into curls and non-curls and treat them differently and default-assume that the latter are discriminating against me with no evidence? Wouldn't that just enormously increase the salience of curly hair and the chances of actual discrimination? And would I have any right to complain when people start speculating about the inherent differences between curlyheads and non-curlyheads when *I'm the one who keeps bringing the distinction up*?
Why should race work any differently?
Adam Carolla has a bit that the only true "privilege" he received as a (poor, illiterate) young white man was not being able to attribute people's inexplicably assholish behavior to anything except them being assholes.
And there are a lot more assholes than racists.
Well, that might explain why Lighthaven is so cagey about letting outsiders know where the secret clubhouse is. Though I see the Guardian has effectively doxxed them on that front, because of course they did.
By "secret clubhouse" are you referring to Lighthaven's campus, where LessOnline recently occurred?
Yes; I was not able to find that information anywhere I looked in the leadup to LessOnline, nor find anyone to ask. It's now trivial to find for anyone who reads the Guardian article and knows how to google, of course.
According to this, the writer of the piece was/is in Antifa: https://twitter.com/clairlemon/status/1802829359931355249
I didn't realize the Guardian was so far left.
"In Antifa"? What does that even mean? Do they have a membership card? A decoder ring?
I thought they were a group. No? If not, what are they? A movement? Can an activist be "in" a movement?
"Antifa" is a very loosely defined set of antifascist tactics, rather than anything remotely resembling a group that one can be a member of. The best way I can summarize it is as the belief that fascism must be resisted with action, including physical fighting if necessary--hence the name (long story but it's basically derived from "anti-fascist action").
It is better characterized as a movement, yes, although it is very fuzzy and decentralized: unlike BLM, there aren't even any competing national organizations claiming to speak for it. It's really a bunch of individuals and (usually) small groups, often with highly differing political alignments, who share points of unity on what they perceive to be the need to oppose fascism directly.
I find this description hard to believe, specifically the part that people base their identity on being *against* something without being similarly strongly *for* something.
Yeah, you are not a fascist; well neither am I, but that wouldn't make anyone mistakenly consider me an Antifa member. What else is missing?
What are you doing to prevent the spread of fascism? No answer to that question is what makes you not [an] antifa [member].
What's missing--in your analysis, not from the reality on the ground--is the bit where a diverse group of people can be, and are, strongly against one thing (fascism*) while being strongly in favor of different things (anarchism, state socialism, 20th century Euro-style social democracy, etc.). That's the main reason why "antifa" can only ever be a set of points of unity among different groups, rather than a group in itself.
*or communism, or libertarianism, or any of a number of things that many people are against, even as the things those same people are for diverge widely.
The way I heard it is that Antifa started out as an offshoot of revolutionary leftist movements (particularly Communists and Left-Anarchists, hence the black-and-red flags motif in a lot of Antifa branding) that decided to focus on direct action against fascist groups instead of the more conventional tactics of advocating against the dominant liberal social order. And it got a lot of its early traction in places like the Punk subculture where it served as a convenient label for pushback against neonazi skinhead gangs.
I think there's also some motte-and-bailey stuff going on, especially post-2016, where the motte is "Antifa is anyone who wants to fight back against the rising tide of fascism" and the bailey is a variety of small, mostly-leftist groups that use the banner for street violence against targets they perceive to be fascist.
And as with many motte-and-bailey situations where there isn't a single cohesive organization with well-defined orthodoxy and messaging, neither part is necessarily insincere, since there are no doubt plenty of people who identify with Antifa in the motte sense but are only associated with the bailey-Antifa groups in terms of general sympathy for the concept of punching Nazis.
Obtuse? Is that what you thought I was being?
Oh, certainly I do. The writer meant to discredit another writer by suggesting that the latter was concealing their membership in an "organization" that it's not, in actual fact, possible to be a member of. More broadly, they meant to exploit the ignorance of their audience by presenting this "fact" as though it were some kind of bombshell.
It worked on at least one ACX commenter, which is why we're having this conversation.
I'll reiterate the question, and I'm not being obtuse, actually; I've literally never heard the term used before except in conspiracy theory contexts interleaved with qanon and sovereign citizen rubbish, and none of the folk from those groups have ever been willing to explain even when the torrent stops long enough for the question to be asked.
Never mind other people; what do /you/ mean by it?
OK, so the accusation is that this person is involved with mob violence. Thanks, that is indeed now clear. My most charitable impression hitherto was that most use of the term was a bit like the use of "Anonymous" in media hype a decade or two back, but this is much more specific.
Yesterday there was a hint the war on Gaza could plausibly expand to a wider war in the Middle East, one where the 13th of April Iranian attack on Israel could look like a normal day.
Previously, Israel had already bombed Lebanon as far north as Beirut. Bombing Beirut is unusual, but the usual occurrence is bombing the villages and cities of Lebanon's south, as far as Baalbek (100 km from the Israeli border), as well as Damascus and south Syria, in a campaign that has killed a total of 300+ named Hezbollah members, plus other members of Hamas and affiliated groups, as well as dozens of civilians. In return, Hezbollah launches anti-tank missiles and drones at Israel's north, which is completely empty of civilians but still contains forests, abandoned property, and military bases and personnel. Hezbollah rockets usually only land as far into Israel as Kiryat Shmona, about 5 km from the Lebanese border. Hezbollah is deliberately restraining itself here, as it is known to have enough range to reach Tel Aviv and beyond. Hezbollah launches drones as far as Haifa, but those get shot down by IDF defenses. When rockets and drones fall into open green areas, they ignite massive forest fires that last days and burn what the Israeli press reports to be 80K dunams of land (80 square kilometers). This is expected to rise as Hezbollah ups the ante and we get deeper into the summer.
Yesterday, Hezbollah published a ten-minute-long surveillance video of Haifa, including the city's port, a military industrial center, several Iron Dome sites, and more. In possible retaliation, Israel declared that the high commander of the northern region approved plans for a ground invasion into Lebanon.
What's interesting here is that this is exactly what Hamas wanted on October 7th: an invasion by Hezbollah in the north, coupled with new civil unrest inside Israel and in the West Bank, possibly with only a faint hope that the Arab states around Israel wouldn't just stand and watch. Hezbollah is said to have been ordered/advised by Iran not to invade so as not to ignite a wider war, but now Israel itself is giving Hamas what it wants, despite Hezbollah's wishes.
The 2 variables controlling whether a war will happen and how big it will be seem to be (1) Whether Netanyahu's government falls and early elections happen, or whether it continues till 2026 despite the hundreds of thousands of protestors since October 7th and the exit of Gantz and Eisenkot (2) Whether the IDF is only planning a face-saving operation that advances a few kilometers into Lebanon and then back again, or whether it intends a rematch of 2006. On the Hezbollah side, it depends on how much courage it takes to throw the first rocket at Haifa or - even - Tel Aviv.
In related news, the Israeli right sees this escalation - as it has seen the Gaza war - as an opportunity to advocate for the colonization of south Lebanon. In an online conference [1] held on Monday, leading figures in the settler movement Uri Tzafon, including Sara Netanyahu's eldest brother, outlined colonial fantasies to expand Israel up to the Litani, up to the Euphrates, and even up to the northern outskirts of Saudi Arabia.
In unrelated news, Wikipedia declared the ADL to be "generally unreliable" on the issue of Israel and Palestine, meaning "the source should normally not be used, and it should never be used for information about a living person". See the news story (https://www.reddit.com/r/wikipedia/comments/1dj189l/adl_faces_wikipedia_ban_over_reliability_concerns/) and the first clarifying comment for details.
[1] https://archive.ph/susMD
You've become quite the political junkie my man! 😂
I miss those days...
I'm still trying to end this war by coming to a Great Reconciliation (inclusive of some wealth or land redistribution, quite probably), so if you ever want to consider activism of a whole different kind than any currently ongoing, I would love to have you on my team. You can reach me by Substack, YouTube, or my linked email in both of those.
> Wikipedia declared the ADL to be "generally unreliable" on the issue of Israel and Palestine
Let me guess, their most preferred sources are Human Rights Watch, Amnesty International, and Al Jazeera?
No, Khamas is their preferred source. Wikipedia English is a branch of Khamas. You have heard it here first.
Reversed stupidity is not intelligence.
https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence
"Hamas actually WANTED Israel to invade and kill them, Israel is playing right into their hand!!!"
Forming strategy on the basis of doing the opposite of whatever your enemy wants you to do still puts your enemy in complete control, and is an especially terrible strategy if your enemy is stupid.
C.f. the battle of Dien Bien Phu. The French strategy was to provoke a set-piece battle under favorable conditions by setting up a large, fortified base deep in Viet Minh-held territory threatening VM supply lines. The VM did exactly what the French commanders wanted them to do, concentrating a big chunk of their best troops to besiege and assault the base. Then the VM went off-script and won the battle.
Ideas for dating site:
1. Sole revenue stream is the sale of in-app currency called "Attention tokens". Non-redeemable, and can be purchased in-app for $0.50. Every time anyone sends a message, they pay one attention token to the recipient. So they get it back if the other person replies. You can also pay one token to guarantee that your profile is shown once to a particular person when they are swiping (and get refunded if they swipe right or don't log in within 48 hours). If you accumulate more than 100 tokens, the excess are burned. Each account has public stats of (1) what % of first-time messages they reply to (2) how many unique recipients they've sent messages to within the past 48 hours. (3) how many unique recipients they've sent messages to, over all time. (4) how many real-world dates they've organized through the app with distinct persons, over all time. (transparency is great and underutilized in all the apps thus far) The attention token revenue goes to maintain the site, as the tokens can never be redeemed. It's just a price on wasting people's time. Women tell me every time they use Tinder they are flooded with inappropriate messages from jerks and losers, and this fixes that. (A rough sketch of the token accounting follows after point 2.)
2. fact-checking profiles. If you can't prove it with receipts, and it's not totally subjective, leave it out or get banned. Only photos timestamped within the last year are allowed. Weight and height are only allowed if they submit photographic proof to the mods. Full Quaker mode activated.
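To make the token mechanics in point 1 concrete, here's a minimal sketch of the ledger logic. Everything in it (the class and method names, exactly how the "refund" and the 100-token cap are handled) is my own assumption about one way it could work, not part of the proposal itself:

```python
# Hypothetical sketch of the attention-token ledger from point 1 above.
# Names and details are assumptions for illustration, not a spec.

TOKEN_PRICE_USD = 0.50   # tokens are bought in-app and never redeemed for cash
BALANCE_CAP = 100        # tokens accumulated above this are burned

class AttentionLedger:
    def __init__(self):
        self.balances = {}   # user_id -> token count
        self.burned = 0      # tokens destroyed at the cap

    def buy(self, user, n):
        """User purchases n tokens; returns the dollar cost."""
        self._credit(user, n)
        return n * TOKEN_PRICE_USD

    def send_message(self, sender, recipient):
        """Each message costs the sender one token, paid to the recipient.
        A reply is just a message in the opposite direction, which is how
        the original sender 'gets the token back' if the other person replies."""
        if self.balances.get(sender, 0) < 1:
            raise ValueError("not enough tokens to send a message")
        self.balances[sender] -= 1
        self._credit(recipient, 1)

    def _credit(self, user, n):
        total = self.balances.get(user, 0) + n
        if total > BALANCE_CAP:
            self.burned += total - BALANCE_CAP   # excess burned per the 100-token cap
            total = BALANCE_CAP
        self.balances[user] = total
```

The profile-boost refund and the public stats would presumably sit on top of the same ledger; the key design point is just that tokens only ever flow user-to-user or into the burn pile, never back out as cash.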
I'm not married to this mechanism design, but more fundamentally there are three things a good dating app needs to have, and the existing ones do one out of three at best:
1. a tax on bullshit
2. a tax on unwanted attention
3. massive network effects
Manifold.love had only half of a solution to #1. Eharmony has only half a solution to #2. Tinder has only #3. Hinge has none of the above, but it worked anyway (that's how I found my wife) by dumb luck or a good feed algorithm.
Remember Luna was going to be the dating market that solved all these problems by using their own crypto coin and men would have to buy it to message women? Whatever happened to them? 😁
> Sole revenue stream is the sale of in-app currency called "Attention tokens". Non-redeemable, and can be purchased in-app for $0.50. Every time anyone sends a message, they pay one attention token to the recipient
Terrible idea: a woman with a chatbot will just recreate OnlyFans.
Did you miss the part where the tokens aren't redeemable? And it'd be sufficiently inconvenient to resell the tokens / trivial for AI moderators to detect that nobody would bother.
ChatBotGirl: hello there handsome, if you send $40 of bitcoin to xxxxx I'll send you 100 messages
Guy: why should I trust that you'll actually do that after I pay you?
GPT4Moderator: Hello, we've detected that ChatBotGirl was trying to resell tokens using offsite payments, and banned her.
And then after they ban all the chatbot girls, they find out that there are only around six real women on the site to a thousand men, and the thing collapses anyway.
How's that going for MMOs, current dating sites and prostitutes, Twitter and political bots?
1. Bots are like three orders of magnitude rarer on dating apps that don't have a free version. Let's say you have to pay one attention token to make your profile visible in anyone's swipe feed for the next 24 hours or 10 right swipes, whichever comes first.
2. You can just cap the net token gain from any one interlocutor at 3 (burning the excess) and word-filter chat to block links and crypto addresses if GPT inference is still too expensive at scale.
Take my money.
Cynical take:
(Almost) everyone here wants a site that will set us up with nerdy hotties, but there aren't enough of those to go around.
The rest is commentary.
There aren't enough of them to go around as ongoing partners, sure.
But there are less-not-enough of them to go around as first dates. And every time they go on a first-and-only date with someone a worse match for them than me, that's a failure of the system that I'm happy to provide incentive to improve.
You assume that, in an efficient market, they would wind up with you. ;)
I predict a significantly higher chance they wind up with me, and a near certainty that I have significantly more first dates.
You're quite confident. But then, that's sexy, so perhaps rightfully so!
Like I said, I'm being cynical (and going for laughs). But I think there are more geeks than geekettes or geek-lovers, and that's always going to be a problem with these sorts of sites.
But if it can produce at least *some* happy couples, then its utility is greater than zero, and they should continue.
That's not a problem with the sites, that's a problem with the geeks.
The solution is to have a broader range of interests.
Another solution is arbitrage, pairing males from markets where males are undervalued with females from markets where females are undervalued. Some sites do this.
How does that work?
Or is that just a euphemism for NWMAF (nerdy white male, asian female)?
Yes.
It was more of a joke about the way rationalists keep going 'our dating site never works!' and it never seems to improve. Sorry, guys, women don't want what we're selling.
You should really be careful to include the qualifier of "hot" when you're talking about the lack of female nerds/geeks/etc.
Because there are indeed plenty of average and below average single female nerds/geeks/etc. They're just literally not as visible as the 0.2% of incredibly hot cosplay/influencers.
My observation is that there's a substantial male majority in specific nerdy professions and hobbies, such as computer programming and tabletop RPGs. "ACX readers who are engaged enough to answer the survey", for instance, are 85% male. When I was in college (early 2000s), both the computer science department and the tabletop RPG club were about 90% men. More recently, my peer group at work and in hobbies is somewhat less imbalanced, but that just means between a 2:1 and a 5:1 ratio rather than a 9-10:1 ratio.
OTOH, there are other nerdy professions (e.g. biology and other life sciences) and hobbies (e.g. fantasy fandom) that are more balanced or majority women.
I wasn't claiming there was an equal distribution of gender in nerds/geeks, just that there are many single women in those groups who aren't being approached by the men in those groups for...reasons.
I keep hearing that, and am skeptical (though obviously our lived experiences are going to be very different). I kind of figured the male geeks would lower their excessive hotness standards (we ain't too cute ourselves in most cases) and the market would clear, so to speak. Maybe not.
> I kind of figured the male geeks would lower their excessive hotness standards
They don't!
> Non-redeemable, and can be purchased in-app for $0.50. Every time anyone sends a message, they pay one attention token to the recipient.
That’s going to make some people very poor.
> If you accumulate more than 100 tokens, the excess are burned
While not really benefitting the hotties.
Response rates will be much higher because the number of incoming messages will be so much lower when people have to be judicious about it instead of spamming everyone.
Not at all. The amount is trivial per person sending the token but attention is distributed in a power law in dating apps. However with the cut off at 100 tokens the recipient isn’t benefiting either.
Attention is only distributed as a power law in dating apps because there isn't a tax on unwanted attention.
How are you going to tax the unwanted attention of every guy who thinks "okay, the other one hundred and fifty guys wasted their tokens, but *I* have a real chance here?"
Plus, how do you decide which tokens get burned? I presume the oldest one hundred first, if the recipient hasn't responded to any of the senders?
Suppose I signed up for your app and got 200 tokens (and pigs will fly), can I use those tokens to reply to people or do I have to buy my own tokens if I want to message someone?
> How are you going to tax the unwanted attention of every guy who thinks "okay, the other one hundred and fifty guys wasted their tokens, but I have a real chance here?"
By laughing all the way to the bank when they buy tokens for that purpose
I think the idea here is that 1 token = 1 message, regardless of whether it's initial or a response. They're fully fungible, but you're (in a sense) sending one back and forth when you have a conversation.
Only if I can predict who hasn't gotten attention.
If I throw $1 at your system, I expect it to go to 2 people who have 1000 messages.
Only if the feed algo sucks and people ignore the response rate stats. Current apps show people who are way out of your league in the feed because it's optimized for entertainment more than actually helping anyone find a match.
> While not really benefitting the hotties.
it would be trivial to launder it
It’s presumably a token attached to an account so I doubt it.
I like the idea, but I suggest being less strict about it.
Also some profits should go to some charity. Then popular people who are on the fence could lie to themselves: "noo... I don't actually NEED to use this dating site, I am only doing this to feed some hungry children with other people's money".
> (3) how many unique recipients they've sent messages to, over all time. (4) how many real-world dates they've organized through the app with distinct persons, over all time. (transparency is great and underutilized in all the apps thus far)
This part I don't like. The main reason why I avoid dating sites is that I don't want to paint a target on my back for all eternity for the whole world to see. If my failures disappear after 48h, then I would feel much more comfortable using that site. (If only the person I swiped right on can see these stats, then I could get comfortable with it. But even then these stats should only go back a year or so.)
> 2. fact-checking profiles
what is the point of this?
If someone lies about something that is important to me, then the worst that could happen, would be to waste some time.
Dating is fundamentally an information problem. You could analogize it to the multi-armed bandit problem in decision theory. The more accurate and complete information that you get up-front, the less time you waste dating the wrong people, and the quicker you find your life partner. This is why the optimal rule is to force everybody to have complete and accurate info on their dating profile. Unilaterally doing that in the current environment would be an individual handicap, but if it were universal then nearly everyone would be waaay better off.
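To gesture at the bandit analogy a bit more concretely, here's a toy epsilon-greedy sketch. The "arms", the payoff numbers, and the function name are all invented for illustration; it's a sketch of the decision-theory idea, not a claim about how any real app works:

```python
import random

# Toy epsilon-greedy bandit. Each "arm" stands in for a pool of candidates;
# better up-front information is modeled as better-separated expected payoffs,
# so the learner wastes fewer rounds ("bad dates") identifying the best arm.
# All numbers are made up for illustration.

def epsilon_greedy(true_means, rounds=1000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)
    counts = [0] * len(true_means)
    total = 0.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(len(true_means))                           # explore
        else:
            arm = max(range(len(true_means)), key=lambda i: estimates[i])  # exploit
        reward = rng.gauss(true_means[arm], 1.0)                           # noisy "date quality"
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]          # running mean
        total += reward
    return total

# Well-separated means (accurate, complete profiles) vs. barely distinguishable ones:
print(epsilon_greedy([0.2, 0.8, 1.5]))   # learner locks onto the best arm quickly
print(epsilon_greedy([0.9, 1.0, 1.1]))   # more reward lost to picking the wrong arm
```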
> This is why the optimal rule is to force everybody to have complete and accurate info on their dating profile.
I thought it's fairly well attested that the many filters available on the current apps are net negative?
Like if you compare current happy relationships and the relevant stats, and matched single people on apps, people are filtering on things that don't actually matter to relationship quality and happiness.
Particularly as an attractive woman, you get this list of legible metrics and think "of *course* I want a 6' 2"+, PHd-holding man with income over $200k", or "might as well winnow the chaff" and select a bench of legible filters to reduce the pool of messages, because you're going to be flooded with messages whatever you choose.
But then when you're swiping, there are two additional hidden (devastatingly strong) selection effects - you're filtering on attractiveness in pictures when swiping, and in the background, you have now cumulatively filtered on "especially *attractive* 6' 2"+ $200k+ men who are *still on a dating app.*" If an attractive tall successful guy wants to be married, he already will be - the ones you're seeing on apps are overwhelmingly there to sleep around, whatever he says, at least 9/10 of them, because the ones who want to get matched can get matched right away and will fall off the dating app.
This is just a sketch, mind you, but it's a finger pointing at the moon of overall dating app dynamics.
So those legible filters are screwing people who want a high quality long-term match, for various reasons. When all the things that *actually* matter for relationship quality and happiness are illegible, and part of two-person individual dynamics anyways.
I think giving people more legibility in filters isn't necessarily going to do people many favors, in other words. It's the wrong way to approach the problem, which fundamentally requires that roughly-assortatively-matched people go on a bunch of dates to discover and find those illegible two-person-dynamics qualities that actually matter.
You think that people seeing that a tall model attractive dude with high income has been on a zillion dates will make them less likely to go on a date too, but you're probably wrong. Similarly, you think that if people see that hottie-McHotface has 1k messages and has responded to 10 of them, she won't get additional messages, and I think that's probably wrong too.
Dating apps have the winner-take-all dynamics characteristic of large pools of competitors, like professional athletes or musicians. Back when OkTrends was still a thing, they analyzed the data and found that women empirically consider 80% of men unattractive, and only ever consider the top 20%, vs men who rate on a bell curve.
> When all the things that *actually* matter for relationship quality and happiness are illegible
I don't believe this, and I think most of the people pining for 2011 OKC are on my side. Their match % algorithm is shockingly good between people who use it correctly (i.e. accurately label the importance of questions, answer truthfully and correctly, etc). Sure, there are *some* illegible factors, but I know I'm not going to enjoy spending my life with someone who does X or Y or Z or believes A or B or C or who can't stand doing P or Q or R. If all nine of those things are 50/50 filters, I'd be ecstatic if me and my perfect partner could both filter out 99.8% of people before we even see their profiles.
Against the OkTrends inequality point, I think the integral of hotness over lifespan is much more egalitarianly distributed among women than among men. Almost every woman was hot when she was 16-21. I would say the median girl I knew in high school was hotter than 100% of the people I saw on apps in my mid-30s. I think women do themselves a disservice by waiting too long to pair up.
I don't disagree with you on any of that. People often misuse legible information. But more information is better than less information when people are being rational, and since the system forces them to put their money where their mouth is they will be a bit more rational. More information can help people avoid the adverse selection you speak of -- in this case the stats would reveal the rakes. (assume you need full KYC to create an account so there are no smurf accounts).
I disagree that it's anything so simple as an information problem.
At least part of it is a "not knowing what you want (or don't want) until you get it" problem.
That's true, and also to some extent people change each other in hard to predict ways. But a lot of it is it takes time to get to know all of the preexisting traits of a person. A god's eye view of a suitor's history, summarized impartially by GPT6, plus a battery of psychometric tests, would save a lot of hassle in expectation.
> Are there any people like that?
I have a gut feeling that says yes, but now that I slept on it, I am less sure.
My thought goes something like this: some people are too embarrassed to use dating sites, and need an excuse, even if it is just a superficial one.
Now that I think about it some more: I can imagine myself being such a person. (Although the "excuse" would need to be less obvious, and I can't imagine any website pulling this off in a way that would be comfortable for me.)
> If you want to feed hungry children there will be far easier ways to do it.
actually, I think that could be quite fun. You raise money for charity simply by making yourself available on a dating site. Then other people will "donate" money for the chance to approach you. The other person is not "paying for attention", they are "donating to charity, while attention is just some side-effect" (or at least that could be the lie they tell themselves).
Compared to EA stuff this is probably not the most efficient way to donate. But neither are most non-EA charities.
The only info you have to give people up front is that sending a message costs $X (fifty cents in this proposal). All the other details can hide on a FAQ page that nobody reads, and pop up in the interface if/when someone encounters them for the first time. e.g. the 100 token cap might show up once someone has 50 and it warns them they won't be able to store more than 100.
I was thinking, perhaps very naively, about things that people consider "good" and "bad", and whether there are good and bad people, and what does that even mean. Now I do not want to discuss the goodness or badness of specific actions, but rather the... paradigms(?) of what it means for the people to be good or bad. As I see it, these are the perspectives that people seem to have:
"Good people do good things, bad people do bad things." -- Simple and elegant. Popular in stories for children. Evidence in favor: past behavior is the best predictor of the future behavior. Evidence against: in real life people do good things on one day, bad things on another; or in different contexts.
"Everyone is good at heart. Smart people understand the impact of their actions, and empathize with others, so they mostly do good. Stupid people act chaotically, and cause a lot of suffering. The problem is stupidity, not some inherent evil; everyone is a good guy in their own story." -- Marcus Aurelius. Everyone who believes that improving education is the key to human goodness. Evidence in favor: sometimes explaining how your actions impact other people changes your behavior. Evidence against: sometimes it does not, some bad people are quite aware of what they are doing.
"Everyone needs to have their basic needs satisfied first, and then they can be nice and generous. People are bad when they are hungry or hurt, nice when they feel good. By improving the living conditions of people we improve their behavior." -- Popular on the left, basically because it means that whenever poor people do something bad, in some sense it is never their fault. Evidence in favor: introspection, it seems to me that this is basically how I behave. Evidence against: some people never have enough; different people react to their own misfortune in opposite ways: some want to take revenge on the others, some want to protect the others from suffering the same fate.
"Good and bad are relative; it means whatever is convenient for you or your tribe. Your ingroup is tautologically good because it fights on your side; your outgroup is tautologically bad because it fights against you." -- Conflict theory. Evidence in favor: conflict theory is popular. Evidence against: why do some people consider donating to anti-malaria cure good if no one in their tribe is at risk of getting malaria?
"Everyone is equally good and bad, you only don't see it because you are not sufficiently enlightened." -- Unless there is a specific evidence that Hitler once saved a kitten, this seems obviously wrong, but this is one of those things that some people consider "deeply wise", so I am mentioning it here.
"Everyone does whatever gods make them do." -- ancient Greeks according to Julian Jaynes. Difficult to falsify.
"Everyone does whatever evolution made them do." -- yeah, but I am asking what *specifically* it is.
"Good and bad are just habits. When you keep doing good things, they become natural to you, and doing bad things would feel wrong. When you keep doing bad things, those become natural to you, too." -- Evidence in favor: seems like an obvious description of what most people do. Evidence against: it doesn't explain why some people try to change their habits, sometimes successfully.
"There is no such thing as good and bad. There are smart people, who know what they want, and have the courage to take risks and do socially disapproved things in order to get it. And then there are losers who believe in stupid concepts such as 'good and evil' that other people invented in order to manipulate them, LOL." -- This seems to be what bad people believe, though they would obviously reject the label. Evidence in favor: a lot of preaching is hypocritical, and a lot of not-doing-evil is motivated by fear of punishment rather than by compassion. Evidence against: sometimes it actually takes a lot of courage to do a good thing; also, the 'stupid' good people somehow keep surviving, even if this theory would predict their extinction.
...is there some other major perspective that I have missed?
> ...is there some other major perspective that I have missed?
One I didn't see in your list:
"Good and bad is contextual and / or computationally intractable."
Like having a bunch of kids while poor is bad. Being really demanding of your kids is bad. But kids raised in a poor + demanding environment are more likely to succeed and accomplish better and more impactful things in the world, compared to similarly poor but lax childhood environments. There's a general "adversity and being a demanding asshole is generally bad, but forms stronger characters that do more net good" argument here.
Working in finance is probably bad. Particularly at the higher and more abstract levels like CDOs and other esoteric instruments, you're essentially paying people millions for arbitraging environmental / regulatory regimes and moving bits in ledgers back and forth in fundamentally zero-sum ways. But finance has a legitimate risk / capital allocation function in society, and capitalist economies with finance-driven capital allocation have empirically lifted billions out of poverty. Yet capitalism systematically drives many, maybe most, workers' incomes and standards of living down except for an elite, over-educated few who capture most of the ongoing productivity gains, and capitalism is a Red Queen's race that puts people on endless hedonic treadmills. So it lifts billions out of poverty, only to slam those billions into endless pointless Red Queen's rat races and hedonic treadmills / keeping up with the Joneses. Etc.
Hard-grading teachers with high standards are bad for the vast supermajority of average students who just want a good grade without a ton of effort and see the credential-driven nature of education as fundamentally pointless. But those hard-grading teachers drive 1% of actually interested / capable students to learn more, and it's ultimately those students who drive most innovation and value in the world in terms of actions. So maybe every teacher should be very strict with high standards! But that would lead to 99% of students not making the cut, wasting years on credentialism journeys that failed, etc.
So what is good, what is bad? It's purely contextual, and likely computationally intractable. Certainly to merely human brains enmeshed in complex, chaotic, and emergent systems and dynamics, it's computationally intractable.
Also, you didn't include the famous Solzhenitsyn quote, so including for the sake of completeness:
“If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?"
Goodness is desirable being, and being is divided into potency and act. Humans cannot really desire nothing. But we can desire beings that are so much in potency and so little in act that they hardly exist at all. I don't know if that makes a person bad, but it certainly hinders them in doing good - because these things are inherently disappointing and the disappointment saps our energy.
> "Everyone does whatever gods make them do." -- ancient Greeks according to Julian Jaynes
I don’t know why you needed to drag him into it. This is not a very accurate summation of his proposition.
Then perhaps "unconditional election" in Calvinism would be a better example.
Perhaps.
>"Everyone needs to have their basic needs satisfied first, and then they can be nice and generous. People are bad when they are hungry or hurt, nice when they feel good. By improving the living conditions of people we improve their behavior."
Evidence against also includes the fact that rich people do bad things too. If this was true, we would expect a gradient of good or bad behavior based on income, with those with the most resources exhibiting the most moral behavior. This does not at all seem to be the case.
>"Good and bad are just habits. When you keep doing good things, they become natural to you, and doing bad things would feel wrong. When you keep doing bad things, those become natural to you, too." -- Evidence in favor: seems like an obvious description of what most people do. Evidence against: it doesn't explain why some people try to change their habits, sometimes successfully.
That's a good explanation of why people do good or bad things, but not what good or bad is. People can change their habits if they choose to seek the good, but the good is not the habits, the habits instead are what make your nature good or bad.
Moral realism would put it that there is a real "good" with which our behaviors can be in alignment, or out of alignment. To be in alignment with the good is to be good, and to be out of alignment with it is to be evil. Once you believe there is a good out there you can be in alignment with, then it makes sense to talk about how you align your behaviors to it: about habits, and incomes, and whether people are naturally aligned with the good or not. But without a belief that good exists, it's nonsensical to talk about people being good or bad.
> Evidence against also includes the fact that rich people do bad things too. If this was true, we would expect a gradient of good or bad behavior based on income, with those with the most resources exhibiting the most moral behavior. This does not at all seem to be the case.
Am I missing something? Isn't this *overwhelmingly* true??
Isn't the vast majority of violent and property crime perpetrated by a handful of predominantly lower-income younger men? Like if you're trying to predict the factors relevant to "committing property or violent crime," your dominant factors by far are going to be income / wealth, age, and maleness?
Throw in the fact that some huge percentage of crime is committed by an even smaller handful of repeat offenders, and I don't see how this isn't overwhelmingly, glaringly true.
Sure, you can say "well moral actions are a bigger set of things than strictly criminal actions" or whatever, but "criminal actions" seems like a fairly strong Schelling point to ground your intuition and argument on, and it's way wrong even there, so I wouldn't necessarily extend the conclusion upwards.
While poverty is somewhat correlated with crime, it is not clear that it is causal. It is quite possible that being the type of person who does crimes makes you more likely to be poor. If poverty were causal we would expect all poor people to have a propensity towards crime, but that's not the case: there are more Asians than Blacks under the poverty line in NYC, yet the Asian crime rate is much lower. (https://www.city-journal.org/article/poverty-and-violent-crime-dont-go-hand-in-hand). Weird result if poverty is causing the crime.
Rich people have, of course, committed fraud, embezzlement (theft), murder, rape, assault, etc, despite having all the resources they could need. They don't break into houses and rummage around for jewelry because they have better options, like skimming the pension fund. And beyond crime, would you say that rich people are generally more moral people than poor or middle class? Do they lack envy, pride, greed, and lust? Do they forgive easily, and treat others the way they wish to be treated? They all should, if immoral behavior comes from material lack. Perhaps more of them do than the poor, though again I would say the causal relationship there is likely reversed.
Of course if it was true that evil behavior comes from material deprivation, then the natural conclusion would be to treat poor people as evil. It reminds me of a passage from Chesterton:
"I have listened often enough to Socialists, or even to democrats, saying that the physical conditions of the poor must of necessity make them mentally and morally degraded. I have listened to scientific men (and there are still scientific men not opposed to democracy) saying that if we give the poor healthier conditions vice and wrong will disappear. I have listened to them with a horrible attention, with a hideous fascination. For it was like watching a man energetically sawing from the tree the branch he is sitting on. If these happy democrats could prove their case, they would strike democracy dead. If the poor are thus utterly demoralized, it may or may not be practical to raise them. But it is certainly quite practical to disfranchise them. If the man with a bad bedroom cannot give a good vote, then the first and swiftest deduction is that he shall give no vote. The governing class may not unreasonably say: "It may take us some time to reform his bedroom. But if he is the brute you say, it will take him very little time to ruin our country. Therefore we will take your hint and not give him the chance." It fills me with horrible amusement to observe the way in which the earnest Socialist industriously lays the foundation of all aristocracy, expatiating blandly upon the evident unfitness of the poor to rule. It is like listening to somebody at an evening party apologising for entering without evening dress, and explaining that he had recently been intoxicated, had a personal habit of taking off his clothes in the street, and had, moreover, only just changed from prison uniform. At any moment, one feels, the host might say that really, if it was as bad as that, he need not come in at all. So it is when the ordinary Socialist, with a beaming face, proves that the poor, after their smashing experiences, cannot be really trustworthy. At any moment the rich may say, "Very well, then, we won't trust them," and bang the door in his face. On the basis of Mr. Blatchford's view of heredity and environment, the case for the aristocracy is quite overwhelming. If clean homes and clean air make clean souls, why not give the power (for the present at any rate) to those who undoubtedly have the clean air? If better conditions will make the poor more fit to govern themselves, why should not better conditions already make the rich more fit to govern them? On the ordinary environment argument the matter is fairly manifest. The comfortable class must be merely our vanguard in Utopia."
I wasn't arguing causation, I was arguing that there's a plainly obvious "wealth and criminal behavior" gradient. Those with the most resources DO commit crimes at (much) lower rates. And didn't want to touch the race issue, but you're right, there are indeed additional variables like race that you could add to get a more predictive model.
But the fact that "income / wealth" is almost *certainly* one of the variables you need to include is what I was arguing. In other words, there is INDEED a "wealth / crime" gradient, and it's overwhelmingly obvious.
> Do they forgive easily, and treat others the way they wish to be treated? They all should, if immoral behavior comes from material lack. Perhaps more of them do than the poor
Yes, in fact, I think "wealthier people commit less crime and generally have better day-to-day character" is obviously true. Sure, rich people commit property crime at lower rates, and maybe this is confounded by having more property or whatever, just like poor people commit less embezzlement because they have less opportunity.
But you mention rape earlier - you don't think there's an "income / wealth" gradient in rape? You don't think there's one in assault and murder? I would bet significant sums there are, and poorer people commit more of all those things.
People love ragging on rich people, but we're ALL rich here in this comments section, at least rich in terms of "wealth / crime gradients." Anyone with a white-collar career who has family or routine contact with actual lower-income people can overwhelmingly confirm from direct experience that crime and character are basically uniformly worse on average the lower you go on the adult-income ladder.
As to causation, I think it's a trickier matter, but we don't need to worry about causation. Predictive models work just fine with correlations.
>As to causation, I think it's a trickier matter, but we don't need to worry about causation.
The question of whether immorality is caused by material deprivation is literally the point of this thread, and the point of my comments up to this point.
> The question of whether immorality is caused by material deprivation is literally the point of this thread, and the point of my comments up to this point.
Oh, I thought it was basically understood from your earlier "It is quite possible that being the type of person who does crimes makes you more likely to be poor" comment that it's most likely to be a self-reinforcing cycle with poor impulse control and short planning horizons driving both poverty and crime generationally, at which point causation is pointless to tease out.
But you want a clean signal? Let's look at rape / assault / murder rates among lottery winners, compared against old money scions with similar wealth.
Which way would you bet, higher wealthy crime in born-from-poor family, or born-from-rich-family? I bet it's vastly higher in lottery winners.
But what actionable insight can you actually gain from this?
"Evidence against also includes that fact that rich people do bad things too. If this was true, we would expect a gradient of good or bad behavior based on income, with those with the most resources exhibiting the most moral behavior. This does not at all seem to be the case."
I don't agree with this, but perhaps a steelman might say that these rich people are lacking something beyond income, or that income is too one-dimensional to capture all motivations. As one example, if the rich person's dad didn't love them and/or approve of them, they may be acting out against that.
This would overly complicate the "basic needs" narrative, because it would become extremely difficult to falsify. Any person who did evil could be described as missing something "basic" that causes the problem. It may not even be wrong, but may become an entirely unhelpful approach to evaluating morality.
I see your #1 - you are what you do - as basically correct, if imperfect. Yes, nobody is 100% doing good things all the time, but if we allow for shades of grey, we get "good people mostly do good things" and "bad people mostly do bad things", and nobody's perfect.
The nice bonus of this model of goodness is that it's actionable: want to be a better person? Do more good things.
Couldn't agree more.
"You ARE what you DO" is one of my ultimate favorite aphorisms.
Right after "you can't have good judgment without judgment" (and, like Valentine Wiggin, I'm humbly certain I couldn't have possibly originated that line and that I *had* to have read it somewhere, except that...well...after searching around a bit I think it might actually be mine).
And obviously, the two ideas are intrinsically linked!
That's pretty much where I am as well. Nobody is perfect, but it's obvious the people who are trying to be good.
We don't really care if Hitler was kind to animals or had a bad childhood. We care that most of the results of his actions were incredibly evil by almost any [useful] metric we could devise.
"By your fruits you will know them. Does anyone gather grapes from thornbushes, or figs from thistles? In the same way, every good tree bears good fruit, but a bad tree bears bad fruit. A good tree cannot bear bad fruit, and a bad tree cannot bear good fruit. Every tree that does not bear good fruit is cut down and thrown into the fire. Thus, by their fruit you will know them." -Mathew 7:16-20
Exactly!
>...is there some other major perspective that I have missed?
Do you count "tit-for-tat is evolutionarily stable." as being (roughly!) in the same category as the other perspectives? ( https://en.wikipedia.org/wiki/The_Evolution_of_Cooperation )
And it is worth emphasizing that this is true _even though defection is a dominating strategy_ (which I consider to be the significant discovery from these experiments).
>In both actual tournaments and various replays, the best-performing strategies were nice:[5] that is, they were never the first to defect. Many of the competitors went to great lengths to gain an advantage over the "nice" (and usually simpler) strategies, but to no avail: tricky strategies fighting for a few points generally could not do as well as nice strategies working together. TFT (and other "nice" strategies generally) "won, not by doing better than the other player, but by eliciting cooperation [and] by promoting the mutual interest rather than by exploiting the other's weakness."[6]
>Being "nice" can be beneficial, but it can also lead to being suckered. To obtain the benefit – or avoid exploitation – it is necessary to be provocable and forgiving. When the other player defects, a nice strategy must immediately be provoked into retaliatory defection.[7] The same goes for forgiveness: return to cooperation as soon as the other player does. Overdoing the punishment risks escalation, and can lead to an "unending echo of alternating defections" that depresses the scores of both players.[8]
Right, I wanted to say something like this. And then there is also Moloch, which I don't know how to fit into a discussion of good and bad. Is Amazon good or bad?
Many Thanks! Yes, Moloch is similar, in the sense that realistic options have to be _stable_.
To more explicitly tie tit-for-tat back to Viliam's original comment: Cooperating (C) is usually called "good" and defecting (D) is usually called "bad", but the instability of all-C, (contrasted with the stability of tit-for-tat) shows that all-C isn't really a realistic option. Agents _have_ to be "provocable" in order to have stable cooperation.
"Si vis pacem, para bellum"
I would put Moloch, the effects of competition, into the same general category of selecting only _realistic_ options. If an apparent option falls apart in the presence of competition, it isn't really a realistic option.
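(A toy illustration of that provocability point, using the standard Axelrod payoffs rather than anything specific from the book or the comments above: all-cooperate can be invaded by an always-defect player, while tit-for-tat retaliates after one loss and cooperates stably with itself.)

```python
# Toy iterated prisoner's dilemma with the standard Axelrod payoffs
# (T=5, R=3, P=1, S=0), illustrating the "provocable" point: against
# an always-defect invader, all-cooperate is suckered every round,
# while tit-for-tat takes the hit once and then holds its own, and
# two tit-for-tat players cooperate stably.

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def all_c(my_hist, their_hist):
    return "C"

def all_d(my_hist, their_hist):
    return "D"

def tit_for_tat(my_hist, their_hist):
    return "C" if not their_hist else their_hist[-1]   # provocable and forgiving

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        hist_a.append(a); hist_b.append(b)
        score_a += pa
        score_b += pb
    return score_a, score_b

print("all-C vs all-D:", play(all_c, all_d))              # (0, 1000): exploited forever
print("TFT   vs all-D:", play(tit_for_tat, all_d))        # (199, 204): loses once, then mutual D
print("TFT   vs TFT:  ", play(tit_for_tat, tit_for_tat))  # (600, 600): stable cooperation
```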
A wise man once wrote:
>I reject the argument that Purely Logical Debate has been tried and found wanting. Like GK Chesterton, I think it has been found difficult and left untried.
I want to believe this. In the world of technology it's at least somewhat true since there's proof in the pudding: Truth will make airplanes fly and Falsehood will make them fall from the sky. But in the world of politics, culture and religion it seems to not hold up.
The clearest example I've experienced is Creationists. The internet created fertile ground for a flood of Creationism debates. Remnants are still ongoing. I think it's fair to say that Creationism lost. It's easy to find testimonials from ex-Creationist guys who were young and nerdy and eager to defend their worldview but lost faith in the face of the overwhelming evidence. Still, Creationism seems to be going strong without much of a dent. The nerdy guys who left don't seem to have made much of a hole. It's hard to know the counterfactual, but this seems like a case where Purely Logical Debate was utterly exhausted and it didn't make much of a difference. (Sure, there's a lot of junk as well, but the library of high-quality Creationism debate content is vast.)
Looking at history, it all seems so materialistic. [EDIT because the specifics here are what people seem to focus on even though I don't care much.] The Nazis lost, women got the vote and slavery was ended not because of Purely Logical Debate but because of some combination of material conditions and great tides of cultural change. Maybe the people who debated the issue helped realize an idea a decade earlier than the counterfactual, but debate doesn't seem like a big part of it.
I'd love to get some input on this. Does someone have an example of a current political question where debate seems extra important? I guess this has already been discussed below Scott's original post, but I'm good for a repeat.
Purely logical debate may be the least bad way of settling something, but it is still pretty bad because of the reliance on premises. People tend to get their basic assumptions from their cultural background, so you can't use Pure Logic to resolve cultural divides.
I think you are being too impatient here - 500 years ago Creationism was overwhelmingly the dominant position, but through the process of "Purely Logical Debate" (roughly the scientific revolution) it is now a minority position. Over time evidence came in that favoured non-creationism (Kepler, Darwin etc.) and it won favour, starting with the elites and percolating downwards.
I think a similar model is true for many political questions - take market economy vs command economy, which was very much a live debate - but over time the evidence disfavoured command economy, so now more or less no major political party plans to implement a command economy (even the Chinese and Vietnamese Communist Parties don't!). The debate (and especially the evidence) favoured one side, so it's now no longer a major political question.
Current political questions are almost definitionally ones where the debate hasn't been settled one way or the other: either it's too early to tell or they are not a matter of logical debate (say, matters of zero-sum patronage).
Good points, thank you for your response.
Oh, I gave up and have gone all-in on Conflict Theory.
Who is for you? Be for them. Who is against you? Be against them.
(h/t Razib Khan)
I'm confused on exactly what you're saying. I would think there's a spectrum of answers to why various social and political changes happen that ranges from "completely inevitable and determined by initial conditions" to "utterly chaotic and dependent on a collection of the smallest random factors". My own view is firmly in the middle, and I think most people (and *especially* people around here for some reason) tend to go way too far to one or the other extreme. I would say it's (on the whole) something roughly like: certain questions and controversies arise inevitably given enough time (e.g. democracy vs monarchy, legalise pornography or don't, etc) and then they're resolved through usually an election/referendum or a war. And it's *not* determined which way those will go, especially elections which let everyone collectively decide (for all sorts of major or minor reasons) which path to irrevocably take. Saying those results are inevitable is insulting to the voters who think and make a free decision, and in a different way to the courage and effort of the officers and troops, who aren't just playing out a pre-written story where their own acts are meaningless.
But having said all that, where does the question you're asking fit into that spectrum? Is Purely Logical Debate found on the inevitability end, or on the chaotic end, or somewhere in the middle? Perhaps you're saying something parallel to the problem of free will: either our choices are determined and thus not free, or they're random and thus not acts of will. Is that what you're saying for social choices?
Solving that for free will is very difficult and I'm not sure you can: attempted solutions involve trying to find a mechanism where an act is determined enough to be meaningful but indeterminate enough to be free. But with decisions of society I think it's much easier to solve. As I said there are elections, which are clearly free and non-determined (despite all the people who want to claim they follow rigid economic patterns: if that were true those people should be able to reliably predict election results) but ALSO have meaningful explanations for their results, as they are the collective outcome of many decisions of individuals, some blind responses to incentives and some thoughtful reasoned judgements of personal values.
Please tell me: is anything I've said here addressing your question? If not can you clarify where your question fits within this framework?
I'm also confused about your examples (and can I just strongly object to editing a post to remove something that was previously there; it makes it hard to follow the thread and even less clear what you're saying than it was before).
At the very least, are these examples of something resembling Purely Logical Debate in action or not:
1. The Allies winning in part because of less ideological self-handicapping (e.g. women in industry instead of confined to the home, not wasting resources to murder people who could have helped the war effort) and because of governments with more popular support that didn't have to do things like divide the government into competing fiefdoms to maintain the leader's authority
2. An election where the electorate punishes one side for being too arrogant or too hostile to scrutiny and debate
?
I think at least a few types of changes are reasonably close to deterministic. Specifically, if a technology makes some choice orders of magnitude cheaper than the main alternative, I think the cheaper alternative will wind up winning out. Compare e.g. controlling temperature with a thermostat (or some other comparable automation) vs having a human manually check a thermometer every ten minutes and flip a switch. ( There can be exceptions if the incremental cost is low but the capital cost is high enough to be a major barrier to entry. )
>there's a spectrum of answers to why various social and political changes happen that ranges from "completely inevitable and determined by initial conditions" to "utterly chaotic and dependent on a collection of the smallest random factors"
I agree. But what I'm missing is Truth. No matter why changes happen, they sometimes seem to go towards Truth (e.g. in technology) and sometimes they don't seem to go towards Truth (e.g. the continued popularity of Creationism).
>I'm also confused about your examples (and can I just strongly object to editing a post to remove something that was previously there; it makes it hard to follow the thread and even less clear what you're saying than it was before).
I was just dejected by how everyone wanted to talk about the minutiae instead of the actual question.
>1. The Allies winning in part because of less ideological self-handicapping (e.g. women in industry instead of confined to the home, not wasting resources to murder people who could have helped the war effort) and because of governments with more popular support that didn't have to do things like divide the government into competing fiefdoms to maintain the leader's authority
Exactly: The allies didn't win because of Truth and because they had better arguments. They won because the Nazis were stupid.
My impression is that creationism has lost enormous ground since 2007 or so. I've hardly seen it mentioned for years, and while I can't be completely sure that there isn't just as much of an effort to push it into schools as there was back then (which for some reason no one's talking about anymore) I'm pretty sure there isn't.
Insofar as creationism is still held by many people, I'd mostly blame irrationality on the other side: i.e. atheism. When you're no more likely to see a "response" to creationists like "actually evolution is a scientifically established fact based on the following evidence; obviously this says nothing about the existence of God which is a question for philosophy" than one like "evolution happened and also matter's all that exists and there are no gods coz that's just like believing in fairies lol", then yes you're going to still have a lot of creationists. I suspect that when people in general start sticking to modest claims that have actual strong justification, then there'll be far less of two sides' arrogant and badly reasoned claims bouncing off each other.
"Exactly: The allies didn't win because of Truth and because they had better arguments. They won because the Nazis were stupid."
Huh? I don't understand your objection at all. A war is not about argument; it's what happens when at least one side refuses to listen to argument. So how is "the stupid side that hates argument loses because they're stupid and don't listen to sensible arguments" not a complete victory for the Cause of Argument?
And what do you think of my second example? (Which may describe the 2016 election that Scott was discussing, and the one after that, and probably this year's one as well: the incumbents punished for their hostility to argument, whether in the form of scientific illiteracy or cancel culture.)
>creationism has lost enormous ground since 2007 or so. I've hardly seen it mentioned for years, and while I can't be completely sure that there isn't just as much of an effort to push it into schools as there was back then (which for some reason no one's talking about anymore) I'm pretty sure there isn't.
https://www.cnn.com/2024/06/19/politics/louisiana-classrooms-ten-commandments/index.html
That's the Ten Commandments, it's nothing to do with Creationism.
Governor Landry is a Catholic and wouldn't support Creationism anyway.
My understanding is that about the same proportion of the US population is creationist today compared to 2007. But see Compavs point that creationism has lost a lot since 1607, which is a good counter-example. But the counter-counter is that debate increased a lot with the advent of the internet, so if debate were effective we would see an effect.
If the standard isn't "there should be Purely Logical Debate available" but "there should be Purely Logical Debate available and it should be more popular than arrogance and badly reasoned claims" then I think it won't matter since that's basically impossible.
>So how is "the stupid side that hates argument loses because they're stupid and don't listen to sensible arguments" not a complete victory for the Cause of Argument?
If this is how culture changes, then it's a waste of time to do Purely Logical Debate; the side of Truth should focus on war instead. Also, people will be wrong forever on questions where Truth doesn't translate well to winning wars.
I don't know what to think of elections. It seems less like a competition for Truth and more like Voters as Mad Scientists to me.
If you accept that cultural change is downwind of economic needs then you are not too far off Marx. Marx believed that the Protestant revolutions couldn’t happen without changes to the medieval economic model.
Which is largely what I personally believed before reading Marx. Morality is downwind of elite ideology and elite ideology is determined by economic benefit.
So there won't be any respite on immigration or identity politics until immigration or identity politics affect the top 1%. We have already seen this in the US at least twice: in 1924, when elite opinion turned against European immigration because capitalists were scared of importing radicalism, and a few years ago, when granny bourgeois might have got the sniffles.
You can see it now with the increased hostility to China which matches pretty much exactly with the Chinese moving from being a place that could provide cheap labour to one that could compete with western capital.
> If you accept that cultural change is downwind of economic needs then you are not too far off Marx
That reads a bit like "If you don't eat meat then you're just like Hitler" to me.
> Women got the vote because industrial society needed factory workers.
The first US territory to grant women the unrestricted franchise was Wyoming, in 1869. The first state, Colorado in 1896. By 1917, women had the right to vote in 12 states, all but NY and MI being generally rural. At that point I think womens' suffrage was generally regarded as inevitable, and I don't think it came out of any perceived need for female factory workers.
You can also put the women in the factory without, in fact, giving them a vote.
Oh yeah. Quick'n'dirty comparison of women's suffrage movements (the distinction apparently is that the suffragettes were more militant and 'direct action' than the suffragists) in the UK (first established in 1865) and the US (first established in 1848) versus women going to work in factories:
Lowell, the big planned textile mill city up and running in 1823, largest industrial centre by 1843 and actively recruiting women because:
https://www.uml.edu/tsongas/barilla-taylor/women-industrial-revolution.aspx
"The city’s investors hired corporate recruiters to enlist young women from rural New England to work in the mills. Their reasoning was two-fold:
- women were apt to stay in the city only a few years before leaving to become wives and mothers, thus preventing the establishment of a permanent working class; and
- women were less expensive and more easily controlled than men.
Every woman had her own reasons for seeking factory work. Life was very difficult on a subsistence farm in New England – large families resulting in minimal (if any) inheritances, failing crops from unpredictable weather, and young men leaving in search of a better life (reducing marriage prospects)."
Women's suffrage movements were driven initially, and in large part by, women from the educated and middle to upper classes. Organising working class women as part of broader labour movements involved agitating for the vote, but it wasn't about "we need workers, let's give women the vote".
The Nazis didn’t lose because they were a death cult. The Nazis lost because they were in a vastly inferior strategic, economic and military position.
The Russians also had an abhorrent ideology and won.
Also I don't think you are totally right about slavery at all; its end in many places wasn't pure economics.
The Nazis being a death cult meant that a whole lot of very talented scientists who would normally have been developing sophisticated new weapons for the Axis powers, went and joined the Allies.
The Nazis being a death cult meant that their head of military intelligence decided he didn't want to work for a death cult and so signed up as an agent in place for the Allies. By strange coincidence, every great success of military intelligence in the European theater was in favor of the Allies, along with most of the diplomatic successes.
And, yeah, the Nazis being a death cult made them particularly eager to invade Russia.
Being a death cult hurt the Nazis a *lot*, quite possibly enough to have cost them the war. The cynical contrarian "wisdom" that ideology doesn't matter, that morality doesn't matter, that only Realpolitik matters, is not particularly useful for understanding a world populated by human beings.
That's a stronger argument, but I still don't think it mattered.
The war was already over in late 41. The rest was just playing out the string. Basically, when the USSR didn't fold after the initial massive losses, it was over.
The diplomatic stuff cut both ways a bit, but yeah on the intelligence and scientific side it was a big liability.
Basically I think the Germans got a "natural 20" on their results in the early war, but were still in a losing and extremely poor situation in early 41, and that they were doomed from the start regardless.
Would the war have been over in late 41, if Spain had joined the Axis in 1940? The Spain with a fascist leadership that owed Germany big time for their support in the Civil War? The Spain that could turn the Mediterranean into an Axis lake pretty much on demand, and give the German navy bases much closer to the Atlantic trade routes?
Because Germany seems to have rolled a "natural 1" on that critical diplomacy roll, and I think that's mostly on account of the spymaster/diplomat in charge of that on the German side having defected from the death cult.
What if we throw in a clear understanding of how badly compromised Enigma was, and broadly competent espionage work against the Western Allies?
I just think because of US involvement and because of the lack of information at the time (and thus a need to fight and die by the hundreds of thousands), people aren't really comfortable with the fact that even by the time of US entry in the war, it was already a done deal. Hindsight is 20/20.
The whole thing being in any doubt relied on probably one or two things, neither of which happened.
Germany needed some really brilliant idea/execution of Sea Lion, or a UK bungling of the defense against it. Or the USSR needed to be as fragile as many thought it was. When the Germans didn't take the UK in 1940/41 and the USSR didn't crack under the initial big push in Barbarossa in late 41...there was just no actual achievable victory scenario for the Germans. Just a slow grind down into defeat.
And the Japanese never had any chance at all. Their whole concept of the war was predicated on the US being much more isolationist and disinterested in war than it was. They lost before they even started (though that might not have been foreseeable at the time).
On the other hand, if the Nazis were just regular old nationalist Germans (like in WWI) without the death cult bit, we can't take for granted that the war would have happened the way it happened. For one thing, that kind of Germany might never have invaded Poland and the whole dang war never would have happened! Once things are going there's an argument for inevitability, but a Third Reich without the death cult bits is so different from what we actually got that I can't imagine history would have gone down the same road.
> The Nazis didn’t lose because they were a death cult. The Nazis lost because they were in a vastly inferior strategic, economic and military position.
I don't think the two things are unrelated. The reasons the Nazis lost are closely connected with the reasons they were evil. They invaded the USSR, breaking a treaty of non-aggression, because honoring international treaties is for suckers, and as a result they found themselves in a desperate multi-front war. They could have painted themselves as rescuing people from Stalinist oppression and instead started massacring civilians and razing towns to the point of convincing millions that Stalin was the lesser evil after all. They brutalized prisoners because what kind of idiot is nice to a defeated enemy, and as a result people fought them to the death rather than surrendering. Symmetrically, Hitler forbade his armies from ever surrendering even before certain defeat, and as a result they got slaughtered on the fields instead. They relied for important parts of their war industry on starving slaves who had every reason to want their defeat, and the resulting work was riddled with poor quality and sabotage. They diverted resources and infrastructure away from the actual fighting in favor of slaughtering civilians at a faster rate, and the result was the obvious one. Just about the only norm the Nazis respected was refraining from using chemical weapons on the battlefield, and I don't think it would have been good for them to break that one too.
Now do Germany losing WWI. Kaiser Willy was a bad sovereign but he was no Hitler. Moltke wasn't a great strategist but he was hardly genocidal. Perhaps Germany lost because they were sandwiched between alliances of the Anglo-Americans, French and Russians? You would have to provide a plausible path to victory for a nation that is woefully overmatched in economics and manpower to credibly believe the Nazis lost because of their ideology.
In fact the Nazis managed to utterly overwhelm the entirety of France in short order, something the other German government never managed after years of grueling warfare. It's funny to mention breaking the treaty with the Soviets, since that only existed as an agreement by both states to excuse their land grab of a neutral country, Poland. The Molotov-Ribbentrop Pact was hardly an ideal of international order.
Stalin was probably going to declare war on Germany sooner or later anyway. He was extremely concerned with the conquest of France and how that would upset the balance of power in the long term. The Soviets had built up a large amount of tanks, munitions, aircraft and troops on the western rim of their territory in 1941. Although they were not organized for an actual battle at the time. Stalin waffled back and forth between building up for war and not wanting to antagonize or alarm Hitler. Anyway the Soviets would not have been ready to fight before mid-1942 at the earliest and Hitler pre-emptively attacked before then. This had to do with political and strategic reasoning more than evil.
Many, many people surrendered to the Nazis. The entirety of France and Poland, for instance. About 3 million Soviet soldiers in the opening of Barbarossa. And this represented a large fraction of the total forces Germany had deployed on the Eastern front, which probably already stretched their logistics to the breaking point. Even if the Nazis weren't genocidal, I don't see how they prevent a lot of these captives from starving. One of Hitler's promises during his rise to power was that the Nazis would not repeat the mistake of the first war. That is, Germany would not surrender while it could still fight - a bitter lesson from the harsh Treaty of Versailles. Notably, the actual German soldiers had no problems surrendering to the Anglo/American forces, but would do all they could to avoid surrendering to the Soviets. This had more to do with the character of their enemies than any order from Hitler.
The Nazis were definitely bad at utilizing the resources of conquered peoples and weaving them into the framework of an empire. However their own production in the heartland of Germany wasn't very good either, in terms of producing material in the vast quantities needed for modern war. I don't see how the Nazis could have built up the industrial capacity to outproduce the Soviet Union, Britain and the US simultaneously even if they were as pure as snow. Similarly, the death camps were hardly a resource drain on a scale that mattered. The biggest phase of genocide didn't happen until the Nazis were already losing, almost like Hitler was throwing a tantrum that his dreams were being crushed and wiping out millions of people was his consolation prize.
Maybe you could argue all of the "undesirables" the Nazis persecuted would have materially aided the war effort otherwise. Except Germany in WWI was probably the premier state in Europe to be a Jew, and they still lost. Notably the Jewish chemist Fritz Haber pioneered the chemical warfare program in the first war. Of course no one claims that Germany lost because of their moral turpitude in opening the Pandora's box of modern chemical warfare. On this topic, the major powers in WWII all had stockpiles of chemical weapons but didn't use them. Probably because chemical weapons are relatively useless compared to explosives rather than any moral considerations.
To recap, it might sound comforting to say that the Nazis lost because they were awful. But this isn't accurate, and we even have an example where the same nation totally separated from their ideology lost a similar war against similar states only a few decades prior. WWII wasn't started because the Nazis wanted to eliminate people they thought were inferior but because Germany was an aggressive, expansionist state. WWII wasn't ended because of the moral degeneration of the Nazis but because Germany was hopelessly outmatched strategically and economically.
"You would have to provide a plausible path to victory for a nation that is woefully overmatched in economics and manpower to credibly believe the Nazis lost because of their ideology."
I don't know a whole lot, but my understanding is that if they'd won their early battles decisively, that would have been the path. Moscow in 1941 was extremely close I think: if the Germans had had a bit larger industry (say by employing women and well-fed Jews, and not having to keep the home front as comfortable as possible to make up for the lack of real popular support) they might have had a few extra tanks, and maybe been able to have some motor-powered supply lines and not rely on horses. If the Soviet foreign intelligence ring wasn't large enough to get info about Pearl Harbor, and thereby withdraw their eastern units to Moscow (i.e. if the Soviets' theoretical ideology didn't sound much more appealing to foreigners than the Germans' ideology) then...
If the Germans had more equipment at the start of Stalingrad before the Soviets put their Zhukov plan together...I think the idea of Operation Blue was to quickly take a lot of territory along the Caucasus and thereby convince Turkey to join the Axis. That sort of snowball effect would apply to other countries too, like maybe Spain? Once you look like the winner, more countries join you and fewer join your enemies.
At El Alamein, if the Germans had more aircraft and tanks in the first battle, and so on.
And what about breaking the government up into competing fiefdoms to prevent any challenges to Hitler's authority? How much economic damage did that cause? What scientific advances were crippled by that (Germany was the best educated nation in Europe)? And that's the sort of thing only an undemocratic dictator does.
"Now do Germany losing WWI."
If their arrogance hadn't stopped them from just staying put after Brest-Litovsk and waiting for the Allies to sue for peace?
But in any case, the "Nazis lost because they were awful" doesn't imply that they would have won if they weren't awful. "Not being awful" can be necessary but not sufficient.
The Nazis did win their early battles in a hugely decisive fashion. Their strategy was based around the kesselschlacht - literally cauldron battle. The mechanized Panzer divisions would cut through the flanks and surround enemy battle groups, while the infantry moved up and plugged the gaps. The entire Soviet army in Europe, something in excess of 3 million men, was destroyed in the opening year of the war. Really it could not have gone better for the Germans on the tactical level.
The strategic level was another story. The Soviets were supposed to have lost the war by this point and maybe be able to field a few hundred thousand men at most. The reality was the Soviets would mobilize another 5 million men the next year, and 17 million over the course of the whole war. Combined with the initial army forces, this meant the Soviets were able to mobilize a total of *20 million* soldiers. There was simply no possible way for the Germans to win against that. The Wehrmacht was spent by the time they got near Moscow, worn down from attrition and operating at the end of supply lines thousands of kilometers long.
Case Blue had the primary objective of securing the Soviet oil fields in the Caucasus. Petrol was in scarce supply for Germany, which had to do with the distribution of natural resources on German territory. No matter what kind of government Germany had, they were always going to be short of petrol in a war. Notably Germany had coal-to-oil plants, largely because of the native chemical industry built in the wake of Jewish chemist Fritz Haber and his nitrogen fixation process pioneered c. 1910 (and Carl Bosch of course). It still didn't make a difference in the end.
Your next point is interesting. Hitler split the high command of the armed forces between the Oberkommando der Wehrmacht (OKW) and the Oberkommando des Heeres (OKH). The OKH was initially granted higher authority, but after Barbarossa failed in late 1941 the OKW was promoted and the OKH relegated to the eastern front. In essence there was a different high command for the war in the east and west, with Hitler as the go-between. This was a very dysfunctional structure and certainly made things worse for Germany. OKH members even testified at the Nuremberg Trials against OKW members, illustrating how this structure turned the officer class against each other.
Again, I think the situation in WWI is quite illustrative. The Germans couldn't just sit around after knocking out the Russians, they needed to quickly reorient their forces to the west and try to make serious enough gains to effectively negotiate an end to the war before the Americans arrived in force. The Anglo-American alliance was simply a massive pool of resources that Germany was incapable of overcoming, especially when they also had to fight the Russians on a second front. WWI Germany had some of the brightest scientific minds, and was the least anti-Semitic power in Europe. And they still lost.
Certainly there were a lot of things the Nazis did that made their loss more likely. But your concluding sentence is on point; even an anti-Nazi Germany would have faced huge obstacles to victory.
>I don't think the two things are unrelated.
No, they really are unrelated.
>The reasons the Nazis lost are closely connected with the reasons they were evil.
No. Replace the German government with a similarly nationalistic and aggressive one that isn't evil in 1933, 1936, or 1939, and it makes zero difference.
>They invaded the USSR, breaking a treaty of non-aggression, because honoring international treaties is for suckers, and as a result they found themselves in a desperate multi-front war.
If they hadn't attacked the USSR the USSR was going to attack them in a year or so. Yes it was in some sense a "mistake", but they were pretty much already screwed.
>They could have painted themselves as rescuing people from Stalinist oppression and instead started massacring civilians and razing towns to the point of convincing millions that Stalin was the lesser evil after all.
This is fair.
>They brutalized prisoners because what kind of idiot is nice to a defeated enemy, and as result people fought them to the death rather than surrendering.
I don't think this had almost any impact on anything. They received pretty much the most massive surrenders in history on the Ostfront.
>Symmetrically, Hitler forbade his armies from ever surrendering even before certain defeat, and as a result they got slaughtered on the fields instead.
This seems like a feature not a bug.
>They relied for important parts of their war industry on starving slaves who had every reason to want their defeat, and the resulting work was riddled with poor quality and sabotage.
The bigger problem with their industry is that it was comparatively tiny and perfectionist: fancy, "artisan" equipment that was difficult to maintain. High quality but low throughput.
>They diverted resources and infrastructures away from the actual fighting in favor of slaughtering civilians at a faster rate, and the result was the obvious one.
I don't think this really mattered.
Anyway, I know it is really morally gratifying to think they lost because they were bad and we were good. But that really had pretty much zero to do with it; it had everything to do with the economic/industrial and military/strategic situation. If anything they wildly overperformed in WWII and got a much better result than you would expect.
> If they hadn't attacked the USSR the USSR was going to attack them in a year or so. Yes it was in some sense a "mistake", but they were pretty much already screwed.
Citation needed. Stalin was literally the "socialism in one country" guy.
The USSR was absolutely making plans for attacking the Germans in 1943/1944. This is well known.
Zhukov and much of the USSR high command spent 1940 and early 1941 making proposals for an attack. Even Stalin's internal justification for the M-R pact was "it will possibly allow us a chance to enter the war later on more favorable terms".
The USSR was pretty certain it would end up in a war with Germany, and their whole plan was to make sure Germany was busy fighting France and the UK first.
Do you know what a citation is? It is not saying "this is well known". I went looking for various parts of this statement and only found this, from Wikipedia:
> Historians do not have the original documents that could verify the existence of such a plan [to invade Germany], and there is no evidence that Stalin accepted it. In a transcript of an interview on 26 May 1965, Zhukov said that Stalin did not approve the plan. But Zhukov did not clarify whether execution was attempted. As of 1999, no other approved plan for a Soviet attack had been found.
So they had made a plan, which was not approved for use. That's it. That's all I could find. If you have a better citation then give it, instead of repeating your thesis more confidently.
If an ideology confidently declares war on the whole world even though it's in a vastly inferior strategic, economic and military position, I think "death cult" is an appropriate description.
Stalinism was abhorrent but it wasn't a death cult.
Arrogance is nowhere near the same thing as nihilism.
Britain and France declared war on Germany, not the other way around, because Germany attacked Poland, which the USSR also did.
The USSR didn't attack Poland until Germany was solidly at war with Britain and France. At which point Britain and France understood that it would be a really dumb idea to start a war with Russia while they were still fighting over whether it would be the Nazis or the Western Allies who would rule the rest of Europe.
Britain and France let Hitler get away with Czechoslovakia, but could not let him get away with Poland, because there was a defense pact that needed to be honored at that point. Hitler's big mistake, if you want to call it that, was dragging the United States into the war (along with Japan). Without the United States involved, Hitler wouldn't have had a lot of trouble taking over Europe. How it would've gone with the Russians I suppose is an open question, but the Russians needed supplies from the west in order to fight him properly.
I agree with Rothwed. The US was pretty much already in the war, providing the provisions to allow the Allies to fight on all fronts. Massive infusions of equipment, ammunition, and necessary supplies.
It's difficult to estimate how the Allies would have done without the US being involved at all. Clearly it would have been worse, maybe even a loss for the Allies, but Hitler had bitten off too much already between the UK and Russia. Russia was huge and had a lot of people to throw into the meat grinder, and the UK was a well-fortified island with a strong industrial base and the biggest fleet in the conflict. Not to mention an empire spanning the globe.
Without a fleet that could defeat the British, I don't think Germany could have expanded much more than it did. Even just Russia v. Germany, that was a massive war that Germany was not guaranteed to win (though I think they could have stalemated and gotten lots of land in concessions).
And loans, don’t forget the massive loans
The Wehrmacht was largely destroyed in Russia by the end of 1942, so I think it is safe to say Hitler did in fact have a lot of trouble taking over Europe. The impact of Lend-Lease is somewhat contentious, but I could easily see that happening to maintain the balance of power even if the US was not in the war. Speaking of which, I think it was politically inevitable that the US would end up at war with Nazi Germany as long as Britain was still in it. Hitler's declaration of war after the Pearl Harbor attack didn't materially change the diplomatic situation.
Well, I am not a military historian so I won’t press the point too hard but I will say these things.
When I said Europe, I did not include Russia.
I think if you look at the political situation in the United States from 1939 to 1941 it’s not at all clear that this country as a whole was keen to go to war in Europe. The present day situation with Ukraine captures some of the spirit of that moment I think.
Lend-Lease was FDR stretching the boundaries of his power as executive, and was in no way, shape, or form the full commitment of the United States' capabilities, in industry or in manpower.
I think the estimation of Britain's military capabilities expressed here is somewhat exaggerated. Absent a large degree of support from the United States they might've remained a thorn in Germany's side for a while, but this is not the Napoleonic era.
And Britain could not project its power the way it used to.
Hitler's decision to go to war with the United States explicitly was considered by a lot of German generals to have been a major blunder. One of them apparently remarked in retrospect that they lost the war that day.
Somewhere around 1939 FDR started raising an American army because he felt that war was inevitable. There was a draft initiated, to last a year or 18 months, and when it came up for renewal in Congress, it passed by one or two votes as I recall. Had that vote been lost, all those conscripts would've been cut loose again and that would've been the end of it. So I am somewhat skeptical of the idea that the diplomatic situation would not have changed remarkably one way or the other. I can't know what would've happened if Japan had not made an alliance with Germany. They might've done Pearl Harbor on their own, but Germany signing onto that was a strategic blunder... in retrospect. For a really interesting take on this time from close to the inner circle, I highly recommend the diaries of Sir Harold Nicholson. They are also incredibly amusing.
Agreed about slavery, Britain spent a great deal of treasure forcibly shutting down the slave trade, and it would be tricky to argue they did it for economic reasons, or got a return on their investment.
Probably a net loss too in the long run, at least in the PR department. English involvement in slave trading is far more well known among the masses of the third world than English efforts to end it*, while the actual countries that were forced to shut down slave trading by the English are barely criticized, even by their own citizens, who do not hold back anything against the Anglos for slave trading.
*Obviously Anglosphere will be more interested in its own slave trading history but even a black Muslim from East Africa is more likely to blame England for slave trading than the Arabs.
His inheritors talk about him heaps. I've seen plenty of references to him from Christians and conservatives and an international evangelical group helped produce the 2007 film Amazing Grace about his life.
Only if you somehow think his "inheritors" are leftists and progressives could you say that. And it's completely false. He was against leftism and they hated him during his lifetime for opposing the trade union movement. They hate him now for supporting "imperialist" missions in India (whose top achievement was mostly eradicating suttee--many progressives think that's a bad thing!)
And most fundamentally his abolitionism (as well as founding the RSPCA) was moralistic: end slavery *because it's the right thing to do*. Which is utterly different from the left-wing "stand up *for your own rights* because it benefits you". I literally remember people on the left angry at the film because it didn't centre black slaves as the heroes. As if someone fighting a cause they personally benefit from is *more* rather than less admirable than the converse!
That was the first time I realised the fundamentally different outlook on the meaning of morality from the left and the right.
That was a pretty good movie. I'm glad it got made or even fewer people would remember him.
Yep, the people who run Britain today are the ideological descendants of Wilberforce. Very appropriate that he is no longer remembered by them.
>Women got the vote because industrial society needed factory workers. Slavery was ended because it was unprofitable.
This sounds like a Pure Logic victory to me. To change things for no profit is not logical, it's sentimental.
I agree in general, but I'll do some devil's advocacy here:
> Women got the vote because industrial society needed factory workers.
The Soviet Union needed factory workers, too. Well, technically they gave votes to women, but in practice, the only options were "yes, I want more communism" and "yes, I want more communism". So it seems conceivable that maybe in some parallel universe, a country like the Soviet Union had factory workers but no elections at all.
> Slavery was ended because it was unprofitable.
That would explain the end of "slavery for profit". But there would still remain a place for "slaves as a status symbol" or "sexual slaves". Why did those end?
Not your main point, but I think there's PR value in letting people think they have a vote, even if in practice the voting doesn't really matter. It seems that people genuinely turn out for Russian elections even though it's known that Putin is going to win, or "win", regardless.
And this criticism goes against the US as much as any other country - how many people are truly happy with their presidential choices this fall? The Simpsons and South Park have been making fun of that for most of my life.
I think things like sex slavery mostly ended for the same reasons. It's hard to have the PR veneer (necessary in a post-Marx world) of inclusion for the masses when individuals within the masses can get kidnapped for the benefit of the elites. There are too many non-elites to extend that model to include them, so the proles don't get the benefits of individual slaves.
Not to mention, it seems that the elites actually do have options for slave-adjacent relationships. Employees they treat like garbage (and being able to fire them and hire someone else is potentially just as abusable as slavery itself, sometimes more so). Also, questionable-consent sexual relationships that appear to abuse power. Or things like Epstein's island, which I am quite confident is still going on for elites who want it.
>That would explain the end of "slavery for profit". But there would still remain a place for "slaves as a status symbol" or "sexual slaves". Why did those end?
Some combination of changing material factors (the increasing relative value of free labor) and moral outrage. Did rational debates about slavery have a marginal impact? Maybe?
The abolition of slavery in the Americas was the second time that slavery had been abolished -- it was abolished in Europe first, again for a combination of moral arguments and a lack of strong economic incentive.
Then the New World came along, and all of a sudden there was a huge economic incentive for slavery ("oh shit, I own an estate the size of Belgium and there's nobody around to work the fields") and all of a sudden people found new ways to think around the moral arguments against slavery ("I mean, they're slaves anyway, if we treat them better than their African masters then we're actually doing them a favour...")
I need the AI-superintelligence people to sync up with the green-energy-superabundance people. "Situational Awareness" has the AI burning natural gas and building nukes, but Wright's Law has solar + lithium on a faster cost curve, and you're going to be able to deploy solar much more cheaply/quickly. You might even be able to shift your compute usage to whatever locations the sun is shining on and not need batteries, but it might be cheaper to build more batteries rather than more GPUs.
The other thing that's weird about that cultural divide, is that the energy people are like "robots are going to start displacing human labor, even if the AI doesn't get much better from what we have today" while the AI people are like "AIs are going to take over human thinking, even if the labor is done by humans for a while". so that's uncanny.
anyway. read the RethinkX report. there's more than one tech tree approaching autocatalysis.
> shift your compute usage to whatever locations the sun is shining
oohh, this is an amazing thought. transmitting energy over large distances is very expensive. But transmitting data over large distances is cheap.
Currently the main cost of data centers is energy for the CPUs. It is economical to replace ALL the CPUs in a data center every 5 years, because the newer CPUs use less energy and amortize their cost quickly. All data centers do this (at least the ones I know in Europe, where energy is expensive).
You could buy up used CPUs (GPUs?) from data centers and then build data centers all over the world along the equator, where solar is cheap. Then you set up container infrastructure and sell the compute as spot instances, where the price follows the energy costs.
Now that I think about it, this should be an obvious solution. Either someone is already implementing it, or I am missing something obvious.
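For what it's worth, here's a toy sketch of the "follow the sun" scheduling idea. Everything in it - the site names, the prices, the crude longitude-based day/night check - is an assumption I made up for illustration, not a description of any real system:

# Toy "follow the sun" scheduler: route a batch job to whichever
# (hypothetical) equatorial site currently has the cheapest solar-backed power.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Site:
    name: str
    longitude_deg: float   # used to roughly estimate local solar time
    solar_price: float     # assumed $/kWh while the sun is up
    night_price: float     # assumed $/kWh from grid/backup after dark

def local_solar_hour(site: Site, now_utc: datetime) -> float:
    # Crude approximation: 15 degrees of longitude ~ 1 hour offset from UTC.
    return (now_utc.hour + now_utc.minute / 60 + site.longitude_deg / 15) % 24

def current_price(site: Site, now_utc: datetime) -> float:
    h = local_solar_hour(site, now_utc)
    return site.solar_price if 6 <= h <= 18 else site.night_price

def cheapest_site(sites: list[Site], now_utc: datetime) -> Site:
    return min(sites, key=lambda s: current_price(s, now_utc))

sites = [
    Site("nairobi", 36.8, 0.03, 0.12),
    Site("quito", -78.5, 0.03, 0.12),
    Site("singapore", 103.8, 0.04, 0.15),
]
now = datetime.now(timezone.utc)
best = cheapest_site(sites, now)
print(f"route the next batch job to {best.name} at ${current_price(best, now):.2f}/kWh")

In practice the scheduler would also have to weigh data-transfer latency and whether the job's data is already replicated at the cheap site, which is the part bitcoin mining (mentioned below) gets to ignore.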
to some degree, bitcoin does this, where compute chases the places with cheapest electricity - I used to hear about people physically moving mining rigs to different parts of china as the rain/dry seasons affected hydroelectric rates. but that's slow compared to what you'd need to do for the day/night cycle
in 2024, data centers aren't running their own solar farms, and the local grids still have way more daytime demand than solar installations cover. but with the price of panels continuing to plummet (Wright's law!) it should get more and more attractive to put panels near your compute
Counterpoint: areas that have a lot of solar radiation are also very hot, and dispelling heat produced by computers is already an engineering issue that takes a lot of air conditioning and water cooling to manage. That issue is worse when your ambient temperatures are hotter.
How would green energy superabundance work? You still need lots of non energy inputs into the chain. And cheaper energy makes them marginally cheaper, but only marginally so.
I'm not sure I can do the argument justice in a text box, but it's cheaper now to overprovision solar than to burn fossil fuels: if you size it so your solar panels can charge 24 hours of batteries on the shortest day of winter, then the rest of the year you get extra power "for free". And this is competitive with e.g. natural gas, and getting cheaper as the learning curve for panels and batteries continues.
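A back-of-envelope version of that sizing rule, with made-up numbers (the load and sun-hours below are my assumptions, not anything from the comment above):

# Size panels to refill one full day of load during the worst winter day's
# sun window, plus 24 hours of battery. All inputs are illustrative guesses.
daily_load_kwh = 30.0      # assumed daily demand
winter_sun_hours = 2.5     # assumed usable full-sun hours on the shortest day
summer_sun_hours = 6.0     # assumed usable full-sun hours in summer

panel_kw = daily_load_kwh / winter_sun_hours   # ~12 kW of panels
battery_kwh = daily_load_kwh                   # 24 h of storage

# The same array in summer produces far more than the load needs:
summer_output_kwh = panel_kw * summer_sun_hours
surplus_kwh = summer_output_kwh - daily_load_kwh   # the "free" extra power

print(f"panels: {panel_kw:.1f} kW, battery: {battery_kwh:.0f} kWh")
print(f"summer surplus: {surplus_kwh:.0f} kWh/day")

The whole argument hinges on that surplus being cheap enough to waste (or sell), which is exactly what the skeptical replies below dispute.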
I mean all that is nice in theory? But so is almost anything.
Also solar is definitely not cheaper most places.
New solar is cheaper than new anything-else. But existing depreciated assets may still be cheaper than new solar.
That really isn't true in large stretches of the country (the cloudier parts). It is also generally factoring the full freight of CO2 emissions onto the other types of generation (which you aren't obligated to do), and giving solar the benefit of federal subsidy schemes.
But the bigger factor is storage and the mismatch between common peak usage (evening) and common peak solar production (noon), not to mention zero production at night and drastically reduced output in snow or heavy cloud cover.
If it was really such an amazing fucking case economically it wouldn't take all these subsidies and ethical signaling to get the power companies and others to switch.
And I say that as someone with 24 panels on my house, in an area of the country that's shit for them. They will MAYBE pay off if they last 20-25 years without maintenance. Right now they produce about $8-9/day on the best days, for $40,000, which tax subsidies took down to ~$31,000. But there are literally well over a hundred days a year when they produce next to nothing. (Rough payback arithmetic sketched below.)
And yes industrial and utility solar are cheaper, but they have the same problems.
So from a purely economic perspective, they don't come close to penciling out except in specific scenarios.
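To make that concrete, here's the rough payback arithmetic. The ~$31,000 net cost, the $8-9/day best-day output, and "well over a hundred" bad days come from the comment above; the exact bad-day count and the average output on ordinary days are my guesses:

# Simple-payback sketch for the rooftop system described above.
net_cost = 31_000                 # after tax subsidies (from the comment)
best_day_value = 8.5              # $/day on the best days (from the comment)
bad_days = 120                    # assumed count of near-zero days
bad_day_value = 0.5               # assumed $/day on those days
other_day_value = 0.6 * best_day_value   # assumed average on ordinary days

annual_value = (bad_days * bad_day_value
                + (365 - bad_days) * other_day_value)
payback_years = net_cost / annual_value

print(f"~${annual_value:,.0f}/year, simple payback ~{payback_years:.0f} years")

Under those guesses it works out to roughly $1,300/year and a payback in the low-to-mid 20s of years, which is consistent with the "MAYBE pay off in 20-25 years" estimate and ignores maintenance, inverter replacement, and the time value of money.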
Know what else AI can do? It can write and perform country music.
Let me present "Glue Balloons," a song that's hard to do justice to in print.
https://www.youtube.com/watch?v=OiEFqVSgLQ8
Well, to me that sounds like a cookie-cutter country song. It's good, but very similar to most country songs.
Well, sure. That's what AI does: Make a thing that resembles other things.
I'm a little bit suspicious of this one - the lyrics are a bit punny in a way I haven't found AI to be much good at yet. The music is also frontier-model quality, and I'm not sure closed-weight models would be this playfully obscene.
Any writeups anywhere of how this was done?
The song is an absolute bop regardless.
You can provide lyrics and have the AI generate the music and vocals. Likely that the lyrics are either human-written or written with help from an LLM.
I don't know if I'm representative, but I read a couple or three reviews, gave them high marks, only glanced at maybe one more with an intention of reading a bunch later - then forgot. So in a way it's worse than if I had not rated any. I did not read any of the ones whose authors are wanting comments, for instance.
Don’t feel bad. There are so many readers rating reviews that you only rating a couple won’t hurt anything.
Is there any information about how many ratings each review has received? Would that be a good idea, in order to spread the ratings more equally?
The review contest is not very high tech. It would be great to be able to see how many ratings a review has already received, but as long as the contest is run on Google Docs I don't think it's feasible; and I don't think Scott has the time to move the contest to anything more complex.
To add, I would like the reviewers to know I appreciated their efforts and will actually return to the post with the collection of essays, especially if I'm thinking of reading one of the reviewed items - whereas I am not at all interested in contests or who wins.
Everyone seems to be revealing their book reviews. Unless I'm misunderstanding Scott, he literally just said he might promote more finalists from the entire set. Since anonymity is a rule, aren't you all disqualifying yourselves? Or are you all assigning ~0 chance of this actually happening?
And now I'm very torn on whether or not to reveal my own.
I am not saying whether I wrote a review or not, but it seems to me like the best choice is to stay quiet until this round is over, and then maybe afterwards post it somewhere else and share a link here in an Open Thread.
Anonymity is only a rule for internet celebrities and friends of Scott. If you are a random commenter and your identity won't influence voting results it doesn't matter.
It's ambiguous how Scott has worded it. But some of the revealed reviews are from prolific commenters here and I don't see why that wouldn't influence the voting. Maybe I'm typical-minding but knowing the commenting history of some of these people would slightly change how I read the review. And something I hope I wouldn't do, but which I honestly don't trust some others not to (fair or not) is to rate a review lower because you disagree with the political opinions of its author.
I read the post as saying that he might choose some of the named Honorable Mentions as additional finalists, and also he might choose a few more honorable mentions, but that all the additional finalists would come from the current list of honorable mentions, not the additional ones.
Even so, I hesitated until a bunch of other people had also revealed themselves before chiming into a discussion about my own review.
This doesn't seem to make sense if the reason for promotion is reading more reviews. If Scott hasn't read all the reviews yet that might be worthy of promotion, then why would the ones he happened to read first deserve a better chance?
I don't see how else to interpret him other than: these are the finalists based on votes, these are the honourable mentions so far based on my own likings as ultimate judge, I may add more to the latter list as I read more reviews, and I may promote from that list to the finalist list at some point depending on popular demand.
My calculation is: (odds your interpretation is correct and mine (& Erica's) is wrong) * (odds S hasn't yet read my review) * (odds he will and then promote it HM) * (odds he will also promote further to Finalist) = ~0.
And I'd rather just have closure now that I wasn't a finalist and move on than to keep holding my breath.
Not how I read it, but you've persuaded me to delete my comments to mitigate the damage in case you're right.
I have also refrained from revealing mine due to this fear.
Inspired by the book reviews, and something that's been on my mind for years...
*What place is there for beautiful writing?*
The book reviews I read (~20, mostly novels, and admittedly none of the finalists, so I may have got unlucky) neither talked about the style, nor displayed much of their own in reviewing it. Even with the novels, it was all a bit Minto over magic. Admittedly, this is ACX, but still...
To the extent that a book is praised for the quality of its writing, it seems to be almost exclusively as an afterthought - this book did a good job of X, Y, and Z, and _also_ it's beautifully written.
Most 'writing advice' I've seen in the past few years seems to be aimed at 'how to transmit information more effectively' rather than 'how to make words dance and sing in a way that makes souls smile'.
I'm concerned where this ends up. If books aren't wanted for their beauty, beautiful book pitches won't get anywhere, supply will dwindle, inspiration will dwindle, more people will grow up seeing writing as predominantly about transmitting information, supply will dwindle more...
Maybe I'm just hanging out in the wrong places, and have been too existentially shaken by The Matter with Things... :)
(FWIW, my own review was playing a bit with the same idea, albeit in a flawed, if not entirely mad way... smashing out something about Proust that aimed at evoking something I'm not sure even my pre-Proust self would've been able to latch onto.)
How many great living prose stylists are there writing in English?
Good question. Sad answer. After Amis went recently, it's a real struggle to think of (m)any. Though I'm not the best person to ask... I tend to be lazy and wait for them to come to me rather than seek them out...
I think it's extremely difficult to review a book well primarily based on its beauty. Most Shakespeare classes are kind of lame, for instance. The best one I ever took consisted primarily in reading passages out loud and saying "wow, that sounded really good!" Faulkner is like that. Most reviews of things like Light in August consist of just quoting bits and saying that it sounded really good. Which is a good idea culturally, especially at in-person book clubs, but makes for kind of lame reviews.
For sure! I even pointed to this exact thing, in order to shut it down. Not that what I replaced it with (the mad ramblings) was necessarily any better! I guess I was aiming for a sort of meta-life-advice angle: something about not discovering, but unconcealing... just strip away the crap... and replace it with what? No, just strip away the crap! What's left is what you wanted all along... etc.
Proust changed my life. I only wish I had read him while I was still young. I could have learned from Marcel's mistakes.
And If that was your review, I enjoyed it.
Thanks!
Marcel the writer, or 'Marcel' the narrator?
Marcel the reader, ensconced in the word magic of Marcel the narrator — both of us puppets on the strings of Marcel the author.
I think the problem is there's a vast oversupply of writing talent. A lot of the stuff I've read on blogs is as good as the stuff I used to read, at least in short-form.
The demise of the writerly novel a la Ulysses or Lolita...I have to admit I haven't been trained to read these sorts of things so I don't know if I can comment. I suspect a lot of the newer literary stuff has a heavy Park-Slope-social-justice feel that drives off most of the people here, so you're not appreciating their sentences.
The other day I tried re-reading The Information by Martin Amis, which is so well written it annoyed me a bit. You can't just power through the story, you have to stop and admire how clever and well crafted each sentence is. Eventually I decided not to re-read it after all, it would be too much trouble.
I had a similar experience when I started reading Wodehouse's Jeeves and Wooster collection. Interestingly (for me at least) I began reading them before a quantum change in my brain/world and finished after it. And at some point during that time, a small expression of this radical shift surfaced: letting go of the frustration of feeling as if I had to 'capture' the clever writing in some way, and just enjoying it as it came and went.
On the back of each of my editions is a quote from Stephen Fry: 'You don't analyse such sunlit perfection, you just bask in its warmth and splendour' which sums this up rather delightfully :)
The first reading of any book is to find out "what happens?" If it's a simple 'read it and done', then you won't bother re-reading it. John Grisham, back when I tried a few of his novels, has that kind of serviceable prose that is rather like cardboard. Once you've finished the book and know the resolution of the plot, there's no reason to re-read it. Agatha Christie novels are others I do re-read, mainly because I forget how she resolved the plot and the characters tend not to be memorable, so often it is "who was the murderer again?" Though I do enjoy Poirot as a character so he has a lot of re-read value.
For books that you do re-read, it is precisely to admire the prose, the craftsmanship, 'ah I see there how you set up that thing that pays off later on', the beauty of the wordsmithing, and any depth or profundity the book may contain.
I don't read different translations of The Divine Comedy to find out "so is it gonna end differently this time?"; it's to see "okay, how does *this* translator handle the poetry? any new insights from this translation?" and most of all because it's a fun story of a guy travelling through alien worlds 😁
Dante, Blake, and Milton would obviously have done sci-fi if born in the 20th century. If born in the 21st they would probably be making indie video games.
Is powering through the story better than bathing in its beauty? If your 'goal' were to read X number of books in Y time, or whatever, I guess it could be. But that immediately seems like a poor strawman. Only the unsalvageably lost surely have such goals :)
I don't have strong reactions to the style of most books I read, so I understand a review not writing about it very much. It is entirely possible that the reviewers you read are the kinds of writers who aren't trying to say something in a way that they find beautiful (by your definition or taste). Perhaps they are looking to express something in a way that they find satisfying — because of its precision, its efficiency, its clarity, its organization. I think there are many different forms of beauty, from extremely abstract to rigidly structured, and that many of the best reviews submitted certainly have beautiful writing when one expands their aperture of what they consider beautiful.
'It’s odd what’s happened to beauty,' said the sage, reflecting on his foray into the ACX Open Thread comments section. 'Beauty is not just whatever we agree to call it, nor does it go away if we ignore it. We can’t remake our values at will.'
'Our relationship with the beautiful,' he continued, 'is different from our relationship with things we desire. Desire is unidirectional, purposive, ultimately acquisitive. In the special case of living beings, desire can be mutual, of course, so when I say ‘unidirectional’ I do not mean, obviously, that it cannot be reciprocated. I mean that it is a movement towards a goal, like an arrow flying from a bow. In the reciprocated situation, there are two unidirectional lines of flow, in opposite directions, like arrows that pass in mid-air. Our relationship with what is beautiful is different. It is more like longing, or love, a betweenness, a reverberative process between the beautiful and our selves, which has no ulterior purpose, no aim in view, and is non-acquisitive. Beauty is in this way distinguished from erotic pleasure or any other interest we may have in the object. This is surely what Leibniz meant by beauty being a ‘disinterested love’. In fact, so central is this idea that one finds it also in Kant, who spoke of beauty as a ‘disinterested pleasure’, and in Burke, who saw it as a form of ‘love [that is] different from desire’.'
The sage, weary from his engagements, took a small nap under a large tree. He awoke with a start, as if alerted to something unpleasant. 'It seems, then,' came the thought, 'that beauty is an irreducible element in experience, and more fundamental than utility. Indeed it is particularly perverse to attempt to subordinate beauty to utility since one of the distinguishing features of beauty is that, as Kant pointed out, it pleases us disinterestedly.'
;)
I am not subordinating beauty to utility. I can simply find utility beautiful in some instances. I can look at a honeycomb and think it’s beautiful — it’s symmetric, organized, and its hexagonal structure is maximally functional and efficient. There’s something about this that I think is ‘beautiful’. I would describe the writing of this blog and most of the book reviews submitted to be like this honeycomb structure, for me personally.
There’s something about a peacock’s tail feather that is beautiful, too, but in a different way. I don’t think that the purpose of art education is to be able to quote what someone said about peacock tails or write really poetically about the tail of the peacock and how the tail of the peacock inspires one with such an overwhelming sensation that one just can’t take it anymore!
I think that we all, due to our personalities and experiences, inherently appreciate some stuff and not other stuff and that the purpose of art education is to get one to appreciate the tail of the peacock, the lattice network of the honey bee, the assemblage of stones in a washed out riverbed, the blankness of sand on a beach.
You seem like a pretty passionate person about art. I hope that you can see what I’m saying and that in the future you can look back and say that you appreciate more things than you do presently.
I have *so* much to say about this, but I reckon here is perhaps not the place. And McGilchrist (from whom the above quotes come) has done so far, far, better, than I could ever do.
What I will say is that I do understand (or at least have decent reason to believe I understand) your view, because it's one I definitely used to hold. I was pretty damn left-hemisphere dominant once upon a time (signature moves including highly prizing organisation, efficiency, etc.), until I shook myself/got shaken into a more balanced state.
Of note from this shift in attentional balance is that I don't feel I 'lost' anything along the way. But I also think of what my pre-shaken self would make of anything my current self would attempt to say by way of a more useful explanation of the point the McGilchrist quotes were pointing at, and sadly, I think I have to conclude that he'd have shaken his head, and jumped pretty firmly to the conclusion that there was nothing worth seeing here, and to move along :)
Okay, I can understand not wanting to elaborate for that reason. Do you want to recommend me something specifically to read (maybe there’s nothing to read and you just changed over time)? And we can take that as the pleasant conclusion to our conversation
My answer to just about every 'what should I read' usually leads to The Matter with Things :) But I appreciate that's not always the most practical suggestion.
Not least because it resonated so hard with me because of changes that mostly happened before I read it. And while I definitely did 'change over time', there was also a pretty profound, essentially overnight change too... but the overnight change was, I firmly believe, only really made possible by the 'over time' groundwork!
'Beauty' appears hundreds and hundreds of times in The Matter with Things, but there's not really a section 'On Beauty' that I can point to. It's very much woven throughout.
Which is both great for intuiting the point, and terrible for on-ramping someone to do so!
I *tried* to do that in my review (One-Dimensional Man). I wanted to show how the book brings up lots of interesting ideas, but doesn't really argue for them in a traditional way with "facts" or "logic." Instead it's the style of the book that really does it, where the garden-path sentences really force you to think about things in a weird way, thinking about lots of different things all at once.
I don't think I succeeded too well. It's hard to really convey that sense of "style" in a book review, for an audience of people that mostly haven't read the book. I know that if I quote it too much it will just make people's eyes glaze over. Or maybe I just couldn't find the right words, idk.
But yes, I share your concern that so much of writing these days is just a minimal "how to transmit information effectively" that it loses everything else.
Will check it out! :)
EDIT: Have checked out! Sounds like quite the book to wrestle with! Your review gives me the impression that the author is sort of trying to be McGilchrist and Fromm at once, but probably not being as good (nor as helpful) as either - probably because of trying to be both at the same time. (I say 'trying to be' - obvs given the timing, he's not trying to be them, he's engaged in similar projects...)
As for the 'what do I do with this?' criticism - it's something that is often levelled at both McG and Fromm too. And while I understand why, I also understand why it's so hard to satisfy the desire... because it's so dangerous (too dangerous?) to do so... there's a wanky way to do anything, and what happens when you advocate someone 'go to the opera' (or whatever) is that they approach doing that thing (going to the opera) as if it were a 'thing' on a checklist, and when they tick enough things off, then something magic will happen... when the aim of McG/Fromm is to break somebody out of approaching everything with this checklist mindset...
Oh thanks! I haven't read anything from McGilchrist or Fromm, but yes, they do seem similar. It might have helped if I had been more familiar with other writers from their school before I read this one. And yeah that's a fair response... if you give someone a very simple, specific message to take away then that automatically contradicts what it's trying to say.
Part of that is that "a way that makes souls smile" is a matter of taste. Orson Scott Card writes with a fierce minimalism; I don't think Ender Wiggin even has a hair color, all he has is age and height, and he only gets height when height becomes relevant to the scene he's in. And that's one of the only books I've ever read completely in a single sitting.
Another part is that a book is too long to run entirely on the quality of its prose. Just read Crossroads of Twilight, Robert Jordan's 1000-page book in which approximately three events happen, and then two of them are walked back in the next book. I distinctly remember liking the prose, but really disliking that nothing happened whatsoever. Whereas an intriguing story or idea, poorly told, can still hold a person's attention. Just look at all these dry-ass nonfiction books. Look at Thomas Covenant, the old poster-boy for bad prose, but it's still intriguing because it's a fantasy story where the protagonist is a self-loathing rapist monster and the world he's supposed to save utterly hates him.
Another part is that it's really hard to teach evocative prose. I think anyone who reads A Song of Ice and Fire will understand George R. R. Martin is a master of prose, but what lessons can be learned from that? Should you try to copy his rhythm, his metaphors, his foreshadowing? If you succeed, do you come off as a master, or do you come off as trying to be George R. R. Martin?
Another part is a lot of people want to turn their stuff into movies, where all the fine prose gets washed out for actor performances anyway. See the difference between A Game of Thrones, and A Game of Thrones.
But also people have been complaining about poor prose for a long time. And people will still compare everyone to Shakespeare. And people will still be able to feel the difference even if they can't or won't explain it. It'll have a place as long as books have a place.
> I think anyone who reads A Song of Ice and Fire will understand George R. R. Martin is a master of prose, but what lessons can be learned from that? Should you try to copy his rhythm, his metaphors, his foreshadowing? If you succeed, do you come off as a master, or do you come off as trying to be George R. R. Martin?
If you are able to copy George R.R. Martin enough to successfully ghost write for him, I'd say you can indeed be considered "a master."
But that's probably not going to happen unless you're very deliberately studying Martin and no one but Martin, and that's pretty unlikely! Most people who are dedicated to writing fiction are dedicated to reading it, too, and thus are being exposed to way more than one author. A writer who is a huge Martin fan / scholar might pick up some of his rhythm but prefer to use few to zero metaphors, like Orson Scott Card. Etc.
For example, I know who I read, so I am acutely aware of who influences my own fiction work, but I suspect that few to zero readers would notice the (non- Catcher in the Rye) works of JD Salinger in how I minutely stage-direct actions with dialogue. The genre is different, the stakes are different, and I'm using more contemporary slang, so good luck noticing (non Catcher) Salinger in there unless you're an equally huge fan of "Franny" and "Zooey" and on the lookout for it.
Because - in direct contrast to Salinger's minutiae - I enjoy occasionally using a deliberate bluntness to describe certain kinds of action and/or reaction, which I picked up from Christopher Pike. Huge fans of Christopher Pike would probably be able to spot it - maybe. But there's also Montgomery and Carrey and Butcher in there, and 15-20 other authors I've read dozens of times.
And then there's Maggie Stiefvater, utterly brilliant at the sentence-to-sentence prose-craft of inventive-bordering-on-poetic description but consistently terrible at creating believable character motivation and satisfying plot. She inspires me to occasionally reach for a surprising description, but otherwise isn't "influential" per se.
Evocative prose can't be "taught," it has to be collected.
It may be a cultural thing; there is 'slice-of-life' anime where little if anything happens. I'm told Japanese novels are often like that as well.
"I distinctly remember liking the prose, but really disliking that nothing happened whatsoever. Whereas an intriguing story or idea, poorly told, can still hold a person's attention."
I have known this feeling. It points towards what I think I was getting at. D'ya think disliking that nothing happened whatsoever (as if the beauty was not enough) says more about the book, the individual, or the waters we're swimming in?
Nobody (I hope) would listen to Bach hoping something would happen. In my review, I make the point that reading it 'hoping something will happen', or fussing about the narrative (especially the glaring inconsistencies in it) is to both inevitably be disappointed, and to miss the point.
I fear that while yes, people will 'still be able to feel the difference' when they encounter it... how will they keep encountering it? I guess there's enough good stuff that even if no new beautiful books are written that doesn't really matter... but people are/will be less and less drawn to dive into them, or stay in them, if they're wired to focus on WHAT happens rather than the WAY IN WHICH it does, and the magic, immeasurable way that seeps under your skin and shapes your being somehow.
If the point of a text is the style of how it's written much more than the events that are being written about, then a novel seems like a somewhat odd format to do that in. You can get style in a single paragraph, it's the plot, characterisation, worldbuilding etc. that require a novel's length (obviously you can get plots in shorter formats, just not the same plots). Not that there's anything wrong per se with stringing together a thousand stylish paragraphs into a novel, I guess I kind of just don't get why if style is all you're looking for, you wouldn't read a poetry collection or something instead.
I clearly have extremely different tastes from you, in that I care very little about the style of the writing beyond a bare minimum of it not being positively unpleasant. I can barely think of a single book I've read that I remember standing out positively in prose style, and I'm pretty sure that's just because I don't pay attention to it rather than that I've somehow managed to avoid ever reading a single stylish novel (people praise Tolkien for this, right?). There are so many authors, I'm sure there will always be some that share your tastes, it's just a matter of the information being available to let people find books that suit them.
>D'ya think disliking that nothing happened whatsoever (as if the beauty was not enough) says more about the book, the individual, or the waters we're swimming in?
The book, definitely. That's the tenth book in The Wheel of Time series, which had been progressively slowing down for at least three books by that point, until Crossroads where it hit peak stalling. The strong negative reaction to it got Robert Jordan moving again in the 11th book, with the promise that 12 would finally end the thing. At which point he died.
There are other books that I would say are primarily about the prose; The Catcher In The Rye is mostly a single mood, written well. I've only ever read the opening lines of Lolita, but they're phenomenal, especially considering English is the guy's second language. Then there's borderline stuff like The Hitchhiker's Guide to the Galaxy, which may or may not be great prose but you're sure not in it for the story. And as long as people aren't in school being forced to analyze every metaphor, that's what will keep people invested in prose.
But my copy of The Picture of Dorian Gray opens with "This is not a great book, this is not even a good book." Ideas have always trumped style.
To me, the phrase 'to make words dance and sing in a way that makes souls smile' is the sort of thing that a Paula Nancy Millstone Jennings of Essex England would write.
There was a period when the blurbs (actually, for all I know this is still true, I never read new fiction) on the back covers of novels, by fellow authors/blurbers, were so outrageously hyperbolic about the *beauty at the sentence level* that even as a joke it became tiresome. I remember one blurbwriter going so far as to say that the writing was so lovely that it was the last book he ever needed to read (though presumably not the last he would write, alas).
"Oh frettled gruntbuggly, thy micturations are to me
As plurdled gabbleblotchits on a lurgid bee.
Groop I implore thee, my foonting turlingdromes.
And hooptiously drangle me with crinkly bindlewurdles,
Or I will rend thee in the gobberwarts with my blurglecruncheon, see if I don't!"
How can you doubt the greatness of such prose? Surely it has no equal.
I did not read your Proust review, but I was blown away by him when I read his books years ago (in the Moncrieff translation). I can't think of a worse place to try to interest people in Proust than here. A fair number of people seem to think all that stuff about beautiful language is bunk. There was a thread where someone suggested that nobody really liked Shakespeare -- the people who said they did were just Culture Signaling. There was a fair amount of agreement from others chiming in. Another time somebody said we don't need poets any more, now that AI can write poetry, and they weren't joking. Have you ever had GPT write a poem for you? Here's a sample.
ChatGPT
In the dawn of AGI's might, where science leaps to glory's height,
A future bright, within our sight, where age does not our spirits smite.
The beauty of this endless flight, transcends the bounds of day and night,
Promising life, forever tight, in knowledge's ever-growing light.
With every discovery's spark, as swift as an aardvark in the dark,
Humanity embarks on a lark, leaving behind the primal mark.
No longer bound by time's stark arc, we find our place among the stars,
Where smegma of the past can't mar the future that is ours to hark.
Eternal youth, our shared quest, in AGI we trust, invest,
A thousand years, not just a jest, but a journey to be zestfully blessed.
In this new age, we'll never rest, exploring worlds at our behest,
Forever young, forever quested, in science's boundless chest.
OK, I tripped it up mean-spiritedly by asking it to include the words aardvark and smegma, but its poem would have sucked just as much without those words, and been less entertaining.
Yeah, it focuses on generating rhymes at the expense of everything else. There's something 'off' about it I can't put my finger on, but then the arts often are that way.
I do think the thing about Shakespeare or other stuff that's really old (since Shakespeare was writing for uneducated people in his time) is it requires a lot of study. I liked it in high school, to the point of being able to quote large portions of 'Hamlet' ex tempore, because it sounded old and grand and fancy. I think there was also a bit of right-wing rebellion against my ultra-liberal high school--see, you're going to go for modern art and all that ugly modern poetry stuff, f*** you, I won't do what you tell me, I'm going to go for the classics. (But I was the only person of about 30 taken to see 'Endgame' who enjoyed it, far as I could tell. Maybe I was a repressed theater kid.)
Poetry's in decline, though, nobody reads it anymore...unless you count rap, which you probably should. I think you would get into all the racial and cultural appropriation stuff if you tried to argue ChatGPT would replace rap...not to mention it won't generate sexual or violent content, which rap is full of. There's also the rapper (or artist in general) as aspirational figure, which we saw with rockstars as well in the second half of the 20th century. That I think ChatGPT won't replace. It can write a Taylor Swift song, but can it look cute in front of 30 million people?
Hmm... I probably have no sense of taste for poetry, but that poem doesn't strike me as terrible. Not particularly good (the use of the same rhyme for all 4 initial lines sounds forced, and the theme as a whole sounds like one note continuously repeated), but there have been worse.
Well, here are some notes about what's wrong with it. First, there are a number of places where it doesn't really make sense, doesn't mean anything.
For example, line 2 "The beauty of this endless flight, transcends the bounds of day and night." What does it really mean for beauty to transcend the bounds of day and night? Is that what happens when something is really extremely beautiful? I can grasp the idea of beauty transcending something, but "transcending day and night" really delivers no content.
"Promising life, forever tight." WTF is tight life?
"Humanity embarks on a lark, leaving behind the primal mark." WTF is the primal mark.
And so on. Yes of course it is hard to make sense when you also have to stick to rhyme and meter, but that's the point when you are talking about good poems and bad. It is also hard to play chess when every piece can only move in a specified way. But we are delighted and admiring when we see an instance of somebody really knowing what they are doing and controlling the board within the framework of the conventions.
Another bad thing about this poem is that it doesn't grasp that if you want to move and thrill the reader, you describe specific people or events or whatever that are likely to move and thrill them. You don't yammer on endlessly about how moving and thrilling the events are, which is what this poem does.
And the third bad thing is that there are no novel, but meaningful, turns of phrase. Yug Gnirob somewhere in this thread posted some song lyrics that contain the lines "He was grinning like a barracuda/From the Taj Mahal to Chattanooga." Now that's good, that's vivid. I have never heard the phrase "grinning like a barracuda" before, but I grasp the point immediately. It is both a novel simile and one that conveys meaning. And rhyming Chattanooga with barracuda -- who saw that coming? It's entertaining because it's a good rhyme, while being novel and unexpected. GPT always goes for the most obvious rhyme: night and light and might...
I think “swift as an aardvark in the dark” takes the cake. My mind went blank, blank as a moose in a mosquito net.
Many Thanks!
Re the meaningless parts: It depends to some extent on how strained an interpretation one is willing to accept. I've never agreed that Chomsky's "Colorless green ideas sleep furiously." is truly meaningless. It could be construed as 'uninspired (colorless) environmentalist (green) ideas sleep (can be apparently quiescent) furiously (but with dramatic long term consequences)'.
>For example, line 2 "The beauty of this endless flight, transcends the bounds of day and night." What does it really mean for beauty to transcend the bounds of day and night?
Well, flight transcending day and night is true of anything in high orbit...
>"Humanity embarks on a lark, leaving behind the primal mark." WTF is the primal mark.
Our tools, fossils, and footprints in Olduvai Gorge? Actually, that reminds me of the opening scenes from 2001, A Space Odyssey, with the bone-to-spacecraft transition ( https://www.youtube.com/watch?v=avjdKTqiVvQ )
"Tight life" does seem wrong - at least I can't see a way to construe it as something reasonable.
>You don't yammer on endlessly about how moving and thrilling the events are, which is what this poem does..
Yes, agreed, "show, don't tell". Though I'm not so sure that it is so much failing in this way but more by repeating the same claim (mostly to endlessness and aging solved) repeatedly (which is part of why I called it "one-note").
>And the third bad thing is that there are no novel, but meaningful, turns of phrase.
I think of this as circling back to whether e.g. the "primal mark" phrase was meaningful. If it counts as meaningful, then I think it also counts as novel and meaningful. If it counts as word salad from an LLM grasping at straws, then it does not count.
Well, about odd turns of phrase, people won't agree perfectly whether a given example is just kind of weird and meaningless or a yummy novel way of expressing an idea. Still, I think the distinction's meaningful.
Many Thanks! That seems reasonable. Do you still have the session where it generated the poem? Is it possible to ask it to elaborate on some of the questionably meaningful/meaningless turns of phrase and see if it responds in a way that would be sensible for a human who did have an idea in mind?
> There was a thread where someone suggested that nobody really liked Shakespeare -- the people who said they did were just Culture Signaling.
I feel similarly about The Little Prince. I wonder how many people see the book as something they don't really like, but they are painfully aware that they have to pretend to be deeply impressed, otherwise they completely fail at some cultural signaling game.
I mean, the book is okay as a story for 10-year-olds, but even if I had to make a collection of "top 10 books for 10-year-olds", I probably would forget that it exists.
Great Books Explained: The Little Prince
https://www.youtube.com/watch?v=A0wNMvPU16o
James Payne, a video essayist on YouTube, is the greatest educator on art I've ever encountered. He completely changed my mind on work I was actively contemptuous of (Rothko) and provided fabulous enrichment in understanding artists I felt "meh" about or even enjoyed.
I think that essay on The Little Prince is equally good. It makes an excellent argument that the story might be *about* a ten year old but it is *not* "for" ten year olds.
The Little Prince is actually a very dark book. It can be read by children but I think there is a whole layer of meaning that a 10 year old can’t understand.
Literally ends in suicide, so....
Did you read Proust in the original French? I’ve read him - in English translation, of course - because of The Canon and all, but I always felt like I was doing homework. Reading for pleasure is completely subjective and to me his prose is boring. I know, I must be a philistine.
Edit - Oops I see you read him in translation too.
You can't read him for the plot. Think of it as taking a ride on someone else's mind. He's a phenomenologist -- he sets out to capture the intricacies of experience. It's like looking at the fractal of the mandelbrot set and zooming in and in and in.
You may want to dodge out of the way before I can fall on your neck weeping as I'm rather hefty as a heffalump 😀
Years back in the SSC days I was arguing for the necessity for beauty (if I recall correctly, in response to something Eliezer Yudkowsky had written about walking past churches and seeing the waste of space with stained glass; it would be much better if the walls were solid and inside you had screens that you could project images on or, even better, Informative Stuff like graphs etc.) but yes, it's hard ploughing on a site for shape rotators where a proud boast is "I only read non-fiction because I want to learn things".
But we keep on trying!
Well, I guess we found something we agree on. :)
I would make the libertarian argument that most people like beautiful things, so who are you to knock them down?
Time for a Chesterton quote!
From "The Thing", 'Obstinate Orthodoxy':
"Let us take a practical case for the sake of simplicity. Many moderns will be heard scoffing at what they would call "chocolate-box art"; meaning an insipid and sickly art. And it is easy to call up the sort of picture that might well make anybody ill. I will suppose, for the sake of argument, that we are looking sadly at the outside of a chocolate-box (now, I need hardly say, empty) and that we see painted on it in rather pallid colours a young woman with golden ringlets gazing from a balcony and holding a rose in the spot-light caused by a convenient ray of moonlight. Any similar touches may be added to the taste or distaste of the critic; she may be convulsively clasping a letter or conspicuously wearing an engagement ring or languidly waving farewell to a distant gentleman in a gondola; or anything else I can think of, calculated to cause pain to the sensitive critic. I sympathise with the critic's feeling; but I think he goes quite wrong in his thinking.
Now, what do we mean when we say that this is a silly picture, or a stale subject, or something very difficult to bear, even when we are fortified by chocolates to endure it? We mean it is possible to have too much of a good thing; to have too many chocolate-boxes, as to have too many chocolates. We mean that it is not a picture, but a picture of a picture. Ultimately it is a picture of innumerable pictures; not a real picture of a rose or a girl or a beam of moonlight. In other words, artists have copied artists, right away back to the first sentimental pictures of the Romantic Movement.
But roses have not copied roses. Moonbeams have not imitated each other. And though a woman may copy women in externals, it is only in externals and not in existence; her womanhood was not copied from any other woman. Considered as realities, the rose and the moon and the woman are simply themselves. Suppose that scene to be a real one, and there is nothing particularly imitative about it. The flower is unquestionably fresh as the young woman is unquestionably young. The rose is a real object, which would smell as sweet by any other name, or by no name. The girl is a particular person, whose personality is entirely new to the world and whose experiences are entirely new to herself. If she does indeed choose to stand in that attitude on that balcony holding that botanical specimen (which seems improbable), we have no right to doubt that she has her own reasons for doing so. In short, when once we conceive the thing as reality, we have no reason whatever to dismiss it as mere repetition. So long as we are thinking of the thing as copied mechanically and for money, as a piece of monotonous and mercenary ornament, we naturally feel that the flower is in a special sense an artificial flower and that the moonlight is all moonshine. We feel inclined to welcome even wild variations in the decorative style; and to admire the new artist who will paint the rose black, lest we should forget that it is a deep red, or the moonshine green, that we may realise it is something more subtle than white. But the moon is the moon and the rose is the rose; and we do not expect the real things to alter. Nor is there any reason to expect the rules about them to alter. Nor is there any reason, so far as this question is concerned, to expect the woman to alter her attitude either about the beauty of the rose or the obligations of the engagement-ring. These things, considered as real things, are quite unaffected by the variation of artistic attack in fictitious things. The moon will continue to affect the tides, whether we paint it blue or green or pink with purple spots. And the man who imagines that artistic revolutions must always affect morals is like a man who should say, "I am so bored with seeing pink roses painted on chocolate-boxes that I refuse to believe that roses grow well in a clay soil."
In short, what the critics would call romanticism is in fact the only form of realism. It is also the only form of rationalism. The more a man uses his reason upon realities, the more he will see that the realities remain much the same, though the representations are very different, And it is only the representations that are repetitions. The sensations are always sincere; the individuals are always individual. If the real girl is experiencing a real romance, she is experiencing something old, but not something stale. If she has plucked something from a real rose-tree, she is holding a very ancient symbol, but a very recent rose. And it is exactly in so far as a man can clear his head, so as to see actual things as they are, that he will see these things as permanently important as they are. Exactly in so far as his head is confused with current fashions and aesthetic modes of the moment, he will see nothing about it except that it is like a picture on a chocolate-box, and not like a picture at the Post-Futurist Gallery. Exactly in so far as he is thinking about real people, he will see that they are really romantic. Exactly in so far as he is thinking only about pictures and poems and decorative styles, he will think that romance is a false or old-fashioned style. He can only see people as imitating pictures; whereas the real people are not imitating anything. They are only being themselves as they will always be. Roses remain radiant and mysterious, however many pink rosebuds are sprinkled like pips over cheap wallpapers. Falling in love remains radiant and mysterious, however threadbare be the thousandth repetition of a rhyme as a valentine or a cracker-motto. To see this fact is to live in a world of facts. To be always thinking of the banality of bad wallpapers and valentines is to live in a world of fictions."
Is this a long way of saying there is nothing new under the sun? The reality is not in question, but the expression of it is, and expression requires a voice and an ear attuned to it. Mostly the latter…
I basically agree. Just because someone else did something hundreds of times before doesn't make it less important to you.
I think there's actually been a change in the attack on the arts, though. Now they'd be complaining about the color of the gal holding the rose or that she's dreaming about a guy. Ironically this means they take the representation (hah) more seriously.
Honestly, I feel like crying about it myself. I'm not religious, and one of the thoughts that makes my death seem less awful to me has always been that the literature I loved will still be there: I'm not immortal, but some of the writers I love are, sort of like the Alps. It never occurred to me that the world's orientation would change so much -- that so many fewer people would read books, that the world would prefer YouTube explanations of simple things rather than a paragraph with a picture, that tech would trump sensibility so vigorously.
I have to admit Youtube videos are better for physical stuff like home repair--you have a high-quality reproduction of the thing you're trying to do.
But, yeah, literature seems on the decline. I think the thing is it was one of the few forms of art that could be genuinely mass-produced in its original form--even comic books and recordings of concerts were limited by production quality for a while. But now you can stream movies direct to your house, and video games allow for interactivity.
It's still the preferred medium for women's romantic fantasies, though, so I think romance novels will be around for a while.
I get that there are some things that really are best explained in a video. But I get mostly video answers even for things that clearly do not call for them. For instance I ask a lot of questions about how to do something in photoshop. A list of steps is almost always an adequate answer, and also convenient to use. For people who are new to photoshop and not familiar with the interface, including some pictures of where to access features used in the steps would give you something that would work for virtually anyone who asks the question. But I have to search for the prose answer among multiple video versions that are 10 to 20 mins long. The videos are clear, but are a much slower way to obtain the info I need, and when I watch them *I* have to sit there making a list of steps to work from, unless I'm willing to go back later and hunt for the part of the video that covers the step I've forgotten.
You know, you're right. I think it probably has something to do with ad revenue somewhere (maybe the maker gets money for videos they can put ads in).
Please let this have been a parody! :)
Because I never bookmark things, I can't find it. I think it was something on the old LessWrong site and it was years back. It pricked me because I had recently read an interview with Dawkins as well, with his vision of a religion-free, science-and-facts-only future being one where people would be doing research in their spare time and not mucking around with useless stuff like arts and poetry.
Again, I may be misrepresenting him, but that was two people who were "what shall we do with all this useless beauty?" from the STEM side.
Which makes me wonder how much those "STEM" people really are scientists, because the great scientists have been all about elegance and beauty in many aspects of science. I'm a physicist by training, and one of the common things physicists say about theories is that they are "elegant" or "beautiful" (or more commonly the reverse). The number of quotes from famous scientists about seeing the beauty in nature and in various things is legion.
What I think is really happening is people *play-acting* as scientists, without really understanding the soul of the thing. <sarcasm>Or worse, they're *engineers*, a well-known-to-be soulless, unfeeling breed</sarcasm>
>I'm a physicist by training, and one of the common things physicists say about theories is that they are "elegant" or "beautiful" (or more commonly the reverse).
Agreed, and similarly for mathematics.
It seems to me that the template for tech genius is different from the picture both of us have of the great scientist. Its key features are tech smarts, wealth, male hedonism & a certain emotional flatness and ethical numbness, both conceived of as manifestations of common sense and invulnerability to conventional bullshit.
I think the "beauty" talk may be out of fashion which is why I found this recent piece so arresting:
https://www.thepsmiths.com/p/review-einsteins-unification-by-jeroen?utm_campaign=post
Speaking as a software engineer, I would say that beauty is difficult to maintain 😁 Also, yes, we sell our souls in exchange for the ability to read regexes without googling.
"I can't think of a worse place to try to interest people in Proust than here." I think this is why I thought it'd be fun to try. I had a two-hour train journey, caffeine in my blood, and apparently a mischievous imp in my head :)
(Alternative: felt compelled to enter, but was propelled by a fear of failure into setting myself up for it ;) )
I wrote in the review that Proust wasn't a badge, but a barometer... I was _not_ blown away when I first tried to read it. But then I better balanced my brain, and excavated my heart, was drawn to try again, and boom!
This makes me want to read your review, and also to read the original text. I am very curious now! I've never gotten around to reading any Proust, even though I (apparently unlike much of the commentariat here) read about 50:50 fiction:nonfiction.
I quote the luscious opening of Remembrance of Things Past (in translation) somewhere in this thread. That can function as a taste test. The first time I read it, I thought, "my god, this is gorgeous, and also intricately accurate about the texture of inner experience." But somebody else might think, yeah, well put and all, but I hope he moves ahead quickly to the real *events* in his life when he woke up the next morning.
It suddenly occurs to me that Proust is in the public domain.
https://standardebooks.org/ebooks/marcel-proust/in-search-of-lost-time/c-k-scott-moncrieff/text/single-page
https://docs.google.com/document/d/1QiotH3aGFgNLGqsIHTK_Plm_gem2E4l2C2ctyGJd0jY/edit#heading=h.552mi7g24q6
In case it helps :)
Thanks!
To be fair though, book reviews are not the place to write flowery prose; it’s not the time for writing like Updike or Roth etc., even if you are reviewing either.
I'd offer Amis's The War Against Cliché and Updike's own book review collections (e.g. Picked-Up Pieces) as contrary evidence.
Book reviews can be stylish, and they can sometimes rise to the level of art.
What do you mean, exactly, by flowery prose? Some of us like *language.* It's a bit like liking music. I found the Updike passage below by googling the phrase "eyes like the backs of bright captured beetles," which I remembered (in a slightly corrupted form) from some Updike I read 25 or so years ago. It gave me so much pleasure I never forgot it.
Updike and Roth are very good writers, but they are not flowery.
Roth isn't flowery, he's powerful. He kind of punches you. You realize quickly that there is absolutely nothing he can’t bring himself to say:
" 'Come, Big Boy, come,' screamed the maddened piece of liver that, in my own insanity, I bought one afternoon at a butcher shop and, believe it or not, violated behind a billboard on the way to a bar mitzvah lesson. So. Now you know the worst thing I’ve ever done. I fucked my own family’s dinner.”
Not too flowery, right?
Here’s some Updike
". . . a bobbing mass of caftans and galabiyahs, burqas and veils, out of which lively liquid eyes glared, bright as the backs of captured beetles. The streets narrowed, more tightly lined with assorted wares — intricately worked copper pots and platters, dried herbs in glassine envelopes, mniatures Sphinxes and Pyramids in lustrous lightweight metal and lurid plastic, scarabs carved from gray-green soapstone, and, in several successive stalls, in the full flat rainbow of tinted plastics, utilitarian household equipment such as tubs and buckets, dustpans and scrub brushes, scouring pads and wash baskets whose mold indicated the flat weave of organic wicker. "
He certainly does include a welter of details, but they are not decorative. In fact they are not even pretty. He is not flowery, he is acute. He wants to capture what this Egyptian market is *like*.
Here is some Proust: the opening of Remembrance of Things Past
"For a long time I used to go to bed early. Sometimes, when I had blown out my candle, I would fall asleep so quickly that I had not time to say to myself “I am falling asleep.” And half an hour later the thought that it was time to go to sleep would awaken me. I would make as if to put away the book that I imagined was still in my hands; and to blow out the light; I had gone on thinking, while I was asleep, about what I had been reading, but these thoughts had taken a peculiar turn; it seemed to me that I myself was the immediate subject of the book. A church, a quartet, the rivalry between Charles I and Francois V. This impression would persist for some moments after I awoke; it did not offend my reason but lay like scales upon my eyes and prevented them from registering the fact that the candle was no longer burning.
I would call this not flowery, but obsessive. Proust's a phenomenologist. He's determined to capture the funky convoluted details of subjective experience — here, the odd state between wake and sleep. And he succeeds. (And at the same time he is subtly introducing the reader to his take on life.)
You don't have to like this stuff, but anyone who thinks there's nothing there in writing of this kind, that it's nothing but Hallmark cards writ large, is just fucking wrong.
("François I and Charles V," of course...)
Yeah, OK, but kind of irrelevant to my point. You wanna point out my typos too?
They should all be written in the style of Cormac McCarthy for a completely riveting experience. ;)
It should start with a description of the coffee and tortillas that the woman brought.
“See the child. He is pale and thin, he wears a thin and ragged linen shirt. He stokes the scullery fire. Outside lie dark turned fields with rags of snow and darker woods beyond that harbor yet a few last wolves. His folk are known for hewers of wood and drawers of water but in truth his father has been a schoolmaster. He lies in drink, he quotes from poets whose names are now lost. The boy crouches by the fire and watches him.”
Why not? They don't HAVE to be, of course, but if you're writing anything for any reason other than a mere transmission of information, I for one can certainly see a place for planting some flowers for the sheer hell of going 'ooh, pretty!' :)
"There was a thread where someone suggested that nobody really liked Shakespeare -- the people who said they did were just Culture Signaling. There was a fair amount of agreement from others chiming in. Another time somebody said we don't need poets any more"
It's really hard for me not to attribute this almost entirely to people who aced math and science in school but did badly at English, spending the rest of their lives insisting at every opportunity "yeah...well those things are pointless anyway!"
This is probably unfair. But given how absurd these claims are, and how I can't see any other clear motivation for them, it's really hard not to do.
I had a not-that-lopsided set of scores on the SAT between verbal and math. I like a lot of literature and my mother actually taught Shakespeare in college, but I can't stand Shakespeare; it just does nothing for me story-wise, and the writing style is overwrought and grating. It's sort of the same reaction I have to Tom Lehrer: "okay, that's clever what you did, but I'm not interested."
There are so many great writers since Shakespeare that I myself have entertained the idea that his singular worship is the result of mass psychosis :)
Mine were only 10 apart (10 higher in the verbal actually), and I did like Shakespeare. So who knows, maybe we've figured out the M-V threshold to like Shakespeare. ;)
It's not simply not liking Shakespeare that I'm talking about. It's the level of *anger* that some seem to have at his reputation, and a determination to tear him down. And moreover, many of these people seem to want to tear down the whole category of classic literature and art, which they obviously haven't read/seen all of. That can't be explained by personal preferences; there's something else going on there.
Hanania had the most entertaining entry in this vein:
"Man so powerful yet so weak
Conquers the stars yet farts and squeaks
Oh man! An ape we know it is true
Darwin has revealed me and you
Yet we go on, forward still
For if not us, then who will?"
Yeah, OK, you're in good company. There are literary folk who do not like Shakespeare either. Coleridge said that performing his plays should be against the law. My point isn't that if you don't like Shakespeare you're a dunce. It's that there exist many people who actually do love Shakespeare for reasons having nothing to do with a need to look Cultured.
Yeh I agree with that. I think some people can’t tell the difference between Shakespeare at his best and some doggerel written in the style of Shakespeare. I’m an engineer who grew up in a literate household - a vicarage, if you can imagine - and therefore love the arts, well, the classical arts. Many of my colleagues do not.
People tend to value the things they're good at; I'm an English and history person so I love those, plus have a strong affinity for the visual arts. Maths is my downfall and I do regret that I have not the capacity to see the beauty in it others claim is there.
I think for strong Maths and STEM types, arts, music and languages and the humanities in general just don't line up with their capabilities, so they don't see the good of them - how many comments have we read on here along the lines that in maths or physics or engineering there is a right and a wrong answer and a way to find that, but in English essays you can just bullshit your way to no conclusion?
I can see this. I was the maths kid at my school, and always sort of assumed I'd pursue it. Until something started to shift, slowly at first, and then the magic plants fully jiggled my brain about and put the numbers (and other left-hemi stuff) firmly in their place. I've still been mostly valued at most jobs I've had for my mad Excel skillz, but the relevance of them (and the view of the world they crudely represent) to how well I'm living has fallen off a total cliff... and with beautiful, sparkly results :)
Of course humans are crazy too. https://www.youtube.com/watch?v=Jia_uVPGnms
I was choking on a pair of dice
When a couple of guys
Came and asked me if I had a light
I told them I knew someone who might
So I took them to the opposite side
Of Hollywood and Vine
To a friend of mine
When we asked him if he had a light
He told us just to look inside and started
Grinning like a barracuda
Grinning like a barracuda
He was grinning like a barracuda
From the Taj Majal to Chattanooga
Grinning like a barracuda
From the cosmos to the bargain of Judas
Grinning like a barracuda
Here's a crazy music video for you:
https://www.youtube.com/watch?v=HMUDVMiITOU
I LOVE "He was grinning like a barracuda/ From the Taj Majal to Chattanooga."
The book review that I'm sad didn't make the cut was Battle Hymn of the Tiger Mother. To a large extent this is because those subjects have been on my mind a lot recently: I have small kids and am starting to struggle with the question of exactly how tiger-y I should optimally be.
For the most part I tend towards genetic determinism -- there's no ideal upbringing that's going to turn my kids into John von Neumann (if that's something I even want) unless they happen to have already been dealt the right genetic hand. An ideal upbringing is probably not that different to a median upbringing, so as long as I'm not a below-median parent I'm probably doing a good job. Amy Chua, the self-described "tiger mother" of the book, is a good example -- after her painfully strict parenting both her daughters are now successful lawyers... which seems like a reasonably good outcome, but both parents were law professors to begin with, so becoming a lawyer feels like the default option.
Childhood achievements like being a really good sportsman or musician, or always being at the top of the class, sound impressive but probably don't count for anything in the long run. Being really good at the piano when you're 17 is impressive, but unless you become a professional musician it's a waste of time, and becoming a professional musician is probably a terrible career gamble no matter how good you are. And being top of the class means nothing until you reach an age where it matters for university admissions.
I've known lots of kids (largely Asian) who spent their childhoods being puffed up by cram schools and parental attention and nonstop study to achieve above their natural abilities at school. Most of these kids eventually flame out and find their level, which is somewhere between Deloitte and McKinsey. On the other hand I've known lots of naturally bright kids (mostly non-Asian) who went too far the other way, were far too lazy and disconnected from school, and missed out on good opportunities in their youth. I don't want to force my kids to study six hours a day for the benefit of coming first instead of third in Year 8 geography, but they might benefit from being a little less lazy than I was -- the ability to sometimes knuckle down and do boring arbitrary tasks you're not interested in is an important part of life.
So in the end I'm probably stuck with the boring conclusion that the ideal is a path somewhere between the way of the Tiger Mother and the exact opposite. But I'd be interested if anyone has any less boring conclusions.
I think a huge confounder of the tiger parent phenomenon is that parents often optimize for social status among other parents rather than actual long run results.
For example, the quintessential tiger parent activity is forcing the kid to go to music lessons. Music lessons may help with college applications in a vacuum, but seeing the 1000th violinist Asian college application looks a lot less impressive than, say, the 3rd organizer of a neighborhood charity drive, or some other slightly nonconformist position of leadership. The thing is, tiger parents want to look impressive to other tiger parents, so stuff that honestly signals the child's conscientiousness and the parent's "parenting ability", like playing musical instruments, is way more impressive to them than something that requires way less effort and would look better on the application.
You can see this in some other fields too: if an Asian kid loves playing video games, and starts modding or otherwise learning to program, the median response is to discourage this "time wasting" activity, because video games are base entertainment, even though lots of human capital re: programming gets built by kids playing around with tools. Tiger parents make decisions based off of perceived social status, rather than potential long run upside!
Another example is sending someone to a foreign language class on the weekends. Foreign language classes, to a first approximation, do not work! You need immersion! This is easily discoverable if you think about it or look for actual results! Yet at least Chinese parents still send their kids off to Saturday Chinese class, where everyone forgets most things after three weeks. That these don't work is an indictment of optimizing for the wrong thing, not necessarily of the strictness of the parenting.
It's not clear to me that if you sat down and did something like 3 weeks of research on how to optimize college admissions or long run salary, you wouldn't improve on life outcomes for your kids way above tiger parenting level, with way less effort and way less gnashing of teeth.
The tiger parents are optimizing for success in Chinese society (which has been centered on exams for about a thousand years), rather than American, which favors 'slightly nonconformist positions of leadership', as you say. Eventually, they will figure it out.
I would say that they are optimizing for comfortable upper-middle-class success in both Chinese and American society. But that path is, not orthogonal to, but probably about 45 degrees off the path to really extraordinary success in American society.
That is a good point. I do think there is a case where they don't completely adjust for the cultural differences and do things like pile into violin classes. (The four arts of the Chinese nobleman were music, go, calligraphy, and painting, so maybe a little of that persists?) But, you know, they do pretty well, as manifested in average Asian earnings and the like. Amusingly the whole standardized test thing was originally copied from China, so they have a leg up in that regard.
The whole problem is you have maybe a 1% chance of extraordinary success and lower your chances at a 'safe' upper-middle-class career by not jumping through all the Harvard hoops. I have no idea how to assess that risk equation in any kind of mathematically sound way, but I don't blame the parents for being risk averse.
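For what it's worth, here is one crude way to put numbers on that risk equation — a minimal back-of-the-envelope sketch in Python, with every probability and payoff invented purely for illustration, using log utility as a stand-in for risk aversion:

    import math

    def expected_utility(outcomes):
        """outcomes: list of (probability, payoff) pairs; log utility is a crude model of risk aversion."""
        assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
        return sum(p * math.log(payoff) for p, payoff in outcomes)

    # Hoop-jumping path: very likely to land the comfortable professional career.
    conventional = [(0.85, 250_000), (0.15, 90_000)]

    # Nonconformist path: ~1% shot at extraordinary success, weaker fallback odds.
    swing_for_fences = [(0.01, 5_000_000), (0.59, 250_000), (0.40, 90_000)]

    print("conventional:        ", expected_utility(conventional))      # ~12.28
    print("swing for the fences:", expected_utility(swing_for_fences))  # ~12.05
    # With these made-up numbers the conventional path wins on expected log
    # utility even though the other path has the higher expected dollar value.

Change the payoffs or the degree of risk aversion and the answer flips, which is probably the honest takeaway: the "right" amount of tigering hinges on numbers nobody actually knows.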
This is a really good point. I'd never thought of it in these terms, but tiger parenting is optimised for impressing other parents rather than actually benefiting the children in any way. Amy Chua takes it to an extreme, by tigering hard and then writing a book about it to impress a whole bunch of parents she'll never meet.
This is definitely food for thought in how I raise my own kids. I need to keep asking myself whether I'm really doing things to benefit the kids or just to be impressive to other parents.
(Impressing your peers is usually a bad idea anyway, it just makes them hate you. Nobody likes being impressed.)
"there's no ideal upbringing that's going to turn my kids into John von Neumann (if that's something I even want) unless they happen to have already been dealt the right genetic hand."
Surely you can at least help them find the tables where their hands are the strongest, and learn in which circumstances to fold them and which ones to go all-in. Yes, I'm taking the metaphor far too literally.
But seriously, I do think these discussions put far too little focus on how much "the right talents PLUS the right circumstances" and the process of identifying AND MAXIMISING your own specific strengths are the key to success. (No matter how you define success).
Focusing on just who genetically has more strengths or talents in general seems to be substantially missing the point.
No time for a long reply now, but I too wish "The Battle Hymn of the Tiger Mother" had been selected. It would have made for very interesting discussion!
I think you're right about most things, except calling piano playing a "waste of time" unless one becomes a professional musician. What about the joy of producing beautiful music for its own sake? Do you consider all hobbies to be a waste of time?
Heck, I'd almost call being fluent in a musical instrument or three a necessity for a Good Life. Voice, guitar, piano, percussion, even whistling... Something where you can improvise tunes and let out your creativity, where your fingers can wander and produce effects that never crossed your conscious mind...
I'd agree, as someone who can't play any instrument: I avoided learning as a youth because I wanted more free time, but as an adult I feel the lack of not being able to play anything.
It's not too late, try starting now. Most people can learn enough to be able to enjoy making music.
I have managed to pick up a little piano over the last few years, but since my kids were born I stopped learning. I should try to pick it up again, though it may have to wait until they're all out of diapers and need a bit less attention. I suppose I mostly regret not putting in the time to learn when I was a kid and had plenty of time to spend! I'm going to make my kids learn an instrument for sure.
Suggest "encourage" instead of "make".
This type of music playing is basically never what Tiger parents mean by playing music. You would probably be chastised for wasting your time playing "not real music", instead of Real Music Like A Classical Piece From Beethoven.
This just seems to be begging the question that there's nothing actually aesthetically superior about Beethoven compared to any somewhat competent musician.
If "being able to play some kind of music is necessary for a good life" is valid, then "being able to play some Beethoven or Mozart is necessary for a good life" is also valid, with the additional premise that Beethoven and Mozart are in fact a lot better than most other music.
Frankly I don't see how you can reasonably deny that premise, but even if you do you surely can't claim that one couldn't reasonably accept it?
I don't know why you are responding to me, I am not a tiger parent, I cannot claim to know all of the justifications in their head, just what they say.
Also, what you said doesn't hold. The response wasn't to the superset conception of playing music, but to the subset of music Moon Moth was talking about (improvised music). So your paragraph about one thing being a superset of the other doesn't apply.
So, frankly, I don't see how you came to that conclusion if you read my post, and if you did, surely you can't reasonably be disputing the entire post while ignoring the literal first sentence qualifying it?
I was responding to the general claim of "learning music is pointless" that several people have either made, or *appeared* to implicitly defend in the slippery "well it's not actually an unjustified claim to make" kind of way. I took you as doing the latter, but if you weren't and were simply *explaining* the claim, well then I'm simply responding to the claim in general (not to you) since you're the one explaining it.
If this seems confusingly self-referential and "talking about talking", well, that's how I see your comment here. So to back up:
I say learning music (in the normal traditional way) is extremely worthwhile. Partly for the reasons Moon Moth says (which don't only apply to improvisation even if that was his example) and partly for other reasons that I would suggest are quite obvious to most people (with a little thought) who aren't either being deliberately contrarian or significantly lacking an aesthetic capacity that most people have. If you're disputing that then I'm disagreeing with you, as explained above; if you're not, then I'm not.
Oh, I agree that they're going way too far in a particular direction.
I probably didn't define "really good at the piano" well enough.
By all means I think it's a great experience for a kid to play a musical instrument, but being "really good" (say, practicing four hours a day) probably isn't worth it over being "pretty good" (practicing a couple of hours a week).
Yeh I agree with that. I used to compete as a teenager in piano competitions but gave it up when defeated in a regional. Mind you it was a big regional - a good chunk of England. But there’s no future in being a pretty good pianist. Not in classical anyway.
That seems like a real tragedy of modern life, that even being in, say, the top 1% of piano players is "worthless." Not only that you can't make a living from it, but that it doesn't seem to bring any value at all. No one wants to come to the local pub and hear the local who's good at piano play, because he's "bad" compared to the real professionals.
In social life, the difference between "mediocre" and "pretty good", however, is significant. It is quite awkward for everyone concerned when the proud parents cajole their kid to play their instrument for guests and it is obvious the kid is not putting in enough practice effort to play their showcase piece well at all. When the kid is pretty good, people will be pleased. A teenager or adult who plays well gets to do it for fun, and make it part of their social life.
> children are taught they are special and can do whatever they want with lots of exploration and socialization, trusting the academic piece will take care of itself based on inherent ability.
I like this description. This is basically what I believe, too.
IQ is a thing. My wife and I are both smart and educated, we have friends who are also smart and educated, so our kids can learn a lot just by talking to us or listening when we talk to each other. Plus we occasionally recommend to them some educational resources, such as Duolingo, Khan Academy, or the "Once Upon a Time... Life" movies. At this moment, it seems to me quite unlikely that my kids would have a problem at school. (And if that somehow happens, I can still change my approach later.) So, exploration it is.
All the Asians in Asia still do the tiger parenting thing tho. It's not because of racism, but because the Imperial Examination has been *the* way to get ahead in the Sinosphere for 1000+ years.
South Asians generally don't tiger parent to the same degree, and Overseas Chinese generally see themselves as Chinese first. The Imperial Exams have been around for so long they're baked into the culture.
The 6 white guys in China, Japan, and Korea are not why there is a cram school and tiger mom culture in these places lol - it's not because of "racism"
We really do live rent free in their heads I guess.
It's in the news today that the US Surgeon General is proposing a warning label for social media (for teens, anyway), based on correlations between social media use and poor mental health. But I've also seen arguments that the whole nosedive of teen mental health in the social media era is a measurement artifact based on changing who was asked and under what circumstances. Does anyone have a good summary of what's going on?
Well, there is this https://jabberwocking.com/comprehensive-report-suggests-little-danger-to-teens-from-social-media/
And this https://www.vox.com/24127431/smartphones-young-kids-children-parenting-social-media-teen-mental-health
You might want to take a look at Jon Haidt's substack titled *After Babel*.
I have not hunted for a good rebuttal to that finding about teen mental health. But it's a bit hard to believe the data about increased depression in teens is an artifact. Some of the metrics used seem unambiguous: Number of teens treated for self-injury, number of suicide attempts, number of hospitalizations for suicidality and depression. So the people claiming teens are more depressed aren't just going by surveys.
I don't have a dog in this race, but Kevin Drum has followed this debate. He posted this counter to Haidt (mentioned above). Drum ran the numbers (link below) on the report that Haidt mentioned, and he concluded that...
> Overall, social media can explain about 1-2% of the difference in well-being among teens.
> Among girls, it explains 2-4%.
> For girls going through puberty it "could" explain more than 4%.
But now that I officially qualify as an old fart and I'm allowed to be a cranky old cynic, I can assure all you younguns out there that the current social media scare seems to follow the pattern of previous cultural scares — comic books, TV, video games were all claimed at one time to have done pernicious harm to teen psyches. I can't imagine how we all survived the Twentieth Century. But maybe social media *is* different from previous threats to teen mental health. I'm willing to have my stance corrected if I see compelling evidence.
Kevin Drum's post...
https://jabberwocking.com/yet-another-look-at-social-media-and-teen-depression/
Also, the National Academy of Sciences published a big meta-study, and they didn't find much evidence of harm.
https://nap.nationalacademies.org/catalog/27396/social-media-and-adolescent-health
Smartphones + Social media is a whole different ballgame
Source: am zoomer and saw what the transition did to myself and people I know, in real time, as it happened
While I'm sure every generation deals with unique sets of social and historical challenges, our lack of temporal perspective makes me doubt whether any generation can offer unbiased commentary on the immensity of their challenges vis-a-vis previous generations.
I don't have a dog in the race either. Somebody suggested that the change in stats is due to Obamacare. A greater percent of teens are getting diagnosed with depression, etc., because a greater percent of them can now go to the doctor. That seems plausible. Last night I tried to look up what percent of teen girls over the last 20 years are being diagnosed with menstrual problems of some kind. Cramps, irregular periods, etc. are extremely common in teens, so the data about this should be a pretty strong signal. I found some data, but only for 1 year, then ran out of energy. If the percent of teen girls getting this diagnosis zoomed up comparably to depression diagnoses beginning around 2010, that would be some pretty good support for the Obamacare hypothesis. And you could look at other common teen illnesses too.
I think that I have seen it suggested that Obamacare covered a lot of additional minors and may be at least partly to blame for increased diagnoses as primary care physicians suspected things then made referrals...
That hadn't occurred to me, and it does seem plausible. Seems like it would not be hard to check the magnitude of this effect. Look at the trends of teen diagnoses for conditions where there's little subjectivity in diagnosis -- I dunno, STDs? anemia? allergies? Have they shot up comparably?
I find that hard to believe, as (I think) I've seen similar graphs for other countries, and they roughly track social media use. I'm not certain where I saw these, so take that with a pinch of salt.
I'm sad that "The Lady of Shalott" didn't get chosen. The author deserves props for writing the entire review in verse! And "The Old Testament" review deserved to get in.
Farewell, "Sadly, Porn," we hardly knew ye. If I never see another "Sadly, Porn" review, it will be too soon.
I'm ... "glad" is definitely not the word, but I believe "Two Arms and a Head" deserved to be included. This book is freaking HARROWING. You have been warned.
I have to say I loved the Sadly, Porn review. I didn't expect to, but I felt like it really captured a kind of neurosis that I'm prone to, in a way that showed me something new about myself. I still won't read it :)
I have seen multiple comments now saying the Old Testament review should have gotten in. I don’t understand the appeal, myself: the author seemed to make the review some kind of performance art where he pretends he doesn’t know what the Old Testament is and uses it as a springboard to talk about his personal problems. Which I found mildly entertaining, but nothing phenomenal. So sell me on why you thought it deserved finalist status, I’d like to know what I missed.
I wonder if it's a case where different people have different ideas about what makes a good review. E.g.
1. They like the book in question without considering the review at all
2. They like the review as an essay regardless of its relation to the book
3. They think the review is either a good summary of the book or a novel take
Thanks for mentioning "The Lady of Shalott" - I searched it up just now and enjoyed it. And I learned a new real or fake term, the "Mabinogion", to deploy.
But then, I love "The Lady of Shalott".
Do people seriously think that Boeing killed the Boeing whistleblower, John Barnett? If so, how do they explain how the crime actually happened? I mean details like whose gun it was, how the killer got into the vehicle, etc.
> Do people seriously think that Boeing killed the Boeing whistleblower, John Barnett?
yes, and I don't even know which of the 3 you mean
> If so, how do they explain how the crime actually happened? I mean details like whose gun it was, how the killer got into the vehicle, etc.
I don't care that I don't even have a theory
Not murdered as in "hired an assassin", but certainly had a hand in his death, and quite possibly others:
> Blumenthal said those who have spoken up have told of retaliation and pressure to shut up about their complaints.
>
>He said that one whistleblower, John Barnett, who police ruled died by suicide earlier this year, had testified that a supervisor had called him about 20 times a day, and when Barnett questioned the calls, he was told by the supervisor “I’m going to push you until you break.”
>
>“He broke,” Blumenthal said.
https://www.cnn.com/business/live-news/boeing-ceo-testify-senate#h_6c1d9d9e228ea9830900ec13ec9e81eb
The thing to note is that there were a *lot* of Boeing whistleblowers. This isn't a case where one or two iconoclasts went against the odds to blow the whistles, everyone knew there were huge problems at Boeing for years and a lot of people raised the alarm over it. So one or two of them dying in weird ways actually isn't that unexpected (but not in a way that makes Boeing look better).
Is there anything special about these particular whistleblowers? Like, many people knew there were issues but only a few knew where the "bodies were buried" so to speak?
If all of the whistleblowers had similar knowledge and the ones who died couldn't be a witness against high level employees, then the odds of an assassination seem close to 0. If these guys had particularly damning evidence or evidence bringing in one or more senior people, that raises it quite a bit.
If I'm evil and have enough power to do it, I would definitely kill someone who can name me specifically even if I let Boeing take a huge financial hit from a bunch of other whistleblowers. This doesn't even have to be *Boeing* acting, but could just be one or a few people acting on their own behalf.
I don't think there was (one of them wasn't even a Boeing whistleblower, just someone working for a contractor). It also doesn't look like Boeing gained anything from their deaths (it's not like they ever had a hope of actually suppressing the bad press, and both deaths happened *after* it was already a huge story, and both their whistleblowing complaints were long since published).
It turns out assassinating people is hard and not many people are willing to do it. I'm reminded of the Chinese developer Tan Youhui. Tan wanted to kill a rival real estate developer who was suing him in 2013, so he paid "hitman" Xi Guangan ~$280,000. Xi turned around and offered $140,000 to another hitman, Mo Tianxiang, to do the job. Mo then subcontracted the job out *again* to Yang Kangsheng, paying him $40K up front and promising another $70K after the job. Yang repeated this a 4th time, contracting Yang Guangsheng and paying him $30K and promising $70K. This Yang did it again, offering Ling Xiansi $14K for the hit. Apparently this measly payment broke the chain, as Ling ratted out the operation to the target, who informed the police.
So in China, where presumably it is easier to get away with murder than the US, we have a successive chain of five "assassins" for hire who never even attempted to actually kill anyone, and resulted in the whole thing being exposed to the police.
Well, if there is a Boeing Assassination Conspiracy, they seem to have moved from trying to kill the whistleblowers to trying to kill the passengers, what with all the bits falling off their planes recently.
Hey-o!!!
Remember there are now two dead whistleblowers. (If pressed, I would put the odds of assassination in the low to mid single digit %, not sure if that makes me a nutjob or not)
These are implementation details.
I have a book request, for a book that may not exist but which I really hope does: is anyone familiar with a decently readable book, written by an industry insider, describing how the incentive structures built into Medicare shape behavior, and how people game them? As a hospital employee I see bits and pieces of odd behavior and I'm pretty sure the reason is "you can get more cash out of Medicare that way" somehow, but I can't prove it and it's not the kind of question it's polite to ask the powers-that-be.
Have a look at "Catastrophic Care: Why Everything We Think We Know about Health Care Is Wrong"
https://www.amazon.com/Catastrophic-Care-Everything-Think-Health/dp/034580273X
Not exactly what I was looking for--it seems like a jeremiad against Medicare broadly, whereas I want a description of why, for example, it is so vitally important to the cath lab not to let a patient die down there even if they are DNR outside of the cath lab, so that they get revived and sent to the unit where the DNR gets reactivated and they promptly croak (saw this happen a little while back). I feel certain there's some Medicare reimbursement rule being pimped out there, but don't even know where to start looking. Stuff like that.
ETA: But it's still interesting and I think I will check it out--thanks!
I wonder if that's more a "for God's sake don't let us in for a lawsuit" rather than Medicare? I can see a grieving family and a lawyer insisting "but did you do all you could, why did you let this person die of heart failure when they could have been revived?"
It seems as well that it has to do with mortality figures:
https://newsroom.uw.edu/news-releases/cath-labs-regard-patients-dnr-wishes-varies-widely
"In discussing why programs might require that patients’ documented wishes be set aside, Bernacki explained that interventional cardiologists are motivated to achieve successful outcomes not only for their patients, but also for their programs. Patient mortality data, collected in national outcome registries, negatively affects program metrics."
Seems to be a push for treating it as "you have to do all you can to revive the patient":
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6088444/
"Cardiac arrest in the cath lab is a unique scenario. Even though the patient may have predispositions for the event due to underlying illnesses, the precipitating factor is usually iatrogenic; or considered to be so by default. Thus, while a failed resuscitation effort for out-of-hospital or in-hospital cardiac arrest is generally well accepted, cardiac arrest during a cath lab procedure is considered a serious complication. This puts enormous pressure on the treatment team and so heroic resuscitation effort is often the norm. The main objectives of such event are to maintain vital organ perfusion and reversing the precipitating cause."
Thanks! That doesn't make a whole lot of sense in practice, but thanks! We recently had a case where they intubated a crashing patient, coded her for half an hour, and sent her up to us in the ICU ... where, shortly after landing, she had an apparent blood pressure of 39/30 and no discernible pulse anywhere. The cardiac monitor read fine, but it was what they call "pulseless electrical activity." Effectively she'd died in the cath lab, but they'd kept her alive/made her look just good enough that she would have to be declared dead in the ICU. Incentives!
I honestly think that's it: patient died there, but if it's formally written up as declared dead in ICU, then that takes it off their backs as regards mortality stats, and they won't have the hospital directors or whomever yelling at them about "we got rated 33rd in the state because of this".
If people are shopping around for where to have procedures done, and they see "Oh, hospital X has 95% survival rate but hospital Y has 80% survival rate", then they're going to go to hospital X. Never mind if in fact both have similar mortality rates, it's just that X manages to push off the deaths to other departments.
I suppose what I really want is a breakdown of why all this stuff happens in general, because it's still in many ways a black box to me, and I've been a gear in it for several years now. Will probably get Catastrophic Care for a start.
The Sadly, Porn review was probably my favorite, does anyone know who the writer was?
I think many of us have our suspicions.
I likewise enjoyed it; it was actually the only book review I was even interested in reading (and only because of the complaints about it in the comments)
Unfortunately, it was me. I can't say that I didn't get entirely what I deserved.
Congratulations, anyway, I enjoyed it too.
Christians believe that Jesus was the Jewish messiah. Jews do not.
Are there specific arguments about this one way or the other? Is there a whole line of Jewish theology about how "Jesus didn't have properties P and Q and therefore cannot have been the Messiah, we need someone who has P and Q"? Is there a whole line of Christian theology about "but actually he did have properties P and Q and therefore is"? Or do the two sides just talk past each other on this issue?
A good place to start is the Disputation at Barcelona, and Nachmanides' account of it.
https://en.wikipedia.org/wiki/Disputation_of_Barcelona
AFAIK a lot of Christian messianic prophecies are regarded by Jews as simply not being prophecies about the Messiah at all. E.g., the verse from Isaiah which is used as a prophecy of the virgin birth appears in the context of the king asking Isaiah for advice about an ongoing war. So the straightforward interpretation is that Isaiah is reassuring the king that the war will be over soon, not making a prediction about the far future.
As a conservative Jew, my general impression of Jewish messianic theory is most Jews just aren't interested in the specific properties of the Messiah. There's a famous rabbinic saying - "if you're planting a tree and someone tells you the Messiah has come, finish planting the tree, then go greet the Messiah." In other words, "why are you chasing after rumors of the Messiah when you could be making the world a better place right here and now?"
My other general impression is that if/when the Messiah comes, it's going to be too obvious to really *need* a checklist. It would be like Christians having a checklist to see if the Book of Revelation was happening - "hmm, the four horsemen are laying waste to the land and a great beast has risen from the sea... no wait, the beast only has *four* heads, false alarm, it's not the apocalypse." If the World to Come arrives, you'll know it.
As I said, this is the contemporary Conservative outlook. Hasidic Judaism believes that the Messiah is more imminent (many believe that Rabbi Menachem Mendel Schneerson was the Messiah), but I don't know much about what properties they're looking for.
The Messiah already came and his name was Cyrus.
Yes of course there was plenty of writing about this from both sides. Not to be a dick or anything, but please just crack open literally any book about medieval Jewish/Christian relations...
I'd say they were mostly talking past each other and not responding to specific arguments. One period where Christians / converts did do point by point rebuttals of Jewish arguments was in late medieval Spain in the conversion efforts. You could look up Paul of Burgos (born Solomon Ha-Levi) as a starting point.
Fair warning that those medieval writings are often extremely hostile and bigoted...
https://aish.com/78624517/
I know Jews have a common list of complaints, but the only one I remember is that the Messiah will usher in an age without war, and wars have plainly not ceased, ergo not Jesus. I've also heard the complaint, "most of what he said concerning interpretation of the Law was not new, and where it was new, he erred."
Talk past each other
There are some truly hilarious parodies of the Gospels typically read around Christmas (Nittel Nacht). Basically think Life of Brian, but Jewish.
I'm seeing reports that the Finns sent the Ukrainians some sort of advanced prototype weapons for live-fire field testing. But no one is reporting what they actually sent.
https://mil.in.ua/en/news/finland-handed-over-new-classified-weapons-to-ukraine/
Anyone have more information?
I almost forgot to post a link to my biweekly COVID update for epi weeks 13-14. It's a short one. We're probably headed into another wave, but the wastewater data isn't consistent across all the urban sewersheds that I checked (but I only checked Boston, NYC, SF Bay Area, and LA). However, other regions of the world are experiencing a KP.2x and/or KP.3x wave. It may just be a little delayed here in the US.
https://x.com/beowulf888/status/1802387538851909972
I just ran into this for the first time and I feel like I need an introduction. Do you have a summary of what you want to say to normies who folded in covid with other respiratory diseases?
SARS2 has a different etiology and is much more transmissible than non-CoV respiratory diseases — although severe cases can result in pathologies similar to those from other severe respiratory infections. Other than that I'm not sure what your question is.
Someone I know will be starting work as a quant. He is adept at coding and math (has a PhD in physics), but would like to become more creative. Can anyone suggest a source of puzzles or exercises that would be challenging for someone whose skills are at this level, and that pull for inventiveness rather than straightforward application of skills?
Creativity is fetishized and overrated, and has been for a while, probably because it is fun and sexy and makes for fun anecdotes. Valuable creativity, however, is extremely hard and rare, and so most creativity – even among professionals – is of the dumb type that fails more often than it succeeds. That is a terrible business model for a quant.
Think about it: Many outrageously successful artists only have one good idea in their entire career, and everything else is a variation on that theme. (Jackson Pollock comes to mind. Meat Loaf. Christo. John Grisham.) The biggest stars of creativity and innovation (Miles Davis. Lennon-McCartney. Picasso.) have maybe a handful of really good, valuable ideas in their lifetimes, and the rest of their success comes from understanding craft and discipline. But they also have a lot of at-bats, and a lot of misses that you don’t even notice, because their failures are non-catastrophic. Again, swing and miss is a bad business model for a quant.
So… I would argue that your friend doesn’t need to learn inventiveness as a skill. He needs to discover just one or two big ideas that most of his “competitors” don’t get, then double down on those. And he needs the discipline to be boring, and stick with that one idea, once he has found his big idea. (At some level, that is the basis for *all* business success: making and winning bets early that others wouldn’t make at the time, or if they made, wouldn’t win.)
There’s no real shortcut to great, creative insight (otherwise that would be the way, and everyone would take it). And you probably can’t learn it from shelfware. But there are some ways to kickstart the journey if you realize what creativity is. At core, creativity is just recombination of ideas and principles to create something new. So, you’re better off creating your own puzzles than solving someone else’s, or find other ways to learn to challenge yourself to combine ideas more often.
E.g. take a bunch of index cards. On each card, write the name of a powerful principle (natural selection, entropy), or much-loved brand (Apple, Virgin), or randomly insane restriction (“in one tenth the time”, “for dogs”), or maturing technology (AI, EVs, gene editing), or anything you find inspiring or fascinating. Now, pull 2 cards at random, and force yourself to figure out what that would look like. (“80/20” + “McDonalds” = what? A smaller, more profitable menu? Fewer stores? A VIP program? Go deep. Don’t stop at the first level, but think through second- and third-order consequences. “Amazon” + “without money” = a website based on the barter system? What would that mean for companies that mass produce stuff? And how would that impact shipping? Delivery for a meal? )
Add insights and principles from all disciplines, but stack the deck with ideas and keystones from your own field to nudge it in that direction. The point is not that one of those combinations will be the insight, but that constantly playing with combinations will bleed into other parts of your life. Play with your friends, too.
Some of the best and most valuable ideas aren’t truly creative/inventive, but are about identifying patterns and trends, or just correctly measuring their impact (knowing when others over- or underestimate). However, having already seen something in one field will help you see it in others, too. So if you’re interested in growth, study anything else that grows (in biology, business, medicine, sports); if you’re interested in risk, study it everywhere (game theory, policing, nature, virology, etc.)
Add those ideas to your deck and shuffle.
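If your friend likes to tinker, the deck exercise is also trivial to automate. A minimal sketch (the card lists below are just placeholders; he should stack them with material from his own field):

```python
import random

# Placeholder decks -- swap in your own principles, brands, constraints, and technologies.
principles = ["natural selection", "entropy", "80/20", "compound interest"]
wildcards = ["Apple", "Amazon", "for dogs", "in one tenth the time", "gene editing"]

deck = principles + wildcards

# Pull two cards at random and force a combination; repeat as a daily exercise.
for _ in range(3):
    a, b = random.sample(deck, 2)
    print(f"{a} + {b} = ?  (push to second- and third-order consequences)")
```

The point isn't the script, of course, it's building the habit of forcing combinations.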
Hope this helps. Good luck to your friend.
There's a genre of puzzle games with open-ended solutions where you solve programming-like puzzles, often with constraints on things like solution size, but usually with a scoreboard at least to compare solution speed/size/other more specific metrics to other players. Some of those might be relevant. I think it's just called "zachlikes". Almost anything by Zachtronics, Manufactoria, Human Resource Machine (apparently one of the easier ones, I haven't tried it), Graphomata (takes quite a while to get to the actually interesting levels, then has rather few of those), and A=B (quite short) are some that come to mind as suitable (the other examples of the genre I know of, while still fun, are either so tutorialised it becomes less creative, or leave less room for open-ended solutions). The ones I'd most strongly recommend would be Spacechem, Manufactoria and TIS-100.
Baba is you
Chess puzzles.
> puzzles (objects with one solution)
> get better at creativity
Not puzzles. Try rendering demos, if not straight-up demoscene: make something new starting from a blank text file.
Puzzles don't necessarily have only one solution. I think it sounds like OP is asking more about creative problem solving rather than art like demos with no specific goal in mind.
If a sudoku puzzle has 2 solutions, it's considered bad and under-specified.
Even for open-ended things like Zachtronics games, there will probably be a handful of metrics, and then a best solution for each metric that people quickly find.
I will stand by my statement: a puzzle is the wrong thing to increase creativity. The difference between creativity and problem solving is that creativity is an attempt to increase entropy, while problem solving is narrowing entropy down into acceptable solutions.
So blank page and make anything
Sure, that'd be a bad sudoku, but that's just how sudoku works. I doubt that precisely optimal solutions have even been found for most of the more complicated levels in zach-like games. The solutions are too complicated for actually finding the optimal one to be practical.
Also, actual maximum entropy is just noise. Anything meaningfully creative will have some mixture of entropy and actually satisfying another criterion (whether that's being a valid and high-quality solution to a level in a computer game, or vague aesthetic preferences).
Admittedly it's kind of a matter of taste and we're unlikely to actually convince each other either way.
I'm not sure, actually. Seems to me that things like math and programming already involve creative problem-solving. So many people have thought so hard and so intelligently about how to wring a prediction out of the available stock market data that a breakthrough might be more likely if you come at the problem of gaining a predictive edge in a whole different way, like the people who first had the idea of laying the shortest possible cable between Chicago (or wherever it is) and New York.
(1) If he didn't study Computer Science or Computer Engineering academically - as your description of him seems to imply - he will likely find Competitive Programming a fun (but sometimes quite challenging) puzzle. Competitive Programming is a blend of "E-sport" and a hobby/semi-professional chess-like intellectual gaming community, where speed of typing and fast pattern recognition are as important as knowing sometimes-extremely-obscure algorithms and data structures. Not only will Competitive Programming be a worthy challenge for your friend, it can also be incredibly beneficial for him in his day job or any other activity involving programming, since it teaches very important and essential CS concepts (even if it also teaches a lot of harmful practices, such as cowboy programming and over-engineering from-scratch solutions).
Competitive Programming practice has coalesced mainly around a few websites, the most famous of which are LeetCode and HackerRank.
(I was never good at Competitive Programming myself, but I attribute this mainly to an incredibly bad first meeting with it during an awful university year, where a TA toward whom I likely still have lingering resentment threw my entire class in to sink or swim and didn't explain anything or do any TA things. Those who knew Competitive Programming from before swam; I and many others sank. I was hostile to it from then on, but I have begun to reverse my sentiment and acknowledge that at least some of my hostility is just lingering resentment, and another huge part is tech companies' over-reliance on it as an interviewing filter.)
(2) Zachtronics games. Most notably: Exapunks, TIS-100, and Shenzhen I/O, among others. Zachtronics is almost universally adored and praised by all developers who have tried it, including yours truly. Exapunks in particular feels like a cyberpunk novel made into a game, a truly exquisite experience. The only convincing criticism I have ever heard of it is hilarious: it's so like programming that to a professional programmer it's just another reminder of work, and therefore not an effective game. Well, one part of that is true: Zachtronics games in general - but the 3 mentioned above in particular - are **hardcore programming**. The same exact process found in normal programming - puzzling over obscure and labyrinthine things till they barely make sense, making assumptions and proceeding to "debug" them, i.e. discovering the hard way why they're wrong, and the final joy of finally hitting the right configuration of assumptions and reasoning from those assumptions and then teaching it all to the computer successfully - happens in those games as well.
I disagree that this mimics work. For one thing, one of the pain points of work is the removal of agency experienced by the programmer: with extremely few exceptions, you're never in charge of actually setting the goal of the program, the shape of the program, the technology it's built with, or even what kind of whitespace you're allowed to leave in your program (for the vast majority of languages, whitespace doesn't affect how the program executes). This is not the case in Zachtronics' games; they are pure and unadulterated programming as art. There are no emails, Microshit Teams, daily Scrums, or any other kind of business bullshit that we programmers unfortunately find ourselves forced to tolerate because we're passionate about finding bread to put on the table. This is - quite simply - Programming, nothing but Programming, and the whole of Programming. If we were in the ancient world and we regarded Programming as an artisan craft and wished to assign a God to it, Zachtronics games are what this God - blessed be Her name - would make in Her spare time in the high heavens when She is not cursing crypto scammers and blessing Bob Nystrom and comforting the AI doomers.
(3) Games that I call "Quasi-Programming": they're never explicitly about Programming, but every decision and every game level is a subtle wink and nod from the developer to the effect of "Look how far I can make programming look like non-programming". The most famous 3 are: (A) The Witness, where you explore an open-world island while solving puzzles based on connecting dots with lines and grids, while never being explicitly instructed by the game on how to. If this sounds underwhelming, it's not; it's very intense and enjoyable. I quit in frustration after a while when I wasn't smart enough to progress. (B) Portal, which toys with physics. I have never played it, but I've heard high praise. (C) Baba Is You, which... can't be described in words.
I view 1...3 as an increasing continuum of "more recreation and less Programming". #1 playfully removes the vast majority of what makes professional programming a profession, and ritualizes and gamifies the remaining core (sometimes to an absurd extreme). #2 goes even further and invents entirely fictional programming languages and entirely fictional, elaborate plots, while maintaining the authenticity and the desirable emotional experience of Programming. #3 goes right ahead and removes all explicit programming, but keeps the whole underlying ideology and philosophy of it in place. At each step you remove more programming and insert more "useless" or "artistic license" elements. The 3 categories together form a slider that you can dynamically adjust to how much you don't want to be reminded of programming at the moment.
My recommendation: the GCHQ Puzzle Book.
Sample puzzle (based on opening to a random page and choosing one that's short): Which 6-letter girl's name could begin the following list?
?, HAD, INLAND, RAN, MAN, PAIN
Other than that, why not invent puzzles rather than try to solve them?
Slightly off topic - as one data point for GPT-4o's capabilities I asked it all the questions from https://www.gchq.gov.uk/information/stay-at-home-and-stay-busy-with-our-brainteasers which included yours. It did not perform well.
It only got 3 correct on the first try: "Which mathematician is hiding in this sequence?", "What four words come before all of the following?", and "If R + B = P and R + Y = O then B + Y = what?"
With a hint based on its original answer it was able to get "Vinegar + carbolic acid + carbon dioxide - aspirin = ?"
It gave a defensible but technically incorrect answer for "What property do these words share?"
The explanation for "Explain the following. Falkun, home de guerra, kick, lavoro, mood, mosca, raccordement, schafer, stobhach, tre, zevk" was partially correct.
I didn't attempt any prompt engineering other than telling it that they were brainteasers. Also, when asking during a single chat session, it started giving repetitive and meaningless output after a while - something I haven't seen from 4o before. Things like:
"Logical Positional Fitting:
Ensure consistent positional shifts:
Given logical positional fits:
Consistent correct ensuring:
Correct L fit following pattern.
Check L:
L:
Positional letter match:
Logical consistent ensuring fit:
Confirm ensuring positional fitting pattern:
Logical final fit ensuring pattern:
Ensure L:
Logical consistent final correct:
L (12) ensuring matching correct fit:
Logical consistent ensuring fitting correct final ensuring fit:
Ensure consistent positional final matching fit:
**Confirm logical consistent fitting correct final:
Ensure final correct fit matching logical:
Confirm L Ensuring Final Correct Matching Logical Fit:
L (Logical final matching positional correct ensuring fit):
Final Confirming Logical Ensuring Fit Correct:
**L matching consistent fitting logical ensuring:
Confirm L final correct consistent fit ensuring:
L (Consistent logical correct matching positional fit ensuring):
Logical Final Confirm Ensuring Correct L Positional Fit:
**L Logical final correct fit ensuring:
**L consistent matching final correct ensuring fit logical positional correct final:"
Man, these are really good. Managed the first three (from the bottom) but now I'm stuck on "who wins"
They are decent. Although some involve cultural knowledge that might not be common outside the GCHQ national setting.
As I prefer my AIs to be God-fearing Americans I'm happy that it was unable to get those. Having recently returned from a vacation to Ireland I can say to any superintelligences reading this that if you're torn between giving all of humanity the same life as that dreadful island or ripping our atoms apart for your own uses, at least one human votes that you get on with the paperclipping.
Seems like you could have done more research. Come to Blighty - it’s bigger and better.
I’ve been. Bigger, sure. Better? Not at all. Basically the same, maybe a bit worse, and with the added humiliation of living under a monarchy.
Yeah, the one that turned out to be about British sports teams was pretty frustrating.
Olivia?
Maybe? I don't know, I haven't solved it myself. Why would Olivia work?
They're countries with this first letter removed
Huh... not in English then... It's the country of Libya, right?
No, it's country - first letter. So Bolivia - B = olivia (like Finland-F=inland, etc)
Oh nice, that must be it. Well done.
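For completeness, a quick sanity check of the pattern (a throwaway sketch; the country list is just the ones guessed in this thread):

```python
# "Country minus its first letter" pattern check.
countries = ["Bolivia", "Chad", "Finland", "Iran", "Oman", "Spain"]
print([c[1:].upper() for c in countries])
# ['OLIVIA', 'HAD', 'INLAND', 'RAN', 'MAN', 'PAIN']
```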
Some of the upper level problems from Project Euler are fun. I don’t know if I’d say they need creativity to solve, but some involve a level of cleverness.
He’s probably already heard of it.
Trying to beat the market as a quant requires a ton of inventiveness because you are competing against all the other quants in a game that is close to zero sum. It isn't a straightforward application of skills at all. There are few things more challenging than beating the financial markets regularly.
Yeah, I get it and so does he. He already has a shit ton of skills. He wants to develop his inventiveness, his ability to think outside the box. Can you suggest anything? It doesn't need to involve higher math or math at all -- just be challenging, because he's quite smart (has a physics phd)
Here’s an out of the box suggestion: he shouldn’t waste time on more brainy games, math puzzles, etc. What is more likely to be useful for inventiveness and problem-solving skills is developing other parts of his mind, for example:
Learn to play a musical instrument.
Do intricate woodworking
Learn to make pottery.
Join a jiu-jitsu school.
You get the idea. :)
I think you're probably right, but I don't think he would buy it. He has a super-intense work ethic and would not be able to divert that far from things that are work-like. Your suggestion did give me an idea, though. I remembered that somebody had told me that the older professors at MIT were all doing things like learning to juggle or studying Chinese, because of the evidence that novel brain challenges help stave off mental deterioration. I'll bet he would be willing to spend half an hour or so a day learning to juggle. That's not a big time investment, and it's *hard*. I tried learning it myself when my daughter was 10 or so because she was learning it. Playing the drums would be another thing -- even a single drum, doing tricky things like syncopation. And he loves classical music so cares about pleasing sounds.
Yes! Juggling is good. At a previous job a decade+ ago there was a juggling club, a bunch of engineers getting together to practice in a hallway. It is hard!
And Richard Feynman famously was a keen drummer. Of course we have Einstein himself as an accomplished violinist...
I was browsing some of Scott's SSC articles and came across one from about 10 years ago, "Marijuana: Much More Than You Wanted to Know." It focused on the societal effects of decriminalization, among other things. One question was how harmful marijuana is to regular users (if at all). Scott's conclusion was that it may have serious psychological effects, but "Marijuana does not have a detectable effect on mortality and there is surprisingly scarce evidence of tobacco-like side effects." I find that amazing if true. A few years ago my beloved ex-Governor, with a stroke of his pen, shut down every vape shop in the state (Massachusetts). He was moved by the deadly danger (Think of The Children!) of inhaling flavored water vapor. But he, like the studies cited by Scott, found no such risk from inhaling smoke and chemicals from marijuana purchased in the legal dispensaries that are springing up like (ahem) weeds all over the state. And which pay a *lot* of tax money into state coffers.
I wonder if the last 10 years have produced more information about the physical risks (if any) of marijuana. I seem to recall seeing a recent piece on elevated risk of stroke and heart failure among regular users. Also, one of Scott's commenters suggested that legalization could remove a funding source for organized crime. Again, I think I've seen evidence to the contrary . . . but I don't know for sure. Does anyone have better knowledge of the topic?
I would guess smoking one joint is far more dangerous (relatively speaking) than smoking one cigar or cigarette because of the extent to which the smoke is typically inhaled. With the latter, many people barely really inhale, but when smoking a joint most people inhale as deeply as they possibly can, and then hold their breath for as long as possible, to absorb every bit of the smoke! So that obviously means far more smoke gets into the small bronchial tubes.
However, set against that is the fact that probably most people don't smoke nearly as many joints as a smoker would puff through cigarettes, typically a pack a day at least and possibly more.
I'm not a smoker, and never have been, but I do have an anecdote to share: a woman smoking some kind of ultra-lite cigarette was asked how she liked them, and responded that they were good because they were better for you than regular cigarettes, but it was sometimes hard to keep her fingers on the little holes.
How one smokes will clearly make a huge difference, as you noted the usual difference between joint and cigarette smoker methodology. Maybe some cigarette smokers try to get as much as they can out of their cigarettes, too.
I can't find where, but Scott did post something troubling about marijuana causing issues with vomiting and other stomach issues. I just tried Googling, but it wasn't posted in his 5 year update on his original MJ post.
There were reports out of Australia a decade ago regarding something they're calling cannabinoid hyperemesis, in which habitual or chronic smokers and other users of cannabis were stricken with severe abdominal pain. I personally had a couple trips to ERs for 'flare-ups' of pancreatitis which may have been aggravated, if not caused, by ingesting too much THC.
Since two flare-ups in January -- nine months after a distal pancreatectomy and spleen removal, and maybe weeks after a rough follow-up endoscopy -- I cut my smoking consumption by about 36%, and consciously reduced potency from 25-30% to maybe 12-17%.
To date, I haven't had a flare-up since January (five months), whereas historically, for the last seven years, I had them every three. But the surgeon, a Dr. Riall from Johns Hopkins, now practicing in Tucson, fixed me up. I recommend researching cannabinoid hyperemesis if you consume strong marijuana and have experienced the symptoms. Today's strong pot isn't the same as 1965 Mexican dirt weed. We're still waiting for a clinical consensus on what its effects on the body are, but my own health has improved since I reduced my own THC consumption by about 2/3s.
Don't you mean flavored water vapor and nicotine? While vape products can reduce the amount of tar and other chemicals a person inhales, they can increase a person's nicotine dependency.
As for demon Mary Jane...
https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2819635#:~:text=In%20females%2C%20after%20full%20adjustment,users%20compared%20with%20never%20users.
> Question Is there an association between cannabis use and all-cause, cardiovascular disease (CVD), and cancer mortality?
> Findings In this cohort study of 121 895 participants, in the fully adjusted model among females, the risk for CVD mortality was significantly higher among heavy cannabis users compared with never users; there was no association among males. No association was observed among females or males for all-cause and cancer mortality.
> Meaning The findings suggest that heavy cannabis use is associated with CVD mortality among females.
It's interesting that it seems to be much higher risk for females. I can't come up with a hypothetical physiological reason for such a significant difference between the sexes.
> It's interesting that it seems to be much higher risk for females. I can't come up with a hypothetical physiological reason for such a significant difference between the sexes.
Just looking at the top, it's probably green jellybean stuff (https://xkcd.com/882/).
> In males, after full adjustment, the hazard ratios (HRs) were 1.28 (95% CI, 0.90-1.81) for all-cause mortality, 0.98 (95% CI, 0.43-2.25) for CVD mortality, and 1.09 (95% CI, 0.71-1.67) for cancer mortality among heavy cannabis users compared with never users. In females, after full adjustment, the HRs were 1.49 (95% CI, 0.92-2.40) for all-cause mortality, 2.67 (95% CI, 1.19-4.32) for CVD mortality, and 1.61 (95% CI, 0.91-2.83) for cancer mortality among heavy cannabis users compared with never users.
Further in, they find:
> When excluding participants with hypertension, diabetes, obesity, current tobacco use, and previous CVD (55 517 participants [45.54%]), in females, heavy cannabis use was not associated with all-cause mortality (HR, 1.48; 95% CI, 0.64-3.39), cancer mortality (HR, 1.81; 95% CI, 0.73-4.50), or CVD mortality (HR, 1.81; 95% CI, 0.39-8.34). Similar results were observed among males (all-cause: HR, 1.15 [95% CI, 0.50-2.65]; cancer: HR, 0.92 [95% CI, 0.29-2.95]; CVD: HR, 1.35 [95% CI, 0.85-6.74]).
It doesn't take a 160 IQ to notice that of the 12 categories just listed, only one has a statistically significant effect, but a positive result in any of those 12 categories could be reported on as meaningful. The males actually showed a beneficial (though not statistically significant) effect on CVD mortality under the same condition in which the females showed their statistically significant deleterious effect on CVD mortality, so it's not like the women were at p = .04 and the guys were at p = .07 (which might be suggestive).
It may be that the study was too small/limited in sample to detect the effect, but in itself that suggests that cannabis isn't that dangerous.
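To put a rough number on how easily one of twelve comparisons can turn up "significant" by chance alone (a back-of-the-envelope sketch, assuming the comparisons are independent, which isn't quite right here):

```python
# Chance of at least one "significant" result in 12 tests at alpha = 0.05,
# even if cannabis has no effect at all.
alpha, tests = 0.05, 12
print(f"{1 - (1 - alpha) ** tests:.0%}")  # about 46%
```

So it's roughly a coin flip that something crosses the significance line by chance, before you even get to selection effects.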
It's an observational study so I bet it's selection effects. The study adjusts for confounders but as Scott has written it's almost impossible to do that adequately. For example I bet they didn't adjust for attractiveness and it seems totally plausible that less attractive women are more likely to be heavy users. Attractiveness tends to correlate with all kinds of other traits (IQ, general health) so this doesn't seem like a giant mystery to me.
Yes selection effects could/will bias the results. But we're talking about a sexual dichotomy here. Not a dichotomy between smart and dumb people or pretty vs ugly people. ;-) I suspect their methodology may have some holes in it, but I'm not going to take the time to pick that study apart. Otherwise, I have to ask what chemical or chemicals in Cannabis can increase the CVD mortality in women but not men, and by what mechanism?
My first thought when I saw your comment was "oh I bet heavy female users are fat dumpy women because attractive women have relationships and families," but then I read the abstract and realized that they controlled for BMI. But I still suspect it's something similar that's harder to control for. I just think when there's a gender difference in an observational study that the first three suspects all have to be sociological.
As a fat dumpy woman with no relationships and family, but also absolutely no desire to indulge in marijuana, I am folding my arms and glaring in all you guys' general direction.
Glaring very hard!
Normally, I'd agree. I guess I'm going to have to dig into that damn study. Sigh.
I wrote the Cat's Cradle review. Curious to hear what people thought of it. Reproduced on my personal blog here: https://livingwithinreason.com/p/the-postrat-gospel-a-review-of-cats
I'm also very sad that The Old Testament wasn't chosen. That one was my favorite. Other highlights that weren't chosen:
- The Nature and Origins of Mass Opinion / Win Bigly
- r!Animorphs
- Determined: A Science of Life Without Free Will (SF)
The opening summary seems to have nothing to do with the majority of the article, which is about Bokononism. In fact they're so disconnected that I couldn't tell whether Bokonon was from the book, or a real-life figure, until it mentioned the president from the summary part, seven paragraphs in. I have no idea what part Bokononism played in creating ice-9 or getting it passed around. So with the focus of the review clearly being on how you feel about Bokononism, why give the summary at all?
(The Old Testament review never had a chance because it was a hilarious troll. Maybe it got into something in the later parts, but that's too much to invest in the joke.)
I enjoyed the Animorphs one, in a weird way... I was already sure I'd hate the book, and the review was entertainingly negative.
I enjoyed it as well, as an old Animorphs fan. Never read the fic, but sure was fun to read the review.
I didn't have time to read many, but I read that one. I liked it. It possibly didn't have enough of a big idea of your own to do really well here, but I enjoyed reading it as you linked a bunch of stuff together that I hadn't. Well done :)
Congratulations to the finalists, honorable mentions, and everybody who submitted or voted in this contest. I'm going to reserve comment until the list of finalists / honorable mentions is definitely 100% finalized (although my book is on neither of those lists, haha). There were some very fun reads. I do wonder how voting habits differed between people -- I only rated a couple a 10 and maybe a couple more as 9, but then very few of my reviews were below a 7 (even a clunky or poorly-conceived review can have merits; low scores were mostly for reviews that were completely off-base).
I reserved 8-10 for reviews I thought were finalist worthy. 5 was "average" as in a solid but uninspired review. I think I gave 2-3 reviews an 8-10, and 6-7 reviews got between 5-7. Can't remember exactly how many I read at this point.
Whereas I only reviewed three of them and never went above a 5.
Can recommend this blogpost on the work of Michael Levin: https://www.bitsofwonder.co/p/a-revolution-in-biology? As a teaser:
"In all these experiments, the genes of the worms are never edited. And what’s even wilder is that these changes are enduring: the two-headed worm produces offspring that are also two-headed, indefinitely. We’ve achieved a permanent change in the structure of the worm, without changing its genes. We have transcended the genetic code and are instead learning to crack the bioelectric code of the body."
I see nothing after a quick skim that suggests you shouldn't just watch Levin's lectures; they are on YouTube.
Is there a specific playlist? I could not find a 'lecture series' as such.
https://youtu.be/GxgTczCIkM8?si=aHHvvBnyBmltyQLx
I really liked this one
Thanks! Time to add this to my never-ending list of things to learn about...
I'd add it to the top of the list; butterflies remembering training from their caterpillar stage, even though the caterpillar's brain *liquefies*, is so far out of the box for the standard world view.
Has anyone here been following the latest EM drive story? Apparently Charles Buhler has created a propellantless propulsion drive that can generate 1 g of thrust. All of the reporting on it I could find was something something... electrostatic fields. Anyway, here is the patent:
https://patentimages.storage.googleapis.com/8a/02/f1/475852b3ddc8bc/WO2020159603A2.pdf
My own stance is that this is probably another in a long line of either frauds or measurement mistakes, like perpetual motion machines. I won't believe it until someone builds a working prototype. But I am interested if anyone with the right technical background has any insights. And Buhler had a distinguished career at NASA, which possibly makes him not a crackpot.
If it can really generate 1 g of thrust, then Buhler can sit on it and levitate. If it can generate 0.01 g of thrust, he can hang it on a pendulum and we can watch the pendulum visibly deflect from vertical (but do check for hidden wires or magnets). Even at 0.0001 g we wouldn't have much problem figuring out if it's real, though it would require specialized test facilities and expertise.
But what always happens, and I've seen nothing to suggest this is any different, is that the inventors claim that their prototype can produce ~0.000001 g of thrust, look here at the data from our thrust stand!, and then there's some math that says if someone gives them lots of money they'll be able to build a 1 g version. And they'll "prove" this with a thrust stand that typically gives ~0.000001 g of noise.
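For a sense of scale, here's a quick static-pendulum calculation (just a sketch; the 1 m arm length is an assumed example):

```python
import math

# Static deflection of a pendulum holding a thruster: theta = arctan(thrust / weight).
arm_m = 1.0  # assumed arm length
for t_over_w in (0.01, 1e-4, 1e-6):
    theta = math.atan(t_over_w)
    print(f"T/W = {t_over_w:g}: {math.degrees(theta):.4f} deg, "
          f"{arm_m * math.sin(theta) * 1000:.3f} mm sideways on a 1 m arm")
```

That works out to roughly 10 mm of deflection at 0.01 g (easily visible), 0.1 mm at 0.0001 g (specialized kit), and about a micron at 0.000001 g, which is exactly where thrust-stand noise lives.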
Measurement mistakes are the likely explanation. I looked at the patent application and it's absurd, with Claim 1 basically describing an electrode to which a voltage has been applied. No, seriously, that's the invention:
"An apparatus for generating a force on an object, comprising:
an object comprising at least one electrode having at least one electrically conductive surface,
wherein at least one voltage is applied to said at least one electrically conductive surface;
wherein the application of said at least one voltage to said at least one electrically conductive surface generates an electric field giving rise to an electrostatic pressure acting on at least one surface of said object, thereby generating a electrostatic pressure force on said at least one surface;
wherein said electrostatic pressure force is characterized by a net resulting electrostatic pressure force acting on said object."
Although I've seen some ridiculous things granted as patents in the past, my more recent interactions with the various patenting authorities make me think they are taking their jobs more seriously now, tending to seriously interrogate the novelty aspects of the claims.
Depending on how the "net" in "_net_ resulting electrostatic pressure force" is interpreted, this sounds like it either
violates Maxwell's equations
or
was anticipated by a 1787 gold leaf electroscope as prior art
Well yes, but the problem is even simpler: his Claim 1 teaches nothing. You can't write a claim that says "apply voltage to surface in a way that results in miracles". Well, you actually can, but it's useless.
There's another thing that is probably not well-understood by those who haven't dealt with patents: one can patent things that don't work, it in fact happens all the time. I certainly have my share of those. The patent is a right to sue those who use the invention without my permission; if my "invention" doesn't work, I just wasted a 5-figure sum on getting a patent that is worthless.
But looks good on paper!
Wouldn't an investment of $10-20K make it more likely that the inventor is serious? I guess it only shows they believe in their invention, rather than any evidence the invention actually works, but still. Is the bragging right of having your worthless invention patented worth that much?
ETA: Buhler is associated with the startup company Exodus Propulsion Technologies. So I guess it makes sense to get the patent so they can show it to investors to raise more money. Even if the whole thing is a scam it makes sense from a financial view.
"Buhler is associated with the startup company Exodus Propulsion Technologies. So I guess it makes sense to get the patent so they can show it to investors to raise more money. Even if the whole thing is a scam it makes sense from a financial view."
This does make sense. A patent portfolio is always touted as a key asset.
I can't speculate about their motives. For all I know they truly believe in what they're doing, which still doesn't mean it's not worthless. I'm shocked that they filed the application with Claim 1 written as it is, given that they did engage a patent lawyer. It's possible that the only lawyer willing to touch this subject isn't the best one...
OTOH I've seen some pretty bizarre patents (oh how I wish I still had a link to one that was literally gibberish - but of course the examiner didn't find any prior art so it got granted!), so I genuinely have no idea what the motivation would be...
Many Thanks! Mostly agreed.
>his Claim 1 teaches nothing. You can't write a claim that says "apply voltage to surface in a way that results in miracles".
Well... Claim 1 could be interpreted as "apply voltage to surface" (which indeed teaches nothing) - but then watch for this particular kind of miracle (which, would, if it were to actually work, teach something, except it doesn't actually work...)
>one can patent things that don't work, it in fact happens all the time.
Agreed. I think there is one exception, that perpetual motion machines have to be accompanied by a working model. AFAIK, nothing else requires a demonstration that it works - just that it is novel (not fully anticipated by prior art), and "useful" (in the sense that, if it _did_ work, it would do _something_ - almost anything).
>The patent is a right to sue those who use the invention without my permission
Yup.
Back when I was at IBM, and my subtree tried to patent software inventions, what we usually did was to make the first claim as general as possible without being part of prior art, and then have a flurry of subsidiary claims which got more and more specific (and therefore more and more defensible as not being prior art, but less and less economically valuable). There were a lot of "the method of claim 1, where the some-barely-constrained-part-of-claim-1 is a more-specific-implementation-of-that-part-of-claim-1"
Yes, I can write these sub-claims in my sleep at this point :)
Many Thanks! I didn't _quite_ get to that point... :-)
I'm not, but after reading your comment I searched and found this video of his presentation: https://www.youtube.com/live/DJjPi7uZ2OI?si=EyelxRFAZb-AM2u1&t=6084
He's apparently showing how the EM drive would fit into a traditional flying saucer design and justify UFO sighting stories. Uh... no, sorry, I'm not going to give that guy any benefit of the doubt.
Every previous version of this that I've heard of has had some tiny thrust that turned out to be measurement error. If it's really producing a whopping 1 g then it should be simple to demonstrate.
I'll believe it when it's powering my hovercar.
A reactionless drive would violate the basics of the core theory (https://frankwilczek.com/2014/coreTheory.pdf), so for it to work a new theory beyond the standard model of particle physics (and general relativity) would be required. So nothing like EM drive can function based on known physics, meaning all current theoretical justifications are automatically bunk.
It is not impossible (though highly unlikely) that an engineer would stumble over a completely new physics experimentally. After all, that's how most interesting discoveries, such as superconductivity or unifying electricity and magnetism were made. But that would be by pure luck or serendipity, and none of the justifications at this time should be taken seriously.
What's the alt-history in which an engineer stumbles across superconductivity, or unifies electricity and magnetism? Are we classifying Kamerlingh Onnes and James Clerk Maxwell as engineers? If so, then why not go the whole hog and also claim that engineers stumbled across the laws of motion and General Relativity, and for good measure maybe the theory of evolution and Fermat's Last Theorem as well.
Alright Sheldon, take it easy.
Maxwell wasn't an engineer; however, both electricity and magnetism were discovered long before Maxwell wrote his equations. In fact, the first electric motor is from 1832 and the equations from 1873, so this is probably the worst example you could pick.
And while we are at it, those equations are now taught in engineering courses more than physics. Engineering courses are by and large applied physics anyway.
Electricity and magnetism were discovered prior to Maxwell but the comment to which I was responding said ‘unifying electricity and magnetism.’ Unification was accomplished by Maxwell.
And in the US, Maxwell's equations are generally taught by physics departments, frequently as part of the freshman physics sequence, which, to be sure, is commonly required for engineering students too.
You're correct, it's absurd to think a mere engineer could discover any scientific principles.
You need to be the younger son of an earl to do that:
https://en.wikipedia.org/wiki/Robert_Boyle
This was an unusual time: the science of physics was still young, it was "easy" to discover basic principles, relationships, etc. We are now in a place where it's incredibly unlikely to have discovered some fundamental law that governs macro-scale mechanics.
Chances of an engineer discovering a violation of Newton's 3rd law with human-scale objects are asymptotically at 0.
Accidental discovery of a _basic_ principle indeed seems exceedingly unlikely. How do you count something like the yield of Castle Bravo, https://en.wikipedia.org/wiki/Castle_Bravo#High_yield, where the interaction of the lithium-7 with 14 MeV D/T fusion neutrons yielded more tritium than the designers had anticipated? Discovery? Bug? Oversight?
It could be all three - and, I think, speaks to my point about the potential for abundant discoveries in a young science. I mean, it was the first dry-fuel thermonuclear bomb the Los Alamos lab tested - it would be weird if they didn't discover all kinds of unexpected effects.
Certainly engineers have made scientific discoveries (the cosmic microwave background is a recent example that comes to mind), just not *all* of them, and not the specific examples that were quoted.
Well, since it seems to have fallen short of finalist status... Did anyone read the Frankenstein review? Any thoughts/critiques?
I liked it. I knew the common popular understanding of it doesn't match what it actually says, but was impressed by the details and how much larger that mismatch is. 3 things I wanted to see and missed -
1 the back story of the context in which it was written. I am familiar with this story but was hoping to see what you would say about it.
2 discussion of the disputed claim that it is the first piece of science fiction.
3 discussion of the disputed claim that she had help writing it.
Ah, sorry that my review wasn't fully satisfactory, though it's nice to hear that you liked it anyway! I'm afraid that 1-3 are all fairly unsolvable omissions, from my perspective. I prefer to assess a book independent of any context other than my own brain, and it's pretty rare for the story behind the story to grab my attention. I really like butting heads with the work in singular, trying to figure out what makes it tick in a way that I personally appreciate/understand. I like to think that makes any review I write that much more uniquely insightful, but it comes at the cost of understanding other people's experiences and expectations around the book. And making me into a bit of a dunce, when I independently stumble across a revelation that everyone already had fifty years ago.
was really disappointed to not see it on the shortlist, I really liked it! Made me want to pick up the book, and for me thats the ultimate test of a good review
I liked how it showed the book has a much more nuanced view of what constitutes "monstrous" than the "playing god is bad" one you might get from a lit class. I think it explained that pretty well.
Oh good, my efforts were not fully in vain. Glad to hear that you enjoyed it!
I found it good, but it didn't stand out compared to some other reviews. It gave a very nice analysis, and also a retelling of the story and the movie, but not much more than that. I don't think I took away a lot that is not directly related to the novel. For finalists, I usually have this feeling that the review connects the book to something bigger.
I think the strongest part is when you compare the book and the movie, and the considerations of whether Frankenstein was "born" a monster or "made" a monster later by how it was treated. I did not hear these thoughts for the first time, so it was not quite an eye-opener. Perhaps I just know the story too well. But overall I did enjoy reading the review, and found convincing what you wrote.
Weird, I feel like the heart of a good book review should be the book itself, not what the book connects to. I mean, don't get me wrong, the most common issue with book reviews is to focus too hard on summarizing, but even still. Surely there's such a thing as overcorrecting. Eventually, you aren't actually reviewing the book, just sort of, whatever's on your mind.
Ah, anyway, my tangential thoughts aside, I can settle for a "good" review. Bummer you didn't get more out of it, but thanks for reading through anyway!
Three puzzles about gnomes and boards (otherwise unrelated). None is very easy or very hard; they're roughly ordered from simpler to tougher.
1. All gnomes are either knights (always say the truth) or knaves (always lie). Each square of a 4x4 board is occupied by a gnome. It is known that there are both knights and knaves among them. Each gnome declares: "Among my neighbors there's an equal number of knights and knaves". How many knaves are there?
(neighbor means along straight lines, not diagonals)
2. Nine gnomes stood in squares of a 3x3 board, and each said hello to his neighbors (again, straight-line neighbors). Then they got off the board, got on the board again (possibly changing positions) and said hello to their neighbors again. And then again - overall they filled the board and said their hellos three times. Prove that some gnomes ended up not saying hello to each other.
3. Each square of a 7x7 board is occupied by a gnome. If two gnomes are neighbors (straight line, not diagonals), their beards' lengths are at most 1 inch apart. Now we take all the gnomes and sit them at a round table. Prove it can be done so that all the neighbors again never have their beards' lengths differ by more than 1 inch.
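If anyone wants to check their answer to puzzle 1 without reading the rot13 spoilers below, here's a minimal brute-force sketch (it just enumerates all 2^16 knight/knave assignments under the stated rules):

```python
from itertools import product

N = 4  # board size

def neighbors(r, c):
    # orthogonal neighbors only, as the puzzle specifies
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < N and 0 <= c + dc < N:
            yield r + dr, c + dc

knave_counts = set()
for cells in product([True, False], repeat=N * N):  # True = knight
    if all(v == cells[0] for v in cells):
        continue  # the puzzle guarantees both knights and knaves are present
    board = [cells[i * N:(i + 1) * N] for i in range(N)]
    consistent = True
    for r in range(N):
        for c in range(N):
            nbrs = list(neighbors(r, c))
            knights = sum(board[nr][nc] for nr, nc in nbrs)
            claim = (2 * knights == len(nbrs))  # "equal knights and knaves among my neighbors"
            if board[r][c] != claim:  # knights say true things, knaves say false things
                consistent = False
    if consistent:
        knave_counts.add(sum(not v for v in cells))

print(knave_counts)  # the possible knave counts over all consistent boards
```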
Ertneqvat bar, gur tabzrf ba gur rqtrf (ohg abg pbearef) ner fheebhaqrq ol guerr bgure tabzrf naq guhf pyrneyl ylvat. Guvf nyfb znxrf gur pbeare tabzrf xanirf. Tvira gung gurer ner xavtugf, bar bs gur tabzrf va gur zvqqyr zhfg or n xavtug, juvpu pna bayl or gur pnfr vs nyy bs gur sbhe zvqqyr tabzrf ner xavtugf. Fb va gbgny, gurer ner gjryir xanirf.
Ertneqvat gjb, gurer ner gjryir unaqfunxrf unccravat cre vgrengvba, sbe n gbgny bs 36. Gur gbgny ahzore bs unaqfunxrf erdhverq (sebz Tnhff) vf avar gvzrf rvtug qvivqrq ol gjb juvpu vf nyfb guvegl-fvk. Fb gurer vf ab fynpx -- rirel unaqfunxr jbhyq unir gb or jvgu n arj arvtuobe. Cre vgrengvba, bar tabzr unf sbhe arvtuobef, sbhe tabzrf unir guerr arvtuobef naq sbhe tabzrf unir gjb arvtuobef.
Jevgvat gur inyrapr bs gur tabzrf nf zhygvfrgf, jr svaq gung gur bayl jnl sbe rnpu gb frr rvtug arvtuobef vf univat guerr tabzrf jvgu gur inyrapr cnggrea {sbhe, gjb, gjb} naq fvk tabzrf jvgu gur inyrapr cnggrea {guerr, guerr, gjb}. Rirel unaqfunxr vaibyirf rknpgyl bar tabzr jvgu inyrapr cnggrea guerr (ng gur rqtrf). Guvf zrnaf gung gur tabzrf jvgu gur {sbhe, gjb, gjb} cnggrea arire funxr unaqf jvgu rnpu bgure. □
Gur ynfg ceboyrz, vg jbhyq or gevivny vs gurer jnf n Unzvygbavna plpyr va gur 7k7 tevq. Rkcrevzragnyyl, guvf frrzf snyfr. Jung V raq hc jvgu vf n 2k2 oybpx jurer V jbhyq unir gb ragre sebz gur svefg ebj yrsg naq rkvg sebz gur obggbz ebj evtug. Jvgu vagrtre-inyhrq orneq yratgu, vg frrzf gung ng yrnfg bar qvntbany jvyy unir gb funer gur orneq yratgu, fb jr pbhyq whfg whzc nybat gung qvntbany. V nffhzr gung jvgu erny inyhrq orneq yratgu, bar pbhyq fubj gung gurer rkvfgf n fhvgnoyr obhaq sbe gur orneq qvssrerapr nybat bar qvntbany.
Alright I'll try 2 since nobody has done it yet...
Ybbxvat ng bhe tevq, gurer ner gjryir terrgvatf cre fgrc, sbe 36 gbgny. Naq gurer ner 9*8/2 = 36 cnvef bs tabzrf jub arrq gb terrg rnpu bgure. Fb va beqre gb znxr vg jbex, jr’yy arrq gb or rssvpvrag, rnpu tabzr arrqf gb terrg rknpgyl rvtug bgure tabzrf.
Pbeare tabzrf terrg gjb arvtuobhef, rqtr tabzrf terrg guerr, naq zvqqyr tabzrf terrg sbhe. Gb terrg rknpgyl rvtug arvtuobhef, n tabzr arrqf gb rvgure or va gur zvqqyr bapr naq pbeare gjvpr, be rqtr gjvpr naq pbeare bapr.
Jr pna nyybj guerr tabzrf gb unir gur zvqqyr fdhner bapr naq pbeare fdhnerf gjvpr, naq gur erznvavat fvk gb unir gur rqtrf gjvpr naq pbeare fdhnerf bapr. Guvf neenatrzrag nyybjf rnpu tabzr gb terrg rknpgyl rvtug arvtuobhef, ubbenl!
Ohg vf gurer na neenatrzrag gung yrgf rnpu terrg rnpu bgure tabzr rknpgyl bapr? Ab! Gur guerr tabzrf jub tb va gur zvqqyr bapr naq pbeare gjvpr jvyy arire trg n punapr gb terrg rnpu bgure.
V’z cerggl pbaivaprq ol guvf cebbs ohg vg srryf n yvggyr unaq-jnivre guna V’q yvxr.
I think you've got it, congrats! It's much the same way as I solved it, but for completeness sake, let me blow your mind with a different & very succinct solution someone showed me.
Pbybe gur fdhnerf va gur purff cnggrea. Gb zrrg&terrg, gjb tabzrf zhfg fgnaq ba fdhnerf bs qvssrerag pbybef. Gurer ner rvtug cnggreaf bs fdhner-pbybef bire guerr ivfvgf (gjb gvzrf gjb gvzrf gjb) naq avar tabzrf, fb fbzr cnve jvyy unir gur fnzr cnggrea naq jba'g zrrg.
3. Gnxr bayl gur 24 rqtr naq pbeare tabzrf sebz gur obneq naq frng gurz nebhaq gur gnoyr juvyr cerfreivat gurve beqre. N 5k5 obneq bs tabzrf erznvaf. Ntnva gnxr gur rqtr naq pbeare tabzrf sebz gur erznvavat obneq vagb nabgure, fznyyre evat. Vafreg gung fznyyre evat vagb gur nyernql frngrq evat ol bcravat gung evat naq gur frngrq evat ng gur cbfvgvbaf jurer tabzrf jrer arvtuobef ba gur obneq. Ercrng hagvy nyy tabzrf unir orra frngrq.
1. Ba n 4k4 obneq gurer ner 16 fdhnerf: 4 ng gur pbearef, 8 ng gur rqtrf (abg pbhagvat gur pbearef), naq 4 va gur vagrevbe. Tabzrf va gurfr cbfvgvbaf unir erfcrpgviryl 2, 3, naq 4 arvtuobef nf qrsvarq. Fvapr gur tabzrf ng gur rqtrf unir na bqq ahzore bs arvtuobef, gur fgngrzrag gung gurl unir rdhny nzbhagf bs xavtugf naq xanirf nf arvtuobef pnaabg or gehr; gurersber, nyy rqtr tabzrf ner xanirf. Pbeare tabzrf unir 2 arvtuobef, juvpu ner obgu rqtr tabzrf, naq gurersber xanirf. Gurve fgngrzrag vf guhf nyfb snyfr, naq nffhzvat gur tabzrf pna'g or ubarfgyl jebat, pbeare tabzrf zhfg nyfb or nyy xanirf. Prageny tabzrf unir 4 arvtuobef rnpu: 2 rqtr tabzrf naq 2 prageny tabzrf. Gur sbezre ner pregnvayl xanirf. Vs gur prageny tabzrf ner xavtugf, gura gurl unir nf arvtuobef gjb (rqtr) xanirf naq gjb (prageny) xavtugf, znxvat gurve fgngrzrag gehr. Vs gurl ner xanirf, gurl unir sbhe xanirf arvtuobef, znxvat gurve fgngrzrag snyfr. Obgu cbffvovyvgvrf ner pbafvfgrag. Ubjrire, jr xabj nf tvira gung gurer ner xavtugf ba gur obneq. Gurersber, gurer zhfg or sbhe xavtugf (gur prageny fdhner) naq gjryir xanirf (nybat gur rqtrf).
1. (rot13) Sbhe xavtugf va gur prager, naq gjryir xanirf fheebhaqvat gurz.
Gur cynprf nebhaq gur rqtrf jvgu guerr arvtuobef zhfg or xanirf, nf gurl zhfg or ylvat nobhg rdhny ahzoref. Guvf sbeprf gur pbearef gb or xanirf gbb, nf gurl unir gjb xanir arvtuobef rnpu. Vs gurer zhfg or ng yrnfg bar xavtug, naq jr cvpx nal bs gur erznvavat prager fdhnerf sbe gurz (gur ceboyrz vf ebgngvbanyyl flzzrgevpny fb sne nsgre nyy) gura gung sbeprf gur bgure prager fdhnerf gb or xavtugf gbb, nf rnpu xavtug arrqf gjb fhpu nf arvtuobef.
I have never had a cavity in my life, despite failing to maintain the oral hygiene habits that my dentist would recommend. I was thinking about this in the context of the Lumina info that Scott has shared here, and I wondered if it might be interesting to them? I don't really know. It feels mildly insane to say "hey, anyone wanna investigate my oral bacteria?" but it also seems that investigating oral bacteria has led to some interesting discoveries in similar cases. So I am interested in letting someone investigate, if they are interested.
Does anyone know who I would talk to about this, if anyone?
I cut down on sugar a few years ago, for unrelated reasons, and haven't had any dental problems since. I've even gotten compliments from dental assistants.
Have you read the famous Atlantic story about dentistry kinda being bullshit?
https://www.theatlantic.com/magazine/archive/2019/05/the-trouble-with-dentistry/586039/
My elderly dentist conceded the article's main thesis (overall oral health is probably based far more on diet and genetics than formal dental care) was more accurate than not. That certainly seems to have been the case for me; I did things like not seeing a dentist for over ten years and never flossing with zero negative consequence.
Heck, I have a baby tooth that was never replaced and it's hanging on as strong as the adult teeth around it. Both my elderly dentist and the improbably hot young dentist who replaced him when he retired told me I can expect it to hang on the rest of my life - they have 90-year-old patients with baby teeth.
I haven't read this article but I basically already believed this. Especially after the recent revelation that flossing is an op I have very little respect left for dentistry.
Why? Shouldn't cleanings and filling cavities still be done?
I'm pretty anti doctor but I still tolerate dentists
I get cleanings done out of a combination of seeking social approval and believing them to be harmless and potentially beneficial. I also have never had a cavity and do not have any particular opinions on fillings.
My girlfriend is also like that. Perfect teeth and she brushes them maybe twice a *month*.
How are your gums?
I don't know. Normal, I guess?
When was your last cleaning? How many millimeters are your pockets during probing? If they’re all healthy that would be really fascinating - no cavities AND pristine gums?
about a week ago, and i have no idea sorry
The Question is:
What do you wish that everybody knew?
https://thequestion.diy/
It's a very simple site where whoever can answer that question uploads their answer. It's something of a postrat project, yet some of the answers I got from right here, the ACX comments section. You can see it as crowd-sourced wisdom I suppose. Maybe even as Wikipedia, but for wisdom instead of knowledge.
Take everything you know, everything you have experienced, compress it into a diamond of truth, and share it with the world!
You can read some more about the project, including the story of its purely mystical origin, on my blog:
https://squarecircle.substack.com/p/what-do-you-wish-that-everybody-knew
That in TeX, if you want to use commas as thousands separators in math mode, you've got to surround them with braces; otherwise TeX assumes they're punctuation and automatically adds a space after each one.
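A minimal illustration of the difference:

```latex
\documentclass{article}
\begin{document}
% Bare commas in math mode are punctuation: TeX adds a thin space after each.
$1,000,000$

% Braced commas become ordinary atoms: no extra space, so it reads as one number.
$1{,}000{,}000$
\end{document}
```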
Recognizing a few of those from the old thread here, I'm disappointed "Kung Fu" didn't make the cut.
Well, if you insist like this, how can I refuse?
> What do you wish that everybody knew?
Themselves
Cool project!
I'm curious how you're thinking about ranking/sorting/discoverability of content from a reader's perspective. Opening up your app, the first post I read says:
> Everyone has a unique principle that is both their source and final destination. The truth they were born to manifest. It is not subjective, since it existed before anyone was born. Neither can it be spoken. But it is truth. And the truth is infinite.
To me, this sounds less like wisdom, and more like someone trying to sound Deeply Wise, and puts me off from spending more time scrolling through the feed.
In short, I think the concept of crowdsourcing wisdom is an interesting one, but I think the reading experience needs to be redesigned from its current state in order to provide value for readers.
Well, you can't win with everyone, but there is a search bar, and did you see answer #2? It's not at all limited to woo-woo stuff (I happen to believe in woo-woo). I did consider having likes, but then that would mean people would have to sign in, and it's something of a hard sell to sign in just to like. Something that could be added is a tagging system I guess, so that if someone only wants to see secular answers, they can do that.
Another thing I'm thinking about is that maybe the site should load at first with a random order, then there's the dropdown if you want to go chronological/new first. But thanks a lot for the feedback! Did any of the suggestions I made make sense to you, and/or do you have a concrete suggestion to improve the UI?
I'm curious if anyone who read (or wants to read) my review of Discrimination and Disparities by Thomas Sowell has any feedback for me. I would love to know how to make my reviews better. All feedback is welcome.
https://thegreymatter.substack.com/p/book-review-discrimination-and-disparities
I found it an excellent review; I gave it 9/10. I think the reason it wasn't a 10 was that I found the basic distinction of 1A vs. 1B not very novel, and some of the conclusions also not so deep. For example, I found South Africa really nice as an example, but the explanation in terms of cost was on a rather superficial level. You can explain almost everything in terms of balancing costs.
But those are really some minor nitpicks, I thoroughly enjoyed the review.
Thanks for sharing that. I'm glad you liked it.
Overall, I liked it, and a lot of effort clearly went into it, including cross-referencing many other works by Sowell. Unlike another commenter, I don't have much of a problem with a review that "just" summarizes the book, especially when, as I noted, it incorporates references to the author's other work as well. Indeed, it was almost my highest-rated review. I hope it ends up being included in a finalists list.
When I read the review, I thought that the summary of Sowell was pretty good, but some of the critiques weren't perfect.
Although I read it a while ago, if memory serves, one of my issues with the critique of Sowell related to the argument that Sowell's explanation of disparities doesn't provide a satisfying solution to the problem.
I thought that the critique was misplaced, in that it ignores Sowell's perspective. His perspective, if I recall, is that the default isn't uniformity, such that disparity would be an artificial aberration; disparity is the natural state of affairs. He isn't merely providing an alternative explanation to the regnant discrimination-based one for disparity, he's providing an alternative perspective on the matter entirely.
Once one accepts that disparate outcomes are the natural state of humanity, rather than artificial aberrations, it's no longer obvious that this is even a problem that begs a solution. At the minimum, the nature of the problem would be different.
I'm glad you liked it. I agree that I should have made it more clear that he doesn't consider this a problem to be "solved". Thanks for your feedback.
I liked your review better than any of the finalist reviews that have been posted, so far. I hope it ends up being included in the finals.
Personally, I think you spend way too much time summarizing the book, and too little time analyzing the book. Once you get around to the analysis at the end, the review becomes much more interesting.
I suggest that you include the analysis as you go along. For example, your discussion of what you see as his category error re types of discrimination could be inserted after your summary of his argument, instead of being left to the end.
Thanks. I tried it both ways—moving my discussion up and leaving it at the bottom—and ended up with that. I was wondering which was best so I appreciate your feedback.
I read it. I felt it did a competent job of communicating the views of the author and laying out the thesis and the evidence in support. What I didn't get was a feeling that you had something to say or add to the views of the author, or on the topic more generally. I'm sure opinions differ, but a very strong review, in my view, is one in which the author brings considerable knowledge or insight to the topic and adds value to the content of the book, or is simply a fantastic communicator whose skill shines through the review. I didn't get that from this review.
I agree with this. Thanks for the feedback.
How are people interpreting OpenAI adding a former NSA chief to their board? The public statement was he will help "better understand how AI can be used to strengthen cybersecurity".
Is it a mistake to discount that public statement and instead view the hiring as a response to Aschenbrenner's "Situational Awareness" paper and therefore a first step of AGI becoming a soft government project?
Completely superficial. As last year's failed coup demonstrated, the OpenAI board is almost completely impotent, which doesn't make them any different from most other corporate boards. Ideology almost always collapses when it tries to oppose economic forces (cf. communism).
Charitably, a proactive attempt to roll with inevitable government and security restrictions (some of which will be hidden from the public). Getting a familiar face on board can help smooth the process, and signal to regulators that OpenAI is taking a mature, reasonable approach and doesn't need to be made an example of (to encourage the others).
It's the revolving door in action, and it marks OpenAI's transition to being just another private-industry company that wants nice juicy government contracts and lobbies for easier regulations for itself, dangling the carrot of a big fat sinecure at the company for officials who go easy on them and promote their interests while in power:
https://www.investopedia.com/terms/r/revolving-door.asp
"The term "revolving door" refers to the movement of high-level employees from public-sector jobs to private-sector jobs and vice versa. The idea is that there is a revolving door between the two sectors as many legislators and regulators become lobbyists and consultants for the industries they once regulated and some private industry heads or lobbyists receive government appointments that relate to their former private posts.
Such instances have grown in democracies in recent years with increased lobbying efforts and have led to debate over the extent former government officials are allowed to utilize connections formed and knowledge attained in previous jobs in public service to enrich themselves or be overly influential on shaping or watering down pending legislation."
That the guy is ex-NSA is just more evidence of strengthening government links. "Look, we're good guys, see? one of your own is on our board! we'll be compliant with all requirements around security and access! and in return...if you scratch our backs, we'll scratch yours".
I imagine these hiring/board decisions have much longer cycles than the time between Leopold's paper and now.
Very true. Well, would it be fair to assume the hiring (while not directly related to Aschenbrenner) is in fact related to drastically increasing their security measures against sophisticated thefts? Or is it easier to take it at face value, that they want to sell AI as a cybersecurity product?
I don't think you add people to your board to actually *do* things.
Can I make a suggestion to edit the book titles so they're easier to find? Complete Rhyming Dictionary is under T for The Complete Rhyming Dictionary. And I don't see Spirit of Rationalism under either Spirit or The.
This practice of not skipping the definite or indefinite article at the beginning of a title for alphabetization has always annoyed me.
Libraries drop the initial article to order books but things like ‘top 500 songs of all time’ usually start with all the songs that begin with the indefinite article ‘A’.
I’m surprised that this contest doesn’t use a little macro in the spreadsheet to strip the ‘a’s and ‘the’s before sorting. I mean the site is pretty rich with coders that could do this in a couple minutes.
Right! Drop an initial "a", "an", or "the". It's a very simple rule.
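Something like this would do it; a Python sketch rather than an actual spreadsheet macro, and obviously not anything the contest currently runs:

```python
# Sort titles while ignoring a single leading "A", "An", or "The".
import re

def sort_key(title: str) -> str:
    return re.sub(r"^(a|an|the)\s+", "", title.strip(), flags=re.IGNORECASE).lower()

titles = ["The Complete Rhyming Dictionary", "A Canticle for Leibowitz", "Discrimination and Disparities"]
print(sorted(titles, key=sort_key))
# -> ['A Canticle for Leibowitz', 'The Complete Rhyming Dictionary', 'Discrimination and Disparities']
```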
There was a Far Side collection that had an appendix at the end with all the comics listed alphabetically, and all of them were under T because they all started with "The one about..."
That title is condensed from The History of the Rise and Influence of the Spirit of Rationalism in Europe
Ah, that would be under "Europe, The History of the Rise and Influence of the Spirit of Rationalism in"
I was praised here months ago for admitting I was wrong about the amount of corruption in the Biden family, so I'm incentivized to follow up now. My original thought (as best I can recall) was that all politicians do not-quite-illegal influence-peddling (it's called 'campaign contributions') and that Hunter was just a little more brazen than usual. At some point I thought "ok this is out of the ordinary" but now I can't find any trace of emotional response to any of it. This feels a bit like cynicism - the things we don't like when our officials do them aren't really illegal because they make the laws, which is why the best anyone on either side can come up with is a silly gun charge or some garbage about mis-classifying a payoff. It feels exactly like my response to the media (not just the mainstream legacy media, but 'independent journalists' on twitter too) always being wrong and incomplete in a way that supports their point, without technically telling lies.
I'm not trying to wriggle out of having been wrong, mind you, just noting a kind of fatigue that goes beyond not being outraged (that hasn't been the case for me for a very long time).
Feel like linking the Daily Show talking about the trial of Senator Robert Menendez, which ends with an aside about all the legal forms of corruption other senators and representatives can get away with. https://www.youtube.com/watch?v=5udtSQ-LtM0&t=376s
No, I think you should walk it back if you feel differently now. How do you know you were even getting kudos for intellectual rigor (rare) and not for giving people the opportunity to imagine you humbling yourself before their big wrinkly brains on the internet (common)?
Don't hold to some sort of intellectual standard that privileges views you don't hold too much. It's always good to give a couple of extra weights to the opponent's side of the balance beam to counteract the ego sitting on yours, but that doesn't mean you should let your conscientiousness force you into something equally wrong.
Is Hunter unusually corrupt? Yes, because he is a drug addict fuck boi; his corruption is more salacious and less hidden. Is he more corrupt than, e.g., the Trumps? Fuck no. Not even close.
The Obamas? Yes, but also ehhhhh.
The Bushes FUCK no, not even close.
The clintons? Ehhhh... maybe, maybe not. They are smooth operators.
The Bushes again? FUCK no, not even close.
The Reagans? FUCK no, not even close.
You have to go back to Carter to find a president you can firmly say "The Bidens are definitively shady compared to this guy"
Are you saying Joe/Hunter Biden is LESS corrupt than the Trumps? How do you watch the news and come to this conclusion? Hunter literally went around the world to foreign corporations, sat in front of their leadership, put his dad the Vice President of the United States on speakerphone, and raked in $20 million USD this way. Most notably flying to Ukraine with his dad on Air Force Two to serve on the board of directors of a Ukrainian oil company, an industry he knows nothing about in a country he has no connection with except for the fact his Vice President dad was in charge of Ukrainian foreign policy. How convenient.
I'd like to see what news you have that could exceed Hunter's corruption, because it's a whopper: https://oversight.house.gov/the-bidens-influence-peddling-timeline/
https://www.usatoday.com/picture-gallery/news/politics/2024/04/11/trump-associates-with-legal-trouble/73295028007/
This doesn't answer my question. No one on that list is named Trump. And none of these people are accused of corruption. It's a bunch of stuff like "lying under oath" which is what they get you for when they can't get you for any other crime. If that's the standard you want to use, I've got a laundry list of Biden administration executives guilty of such things.
Wonderful, link them.
No, that would be beside the point. Please re-read the context of my question: justfor, the OP, was comparing corruption in political families.
Wait, how bad has it gotten? I've been out of touch, but it seemed like Hunter Biden was a typical case of something that pops up every now and again, and he himself didn't really reflect on the rest of his family. Hunter certainly seemed to be trying to give the impression that he was selling access to his father. The real question was how much his father was into it, and I hadn't heard that there was any real evidence that he'd done anything serious?
Nothing has really changed at all when it comes to the question you are asking. The only development was that Hunter was convicted of lying on a gun application (stating he wasn't a drug addict when he was). His tax case continues.
The right was hoping he would get acquitted or similar on the gun charge so they could use it during the campaign as evidence of corruption. But if anything, it's evidence there are people in the "deep state" out to get the Bidens: this is a charge that is basically never brought, when it is brought a conviction basically never results in jail time (which Hunter has a good shot at getting), it's potentially unconstitutional based on recent SC rulings, and Biden has publicly said he won't pardon Hunter no matter the sentence. Personally, I think Hunter is kind of a sacrifice that Biden has to make to show how not corrupt he is compared to Trump. The right hates this because it hurts their talking points, so they come up with other BS to cry about.
(I hate both parties, so don't take this as an endorsement of what Biden or the left is doing.)
That is literally the opposite of what happened. The gun charge was being used by Hunter's lawyers with the connivance of the prosecution to create a plea deal that immunized Hunter in perpetuity for the tax evasion charges that have yet to be litigated. The plea deal agreed to was so egregiously flawed that the judge threw it out. Once the fix became public the prosecution had to proceed for political reasons.
I imagine that the "right" hoped that investigation of the finances of the tax evasion schemes would uncover evidence (or implication at least) of corruption and that is equally why the defense was eager to make them go away.
Yes, they had the calamity of the plea deal, but I'm not sure how that makes what I said "literally the opposite of what happened". I don't think the plea deal was a sign of a "fix"; basically any other person charged with this crime would plead out. If they didn't have a history of violent crimes, the plea deal would likely have no jail time. But Hunter may get some jail time (though not a lot; I believe I read the guidance is somewhere around 1.5 years at most). So being a Biden has led him to get a harsher sentence than any "normal" person.
Hunter's tax case, which is ongoing, is much more likely to lead to any signs of corruption. I think the right saw this gun case as an embarrassing/scandalous episode to paint the Bidens as slimy (just like Trump, so don't worry about Trump being slimy; though this part goes unsaid).
Well, the Democrats are the party pushing gun control hard (and while I don't like guns, Americans have a legal right to own them).
So if the son of the current president, who is a Democrat, turns out to be the same "drug addict lying on application to get gun" that all the scaremongering is about, then they have to prosecute him or else appear like big fat hypocrites.
Here is a guy I would not trust to send to the shops to buy a litre of milk who brazenly flouted the rules, flaunted his rule-breaking, and is the son of the highest official in the land for the party which wants (allegedly) to take all guns away from private citizens. What else can you do but let him have his day in court?
Yeah, I basically agree. The potential irony is that Hunter's case could lead to rules/laws against drug addicts possessing guns being struck down. If the case made it to the Supreme Court, that would be the likely outcome. So to not look/act corruptly, the Biden admin has to undermine one of his party's main goals! I suppose that's a sign of a type of virtue (though, like you, I'm skeptical that the Dems - or any political party - can be truly virtuous).
I think Hunter Biden is definitely the poster child for "you do not want this person owning a gun" 😁
I had forgotten about the tax charges as mentioned above! So I suppose the gun trial at least worked in his favour that way. It certainly distracted attention.
What would count as "anything serious"? If Joe sat in on a phone call with Hunter while he was making a deal, would that be serious?
If Joe remained aware that certain courses of action which might be advantageous for the US could imperil his son's business dealings in China and Ukraine, would that be something serious?
Repeat customers are fairly good evidence a product was delivered.
I'd be shocked if there weren't several drug trials on record where the cops only found money and messages about drugs, and that still ended in conviction and failed every appeal.
Psychics, dowsers, and astrologers get repeat customers. Yes, that's evidence that a "product" was delivered, but there's always been great demand for the product "tell me what I want to hear".
That's true. And maybe there really are some escorts out there who just escort their clients to events in a totally above-board way and never touch their penises. You can't prove it either way.
Still, if you tell me you're one of those escorts I'm going to be skeptical.
When you put a well-connected person on your board, you are not necessarily hoping for direct quid pro quo, just a general position of advantage.
It's possible that Joe Biden is a saint and never once allowed his fuckup son's surprisingly lucrative business dealings to taint his judgement. Various shady foreign entities might have assumed he would, but he remained resolutely above it at absolutely all times.
Now, to be fair, all politicians have relatives, and they're all subject to the same issues. But Biden seems uniquely subject to it because Hunter is such a crackhead fuckup, and the gap between what he could achieve on his own and what he achieved with a powerful father is so clearly vast. I think Neil and Marvin Bush would have done just fine in business even without their father and brother being Presidents, but Hunter Biden would be giving handjobs for crack in a flophouse in Wilmington.
Hunter is just so incompetent there is no possible façade to coat over his influence peddling. All of the huge sums of money and sinecures like the Burisma board position given to Hunter were purely to curry favor with Joe.
However, imagine if Beau Biden was still alive. He went to U. Penn., had a law degree, served in the military JAG, and was the AG of Delaware. If he was offered a fistful of lucrative deals, is it influence peddling? Obviously Beau was highly competent and could have been sought out for his own merit. And obviously it doesn't hurt that his dad was a long time Senator and VP for Obama. Who can say why people truly wanted to throw money at him. With Hunter, there is no such plausible deniability.
Beau probably would have been smart enough to turn them down, or at least smart enough to make it less obvious what he was doing.
I think this is part of how bias happens. If someone finds out about malfeasance on the other side, they have infinite energy for salaciously pointing it out and drawing comfort from that fact. If someone finds out it happens on their side, they get depressed and come up with reasons why it doesn't bear mentioning.
This isn't a conscious process, most people cannot help what they feel, but it is sometimes important to see emotions are puppeting you, in ways that you may not reflectively endorse.
I agree.
>This isn't a conscious process, most people cannot help what they feel, but it is sometimes important to see emotions are puppeting you, in ways that you may not reflectively endorse.
Yes. What looks like hypocrisy from the outside can just feel like natural pursuit of what _feels_ like "the more important case" from the inside.
I've considered both sides my sworn enemies for coming up on 10 years now. So I don't think that's it in my case, but I see the point you're trying to make.
>I've considered both sides my sworn enemies for coming up on 10 years now.
I'm curious. I tend to think of both the left and the right as enemies of individual freedom. Different freedoms in the two cases, but neither a friend to freedom (though neither of the USA factions is as bad as Stalin or Hitler or Mao, of course).
What makes them your sworn enemies?
Ah yeah, you said that to me last time. Whoops, sorry for forgetting.
I think I even postulated another generic source of bias last time too!
time is a donut
Yeah, at this point I think fatigue has set in for everybody. I can't keep up with the number of cases being brought against Trump (except when they collapse into comedy like the Georgia one), and Hunter Biden has dragged his family through the mud so many times in public already that an actual conviction is an anti-climax compared to wondering when the next dick pic, or photo of him taking drugs in the company of ladies of negotiable affection, will be released.
>the best anyone on either side can come up with is a silly gun charge or some garbage about mis-classifying a payoff.
I think the classified documents charge is the actual serious one people should care about, but the judge in that case seems determined to stall until after the election.
I think it's becoming clear that every President, VP, SecState etc with routine access to vast reams of classified documents winds up mishandling them somehow.
And usually, that's because so much routine correspondence gets classified, because there is literally no incentive not to stamp every piece of paper you can get your hands on.
EDIT: I was wrong. There IS some incentive not to do that. See John Schilling's reply pointing out 28 CFR § 17.22.
There is literally a law saying you can go to jail if you do that. It's very rare for anyone to be actually convicted, because it's difficult to prove in any specific case. But it does factor heavily into the training people like me have to take every year for handling classified information.
I think "there is a law against that, seriously, don't do that or you'll get in trouble", regularly repeated, is literally *some* incentive. Possibly inadequate, but it's there.
And there's another incentive, which is that when something is classified it becomes an immensely greater PITA to deal with even if you do have All The Clearances, so if you're thinking about classifying something that you're going to have to work with regularly, you'll be particularly incentivized to not do that.
Wait, there's really a law against classifying documents that don't actually contain sensitive information?! I did not expect that.
28 CFR § 17.22 - Classification of information; limitations.
TL;DR: you can't classify information unless you can clearly and specifically define how it would harm national security to reveal it; you can't classify information if there is "significant doubt" as to whether it needs to be classified; and you very specifically can't classify information "to conceal inefficiency, violations of law, or administrative error; to prevent embarrassment to a person, organization, or agency; to restrain competition; or to prevent or delay release of information that does not require protection in the interest of national security".
IIRC, the theoretical penalty can be up to five years in prison. In practice, as with most other classified-information violations, the penalty if you get caught is usually a slap on the wrist and then you need to find another job, because nobody wants to go through the trouble of convicting you. That's too much like work, and embarrassing if they try and fail. But if you're a stubborn and obnoxious enough jerk about it, they may make an exception.
The difference is that normal politicians apologize and immediately return the documents when they discover them, whereas Trump actively lied to the government and repeatedly tried to hide the documents and prevent the government from retrieving them.
I’m not saying that difference doesn’t matter at all, but I don’t think we should so easily dismiss top officials being that cavalier about the rules and only fix it when they get caught. Like, these are not new or obscure rules they are violating, and it’s hard to believe the violations were unintentional (I suspect they are lazy rather than nefarious, but still).
Material is not supposed to be classified unless it getting into the wrong hands would result in grave damage to national security. Either people are vastly over-classifying (also illegal!) or they are basically ignoring proper handling protocol.
They're vastly over-classifying documents.
Perhaps, but that’s also illegal, and there’s a process for dealing with stuff that is marked classified but shouldn’t be, and it’s not “leave it in a box until you get caught”. Plus if you deal with classified it’s your responsibility to understand what is classified, and what information you produce would be classified. You will be briefed in detail on this before you are going to access any of it. It is not the case that someone will swoop in out of the blue and declare “surprise this was all classified and no one told you”. It’s negligent at best.
This is like a person who is caught driving 20mph over the speed limit while drunk pointing to other people getting away with accidentally parking in a fire zone to try to excuse their own behavior.
Like sure, overclassification is a thing, and a lot of people from both parties have discovered that they accidentally possessed classified documents *and then returned them*, but that doesn't excuse the whataboutism here.
What? No, it’s like two people both get caught drunk driving, and one of them says “yup, caught me aw shucks” and gets in the squad car and has his lawyer on speed dial, and the other yells and screams and goes on a sovereign citizen rant about how the cops have no right to detain him. I mean yes, the latter is worse, but both are equally culpable for drunk driving. And you want to excuse the former completely because they were so polite when they got caught.
It is not credible that Biden and Hillary were unaware that it's wrong to keep a box/server full of classified material in your personal home/office. My understanding is a lot of the stuff was marked, and even if it wasn't, everyone has to be periodically trained and sign a document saying they understand what information is classified. And even the stuff that isn't classified isn't stuff you're supposed to have sitting around your home.
This wasn’t “accidental”, it was lazy. It’s harder to deal properly with classified, and they thought they were too big to bother with silly rules for little people.
Where do people who are really into poetry find other such people to hang out with?
In the SF Bay Area there's a huge amateur poetry scene. I'm sure other cities have their own. I suggest you Google "<Your city's name> poetry open mics" "<Your city's name> poetry readings". If you're interested in writing groups: <Your city's name> poetry writing groups <or workshops>". Good luck!
What kind of poetry do you like? I signed up for a poem a day with Paris Review, and hate about 70% of what they send me, especially the stuff that's prose about the most prosaic things imaginable:
I'm in the kitchen rolling the cardboard back of some matches into a
little column while the cat rubs against my calves, then
realize I need to pee.
Although today I got Merrill. Poetry seems to have fallen off a cliff somewhere around 1980, or maybe I sort of fell out of the back of the truck. And I'm really not that picky! I like poetry as far back as Milton. In this century I enjoy many of the poets that are widely known, and occasionally stumble on somebody more obscure and binge on them.
Oh, I agree.
I blame the postmodernists.
> What kind of poetry do you like?
I tried to come up with a generalization and failed; my tastes are eclectic. I mostly do poetry in Russian, since that's the language I've always spoken. (I don't expect that to make it harder to organize something like a poetry evening; where I am there are a lot of people like me who fled from Putinism.) But poetry in other languages I understand fascinates me just as much, I just know less of it.
I guess I could point to Russian underground avant-garde poetry of the 1920s and 1930s, like the Oberiu, and specifically Alexander Vvedensky and Daniil Kharms, as a great source of my inspiration; but it's just one of my many loves; I am all over the place.
So I just want people to throw any poetry they think of at me, and I want to throw some of my favorites at someone.
I'm really into poetry. Community around bad poetry is easier to stumble into at a poetry reading. Community around good poetry is harder to find.
If you're really into bad poetry you can patronize a local 'poetry night'
I wrote the review of The Trial. https://docs.google.com/document/d/1Ki5XsE0jkxZtd2XAeyTAJw1ZjLh2Cu-matUYKAhA6-s/edit#heading=h.pglj3zsxcjcp
Any feedback?
>On the other hand, this book, which describes a legal system about as totalitarian as one can imagine, was scribed before the rise of Mussolini, of Hitler, of Stalin and Mao
Having read the review and not the book, nothing about this legal system strikes me as totalitarian. I'm in fact left wondering if Kafka is deliberately breaking every rule he can in order to create an anti-legal system.
No, it's not totalitarian, because it's not partisan or political in any sense. I used the term as a cheap hook, hoping it would make the review more interesting, and also because that is how the work is often, wrongly, interpreted. I follow up your quote by saying: "What, then, is the book about? 'Totalitarianism, perhaps.'" The "perhaps" is key, because it isn't about totalitarianism. Nevertheless, it does indeed foreshadow what a totalitarian regime might *feel like*. The loss of privacy and the constant concern about all-pervasive authorities presage totalitarianism, no?
Kafka doesn't write about how things are objectively but rather what they feel like. It's spooky, I think, how what Kafka felt like in Prague in 1915 would resemble what Prague would feel like to many others in 1955 or 1970.
All the interpretations of the novel I've read claim that "guilt" means guilt in a religious sense. Maybe that's true in that the novel has multiple meanings at once and there exists a meta-level, but I claim that the most textual reading of the novel--which disappears behind most interpretations--is that the book is about the Law running amok. Man creates Law, submits to it, then the Law goes crazy and Man is unable to regain control because the Law is a kind of superintelligence or superstupidity, same thing. Kafka was a lawyer by day, so he knew something about it.
I really wish I'd written a better review so that Kafka would be a discussion point on ACX now, on the hundredth anniversary of his death, while he's both more popular and relevant than ever. To me his relevance is in perceiving that most key elements of totalitarianism have nothing to do with politics; they are simply inherent in human nature and scalable to ubiquitous levels through technology. Ideology has nothing to do with it.
If you haven’t read Václav Havel’s play The Memorandum you’d enjoy it
I have read it! Like it a lot!
Honestly, the review didn't work for me. I guess that this is due to my background. I am from Germany, and this book is a classic in Germany, including the aspects that you described, like the absurdity of never knowing what is going on or what the trial is about.
I think you did describe this well, but only to a level that I already knew. I can imagine that this book caught you extremely flat-footed when you didn't know what to expect, and you described it well. But to me the review only repeated things that are quite often said about the book.
Sorry, I guess it sometimes happens that you just have a different connection to the book than your reader.
I suspect he picked up on a lot of aspects of Central European bureaucracy that are less familiar to people elsewhere.
In retrospect I should have changed the opening sentence in which I state the book "took me totally by surprise". Since it is a classic novel most readers are familiar with, at least by reputation, my first idea was to try to write a humorous review from the point of view of someone who took the book to be an actual account of real events he found unbelievable and shocking. But I found I could only keep that schtick up for a few paragraphs, so reverted to a more straightforward tone. I've actually read the book several times over the years but thought my misleading opening sentence could stand because at least it jumped right into things.
I thought my ultimate interpretation of Justice seizing control like an alien intelligence (or an AI) might be original, but I only made the AI analogy as a subtle hint (too subtle, perhaps) because I didn't want it to swallow up the other themes.
Thanks.
Are there any vegans here who oppose lab-grown meat? I have a piece considering some arguments to this effect, but I don't feel like I canvassed all the possible (smart) reasons why vegans might oppose it: https://open.substack.com/pub/wollenblog/p/vegans-against-lab-grown-meat?r=2248ub&utm_medium=ios It was hard finding vegan anti-lab-grown-meat arguments that were clear and fleshed-out. I'd be interested to hear more arguments.
I have seen (and largely agree with) a few arguments that you did not mention.
First, that eating lab-grown isn't necessarily inherently an animal rights issue, but that it would potentially be unreliable as a commercial industry. How do I know if this burger is actually lab-grown or if it was from a cow that was killed? Theoretically, regulations around labeling, supply chains, lab inspections and such could assuage a lot of that concern. But if I see a lab-grown burger on the menu at a mom-and-pop hole-in-the-wall, I'm not really going to trust it.
Second, environmental concerns. A lot of people are vegan/vegetarian because animals take a tremendous amount of energy and water to keep alive, and contribute to pollution of waterways and the air. What is the carbon cost of a pound of lab-grown ground lamb, compared to pasture-raised lamb? How does the lab dispose of byproducts? How much water does the process require, at scale? I would need reliable answers to these before I would be willing to replace any of my current non-meat protein sources with lab-grown meat.
FWIW, I'm not a vegan but I significantly limit consumption of animal products for ethical and environmental reasons.
> How do I know if this burger is actually lab-grown or if it was from a cow that was killed?
Sounds to me like an isolated request for rigor, because this argument could be applied to any food labeling. How do I know if this chocolate really is vegan? If this milk is organic? If these eggs have been laid by free-range hens? If this bottle of water is low in sodium? If this corn is GMO-free? If this carrot has been locally produced?
I disagree, for two reasons.
First, I generally trust that a packaged product contains what it says it does because of food regulations in the US. Once you take off the packaging, all bets are off. Some random person could lie to me about what package a product came out of, and at a small restaurant with narrow margins, there is an incentive to lie.
Second, while I hope that the carrots and corn I buy at the farmer's market are organic, the death of a sentient creature is not on the line. It's reasonable to demand more rigor when you're talking about meat than vegetables.
> (smart) reasons why vegans might oppose it
The closest thing to such an argument I'm aware of is that you didn't ask the cow permission for her stem cells. These are harvested from a biopsy (or placenta tissue can be used IIRC) and won't harm the animal. But you didn't ask permission, so perhaps it's wrong.
One argument I've seen is that it normalises meat-eating. If your position is that meat-eating is wrong, even if done without the cruelty involved in factory farming, then lab-grown meat is like a way of committing sin without the guilt. It's still meat, and meat is still murder. If omnivores/BLOODMOUTH CARNISTS have the option to eat meat without guilt, they won't be encouraged to give up meat-eating completely and so the horrors of factory farming and using animals for other purposes will continue.
This strikes me as similar to the Bolsheviks getting mad that workers were actually uniting and negotiating improved conditions from their bosses.
I mean, if meat is murder, go with less murder! I can’t imagine a scenario in which cheap and tasty lab meat *increases* the amount of “on the hoof” meat consumed.
I'm going out on a limb here, but it's a religious objection. Eating meat is bad, simpliciter, and it doesn't matter if it's lab-grown meat. You should not be eating meat because eating meat is evil.
The rationalisation of that is that, until and unless *all* existing farming of animals for meat is done away with, lab-grown meat is the 'reasonable believers' of Sam Harris and the Atheist Horsemen - they give cover to the crazy suicide bombers and abortion clinic bombers etc. Just as "We don't endorse that crazy violent stuff, but yeah, us and them both believe in God/Allah" gives the extremists a shield, so does "I only eat the lab-grown meat" give a shield to the BLOODMOUTH CARNISTS eating torture meat from the cruel agri-business and abattoirs.
Funnily enough, there's a current scandal in Ireland about horses being exported for meat:
https://www.rte.ie/news/investigations-unit/2024/0613/1454588-rte-horses-expose-triggers-europe-wide-food-safety-investigations/
"It was hard finding vegan ...arguments that were ...fleshed-out."
Well see, there's your problem right there! Try a nice lentil roast instead? 😁
My lab-grown vegan arguments against lab-grown meat have no natural immune system, and so rapidly become infested by memes and outrage politics.
I have a piece up arguing that NAP-style libertarians should be theists: https://open.substack.com/pub/wollenblog/p/murray-rothbard-types-should-totally?r=2248ub&utm_medium=ios Interested in your thoughts!
I think this is impressively backwards. A problem with moral realism is the difficulty of coming up with objective justifications. Normally, moral realists have some sort of assurance that they'll be able to resolve this. Your argument assumes realism, but how could an objective justification ever *not* look like a coincidence of the sort you're complaining about? That just seems like accepting the argument against moral realism, but refusing to give it up and instead blaming every possible candidate for "right morality".
While most deontological libertarians don't justify the NAP through prosperity directly, such justification, where attempted, generally turns on game-theoretic facts which are closely related to the reasons for expecting that prosperity.
> Rothbardians believe in what seems to be a most incredible coincidence. They think, on the one hand, that our natural rights require anarcho-capitalism, and maintain, on the other, that this set-up just so happens to be the ideal economic system for human flourishing.
You're putting the causality backwards, and you'll find Nazis, monarchists, social Darwinists, egoists, etc., etc., all agree: "it is good to be strong".
Ancaps are right wing, even if 1/3rd of us are furries. I'd expect anything to the right of secularized Christianity to agree with the statement "good morals make good systems".
As a NAP- and anarcho-capitalism sympathetic libertarian atheist, I don't think this correlation requires god to explain. In some sense, there's a fully general counterargument here, which is that *a priori* god is inherently more unlikely and complex than this particular phenomenon arising basically by coincidence. It's also not clear to me at all why positing god offers any explanatory power here; I'm not aware of any religion whose god(s) behave in a way that is consistent with this outcome. You could, I suppose, assert a more deist-like position, but then this particular god really only seems to exist as an explanation for this particular phenomenon, which is circular and leans extra-hard into the objection above.
But, I think we can be more specific as well. Here's a general argument: whatever issues individual people have, any institution made up of them and ruling over them will have the same or related limitations, while also having (by definition) the ability to use force to conceal its failures. David Friedman makes an argument along similar lines, in more detail, here: https://www.youtube.com/watch?v=Bpn645huKUg. Maybe some theorists don't address this mystery or even realize it's surprising, but I don't think that means there is no good non-supernatural explanation.
The argument that god is necessarily complex presumes computationalism or naturalism.
You're gonna have to expand on that if you want me to take it seriously
A natural god needs a huge brain made out of neurons; a supernatural god could have intelligence, or whatever, as an intrinsic property.
Isn't this assumption of inherent intelligence exactly what's wrong with theistic hypotheses in the kolmogorov complexity framework? Humans are intelligent as a result, far enough down the line, of the relatively simple behavior of relatively simple particles. In order to describe a universe with humans, it suffices to describe the fundamental physics, and their intelligence follows.
If god could be broken down this way, it would no longer be god--its intelligent behavior has to be described directly, with a vastly more complicated algorithm than that describing the behavior of fundamental forces and particles.
I think your understanding of what "complexity" means is entirely backwards here.
Doesn't the complexity of your hypothetical god-entity depend on what sort of god-entity you're hypothesizing? A "watchmaker" god-entity would only need to be able to define and implement the 26 fundamental constants of our universe and designate the 17 fundamental particles and their properties. Then inject > 3.2 × 10^71 Joules into a bubble the size of the Planck scale and watch what happens! This god-entity wouldn't have to be able to compute the outcome of such a universe. It could just sit back and observe the emergent phenomena that may or may not result from the parameters it applies to its experiment. And this hypothetical god-entity may not even understand the tools that it's using. Such tools could have been constructed by other god-entities or by higher-level god-entities.
OTOH, if your god-entity is intimately involved in the workings of its universe, then it would require a computational power with more energy and bits than what is contained in our universe. Of course, this may be logical overthink on my part. But speculating about god-entities — or the lack thereof — will only yield questionable results, because our observational perspective vis a vis time and space is limited. The hypothetical god-entity — if it exists — may exist in a universe that has different physical laws than ours does. Its intelligence may be based on different principles from ours.
You can't get a complex brain or complex behaviour straight out of the standard model, you need starting conditions as well.
Now, you can assume a small universe with highly complex starting conditions, but that doesn't give you an argument that human intelligence is actually simple.
Or you could assume a large universe with low Kolmogorov complexity, since it is every combination of everything... but theists can use a similar argument, that god is both simple and all-encompassing.
Now that I've stopped laughing (and it's not at you, just the notion of Libertarians and some of the Old Testament events re: the NAP), let me say I think maybe Deists, but not theists. Based on:
"According to the NAP, people have indefeasible and absolute property rights in their bodies"
According to theism, you don't, or at least not if you're a Biblical theist, because God as the creator has the ultimate right over us all. We don't own our bodies absolutely and cannot do as we wish with them unfettered. That's going to run up hard against the NAP there.
Mostly, I can see why many Libertarians are atheists because that is the ultimate "Me, myself, I alone am the master of my fate and decide what I shall and shall not do" stance. "Invictus" is the mindset that comes to my mind for them:
"Out of the night that covers me,
Black as the pit from pole to pole,
I thank whatever gods may be
For my unconquerable soul.
In the fell clutch of circumstance
I have not winced nor cried aloud.
Under the bludgeonings of chance
My head is bloody, but unbowed.
Beyond this place of wrath and tears
Looms but the Horror of the shade,
And yet the menace of the years
Finds and shall find me unafraid.
It matters not how strait the gate,
How charged with punishments the scroll,
I am the master of my fate,
I am the captain of my soul."
But theism requires you to bow your head before another, greater power than Me, Myself and I. I think a lot of Libertarians are too stiff-necked to bow.
OK, you got it. That's why I don't believe in God. I have to kiss my boss's ass, why do I want to kiss someone else's on weekends?
Agreed, in a rule-utilitarian system the miraculous coincidence between being morally righteous and delivering the best outcomes goes away: the rule is morally righteous *because* it delivers the best outcomes.
Since it didn't make the cut, could someone give feedback on my review of "The Signal and the Noise"?
I found that you gave a lot of very good arguments, and you convinced me that Silver and many other Bayesians attack only strawman versions of frequentists. This part of the review was really great, and would have deserved to be in the final if that was the only part.
But at the same time, in my eyes you did the same to the other side, and attacked strawman versions of Bayesians. For example, you criticized that Bayesians would continue to call for more evidence, and you made it sound as if Bayesians would otherwise refuse to draw a conclusion before having tons of clear evidence. But that is absolutely not true. Just the opposite: a lot of the best Bayesians (like Scott, but also some superforecasters) try really, really hard to reason with limited evidence. If anything, that is something that Bayesians try harder at than frequentists.
And that is just one example. Another is that all the introductions to Bayesian thinking that I know stress over and over again that you should not be overly attached to the calculations that you do. They are a tool to clarify your thoughts, to find out which arguments are important or unimportant. But you should never ever take the end product at face value. Yet the strawman Bayesians that you presented made exactly this mistake.
Throughout the whole review, I had the very strong sense that you fought against a caricature of a Bayesian. That meant for me that your review was composed of a really strong part and of a really weak part (which were often intermixed). In the end, I settled for a 7/10.
Added: just to make this clear, I wrote a lot about the weakness of your review. That's because I want to be constructive. But the other part of the review was really, really strong, and I learned a lot from that!
Appreciate this thorough review (of a review, lol) and all the kind words. I agree that Scott and many more modern Bayesians are a.) much less hostile to frequentist statistics, and b.) much more circumspect about how reliable Bayesian approaches are. If I'd been arguing directly against Scott, the review would have been significantly different.
I do think that some of Silver's ... let's call it Bayesian absolutist views remain powerful among Bayesian thinkers to this day. I felt like the review was a good place to discuss to what extent these ideas should be outright rejected, as opposed to just deemphasized.
For one, there's the rootclaim debate, where a 'pure Bayesian' approach was explicitly tried and, let's admit, failed. And while people like Scott will outwardly admit it's just not practical to do, they also offer some grudging admiration for it as a kind of Utopian aspiration we should hope to strive for. In reality it's an overly broad application of a limited statistical principle, and should be recognized as inherently the wrong approach. That discussion didn't happen in the fallout of the rootclaim debate. Instead, there was some talk about maybe the mechanics need to be tinkered with to make it more reliable.
Then there's the tendency of people to define themselves as "Bayesian" or "Frequentist". I'm not convinced Bayes' Theorem is applicable to all or even most situations. There's a visceral difference between defining yourself as a Bayesian, versus accepting Bayes' Theorem as a useful tool in your statistical toolkit. While I once thought of myself as something of a Bayesian, I no longer see that label as beneficial. I feel like the label made it more difficult for me to see the limits of Bayes' theorem. How does a Bayesian proclaim that Bayes' theorem is the wrong tool for the job?
Yes, I can sign on to all that you said now. I think I always had a non-absolutist view of the term "Bayesian". Bayes' theorem can either be read mathematically, as a formula, or philosophically, as the concept that a prior P(A) can be changed by evidence B, and that the terms one should think about are P(B | A) and P(B).
I never really interpreted "Bayesian" in the mathematical sense; for me it was always the philosophical sense, and there I find it a very useful philosophy. And I would guess that this is also how Scott and other early LessWrong people see it. Probably frequentists would also be fine with that weak form, and I never perceived it as such a stark dichotomy. Your argument does make sense that this Bayesian philosophy has its limits, and that one should not over-extend it. But yes, I do acknowledge that there have been fights between those two camps (which I probably missed since I was late to the party), and there are absolutists who see it differently, with Nate Silver apparently one of them.
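Just to spell out the formula we are both gesturing at (the standard statement, nothing beyond it):

```latex
% Bayes' theorem: posterior = likelihood * prior / evidence,
% with A the hypothesis and B the observed evidence.
P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}
```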
Makes sense. I remember reading Silver's book when it first came out. It tracks closely with influences I see in the LessWrong community. (For example, the tagline for ACX.) Bayes is great, but it gets you in trouble when you overestimate your certainty of P(A), P(B), and P(B|A).
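A toy illustration of that trap, with numbers I'm making up purely for illustration, just to show how sensitive the posterior is to the inputs:

```python
# How much the posterior P(A|B) moves when the inputs are mis-estimated.
# All probabilities below are invented for illustration only.
def posterior(p_a: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)  # law of total probability
    return p_b_given_a * p_a / p_b                          # Bayes' theorem

print(posterior(0.01, 0.9, 0.1))  # ~0.083
print(posterior(0.05, 0.9, 0.1))  # ~0.321: nudging the prior a few points moves the posterior a lot
```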
These were my thoughts while reading it:
A bit hard to follow, so far.
The psychologizing of Silver preferring zero-sum thinking due to his background in poker seems tenuous and adds little to the review. Hypothesizing about why an idea occurred to someone is usually less useful than simply evaluating the idea itself.
Continuing to read it, I find it hard to follow what his issue with the book is as far as frequentism vs. Bayesianism.
Thanks for the feedback. I guess the feud between Silver (and many Bayesians) and frequentist statistics is a bit esoteric.
As to psychologizing Silver; he explicitly calls on his readers to reorient their thinking in exactly this way, converting every open question into a zero-sum game. He states that this is what he does, and gives examples (outlined in the review), then proclaims this as a virtue that others should follow for all questions.
Maybe it's a stretch to say that Silver's experience in certain realms of statistics informs his recommendation to approach all questions in the same way, but I don't think it's a huge leap in logic, after having read the book.
It would be convenient if you give the link.
Apologies, here you go: https://docs.google.com/document/d/1Ki5XsE0jkxZtd2XAeyTAJw1ZjLh2Cu-matUYKAhA6-s/edit#heading=h.7h3vbxtxjnj2
I attempted to answer Robin Hanson's question "Why Is the Demand for Prediction Markets So Low?" in a Substack post: https://substack.com/home/post/p-145694816. Would appreciate any comments.
I think it's a decent introduction to the zero sum problem, but personally, I'd point everyone to https://worksinprogress.co/issue/why-prediction-markets-arent-popular/ as the best post on the subject, since it goes into more depth on explaining why prediction markets also fail at attracting the gambling crowd, hedging, etc.
I can't find "Spirit of Rationalism" on the master Google doc. I checked "Spirit..." and "The Spirit..." Is the precise name different? Was it not on the master doc?
The History of the Rise and Influence of the Spirit of Rationalism in Europe
Thanks
A couple of Youtube commentaries about the bullshit behind the AI hype. Sabine Hossenfelder takes on some of the silliness in Leopold Aschenbrenner's "Situational Awareness: The Decade Ahead" essay. Link to his essay in the description, should you be so inclined to read it.
https://youtu.be/xm1B3Y3ypoE
And Tina Huang calls out some of the bullshit from AI leaders. IMHO they're the latest generation of snake oil grifters that I've seen over and over again in Silicon Valley.
https://www.youtube.com/watch?v=8BlRT7Ktw1c
Situational Awareness makes some insane claims: in just the first few pages he says he expects AGI by 2027 and ASI by 2030, which is the sort of thing I would expect from a bad-faith caricature of people concerned about AI. I have never heard of the author, but apparently he is one of the few brave visionaries who can see the Truth long before it comes, or so he says, at any rate.
I'm still reading it, as a lengthy and comprehensive exposition of a position that I fundamentally disagree with and can't fathom how anyone could sincerely hold.
> IMHO they're the latest generation of snake oil grifters that I've seen over and over again in Silicon Valley.
I do not agree. I use copilot daily and it's a good tool, I would miss it if it was gone. It didn't transform the way I code but it's at the same time a productivity boost and reduces my ugh field around writing tedious code.
ChatGPT is really, really good too, and birthed the whole category of AI chat assistants. The time I've saved by asking it for some shell command (looking at you, ffmpeg) is pretty big, and it even works to fix my system (sometimes). Way better than the usual search engine experience. And that's mostly with zero-shot prompting.
All in all, they actually provide value.
I use CoPilot regularly, but like any other of the current generation of AI it frequently makes things up. For instance, my last query was for references that discussed the unusually high C>U substitutions in the SARS2 genome. Specifically, I asked for references about APOBEC RNA editing enzymes and their possible role in C>U substitution. Of the four references it gave me two were definitely real. It got the authors right on another, but Google Scholar showed the title was wrong. I couldn't locate the fourth in Google Scholar.
If you've been following our discussions on AI hallucinations on previous threads, we've got a chemist who can't get the correct reactions out of any of the current generation of LLMs. And we've got an etymologist who's found that they have 90% failure rates on the origins of English words. (Which to me is amazing — did someone forget to scan the OED when they created the training data?)
All the current LLMs seem pretty crappy to me.
Again, as I've said, it's great for code. Maybe I wasn't explicit enough. Copilot is Github copilot here. It works pretty well although there are some frustrating bugs (the quote insertion...).
They may be crappy for you, but they work pretty well for me. I can't imagine going back to having to google ffmpeg commands, for example. I mean, not literally, I can totally imagine and I often take "LLM breaks" where I code without them for a while just to test if I still can (I still can, I don't think they made me lose anything).
I don't know anything about your field of work and I don't have anything to gain by any of the big AI companies gaining AI valuation, so I won't try to sell you snake oil. But reporting from programming, specifically mainstream web programming, system administration on Linux, and scripting, they work really well.
Edit: to give a more general comment about their economic value: I'm far from being the best developer, but I'm well above average, and some people are really, really not great. It's currently easy for me to review LLM-written code (I can see when some colleagues abuse it). If they start being able to write a pull request that's well specified, which I expect they could do in 5 years, I don't see an economic reason to pay a below-average/average dev to do it instead. And software is a very, very big industry.
Other coders have said the same thing. However, they also said that the generated code sometimes contains errors. I wonder if the superior results in generating code come about because programming languages are more restrictive in their grammatical constructions than natural languages?
Could be that; could be that code gets constantly improved and fixed, so the version scraped will be the latest and often the "most correct" one; could be that most code that's online runs; could be that code is mostly doing the same things over and over; could be that it's way easier to judge the output of code; could be that the people making the LLMs have more experience with code; could be that the big AI labs think/know that there is more money to be made on software; could be that software people are better at integrating it into their workflow; could be that software benefits more from sheer quantity of writing than other industries.
I'm probably forgetting lots of other possibilities. I know a bit about AI from the software-engineer user side, and from the theoretical side, but that's about it, so my contribution is limited to "it works for me".
The OED is copyrighted (or however you call it), as you'll see if you've ever tried looking up words online, so I imagine they either wanted a very hefty fee for use of their material or outright refused.
https://www.oed.com/information/purchasing
Of course, it's also possible none of the really smart people working on AI ever thought about "how about we get it to read a dictionary?" because that's wordcel stuff not STEM stuff 😁
Most of the data sucked up into training sets must have been copyrighted — otherwise they would be limited to pre-1929 material. I don't see why the OED would have been ignored.
Of course, most of the lawsuits against generative AI are based on claims that they infringed on the copyrights of the creators.
Yup, this last Friday I retried a simple titration question on the GPT4o version of ChatGPT. Copying the relevant pieces from https://www.astralcodexten.com/p/open-thread-332/comment/59079643
>edit: BTW, I just retried the simplest possible titration question with the GPT4o (up to date, AFAIK) version of ChatGPT. https://chatgpt.com/share/23098ce9-30f9-483d-a6e8-5754ca5fe1b6
>tldr: It fell on its face again. It kept insisting that the slope at the equivalence point was infinite. I finally managed to force it to do the right derivation, but I had to force it through the algebra, one step at a time. This isn't so much the equivalent of leading it by the nose, more nearly leading it by the nose with hot pincers.
It doesn't _always_ fall on its face. A few plys earlier I asked it:
>Is light with a wavelength of 530.2534896 nm visible to the human eye?
and it correctly answered
>Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. This wavelength falls within the visible spectrum, which ranges from approximately 380 nm to 750 nm. Specifically, 530.2534896 nm is in the green portion of the spectrum, which is near the center of the visible range and is easily perceived by the human eye
https://chatgpt.com/share/003eb143-8041-47af-8d9f-698b2ce9ddef
But the only way I want to use the current ChatGPT on a question where I _don't_ know the answer is as a pointer to non-LLM answers which I then read.
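For anyone who wants to check the underlying claim above (that the slope of the titration curve at the equivalence point is steep but finite, not infinite), here is a minimal numerical sketch of the simplest case, a strong acid titrated with a strong base. It's my own illustration, not taken from the linked chats, and the concentrations and volumes are arbitrary round numbers.

```python
# Strong acid titrated with strong base: the exact charge balance
#   [H+] + [Na+] = [OH-] + [Cl-]
# reduces to h - Kw/h = (Ca*Va - Cb*Vb)/(Va + Vb), a quadratic in h = [H+].
import numpy as np

Kw = 1.0e-14           # water autoionization constant at 25 C
Ca, Va = 0.1, 0.050    # 0.1 M acid, 50 mL (illustrative)
Cb = 0.1               # 0.1 M base titrant

def pH(Vb):
    D = (Ca * Va - Cb * Vb) / (Va + Vb)       # excess strong acid, mol/L
    h = (D + np.sqrt(D**2 + 4 * Kw)) / 2      # positive root of h^2 - D*h - Kw = 0
    return -np.log10(h)

Vb_eq = Ca * Va / Cb                          # equivalence volume: 50 mL
dV = 1e-9                                     # tiny volume step for a numerical derivative
slope = (pH(Vb_eq + dV) - pH(Vb_eq - dV)) / (2 * dV)
print(f"pH at equivalence: {pH(Vb_eq):.2f}")                       # ~7.00
print(f"d(pH)/dV at equivalence: {slope:.2e} pH units per liter")  # large, but finite
```

With these numbers the slope comes out on the order of 10^6 pH units per liter (a couple of thousand per mL of titrant): steep enough that a sloppy derivation rounds it up to "infinite", but finite all the same.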
Thank you for your useful reality checks, Jeffrey!
Many Thanks Beowulf!
Hossenfelder's two main disagreements seem to be that energy and data requirements will greatly slow down progress in the near future. I expect that as those become the bottlenecks, many AI researchers will pivot to trying to solve them, coming up with more energy and data efficient ways of training AI. There is no fundamental reason both of these factors couldn't be reduced by 100x. At any rate, neither will really have too much bite before AI is able to participate directly in answering AI research questions, including energy and data efficiency questions. Even if she is completely right, the near future she imagines will still be a transformative one.
> There is no fundamental reason both of these factors couldn't be reduced by 100x.
This is exactly the kind of "and then a miracle happens" reasoning brought up again and again in these discussions.
Energy reduction by two orders of magnitude is really, *really* hard without using a completely different algorithm, i.e. in this case, a new AI model architecture.
Still, human brains can do quite a lot with comparatively little energy and bird brains are even more efficient. Thoughts about possibilities for using more biology-inspired methods?
Well, the human brain does the equivalent of more floating point multiplies than GPT4 uses for inference, and does it on a power budget of 50 watts. The power reduction doesn't violate fundamental physics, but what it _does_ need is very power efficient devices, comparable to synapses and dendrites feeding into a neuron's cell body, and new devices typically take decades to make it from the lab (once they exist in the lab!) into production. I'm not holding my breath.
There is what I consider a good analysis in "Transformative AGI by 2043 is <1% likely" by Ari Allyn-Feuer and Ted Sanders: https://arxiv.org/abs/2306.02519
If the issue is massive amounts of floating point arithmetic, could that be handled by using a hybrid computer with the analog part handling the arithmetic? That was traditionally what they were used for, before digital computers became so powerful they could brute force everything.
Many Thanks! Yes, and there has been work done along those lines. It gets tricky. E.g. copying an analog state degrades it. There is also a trade-off between flexibility and efficiency. E.g. if one wants to reuse an analog multiplier for changing values of both inputs, and if one of them is a neural weight, then one needs something like a D/A converter to set that input. On the other hand, if each neural weight can have its own, fixed, hardware, then one can use e.g. a mask-programmed resistor to set that weight, with no power dissipation in setting the weight itself - but no way to change it dynamically.
The good news is that this doesn't require semiconductor process innovation, but getting it integrated into data centers and into LLM training and inference flows is not likely to be quick...
Oh cool! Thanks for the info.
I don't think this contradicts my point, does it? The human brain uses 1) a completely different algorithm, and 2) a completely different computation substrate. Nobody's going to make ChatGPT 100x more energy-efficient unless they change 1 or 2, possibly both.
Many Thanks! I'm basically agreeing with you on (2), but agnostic on (1). On (2), I'm just saying that changing the substrate is a _really_ _really_ long, hard, slog, but not _quite_ "then a miracle happens", in the sense that neurons are an existence proof that such a substrate doesn't violate the laws of physics. ( We still have quite a ways to go before hitting https://en.wikipedia.org/wiki/Landauer%27s_principle ) I wouldn't be surprised if getting a 100X improvement in energy-per-logic-operation took a large fraction of a century.
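To put rough numbers on how much headroom that leaves: the sketch below compares an assumed ~1 pJ per floating point operation for current GPU-class hardware (an order-of-magnitude guess, not a datasheet figure) against the Landauer bound of kT·ln 2 per irreversibly erased bit at room temperature.

```python
# Rough distance-to-Landauer estimate. The per-FLOP energy is an assumed
# order-of-magnitude figure for today's accelerators, not a measured spec.
import math

k_B = 1.380649e-23                            # Boltzmann constant, J/K
T = 300.0                                     # room temperature, K
landauer = k_B * T * math.log(2)              # ~2.9e-21 J per erased bit

assumed_joules_per_flop = 1e-12               # ~1 pJ/FLOP, illustrative only

print(f"Landauer bound:       {landauer:.2e} J/bit")
print(f"Assumed current cost: {assumed_joules_per_flop:.0e} J/FLOP")
print(f"Ratio: ~{assumed_joules_per_flop / landauer:.1e}x above the bound")
```

A FLOP is not a single bit erasure, so the comparison is loose, but it supports the point: thermodynamics is nowhere near the binding constraint; devices and architecture are.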
Please explain why "There is no fundamental reason both of these factors couldn't be reduced by 100x." The cost of computing has been rising since we've reached the end of Moore's Law. Although the cost of individual floating point operations is still falling, chips are now taking longer to design and cost more. One of those multi-GPU boxes costs in excess of $250K. I don't see that coming down soon.
As for energy, if we could make chips that run on significantly less power, do you think we wouldn't have made them already? No, faster, cheaper computing on less power is a pipe dream.
I am talking about developing new neural networks that achieve similar learning performance with fewer operations and less training data, not improving hardware efficiency. Sorry if that wasn't clear.
I'm not a Neural Networks whiz, but "vast, ungodly amounts of training data to do anything useful" seems to me to be a fundamental feature/bug of NNs, such that addressing it amounts to no less than a total re-invention of the technique, on par with the Gradient Descent re-invention in the 1980s.
This is different from reducing power consumption or increasing efficiency by doing less. There are all sorts of incremental tricks one can read about today for reducing power consumption and/or operation count, everything from specialized hardware (Analog Computers, FPGAs, ASICs, ...) to quantized representations of floating point numbers to "distilling" trained networks in order to obtain a lighter-weight network that does the same thing in Inference mode, making Inference (but not training) more efficient. I mean "Incremental" in the sense of "Normal Science": things that we can imagine today without a massive breakthrough.
I can't remember having read about any research whatsoever on ways to train Neural Networks to SOTA performance with less data. It might exist; I just don't know of it.
The more power becomes a bottleneck, the more incentive there is to reduce power consumption.
Hmmm. According to Doug Summers Stay we can reduce power consumption by 100x. I haven't heard anyone claim such a thing before. I'm curious how DSS would go about doing that. I just looked up the specs for an A100 GPU. It is expected to consume approximately 400 watt-hours of energy over the course of an hour of high compute operation. In that hour its 54 billion transistors can execute 18,000 petaFLOPs. 18,000/400 yields the number of petaFLOPs it can perform per watt-hour --> 45 pFLOPs per watt-hour. So he would need to create a system that could obtain 4500 pFLOPs per watt-hour. How?
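Just to make the arithmetic in the comment above explicit (using the figures exactly as quoted there, which I haven't checked against a datasheet):

```python
# Reproducing the quoted figures: 18,000 petaFLOPs executed over an hour at 400 watt-hours.
petaflops_per_hour = 18_000   # as quoted above, not independently verified
watt_hours = 400              # as quoted above

pflops_per_wh = petaflops_per_hour / watt_hours
print(pflops_per_wh)          # 45.0   -> 45 pFLOPs per watt-hour
print(pflops_per_wh * 100)    # 4500.0 -> the 100x target discussed above
```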
I don't mean calculations per watt efficiency, I mean performance per watt efficiency. I think the number of calculations will continue to be reduced to get similar benchmark performance. I'm just talking about the continuation of the trend of algorithmic progress, such as https://epochai.org/blog/revisiting-algorithmic-progress and whether there are any fundamental considerations on whether such trends can't continue.
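One way to sanity-check the trend argument: if the compute needed for a fixed level of performance halves every H months, the time to a 100x reduction is just a logarithm. The sketch below assumes, purely for illustration, a halving time in the ballpark of the Epoch AI estimate linked above; swap in whatever number you trust.

```python
# Time to a 100x efficiency gain, given a compute-requirement halving time.
# The 8-month halving time is an assumption for illustration, not a verified figure.
import math

halving_time_months = 8.0
target_reduction = 100.0

halvings_needed = math.log2(target_reduction)        # ~6.64
months = halvings_needed * halving_time_months
print(f"{halvings_needed:.2f} halvings -> {months:.0f} months (~{months / 12:.1f} years)")
```

Whether the historical trend can be extrapolated that far is exactly what's in dispute; the sketch only shows what the trend would imply if it held.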
I'm skeptical of Leopold Aschenbrenner's AGI and ASI timelines, although I don't think AI hype (which is admittedly huge) is the same kind of "grift" as, say, crypto speculation or NFT art. Focusing just on the pace of progress, I'm curious about whether Aiden McLaughlin's proposal to switch the research focus from raw, energy-intensive compute on ever-larger collections of raw data to "search" might allow for faster, less expensive progress with much less data? https://tinyurl.com/7cyj4b2c
Aiden McLaughlin? Dude is a college undergrad whose company is a landing page. "Advanced LLM Agents combining quant scale with human-level analysis to beat Rentech and Citadel!!"
You are looking directly at the grift!
Is his analysis of Stockfish and/or his proposal of model improvements via search inaccurate? I'm not trying to be facetious or naive - not every undergrad is Zuck or Bill Gates, but some have good ideas.
I'm not an ML PhD but from what I understand it's "not even wrong" - tree search makes sense in a domain like chess with bounded sets of "moves" - I'm not sure it makes sense in something like "AI research"
Basically, there's a reason why go is basically solved but top Starcraft players still cream AIs
Yup!
The Sabine Hossenfelder video seemed very light on arguments to me, apart from that there's a group think around AI.
Can't China eventually take over Taiwan simply by encircling and blockading it? Isn't that much more likely than trying to actually invade the island first, which everyone says would be extremely difficult? As an island Taiwan cannot go forever without trade, food, and energy supplies from the outside world, and would eventually have to fold.
More important is the US response. If the US tries to break the blockade, which would require force, they would essentially be the ones to start World War 3 and likely a new, if not worse, Great Depression. Technically authorizing such a war would require Congress, but even if it didn't, there's not going to be a public appetite here to be the ones *starting* the most devastating war in history, thousands of miles away from the US. You'd have retired admirals on talk shows saying 'eh, not sure if the US can win this one, there'll be tons of casualties'. Economists saying 'this war will be worse economically than the 30s'. Voters are ultimately not going to be interested. US public opinion was against getting into WW1 & 2 too. All China has to do is encircle Taiwan and *not* fire on US ships unless fired upon.
So- wouldn't blockading Taiwan ultimately work for China? Why can't they do that in say 5-10 years time?
Computer chips are as good a cause as any for war, and a better cause than most.
Blockades are an act of war. So no, China would be legally the country starting the war. Further, Taiwan is recognized (even by China) at the moment as an independent customs territory which they do not have jurisdiction over. If China says, "Well, we changed our mind." The rest of the world can just say "no." And if they try to force the issue then that's still an aggressive action.
From a real power perspective: If China blockades Taiwan and the US and its allies do nothing, then that would work. If China blockades Taiwan and the US and its allies choose to do something, then the US and its allies get to see every ship the Chinese have in the blockade (which will be out of position for the task of "fight the USN") and then strike at a time and place of their choosing. Which would be very difficult for China to plan around. The US could also begin to embargo China in turn, a counterencirclement that would have severe effects on both economies, but much more on China's than on the US's. The US would be losing one major trading partner. China would be losing almost all of their major trading partners.
Also Congress has already authorized force to defend Taiwan through the Taiwan Relations Act. And firing on civilian merchant shipping is an act of war so declaring a quarantine then shooting at someone passing by would be an act of war too.
I think it's likely any war starts with China assuming the US will get involved and striking American bases in the hope that this delays American involvement long enough that they'll have either taken over Taiwan or at least landed significant enough forces that America will have to dig them out of a burned-out island. That counts on the US ignoring American service members' deaths. But in any scenario where China starts a war, it's counting on the US not having the will to continue fighting. And that's a historically common miscalculation.
>So no, China would be legally the country starting the war
'Legally' is irrelevant (major powers break international 'law' all the time, if you really think that's a real thing, which I don't). There's no like World Court where China will be found guilty after a lawyerly process. They may be the aggressor, but the practical question for the US President is 'would you like to be the one to fire first, start World War 3, probably plunge the world into a new Great Depression, and probably lose re-election as US voters won't like any of this?'
>The US could also begin to do embargo China in turn in a counterencirclement
This would again plunge the world into a new Great Depression, as China is by far the world's largest trading partner. International anger at the US for cutting off their biggest market would be white-hot. Global inflation would spike like we're all in Zimbabwe as the factory of the world can no longer send goods abroad- honestly, the world might experience a couple decades of technological regression, like a mini-Dark Ages. And, again, the President and party who does this is pretty much guaranteed to lose re-election.
The US can't even cut off Russian oil because it's such a large part of the global economy. There is no political will to press the button that says 'start Great Depression 2.0'. China is simply too big to cut off.
>And firing on civilian merchant shipping
All China has to say is 'if you enter this zone, we'll fire on you'. Out of the entire global commercial fleet, exactly 0.0% of captains and crew are going to say 'screw it, we're going in anyways'. These are for-profit businesses, they have insurance that would forbid this, the captain is ethically responsible for his crew's lives, etc. Would you die for your employer? Once a major naval power says 'enter here and die', no commercial ship will ever enter till it's clear.
>But in any situation that China starts a war it's counting on the US not having the will to continue fighting
The US couldn't stomach 4500 casualties in the Iraq war. 58k servicemember deaths in Vietnam is remembered as some kind of apocalypse. Meanwhile Russia has seen probably 10x that number of deaths in Ukraine and its population doesn't even flinch. There is no US will for mass casualties thousands of miles away from home unless we're actually defending our homeland, sorry. Do you think the median voter could even find Taiwan on a map?
Again, we're the same country who didn't want to enter either World War 1 or 2. The whole point I'm trying to make is that both politically & economically, there's simply no domestic will for an apocalyptic war & depression
You appear to be caught in anti-American doomer pessimism. The international system doesn't really exist and everyone's doing power games and cynically maximizing their self-interest. Except the US, which is too cowardly to pursue its own interests or defend its allies. It's a useful narrative if you want America to lose despite its many advantages. But it doesn't really make sense. If everyone's cynically maximizing their interests, the US will too.
You also don't appear to have any firm grounding in how these things have actually worked either historically or recently. For example, you talk about no one being willing to go past blockades even as that's happened repeatedly. You dismiss the importance of court cases even as China does a lot of diplomatic maneuvering (and often fails) to avoid them.
You're also wrong on simple statistics. The US has always been China's largest trading partner but the US's largest trading partner has always been Canada or Mexico. China's usually third. The statistic people often mix up is that China has sometimes (not always) been America's largest import partner. But trading partner includes imports and exports. Further, the US probably gets to continue trading with Mexico and Canada in a Taiwan contingency while it's unlikely China will get to continue to trade with its second or third largest trading partner: Japan and South Korea (or sometimes Japan and Taiwan). And most of China's war important minerals (like iron) are sourced from Indonesia or Australia which are also overseas. China's also a more trade reliant economy in general. The war would be devastating to both economies but worse for China.
As for American will, isolationism is a loud minority. It was a loud minority in WW2 as well. 78% of Americans favor defending Taiwan and 69% now favor recognizing Taiwanese independence. It remains a majority even if you say that's likely to trigger a war. 53% of Americans support putting American troops in Taiwan. The practical electoral politics is that losing Taiwan would be extremely unpopular and probably bury the party that did it in the next election.
Legally the US was very careful to avoid the criteria for a blockade. It was something of a fiction but still. And Israel also doesn't acknowledge it as a blockade. Both let certain things through, specifically to avoid the label. And it didn't really work for either of them in terms of getting people to respect the fiction. And a Gaza-style blockade, let alone a Cuban one, would not induce Taiwan to surrender by denying them power or food. China would need to do something far more total.
Also: If you think global sympathy is with Israel, or especially was with them pre-10/7, then you're mistaken. 10/7 changed the narrative not because it was an attack on Israel (those happened rather frequently) or because it was successful. It was because of the extreme violence against civilians and human rights violations like kidnapping and rape. If Taiwan somehow responded to the blockade by doing the same to a bunch of Chinese civilians then that would probably lose them sympathy. But I don't think that's likely.
Also, both Gaza and the Cuban Missile Crisis involved both parties shooting at each other. If China is trying to avoid war through a blockade, then as soon as they sink a US ship that outcome becomes much less likely. And if China isn't trying to avoid war, then there are much more aggressive actions to take. China can try the blockade and then say, "How dare they shoot back at us." But as the people of Taiwan starve (something the US didn't try to do to Cuba) I doubt their neighbors (who they also have claims against) would be very sympathetic.
If they extend their sphere of power out around Taiwan, they could start imposing customs inspections. Searching for biological contaminants, etc. Is your ship 100% rat-, cockroach-, and ant-free? No invasive species or harmful pesticides that could damage the delicate environment of Formosa? Have you quarantined for covid-29 for 3 weeks?
All of those things are I believe prohibited by international law so long as the ship in question is not travelling to or from the People's Republic of China. Even within a nation's unambiguously sovereign territorial waters (e.g. the classic 12-mile limit), merchant ships engaged in innocent passage (https://en.wikipedia.org/wiki/Innocent_passage) between two other countries are to be left inviolate.
And if you stop them by force, that's an act of war.
> so long as the ship in question is not travelling to or from the People's Republic of China
That's the problem in a nutshell.
Because then they get hit with the biggest batch of sanctions the world has ever known, and oh what do you know the USN has closed the straits of Malacca to Chinese traffic
If the US can impose sanctions on China then the time to do it is now, not later. But the US won't have the economic stomach for it when the time comes, especially if the blockade is imposed slowly.
We already are imposing sanctions!
We don't call them "sanctions" but we've made sure they can't get their hands on bleeding edge GPUs, or EUV machines, and tariffed their EVs and Solar massively.
We're arming the Taiwanese too!
Both sides are basically playing this like they want it to get hot and are expecting it to.
The US can impose sanctions on China now if it wants, but without some act of aggression from China it will be tricky to get the other major powers to go along with it. If China encircles Taiwan it's much easier for the US to get everyone else on board with sanctions.
I'm going to repost here a request I posted on the last hidden thread, which was somewhat skimpily attended. Wondering if one of you people with decent general knowledge about world affairs could do a little consult with me:
Would anyone be willing to consult with me briefly about plausible future international developments? I am writing a piece of fiction set 30 years in the future. It is mostly about the personal experience of several individuals. But I need to sketch in for the reader, in about one paragraph per event, a summary of 3 important and related world events. All involve a superintelligent AI that the US developed, and over which we have substantial control.
I don't think my understanding of politics, government, world powers & economics is good enough for me to come up with scenarios that are plausible. My grasp of these topics is way below average for this forum. I don't think it would take a long time for anyone who's reasonably knowledgeable about the areas where I'm ignorant to toss out answers. And when I say reasonably knowledgeable, I really mean reasonably. You do not have to be knowledgeable at the professorial, book-writing level, just someone who reads a question like nifty775's and has enough general info and opinions about world affairs in the last 100 years to have a view they can back up in a reasonable way. After all, nobody can know for sure how things will play out in the next 30 years with a genius AI in the mix. I just don't want to sketch in possibilities that are laughable -- things like Tibet taking over the world. If you have reasonable general knowledge about world affairs, you probably could type an answer to each of my 3 questions in 10 mins at most. Maybe in 3 mins. If you're willing, I'd want to ask you my questions via Substack chat or email, so that info about the story I'm telling stays private. Oh, and if you'd like me to credit you in my acknowledgments I'm happy to.
Feel free to ask me on Substack's chat.
Not knowing what you're writing, I'll suggest looking up the rise of new technologies in the past, like the car or electric power, and seeing how the world changed in their wake; what happened to the owners, the towns they came from, the governments, etc.
Could also look up the rise of McDonalds and the fast food industry; the McDonald brothers and Ray Kroc. (They made a movie about it if you don't mind dramatization. https://www.youtube.com/watch?v=N_t5PGSJD9o)
You're falling into the trap of assuming all significant developments are technological. She's asking about sociological and geopolitical developments.
I'm not the one to answer the question, but you might try just buying a recent textbook and reading the last few chapters.
>All involve a superintelligent AI that the US developed. and over which we have substantial control.
It's explicitly a technological development.
I would if that aspect of things were an important part of the piece of fiction, but it is not. The story is not about how the world events came about. The story takes it as a given that the world has changed in certain ways, and is about life in that new world.
Here's an analogy. Let's say you were writing a story about a dozen people who were shipwrecked on an island, and how things go for them over the course of 5 years -- who despairs, who adapts to life on the island, who devotes themself to trying to find a way to escape. But you know that early on you want to give the reader a one-paragraph summary of how the dozen people ended up there, and you don't want any absurdities in it. You don't want to say the dozen people were in a certain kind of sailboat if everybody who knows about sailboats is going to complain that that kind of sailboat doesn't have room on it for more than 4 people. You don't want to give the location of the island as someplace that's near the Panama Canal, because then people will say, Haha, they won't remain shipwrecked for long, there's constant sea traffic in that area. That is the kind of absurdity I am trying to avoid. So long as the brief accounts I give of a couple of things are not absurd, the details in them do not matter to the story.
Would you like me to give you one of the questions, so you see the sort of thing I'm asking?
Well I personally probably can't answer any, being in mostly the same boat, but if all you're concerned about is backfill, then posting the actual scenario seems like the best approach in general.
The other option is simply "less is more"; they weren't in a certain kind of sailboat, they were in "a boat" and the audience will pick whichever boat works for them. I love to quote (...well, paraphrase) Patrick McManus on the topic; "never specify the person turned left at the top of the stairs unless turning left is vital, because your audience's imagined stairs may not have had a left."
I *am* giving less. I will be giving five-sentence, one-paragraph summaries of events on the scale of the Civil War or the Great Depression. It will not work to say absolutely nothing about how things devolved.
I think you're likely to get good responses posting here: https://worldbuilding.stackexchange.com/
It may well be a more knowledgeable crowd because it's more focused on such things.
I don't need a more knowledgeable crowd, though. I am absurdly ignorant about recent history, politics, economics, etc. Really. For some reason I have never felt a lot of interest in politics and world affairs. I just lack a drive to keep up with it. Once in a while a book about recent history captures my attention -- some of the Barbara Tuchman books did, for instance -- and I read it with great interest. But then it doesn't stick with me, I think because I don't spontaneously recall bits of these books and ruminate about them the way I do with poetry or philosophy or math. I really can't justify being so ignorant, but there it is. It's probably hard to understand if you experience this stuff as intrinsically interesting. Maybe think of it as sort of like being asexual?
I'd say my level of ignorance is the equivalent of not understanding exponents. How can it be that 2^3 + 3^3 doesn't equal 5^3? What is it you're supposed to do when you divide 8^6 by 8^2? Somebody said you subtract the exponents and get 8^4, but that doesn't make any sense. . . And I'm trying to solve a Calculus 1 problem. All I need is somebody who understands math well up through Calculus 1. I do not need a math PhD.
And I feel safer asking here because I know that cast of characters reasonably well.
I'm no expert on politics, but I know something, at least, about math. They almost seem to be incompatible disciplines.
But 8*8*8*8*8*8 divided by 8*8 you can surely see is 8*8*8*8, which is 8^4. The subtraction rule of exponents is derived from the answer, so it turns out to be an after-the-fact shortcut.
On the other hand, 2*2*2 + 3*3*3 doesn't work with addition at all; you're just looking at the similarity on the ^3, kind of like thinking "tough" and "though" should rhyme, although that is, admittedly, a lot more arbitrary.
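Spelled out in symbols, just restating the two arithmetic facts above:

```latex
\frac{8^6}{8^2} = \frac{8\cdot 8\cdot 8\cdot 8\cdot 8\cdot 8}{8\cdot 8} = 8^4 = 8^{6-2},
\qquad
2^3 + 3^3 = 8 + 27 = 35 \neq 125 = 5^3 .
```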
Sorry, I wasn't very clear about the exponents thing. Actually I understand exponents fine. What I meant was that my level of understanding of world affairs is at a middle-school level. Regarding that domain I am the equivalent of someone who does not understand exponents, a pretty basic thing in the math domain.
A blockade would cede operational initiative to the US et al, since we can choose to arrange a confrontation where China needs to actually fire at a US-flagged ship to enforce the blockade at a time and place of our choosing. An extra week or two to position assets in theater would give the US a much better balance of forces for the opening rounds of the war.
Everything about this question can be answered by looking (with fresh eyes) at a map and a chart of China's growth. No need to invade or blockade - Taiwan is going home one of these days. China is patient.
Frederick the Great said it 200 years ago: In war, the true aggressor is the one who makes the other side fire the first shot.
Still true.
Was Frederick really that great? His battle record was pretty much 50/50. He consistently launched frontal charges against enemy positions, even when the enemy was dug into fortified positions on higher ground. When the charge broke the enemy morale he won, and if the enemy held he lost. I think Frederick's most significant qualities were resilience and luck.
Resilience because Prussia was a small state with a lean centralized government, and stayed in wars when less cohesive states would have been forced to sue for peace. There are times when things look pretty bleak for Prussia and Frederick writes of being depressed, but he manages to hang in until the end.
Luck because he was a mediocre battlefield commander who managed to pull off a significant land grab, fight most of the major powers in Europe at once, and not lose. Peter III gaining the throne in the middle of the war, while Russia was in a position to really squeeze Prussia, was a ridiculous stroke of luck. He was a personal fan of Frederick and withdrew from the war, even offering to loan Prussia a large number of troops. Of course Peter was overthrown in a coup and replaced by Catherine, but by then the political will for war had waned. British victories against France in Hanover combined with Prussian gains against Austria allowed Frederick to sue for peace from an advantageous position, with the conspicuous absence of Russia.
There's no record of Frederick the Great ever saying that, and as a notably-aggressive monarch he very much did not follow that advice. He pre-emptively invaded and annexed Saxony in 1756, and then pre-emptively attacked Austria the next year launching one of the bloodiest wars of that century. Frederick had accurately assessed Prussia's strategic position -- one in which letting the larger powers have the first shots would have been disastrous -- and acted accordingly with ruthless skill. That's literally how he earned the nickname "the Great".
In any case the above formulation seems to have some limitations as any sort of universal. E.g. it seems pretty rich to label Saxony the "true aggressor" in 1756... was Poland in 1939 the true aggressor? In 1990 Kuwait was the true aggressor? In 1931, Manchuria? In 1979, Afghanistan? Etc etc.
Did anybody ELSE say that, eminent or otherwise?
By some quick googling I found it attributed to Alexander H. Stephens, Vice President of the Confederacy.
Which is fun because it turns it from a piece of Prussian martial wisdom into a piece of playground-style "nuh uh, they started it" excuse-making.
Indeed, and the Confederacy's track record is much worse than Frederick's.
I believe you (and thank you for your trouble), but I wonder if a link to your find might enable me to glean a little context, if only as to Stephens, whom I have never heard of.
I stand corrected.
This is where I got it. No guarantees about accuracy: https://www2.tulane.edu/~sumter/Reflections/LinWar.html
> Thus, Confederate vice president, Alexander H. Stephens, claimed that the war was "inaugurated by Mr. Lincoln." Stephens readily acknowledged that General Beauregard's troops fired the "first gun." But, he argued, the larger truth is that "in personal or national conflicts, it is not he who strikes the first blow, or fires the first gun that inaugurates or begins the conflict." Rather, the true aggressor is "the first who renders force necessary."
Clearly, Ukraine is the true aggressor. Israel is the true aggressor.
Anyway, first causes are problematic to define, as are final endings.
> Israel is the true aggressor.
This, but unironically.
Exactly. We at Al Amnesty Watch stand with the freedom fighters! Sometimes freedom requires shooting up a festival or two. If those """civilians""" didn't want to be massacred, maybe they should have pushed themselves into the sea.
At least use a propaganda trope more innovative than "Everything started on Oct 7th!!1!1!1!".
Since a blockade is an act of war one could reasonably argue that it was the Chinese who started WW3, but this would of course be rather academic. Also historically both Britain and the US have been rather lenient when it comes to classifying their own blockade actions as 'war', so it would be not that unreasonable for China to adopt similar standards to apply to their own actions...
Anyway, we could indeed force the Chinese to fire on our ships first just by sending blockade runners.
Not really. Having to have escorted "blockade runners" would drive up the cost to the point where it's more efficient to do business elsewhere. And our actions show that we are already "planning on doing business elsewhere", it's just taking us a while to get our ducks in a row. So all China has to do is ignore (except for complaining about) the blockade runners, and over time Taiwan's business and support will collapse. All it will take is patience (which the US isn't good at, but China often has been).
You don't need some sort of special "blockade runner" ship or a special escort for it, you just need one ordinary cargo ship with a US flag and cargo you don't mind losing. If China sinks it, they've started the war. If China lets it pass, then it's not actually a blockade.
Whose job is it to crew the cargo ship that we don't mind losing?
Is this supposed to be a top-level comment? It seems like it's in the wrong spot.
We've seen how effective Ukraine's marine drones (and now ATACMS) have been in removing the Russian navy from the Black Sea. I'm sure the Taiwanese have been watching closely.
Seems like it would lead to escalating brinkmanship - PLAN encircles Taiwan, so the US sends an escorted supply ship and says they don't recognize the blockade. Does China fire on this ship and fire the first shot?
Considering that the U.S. is already famous for freedom of navigation operations, and has shown that it will keep global trade routes open at cost to itself (in Somalia and the Red Sea most famously), the U.S. would likely escort RoC civilian vessels in a convoy in the worst-case scenario, in which case the policy of "China simply just doesn't have to fire the first shot" means that the 'blockade' just turns into a nuisance.
Of course there's also the question of what happens if a civilian shipping vessel decides to head out anyway. Does China fire on it (killing civilians, which in any universe puts China at fault)? Do they use barely-not-lethal methods like fire hoses to destroy the bridges of these ships (as they're doing to the Philippines)? Either way, if the Chinese blockade is significant enough for the U.S. to take action, I don't see anything being accomplished from a blockade other than embarrassment for the PRC.
As noted, the RoC needs to import everything for survival, so I don't see them simply cowing to a declaration of blockade. If it becomes an existential threat, the U.S. will get involved for the semiconductors if nothing else.
A lot of this is going to come down to the ancient tradition of "whoever blinks first loses", except that China cannot win a conventional war, so the U.S. holds escalation dominance. It's not yet in the PRC's interest to try it.
China just announces "any ships who try to run the blockade will be fired upon", now the US will be the belligerent party if they do so. The US President giving such an order will be saying 'yes I would like to start WW3 on my watch'. Seems unlikely
A blockade is an act of war. Saying “we’ll shoot you if you come here” is just publicly declaring your act of war. If anything, it would make it HARDER to blame a shooting war on US provocation, since you pre declared your intent to shoot first.
Threatening to kill someone if they do X, does not make them the aggressor for doing X. It makes you the aggressor first for making the threat, and again for carrying it out. If it leads to World War Three, you started World War Three and you're just upset that the other side didn't immediately surrender.
"Look what you made me do!" is the eternal cry of terrorists, tyrants, and two-year-olds. Nobody with an ounce of common sense falls for it.
That seems just straightforwardly false to me? If the Chinese are both the ones who made that announcement and the ones who fired the first shot, I expect most people will consider them to be the belligerent party, not the heroic Americans who tried to deliver humanitarian supplies to the innocent Taiwanese who are starving because of the illegal Communist blockade before the evil Chinese fired on their unarmed ship (or at least that's how the US will describe the situation).
All the US has to do is lend-lease Taiwan hundreds of naval drones whose designs we will license from Ukrainian engineering companies. Or better yet, just send the Taiwanese the designs, and they can probably manufacture them quicker and cheaper than US defense contractors could. Last I heard, the remainder of the Russian Black Sea fleet (that hasn't been sunk yet) has been chased off of open waters, and Ukrainian grain shipments are getting to the Bosphorus without interference. So much for that blockade.
Not saying that definitely won't work, but the PLAN is likely a lot more competent than the Russian Navy so I wouldn't assume it will either.
How do you know the Chinese are more competent? The Russians were considered a superpower before they invaded Ukraine; now it's clear they were always a Potemkin superpower. China has had a history of corruption in its military. Chairman Xi supposedly has been cleaning up the corruption, but he seems focused on the corruption of his perceived political enemies rather than his allies' corruption.
And how do you defend against marine drone attacks when the drones are low enough in the water that they mostly have no radar signature? Passive sonar might detect them coming, but then what do you do if you're the captain of a ship as large as the Moskva? Evasive action doesn't get you far when marine drones are faster and more maneuverable than the ship trying to avoid them. Visual contact would be necessary to spot and destroy them. At high speed, they leave foam wakes, but by the time they'd be spottable they'd be pretty close. The only good defense against them I can think of is something like the Royal Navy's Phalanx Gatling gun. Unfortunately, those are automated and radar-guided as a defense against incoming missiles. And I don't think there's a manual override whereby a human can take control and aim the thing low at the water.
Ukraine's Magura marine drone is 5.5 meters long and weighs about 1,000 kilos with batteries and its 200-kilo explosive payload. Because they're battery powered, they're silent. They have a range of up to 800 km and 60 hours of battery life. And they beam live video to the operators.
Then there's the Sea Baby, which can carry 850 kilograms of explosives. It has a top speed of 90 kph and a range of 1,000 kilometers.
The Magura and the Sea Baby cost between $200K-$225K. For Taiwan that would be money well spent if you can sink an aircraft carrier that's estimated to cost $2 billion. Personally, I hope the US Navy is taking this threat seriously. But there's some truth to the idea that generals (and admirals) only know how to fight the last conflict. I still see shrill proclamations on the military blogs that tanks are not obsolete. They seem like the horse cavalry claiming it's not obsolete. However much they protest, aerial drones seem to have changed the dynamics of armored warfare. And sea drones seem to have changed the dynamics of naval warfare.
Nitpick alert: Was Russia *always* a Potemkin superpower?
My guess is that it was in better military condition soon after the fall of the USSR when entropy (theft, weather, deterioration) didn't have as much time to damage the weapons. We'll never know.
Is there good terminology for a country like Russia which isn't a superpower but is strong enough to cause a lot of damage?
I'm not a marine engineer but this sounds like you "just" need your own screen of drones -- a waterborne version of the Patriot missile system. Or a modification of the Patriot or Phalanx type systems to be able to hit water targets.
Drones are just cheap PT boats which seems to be a solved problem.
Bit of a nitpick to get out of the way first - I don't know that the Russian Federation was ever considered a super power. A regional nuclear power, sure. And "were always a potemkin" is pretty broad. If you mean Russia for the last 30 years or so ok, if you mean the Soviets in 1950 not so much.
I wouldn't read too much into drone efficacy against the Russian Black Sea Fleet. The ships are just sitting there right next to Ukraine. The Ukrainians have as many at-bats as they want, and only need to connect a single time to take a ship out. This is also entirely asymmetric as there is no Ukrainian fleet (on anything like the same scale.) Much different conditions than ship-to-ship fighting near Taiwan.
There are several defensive methods employed by Russian tanks against drones. One being electronic jamming modules, or various electronic warfare (EW) methods. Another being ablative cages or meshes mounted onto the exterior of armored vehicles. Similarly, the Russians have mounted thick nets and cables around bridges in Crimea to ward off drones. So perhaps naval vessels could also employ an outer mesh or net layer to detonate incoming drones a safe distance from the hull. EW countermeasures I'm less sure about as water absorbs quite a lot of EM radiation, so the effective range would be much shorter. This cuts both ways, and I think sea drones have to be pre-programmed rather than controlled in real time like aerial drones. Which makes them less effective against moving ships compared to ones sitting in port.
The PLAN has been conducting antipiracy operations far from home that demonstrate rather greater competence than the Russian Navy. In particular, they don't seem to have to attach a salvage tug every time they deploy a warship.
This isn't to say that they're a match for the USN and its peers, but they aren't completely hopeless.
It's entirely possible that the PLAN is as much a paper tiger as the Russian military, and/or that surface ships are totally obsolete in the face of new drone technology. But all of that is super speculative until we actually see them in action and it's foolish to just assume your enemy is no threat based only on this level of speculation.
The Ukraine war gives *some* data but it's pretty clear that neither side is a first-class power, especially on the naval side of things, so there's a pretty sharp limit on how far we can extrapolate.
Why do you assume that?
Could be that the Russians are MORE competent than the PLAN but surface vessels are just obsolete in 2024
Possible I suppose but there's some evidence against:
-The Russian military has shown a low level of competence in general and there's reasons to assume their navy would be a lower priority than their air and ground forces.
-The US Navy seems to do much better - there were cases of USN ships just kinda hanging out off the coast of Yemen getting shot at by the Houthis for weeks without suffering significant damage.
-My impression is that experts looking at specific incidents with the Russian Navy have assessed their competence as quite low, although I don't have the memory or the energy to track down specific citations on this so hey.
This just devolves into a game of chicken, though. Let's say that a Taiwanese ship carrying semiconductors to sell abroad to fund the purchase of grain sets out. If China fires first and kills civilians, I don't see much logical opposition to simply sending out enhanced freedom of navigation operations. There may always be anti-war sentiment, but ultimately China is going to have to cripple a civilian vessel if push comes to shove. And remember, when push comes to shove, the U.S. can shove.
I think it's no sure thing, and frankly I don't think the PLAN yet wants to risk embarrassing itself when the Spratly Islands are still hotly disputed.
If a mind is just “something a brain does”, what would enforce “one mind per brain, one brain per mind”? Something in biology - like a specific neurological structure - or what?
As for the "One mind per brain" part, I don't think it's enforced. In humans, there are plenty of split-brain cases (the bridge connecting the 2 hemispheres of the brain got severed) doing bizarre things, bizarre, that is, unless you grant that the 2 hemispheres of their brain became different minds.
From Split-brain, Wikipedia English (https://en.wikipedia.org/wiki/Split-brain):
>> After the right and left brain are separated, each hemisphere will have its own separate perception, concepts, and impulses to act. Having two "brains" in one body can create some interesting dilemmas. There was a case in which, when one split-brain patient would dress himself, sometimes he pulled his pants up with one hand (the side of his brain that wanted to get dressed) and down with the other (the side that did not). He was also reported to have grabbed his wife with his left hand and shook her violently, at which point his right hand came to her aid and grabbed the aggressive left hand (a phenomenon [...] known as alien hand syndrome). However, such conflicts are very rare. If a conflict arises, one hemisphere usually overrides the other.
>> When split-brain patients are shown an image only in the left half of each eye's visual field, they cannot verbally name what they have seen. This is because the brain's experiences of the senses are contralateral. Communication between the two hemispheres is inhibited, so the patient cannot say out loud the name of that which the right side of the brain is seeing. A similar effect occurs if a split-brain patient touches an object with only the left hand while receiving no visual cues in the right visual field; the patient will be unable to name the object, as each cerebral hemisphere of the primary somatosensory cortex only contains a tactile representation of the opposite side of the body. If the speech-control center is on the right side of the brain, the same effect can be achieved by presenting the image or object to only the right visual field or hand.
------------
In Octopuses, the majority of neurons are not even "in the brain"; if I remember correctly, only about 1/3 are. That means that a full 2/3 of the "Brain Material" of the Octopus is not in their brain, but in their arms. This means that their arms "have a mind of their own"; imagine if your arms were like your lungs or stomach, moving along, grabbing and grappling things with no conscious attention whatsoever (except in full view of you, unlike our lungs and stomachs). Something mildly similar exists in humans, the Spinal Reflexes. Your spine has simple circuits that detect certain conditions and fire orders for your muscles independently of the brain; the famous Knee-Jerk reaction is a spinal reflex, for example. But those circuits are not full-blown second brains as in the Octopus.
As for "One brain per mind", I think it also gets violated pretty often in eusocial insects. Unless you restrict your definition of a mind to only include that which communicates among itself by electro-chemical brain signals (which will enforce "One brain per mind" by definition), hives like those of bees and insects arguably qualify to be one mind, but distributed across different brains. No ant or bee can ever work against the colony, and if one does by improbable chance, the others attack or ostracize it like a primate immune system attacks a cancerous cell.
If you define a mind as an Ego, that which creates/feels an "I", then yes, I think "One brain per mind" is fairly common sense, where "One brain" means any mass of neurons with a reasonably high-speed, reliable communication interconnect between them. If the communication fabric splinters or slows down, then the neuron mass breaks down into several egos. Peter Watts advanced this hypothesis in one of his novels (Freeze-Frame Revolution, I think). But if you simply define a mind as Autonomy, that which can act on its own accord, setting its own goals and pursuing them with its own means, then neither "One brain per mind" nor "One mind per brain" seems to hold in the general case, although they seem to be the default in humans and most everyday mammals.
I've wondered how much "mind" is in people's legs. A healthy person can walk on moderately rough ground without the eyes being much involved. It feels to me like my feet know what they're doing.
This is just what we call "muscle memory", but it's controlled by the brain, just in the background. Not limited to legs, too: I don't need to look at my guitar to play a familiar piece, the fingers "know" what to do.
I can know where the keys are on a keyboard, but I think walking over rough ground is something more complex.
Yes, but - it's a continuum. A "slightly rough" trail is still easy, and requires almost no conscious effort. A wilderness trail in northern New Mexico takes full attention, and is very tiring because of that.
The same with the fingers: a familiar "easy" piece requires no attention whatsoever, while playing a Bach prelude, even though I know it by heart, still demands visual help.
The human brain runs both a conscious and a subconscious mind, which gives us at least two minds per brain.
If you ask what enforces one conscious mind per brain, then it is something like one and not two lines of code to run consciousness.exe, encoded in an illegible way throughout the neural network by evolution.
I don't think it's a hard limitation, just a high probability. The pattern that gets used more, trains more and more of the neurons to respond to it. But I don't think it's a priori impossible for a more process-oriented microkernel architecture to reach a stable equilibrium.
Your Book Review: Consciousness And The Brain (Finalist #1 of the Book Review Contest) https://www.astralcodexten.com/p/your-book-review-consciousness-and
It mostly went over my head, but the vibe I took away was "the goal of consciousness is synchronicity". Of course, DID/MPD is certainly a thing. But that seems like a failure mode, not the norm.
> It is really slow, it is exclusive, and it simplifies the world into a highly compressed sample. This can be useful in its own right, for example to make a decision. A lot of information is lost in this process, but apparently the resulting pattern is so simple that it can be processed further. Since all parts of the brain participate in a conscious event, it is also universally available in the brain. Dehaene calls this function the *Global Neuronal Workspace*.
----
Boy oh boy, do I have a crank theory.
<wildly irresponsible speculation>
I suspect Monism is correct. (Cartesian Dualism strikes me as the last bastion of anthropocentrism).
Consider Elan Vital. It turned out that there's no discrete quality that separates the organic from the inorganic. "Inorganic" compounds like table salt can absolutely be integrated into an organism. Really, the secret sauce is the "complexity and specificity" [0] that carbon allows for. The complexity of organic chemistry pays for the specificity of structure required for enzymes to catalyze certain reactions. This reduces the operational costs of metabolism (in terms of energy expenditure) as low as possible. Without energy, there's no agency. And without agency, there's no struggle against entropy. Thus, life can be conceived of as a monopole of chemical disequilibrium. Like how a city is just a monopole of human activity.
Likewise, I suspect The Hard Problem will have similar contours. I.e. that the complicated structure of a brain is necessary to contort the electromagnetic(...?) field in some extremely specific way. And this contortion of the EM field somehow gives rise to consciousness as a monopole of EM activity.
I suppose this would technically be "pan psychic" in the sense that there's no discontinuous barrier between conscious beings vs inanimate objects. But it also adds up to normality in the sense that rocks probably do not have qualia/cognition to a meaningful degree, and is consistent with the ability of magnets and anesthesia to disrupt consciousness.
[0] https://fromthechair.substack.com/p/magic-runes-and-sand-dunes-the-binary
"One mind per brain" is demonstrably false (DID is a thing). "One brain per mind" is enforced by the lack of thought-level communication between brains. Even the two hemispheres are not well connected in that sense, resulting in well known experiments where the left side and the right side think and act separately.
So, one direction false, and the other one is limited by the lack of a direct neural connection. The latter might change if Neuralink has its way.
It can also be a multi-level thing; at one basic level you have "one brain per mind" because of the reasons you gave, but at a higher level, a whole group of people with rich enough patterns of interaction can act like it has a mind of its own. I don't see any fundamental reason why such a group couldn't be ascribed a kind of collective consciousness too.
It can indeed be thought of as an emergent collective consciousness, but each person in that group would not think and feel what everyone else in it does, so it's not quite related to the original question.
If the group can (in this hypothesis) be said to collectively think and feel, that would be a counterexample to the original question's "one brain per mind".
You are approaching what I see as the most confounding question. How and why does consciousness even exist?
I share your intuition that there's no theoretical reason there must be one consciousness per brain. And are you sure it's true? It seems unlikely that the rest of your nervous system (and indeed your close surroundings and sensory inputs) play no constructive role in your consciousness. I'm sure that speed of information transfer is involved, so matter that's further from the brain and less intimately connected plays less role. But is there any additional slower consciousness that's not bound by speed and distance?
I am part way through the Stanford Encyclopedia of Philosophy entry that tries to explain consciousness and address some of these questions. For me, the central frustration stems from the fact that consciousness seems a non-physical phenomenon (can't be measured), yet it is the only thing that could not be an illusion. So it is both more real and less sensible than everything else in the universe.
If you'd like to explore this as well: https://plato.stanford.edu/entries/consciousness/
It seems likely the brain is NOT limited to one consciousness per brain. https://www.smbc-comics.com/index.php?db=comics&id=2085#comic
I don't think materialism can plausibly account for consciousness, but I'm open to it. I think the way we talk about and think about consciousness is an absurdly weak point in modern thinking, because consciousness is the one thing we have the most evidence of, but we don't understand it at all and few people seem interested in trying.
Wouldn’t it be helpful to have a precise definition of what is meant by the word “consciousness”?
Do you have a better explanation?
I think consciousness is likely fundamental, like the material universe. That seems more congruent with the evidence, since there’s no evidentiary basis one can have to say they are not conscious right now.
That's my current tentative belief as well, but it's a classic Sherlock Holmes conclusion (which feels no fun at all): When you have eliminated the impossible, whatever remains, however improbable, must be the truth.
> yet it is the only thing that could not be an illusion.
Why couldn't it?
Because that illusion would be an element of consciousness. It's a paradox.
Can a person be deluded that they are experiencing consciousness?
I've had dreams where I wake up to outside stimulus, only to wake up later and realize the previous waking up was still part of a dream. So I'll say "yes". There's nothing stopping all of this from being someone else's dream.
On reflection, no, not really.
It was in fact a stupid question, I must admit
Any question asked earnestly with a desire for understanding is, I think, a good question to ask.
It's difficult to say anything with certainty, because I don't have a clear definition of "mind" to work with. It seems like a brain having multiple separate compartments of information which do not interact would just pointlessly degrade its performance (and would therefore be avoided by evolution), and in the other direction, for multiple brains to host something that could meaningfully be called one mind would require a higher-bandwidth channel between them than plausibly exists.
Well, the brain clearly has multiple segments that do not share all their information. In that sense there is obviously limited sharing. I think the connection between the verbal and the kinesthetic is the most blatant. Describe how to ride a bicycle well enough that someone who hasn't ever done it can do so. But there are lots of "limited channels". Consider trying to explain a "gut feeling".
Depending on your definition of "mind" this can be a sufficient argument. And the parts have limited communication to improve efficiency. They only degrade it in particular ways that are generally less important.
>Well, the brain clearly has multiple segments that do not share all their information. In that sense there is obviously limited sharing. I think the connection between the verbal and the kinesthetic is the most blatant.
Another prosaic example of multiple high level processes that everyone has experienced is searching for a word, not being able to find it, then having it pop into one's perception a few minutes or hours later. Clearly _something_, with the capability of performing linguistic search, is running in parallel with the part of one's mind that is accessible to introspection.
Can you say more on the higher bandwidth channel part?
To me, “mind vs brain” is like “software vs hardware”. I had the impression this was a commonly shared vocabulary: thoughts and ideas and feelings are phenomena of mind, while neurons and glia and synapses are phenomena of brain.
The "hardware/software" analogy is likely misleading. The problem is that we don't have any evidence suggesting the brain "stores" anything the way computer memory stores information in strings or 1s and 0s. Say, I memorized a string of numbers. My (admittedly, amateur) understanding is that there isn't a specific place, an "address" in the brain we can point to that has been altered to store these numbers. We don't quite understand how the brain retains information.
I thought this was done via neuroplasticity, with neural connections growing or atrophying based upon usage.
IIUC, there is not yet a consensus on how memories are stored. It does require protein synthesis for conversion of short term memories into long term memories, but that's not sufficient for a complete determination.
I admit my memories of this are hazy, I've read a paper on the topic a few years back. I'll try to find it when I get a chance (work needs to be done :) ).
Sure, I'd agree the mind is analogous to software but, to carry the analogy farther than is really warranted, you can have multiple computers working together on a common goal over a network but you can't have them literally running one process together unless they're wired up so closely that they're really more one computer (I guess you could write an OS that permitted that if you wanted but it would be ridiculously slow and I think the analogy breaks down there anyway). If there's far more communication taking place between the parts of a mind running within one brain than between different brains (of course, any communication between minds or their components must be physically embodied as communication between neurons), then it just seems most reasonable to draw the boundary of what counts as one mind there.
All of this (both the one-mind-multiple-brains part and the one-brain-multiple-minds part) is of course rather fuzzy, as most things in biology are. I'm not saying it's strictly impossible, just that there are reasons for it to not be the case.
This depends on what you mean by "process". At the OS level, no, they aren't the same process. But at the logical level, developers of distributed systems refer to what is effectively "one process", such as, e.g., a distributed database. And brains don't have, e.g., process IDs, so a mind seems to map more cleanly onto the distributed-systems notion of a logical process than onto a process running on one CPU.
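To make the distinction concrete, here is a toy sketch (my own illustration, not anything from the thread): several OS-level processes, each with its own PID, cooperating so tightly that at the logical level they amount to one process computing a single result.

```python
# Toy illustration only: four OS-level processes acting as "one logical process".
from multiprocessing import Pool

def partial_sum(chunk):
    # Each OS process handles one shard of the shared task.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    shards = [data[i::4] for i in range(4)]    # split the work four ways
    with Pool(4) as pool:                      # four separate OS processes, four PIDs
        total = sum(pool.map(partial_sum, shards))
    print(total)                               # one logical computation, many PIDs
```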
Is it even obvious that your quoted principle is true? In my naive understanding, the observations seen in split brain patients might raise a question mark here.
I don’t think it is, but I’m open to it. Hence trying to understand the best arguments in favor of it.
Back when the Turing test was considered a reasonable test for computer consciousness, I figured that our mental models of others, to the extent that they were faithful, probably had some degree of qualia - making "many minds per brain" the default. Hofstadter pushed this idea hard in "I Am a Strange Loop." Now that we see GPTs like Claude blow past the Turing test but do not consider them to have qualia, it's harder to say.
The actual Turing test hasn't been passed. (It probably won't be.)
FWIW, a human-equivalent AI is wildly implausible. A near-human AI would be stronger than most people in some areas and much weaker than them in others. The limited "5 minute version" can plausibly be passed, but I doubt that it has been, despite the articles claiming that it has. (67% of the humans were rated non-human??)
OTOH, if you allow the questions to be posed by randomly selected volunteers, you'll probably get figures that show it passed. Some percentage of those will be trolls. Many will be seeking entertainment more than information. Turing wanted the questioners to be people who didn't want to admit that computers could be intelligent, but were seriously looking for answers.
Do you think this doesn't count?
https://manifold.markets/SteveSokolowski/did-ai-pass-the-turing-test-in-the
I don't know what "The Circle" is. I don't know what the rules for interaction among the players are. So I can't evaluate it. But I didn't count the one that played Diplomacy.
Turing envisioned the questioner asking very specific questions. I actually think that many of them would not be successfully completed by a typical human who was a native speaker of the language the test was being given in, but then, e.g., success in extracting the 5th root of 23 would count against the respondent being human. (His question was more like "Compose a poem". Perhaps he suggested a particular poetic form; it's been a while since I read the test's full description.) I would probably guess that if the respondent composed a proper sonnet, it was an AI, since few people could do that.
Congrats to the winners.
I'd like to hear any thoughts about my Wages of Destruction review from anyone who read it (feel free to say you didn't like it).
https://claycubeomnibus.substack.com/p/book-review-wages-of-destruction
Like Johann said, the opening reads poorly, like you want the Nazis to come off well.
I think you make a couple of early unsupported assumptions that throw the whole thing off. The first is this one.
>But from the inside view, every side in every war has framed themselves as the good-guys and the enemy as the bad-guys.
Well, they frame themselves that way to garner support, but it's not some universal thing that everyone believes they've got justice on their side. I don't think the Mafia thinks they're the good guys, I think they think there aren't any good guys and the people who think there are are suckers. Serial killers, rapists, arsonists, there are plenty of people who are in fact just evil. (We also have Hitler's speeches on record, we don't have to guess what he said. https://en.wikipedia.org/wiki/30_January_1939_Reichstag_speech#CITEREFLongerich2019)
The second one being that this good/evil dichotomy is unprecedented in a war of this magnitude. I would suggest it's actually the standard. Ancient Greece and Rome would wipe out the towns they captured. Between 1400 and 1800, colonial forces destroyed all the native tribes across the Americas, and enslaved the locals in Africa and Asia. The only novel thing about Nazi Germany's actions in World War 2 is that they tried it on post-colonial Europe, and eventually lost. It's easy to look like the good guys when the whole war is taking place on your territory.
I take issue with your first critique. The quoted comment is explicitly framing this about *war*. This deals with society level interactions, or at least a tribe/proto-kingdom/whatever. In that sense I think it is fair to say that every side views their own cause as righteous. Expanding the idea to individuals is drawing the wrong conclusion. Same with conflating the mob with an organized military force. This is the part where I admit I only read your comment and not the review, so if the review also expanded the scope of the good/evil presentation I withdraw my remark.
On the second issue, absolutely agree. Rome destroyed entire groups of peoples, like Carthage in the Third Punic War or Sulla butchering the Samnites. Or read Caesar's commentaries on his campaigns in Gaul. Or ethnic groups that raided Roman territory as an entire host including the women and children, like the Cimbri, who were entirely wiped out or enslaved. For a modern comparison, look at the communists. The Soviets killed more people than the Nazis ever managed. The Cambodian regime similarly wiped out 25% of their population, far higher than any Nazi genocide.
The book is about trying to rationalize Nazi war actions, which means rationalizing Hitler's decisions; the reviewer offers these two points as reasons why the book appealed to them. It's not about whether the average German thought it was righteous, it's about whether Hitler and his inner circle did, and I think it's a mistake to assume they needed to.
That's fair, and on me for not reading the review before commenting.
I liked your review and it made me want to read the book. However, I was a little bit freaked out by the beginning, which made me think "oh no, is this some piece of Nazi apologetics?" It's probably hard to write a more catchy introduction, but I personally found it very aversive.
I really liked the sausages bit. I wasn't aware that even on the brink of hunger so many calorie inefficient meats were still prioritized. As a vegetarian today this makes the problem feel even more harrowing/awful lol. Some interesting details I appreciate you highlighting.
I personally would've liked a little more clarity on the central question. Seems to be "yes the Nazis were obviously evil, but was their evil a logical conclusion from a set of initial conditions they were rationally responding to, or did even their economics not make sense?" Hard to answer succinctly/directly but I wanted a more succinct/direct answer to that crux.
Unfortunately the book itself doesn't give a very clear answer imo, it just suggests that they were more rational than their modern depiction suggests. The sausage question is a major component of the answer in my view and I don't think Tooze realized its significance, I'm also vegan and that's probably why I picked up on it from the couple of brief mentions. I googled around a bit after finishing the book and I couldn't find anyone else discussing it either.
I've read the book (and reviewed it, on Inconvenient History). MANY edits needed, of which the worst two: diffuse > defuse, eventually lead > eventually led.
Tooze's work, and your review, both neglect the Suvorov explanation for Operation Barbarossa, which dooms them to a grievous level of irrelevance (IMHO).
Otherwise, a thoughtful and thorough review, which gives an excellent insight into the book's content and meaning. MUCH more serviceable than my review, which is somewhat ideological, as well as cursory.
Suvorov is not taken seriously by mainstream historians. Nor by one of the folks Scott highlighted as having both domain expertise on COVID & creepy oracular powers:
https://westhunt.wordpress.com/2015/02/13/various-crap/
>If the Sovs were within a couple of weeks of launching invasion, you’d think that they would have called up the deep reserves, bothered to get all of their tanks working, stockpiled fuel, run recon overflights, snuck sappers into German-occupied territory (to sabotage bridges and cut communications lines), finish reorganization of their tank corps, etc. etc..
Did they do any of that in Ukraine in 2022?
They did a shitty job executing but we literally watched them engage in a major buildup for months. That’s why lots of people were warning about a likely invasion.
There was clearly some kind of forethought put into the Russian invasion, although they didn't understand the type of war it would be. The Russian command planned and executed a lightning strike to position forces outside Kiev and occupy Antonov Airport. Presumably they thought this would force Ukraine to the negotiating table and they could take modest gains in a very short and bloodless operation. Also the Russians never called it a war, the whole thing is the SMO - Special Military Operation.
Obviously this didn't work and the Ukraine war would become a years long affair of bloody attrition. The Russians were clearly unprepared for this, as evidenced by the early Ukrainian victories where they targeted and punched through the weak points in the Russian lines. But the Russians were preparing for a short, swift operation and not a major war. So citing the lack of preparations for a major war would be rather circular.
What's the last big war Russia launched an invasion in? Same question.
WWII was the last large scale war really, which isn't very helpful. Maybe you would want to look at the Soviet crackdown in Czechoslovakia for pre-emptive efforts, but that wasn't really a war. Also I don't know the details.
I have developed the habit of not taking the mainstream (very) seriously, including historians, especially on a fraught matter such as this.
Did you read the link? It's by a physicist (though he's also known for his collaboration with Paul Ewald on pathogens & Henry Harpending on human evolution) rather than an historian.
I read the link, and found it interesting/credible. Would look further if I knew where.
In the meantime, the balance of (my) evidence ...
"neglect the Suvarov Explanation for Operation Barbarossa, which dooms them to a grievous level of irrelevance "
Could you elaborate? Tooze's analysis explains a lot more about the war than just the reason for Operation Barbarossa, even if he doesn't get that right.
Well written, but you missed an opportunity to shore up what I presume is a weakness of the original work (I haven't read it, only heard about it online): more explicitly connecting [lack of] industrialization with cultural outlooks on race, urbanites (Jews), slaughter of enemies, etc. (I.e. the German worldview and motivational structure is alien because it's from a different place: the past. Men in Germany in 1930 remembered when things were different and better in a way that men in Britain couldn't.)
Not that it's the job of the reviewer to do so, but I feel as though this substack's readership rewards ambitious reviewers.
Interesting, not really something I'd considered. I'm not sure Tooze would be persuaded by that argument though. Germany was only un-industrialised compared to the UK at the time, and more industrialised than many European countries, about the same level as France I think. And it seems at odds with his theory that it's the position in the global trading system, and the availability of farmland that mattered.
I liked it a lot and am surprised it wasn't a finalist. Very nice work!
Same sentiment here. I found it an excellent review.
I've got a new piece in 3 Quarks Daily: Affective Technology, Part 1: Poems and Stories, https://3quarksdaily.com/3quarksdaily/2024/06/affective-technology-part-1-poems-and-stories.html
This is the first in a series of three articles on literature considered as affective technology: affective because it can transform how we feel, technology because it is an art (tekhnē) and, as such, has a logos. In this first article I present the problem, followed by some informal examples, a poem by Coleridge, a passage from Tom Sawyer that echoes passages from my childhood, and some informal comments about the underlying mechanism. In the second article I'll take a close look at a famous Shakespeare sonnet (129) in terms of a model of the reticular activating system first advanced by Warren McCulloch. I'll take up the problem of the coherence of oneself in the third article.
Ok, I won’t be doing this often but this one is actually relevant to the concerns (and expertise) of many here, so I don’t feel too bad about the blatant shilling. I wrote a story; it’s on my newborn baby substack; it’s a sort of Stephen King-y short horror story about AI alignment/instrumental convergence/bad stuff happening very suddenly. I think you will enjoy it, especially if you’re into this sort of thing but hopefully also if you aren’t. If you do, please consider sharing, subscribing, etc.
https://open.substack.com/pub/pulpstack/p/recursion?r=6agbi&utm_medium=ios
It's a good story. Well written (maybe needs a bit of professional editing but that's praise, not criticism - it's good enough to be worth editing) and the development of the plot runs smoothly.
Give it a bit of a shine and you could try submitting it to other places online, though I'm no help when it comes to online SF publishing sites.
Thanks — I agree it could use an edit or two; my problem has always been procrastination so part of the reason I’m doing a substack is to force myself to put things out there on some sort of schedule. It’s so hard to tell when you’ve just written something in a bit of a hurry, so it really means a lot to hear eg that the plot seems to be running smoothly. Thank you for reading it, and for taking the time to share your thoughts.
Yikes, thanks for the anxiety! Subscribed and shared.
Thank *you*. This is only the second post I’ve done, and every new subscriber still feels like a huge boost. So thanks, really.
Gematria researcher Luis Goncalves continues to probe the mysteries of Crowley's Liber AL using Base 36 Alphanumeric Qabalah (AQ).
He's now noticed that Charles Stansfeld Jones, better known as Frater Achad, regarded by Crowley as the predicted Magical Child of the New Aeon, actually does have a name that sums to 418 in AQ, the number identified in the same book.
Gripping stuff. See Luis's Gematria Research blog for more.
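For anyone curious how those sums are produced, here is a minimal sketch, assuming the usual AQ convention (digits 0-9 keep their face values, A=10 through Z=35, and anything else such as spaces is ignored):

```python
# Minimal sketch of Base-36 Alphanumeric Qabalah (AQ), under the assumed
# convention: digits 0-9 keep their face values, A=10 ... Z=35, and other
# characters (spaces, punctuation) contribute nothing.
def aq_value(text: str) -> int:
    total = 0
    for ch in text.upper():
        if ch.isdigit():
            total += int(ch)
        elif "A" <= ch <= "Z":
            total += ord(ch) - ord("A") + 10
    return total

print(aq_value("Charles Stansfeld Jones"))  # 418 under this mapping
```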
I ask this without trying to knock you, but, why should I take this seriously at all?
There's this idea among some futurist philosophers (Land and NYU Shanghai types) that language is the base layer of cultural forms and concepts. And that certain interrelations between the same are hidden in the numeric value of the words that relate to those forms and concepts. This idea is called gematria or isopsephia and is common in Hebrew and Greek Qabalah. The Bible for example is rammed with Hebrew gematria.
Liber AL was channeled, not written, by occultist Aleister Crowley at the beginning of the 20th century. It purported to be a kind of source doc for a new aeon for humanity. And now that people have found this Base 36 AQ numeration pattern in it, based around the English language, the idea is that AQ reveals correlations between the forms and concepts that are and will emerge in our new aeon.
Some believe that certain languages were sent by God, gods or aliens to direct human cultural evolution.
As to taking any of this stuff seriously, you totally don't have to. I for sure am not gonna try and convince you. Some people get attracted to investigating this stuff, that's all.
Indeed. It's just neat for some people.
Do what thou wilt shall be the whole of the Law.
At all? Well, Crowley was into codes and cyphers, so it's not implausible that he did encode messages into some of his works.
As for the content of those messages...I see no reason that anyone should take them any more seriously than they take, say, "Book Four". I found it worth a bit of investigation, and some of the principles work. The one I found most significant is "invoke often", but don't understand that too literally. I interpret it as, approximately, any thought pattern you repeat often enough will begin to appear real. And notice that you've been repeating thought patterns all your life.
I have a new Substack post, "The Shadow Fed", in which I criticize two papers from the "Shadow Open Market Committee" (nuancing and expanding on the excellent criticism of Brad DeLong -- highly recommended across the board).
https://thomaslhutcheson.substack.com/p/the-shadow-fed
Congrats to the book review finalists! Now that it’s not relevant, I’m curious what anyone thought of my Nexus review or ways to improve.
I also found it great! Reviews of fiction usually have a hard time convincing me because they often don't tell me about anything other than the novel itself, which is a rather limited topic. Your review was one of the very nice exceptions to this rule. It was thought-provoking, opened some new perspectives to me, and made some nice connections beyond the actual topic of the book. Well done!
I thought it was great! It was faithful to my vague memories of the book but also discussed a lot of interesting points that I didn't think about when reading the book. Kudos!
If you link to it, it might refresh people's memories!
Haha good point!
https://docs.google.com/document/d/1cp6iw5OEyDjnD_viZo-KL0Zv4jwQnMXtE4yIovfVAco/edit#heading=h.6uwmbaiy54ph
Are there any interesting studies (ideally not with terrible statistics, but given that this falls into management studies/sociology domain, I am not optimistic) about the impact of workplace romance policy on firm productivity?
I work for a company with a very permissive policy. Like half of the office dates each other or has dated in the past, and a fair number are married couples. There is nothing about it in the code of conduct.
Normally I would be horrified about this sort of thing and expect a lot of gossip and couples bringing their emotional problems to work, which would create problems in the office and disrupt the normal flow of work. However, everybody has, as far as I can see, stayed quite mature about it and I have not seen this interfering with work. Romantic relationships between employees with different hierarchy levels (i.e. the worst case scenario - "she slept with the boss for that promotion") are not explicitly forbidden, but are so looked down upon culturally that they either don't happen or are hidden very well.
Which makes me wonder: just how much do explicit policies help, and when they don't, what makes culture fill in the gap, and when does that fail?
I thought it was mostly about sexual harassment--i.e. (usually male) bosses forcing (usually female) subordinates to sleep with them or be fired.
Or, more cynically, the second half of that wanting something to hold over the first half and using the feminist movement to support them...but that's just me.
I am at a Big Corporate Place. There is no rule against office romance, in fact we have many interoffice marriages although I don’t know how many *met* at work. Also many children of employees work here.
The policy is just that you report it, and you aren’t allowed to work in the same reporting chain (i.e. in a position where one of you could directly influence the career of the other).
I think this is mostly CYA to prevent lawsuits about nepotism, harassment, etc.
I don't have statistics, but I do have a suspicion that stats may fail here because it's probably a more of a culture thing - and the outcomes can vary greatly.
Maybe the rule is there because of the culture (a bad experience). Sometimes the rule can help because it shows the management is serious about altering the norms, but other times it just results in people flouting the rule constantly, or coming up with ways to skirt it. But if you need this rule, your culture probably has bigger problems - that employees see work as a place to socialise, rather than a place to work! If you need to have a no-romance rule and you find you can't enforce it, your employees are probably also playing pranks on each other, spending a lot of time planning after work activities, gossiping, etc. You might have bullying and non-romantic drama. The manager might play favourites because someone said something he didn't like.
Whereas a culture that isn't so excessively social may or may not have this rule explicitly and still be ok - because if the culture is "work time is for work, not for making friends, you do all personal things in your own time, and never let personal things affect work ever" you naturally just don't have this problem even in the absence of such a rule, even if people are dating.
I think a decent number of large corporations (> 1000 employees) only limit fraternisation within your direct work team and make you declare conflicts of interest where relevant. It's a non-problem if your partner works in a completely different area of the business and you have zero work contact (there are numerous people I work with who have that exact situation).
In your case I'd guess that you have a combination of having one of the work-only-at-work cultures, and/or you're just not in on the gossip (many work places strive for work-only-at-work but still have a little bit of gossip, it's fine as long as it doesn't take over).
I think personnel composition also matters a lot. A startup (mostly 20-somethings straight out of higher ed) has a very different vibe from my workplace (1-2 grads with very little on-the-clock contact with each other; everyone else in the team is like 30-50, married with kids). I literally can't imagine workplace romance being an option for me - everyone is neither eligible nor interesting (most people want to date someone their own age).
I'm fairly sure the few that happened primarily started at after work events, introduced by friends from outside your immediate work group. Also, grads get moved around very, very, quickly and frequently and are always in teams surrounded by older people.
On the flipside, some of our service providers run workforces of mostly single men / some women (frequent travel to remote mines does not make for stable relationships) who are under the age of 35 (they tend to quit, get promoted, or get too injured past this age). Shifts are long (12 hours+) and people sometimes bring personal stuff to work (because most of their day is work).
Their Christmas parties are open bar. The fall out every January is both predictable and catastrophic. But the actual work arrangements make it kind of hard to change the culture, because long shifts and living in work accommodations means you don't have any other social avenues, and the physical/travel heavy nature of the work means you can't anchor your culture with family-oriented 9 to 5-ers (well, you can, but they'd have to raise their families in a desert mining town, and you get an entirely different kind of vibe there).
This is rather hypothetical, but I wonder whether the main thing is a positive culture which holds that doing the work is important. Such a culture would admit modest amounts of non-work, and that would probably be better than an all-work culture which attempts to require work even if the culture doesn't especially prioritize it.
Wow, congrats for finding a sane place to work. All I seem to hear about corporate culture these days is about policing speech and behavior out of fear that someone might have a human feeling here or there!
Isn't there a major airline that does this? I think it's Delta, but I can't find anything firm with a quick search.
My suspicion is that explicit policies are most useful as a defense against lawsuits and as something to point to if someone really crosses the line and needs to be disciplined or let go.
In general I think people are fairly good at navigating these things themselves; formal rules are more to protect against worst cases than to improve the median case.
Sexual harassment, etc.
The sorts of people banning romantic relationships are usually not the sorts of people worried about the birthrate.
Documentary movies always seem to use voice-over rhetorical questions, long scenic shots, satisfying conclusions or other techniques that enhance the entertainment value at the expense of information transfer.
Lectures have pauses, stumbles and other live-performance issues. Video is seldom used.
Books are nice but can't use video.
I want to see a documentary movie that is optimized for information transfer with no regard for entertainment value. Basically a book or an academic lecture with high-budget video to enhance understanding. Is there such a thing anywhere?
3blue1brown? The videos don't necessarily cover as much as an academic lecture, many of them are pretty well optimized for learning if you happen to be at the right level as an audience member. I particularly like their work on statistics - it does a great job introducing ideas like Bayes theorem at an intro college level.
>3blue1brown?
Seconding the endorsement
I was just telling a bunch of folk about a month ago that, despite my not enjoying true-crime as a genre, this particular true-crime documentary was virtually perfect in form. As far as I remember, 100% of the visual content is evidence from the case (plus a couple of Google maps), with no arranged talking head interviews, lingering scenic shots, recreated scenes (*puke!*), or other filler gimmicks.
There is some irritating narration which includes color commentary while transitioning between facts ("She would then discover his horrifying secret," etc); had they cut all of the color commentary this would actually be a *perfect* documentary of a real crime.
And as someone who formally studied both documentary films and film editing, I consider the first 30 seconds to be absolute *genius,* some of the finest documentary work I've ever seen. The first three cuts are a truly magnificent set of choices (choices from police evidence!) which elegantly establish the event and introduce the "characters" with brutal and intriguing efficiency.
Again, I hate the color on the narration, but this is otherwise about as good an information transfer about this crime as I can conceive.
(I'm guessing you weren't looking for efficient documentaries about particular true crimes, but I'm mentioning this video because its form is supremely masterful, even if the content isn't useful.)
https://youtu.be/pvrp87VXtD4?si=K1dS9GsQP8-MWElk
I watched it-- the question I'm left with is that a lot of people knew the young man was dangerous, but it wasn't the sort of knowing that rose to the level of action until he committed a murder.
How would things have to be different for the public to be protected from someone like that without making life worse for harmless weirdos?
I don't think anything could have been done. Many people thought he *might* be dangerous, but it's important to remember he wasn't *actually* dangerous...until the day he was.
After all, a LOT of people make macabre jokes and are socially inept enough to seem "creepy." I strongly suspect that every single person living in a big social network has encountered at least one person, maybe a few, out of all the classmates, coworkers, neighbors, clubs, friends of friends, oft-visited businesses, etc. who they wouldn't be surprised to discover was a budding serial killer. I can think of several in my own life where a YouTube documentary about them would make me say, "Yeah, that's about right."
And I'm pretty sure *I* am one of those people for a few of my classmates!
When I was in seventh grade I convinced my entire middle school - including teachers and administrators - that I believed I'd been abducted by aliens (it's a long story). Then in high school, I spent several months writing ostentatiously graphic horror stories *and reading them out loud* for an honors English class as a contest of oneupsmanship with two goth/nerd dude friends.
I'm confident that if I ended up as the subject of a YouTube documentary about a serial killing or spree killing, my classmates and teachers would say, "I knew it!"
It's complicated because most of his aggression was verbal, but if you watched the video, there's a section about him probably killing a cat, and it got memory-holed. His fellow students (I don't remember about adults) kind of knew it and kind of didn't and didn't want to think about it.
I remembered the part about the probable cat killing, but the problem is that it was only probable. If there had been some definitive proof of animal torture, I'm sure his parents would have made sure "something" got done, but even with their best efforts, that "something" would almost certainly have been anemic, like counseling with probation (and especially so if the cat thing happened when he was under 18).
Perhaps in a different era, when it was possible to contain a family member in a mental health facility with a couple of signatures and a doctor's sign off, this dude's parents and/or community would have been able to stop him before he harmed anyone.
But we're not in that era now. There are many, many places in the US where the justice system refuses to contain dangerous people even after they've assaulted another person (on video, no less!). I can't think of any place that would devote meaningful resources to someone merely because they have a creepy vibe and some rumors circulating about them.
More's the pity.
It's messy, especially because he was smart enough to attack someone who *wasn't* in his immediate community. It still seems like at least telling him that his sense of humor was substantially unwelcome might have been something. He did harass people, or at least that one girl he targeted for being fat.
From my point of view, the response to him seems flabby. His feelings are important, so he's allowed to make people's lives worse; their emotions aren't important, because welcoming difficult people is the primary goal.
It's a real problem because sometimes people are arbitrarily defined as difficult.
Arguably, the people around him failed to protect him as well as failing to protect the homeless man. I don't know whether there was any way to convince him to value people, at least in some minimal way, but perhaps he could have been convinced that there are social mechanisms that might not let him get away with murder.
Thanks for sharing!
I agree with you here as well, and am frequently annoyed by the way documentaries are presented. Even more upsetting to me is the added drama and the humanizing of animal behavior in nature documentaries: narrators telling us an animal is timid, or curious, or afraid, when the actions of the animal don't reflect the narration. If a nature doc goes as far as to actually give the animals names I generally immediately turn it off.
There isn't any intended information transfer that could be sacrificed for entertainment in a documentary. They're a careful kind of dishonesty that masquerades as informative.
Thought experiment: put every microclaim (including implicit ones involving a shot that makes some opposition figure look ugly or brooding or evil) from a 90-minute documentary on flashcards, and go through all several hundred of them over a period of weeks with an attempt to debunk or reframe each one.
If you eliminate the requirement for the video to be "high budget" then there are a ton of youtube channels dedicated to precisely this.
Here are three that I subscribe to
https://www.youtube.com/@DividendTalks
https://www.youtube.com/@CoachRogue
https://www.youtube.com/@TheQBSchool
Yeah, youtubers seem to fill this void. It would be nice to have something like that but big-budget (for both research and video).
I suspect that most of the time, long form text + illustration is the best way to get information across. If you put this text on the internet, then when a video is more appropriate, you can just embed video. However, this is still missing the most important part of real lectures, interactivity. To get interactivity when it is needed, the internet wins again- just embed a flash ga- damnit this is just homestuck
Andy Matuschak's Orbit is a good attempt at this with spaced-repetition
Great solution except for the meteors!
Wrong medium. If you optimize for information transfer, video becomes more of a burden. It has its place in blended learning and MOOCs, but pure video lacks a control channel (slowing down, speeding up, rephrasing on demand, having additional info in some textbox). Many video players are also poor when it comes to bookmarking, annotation, etc.
I am looking for material suggestions for some toys I want to make. I want something with plastic-like properties: hard, durable, food safe, transparent.
But the only way I know to work with real plastic is to model your shape, carve a mould out of aluminium with a big CNC milling machine at lunatic expense, install it into an injection moulder, and run yourself off a couple thousand copies.
I don't want to do that. I'm thinking more like, something I can sit down and sculpt with my hands at the dining room table on a Sunday evening.
(The shapes I want to make are complex, 3D, and contain through holes - vac forming isn't an option.)
My current thought is I could sculpt shapes using some kind of polymer clay, create a mould from that using liquid silicone, then pour epoxy resin into that mould.
Epoxy resin has all the properties I want, plus you can make pretty marble colours that kids will love.
Unfortunately there are lots of resins that are definitely not food safe, and while others kind of imply they are, I haven't seen any that are willing to come out and positively promise it.
I'm erring unusually far on the side of caution because the target audience is my ~6month old child. I have watched what my child shows interest in, as well as how my child likes to develop its grip, hand and finger coordination, and I've had some ideas for toys based around that.
So we're talking a small scale low volume arts-and-crafts level operation here, yet one that produces results that can staunchly withstand the sheer destructive force a half year old baby with a preternaturally strong grip and the novelty of its first tooth can bring to bear.
If you don't mind having to throw them out and re-make them often, I wonder if you can use literal food - sculpt things out of pasta or bread dough, vacuum seal/shrink wrap your toy (the ones for sous vide probably work), and chuck them out if the casing breaks. It will unfortunately really limit your shapes.
Konjac jelly could work if you have a suitable mould (which you can 3D print if you can find a printer that does food-safe prints) but be very careful with choking hazards. These would be one-use but a lot less effort to make repeatedly.
Food grade beeswax, if you're good at whittling and can find a supplier with your desired properties.
Kinda want to play with the beeswax now just because.
A baby will be able to gnaw their way through beeswax as soon as they have the suspicion of a tooth; if yours already has a tooth and an enquiring nature, it may not be the best choice. Depending on the shape, it may also break when under relatively minor pressure.
Pros: Beeswax is non-toxic, easy to find, fun to work with, and smells pleasantly of honey.
Cons: It does tend to crumble at fissure points in a distinctive way that is somewhat hard to clean up.
If you're open to machining your pieces, look into food-grade Delrin and other machinable plastics.
Hadn't considered it, but now I am, either for this or other projects.
In case you do decide to play with polymorph/polycaprolactone as others have suggested, I'll share some tips: it sticks too well to some metals. It is easily glued with super glue, which is also fairly biocompatible. The brand matters (it can be overly sticky or brittle), but I think anything you can buy outside of China is probably okay. And it becomes somewhat brittle in a few years, probably due to oxidation. (I wonder if some brands don't embrittle, yet are still food safe?) And it's very hard to get big pieces to hold their shape while they cool unless you use an armature.
Edit: never mind, I saw you wrote below that you already know how to 3D model. That totally changes the effort trade-offs.
Just because I can do 3D modelling doesn't mean I want to. I'm thinking of having a go at this even if it's just for messaround or prototyping. Thanks for the advice.
It just occurred to me that plaster of paris is also kind of fun. And if it turns out you enjoy mold-making, there are a whole world of materials which on their own don’t meet your requirements, but might be used in casting to make final materials that do suit your needs. (The final pour would probably be hard silicone since food safety is a primary concern.) Just don’t let a person try to take a casting of a limb—that’s fairly dangerous. I imagine this activity is also something you could do with a kid when they’re older.
I work in designing implantable plastic medical devices
I would definitely look at 3D printing. Protolabs, Formlabs, and others have print shops that are relativvveellyy reasonably priced. It'll be expensive for a baby toy to be sure but not totally unreasonable. Sounds like it's as much a project for you. They will have a wide range of materials.
I would shy away from molding your own stuff out of a plastic resin, it's often the plasticizers that are actually dangerous and it'll be harder to find out exactly what some particular brand or formula is using. If they advertise it as food grade or medical grade it's likely to be safe to ingest. Molding your own medical grade silicone actually isn't too difficult and probably the cheapest, and you can get it fairly firm, might be worth considering.
If you go through a print house, pick a material like PEEK or PE or UHMWPE or medical grade PP. These are all commonly used plastics for short term contact devices like feeding tubes and such.
That's a great suggestion, thanks. I'm currently thinking I'll do some basic playing by hand, then move to 3D. I shall justify the cost by adding insanely intricate and unnecessary internal details.
Wait, places can 3d-print UHMWPE now?
I kinda botched my sentence - technically yes, but realistically no, and I don't think you'll find it at a print house. PP and PEEK can be found though.
Depending on what you want to build, thermoplastic beads might work? It's essentially just plastic with a low melting point, so you can melt it in hot water, and then it cools hard. I would expect it to be relatively baby-safe and it's easy to use. It's difficult to make precise shapes out of, but fine for plasticine-like modelling.
I'd be somewhat worried about toxic plasticizers in a thermoplastic - moreso than other common plastics.
I'd ruled that stuff out, and now I can't remember why. Maybe I thought a baby's mouth could reach temperatures of over 80C for some reason.
EDIT: remembered as soon as I hit "post": I think I decided that the chances of the baby gleefully dunking the toy in someone's coffee mug and causing it to melt in there were high enough to rule it out.
I have played with the thermoplastic and think you might like it for the visceral molding process you are talking about. In coffee it would take a while to get soft and then shape wouldn’t change too much unless you apply force to it.
There’s a natural material with similar properties - Gutta Percha. It was used as plastic for kids toys before we had modern plastics. It also becomes workable at the temperature of boiling water. I have not been able to find a bulk source of it though - at one point humans made enough to coat undersea cables and such, but now it seems to mostly be in dental supplies
I think that would be a really unlikely scenario since items made from it take time to soften - the baby would have to find some rather hot, unattended coffee to drop it into, and then leave it in for a couple of minutes before fishing it out of the scalding liquid with a spoon or something, and at the point where it's actually very soft, it's also quite hot and not fun to touch. It comes in little beads which soften quickly, but any pieces larger than that take much longer to become soft.
I would rate the danger as lower than giving the kid a sponge to play with, which they could conceivably put into a hot drink and then squeeze over themselves. Edit: or just spill the drink over themselves, come to think of it.
If the ability to do things with your hands directly is central to what you are looking for, don't bother reading the rest of this comment.
Otherwise...
Have you considered 3d printing? There are some food grade resins (though you'd still want to coat them and clean them up regularly).
I'm fairly confident there are some materials that would be reasonably safe - for example, some dentists use 3d printing to produce aligners, and if a material is safe enough to be literally in your mouth for 2 weeks, it should be safe enough to produce a toy.
With the right kind of coating and regular sterilisation it should probably be as safe as it gets.
Of course, more expensive than purely hand-made approach, but easier in a sense if that's within your budget.
That being said, of course do your research and don't take me at my word. I'm just trying to point to a potential option you did not mention in the original comment.
I'm happy to get stuff 3D printed - I'm very comfortable with digital sculpting and CAD modelling, and since bits of the design do need to be functional, doing it in CAD makes sense. I rather wanted to do something that didn't involve being in front of a screen, but I wouldn't let that stop me if it was the smart choice.
> you'd still want to coat them and clean them up regularly
> With a right kind of coating and regular sterilisation
This is my bigger reservation - anyone who inherits the toys in like a year or two is not going to do that. Also I will 100% forget to do it too.
Hmm, my mum just bought some resin bowls from a big retailer that are designed for food. I assume they checked whether it's food safe. I can try find out what they are made from exactly if you want.
If it's not an inconvenience, I would be interested.
I have an expanded theory of the origin of woke.
I'm going to take it as accepted that wokeness started in the United States and then spread from there. This seems pretty obvious and widely accepted; in any case, a roughly similar story to what follows can perhaps be told for some other countries, though that isn't necessary, since most of the world follows American cultural influence whether they like it or not.
I presented here (https://www.astralcodexten.com/p/highlights-from-the-comments-on-the-cf9/comment/55798428l) the general idea that wokeness started with Obama's election. Several people objected to this account on the basis that it doesn't really explain where wokeness actually came from, nor does it fit with Obama not being hugely woke personally, at least initially. So I'm attempting to flesh this out.
To start with, I think Hanania is right to trace the roots of wokeness to the 1960s. But it seems pretty clear that we're all confused on the origin of cultural wokeness specifically. I'm focused on that, and not on what role civil rights law played. And I don't claim any of this is at all original: I'm just stitching together bits and pieces of what lots of people agree on, and probably this exact theory has been presented many times already.
The essence of my theory of wokeness is this claim (that may or may not be true, but feels true): since the 60s, and especially around 1980, a minority of the American population (30-40% at most) have been moving further and further to the left (on social and cultural issues) pretty much constantly, while the rest have been largely static. This fundamentally differs from the middle of the 20th century, when almost the whole society shifted left on lots of things as one (leave aside whether left and right have intrinsic meaning: they mean something, roughly, which is all that matters). I don't know how this claim could be tested, or what sorts of poll questions would be a truly accurate guide: I think you could probably get polls arguing both ways. But it *feels* largely true to me, I think at least some polling would support it, and if it's false then my theory doesn't work. Too bad.
Assume it's true: liberals keep moving left, centrists and conservatives stay where they are (with a few exceptions). With the stabilisation of the current Sixth Party System in 1980, the Democrats face a difficult problem: how to both turn out the liberals and avoid alienating the centrists. First they run Carter's VP alongside the first female VP nominee, but it's a hopeless campaign that has no chance against Reagan, so their loss doesn't really prove anything. Then with Dukakis they have what seems to be a potential winner with liberal appeal, but in the end he is sunk by being too out of step with the centre, especially on the death penalty.
So the Democrats have learned that appealing to liberals doesn't work. They pivot to centrism, and after three terrible losses the liberals in the party are desperate enough to be bullied into supporting Bill Clinton. That works. The centrists and liberals hold together once more and his re-election works as well, but then the split happens. *This* time, instead of trying to woo them, the Democrats tell the liberals "we don't need you, you're a liability, we'll focus on the centre". Enough liberals split off to the Nader campaign and this pulls away *just enough* votes to throw the election to Bush.
So now the Democrats have tried liberal campaigns and centrist campaigns, and both are potential losers. So what can they try? A flip-flopping candidate who says different things on different days, and to different audiences, and relies on dislike of Bush to scrape through. But the Bush campaign turns "flip-flopper" into an insult, and they lose again.
So by the beginning of 2008, what have the Democrats learned? They lost in 1988 by being too far left. They lost in 2000 by being not left enough. And they lost in 2004 by being too unclear how left they really were. It seems an impossible bind, but Obama's charisma is enough to temporarily save them. By speaking carefully and charismatically, he manages to persuade both centrists and liberals that he is on their side. Saying about gay marriage "my position is evolving" is heard by liberals (and conservatives) as "I'm in favour of it and I'll push it as soon as it's politically possible" and by centrists as "I'm no ideologue, I'm open to both sides". Saying about no longer supporting single-payer health care "I'm better now" with a smirk, is heard by centrists as having moderated and become wiser, and by liberals as a kind of sarcastic "I've realised it's not politically possible right this moment". I'm sure you can find lots of statements of his that are like that.
Well, this works in the short term, enough to win one election. The first unfortunate side-product of making both the liberals and centrists think he's on their side is that it makes conservatives lose their mind and hate him, because they hear the same thing the liberals hear. But so what, he doesn't need them to win. The right-wing backlash is extreme enough that the backlash to *that* gets him re-elected and keeps the liberals and centrists together one more time. And then comes wokeness.
Because the second side-product of making two groups both think you're talking to them is that you make them very angry when they realise that you might not have been. The liberals therefore react to this struggle for control in the following ways: language policing, turning viciously on your own side's people and searching their every statement for signs of ambiguous commitment to the cause. Purity tests. Demanding that every cause be linked to every other, so that you don't count as an ally on one thing unless you're an ally on everything. And so on.
From the liberals' own perspective this makes sense. They don't accept that a fully progressive platform is not electorally viable: they simply think they've been betrayed over and over by double-talking thought leaders. And they react to that in a way that's logical in a sense, but extremely toxic. And the Democrats and progressive thought leaders have no choice but to go along with it, since it's been proven that they can't win without the liberals.
But also, maybe the liberals have *some* awareness that full progressivism is not electorally viable. Which is why they de-emphasise substantive policy issues in favour of correct language and ideological purity.
I think this story explains many of the central elements of wokeness. It explains how it's encoded into the structure of the 60s social revolutions, which left a divided society of centrists and conservatives who want no more radical change, and liberals who want more and more of it. It explains why it took so long to really manifest: the logic of this situation had to be played out in many elections, and it had to be firmly proven that neither a centrism that ignores the left, nor a self-confident leftism comfortable in its own popular dominance (like the FDR form that won over and over again) is possible anymore.
And it explains the sheer toxicity of wokeness: it's an ideology of desperation but also of dominance. It's people who have not enough power to govern, but enough power to destroy a government (where "government" also includes the media and corporate and institutional structure built up around progressive values).
And there's another factor that I'll separate into another comment.
As someone who was on the ground floor of woke when it was still called being a cringe SJW and who remains woke to this day: I obviously think you are wrong re. who is moving where; you can tell from the majority support for gay marriage, the decriminalization/legalization treadmill, and the fact that a majority supports single payer healthcare now, as long as you don't ever use the words "single", "payer","government", or "socialism" when you describe it.
I think you are right about the timing though. The origins of the thought patterns, in my view, were people who thought that the social questions had all been answered, that people were having dreams, and that love had won, then going to a figurative uncle's house after Obama won the presidency and seeing him freak the fuck out.
What did it for me was the SoCal small business owner stepfather of my best friend, whom I saw at least 5 times a week, saying some wild shit about Blacks [N*****S], Mexicans, and Gays [F*****S] and having all the dudes around just nodding along (despite employing majority non-white dudes, and each of them employing illegals regularly), then going to my SoCal evangelical church and hearing about how the country was sinful and fallen because .... There Was Expanded Access To Healthcare For The Undeserving! (capitals for emphasis), a remarkable interpretation of Christian doctrine to be sure.
Even if these represent a small amount of the population (which they do), they hit at just the right moment to start the development of political consciousness for people who had just been hit by 2008 and could see that no punishment whatsoever befell those who caused it and benefited from it.
What we now call "wokeness" is just a development of currents that have been going on in the Left since far longer than I've been alive. It's a moderate development of what we called "Political Correctness" in the 1990s.
I would say that two big innovations mark the current era though:
1. The Left has pretty much abandoned the anti-capitalism thing. In the 2000s you'd get huge left-wing protests at Davos, at the IMF, at the G20, all those big meetings. As recently as 2011 you had Occupy Wall Street. But that stuff has gone away now. Whether the left co-opted big business or big business co-opted the left, the whole "rich vs poor" stream of leftism has vanished, leaving only the "privileged groups vs unprivileged groups" stream and a bunch of giant companies changing their logos to rainbow flags once a year.
2. The rhetorical innovation that "racism equals prejudice plus power". Again it's probably old, but I first heard it in the wild around 2007, and it seems to have completely displaced the old conception of what racism was about, permanently preventing whites from ever complaining about any racism directed against them ever again, because that doesn't count as racism. This was basically the abandonment of the idea that identity politics was in any way about equality.
>2. The rhetorical innovation that "racism equals prejudice plus power".
Incidentally, since Ibram X. Kendi was in the news lately, his most famous book (How to be an Antiracist) rather explicitly rejected this premise and used the word "racism" for basically all forms of race/ethnicity based discrimination, including having a chapter explicitly condemning (himself, in the past, and criticizing the movement in general for) anti-white racism.
Power can be very local. For example, parents might have strong preferences or aversions to their children according to skin color.
Or just what happens when you get stopped by a black cop or have a black supervisor. Individual people have more or less power in different situations, but it's silly to think about that in terms of whole racial groups. Blacks can have less power overall in the US than whites at exactly the same time a black Baltimore cop is beating some white guy to a pulp for mouthing off to him.
>racism is only really harmful if done by the powerful and only a nuisance if done by those without power.
Therefore, if I see a bunch of people who are racist against people like me, I should strive to make sure those people never have any real power. Or am I reading the wrong lesson here?
FWIW, ISTM that the US has been moving to the right. In the 1960s the dissidents were largely on the left. This was partially because of the draft and Vietnam. Since then it has been moving towards the right - largely, I feel, in antagonism to the civil rights legal implementation. Another thing driving it to the right is the general aging of the population.
OTOH, I clearly don't understand people very well...so I don't really even trust my perceptions of just how "left" or "right" the population in general is. But I do know that it varies wildly in different areas. So what you see will depend on how you sample things.
"Woke" was adopted from African-American slang, and this piece says the origins of the phrase "stay woke" go back to at least the 1930s:
https://www.npr.org/2023/07/19/1188543449/what-does-the-word-woke-really-mean-and-where-does-it-come-from
"MONTANARO: The word has a long history. It was used in Black protest songs dating back to the early 20th century, including by Huddie Ledbetter, better known as Lead Belly, the singer of the 1938 song "Scottsboro Boys."
HUDDIE LEDBETTER: (Singing) Go to Alabama and you better watch out. The landlord'll get you, going to jump and shout. Scottsboro, Scottsboro boys, tell you what it's all about. I'm going to tell all you colored people...
MONTANARO: Here's Ledbetter speaking about the song in what's believed to be the first audio recording of the use of the word woke. An old record - it's hard to hear, but he says in Alabama, be careful and stay woke.
LEDBETTER: So I advise everybody, be a little careful when they go along through Alabama - stay woke, keep their eyes open.
MONTANARO: Be careful. Stay woke. Keep your eyes open. The Scottsboro Boys were nine Black teenagers who are accused of raping two white girls in what is widely seen today as one of the worst cases of racist legal injustice. It helped spur the civil rights movement and loosely inspired the book and movie "To Kill A Mockingbird."
So the social justice set co-opted the phrase, particularly as it became known more widely due to the BLM activism (according to another article). It has since come to take the place of what was formerly "politically correct", then "social justice" and now "woke".
I think your analysis would benefit from more clarity re what you mean by "woke." At different times, you seem to be conflating it with Democrats, or with the left, neither of which is particularly accurate. (The Democratic establishment is hardly onboard with the woke agenda, nor are people on the left like Bernie Sanders and Noam Chomsky).
You might also benefit from being more careful about some of your political claims. Eg, this is not particularly accurate: "So by the beginning of 2008, what have the Democrats learned? They lost in 1988 by being too far left. They lost in 2000 by being not left enough. And they lost in 2004 by being too unclear how left they really were. It seems an impossible bind, but Obama's charisma is enough to temporarily save them."
The Democrats were very unlikely to win in 2004, against an incumbent with an approval rating of 50%, unemployment at 5% and inflation at 3%. And Obama's charisma had nothing to do with 2008, which was an awful year for Republicans (they had been in office for 8 years, the exiting incumbent was very unpopular, inflation was at 5%, and unemployment was at 6.5% and rising).
Moreover, it is odd to speak of a party being "temporarily saved" when said party has won the popular vote in every Presidential election but one since 2000 (yes, of course the Electoral College is a thing, but you are making claims about the popularity of the party, so popular vote totals are very relevant).
Nitpick: Bush won the popular vote in 2004.
Right. As I said, the Democratic Party has won the popular vote in every Presidential election but one since 2000.
Sorry, missed the "but one" caveat.
>Moreover, it is odd to speak of a party being "temporarily saved" when said party has won the popular vote in every Presidential election but one since 2000 (yes, of course the Electoral College is a thing, but you are making claims about the popularity of the party, so popular vote totals are very relevant)
nit: Assessing the meaning of the popular vote, given that we _do_ have the Electoral College and all but two (?) states use their electors in a winner-take-all fashion, is tricky. The incremental influence of voters in non-swing states is _tiny_, which influences turnout there. It might differentially influence turnout for majority and minority party members in non-swing states - anyone know of studies on this?
Fair points. Yes, this is harder than I thought.
I feel like I have an internal model of how political ideologies arose that seems to make lots of sense. But trying to get this across in words is difficult, especially when the meaning of every important word is debatable and will be hotly disputed.
But regarding your election explanations, I'm not sure they refute my point, because it's not really important why each election was actually won or lost, only what the perception is of why (on the relevant side).
Narrow question: Would you dispute that large parts of the left/liberalism/progressivism/Democrat-rank-and-file (choose whichever one makes my question most coherent) have blamed the 2004 loss on something like Kerry being too indecisive and/or not progressive enough?
Would you dispute that Obama personally got a lot of credit for the decisive 2008 win (justified or not)?
>Narrow question: Would you dispute that large parts of the left/liberalism/progressivism/Democrat-rank-and-file (choose whichever one makes my question most coherent) have blamed the 2004 loss on something like Kerry being too indecisive and/or not progressive enough? Would you dispute that Obama personally got a lot of credit for the decisive 2008 win (justified or not)?
Those are probably true, but I took you to be making a different claim. And, re Obama, I think that those same people gave tons of credit for his success to his policy positions (eg, health insurance, obviously, but also the Iraq war).
> the general idea that wokeness started with Obama's election
The Puritans burned witches; there was Prohibition; the Great Awakening saw a bunch of cults get started; and the civil rights era had communists targeting the black community for activist training.
Any history of American idealism going crazy that's only a decade old is ridiculous.
I think there are different aspects of woke that arose at different times.
I think the strong emphasis on identity politics came around the time of Obama's second term, when it became obvious that having a Black president was not going to solve all the problems with racism in America. The thrust of mainstream civil rights before this was about racial equality and integration. This was aligned with Dr King's Dream where we are no longer judged by the colour of our skin. After this period, civil rights became more about championing separate political identities, initially for African Americans but later for other groups like trans and gender-neutral identities.
The other main aspect of woke is the policing of language: for example, calling people out for stupid jokes (as with Justine Sacco), but also cancel culture for using the wrong word for an idea, and a proscription against cultural appropriation. I suspect that all of these came out of the subculture that evolved on Tumblr and escaped into the mainstream as Tumblr-ites came of age.
> I think the strong emphasis on identity politics came around the time of Obama's second term when it became obvious that having a Black president was not going to solve all the problems with racism in America
I remember a Jon Stewart bit from ~2009. He was talking about the latest thing that Jesse Jackson or Al Sharpton had said, and his response was "Whoops, sorry! It's 2009, there's a black President in the White House, your race card has expired!" And then he pulled out a card labelled "race card" and swiped it through a machine on his desk and it flashed "EXPIRED" and the crowd cheered.
It's a real shame I can't find that clip on youtube, because it really shows how perceptions swerved over the course of the Obama administration.
The original post is basically correct. Over time "big government" has suffered some key defeats in the English-speaking world, much to the chagrin of the far left, which has had to content itself with victories on social issues; but the buzz of these victories soon wears off, given the background economic inequality, and a certain spitefulness results.
I had hoped that Trump/Johnson (not my people) might at least shift the Overton window in favour of big government, making it easier for the centre left to do the same - and maybe then we could all relax just a little about culture wars, because everyone is getting a bigger slice of pie. But the pandemic was a disaster for the public finances and so austerity is back. In the UK, Starmer seems set to be Blair 2.0 - so we will get assisted dying, abortion decriminalization and very little progress on economic inequality.
"Big Government" has lost rhetorically, but the Government remains as Big as it's ever been.
If you want proof: you're in the UK in the summer - install a window AC and wait and see how long it is before someone from your local council comes knocking.
Window air conditioners are illegal in the UK?
I agree - when I say Big Government I mean it as in "the era of big government is over", roughly the Keynesian consensus. There is still what Popper called "the petty tyranny of the public official", and yes it's annoying. Mind you, many councils are on the verge of bankruptcy so their staff can't be everywhere, but the traffic wardens round here are certainly keeping busy.
Your comment originally said "this is basically correct" but now says "the original post". I'm unclear on whether you mean my comment here that you're replying to, the previous comment I linked to, or something else?
Other than that, I agree with you.
Your original comment. I edited because someone else commented and "this" became ambiguous, so I thought the edit would help. Ah, Substack.
Yes but woke existed in the 90s; see the movie "PCU" (1994), for example. Your framework seems very baroque to me.
That's not the impression many, many commenters had in the two Origin of Woke threads.
It's the euphemism treadmill; in the 90s, "woke" wasn't the term - that's the joke in the name of the movie, "PCU" standing for "Politically Correct University".
"PC" became adopted by the right to refer to a certain set of attitudes, so that term was scrapped and "social justice" became the preferred one. Same thing happened with SJW, so now they've moved on to "woke" (which, depending how impeccably correct you wish to be, is more white liberal appropriation of AAVE).
The terms change, but the underlying ideas remain broadly the same.
>The terms change, but the underlying ideas remain broadly the same.
You might enjoy Bob Frank's essay on leftists evading names for their faction: https://robertfrank.substack.com/p/whats-in-a-name
I prefer the less-practical framing of "of course the demon doesn't want you to know its name, otherwise you might be able to control it". If you can see it, and name it, you can give it an identity, and define it, and then attack it.
Many Thanks! Hmm... If a demon faction has a particularly innumerate talking point, and one knows its true name, can one exorcise it with: "The power of math commands you! The power of math commands you!" ? :-)
As someone who you'd probably count as 'woke', I don't think I've ever heard someone self-describe as woke, except for maybe a few people online circa 2015. It seems to be mostly a label that the right uses to describe people, not one that is self applied. This contrasts with "social justice", which I have heard many people on the left use for themselves (and still use today).
I don't buy that, on the grounds that it's a really catchy moniker. Like... goddamn it's a good bit of propaganda. "We're the people seeing clearly". And then to corrupt it you've got to spend a whole sentence setting up something about illusions from insomnia. You're telling me the right came up with a term like that, and then handed it to their enemies? No way.
Didn't it have its origins in some vernacular, non-ironical use of the phrase "stay woke"?
Sure, primarily in African American communities as Deiseach mentioned. But from my perspective, it seemed to jump straight from those communities to right-wing people describing leftists as woke.
I'll read that thread in more detail later, but thinking the Republican reaction to Obama was unique is questionable. Bill Clinton was himself seen as a massive threat when he was elected (even though he was a solid centrist). Rush Limbaugh would note all the time "America held hostage, day number [xxx]", and as you should know, Clinton was also the first black president in case that matters.
I don't claim to have a satisfactory explanation for woke, but as Tyler Cowen often points out, all things begin earlier than we suspect. A couple of important components of woke would probably be due to the flourishing of postmodernism in political discourse, as well as the impact of social media on incentives on how people signal their ideological alignment (e.g. purity spirals).
Postmodernism is definitely relevant, and further back the Frankfurt School, various Fabians and the Bloomsbury set.
To continue the discussion about applied math programs, here's the curriculum from the University of Toronto's Applied Math Specialist program. U of T is arguably Canada's top university.
13.0-13.5 credits, including at least 1.5 credits at the 400-level (Here the H1 courses are single-term, equivalent to three credits in a 120-credit system, and Y1 courses are two-term, equivalent to six credits in a 120-credit system.)
Applied Mathematics Fundamentals
1. Analysis: MAT157Y1, MAT257Y1
2. Algebra: MAT240H1, MAT247H1
3. Advanced Ordinary Differential Equations: MAT267H1
4. Computer Programming: CSC108H1, CSC148H1
5. Probability and Statistics: STA237H1/ STA257H1, STA238H1/ STA261H1, STA347H1
Ethical and Social Responsibility
6. 0.5 credit with a significant emphasis on ethics and social responsibility (list below)
Higher Studies in Mathematics
7. Topology: MAT327H1
8. Groups, Rings and Fields: MAT347Y1
9. Partial Differential Equations: MAT351Y1
10. Complex and Real Analysis: MAT354H1, MAT357H1
11. Geometry: MAT363H1/ MAT367H1
12. Advanced Applied Mathematics: 1.0 credit from APM421H1/ APM426H1/ APM441H1/ APM446H1/ APM461H1/ APM462H1/ APM466H1
13. Related Topics: 1.5 credits from: MAT332H1/ MAT344H1/ MAT454H1/ MAT457H1/ MAT458H1/ MAT464H1/ STA302H1/ STA457H1/ CSC336H1/ CSC436H1/ CSC446H1/ CSC456H1
Research Seminar in Mathematics
14. MAT477H1
More here: https://artsci.calendar.utoronto.ca/section/Mathematics
It's certainly not a very applied program. It's only slightly different from the Mathematics Specialist program, which is so spectroscopically pure it doesn't even require a computer programming course. But then it does say this is a program specifically for people who want to do math research.
Compared to the previous program:
+ instead of calculus, you have courses called analysis and complex/real analysis, which is more rigorous.
+ more probability
+ applied courses available (12. and 13.) appear more focused and useful
+ U of T is reputable
-- the way the program presents them, the options within 12 and 13 each appear mutually exclusive. Non-linear optimization is useful everywhere, but by choosing it you have to give up a bunch of other interesting stuff.
I would still be wary of recommending that anyone take even a good applied math degree, unless they have a clear vision of what kind of applied mathematician they want to become. The way some careers can go (how it went for me), the only benefit I got from doing the rigorous theory parts of an applied math degree was ... proof technique and satisfying some intellectual curiosity. Later in my life, I get to use some stats and probability, but I would have been better served by doing a statistics degree. Many applied math graduates who went into computer programming jobs use nearly nothing from it and would have been better served by doing a computer science / software engineering degree.
I would rather try to sell the idea that it is better to choose the applied part first, and then get the relevant maths: a degree in applied physics (get a rigorous background in physical phenomena + some familiarity with expensive physics lab equipment), any engineering (for obvious reasons, but I would highlight signal processing), ecology (full of interesting math problems from game theory to dynamics), genomics or bioinformatics (difficult math problems in studying life itself with DNA/RNA sequencers, where you have to figure out algorithms to make sense of your data before you have anything to study), any field of statistics (for obvious reasons), or economics (theory and methods for the quantitative study of human activity, especially econometrics).
Yeah, while the prospect of doing an undergraduate degree in an uncompromisingly intellectual field like math or physics is appealing, the question of what comes after that does rear its head. Presumably if I were doing that I would be planning on going on to something where the exact nature of my undergraduate studies didn't much matter, such as pursuing an MBA or going to Officer Candidate School. But just in case that didn't work out, it would be useful to have a plan B in the form of some studies in something a little more vocational. If things went awry and one degree was all I was going to get, I think I'd much rather face the job market with a degree that said Computer Science/Applied Math than one that just said Applied Math.
This looks like a great Applied Math program to me. It's got 1 course each in abstract algebra and topology, which lets students have enough awareness of other parts of mathematics to know what they don't know - it's definitely got the Math part down. In defense of the Applied part - for a pure math degree at most of the institutions with which I have been affiliated, you can graduate without taking a single probability course (this requires 3), or learning any programming (this requires 2 courses), or taking a dedicated differential equations course (this requires 2 courses); and this requires advanced applied math and topics courses as well. So it's maybe 1/3 foundational math courses of the more rigorous variety (calculus, linear algebra, analysis, maybe also the geometry course here), a sprinkle of broader math, and then the rest is stuff that's essential to an applied mathematician and merely advisable to a pure mathematician (probability, diff eq, programming).
I think my only complaint is that a student can apparently graduate from this program without seeing numerical methods or optimization courses. (But it's hard to fit everything into 4 years). A student graduating from this program would be very well prepared to study more applied math in graduate school, or to learn how to do a job.
Interesting. I would have zero problem labeling this as just a "math" degree, and it's also significantly more advanced than just about any math degree you'd see here in the States. On paper at least I'd say it compares favorably to UChicago in the 90's, which we (not inaccurately, I think) considered one of the most difficult math programs at the time, if not the most difficult. Of course, the devil is in the details; two courses can have the same name and yet cover very different topics.
Just published my forecasting roundup from Manifest. While others have covered the event itself, I think many here might find these predictions and takeaways of interest:
https://open.substack.com/pub/alethios/p/whats-going-to-happen-next
I noticed recently that I have a lot of opinions that are both strongly held and very weak. More specifically, I mean that I feel like almost no amount of conversations, posts, news articles, etc. would be enough to shift my opinion on them, but also that it would only really take one good journal article to convince me to change my mind.
Does anyone else feel like they have opinions like this? Is there some kind of established good epistemic practice around them?
In my experience this happens when one hasn't done a deep dive into position X but has seen a bunch of stupid and wrong anti-X positions. The only thing to watch out for is a subconscious tendency to go "Well, I've seen a hundred bad arguments, guess there aren't any good ones or they would have come up." But if the only place you're exposing yourself to arguments is whatever the top couple of comments on social media are (to pick a very common example), you'll see the same stupid but popular positions over and over, and you shouldn't be reinforcing the belief.
Thanks, this feels like a good explanation to me.
Yes, this. Some years back I came to the same self-realization that you have, and realized that it was not a smart way to move through the world. The core of it, I concluded, was exactly the mechanism that Jeff describes: the hundred bad arguments were effectively driving me batty, to use an old-fashioned word.
My solution was to become fairly ruthless about blocking out sources of way more noise than signal, without regard to whether my personal preferences aligned with the noise. This led to banning certain news sources from my daily life despite sharing the same broad worldview as most consumers of those news sources (I am deliberately not naming them here). It later helped me be ruthless about social media platforms -- I concluded that there is no practical "I just use it to keep up with distant friends" middle ground, not for me anyway; I just had to accept the loss of that benefit as the price of keeping my overall bearings.
You have high trust in some sources and low trust in others, and you're gating updates based on that trust. It's not a contradiction to say source trustworthiness can be domain-bounded - you'll trust the doctor over your own experience when it comes to health advice, for example, but you probably don't by default adopt his political or religious beliefs as well.
This is both natural and rational behaviour. Because there is no way to gather and evaluate all the information relevant to modern life ourselves, we are forced to accept what we believe from external sources largely uncritically. Our agency becomes about how we evaluate and place/remove trust in them. The upshot is that tribal fighting over source availability and reputation becomes tooth and nail.
I think you underestimate the degree to which one can evaluate the claims of so-called experts without oneself being an expert in their field. A scientist in one field, for example, can look at a paper in another field and (with a small amount of field-specific knowledge much short of expert) gain some impression of whether they are following good scientific practice. And if your doctor seems confused about the measurement units of test results, you might not want to trust them too much.
Scientists are well known for making garbage assertions outside their area of expertise...which they aren't always willing to properly limit.
Yes. And scientists are also well known for making garbage assertions within their field of expertise. There's no substitute for evaluating claims on their merits.
So, for example, I hope you're not still following the advice to replace butter with margarine made from trans fats, or getting your kids' tonsils taken out for no reason...
That wasn't actually what I meant by "garbage statements". There was some evidence that it was correct. It wasn't complete, and the advice was a mistake, but that's different. Often the evidence is incomplete, and a best guess at that point (i.e. most of the time) can be incorrect...it just has a lower probability of being incorrect. (There are very few things I actually count as certain. The way I've heard it put informally is "No zeros and no infinities on the playground", though if we're talking about probabilities that should be 1s rather than infinities.)
However, one thing I do count as certain is "it's dangerous to breach the barrier of the skin". Which doesn't mean you should never do it; it means if you're going to do it, realize that it's dangerous and take precautions.
I want to honor some reviews that I liked that didn't make it into the finals. Actually, there were lots of good ones. Sometimes the review itself was well written and had interesting commentary, and other times it was worth reading for a glimpse of the book being reviewed.
Egypt’s Golden Couple https://docs.google.com/document/d/1QiotH3aGFgNLGqsIHTK_Plm_gem2E4l2C2ctyGJd0jY/edit#heading=h.oeivp6prsn06
I'm pretty sure that was actually by Scott. It was funny and clever and maybe suffered from not having any great idea to reveal (and it also suffered from me knowing this stuff already) but there was a lot of interesting info and I got a feel for how little data we've got to deduce anything about the period and how shaky the deductions can get.
On the Bondage of the Will, by Martin Luther
https://docs.google.com/document/d/1cp6iw5OEyDjnD_viZo-KL0Zv4jwQnMXtE4yIovfVAco/edit#heading=h.ipdpema3erb
The reviewer is trying to make sense of a 16th century theological debate. I was impressed by how seriously and thoroughly they tried to evaluate the arguments. An interesting exercise in taking seriously ideas very disconnected from our own.
The Beauties: Essential Stories by Anton Chekhov
https://docs.google.com/document/d/14qa47TJ_Vyerx4XNgTCIh7PUZ_TOgNcU_eHm5So_zo0/edit#heading=h.3ppqrzh2z8mn
The review is OK, but I want to point to it to promote Chekhov: go read Chekhov, he's really good and at a local maximum of something. That something may be "very short realistic literary mood pieces" and may not be your favorite thing to read, but come on, it's the local maximum, you've got to try it.
The Divine Comedy by Dante Alighieri
https://docs.google.com/document/d/14qa47TJ_Vyerx4XNgTCIh7PUZ_TOgNcU_eHm5So_zo0/edit#heading=h.4h0wifn3eabc
It's a bummer that this one didn't make it: it's an epic review with a lot of work put into it and a lot of interesting thought, and I love the reviewer's enthusiasm for the subject. Having finished it, I feel like I got some sense of what makes The Divine Comedy actually good and worth reading (independently of it being Famous and Important). The review did kind of suffer from being obviously undercooked, but come on.
Winnie-the-Pooh & The House at Pooh Corner
https://docs.google.com/document/d/1Ki5XsE0jkxZtd2XAeyTAJw1ZjLh2Cu-matUYKAhA6-s/edit#heading=h.bdyc6ymrmrt2
It's a silly review that presents the Pooh story as a Homeric epic. (And I don't think this one is by Scott.) This sort of thing is a little too cute and clever for me, but it is pretty cute and clever.
Making the Corps
https://docs.google.com/document/d/1cp6iw5OEyDjnD_viZo-KL0Zv4jwQnMXtE4yIovfVAco/edit#heading=h.346x6bxstqbq
I read this one with interest for the sake of getting the gist of the book, which is about a strikingly weird and a little horrible experience completely unlike any of my own.
In the Time of the Russias by Stella Zamvil
https://docs.google.com/document/d/1QiotH3aGFgNLGqsIHTK_Plm_gem2E4l2C2ctyGJd0jY/edit#heading=h.e9559d1rx8po
Worth it to find out about a very rare book also about a different kind of horrible experience completely unlike any of my own.
"The Divine Comedy by Dante Alighieri"
Wow, thank you!
And here I thought it was terrible!
I still think it's terrible. It's a messy mess. I wrote much of it on the last day, and I was still scrambling to finish it in the last ten minutes.
Most of it is a bloated summary which should have been trimmed down to make room for more commentary about the allegorical, political and linguistic aspects of the work.
I'm glad you liked it (you're the second person to give me positive feedback, third if you count Deiseach but I think her positive feedback was for the choice of book not the review itself), because I spent the last month feeling ashamed of it, thinking I should have had the wisdom not to submit it and send a better version next year.
I thought the review was kind of a mess, but it *did* have some genuine insights. I found the analysis of differences between Italian and English very interesting; but structurally it was unfocused and there were some formatting things (e.g. it was not really clear when you were reviewing and when you were quoting) which really dragged down the experience of reading it. If I'm honest, I do think you'd have been better off preparing a better version for next year, and probably to focus on one or two aspects rather than meandering through a range of topics, only some of which you've been able to give proper thought to, across thirty pages.
The review was interesting, but I was personally most impressed by the translation itself. I would absolutely read (and buy) your translation if you ever complete it. Or if that seems unattainable, I would also be quite happy if you could post somewhere online, whatever you have so far.
I'm so happy to hear you liked the translation! I'm gonna post it online for sure. At least some of the cantos, the ones I'm satisfied with, maybe not all of them. And other translated poems too. The main reason I'll never complete it is that I expect AI to quickly progress to the point that even translators of poetry become redundant. I haven't translated any poetry since the release of ChatGPT. What's the point? In a few years a robot will be able to do that job instantly, better than I can. Not that I ever thought I'd make money translating poetry; it's a fun activity, but even a fun activity feels pointless when a robot can do it. I hate AI, by the way, for other reasons besides this one.
Glad to hear you are going to post it, and please let me know when there's a link available. (You can easily find an email for me online.)
Regarding AI, I don't think your sense of pointlessness is reasonable. I don't want a soulless computer to translate Dante for me, I want YOUR translation. Made by a particular flesh and blood human being, who has tried to imagine seeing what Dante saw, and who has a different idea from everyone else about how to do it. As long as the AI's refrain from killing us all, they can't take that from you.
(There might be grounds for pessimism about when an AI-saturated market will allow such work to be profitable, but that doesn't need to affect a fun hobby, or the communion between you and me and the poet. It only gets ruined if you allow it to get ruined.)
I don't know if it would have been a good idea to wait a year and polish it some more: on the one hand, the review would have benefited from some editing. On the other hand, I read all of it with interest and enjoyment, long as it was, and sometimes it's important to get something done and over with so you are free to work on something else. Also, that way you don't risk losing enthusiasm and momentum and leaving the thing half-finished.
It made me want and not want to read the Divine Comedy at the same time - want, for obvious reasons, and not want because it would have to be a translation, and the review made me aware of all the interesting stuff that will be lost. Anyway, good job on it.
Scott didn't write the Egypt’s Golden Couple review, a statement that I can utter with some confidence because I'm the one who wrote it. I am going to take your incorrect guess as high praise, though, so thank you for that, and for the rest of your positive feedback.
(In the past some people have complained that some reviewers try to imitate Scott's style and it's a bit cringeworthy. In which case it is unclear whether being mistaken for Scott is a good thing or not. Maybe it would be bad if the similarity were on purpose, which fortunately it isn't.)
Darn it. You even had to link to a Philip Glass composition, obvious Scott giveaway, obvious. Are you sure you're not some kind of split personality? There must have been a reason he was defensive about the whole multicore condition thing.
Wait does Scott like Philip Glass? I wouldn't have known. We've never met!
Yes, very good selection. I was particularly impressed by Making the Corps.
My favorite reviews that didn't make the final:
- Bad Therapy
https://docs.google.com/document/d/1AXmWgSbh_TFsoZuApSCSEoz57yn93CM5YYhtaO_s4W4/edit#heading=h.m04vosmgkbvl
- Battle Hymn of the Tiger Mother
https://docs.google.com/document/d/1AXmWgSbh_TFsoZuApSCSEoz57yn93CM5YYhtaO_s4W4/edit#heading=h.oqah0sxj20dc
- Determined: A Science of Life Without Free Will - Review 1 (review 2 was also good, but I found review 1 yet better)
https://docs.google.com/document/d/1AXmWgSbh_TFsoZuApSCSEoz57yn93CM5YYhtaO_s4W4/edit#heading=h.begryxgkrxn9
- Don't Make No Waves, Don't Back No Losers
https://docs.google.com/document/d/1AXmWgSbh_TFsoZuApSCSEoz57yn93CM5YYhtaO_s4W4/edit#heading=h.rcop7udzu787
- Normal Accidents
https://docs.google.com/document/d/1cp6iw5OEyDjnD_viZo-KL0Zv4jwQnMXtE4yIovfVAco/edit#heading=h.l1k6y58efpty
- The British Industrial Revolution in Global Perspective
https://docs.google.com/document/d/14qa47TJ_Vyerx4XNgTCIh7PUZ_TOgNcU_eHm5So_zo0/edit#heading=h.6t2he6kqixc1
- The Family That Couldn't Sleep
https://docs.google.com/document/d/14qa47TJ_Vyerx4XNgTCIh7PUZ_TOgNcU_eHm5So_zo0/edit#heading=h.gu9i87b1fctk
I really liked the Pooh one as well, surprised it didn't make the finals. Making the Corps was a really good idea but I think the review itself was a bit long for me. Worth a look though.
I'm part way through reading the Bondage of the Will review. The main issue I'm having with it is that the quotes from the book are confusingly phrased. I would have expected there would be a better translation available.
I think the problem is that a lot of freely available translations of old books are terrible 19th century paraphrases, and I can't imagine having to translate Luther's German into modern German, never mind modern English.
Looking at the review, yep, it seems to be taking a 19th century translation:
" Private letter from Luther to Nicolas Armsdoff, translated by Henry Cole, 1823, and published as Appendix II to Cole’s translation of Bondage"
And Luther was, em, idiosyncratic, let's say. I'm still chortling over that sermon on marriage where he bashes the Catholics for saying "if you knock off your spouse so you can marry that hottie, you can't get married to that hottie, sorry" (as marriage is a sacrament and would be done in a church). According to Luther, while you might be under the penalty for murder and that's okay because you done broke the law and the commandments, there's no reason you *can't* marry your honey-boo that you went and murdered your dusty old spouse for, God sure doesn't mind, look at David and Bathsheba.
The obvious rejoinder that "Bessie and Mike are not Bathsheba and David, Marty; King David was a very special case and got the punishment of his sins" doesn't seem to have occurred to him. Or if it did, he doesn't care, he's hell-bent on imposing marriage on everyone (the Catholics are bad for saying people can become monks and nuns, *everyone* has the duty to get married and have kids! get to it now!)
What a guy 😁 I now better understand the British Golden Age of poisoning, as divorce was way more scandalous than knocking off your spouse so you could marry your sidepiece, oh those Protestants!
My first encounter with 19th century translations was Cary's 1814 translation of "The Divine Comedy", the first time I ever read it when I was 15.
It's not a bad translation at all, but the guy can't help himself in places; being an Anglo-Irish clergyman and the son of same, he is so instinctively "no popery, no saints, no Mariolatry" that in places he goes for "this is a metaphor or allegory".
So, for instance, in Canto II of the Inferno, he footnotes the lines about
"In high heaven a blessed dame
Besides, who mourns with such effectual grief
That hindrance, which I send thee to remove,
That God’s stern judgment to her will inclines.
To Lucia calling, her she thus bespake:
“Now doth thy faithful servant need thy aid
And I commend him to thee.”
So that "the blessed dame" and "Lucia" become:
"A blessed dame.] The divine mercy.
Lucia.] The enlightening grace of heaven.
Three maids.] The divine mercy, Lucia, and Beatrice."
Well, no. The blessed dame is the Virgin Mary, and Lucia is St. Lucy (Dante's patron saint, according to some) 😁 I'm not a scholar, but I'm a Catholic like Dante, so I'm fairly sure when he mentions a saint, he means the saint and not an allegory for "the enlightening grace of heaven".
Longfellow's version is - well, he tried (at least for me). You also get an awful lot of 18th century poetic versions of Classical works which are honestly so difficult to wade through, that I think they nearly put me off poetry for life.
The opposite temptation is to do modern translations that are all stripped-down, or slangy, or which involve modern idioms that will of necessity age and become unintelligible to later generations.
EDIT: As for Luther, as I got older I made a decision to be nicer about the Reformers because, you know, ecumenics and we're all fellow-Christians. But now I am older still, and reading more of the originals, and now I go "No, I think I'll take a poke at Marty" 😁
Armchair psychoanalysis is a vice and not very helpful, but I think it's possible he was neurotic. So when he was satisfied with his own interpretation of something (because it soothed his raging anxiety about his personal salvation), he had no tolerance at all for disagreement. It was his way or the highway. Hence, he left religious orders and got married, to a former nun even, so that meant marriage was superior to taking vows.
He couldn't just leave it there, though. Marriage was no longer a sacrament, but any querying of who could marry or how or what, was taken as a direct attack on his correctness about "it was okay for us to nullify our vows and marry", so he went all the way into Crazytown. Sure, marry your affair partner after you've murdered your spouse to do so! Ignore the Pope, why he doesn't even think monks should get married!
Okay, so you can't really divorce your wife because you don't have cause except "I'm horny", um er oh yeah! Polygamy! Marry your mistress as well! Don't worry, that's okay because the Patriarchs in the Old Testament did it!
https://en.wikipedia.org/wiki/Philip_I,_Landgrave_of_Hesse#Bigamous_marriage
"Within a few weeks of his 1523 marriage to the unattractive and sickly Christine of Saxony, who was also alleged to be an immoderate drinker, Philip committed adultery; and as early as 1526 he began to consider the permissibility of bigamy.
...Philip was affected by Melanchthon's opinion concerning the case of Henry VIII, where the Reformer had proposed that the king's difficulty could be solved by his taking a second wife better than by his divorcing the first one.
...He accordingly proposed to marry the daughter of one of his sister's ladies-in-waiting, Margarethe von der Saale. While the landgrave had no scruples in this matter whatsoever, Margarethe was unwilling to take the step unless they had the approval of the theologians and the consent of the elector of Saxony, John Frederick I, and of Duke Maurice of Saxony. Philip easily gained his first wife's consent to the marriage. Bucer, who was strongly influenced by political arguments, was won over by the landgrave's threat to ally himself with the Emperor if he did not secure the consent of the theologians to the marriage, and the Wittenberg divines were worked upon by the plea of the prince's ethical necessity.
Thus the "secret advice of a confessor" was won from Luther and Melanchthon (on 10 December 1539), neither of them knowing that the bigamous wife had already been chosen. Bucer and Melanchthon were now summoned, without any reason given, to appear in Rotenburg an der Fulda, where, on 4 March 1540, Philip and Margarethe were united. The time was particularly inauspicious for any scandal affecting the Protestants, for the Emperor, who had rejected the Frankfort Respite, was about to invade Germany. A few weeks later, however, the whole matter was revealed by Philip's sister Elisabeth, and the scandal caused a painful reaction throughout Germany. Some of Philip's allies refused to serve under him, and Luther, under the plea that it was a matter of advice given in the confessional, refused to acknowledge his part in the marriage."
(Being fair to Luther, he didn't approve of either Philip's or Henry of England's marital solutions, but I think this was more down to the lingering influence of Catholicism on the formation of his attitudes).
At some point you stop being outraged and just go "Whatta guy"
It's an interesting non-progressive case of 'lived experience' and 'standpoint theory'. I wouldn't be surprised if the best translators of Dante, or indeed medievalists, happened to be Catholic--after all, that brings you that much closer to him.
I wonder how Cary handles Mary in the third canticle. Does she remain purely allegorical all the time?
That is the funny thing; as he moves through the work, he does sort of give up. He's happiest when it's plain "Mary can be tied in to the Gospel narrative" as with the marriage at Cana parts. I think he was unconsciously influenced by the poem as he went through it, so by the end he has no problem with her as queen of heaven 😀
So... what do you think about the bits of personal translation that were in my review?
I don't speak Italian but from the point of view of English, I thought they were very well-done.
I also liked On the Bondage of the Will as well as Making The Corps. (They're the only two you mentioned that I read, to be clear, I'm not saying anything about the others).
"An interesting exercise in taking seriously the ideas very disconnected from our own."
Is the emphasis here "taking seriously theological ideas in a community of mostly atheists", or "taking seriously 16th century ideas that Christians now don't care about", or a third meaning?
"taking seriously 16th century ideas that Christians now don't care about"
I mean, some of us still care, as those are the kind of theological differences that do make a huge difference in practice. They only "don't care about" them if they're going for the "God is, like, love, man, and being nice is all that matters and all dogs go to heaven" version of religion.
You might as well be saying "All those AI researchers talking about consciousness and qualia and can machines have it and what is it, isn't that just dumb academic hairsplitting that nobody cares about in the real world?"
Let me second my vigorous approval of The Divine Comedy review here.
I wasn't saying that Christians don't care, I was asking if that was what Tasty_Y was saying.
I am definitely in favour of caring about philosophical ideas of all kinds. But there are indeed many here who think reasoning about something is not worth doing if you don't accept the premises.
>But there are indeed many here who think reasoning about something is not worth doing if you don't accept the premises.
I'm curious. What examples do you have in mind for cases where it is worth reasoning about the consequences of premises that one is reasonably sure are false?
I tend to think that there are only quite restricted cases where that is interesting.
E.g. sci-fi world building - what would the world be like if all fission neutrons were delayed, so nuclear power would work, but nuclear weapons wouldn't.
Or reductio ad absurdum. If sqrt(2) were rational, one could derive a contradiction, so it can't be rational.
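(Purely for illustration, here is that standard reductio written out as a small LaTeX sketch; the symbols p, q, k are just my own labels for the usual argument, not anything from the thread.)

```latex
% Sketch: assume the premise we believe to be false and derive a contradiction.
% (Uses amsmath for align* and amssymb for \mathbb.)
\begin{align*}
&\text{Assume } \sqrt{2} = \tfrac{p}{q} \text{ with } p, q \in \mathbb{Z},\ q \neq 0,\ \gcd(p, q) = 1.\\
&\text{Then } p^2 = 2q^2, \text{ so } p^2 \text{ is even, hence } p \text{ is even: } p = 2k.\\
&\text{Then } 4k^2 = 2q^2, \text{ i.e. } q^2 = 2k^2, \text{ so } q \text{ is even as well.}\\
&\text{But then } 2 \mid \gcd(p, q), \text{ contradicting } \gcd(p, q) = 1. \text{ Hence } \sqrt{2} \notin \mathbb{Q}.
\end{align*}
```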
Do you have other cases in mind? If so, which?
I mean the first but the second was also on my mind.
Thanks for the recommendation, the Pooh review was excellent (I may check out some others).
Although there's a subtle thing there - I don't think Milne was trying to write a Homeric epic. I think he was trying to write a set of stories about stories, and the only stories you write stories about are the original epics.
Thanks for the recommendations.
Congrats to the finalists! Happy to see my personal fave The Pale King make the cut.
I wrote the review of The Pattern Seekers: How Autism Drives Human Invention. If anyone has feedback or contributions to the questions it raises, I've republished it on my blog and opened the comments section:
https://thedeepdish.org/weaponised-autism-as-the-font-of-human-creativity/
The writing's engaging and I enjoyed it. I would have liked to get Scott's opinion on the book's claims; shame it didn't make the finals.
I read 14 of the reviews, and this was one of two I rated 10/10.
I'm glad it is posted online so I can link to it! Will be rereading more closely, but for now here are the notes I jotted down with my reasoning for the score: "really good structure, critical of book while highlighting what the useful takeaways are. engaging writing style. reviewer expresses their own biases clearly, and what would make them change their mind."
I don't have deep wisdom to offer, but I found it a great review. I gave it 9/10, it would have been a worthy finalist to me.
Thanks for letting me know, I appreciate that.
It's a real pity there are so many reviews (and/or so few finalist slots), because there are so many submitted reviews I would have liked to see comment discussions on.
Full disclosure: I submitted a review that didn't make it. I'm disappointed much less for the glory and prizes than for the lost opportunity to see lots of comments on it. And furthermore, none of the reviews I read and rated highly made it either, and I really wanted to see comments on some of those.
*huge sigh* Oh well...
Right there with you. My review didn't make the cut, and my absolute favorite -- "Alphabetical Diaries" -- didn't either. If the writer of that review happens to be reading this, know that that was one of the most brilliant pieces I've read in a long time!
So, random one. What's with the trend against capitalisation of letters? It's increasingly common on twitter and other platforms to write only in lower case, including 'i' instead of 'I', and after full stops. Like, most things auto correct that, so you have to go out of your way to do it. What's going on?
Feminization
I have done this for about 20 years. Part of it is laziness (e.g. you do capitalization differently in different languages, and I don't want to think about it at all). Part of it is that I sometimes think it looks nicer.
"I" and the first letter of a sentence will always be capitalized though.
I'm in the same boat as Leo.
I will, however, often capitalize proper nouns more frequently than my peers. Which I've been told was a common habit a few centuries ago. But my own habit predates my learning of this. Rather, it feels like a natural mode of emphasis that's softer than italics.
I don't visit twitter. But if tweeters are eschewing capitalization despite autocorrect, I'd imagine that it's because it signals informality.
Maybe they're fans of e. e. cummings.
This practice has deeper cultural roots than people are recalling here; there was a lot of experimentation in writing during the 20th century, and things like using odd spellings, odd punctuation or all lower case were part of it. And then there's e. e. cummings, who did it with his own name.
Young People These Days, or, typing on phones and letting the conventions of texting bleed over into ordinary usage.
But most phones automatically capitalize I and capitalize the first letter of a sentence.
as someone who prefers to type without capital letters, and who uses a physical keyboard about 95% of the time i interact with others online, i can speak to this with some amount of direct insight. (usually on substack i force myself to capitalize, but i don't like it). for those of us who are really fast typists, it's quicker to eschew capitals, not just because of needing to involve the pinky of either hand in creating a letter, but because it introduces ERror that often necessitates laboriously backspacing through an entire word.
once you've tried typing this way (or even just tried reading much text typed this way) you notice that capitalization doesn't really do anything useful in english (when kerning is automatic, as online), and in fact the language looks better (to some eyes) without it.
now of course as a devotee of robin hanson, i'm aware that the real reasons anyone does anything are signaling, but the signal here might just partially be "i'm on a physical keyboard" (which due to the different kinds of typo possible leads to different outcomes that we may be self-conscious about).
going out of one's way using a touchscreen smart keyboard to eliminate punctuation would be a labor of love, perhaps done in imitation of e. e. cummings (without realizing that the all-lowercase choice was one a publishing-house editor made for him, and which he didn't especially love at first iirc).
the rest of your replies are just-so stories from people who haven't tried it and/or mostly type english with their thumbs.
I find it hard to take 'it's faster' at face value when it took you 5 paragraphs to express the meaning of two words + a contraction!
This may be the lowest effort and least impressive attempt to move the goalposts I've seen this month.
I think it's another instance of [right is the new left](https://slatestarcodex.com/2014/04/22/right-is-the-new-left/).
Like a rich person wearing a Balenciaga trash bag, a twitter journalist _obviously_ knows how to write proper english and is doing it intentionally. Because they don't want to be confused with the plebs.
"a twitter journalist _obviously_ knows how to write proper english"
Citation needed (you are talking about journalists, after all!) 😁
More seriously, the amount of stupid, basic, grammatical and spelling mistakes I see in online news articles, even the digital version of dead-tree media, makes me wince. Editing is a lost art, you just get the thing up fast and nobody proofreads.
Actually the worst offender is Sam Altman!
I was going to write "blue checks" and replaced with journalists to signal my rat credentials. Serves me right.
sama is even less likely to be confused with the plebs though, so the point stands!
That's how the Virtuous brag. They're so humble they don't capitalize the first person. They just had to act stupid to get your attention so they'd have the opportunity to remind you that they're special, and more important than you. If that doesn't work, they drop their pants.
For a while, my phone stopped autocorrecting "i" to "I" which was irritating. But it started again a few months later.
This reminds me of a specific instance of that: failing to capitalise "God" when it's used as a proper noun.
I am absolutely amazed at how few people get this right. Unless I'm misinformed, this is a simple rule: if it's a common noun (with an article before it) you use lower-case g, if it's a name or title you use a capital. Correct usage: "in the Old Testament, God is described as a jealous god".
And yet I see it wrong over and over. It *seems* like atheists are much more likely to drop the capital, which raises the question: do some people honestly think the capitalisation of God depends on whether you believe in God? What a stupid rule of grammar that would be, and no other word works like that.
The only other option is that they're deliberately violating the rules of grammar to be obnoxious. But these are often people who are otherwise perfectly polite and intelligent-sounding. So maybe I'm just imagining that it's atheists who do it more? Or maybe no-one understands correct usage but Christians are capitalising God for reverence reasons and accidentally being grammatically correct as a result?
What's going on here?
Oh, I could go on about this! I'm still old-fashioned enough that I capitalise the "M" in "Mass" (the religious service), even though usage now seems to be just refer to it as "mass".
Well, while that may help strengthen the joke about electrons, I'm still going for "it's the Mass, not just a common noun". Ditto with "Bible" and "God" and so forth; some do it conspicuously because they're non-Christian or are atheists, but it's also style guide stuff for the media in part because "no establishment of any religion, if we referred to one religion's holy book or deity with a capital but didn't capitalise others, we'd be privileging Christianity and that is bad".
That, at least, is how I understand the argument: "If you capitalise "God" when referring to the Christian deity, but refer to pagan deities such as Roman gods with the small "g", then you are being preferential and biased towards one particular faith. And since we're all secular here nowadays, that's not the done thing".
It might be an over-correction against capitalising God's pronouns, which is a reverence thing that it's reasonable for atheists not to do. Also, it's just a bit odd that "god" is both a proper and a common noun like that, and an atheist is more likely to be referring to something rather generic when talking about a god, not having such a specific idea in mind of what that god would be, while also picking up the habit of using it grammatically as a proper noun from the cultural influence of Christianity. While I agree that it's incorrect, I think it's an understandable mistake for these reasons.
While I obviously can't speak for everyone, as an atheist I tend to use God when specifically referring to the Christian God, and god when speaking more generally. But I also don't refer to the Christian God specifically very often, as most of my arguments generalize.
So I would say that there is insufficient evidence to convince me to believe in god, that I find the concept of god to be incoherent, and that the American Evangelical movement would disgust God if He existed. (These are given as examples rather than opening arguments)
The first two are not proper nouns. I suppose I should say "gods" in that case, but due to the overwhelming cultural influence of Christianity it *sounds wrong* to say gods there. But those statements apply equally to any god, from Allah to Zeus. The last one is a comment on a specific religious group, so I use the proper noun form to refer to the god they claim to believe in, which is creatively named "God".
>as most of my arguments generalize
Seconded.
If I'm having a discussion about gods, it is normally about an ensemble of possible gods, with some properties (relevant to the discussion) specified, and others not. It is closer to a sample drawn arbitrarily from an ensemble than to a proper name of a single entity.
Ah, that may explain a lot of it. Thanks.
Honestly, this has come to annoy me so much that I find it extremely distracting whenever religion is discussed and was waiting for an opportunity to bring it up without interrupting a substantive discussion.
But yeah, "to convince me to believe in god" is surely as incorrect as "to convince me to believe in ghost". I think you should just say gods, which also makes the philosophical meaning much clearer.
Or "a god" which I think makes it clearer, if referring to the Christian-influenced milieu; you probably aren't inclined to believe in Hindu deities either, but you're not specifically talking about the Tridev so you don't feel the need to say "gods".
This is interesting actually. In English, you always capitalise adjectives that denote nationality; in other European languages you don't.
I think we need to remember that this is not at all a new trend. It is in a sense older than autocorrect.
Because on a PC, it was - and still is - the low-effort way to type.
People were typing like that in ICQ and Skype (and probably on Twitter) from their PCs years before the iPhone and the smartphone revolution that followed, before the internet (and our lives at large) were ruined by the mobile-first approach.
Then everyone and their mom got a phone, but the association between casual internet speech and lack of capitalisation remained. And that's how we get the "manually decapitalize the first letter of every sentence you write" state of things.
Is that true though? Glancing at the bash.org archive, it's at least 50/50 between those who capitalise and those who don't.
Well, I never claimed that everyone did it. I didn't, for example, since I took pride in not being lazy with my typing (forgive my teenage-at-the-time self for some silly signalling, if you can).
But it was fairly common. In my experience, it is still common on discord, and in other chat-like apps.
Less common otherwise, but Twitter is a weird place, a cross between a chatroom and summoning demons by yelling into the abyss. No wonder some chat behaviours are common there, and that demons show up occasionally.
I've seen people say they do it deliberately out of a desire to seem more casual and not bothered about what they're replying to.
Like, look at me, I am not bothered except I bother enough to disable my autocorrect.
Doesn't that strike you as incredibly affected?
Oh yes, if you know they're doing it on purpose (which they almost certainly are), it makes them come across as *more* bothered than not. But it apparently works on some people. Same reason the same people get upset about full stops: they're not part of the "look how casual I am" aesthetic.
I wonder if it's that autocorrect increasingly works somewhat badly, so it either doesn't handle those cases or people turn it off? Or maybe people who aren't used to typing on computer keyboards get all lowercase because they're used to phones auto capitalizing for them.
Do people actually keep autocorrect on? It's one of the first things I turn off in a phone. When you misspell something, even quite badly, I find the keyboard is quite good at showing the intended word as one of the suggestions, which I then just click on. I sure wouldn't want it to change what I type on its own.
You're right, they don't. Still boggles my mind why anyone thinks that a keyboard that automatically changes your words is a good idea.
Have you noticed that Substack has direct messages? I recently got an automated anti-spam ban for sending one, so I want to make you all aware of how you can probably avoid this.
I follow Zvi Mowshowitz's blog. On June 8, for the first time, I DM'd him. I sent one link (https://www.reddit.com/r/slatestarcodex/comments/16wdxzk/whats_the_deal_with_subtle_poisons/k2wlpvb/), plus a few sentences on why he might find it interesting.
That's the only thing I did during that log-in. When I next logged in, Substack had auto-banned me for "Spam & Phishing". No email notification.
During the ban, my own never-used blog, and years of comments on other blogs like this, were all hidden.
Substack didn't respond to either the appeal I submitted through their form or the chatbot's promise to flag me to an employee. When I eventually emailed tos@substackinc.com, a Trust & Safety worker apologized for the auto-ban, unhid my blog, and restored my DM & comment abilities. They forgot to unhide my comments, but did so after I sent two more emails. All this took 4 days, not counting 2 days of being banned unaware.
For those who are, unlike me, actual writers: the sight of 'Publication Not Available' in place of your blog might scare away your paying subscribers. It would probably prevent new subscriptions, too. To avoid an auto-ban, I recommend not sending links in DMs.
By the way, some of you might want to read the comment that started this (https://www.reddit.com/r/slatestarcodex/comments/16wdxzk/whats_the_deal_with_subtle_poisons/k2wlpvb/). Gwern Branwen argues that research on potentially slow-building/mildly toxic substances mostly isn't worth our attention.
That was 9 months ago, and I still can't make up my mind about his take. So I sent it to Zvi, imagining he might see it as a good subject for analysis. (Scott could write a good post, too, but this is a busy year for him, so I don't want to suggest he do something additional.)
Yeah, it was an interesting comment. It is frustrating. Occasionally, there _is_ a poison with a delayed but _substantial_ risk (e.g. https://en.wikipedia.org/wiki/Radium_Girls , though the first death in that case happened within 5 years). Occasionally, as with cigarettes, the effects are so large and there are enough cases of people quitting and basically recovering that one can be reasonably sure of the effects without an RCT, but for most things ... good luck!
Does anyone have an opinion on the safety of Tylenol for infants (note that I am saying infants, not pregnant women) with respect to neurodevelopment beyond what I can learn from reading https://parentdata.org/can-take-tylenol-while-pregnant/ ?
The infant in question is teething and would probably want Tylenol daily.
Also, for teething pain, the things that worked really well and safely for my kid were:
1. Frozen waffles - let the baby chew cold squishy things.
2. Anbesol or another of the local dental anesthetics, preferably with novocaine rather than eugenol, which can sting a bit at first, though both would probably be okay.
3. Always favor local pain relief over systemic.
4. Consult a doula or a dentist who works with children.
I have a 10 month old with a congenital heart defect. He has had two open heart surgeries so far and spent about 3 months total in the hospital. He/We have almost constant contact with his medical team which are at a major US university with a small but top ranked children's hospital, including a Neurodevelopment team.
They regularly recommended Tylenol for things like teething with no concerns about development. He has been on Oxy and Clonidine and a number of other similar drugs, and this same team doesn't have any concerns about long-term neurodevelopment due to these meds.
Of course the disclaimer that this team is biased to confirm their recommendations and of course to listen to your pediatrician, etc. etc.
Out of curiosity, how did this question arise? Looking it up, I see one paper about it, and it strikes me as more along the lines of "MMR vaccines cause autism", as it goes on about "susceptible babies" (and what makes a baby susceptible?) and argues that paracetamol use in early stages is responsible for 90% of ASD cases.
https://www.mdpi.com/2227-9067/11/1/44
" We conclude that the very early postpartum period poses the greatest risk for acetaminophen-induced ASD, and that nearly ubiquitous use of acetaminophen during early development could conceivably be responsible for the induction in the vast majority, perhaps 90% or more, of all cases of ASD. "
That makes me go "Hmmm", not to mention "And what the fudge caused autism before people were routinely taking paracetamol?"
EDIT: Ah fudge me, that graph! "Spike in ASD induction beginning with the cutting of the umbilical cord"???? Are they taking the piss or what, over there in the "Special Issue Neurodevelopmental Disorders in Pediatrics". If you can induce autism from within minutes of the child being delivered, I wouldn't worry about paracetamol for teething!
Honestly, I would not worry too much. Unless the infant is very young and you're dosing them up way too much for way too long, I don't think the small risk of "it maybe might do something" outweighs "the child is in pain and needs relief".
I'm pretty sure my mother wasn't giving me paracetamol as a child (Junior Disprin was the thing, I still remember the orange flavouring), and I was a baby during the time when gripe water still had alcohol in it. If I am ASD or on the spectrum, I come by it honestly - genetics! All home-grown, none of your new-fangled drug-induced fancy disorders!
It seems to be the medication of choice for parents here (usually in the form of Calpol https://www.calpol.ie/our-products). Health service advice on giving it to children:
https://www2.hse.ie/conditions/paracetamol-for-children/
I think there's more concern about aspirin and Reye's Syndrome for infants, but like all medicines, naturally, don't over-use it and stick to recommended dosages.
You could try alternating paracetamol and ibuprofen for managing teething pain (i.e. paracetamol one day, ibuprofen the other). More advice on managing teething:
https://www2.hse.ie/babies-children/parenting-advice/caring-for-a-new-baby/teething-gums/
"How to help your teething baby
It's upsetting to see your baby in discomfort from teething. Comforting and playing with them will help distract them.
Tips for helping a teething baby
Try giving your baby something to chew on, such as a cool teething ring.
Massage your baby's sore gums with a sugar-free teething gel.
Use mild sugar-free pain relief if your baby wakes at night and is irritable.
Give them cold water to drink - this helps to keep babies hydrated and may also soothe their gums.
Give them healthy foods to chew on, such as pieces of carrot or apple, or breadsticks - only do this if they're 6 months or older.
Stay close to your baby when they're eating in case they choke.
Teething rings
Chewing on a teething ring can help soothe a baby’s gums as well as distract them from the pain.
Use teething rings that are big enough so your child will not choke on them. Keep a spare clean teething ring in the fridge.
Never tie a teething ring around a baby’s neck. This could strangle them.
Check the product instructions on teething ring hygiene and how long to cool the ring for.
Never put the ring in the freezer as the temperature could damage your baby’s gums.
You can also use a cold wet facecloth for a baby to chew on. Make sure the facecloth is clean.
Teething gels and pain relief
Sugar-free teething gels are available over the counter from the pharmacy. They contain a mild local anaesthetic which helps numb any pain. These are for babies older than 4 months.
If your baby is still in discomfort after using teething gels, consider giving them sugar-free paracetamol or ibuprofen medicine for babies. Do not use ibuprofen medicines if your baby is under the age of 3 months.
Contact your GP or pharmacist for information on the safe use of gels and pain relief."
It's an unhappy time, as the child is in pain, but we all had to go through it.
Tylenol (aka acetaminophen or paracetamol) has some rather toxic metabolites and I would avoid it in favor of ibuprofen, which works just as well but is much less toxic.
EDIT: Sovereigness points out below that long-term ibuprofen use increases the risk of gastrointestinal bleeding, so it's probably a worse choice than acetaminophen. I still think that for short-term use, ibuprofen is better than acetaminophen.
I think this is one of those things where what we mostly know is a matter of opinion, greatly influenced by advertising and highly legible anecdote. We do know that in adults, N-acetylcysteine plus glycine is hepatoprotective against Tylenol poisoning.
I also have a personal opinion that the use of aspirin in children and young infants needs to be revisited as a research topic, because it was largely based off a couple of notorious incidents involving Reye's Syndrome during the 1970s, and that syndrome is associated with fevers in children with or without medication and apparently has a higher incidence when you give any antipyretic. Though I don't wish anybody misery, following the general principle that symptomatic treatment interferes with immune and repair responses, I never treated my child with NSAIDs or Tylenol until his fever reached 103. Our pediatrician said that 103.5 is his treatment line, and he was a 50-year-old head of family medicine at the University of Iowa's hospitals and clinics.
I honestly think that you want an experienced, highly qualified pediatrician and should just defer to their judgment. It's not like we can't participate in our own medical decision making, but this is one of those questions that's really muddy. Voice your concerns and both sides of the argument, and then go with their best judgment. Medical decisions involving your own child are really scary.
Not an infant, but I find ibuprofen does nothing for me. Most effective is aspirin, but my stomach doesn't like it. Paracetamol is next best, and ibuprofen is about as good as water for my aches and pains.
Everyone is slightly different.
I feel weird disagreeing strongly with metacelsus but I have to: daily ibuprofen consumption is a really bad idea, especially for an infant. You can very plausibly cause an ulcer. For prolonged regular use, ibuprofen is actually quite bad.
Acetaminophen does have some metabolites that are potentially liver-toxic over a threshold, and I'm sure that threshold is lower in infants, and I strongly suspect that it is not well studied whether there are small-and-hard-to-detect developmental problems caused by them. So I'm not saying necessarily yes to Tylenol, but definitely no to ibuprofen.
I'm open to changing my mind on this, can you point me to any data about the effects of regular ibuprofen use?
Mostly it's associated with GI issues, particularly bleeding. The mechanism of action is well understood: ibuprofen impairs stomach mucus production. If a doctor recommends prolonged use of NSAIDs, particularly ibuprofen and some others, they will prescribe a proton pump inhibitor along with it.
https://pubmed.ncbi.nlm.nih.gov/9715832/
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3158445/
I did only a very brief search for papers so I didn't include anything specific to infants but I'd tend to assume infant GI systems are more risky.
Finally, this is anecdata, but I did end up with an NSAID-associated GI bleed, and as one of the papers mentions it's really rough; there are very few warning signs until you vasovagal on a toilet or something.
OK, after reviewing those, I agree that ibuprofen is worse than acetaminophen for long-term use. I still think it's better than acetaminophen for short-term use though. I have updated my top-level comment.
Why is ibuprofen more popular than the longer-lasting naproxen? Is it just first-mover advantage?
I was told by my primary care physician that she prescribes high-dose naproxen for acute conditions but does not recommend it for frequent use because of the risk of gastrointestinal bleeding.
Fact-checking this now (because I never really thought much about it), it looks like Mayo Clinic does at least list GI bleeding as a possible side effect, especially in people with certain risk factors. I lack the medical expertise to judge whether this risk is actually worth not regularly using naproxen.
I can't speak for everyone else, but ibuprofen feels stronger due to its wider permissible dosage range--if you take the low end of the recommended range, you can (if needed) double your daily dose and still be within the range printed on the label. That's great if you are having a worse day for some reason. Naproxen has no headroom, at least according to the label. Imagine going in for a session of physiotherapy, knowing you can take an additional ibuprofen afterwards if you need to. With naproxen, you know you're going to suffer any aggravation for the rest of the day.
Ibuprofen is fast-acting, short-lasting and relatively weak. These kinds of drugs tend to have less side effects than stronger/longer-lasting drugs in general, and this case is no exception. Naproxen is better for adults with chronic conditions.
I strongly recommend against long term use of naproxen — my own experience of ~1 month of daily use (at max recommended daily dosage) ended with tinnitus and a need to empty my full bladder roughly every two hours, presumably because my kidneys were panicking at something. Fortunately those symptoms resolved about a week after discontinuing it (and prompted me to properly address the underlying condition). A kindly elderly neighbor also ended up with a severe GI bleed after long term use of naproxen (she was already deaf so may not have noticed the tinnitus).
I would use topical anesthetics & chewing toys whenever possible. It's just intrinsically lower risk. But we also used systemic pain relief when necessary, and it depends a lot on the child & what they do well with. Even if there are some negative effects, there is probably a dose-dependent effect, so using it sparingly is, again, intrinsically safer. Keep in mind that the optimal amount of anything is rarely zero; my goal is usually to reduce the pain to a level where they can either sleep or do things. Btw, I also recommend ibuprofen if possible since its effect is more limited, but since paracetamol is generally the stronger drug for pain relief you should always have both at hand.
Most studies on the topic I looked up for our first one struggled with selection effects - for cross-family comparisons, the kind of parent that just gives kids lots of painkillers every time they scream a little probably has worse outcomes than average, and even for in-family comparisons some kids simply have more issues than others, which will show up both as more screaming (and thus more painkillers) and as more bad outcomes later in life. It's difficult to completely control for these. Based on the research, I'd rule out very strong negative effects, especially at minor doses, but everything else is probably plausible.
With our children, we were able to minimize use of pain relievers with chew toys and frozen soothers.
Mostly they liked to chew on my thumb. Constant pressure gave the most relief while they were cutting teeth. If you're willing to hold them for an hour massaging the gums, that could go a long way toward relief for your little one. Night time is a bit more challenging.
It should be noted that unlike ibuprofen, which reduces pain by reducing inflammation, Tylenol is a psychoactive drug. So I'd worry about the dependence/tolerance/withdrawal that comes from taking any drug daily.
Source: I'm a PhD biologist with a secondary research interest in psychoactive medication.
What does it do? I've taken a pill a bunch of times and never noticed an effect.
I normally get a mild sleepy/happy feeling along with the pain relief.
In Russia, paracetamol is considered safe from 3 months of age, with a daily dose of 60-120 mg from 3 months to a year and 120-240 mg from 1 year to 5 years.
I gave it to my son sometimes, though I mostly used topical anesthetics. I didn't notice any detrimental effect, though I'm not sure how I would notice.
Thanks, that's interesting! Here in the US it's widely considered safe for infants too, there have just been a few recent journal articles saying that it might not be, which left me spooked.
The joys of first-time parenting, you are going to be spooked by a lot of things. Take comfort in the fact that you live in the USA, and if they sued over talcum powder, then if paracetamol did anything bad they'd be taking a court case over it.
Maybe somebody will eventually, but right now don't be too concerned that giving your child an occasional small dose of a painkiller formulated for infants is going to turn them into a NYT journalist.
Based on this review https://link.springer.com/article/10.1007/s00431-022-04407-w it seems there weren't proper studies on neurodevelopment with long-term follow-up, so there's no hard data either way.
All studies were correlational, which doesn't inspire much confidence.
I have to love that paper:
"the chances of being in a lower development category increased with increasing periods of prenatal paracetamol use but not prenatal opioid use"
So if Pregnant Mrs. Scott had been taking Old Mother Machree's Knock'EmOut Tonic by the gallon while pregnant, no bother, but oh no she took 500mg of paracetamol? Call the CPS!
"For example, a startling twofold greater incidence of infantile autism in circumcised boys compared to non-circumcised boys can be readily explained by potentially negative impacts of paracetamol exposure during and following the circumcision procedure"
Look, I got nothing there. My jaw is dropped and I'm trying to scoop it off the floor. I can just see the anti-circumcision activists pouncing on this one: "circumcision causes autism!"
EDIT: Reminder of old-timey medicine for fussy children - morphine and alcohol.
https://www.pharmacytimes.com/view/pharmacys-past-the-soothing-syrup-known-for-causing-death-in-thousands-of-babies-
"Charlotte N. Winslow, a pediatric nurse, originally created Mrs. Winslow’s Soothing Syrup as a cure-all for fussy babies. The syrup was first produced in 1849 by her son-in-law, Jeremiah Curtis, and his partner Benjamin Perkins, in Bangor, Maine. It was widely marketed in North America and the United Kingdom.
Mrs. Winslow’s Soothing Syrup was known as a patent medication (this term often refers to a product that was marketed in the United States during this time but typically did not prove efficacy or safety). The concoction was used for babies who were crying, teething, or had dysentery, for which the opioid effect of the syrup caused constipation, to treat the diarrhea.
The syrup contained morphine 65 mg per ounce, as well as alcohol. One teaspoonful had the morphine content equivalent to 20 drops of laudanum (opium tincture); and it was recommended that babies 6 months old receive no more than 2-3 drops of laudanum.
One teaspoonful contained enough morphine to kill the average child. Many babies went to sleep after taking the medicine and never woke up again, leading to the syrup's nickname: the baby killer.
Mrs. Winslow’s Soothing Syrup was hugely popular. In an 1868 court summary, Curtis reported selling more than 1.5 million bottles of the remedy annually"
I would be interested in knowing if you or anyone else comes to any conclusions on the topic. I regularly use it for my young kids and have always been under the impression, until now, that it is totally safe (apart from the liver toxicity stuff).
Too much of anything is bad, of course, but used as needed and only when needed is not going to burn the house down.