You are suggesting. Where do you have to be in life to get something different? I understand and then it doesn't matter. I just wrote something without thinking about any of it. I never want to say anything in person or typing. I never wanted to sound like anyone because I am only what I think. I could sit around and beat myself up or make myself feel bad because someone had a thought. When it comes to needing to be aware of what I'm going to say. Words on earth will always be followed with more words. Say something good to someone today or don't. It's like asking me if a tree falls in the woods.....? I would stop you right there and tell you I wasn't there, so why would you celebrate sense with something so common.
$10 nope, $1 maybe. Most of that maybe is because I don’t like subscriptions. If it were possible to get into the epistemic state where I believed that paying $100 would mean I only had to listen to single-digit podcast ads in the next decade, I would gladly pay it, but it seems unlikely I could reach that epistemic state even if the product actually existed.
Per this idea they in fact are (in the sense of having qualia subjectively experienced by their constituent matter in a manner determined by and inextricably linked to its current physical state), they just aren't self-aware. Arguing against the proposition that everything is (to some extremely primitive degree) conscious by rhetorically asking "why isn't x conscious?" is begging the question, wouldn't you agree?
Does anyone know how, exactly, the use of semaglutide as a weight loss drug got popularized? Scott started writing his article on it before Elon Musk endorsed it, so it isn't Musk, first and foremost.
Perhaps imprecise phrasing on my part. I meant it as "I thought you might be interested to know that...", considering that the article in question is in a field of study a bit far removed from the remit of this blog.
Historically, how many places have actively advertised for immigrants? I know that Texas did when it was part of Spain and Mexico and when it was an independent country (not so much recently tho). What other places have?
I don't know if this counts, but some of the railway companies in the US advertised to (and subsidized) immigrants from England and Scandinavia to move to the Plains states and be given farmland near their railroad lines.
They wanted to ensure there was adequate demand along the newly constructed lines, to help them become more profitable more quickly.
Back in the 19th century, Canadian authorities were very keen to settle the prairies with Europeans so the area wouldn't be de facto claimed by the US. They even hired agents in Europe who persuaded whole villages to emigrate to the New World en masse. I expect in the course of this campaign they published plenty of advertisements.
Rhodesia after WWII also went out of its way to attract white settlers.
Doug Saunders' *Maximum Canada*, which strongly influenced Matt Yglesias' *One Billion Americans*, goes into a lot of detail about Canada's ambivalent attitude towards immigration in the 19th century.
I've been talking about the importance of teaching adult literacy for a long time, and it seems like I'm talking into a blind spot, as though people generally believe that if you didn't learn how to read in school, it's hopeless.
I wanted to say hello. I have been traveling in my whole life to get here. Impartial, I would like to place 5% of the blame for my being late on opinions. If I had a dream it would be. May opinion never get between us. We are all faced with the daunting task of coding and decoding information in the real world experience. I believe the unhealthy opinion is born when we are so quick to want to say something. Not speaking toward any one here. Just speaking for my life. When I take information from someone I fuse it or reject it by what I am receiving from the person. I tend to not want to say anything. Except I always want to engage in the opinion when it comes to just. That right there needs to be destroyed. I can't act that way. I seem to find myself in somewhat of what Top Gun is about. Just as dangerous due to I'm the pilot and the aircraft. If you crash there is no coming back. I understand that I have a disorder when it is coming from a crowd that doesn't see it that way. Wow it's a wonder there are not as many names for disorders as there are people. Wait they give us names at birth. Since we run out of names the number we have is our disorder's identification. I truly try to be a better me than the person I was yesterday. I tried to stay the same but as each day I pass through I can't be like anything that who I believe I want to be. I'm guessing that is why we say love by the code. Love is only a word. How will you know it if all you do is hear it. If I say die do you die? I say live it goes off and dies. I just do what I can when I see it. I speak positive and I try to be respectful by smiling and staying quiet when it comes to those who are people to take from. May everyone live one way or other. See you on the battlefield of. Fight well. Whoever you may be. Be it. So that we may always see where you are.
You know, Carlos, it's really quite possible that this person does have schizophrenia. It's not a rare disorder. If it's a bot post, what would be the point of sending the bot here? The post is not at all offensive, and is way too disorganized to be an effort to convince us of anything. And I don't see any jokes or references to ACX topics, which if present would probably indicate that someone is pulling our leg. So I'd say your post is in fact offensive. Sort of like "hey, buddy, did you get your head shaved or are you having chemo?"
Well, I wanted to engage with it due to its bizarreness and couldn't think of anything else. I don't think there's a polite way to ask someone if they have schizophrenia, and I am curious.
Do you believe there is little time left before transformative AI? Are you willing to put your money where your mouth is? Apparently, you can win big if you do:
because markets are not expecting transformative AI any time soon, meaning there are absolutely massive inefficiencies to exploit if AI timelines are as short as some are predicting. Some are even expecting a phase of explosive GDP growth (30% per year). I don't think I buy that, because it seems to me there is too much inertia and complexity for the market to transform that quickly in response to any tech, but that's just an uneducated intuition. I may throw some money at the funds indicated in the article, if only to save the world: if I win on those, they would be the first successful investments I ever made! The odds of it panning out strike me as minuscule, so the fact that I'm considering investing like this maybe indicates the reasoning in that article (and by extension, that of the AI safety community) is shoddy.
Can there really be heaps of gold lying on the ground right now? What do you think?
Are there analogous eras when a new invention was 1) fascinating to those paying attention, but error-prone, unrefined, and not widely used; and 2) on the verge of changing the economy, as we know from hindsight?
If so, you could compare with financial markets at that point in time.
In recent living memory: personal computing and the internet. You could have made a lot of money buying companies like Microsoft at the right time. On the other hand, there was the dot-com bubble, where you could have lost a lot of money buying the wrong companies.
The difficulty is that "changing the economy" is step (3). There is a step (2), "some companies succeed at commercializing the invention and some others try and fail, stocks go up and down while people try to figure out which companies are which".
The EA forum post linked above cuts through the difficulty of picking the right stocks by suggesting that one should bet on the real interest rate instead. I am slightly unconvinced by their plots because (a) the interest rate data they have goes back only to the early 1990s (real) or 1950s (nominal), so it misses some of the big transformative changes that could guide our intuition, and more importantly,
(b) in my layman's understanding, interest rates and GDP growth correlate in quite complicated ways, so plotting one against the other may be less informative than one would think.
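For what it's worth, here is a minimal sketch of how one could eyeball (b) oneself, assuming Python with pandas and pandas_datareader installed. The FRED series IDs (GDPC1 for real GDP, DFII10 for the 10-year TIPS yield) and the choice of a one-year lead are my own illustration, not anything taken from the linked post:

```python
# Rough sketch (my own, not from the linked post): how correlated are
# real interest rates and GDP growth, contemporaneously and at a lead?
import pandas as pd
from pandas_datareader import data as pdr

start, end = "2003-01-01", "2023-01-01"  # TIPS yields only go back to 2003

gdp = pdr.DataReader("GDPC1", "fred", start, end)    # quarterly real GDP
rate = pdr.DataReader("DFII10", "fred", start, end)  # daily 10y real yield

growth = gdp["GDPC1"].pct_change(4) * 100            # year-over-year growth, %
rate_q = rate["DFII10"].resample("QS").mean()        # average yield per quarter

df = pd.concat([growth, rate_q], axis=1).dropna()
df.columns = ["gdp_growth_yoy", "real_rate"]

# Contemporaneous correlation...
print(df.corr())
# ...and with the rate leading growth by a year, since market anticipation
# of future growth should show up at a lead rather than contemporaneously.
print(df["real_rate"].corr(df["gdp_growth_yoy"].shift(-4)))
```

If the number moves around a lot depending on the window and the lead you pick, that would support the worry that a single rate-vs-growth plot is less informative than it looks.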
I have a friend who's suffering from depersonalization / derealization disorder (DPD), the persistent sense that oneself / the external world is not real. DPD seems to be inherently poorly understood, and I'd be grateful for advice from anyone who has successfully treated or cured this condition, or could recommend a therapist in the Boston area.
Here's what I've got so far in terms of a bro science treatment protocol. I don't think there are any miracle cures or One Weird Tricks, but it seems like there are many bits of applied common sense that could plausibly help and are unlikely to hurt:
- Achieve brilliance in the basics of sleep, diet and especially exercise.
- Proactively limit and manage stress.
- Go outside, get as much sunlight as you can, quite literally touch grass.
- Engage in physical hobbies such as crafts, gardening and the like, that involve a lot of multisensory integration and not a lot of high-level thinking.
- Socialize IRL, especially in comfortable, low-stakes social situations.
- Limit screen time.
- Keep a journal of symptoms, noting what aggravates or ameliorates them, but don't otherwise obsess over them (spend, say, 5 min/day on journaling).
- Avoid sitting alone in your room pondering the nature of reality or otherwise ruminating.
- Try reciting common-sense mantras / affirmations ("The world is real. My name is X Y. I am sitting in a room right now" &c) if helpful but do not obsess over them.
- Engage moderately in your religious / spiritual tradition if applicable but do not attempt any week-long kundalini benders or whatever.
- Check for and address feelings of inadequacy / excessive self-criticism / low self-worth through CBT, talk therapy, or similar. (For some reason this seems to be a common comorbidity of DPD.)
It seems that episodes of DPD often resolve spontaneously, but I think it's worth trying to resolve it as quickly as possible and limit the chances of future relapses. Thanks in advance for any breadcrumbs or advice.
DPDR is *very* unpleasant. I had it myself when I was an undergrad, after a bad drug experience. I did not seek any treatment, in fact for some reason did not even think of it as an illness, and it eventually faded away after about 6 months. I'm a psychologist who treats OCD. I think DPDR is a form of OCD, or at least a close cousin of it. Somebody with regular OCD might check that their door is locked over and over again -- people with DPDR keep checking over and over again to make sure they and the stuff around them look and feel "real." Of course doing that makes it feel less real, because they're staring at their hands, at the picture on the wall, etc., looking for a feeling of realness -- which is a weird activity that makes everything you're looking at seem sort of arbitrary and peculiar. It's like saying a certain word, say "mosquito," over and over til it sounds like an arbitrary sequence of sounds, instead of like a familiar word. I think what keeps DPDR going is that the person harbors a belief that if they did not do this perpetual checking for realness, things would feel even *less* real, and the idea of them feeling any less real is terrifying.
The best approach to dismantling the mental checking is to do less and less of it. When you do, you discover that things don't feel less real because you're not slaving away at trying to make them feel real. However, it's not possible to simply make yourself not think about realness -- it's really not possible to make yourself not think about anything that you crave to think about. What you can do, though, is spend time doing things that capture your attention so much that there's not much room left in your mind to obsess about realness. Something like skiing, or any thrill sport or really any active sport, is excellent for capturing attention. So is dancing. Outside the realm of vigorous physical activity, what captures somebody's attention depends more on their personality, but there is almost no solitary activity that is likely to work. Here is a random scattering of things that are pretty engaging for people who have a taste for them: tutoring a small child; helping a friend move or repaint their kitchen or put together Ikea furniture; clicker training your cat; gardening; playing music or singing with a group; fancy, complicated cooking.
I noticed in the comments here somebody recommended the Reddit sub for DPDR. I actually do not recommend it. In online forums for people with health problems, people with severe and/or incurable cases are way overrepresented. There is a sprinkling of people in the process of getting better, and a very few people who have recovered and are sticking around to help others, but the reader's overall impression is likely to be that once you get this illness you are stuck with it for life. I'd recommend instead that you or your friend do a Google search for stories of recovery from DPDR. I'm sure there are some out there.
If your friend decides to see a therapist, I'd recommend looking for one who describes themselves as a specialist in either OCD or DPDR, and says they use CBT (using other approaches in addition to CBT is fine, but CBT should be on the list). You can find OCD specialists at iocdf.org. Boston is probably the best town in the country for finding OCD specialists, because the OCD Institute is there. There are many therapists there with lots of good training and lots of experience with OCD and related disorders. The only bad thing about the Boston psych scene is that most specialists in private practice do not take insurance. If your friend has a kind of insurance that allows them to see what's called an "out of network" provider, they can probably get reimbursed by insurance for about half of what they've paid. If your friend doesn't have that kind of insurance, it's worth coughing up the cash if they possibly can. Treatment of DPDR is not a long process -- it should be doable in 3 months or so. I do not think that having DPDR is grounds for believing somebody needs a complete psychic overhaul via years of therapy. It's an anxiety loop that the human mind can get caught in, and treatment that focuses on breaking the loop, without searching at length for some reason why the person was vulnerable to it, works well. There's a therapist in Lexington named Jim Vermilyea whom I recommend highly. He's in practice with some other people who are probably also very good, because he wouldn't have let them join his practice if they weren't.
I'm sure there is a recommended drug treatment for DPDR, probably some SSRI. However, SSRIs aren't magic bullets, despite what the drug companies would like us to think, and they often interfere with sexual pleasure and cause weight gain, so there's definitely a downside to taking that road.
I am so glad you chimed in here. I was especially concerned seeing a subreddit recommended, since visiting mental health forums is usually a terrible idea (for the reasons you mentioned). Treatment for OCD can be so counterintuitive and a lot of things that seem like common sense will just make it worse.
Yeah, I'm in a weird spot where I haven't actually had DP/DR, but it's one of my OCD themes. Just reading the articles on that site helped me better understand what it actually is and helped me come to acceptance with it.
There’s a DPDR subreddit, which might be of some help.
A lot of New Agey traditions have things like "grounding exercises" to help counterbalance the weird mental states their other techniques conjure: standing barefoot in the grass, naming the things around you, exercise, breath techniques, etc. The kundalini subreddit has some good examples in its wiki.
As I understand it, this is largely an anxiety symptom and has some elements in common with OCD. This site is a good source of info: https://www.dpmanual.com
Do you know of any religions, philosophies, or ideologies that correlate (positively or negatively) with DPD?
My guess was that DPD might correlate positively with religions which teach "philosophical realism", the belief that the words we use should refer unambiguously to discrete entities (possibly material, but often to an atomic spiritual essence of an entire named kind of material thing, as in ancient Greek myth, Aristotelianism, or many Native North American myths), each of which has a clear and firm boundary or definition. Some teach that these discrete real entities exist (e.g., Christianity); some teach that they don't (Hinduism, Buddhism).
Buddhism in particular seems to teach acceptance of DPD as its core doctrine.
People who believe in philosophical realism in the modern world should logically either deny that realism, embrace something like the Buddhist concept of emptiness, or conclude that the external world isn't real. So possibly this isn't always a mental disease, but can be caused by having the mental acuity to actually believe your metaphysical "beliefs", or to comprehend their consequences.
Certain cultural, religious, or family behaviors: some people may develop DPD due to practices that emphasize reliance on authority.
To answer my own question, Sartre's famous novel *Nausea*, which is often cited as the best existing explanation of existentialism, sounds like an extended description of DPDR. For instance, read this famous passage:
<<<
Black? I felt the word deflating, emptied of meaning with extraordinary rapidity. Black? The root was not black, there was no black on this piece of wood—there was . . . something else: black, like the circle, did not exist. I looked at the root: was it more than black or almost black? ... I had already scrutinized innumerable objects, with deep uneasiness. I had already tried—vainly—to think something about them: and I had already felt their cold, inert qualities elude me, slip through my fingers. ... And the hand of the Self-Taught Man; I held it and shook it one day in the library and then I had the feeling that it wasn’t quite a hand. I had thought of a great white worm, but that wasn’t it either. And the suspicious transparency of the glass of beer in the Café Mably. Suspicious: that’s what they were, the sounds, the smells, the tastes. When they ran quickly under your nose like startled hares and you didn’t pay too much attention, you might believe them to be simple and reassuring, you might believe that there was real blue in the world, real red, a real perfume of almonds or violets. But as soon as you held on to them for an instant, this feeling of comfort and security gave way to a deep uneasiness: colours, tastes, and smells were never real, never themselves and nothing but themselves. The simplest, most indefinable quality had too much content, in relation to itself, in its heart. That black against my foot, it didn’t look like black, but rather the confused effort to imagine black by someone who had never seen black and who wouldn’t know how to stop, who would have imagined an ambiguous being beyond colours. It looked like a colour, but also . . . like a bruise or a secretion, like an oozing—and something else, an odour, for example, it melted into the odour of wet earth, warm, moist wood, into a black odour that spread like varnish over this sensitive wood, in a flavour of chewed, sweet fibre. I did not simply see this black: sight is an abstract invention, a simplified idea, one of man’s ideas. That black, amorphous, weakly presence, far surpassed sight, smell and taste. But this richness was lost in confusion and finally was no more because it was too much.
... The essential thing is contingency. I mean that one cannot define existence as necessity [presumably a reference, but to whom?]. To exist is simply to be there; those who exist let themselves be encountered, but you can never deduce anything from them. I believe there are people who have understood this. Only they tried to overcome this contingency by inventing a necessary, causal being [God]. But no necessary being can explain existence: contingency is not a delusion, a probability which can be dissipated; it is the absolute, consequently, the perfect free gift. All is free, this park, this city and myself. When you realize that, it turns your heart upside down and everything begins to float,
– (Sartre 1938, transl. Lloyd Alexander 1949, the third Monday, 6:00 pm; from Google's scan of the New Editions 2013 printing, which lacks page numbers)
>>>
Sartre describes his nausea as being due to realizing his own "contingency", which is philosopher-speak meaning that he wasn't necessary to the universe--that it would have been possible for him never to have been. In Platonist philosophy, this means that he isn't real, as every Real thing is eternal, not temporal and contingent.
Sartre probably learned to consider temporal existence unreal from Hegel, though perhaps not directly. Hegel wrote in *The Science of Logic*, "The idealism of philosophy consists in nothing else than in the recognition that the finite is not truly an existent" -- that which exists only temporarily was never real.
Yeah, really agree. I read Nausea after I'd had my own 6-month episode of DPDR, and not only did I recognize right away that Nausea was about the same kind of stuff, but I actually found Nausea quite disturbing to read. I felt afraid it was going to trigger another episode of DPDR. It didn't, though.
I had much the same experience as you as an undergrad (drugs exacerbating latent DPDR / existential vertigo); I couldn't put Nausea down even though I thought it might be an infohazard.
Camus' Myth of Sisyphus was the antidote for me, at the time.
It seems to me that DPDR is one link in a giant web of problems induced by Western philosophy. It begins by taking mistakes of Plato as foundational. Foundationalism itself is one of those mistakes we get from Plato--the belief that you need to start with some certain truth, and then build on it deductively.
In foundationalism, the foundations can never be questioned. When the foundations are wrong, as in Plato, those errors can never be fixed. Instead, they propagate through belief networks. The initial error about X, E(X), leads to some obvious contradiction further down the line involving Y; but this can't be resolved by questioning E(X), so the contradiction is resolved by instead believing something wrong about Y, E(Y). This in turn leads to a contradiction with Z, and a compensating erroneous belief E(Z). So we see Western philosophers performing worse than random at even easy questions, like, Does the material world exist? Is life desirable? Is pain good? Is pleasure bad? Answer one of these wrong, and you'll likely answer the others wrong as well.
In this case, we begin with Plato's assertion that the Real consists of pure, eternal, transcendental, absolute Forms and Truths. This mistake leads to the belief that our lives ought to have some transcendental "purpose" or "meaning" derived from God--the second mistake.
The second mistake causes us great dissatisfaction with the messiness of reality, a distaste for life, and sometimes even a feeling of ghostliness--DPDR.
To get past DPDR, Camus proposes in the Myth of Sisyphus that we must acknowledge the absurdity of our lives. This seems to me to be the third mistake: the wrong belief that we should live with contradictions. We make this third mistake to protect us from the consequences of the second mistake.
This "third mistake" is quite common in the history of Western philosophy, whether it's to embrace contradictions (as Camus says), or not to acknowledge or even look for them. Other examples include:
- the development of the concept of "mysteries" by the Catholic Church, which teaches that it's necessary to be able to believe contradictory things at the same time (notably about the Trinity and Christology)
- Hegel's "dialectic", which teaches that contradictions need never be resolved, but should be welcomed and synthesized into a new wisdom formed by accepting both branches of the contradiction
- Kierkegaard's critique or parody of Hegel's dialectic, which I don't understand, but it definitely involves accepting things without understanding them
- the use of phenomenology by the Nazis to reject the validity of traditional values, empirical data, and non-contradiction, in favor of subjective feelings, "lived experience", and "authenticity"
It's found especially in totalitarian regimes, which must teach their citizens to hold contradictory beliefs -- or rather, not to believe or disbelieve, but merely to accept. In fiction, this is "2+2 = 5" from *1984*, "there are 5 lights" from Star Trek: TNG, and Plato forbidding the citizens of his Republic from studying philosophy before age 50.
I don't think all the different instances of it arose from the same chain of wrong beliefs (the Real is eternal > the temporal and contingent is unreal > we must be able to hold contradictory beliefs). There's a web of basic philosophical questions common to most philosophies, and whenever the answer to one of these questions tells you something about the answer to others of these questions, accepting a wrong answer to the first forces adoption of a wrong answer to all the others. But you can take those other nodes of the web in any order, with the consequence that there are many possible paths from any of the foundational beliefs to any derived belief.
If this continued indefinitely, a well thought-out and self-consistent belief system which began with a foundational error would eventually converge on pure error--an ideology of nothing but wrong or incoherent beliefs. The "third mistake", that we should reject reason, is necessary both to avoid renouncing the mistakes already made, and to stop the propagation of mistakes before the system becomes so wrong that it ceases to function.
The one I knew well was pretty anguished. He absolutely hated it. Along with his sense of being unreal, he had developed a habit of watching his breath, and a fear that it would stop if he did not make sure he was breathing "right" -- fast enough, deep enough, with just the right amount of attentiveness to the sensations. He was afraid to fall asleep because he felt like loss of consciousness meant loss of the little self he had left, and what if when he woke up he could not find that little bit again.
I mean, I for sure find hard manual labor something that prevents existential maundering. But yes, I am also grinding an axe. Someone who is having serious difficulties because they worry that the world isn't real needs to, as the kids say, "touch grass".
For example, the past few weeks I have been spending evenings doing maintenance on a public skating rink as a volunteer. Dozens of hours shoveling snow and dragging/holding hoses, often in sub-zero temperatures and one night with a -45 wind chill. Maundering doesn't come to mind because you've got too many other pressing problems.
I am half convinced a lot of modern "anxiety" is because the human mind is wired for a general level of day-to-day anxiety, and so when we have constructed a life/society with so little actual need for it, people find stupid shit to get anxious about.
If someone goes to a Buddhist retreat for some intense meditation over days or weeks and they can't turn off 'monkey mind', the severe form of background narration that happens when our mind isn't 'doing something', someone will help them out by putting a broom in their hand or having them do dishes or any repetitive mindless task, to get things to quiet down so the hapless meditator can get a start.
I have always experienced a lot of anxiety and I know that mowing the lawn or re-staining the deck or as you mentioned shoveling snow will always help me settle down.
Shoveling snow is especially appealing. I like to wait until the snow stops completely and go out after dark. The temperature usually drops, the night sky clears and I usually have the neighborhood to myself. It can take on the aspect of a mystical sacrament at times. A bit like making the sand mandalas that the monks carefully create and then immediately sweep away. A visual aid to appreciate the concept of impermanence.
Just like the mandalas, my tidy driveway and sidewalk will soon be covered with new snow.
I agree with your point about modern anxiety. Something I often think about is that a century ago, people understood far more about the everyday objects around them:
Anybody can understand reasonably well how a horse pulling a wagon works, or an ax, or a staged play or concert, how a fire warms a room, etc. But most people understand very little about the modern equivalents: cars, jets, electronic entertainments, etc. It affects your feeling about your life to understand so little about basic elements of it. And I'm sure the cave man in all of us feels anxious because he knows that if it all breaks, we wouldn't be able to come anywhere close to reconstructing it ourselves.
I had a similar thought the other day. When humans were primitive, our habitat was the natural world, in the sense that survival consisted primarily of interacting with things that exist in nature. That habitat was understood in terms of magic -- we created myths to explain natural phenomena and ascribed mystical properties to natural things. However, we developed extensive practical knowledge of those magic phenomena -- we could manipulate fire, and predict buffalo herds' grazing patterns.
As humans civilized, human artifacts came to play an increasingly large role in the human habitat, yet those artifacts -- by virtue of being the creation of human minds -- were not magical. While we could not explain the physics of how a bow shot an arrow, the component parts were visible and we understood their various functions and how they worked together. This remained the case for the bulk of human history, perhaps even through the start of the Industrial Revolution.
As human artifacts came to dominate our habitat (e.g., cities, themselves a human artifact), natural phenomena both played less of a role in daily survival and became less magical. The Scientific Revolution began to explain natural phenomena in non-magical terms, and while those explanations remained inaccessible to ordinary people, people accepted that nature operates according to laws rather than magic. Daily survival increasingly consisted of navigating a habitat composed of human artifacts that were generally comprehensible to the average observer (a loom, a hearth, a mill).
However, with the advent of modern technology and the rise of a post-industrial society, daily survival consists of navigating and manipulating human artifacts about which the average person has extensive practical knowledge but no scientific knowledge. This is both because human artifacts are increasingly complex and because the division of labor in a post-industrial society permits the average person to be ignorant about their complexity. I spend 10 hours a day on a computer but know nothing about how a computer actually works. We understand these artifacts are not magic, but we cannot explain them ourselves.
So we are reverting to a state of understanding of our habitat similar to that of primitive man - we know how to manipulate our habitats in order to survive, but we cannot explain our habitats. The difference is that primitive man at least thought he could explain his.
Yeah, and primitive man thought gods or magic were running the show. That's a lot more comforting, thrilling and special than thinking that Elon Musk & Mark Zuckerberg are in charge. They are a truly tiny and nasty stand-in for gods. Eww.
I dunno. I feel like anxiety issues have gone to the Moon just in the last 50 years. Young people in my kids' cohort seem significantly more anxious than I or my friends were at their age decades ago -- and it was hard to explain nuclear fission or the transistor back then.
What about the explosion in communications and always-connectedness that the Internet and devices have caused? At least there's the virtue that the growth in potential cause and effect have tracked each other. And...it seems to me people are almost always more self-aware, nervous, anxious -- whatever their inherent level of social skills -- when they are aware they are being watched by strangers.
These days, in much of what we do, we're always being watched by hundreds to hundreds of millions of strangers. And not even just strangers! There are very few moments of the average day when you're *not* potentially in touch with your wife, your husband, your parents or children, all your friends from the most intimate to the most casual (not to mention assorted past flings and affairs, sometimes). Your boss, your employees, your customers, your clients. When I was young, there were big chunks of the day when I was out of touch, unreachable. Walking to school, 20 minutes where my mother couldn't call me, no friends could text me. Driving to work, nobody could call me. When I got home from work, my boss couldn't reach me except in the direst emergency. If I talked to an acquaintance by phone, or wrote a letter, nobody else overheard the conversation, the way zillions of people do when we write on our FB page, post a comment, Tweet.
It's like the Panopticon crossed with the Stasi, where everyone watches everyone, not in general with malevolent intent, but....it doesn't seem natural to our monkey brains, I bet.
Yes, I think that's true. I remember the first time I ever posted anything on an online forum, probably about 25 years ago: It felt like a huge deal, like being on Dancing with the Stars or having an editorial in the NYTimes. Thousands of people were going to read my words. I felt nervous and excited and presumptuous. Now of course I'm used to it, like everybody else, and yet I think a part of my brain is still registering that a LOT of people are reading this, and I know very few of them. Even on here, even among the names I recognize I don't know most people's gender or age. And then of course fairly regularly you get a reminder that some members of the group you are speaking to are not the kind of person you'd ordinarily disclose anything to, because they make clear right away that they despise your ideas and hate you for having them. Heh.
That sounds like a very fulfilling use of one's time. As I mentioned in the OP, I think manual labor is good! As is--also already mentioned--literally touching grass.
Consider the analogy to depression: from the outside, depression often looks like someone being a little mopey, slow, or avoidant. Big deal! Cheer up, you big baby!! But dig a little deeper and it turns out that severely depressed people sound like they're suffering more than cancer patients, who objectively suffer a lot but seem to adjust and even find the bright side of their diagnosis, in a way that rarely / never happens with depression. Add to that a bunch of weird physiological symptoms that seem hard to predict and fake in advance (why would malingerers all come up with psychomotor retardation, for example?) and depression starts to look pretty different from "just being really sad".
Much like depression, this derealization thing seems multicausal. Some people are depressed because their lives suck (sometimes for reasons within their control, at that!) and some people are depressed in the midst of the fullness of life, for no obvious reason. Which camp you're in is helpful intel, in case your life just sucks and there's something you might could do about that. But "unsuck your life" isn't helpful advice if your life doesn't really suck in the first place. DPD seems correlated with "having the sort of childhood that would alarm CPS", but "don't have an abusive childhood" isn't actionable. (And yes there is obvious genetic confounding but "don't have child abuser genes" is even less actionable.)
Or consider the analogy to obesity, the bulk (heh) of which is pretty obviously due to some combination of sedentary lifestyle plus cheap and hyperpalatable foods. A victim of our own success if there ever was one. Yet "stop being a coddled modern" is not actionable advice in the way that "eat exactly this much and exercise" is.
It's possible that DPD is just another disease of modernity and can be treated by consciously unwinding some of its more virtualizing aspects. It also sounds like a real hell to be trapped in, so I have some sympathy for people who find themselves stuck there, never having chosen to inherit the broken social tech that made their condition all the more likely. So yeah I encourage them to touch grass and meanwhile I'm polling here to ask if there's anything else I've missed.
A) I think a high enough % of people who present with depression or other things like DPD or whatever are just basically malingering that one of the first main pieces of advice and attempts at therapy should just be to "grow up and start getting out there and doing stuff".
B) Not to mention that it's generally good advice anyway even if it doesn't work. I think even for the people who are actually having some sort of underlying issue that isn't just "I have worked myself into a crummy series of behaviors and excuses that make me unhappy, but I am in a local minimum and so I struggle to get out", it is still helpful in most cases.
I was "depressed" with an actual diagnosis and at one point an SSRI prescription (which I only took for a couple months before I started reselling) from ages ~11-26. And severely depressed for most of of ages ~13-19.
Now, I had a lot to be depressed about. A father who was totally out of the picture since I was 4, and an alcoholic mother who was passed out more days than not by about 7 PM. I was super into girls and horny, and tried really hard to be charming and pleasant, yet high status girls more or less hated me from ~11-16 (I had a lot of success after that). I had 2 very serious, embarrassing and public medical issues (one of which involved removing half a testicle). And I also generally hated myself and was ashamed at my overall behavior.
Nevertheless, on top of that, at times it felt like I was literally not in control of my own mind. Like there was a dark whirlpool in my brain just sucking it down into dark, intrusive self-harming thoughts. I got lots of advice, mostly about taking SSRIs or about therapy related to my mother. None of that was very helpful; SSRIs just made me feel numb.
What was helpful was when my life started going better. And going out and accomplishing things and building up some self esteem. And I am positive that my uncles conscripting me into manual labor and things like that, while I HATED it, was at a minimum more helpful than the therapy, and looking back now I wish they had done it more.
What I needed was less candy coating. More people pointing out all the good things I had in my life, and how easy it would be to turn it around. (Which I did eventually stumble into on my own). But everyone was so concerned with servicing/validating my whining that there weren't nearly enough people saying "even with all this shit you are probably sitting in a top 3% situation globally and a top 1% situation historically, and you are going to fucking whine and wallow about this instead of turn it into something?". That is a message that would have resonated with me, and did resonate with me the one time I heard it.
Anyway, I just think we are way too precious about these things. You can't make an omelet without breaking some eggs, and the world needs omelets, not cracked eggs.
I'm more sympathetic to your tough love approach than you'd think a shrink would be. For instance, I'm pretty sure that the best approach with kids who have school phobia is to make them go to school. Sure, also talk with them about their worries, try to teach them some coping techniques -- but meanwhile, they gotta go to school. I had a school phobia in 7th grade, no idea why, and after my parents saw that I wasn't really sick they gently but firmly insisted I go to school. My fear faded away in a few days, and I was fine with school forever after. And last I checked, research supports that view.
On the other hand, I think you overestimate how many people are malingering. I definitely was not, when I had my school phobia. For some reason I had been seized by a fear that if I went to school I was going to throw up in some horrible public way, like right on my desk in the middle of class, and would lay awake literally half the night trying to get unscared, trying to convince myself it wouldn't happen. And in college when I had DPDR I most certainly was not malingering. I was terrified and miserable, and would have paid any amount of money to get the feeling to stop. And, by the way, I was not using the fact that I had that problem to get out of anything. I didn't even tell anybody but a few close friends that I had it. And I continued going to classes, and ended the term with a high GPA.
Telling people who are suffering like that that they're faking it is really a bad move. If they're not, it's very destructive, especially if you are important to them. Think how you'd feel if you had pretty severe pain from a migraine or whatever and somebody important to you said they didn't believe you that it hurt very bad! On the other hand, a moderate amount of pushing can really help people who are depressed or anxious. But if you're going to push, the message to give isn't "you're malingering -- get to work" but "it is very likely that being active will make you feel better, even though it feels like it will make you feel worse. Give it a try, for god's sake!"
You have better grounds for your beliefs than I expected, but I wonder how you'd tell if someone had a physical reason (say, a dietary deficiency) for problems with manual labor.
Eh, that could also go in the opposite direction. The few times in my life I've had to endure grueling physical labor I actually found myself dissociating more and more as a coping mechanism.
Asking for someone else: does anyone know of good resources to find rental apartments in the south peninsula (Bay Area, California, USA), within ~5 miles or 20 minutes from San Bruno, ideally under $3k for 1 bedroom plus parking? Person is currently there and scrambling to find something in the next day or two, but all their early leads collapsed.
Apparently all they're getting is automated responses and AI-generated content.
Surf, an app that helps you make new friends based on mutual interests and that some of you will already be familiar with, is looking for a CTO.
A word on the product - on Surf, users open the platform and type in a desired outcome (e.g. "I want to find a partner in London for a Kilimanjaro climbing trip."). We match them with someone who wants to achieve the same outcome. They chat. They become friends. That's it. Elegant. Simple. Life-changing. We want to use it to eradicate loneliness from the world.
A word on who we're looking for - we need someone with considerable NLP/AI/ML pedigree and experience who can also do good things with simple app creation software like Expo. Someone who loves the early stage challenges of startup life. Someone so enthused about the idea of eliminating loneliness from the world and finding a friend for everyone that they would happily work on this for a year+ pre-funding.
In terms of existing assets, we've already got lots of proprietary technology incl. key algorithms, and an app that's 80% finished. Our waiting list has seen uptake in over 40 countries (+1 if you consider ACX its own pirate nation) and we're in promising talks with universities over pilot schemes. We already have one advisor (prior exp. at Google) onboard and are actively seeking others to make near-term fundraising more straightforward. Any CTO coming into this project will be well-set-up for success.
Early stage - equity only, but we'll be pursuing funding immediately after launch.
The ideal candidate would be in London, UK but fundamentally we don't care where you live, especially if you're the next Aaron Swartz/Tarun Mathur/Mira Murati.
Get in touch by emailing us at team@imsurf.in with the subject line "Surf CTO Position".
Excellent sentiments, but far afield from the topic: is it 'odd' for conservatives to not want their communities or nations filled with folks who are not 'their people'?
Looking at immigration from an economic perspective is always reasonable, of course -- that's why the White Anglo-Saxon Protestant elite in my example above was willing to let Irish (and then Italian) Catholics overrun some parts of New England. That lens itself is reasonable, but to pretend there are no other possible lenses through which to view the issue is not reasonable.
The Irish weren't fond of losing Boston when it happened, either. All of history is every place being overrun and people being sad. I'm no conservative, and I don't believe conserving places/peoples against this kind of thing is even possible. It's just not 'odd' to want to do it. You know what I mean?
The idea that immigration is always economically beneficial is fantasy (unless you think a larger GDP is the sole determinant of a "good" economy). And the Irish and Anglo-Saxons are very, very similar genetically/culturally compared to Anglos and Somalis.
I think it is a bit more understandable in places like Europe where you have fairly homogeneous cultures that go back hundreds of years, or that are amalgams of smaller but very close cultures that do (forced amalgams like France arguably already represent a bit of a cultural loss). Denmark with 3 million extra Ugandans settling there over the period of, say, a hundred years is fundamentally going to be a different country. The same holds for 3 million Koreans or Russians or Spaniards or whatever, but the more distant the incoming culture, the smaller the number of people from that culture it takes to change Denmark a lot.
Less so in places like the US, where the citizen's ancestral culture means next to nothing (to the point where absurdly broad categories like "white" or "black" are considered ethnicities) and almost no citizens can trace their ancestry to America in the 17th century or even the early 19th century (Native Americans are an obvious exception; descendants of the original Dutch and English settlers also, to a lesser degree). The USA with 60 million extra Ugandans settling there over a century is not going to be that much different. That is, unless the melting pot of the US stops being a melting pot and the society atomizes in a way that leaves 60 million people with a completely different culture living separated from the rest of the society...
The difference I guess is that countries like Denmark (and by extension most countries in the Old World) cannot realistically be melting pots lest they stop being the countries they are whereas being a melting pot is kind of the point of the US and "melting" more people is therefore not going to change its nature much.
That said, you can have assimilation even in places like Denmark but I think the capacity for such assimilation is much lower and it is going to happen much slower. By the capacity I mean "absorbing more people from different cultures without changing the country in a fundamental way". So I think it is more understandable if people worry about the speed of immigration, or speed of immigration from distant cultures...but only to a certain degree - like I said, the capacity for assimilation is still nonzero even for the Old World countries (probably also somewhat different for each country).
Yes, this...the way American and Canadian right-wingers criticise immigration based on European "tropes" is a bit annoying and just cringe IMO...I understand that some people, even in the US or Canada, might not feel comfortable with people from a different culture, but then to me it doesn't make much sense, as one of the major arguments in Europe against immigration is that it will lead to the collapse of the welfare state (which I share to some extent)... but for the US (and to a lesser extent Canada), this argument doesn't make much sense since the welfare state there isn't nearly as developed as it is in Western Europe... yes, I've heard some self-styled "libertarians" in the US being against immigration because they think foreigners don't appreciate Anglo-American libertarian values...which is even more ridiculous, since it not only goes against one of the major tenets of libertarianism (person-to-person exchanges should be the major consideration, and groups don't exist), but is also ridiculous for other reasons (aren't most founders of US startups of immigrant background?)... basically, being anti-immigration in the US or Canada (either for economic or cultural reasons) is a very cringe position based IMO on egoism and maybe some kind of (White?) Supremacism (I understand that most alt-righters here will deny this, but IMO it's the same as being a leftist who complains about police violence against POCs in Europe)...
For Europe it makes more sense, but even in the (Western) European context I would focus on the economic dimension of immigration and base my criticism on this area (e.g. Daniel Stelter or Thilo Sarrazin).
>this argument doesn't make much sense since the welfare state there isn't nearly as developed as it is in Western Europe.
Non-whites already cost white taxpayers around half a trillion dollars in government services received in excess of taxes paid (not considering the cost of crime and imprisonment, which would significantly increase it).
Importantly, non-whites overwhelmingly vote Democrat, and if enough of them come and Democrats amass enough political power, the welfare state will almost assuredly expand. You're assuming that America has a fixed set of policies that will not change, but they depend entirely on who makes up the voters of this country.
>which is even more ridiculous, since it not only goes against one of the major tenets of libertarianism (person-to-person exchanges should be the major consideration, and groups don't exist), but is also ridiculous for other reasons (aren't most founders of US startups of immigrant background?)...
It's not ridiculous. We don't live in a libertarian society, so groups have the power to restrict other people's rights; therefore, it's entirely reasonable to think about things in terms of groups.
>basically being Anti-Immigration in the US or Canada (either for economic or cultural reasons) is a very cringe position based IMO on egoism and maybe some kind of (White?) Supremacism
White people have a significantly higher mean IQ than all groups besides Northeast Asians and Ashkenazi Jews (selected populations of other groups will be high IQ, but pro-immigration people don't support selective immigration). This is a brute scientific fact, and your dismissing it as "white supremacist" is extremely bad faith. You're basically using a slur to dismiss scientific reality that doesn't suit your ideology.
I am not sure how to argue with alt-righters like you...ok, fine, are you suggesting that the US should base its immigration policy solely on IQ? Anyway, if they did, should they still let in people from low-IQ groups if they individually have higher IQs?
Also, you did not provide any reliable source for your claim that "Non-whites already cost white taxpayers around half a trillion dollars in government services received in excess of taxes paid".
> Importantly, non-whites overwhelmingly vote Democrat, and if enough of them come and Democrats amass enough political power, the welfare state will almost assuredly expand
For a guy who likes to complain about the bad guys doing everything based on 'ideology', you spout an awful lot of dogmatic right wing ideology yourself. Are you oblivious to the irony?
Or do you think the crap you say is simply ‘the truth’? That is what you are accusing your adversaries of here. They think they are speaking the truth but really it’s just ‘ideology’.
FFS. The word Ideology by itself doesn’t even have inherent negative connotations.
Here, let me be as clear as I can. This is the dictionary definition of ideology:
ideology
noun
1. a system of ideas and ideals, especially one which forms the basis of economic or political theory and policy. "the ideology of democracy"
Ton of assumptions there. I realize it's hard to put yourself in someone else's perspective but it's a valuable skill. Of course, if they're 'white supremacists' they're not really people and you don't have to work so hard. Whew!
Nowhere in their comment do they at all claim or even imply that white supremacists aren't people. It seems that you are making assumptions about their motives rather than providing tangible facts that support your side of the argument.
Fair enough. I was responding to the dismissive tone, but was not precise.
As for you responding to me, nowhere in any of my comments here or elsewhere have I advanced or even implied a "side", let alone one that would be served by "tangible facts". You must have me confused with some other commenter -- my point throughout has been that dismissing the default position of human groups throughout history as 'odd' (or, in the case of the comment under discussion here, as the product of pure 'egoism' or 'supremacism') is myopic and absurd. Nowhere do I imply I hold this position, and if it seems shocking to you that someone can even describe a side without being on it, that's hardly anything to do with me.
Also all excellent thoughts, but again beside the mark: it's not for us to say how good or meaningful of a culture an ethnic group or coalition needs to maintain in order for it not to be 'odd' for them to want to preserve whatever it is that they have.
And again, a deracinated concern for 'cultural values' is why Republicans with very low non-white support can talk about how the good people from south of the border are 'natural conservatives' at the same time that their actual constituents wanted the border sealed shut fifty years ago. It's not something broad like "do these people believe in Jesus (albeit a Catholic version) and love their families" -- everyone believes in something and loves their own families -- that makes the difference, but merely the question "are these people recognizably 'my people' -- do they look like me, do they sound like me, will their sons have a natural sympathy for mine due to these similarities?" All of this takes place in a fraction of a second when we see another human being -- we notice body language and race (or ethnicity, if we're from somewhere that distinguishes this to a high degree of granularity: anthropologists in Africa are often amazed at what a tremendous distance a trained human eye can sort in-group from out-group) almost instantly. Most of us here reading this substack learned to set aside these instinctual movements in favor of the individual and in favor of the brotherhood of the human race, but that people who haven't done this work continue to feel the way the vast majority of humanity until the modern period felt isn't 'odd' in any meaningful sense.
Who in the US are "your people"? I am not American, but from what I gather (e.g. popular media, social media, news, blogs etc.) there seems to be little that various groups of Americans (broadly "liberals" and "conservatives") have in common, so I think the times when "Americans" (or at least White Americans) of all kinds felt like part of the same group have been over for a long while...
You can ctrl-f in this page (if substack will allow you) to see that I'm not talking about myself or 'my people' -- we're engaged in an anthropological exercise to explain the apparent motivations of people we would otherwise think of as "odd".
Now, if we pretend you were addressing that question to a white American who opposes immigration: if he's smart, he would look at you with disgust and say that he doesn't have to justify his sense of his people to an obviously hostile interlocutor engaged in tactical ignorance -- after all, we know exactly who this white conservative's people are when we call them backwards hicks who don't spice their food and think pro wrassling is real, but when we're talking about who they think they are, suddenly they don't exist as a people group.
Honestly, I don't deal much with White Americans, except on the internet...since I'm not planning to move to the US, I don't think I'll be having too many conversations with them... as for Europeans who are against immigration: I understand the sentiment, but they should base their arguments rationally (i.e. in economics)...
The idea that group-level preference is irrational and economics is rational is probably not your most rational opinion. People don't live in economies, they live in communities. And that goes for everyone. When black folk in Harlem complained they were being priced out by gentrification, shouting at them about economics probably wouldn't have reassured them much. Same thing everywhere else, every time this happens. Not saying it shouldn't happen or that you should care, just trying to help you understand people who aren't like you.
The part left unsaid (but which we all know, and without which the conversation doesn't make any sense) is that some cultures are better than other cultures. Adding more Swiss people to US culture is likely to make it better, adding more Ugandans will likely make it worse.
It's no secret that Swiss culture is better than Ugandan culture, it's obvious from the fact that countries populated by Swiss people look like Switzerland, and countries populated by Ugandan people look like Uganda.
Sorry, but this argument is unserious IMO... for two reasons:
Yes, Switzerland is a wealthy country with a high quality of life, but is it really only because it's inhabited by "Swiss" people? I mean, it is a country with 3 official languages, and this alone should cause "Ethno-Nationalists" to see where they went wrong by using Switzerland as an example of an "Ethnonationalist" country... IMO the reasons Switzerland is so wealthy are much more complicated, but it's partly because of geography and obviously the institutions (which formed that way for various further reasons) of that country... definitely not genetics/biology, though... I mean, just saying that if Swiss People move somewhere a place automatically becomes "better" strikes me as unrealistic...
After all, there are places in the US where many people of Swiss descent live, so let's see how they perform economically against the rest of the country. (According to Wikipedia, these are the places with the most Swiss Americans as a % of their population:
Berne, Indiana – 29.10%
Monticello, Wisconsin – 28.82%
New Glarus, Wisconsin – 28.26%
Boys Ranch, Texas – 23.30%
Monroe, Wisconsin – 18.91%
Pandora, Ohio – 18.90%
Argyle, Wisconsin – 17.84%
Sugarcreek, Ohio – 17.29%
Elgin, Iowa – 15.79%
Monroe, Indiana – 14.35%)
Looking at the median household income for some of these places, they were
Also, secondly, among Americans who are anti-immigration there seems to be the belief that it would be simple to just get people from those wealthy European countries to move to the US... but the truth is simply that not many people in Western Europe are interested in moving to the US anymore, so it's simply not an option. So either you don't have any immigration (because there aren't many people from other Western countries who would want to move to the US) and deal with US demography becoming more like Japan in the future, or simply accept that if economic growth is important to you (as it seems to be for most US conservatives), then the US will need immigration and the vast majority of it will come from "Non-Western" countries...
Also, lastly, while I agree that most people would agree with you and say that Swiss culture is better than Ugandan culture, it is still subjective... since culture is inherently subjective.
Oh come on. Instead of burying your nose in a book (or Googling random data), just freaking go to Switzerland, walk around and keep your eyes and ears open. Nobody who's actually been to Switzerland for any length of time can doubt that Switzerland is as successful as it is because it's full of Swiss people.
"Who cares about statistics? What about my lived experience?" -> I suspect you dislike people who make arguments like this; why do it yourself?
(Have /you/ ever lived in Switzerland? If not, it's not even your own ~lived experience~ it's that of a hypothetical person you're imagining, even less grounded in reality.)
I've been to Switzerland quite a few times (8 times in the past 12 years)…and it's a very beautiful place, both in terms of the natural and built environment.
But are you suggesting that if all 8 million people from Switzerland were to move to the US, then they would magically turn the US (with a population of 330 million people) into a society just like Switzerland...? Also, as I wrote before, Switzerland itself is a multicultural and multiethnic country, so to use Switzerland of all countries as an example for Ethnonationalism is a bit odd...
And also, once again I am asking why anyone from Switzerland would want to move to the US if it's already a better country (I think you could say objectively that this is the case, at least in terms of statistics like GDP and health outcomes)…?
>Yes, Switzerland is a wealthy country with a high quality of life, but is it really only because it's inhabited by "Swiss" people?
OF COURSE!
It's not because of its natural resources. It's not that Switzerland has magic soil. Switzerland is prosperous entirely because of the people living there, and if you change the people living there, you change the country. If you replaced the Swiss population with the population of Haiti, the country would collapse. People are what make a country.
> I mean, it is a country with 3 official languages, and this alone should cause "Ethno-Nationalists" to see where they went wrong by using Switzerland as an example of an "Ethnonationalist" country...
Ethno-nationalism isn't linguistic nationalism. They're all the same race. They're literally more closely genetically related than people from different parts of India are!
> and obviously the institutions (which formed that way for various further reasons)
They formed the way they did because of the people living there. There's no grand mystery here. It's the people. Africans have never, ever made a country with "good institutions" before, and the only non-genetic explanation of this is an endless series of just-so stories to rationalize a denial of racial differences.
>I mean, just saying that if Swiss People move somewhere a place automatically becomes "better" strikes me as unrealistic...
Northern/western Europeans have made everywhere they go better. Look at the US, look at Canada, look at Australia, look at New Zealand, look at South Africa.
"unrealistic" is any narartive in which the sudden and rapid flourishing of these countries following european settlement has nothing to do with them being settled by the same type of people.
> deal with US demography becoming more like Japan in the future, or simply accept that if economic growth is important to you
Economic growth isn't important - per capita GDP growth is important, and you will not get that from low-IQ third world immigrants. They will continue to be a fiscal drain.
>Also, lastly, while I agree that most people would agree with you and say that Swiss culture is better than Ugandan culture, it is still subjective... since culture is inherently subjective.
If we're talking about what leads to prosperous, safe societies with good institutions, then no, it's not subjective in the slightest.
>Ethno-nationalism isn't linguistic nationalism. They're all the same race. They're literally more closely genetically related than people from different parts of India are!
India is an extremely large, diverse country that has 22 different languages recognised in its constitution. Being more homogenous than India is like being taller than Peter Dinklage.
Well, you don't seem to answer my arguments... Are you saying that it's because of the "genetics" of the Swiss people that Switzerland is such a rich country? Sorry, but how come Ticino is so rich if large parts of Italy aren't? Also, "Europeans have made everywhere they go better" is really debatable... I am not saying that colonialism was always bad, but what you are stating here is pure opinion... no sources to back up your claims. Also, "per capita GDP growth is important" -- if it's caused by IQ, why isn't Japan growing faster than India or even the US, considering they have a higher IQ as a country?
No they're not. South Africa was a much better place to live, even during apartheid, even for blacks, than was Zimbabwe. People can put up with a fair amount of petty racism in order to have enough to eat and not have a serious risk of being fed to a woodchipper because you said something disrespectful about the Chief Thug. Ranking racism as the #1 Evil is a First World Problem viewpoint.
How is that comment racist? It was a comment about culture --social shibboleths, the values people cherish or don't, how they act or don't act on those, et cetera. I don't see any place where "people with black skin are better/worse than people with red/yellow/green skin" was stated. If you're seeing things that aren't really there -- maybe it's your own assumptions about people that need examining?
If you mean "diverse" in terms of skin color, I personally couldn't care less about that, any more than I care how many toes other commenters have, or the color of their pubic hair. Frankly, I would find it a little creepy if someone *was* interested in those things -- if it were of interest to other commenters whether I was black or yellow or white. Why do you want to know? Ew.
If you mean "diverse" in the sense of different life histories, different perspectives and talents, then I'm all for that, but I am baffled how this connects to skin color. Again, the fact that you seem to assume it does makes me wonder about your own unexamined and maybe unconscious race-based and race-oriented attitudes about other people.
Anyone put off by "racism" here (i.e. discussing the scientific REALITY of racial differences) doesn't belong here, because it means they're incapable of good-faith discussion of thorny issues.
Speaking of melting pot, I have been thinking lately about that concept, and also the concept of "white." It seems to me that the woke left have been looking at those two terms as intricately linked: that to have cultures melt together means that the people sublimating their culture to the broader culture become "white" regardless of their skin color or racial background, and that something is lost in the process of "melting" together. This is not as big a deal for Anglo-Saxons, who change a little bit, but a really big deal for cultural minorities who have to change more from who they might be in order to fit in.
I would be curious to get other thoughts on this aspect, though I recognize it's a bit niche for a third tier post response.
Well, I am not American, and my impression of Americans (to a lesser degree also Canadians and Latin Americans, but they seem less "universalistic" to me) is that they underestimate a lot by how much Europeans differ from each other. The difference between, say, Spain and Sweden is at least as large as that between Mexico and Canada. And even the cultural difference between France and Germany is like the difference between Mexico and the US. Even the differences INSIDE European countries are probably at least as large as the differences between the US and Canada. E.g. Bavaria vs Schleswig-Holstein (it's basically not even the same language anymore :D ).
Americans with European ancestry might have a few specific family dishes that somewhat resemble something you might encounter in their "old country", and maybe they know a little bit more about the history of that country... and that's about it. Otherwise they are Americans (even if they fancy calling themselves Italian-Americans, Polish-Americans or whatever). At least that is my experience with all the Americans I've met in Europe and elsewhere (though I've never been to the US or anywhere in North America, actually).
If the "white ethnicities" (basically usually meaning European ethnicities) retained their individuality, then the US would look like the EU. It would never become a federation in the first place (though probably some states would be pushing for it in hopes of controlling it...*cough* France *cough*), you'd be stuck with the articles of the confederation and each state would have a very unique identity, most people form one state would not understand the people from another state unless they learned a foreign language, etc.
Of course, Germans and Ukrainians are a lot closer than Germans and Malaysians, but Bavarians and the people from Schleswig-Holstein (both parts of modern-day Germany) are very likely further away from each other culturally (even linguistically in a way) than German-Americans and Ukrainian-Americans.
So at the very least, the various Europeans largely melted into "whites" in the US. The same can probably be said of Africans, although there it was often quite involuntary. But if you look at (sub-Saharan) Africa, being "black" means very little there; being Bantu vs. being, say, Oromo is a big difference. For various historical reasons (but probably mostly slavery), the melting pot in the US seems to be worse at melting things much further than that (although both groups are still very American, i.e. American whites are not just an average of Europeans and American blacks are not an average of Africans... they are both distinctly American).
Exactly... that's my impression too, though we Europeans do the same, e.g. thinking of "Indian people" as being like a single European nationality, even though India itself should be compared to the whole of Europe IMO, since both are subcontinents of (Eur)Asia... but yeah, the American view of "races" is quite peculiar to them, especially "white" vs. "everyone else", and considering people of Pakistani and Japanese ancestry to be part of the same "race" strikes me as weird too...
Not all of us think of "white" as some monolithic group, and especially not "Asian" - though some of our regulations and government counting rules may make it seem that way. "Woke" is not the only viewpoint in America, and is actually a pretty small minority (though apparently a majority in a number of fields, including academia, media, and tech).
That's not exactly the case, though. "White" ethnicities retain their individuality; they're just in a state of mutual intercomprehensibility with other whites.
In places where peoples live together, the groups will become "white" unless active steps are taken to do otherwise. In Austin, Mexicans and Vietnamese were as white as anyone else. Things are different in New York. I didn't realize how many white ethnic enclaves existed until I moved up here.
I'm also going to question the generalization of "White ethnicities retain their individuality". I believe that in the vast majority of cases, by the third generation any remnant of their ancestral identity is more of a hobby than anything else. And usually not even that. My Irish-American sister-in-law drinks Guinness, celebrates St. Patrick's day, visited Dublin when anyone else would have visited London or Paris, and is otherwise indistinguishable from any other mainstream American. For my part, I happen to know which European countries my ancestors came from, but that fact is about as relevant to me as my astrological sign (which I also happen to know). And the vast majority of the white Americans I know, I would have a hard time guessing their ancestry beyond "Europe" unless I recognized the surname's origin.
I didn't know that white ethnics existed until I moved here. It was a revelation to be able to distinguish a Pole from a Dane from an Irishperson by sight.
But again, I'm not claiming that the mass-media "American" "ethnicity" doesn't exist or that people haven't been pressured to abandon previous identities in favor of it. What I'm claiming is that it's neither necessary nor sufficient to adopt it in order to be "white." That as long as those particular ethnicities are mutually intelligible with the existing "white" bloc, they will be considered as being "white."
This allows for the phenomenon that I saw in Texas of whiteness allowing for a larger variety of skin colors than elsewhere.
I know a fair number of immigrants and their kids/grandkids. The kids who grew up here are basically Americans who will, if pressed, go ahead and speak some Spanish, but who think of themselves as Americans, hang out with and date other random Americans, and only mention that they're Salvadoran or Mexican or Peruvian or whatever if someone asks or there's some special reason it's relevant. The grandkids mostly don't even know Spanish or Tagalog or Chinese or whatever.
I think the main difference is in culture, though. Yes, an average Pole looks different than an average Irishman, and you can get better than 50% accuracy guessing their ethnicity, but with some people you can't really tell just by looking at them whether they are French or Irish or German or Swedish or Polish, or even Spanish in some cases (if we are talking about northern Spain). Also, looks are really not that important to people, I think.
It is the cultures that are different, and that is what matters more to people, not the shade of your skin (though of course there is always some prejudice based on first impressions). I know people of Vietnamese ancestry (born here, children of immigrants) here in the Czech Republic whom I consider pretty much Czech. And their kids will be culturally as Czech as Italian-Americans are American. So while an (ethnically) French guy looks a lot more like a Czech guy than these people do, his culture is clearly French and not Czech, and that is what matters. Or rather, physical looks give the first impression, but that only lasts until you actually go and talk to the people.
By the way, I was actually really surprised that people consider Harry Windsor's (or whatever his surname is now that he is no longer an official part of the British royal family and so should use an actual surname) wife black. If nobody ever mentioned that to me and someone asked me I'd say she was white. She is definitely very American in any case :-)
Yeah, this just isn't the whole story. As the other person replied, tons of "white" people in the US had lost almost all their individuality by the 1980s. Some families retained a small smattering of "ethnic" practices, but many did not, or were so interbred that those practices were meaningless, or just ad hoc curiosities instead of some actual heritage.
My parents were between them like 6 different types of northern European, minimum (though my dad wasn't in the picture anyway), and we lived in a pretty German/Scandinavian part of the country. I know some families who retained their "Finnishness" and a bit of Finnish cultural practices, and likewise Swedish, and Polish, and German. But just a few. Most were like mine: just "American", with no particular ethnic connection other than something researched for elementary school projects on "melting pots". And these are only like 4th-generation families with 5th-generation kids.
I really think the TV and mass culture of the 60s/70s did a good job of washing out much of the ethnic difference for a lot of people, especially white people.
The shared cultural heritage was the Charlie Brown Christmas special, not lutefisk.
IMO it should be accepted that "White American" is an ethnicity just like "German" or "Russian" (which are themselves composites of people with many ethnic backgrounds)... of course, these days, White American could be split into "liberal" and "conservative" as sub-ethnicities... :D .
Except that the "white" part has been mostly optional since the 1980s at the latest. Not everybody takes the option, so it's still mostly white, but we shouldn't be trying to insist that it is exclusively white.
Not all "white" ethnicities retain their individuality. Some lost it before coming to the US, and many lost it after. Some of it is a random mixing of previously separate people (my mom's family is Eastern European, with some German, Polish, Slovakian, etc. and various unknowns). My dad's family may or may not involve a variety of Western European cultures. We don't know or observe any of the culturally relevant practices from any of these countries. And assuming that one "white" cultural aspect is the same as another is part of the problem. Slovakians are not German, and may have good reason to resent them.
IMO, the melting pot model describes American cities much better than it does American rural regions. I've only lived in one American rural region, so I might be generalizing wrong, but it definitely has a distinctive culture. This is slowly getting eroded by internal migration and cultural influence of people from the dominant cultural group.
There have been observations and articles (whose accuracy I'm not remotely able to speak to except in the broadest sense) referring to an ongoing homogenization and "Southernization" of US rural culture.
I have no firsthand knowledge if that's right, though it would help explain the popularity of Confederate iconography in places like southern Illinois (the Land of Lincoln!) and West Virginia (which exists as a state because people there in the 1860s were decidedly not Confederates).
I’m in New Hampshire, and I’ve been surprised and disappointed to see a Confederate battle flag or two even this far north. Trumpy types, of course: more looking to flip the bird than express anything resembling a thought. The homogenization of hick culture via the internet is complete. Ironically enough, the Republican Party of the 21st century has become the Confederacy 2.0
Dunno about WV, but in Illinois (where I live and where I have spent lots of time in the rural parts for more than a decade now) the overall trend may be related to US rural culture shrinking. There are small cities all over central and southern Illinois that are now half-ghost towns: built out for 3,000 residents but now home to only 1,000, etc. (Driving through places like Henry, Illinois is downright spooky -- the empty houses mostly aren't boarded up, they're just sitting there like a dusty old movie set.)
Illinois now has 40 entire counties each with resident populations under 20,000 people, and 15 of them have fewer than 10,000. That drain-out is not new, of course, but it is very current; those rural counties' population curves all tell the same story.
Our statewide population decreased by 0.1% from 2010 to 2020 while the City of Chicago population increased by a similar tiny fraction. The suburban areas generally increased, the medium-sized cities like Rockford stayed flat. The part of the state which is really draining out is that vast farm belt. And the people remaining are disproportionately older; you see hardly any 30somethings or 40somethings anymore except for the Mexican-immigrant pockets, every farm-county elected official or community leader now is in his/her 70s, etc.
To what degree the homogenization/Southernization (which is absolutely true) is a cause of the rural drain as opposed to vice versa, I have no idea. The two are simultaneous though and surely connected in some way.
Yeah, but Illinois is a dumpster fire next to 55-gallon drums of aviation fuel stored in the main hold of RMS Titanic, so most people with a brain have fled or are making plans to flee.
Yea, that's been the talking point in certain circles for a while now; it just isn't supported by reality. For instance, I need to correct something I wrote above: in fact, Illinois statewide had a net population gain of 250,000 residents from the 2010 to the 2020 census. (I had accidentally looked at a preliminary estimate of the 2020 census, not the actual final census results.)
The population-loss meme is connected to the one about Illinois being one of the highest-taxing states, when in fact it ranks 30th among the 50 states in state income tax rates. It is also very average nationally (23rd) in sales taxes; where Illinois does crack the top ten (8th) is in property-tax rates.
And the part of Illinois which genuinely is emptying out is that huge farm belt in the middle. Since that is the part of our state's economy which is least impacted by the property tax rates (farmland in Illinois is taxed at 1/6th the rate of residential or commercial properties), it does not appear that levels of taxation are a driving factor in which parts of the state are losing/gaining population.
The US also has the advantage of hundreds of years of development of social technologies for turning people into new Americans. For example, we Americans tend to be less subtle in communication than most places but that's for obvious reasons.
Even if that's true, it doesn't make it unreasonable at all. And people in 19th century America would have considered it laughable to suggest whites would ever be a minority in the US, and yet this is an inevitability this century.
I'm not sure that's the question intended. Everyone already knows that that fear is common around the world. Historically, that's how many nations and peoples perished, dissolved, or were conquered. It's the usual explanation for the fall of Rome. (I recently posted my own disagreement with that here, but I can't deny that the Goths arrived as refugees, then became the rulers of Western Rome). I think outright conquest was more common, but it's often hard to tell at great distances in time. Historians now argue over whether the Celts wiped out the Picts; whether the Anglo-Saxons invaded England suddenly and violently, or slowly and peacefully; whether the Hyksos invaded ancient Egypt from outside or were immigrants who seized power (https://www.science.org/content/article/invasion-ancient-egypt-may-have-actually-been-immigrant-uprising), and whether the "sea people" invaded ancient Egypt, or were originally just refugees.
I also note that the question is posed using the words "their people", and not ethnic or racial terms. Today, conservatives don't want their nation filled with folks who are radicals, and radicals don't want their nation filled with folks who are conservatives. Is one of those "odder" than the other?
So I think that, if there's a question to ask here, it's, "Why do so many people today think the fear of being outnumbered by people who don't respect your cultural values is unreasonable or immoral?" Or perhaps, "Why is everyone in America today in denial about their reluctance to live with people who aren't 'their people'?"
Radical is the opposite of conservative. Liberal is, if anything, the opposite of authoritarian. Today America has a conservative party and a radical party, but no uniquely liberal or authoritarian party. Both liberal and authoritarian ideals are split about equally between our parties.
Liberalism historically emphasized individualism, equality before the law, equality of opportunity, free markets, private property, the limitation of state power, freedom of speech, toleration of diverse opinions, and the right to own weapons. Conservatives are clearly the liberals today in that original meaning of the term. But they fail when it comes to newer freedoms that weren't thought of 2 centuries ago, like control over one's own body (sexual preference and practice, prostitution, medical treatment, recreational drug use, abortion if your community's metaphysics allow it); the freedom to have privacy (no search without a warrant, freedom to travel and to buy things without it being tracked), cryptography, and pornography; and freedom from gender roles.
Forgive me Phil, I'm way out of my wheelhouse here, but your definitions seem a bit fusty.
I've poked around a bit and, having completely absorbed the thinking of Edmund Burke [joke], I see "radical" being used to describe Margaret Thatcher and that weird bit of cosplay that just went down in Brasilia.
Help me out here with some fairly recent examples. Would you label all these people as Conservative?
The word "radical" has a clear meaning. Radicals are people who want to make large changes right now.
The word "conservative" doesn't denote a particular set of beliefs, but the desire to keep things mostly the way they are at the present moment, or to revert them to how they were at some prior time. This means it doesn't have a clear meaning, because often one party wants to keep things as they are, while the other wants to adopt a policy that was national policy sometime in the (possibly distant) past.
So these terms aren't really opposites. There are radical conservatives, who want to make big changes right now to revert to some (possibly imaginary) earlier time, like the Nazis, the Ku Klux Klan, or maybe Margaret Thatcher (I don't follow British politics).
I'd call all those people you listed mostly conservative, but "conservative" meant something different in Reagan's time than it does today. For instance,
- In the 1950s, conservatives were against free speech if it might be communist propaganda. Today, radicals are against free speech in general. So you could call them "conservative" because they want to blacklist and silence people as was done in the 1950s.
- In Woodrow Wilson's day, conservatives thought America should worry about America, while Wilsonian progressives said it should take on poverty and bad governments in other nations, preferably working toward a world government. After World War 2, the Marshall Plan was definitely radical and interventionist, but was supported by "conservatives". In Reagan's day, conservatives felt that America had a responsibility for the rest of the world, while radicals felt America should stop interfering with other nations. Today, conservatives again think America should worry about America, while radicals again say it should take on poverty in other nations, preferably working toward a world government.
The term "liberal" is clearer than "conservative". Historically, it refers to principles of liberty described by Enlightenment thinkers, especially John Locke in the 17th century. The main point of what I wrote is about the meaning of "liberal", not about the meaning of "conservative".
The confusion over the meaning of "liberal" is probably due to the Civil War. The Old South considered itself liberal in the old sense, and yet was a hierarchical society that didn't extend the freedoms it praised to slaves or the lower classes. I would argue that it wasn't really liberal. There was no freedom of speech. Abolitionists or workmen trying to claim equal rights before the law would get beaten up. And the North was both radical and liberal (in any sense).
You could argue that the term "liberal" today has come to mean simply people who want radical changes. I object to people doing that, because I believe personal liberty is important, and the people who want radical changes today are generally opposed to personal liberty. I don't like it when people call for censorship, government control of sex and gender, racial separatism, and the elimination of gun rights, free trade, and private property, and call it "liberal".
So my use of the word "liberal" isn't as objectively correct as I implied. I just think it's more honest.
Indeed, they did from long before any Europeans showed up. E.g. when the first French voyageurs arrived in the Great Lakes they found that the Iroquois and Algonquin confederacies had been engaged in a mutually-genocidal war for something like a century, with the core issue being which tribe was entitled to live where.
Similar examples are found throughout history around the world going as far back as we have any historical records. It does seem as if that is one of those fundamental gut human fears.
If resources are scarce, people are going to fight about resources. Where resources are not scarce, people might fight over ethnicity and ideology, but also might not. Western societies tend to deliver abundance, and enforce tolerance.
One reason for the Norman conquest of Ireland was a petty king inviting in Anglo-Norman mercenaries to help him in a political row with another petty king, which escalated up to the high king. Petty King 1 seals the deal by marrying his daughter off to the leader of said mercs, with promises that the merc will be king after him. The mercs then decide they like the place, settle down all over, and set up as local lordlings. Local chieftains and kings who are fighting each other decide that having the new guys on *their* side whacking their enemies over the head is a great idea.
Then the king in England decides "hey, my former vassals may be getting ideas above their station, time to remind them who's the king" and claims lordship of Ireland. Fast-forward the Eight Hundred Years 😁
It happened all over: A and B are at each other's throats, C turns up, A and/or B thinks this is a great opportunity to get C on their side, and eventually C ends up owning the place. Then it's hard to kick them out again.
The same happened in Central/South America. I'm certainly confusing which one was which, but the Mayas and Aztecs were at each other's throats. Then the Spanish conquistadors showed up, the Mayans invited them to crush the Aztecs (or was it the other way round?), and guess who ended up owning the place.
Will the international drug trade inevitably collapse thanks to future machines that can synthesize any drug from simple precursors? I ask because I just read a report describing how advances in chemistry over the last 15 years had made it possible to synthesize methamphetamine from more common types of chemicals that governments have difficulty tracking.
What's the market case for anyone ever developing a "future machine that can synthesize any drug from simple precursors"? It may be technologically feasible at some point, but it's going to be competing with an extant global supply chain that can connect you with a factory that produces the particular chemical you want at scale and with full economies of scale, now with overnight delivery. I can see niche applications, but they may not be enough to finance the development and it may not be enough for hypothetical future you to get lost in the crowd when you buy one for your home drug-peddling business.
I remember the early hype about 3-D printers as the inevitable, omnipresent home appliance of the future, and the speculation about how that would mean e.g. gun control was futile because anyone would be able to print a Glock or an AR-15 on demand. I also remember what happened when that dream met the reality of Amazon, leaving 3-D printers as mostly hobbyist toys that are nowhere near capable of printing serious guns, with a handful of high-end industrial machines that could *maybe* do so but not at a competitive scale even on the black market.
You mean illegal drug trade, I assume? Doesn't seem likely. It's very unlikely that any time in the next 100 years it will be possible to dial into a simple machine the structure of some random small molecule and have it synthesize it from whatever random feedstocks you can source from your local grocery, hardware, and animal feed stores. A more plausible scenario is that it becomes possible to type your structure and desired precursors into OChemChatBot and have it outline a plausible synthesis.
Of course, whether the synthesis works or causes your garage/backwoods lab to explode in blue-green fire because OChemChatBot hallucinated the answer will remain a business risk.
I think it's more likely that the tech will be available, but giving the recipe to your Synth-o-matic will be illegal. Defense Distributed will be an indicator.
I hadn’t read either of these before and they created a lot of new tracks of thought and a tall stack of new reading I want to do.
Right now I'll only say that my take on Daniel Ingram, from watching videos of people interviewing him, is that he is playing the long con. Sorry, David: if you are really enlightened this won't bother you, and if you are running a con it probably won't either.
I'm trying to help a young guy I know find a job that's a good fit. He's in his mid-20s and has ADD & Asperger's. He's distractible and a bit odd -- on the other hand, he is friendly, honest, hard-working, and quite bright. He's got a college degree in computer animation, and knows how to use Blender, Photoshop, and some of the lower-end animation and video editing software. He is also competent, though not expert, with the basic office suite apps -- Word, Excel, etc. -- and had a couple of courses in Python. He has built a couple of simple websites.
He's been working at a hardware store stocking shelves and helping customers for several years. He is well-liked there but makes little more than minimum wage, and really needs to earn more. It seems to me that his computer skills should help him get a job that pays above minimum wage, but I can't think of a job that might suit. But if I were opening a store on a tight budget, and was not very computer literate myself, I'd love to have somebody like him who could help me get oriented with using a computer for the store, could build a simple website for the store, could explain spread sheets to me, could make attractive notices in a nice font to post somewhere -- things like that -- and then later help me unload boxes and put the stuff on the shelves.
He is willing to take one or 2 courses if improving certain crucial skills would make him more hirable for jobs that pay at least 50% more than minimum wage, but he's clueless about what courses to take.
One last thing: It would not work for him to be self-employed. He needs the structure of a regular job.
What ideas have you got for this amiable young oddball?
He could start as an office assistant for a small law/accounting/insurance firm etc., develop his computer skills, and either grow with the business or move on to a larger firm for a better position after 6 months to 1 year of experience.
It sounds like his social skills might be the core problem? If you're only looking for "above minimum wage" and he has a college degree, this seems like an extremely easy problem to solve. Even if his degree were in something completely unusable, he should be able to land a generic office job somewhere. Adding in what could be summarized as "IT skills" to a lot of jobs should also open up a small world of Help Desk or basic tech department jobs. 50% more than minimum wage should be easy going either route, with reasonable expectations of 2X+ minimum wage at least as growth potential.
That is to say, if I'm reading you correctly that he's got social interaction issues holding him back from pursuing something more obvious, then it's not so much his skill set that's in question, as where he can fit in. Assuming that, it seems that his best bets would be to improve his work skills to the point that a large tech firm (or local equivalent if relevant) would want him for his skills and would be willing to overlook the other difficulties, or for him to look for a smaller company where there would be some clearly missing skills (probably general IT/MS Office) that would be willing to give the guy a chance.
If his social skills are strong enough, then the other option I would suggest is doubling down on one or more specific aspects of his degree or tech knowledge and applying specifically for those kinds of jobs. I don't know computer animation as a field specifically, but it sounds to me like a field that is hard to get into because it's so niche (geographically dependent, limited general use for most companies). If so, he may need to figure out all of the related fields that have some kind of crossover, and apply there as well.
"But if I were opening a store on a tight budget, and was not very computer literate myself, I'd love to have somebody like him who could help me get oriented with using a computer for the store, could build a simple website for the store, could explain spread sheets to me, could make attractive notices in a nice font to post somewhere -- things like that -- and then later help me unload boxes and put the stuff on the shelves."
Some of these things you only need once, and you can buy them separately. There are Word and Excel courses for beginners. There are companies that will create a static website for you cheaply.
I do not think it is realistic to look for a tailored "stock-keeper / Python web developer" role. That is very unlikely to happen... and even if by a miracle it happens, he would lose all the leverage that comes from being able to say "I quit", because it is unlikely he would find a job of the same type again. So he needs to choose one or the other.
However, that does *not* mean that he needs to make the choice in advance. He can simply apply to both types of jobs simultaneously, and take the first job offer that is an improvement over his current position. But he needs to remember that the two different roles require two different personae. When applying to a stock-keeper job, do not emphasize Python and Photoshop. When applying to a Python development job, the experience in stock-keeping is only relevant in the sense of "can keep a job".
*
So, the first option is to try becoming a better-paid stock-keeper. Write a CV that displays (1) previous experience with stocking shelves, and (2) knowledge of office applications, that is: Word, Excel, e-mail. (Everything else is dark magic that stock-keepers are not supposed to know.) Send this to shops, both large and small. Or maybe let a job agency do it for you. The image you are selling here is "an intelligent stock-keeper who can also do the related administrative work". (Which might be a reason to pay him better than a mere stock-keeper.) In the longer term, possible advancement to a position of supervisor, or maybe a purely administrative position. Or the company might immediately offer an administrative position instead.
*
Another option is to put the focus on Python and web development. The problem with developing simple web sites is that a company only needs such a thing *once*, and then maybe an update a few months later. That is not enough to justify a full-time job. He would need to develop for many companies, but if self-employment is not an option, he needs an employer who does this kind of business. But in 2023, such a company will probably use some content management system and create new websites by clicking "create web site" in the user interface. The ability to create a simple web site from scratch is only useful as a stepping stone towards something more complicated.
So the image here is "a young person with basic IT skills". Apply for a position of tester or junior Python developer.
Before applying as a tester, download https://www.selenium.dev/ and write a few Python scripts. Try automating something simple, like logging in to a website, going to some list and verifying that an item with certain properties exists, then maybe selecting that item, performing some action, and verifying that you received a success message (see the sketch below). You can do this over a weekend, and it could make a dramatic difference over "I have never done anything testing-related". As a web developer, I assume you already know HTML and CSS; also learn how to write (the most simple) XPath expressions to use in the Selenium scripts.
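A minimal sketch of what such a script might look like -- the URL, element IDs, and item text are hypothetical placeholders, and it assumes Selenium 4 with a chromedriver available on your machine:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes chromedriver is installed and on PATH
    try:
        # Log in to a (hypothetical) website.
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys("testuser")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()

        # Go to a list and verify that an item with certain properties
        # exists, using a simple XPath expression.
        driver.get("https://example.com/items")
        matches = driver.find_elements(
            By.XPATH, "//li[contains(@class, 'item') and contains(., 'Widget')]")
        assert matches, "expected at least one 'Widget' item in the list"

        # Select the item, perform an action, and check the success message.
        matches[0].click()
        driver.find_element(By.ID, "buy-button").click()
        assert "success" in driver.find_element(By.CLASS_NAME, "alert").text.lower()
    finally:
        driver.quit()

Twenty-odd lines, but it exercises exactly the skills a tester interview asks about: locating elements, asserting on page state, and cleaning up the browser session.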
Advantages of being a tester: requires less knowledge than applying as a developer, and you can transition to a developer later. Disadvantages: many software companies do not use testers, because the testing is done by the developers.
To apply as a Python developer... I am out of my depth here, not being a Python developer. Learn how to use "venv", "pytest", and https://jupyter.org/ . Maybe ask someone what the most popular Python frameworks are these days, and write something simple in one of them. Notice that Python is useful not just for web development, but also for writing command-line scripts which e.g. process JSON files or find something in a database (a tiny example below).
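For instance, a toy command-line script of the JSON-processing kind I mean -- the file layout and field names are invented for illustration:

    import json
    import sys

    def main(path):
        # Expects a JSON array of objects,
        # e.g. [{"name": "Ann", "active": true}, ...]
        with open(path) as f:
            records = json.load(f)
        # Print the names of all "active" records.
        for record in records:
            if record.get("active"):
                print(record.get("name", "<unnamed>"))

    if __name__ == "__main__":
        main(sys.argv[1])

Run it as "python filter_active.py data.json". Something this small already demonstrates command-line arguments, file handling, and the json module -- enough to talk about in a junior interview.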
Don't overthink it. The idea is that you spend a year or two in your first IT job, and then you can apply elsewhere and ask for a significantly higher salary.
Thanks for your detailed advice. I was thinking that it would probably help this guy to meet with someone whose job it is to assess somebody's skills and tell them what jobs are a good fit, which of the good-fit jobs are currently looking for more employees, and what skills, if mastered, would make the person more hirable. I looked online and found ads for places that advertise they do that. For instance, one called STEM Career Services: "STEM Career Services retains a panel of expert career coaches, each with invaluable experience in consulting, biotech, pharma, federal government, nonprofits and more – all ready to help you find the perfect job." Is this bogus, or a real service? It seems like there should be places offering actual help of the kind I have in mind, because there must be a need for it.
I have never seen a service like that, so no idea. Perhaps try to find a review online? Also, it could be something in the middle: a mere job agency that tries to look more impressive than it really is. Which might still be a good outcome.
I only have experience with the job market in Slovakia; I have no idea whether it works similarly in other countries. Most job agencies do not specialize in IT; they provide jobs for everyone. Which means that they have a few hundred job positions they hunt for, and maybe five of them are IT-related. So it does not make sense to give them too detailed a list of your skills and preferences; they will just give you whichever of those five options seems to best match the keywords you mentioned. Might as well say "Python developer" and save time. There is also one job agency specializing in IT, but they only want self-employed contractors.
I always thought that something better should exist, and I am not really sure why it does not. Maybe Slovakia is just too small a market. Or maybe it is a chicken-and-egg problem, like trying to build a new Facebook. It does not matter how good an idea you have; it is most likely to fail, because people want to be where other people already are. Imagine you start a new job agency tomorrow -- now what? Companies reject you because you have no job candidates waiting. Candidates reject you because you have no job offers waiting. So you either fail, or you desperately take anything you can, and become the general job agency with a hundred company clients and five IT positions. This is just my guess; I never tried it.
Then there is coaching, which is a different type of business: you pay them money, they give you lessons. Can they actually find you a job? Probably only in the sense that if you have skills, any job agency would find you the same job. Maybe they cooperate with a job agency or two and send them the CVs of people who completed their courses. But finding you a job is *not* their core business; their core business is giving you lessons for money. You might as well take lessons from someone who doesn't call themselves a "career service". But again, just a guess.
Thanks. Can you give me an idea how someone could learn about server maintenance? Is that the sort of thing you can learn with an online course? I do not work in tech and don't know enough about this sort of thing to advise him, or to judge whether he's up to mastering server maintenance.
Finally got DALL-e to produce an image of Shrimp Love Me, Unaligned AIs Fear Me. It's here if you want to have a look: https://i.imgur.com/fBwwZSq.png
I had to make 2 images and photoshop them together. The main prompt was "Steampunk style: A man standing in water is dismantling a huge machine. Many shrimp are swimming towards him." But DALL-e just would not do the damn shrimp no matter how I phrased it. I tried mentioning them before the huge machine, but then I got machine versions of shrimp -- sort of robotic metallic ones. I also tried editing the original by erasing a lot of little areas and then putting "swimming shrimp" as the prompt for the edit. DALL-e simply filled the erased areas with what had been there before, or with plain blue water. So finally I just did a separate image with the prompt "Many pink shrimp are swimming towards the center of the image," which DALL-e rendered just fine, and I photoshopped the 2 together. That's the first time I've used Photoshop for a DALL-e image, and it definitely made the process less fun. There's something magical about just using what DALL-e gives me, but in this case I hadda have the shrimp.
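(If anyone would rather script that last compositing step than do it by hand in Photoshop, here is a rough sketch using the Pillow library -- the filenames, the blend opacity, and the "paste over the lower half" choice are all made-up assumptions for illustration:

    from PIL import Image

    # Load both generated images; RGBA mode makes blending straightforward.
    base = Image.open("machine_scene.png").convert("RGBA")
    shrimp = Image.open("shrimp.png").convert("RGBA").resize(base.size)

    # Blend the shrimp layer over the whole scene at 60% opacity...
    blended = Image.blend(base, shrimp, alpha=0.6)

    # ...but keep only the lower half (the water), leaving the machine crisp.
    result = base.copy()
    box = (0, base.height // 2, base.width, base.height)
    result.paste(blended.crop(box), box)
    result.save("composite.png")

Not as controllable as Photoshop's eraser, but fine for a quick composite.)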
I have a couple related to AI (The Ol' Job and What Dreams May Come), The Shell Game relates to economics, and there are some others like Gourbain's Flux Capacity Theorem that I think some people round here might like. Also the pinned, No Hot Take Under the Sun, is short and directly inspired by something Scott said once.
> God created each one of us to live out our lives in one or another cognitive and ideological bubble, and though we may paw desperately at the inside of the slippery surface, there is no outward progress. Every inch up the wall just rotates the sphere around us.
I am the worst sort of reader, the sort who comes up with obscure exceptions without addressing the gist. Anyway, have you read about craniopagus twins, i.e. conjoined twins joined at the head? I have read that their cognition seems to overlap somewhat (e.g. https://en.wikipedia.org/wiki/Krista_and_Tatiana_Hogan). Can their minds fit into the bubble metaphor?
That's really interesting! I imagine they have a lot in common and probably don't disagree with each other too much on political or social issues, but who knows. It'd be interesting to be physically and cognitively attached to someone you can never agree with on anything. If they see the same things but perceive them differently, it'd just go to show how important perception is in interpreting and understanding the world.
If, in general, the "Art World" hates Tartarianism and everyone else likes it, this might be an opportunity to break out of a local maximum and mainstream Tartarianism again. AI could allow the capitalists to cut the Art World out of the loop and sell people the styles they like. Of course. this would likely depend on AI design being able to generalize to things like woodworking if it wants to compete with IKEA.
I doubt it. The commercial art-for-commission world has always been Tartarianism meets Furry Porn.
The Art World is filled with weird stuff like bananas taped to the wall because it's defined in opposition to the large and thriving beautiful (and thus actually popular) art scene, not because there is a shortage of actually beautiful stuff around.
AI art will make changes within that beautiful art scene, but it won't change the relationship between the pretty scene and the Art World. To the extent that AI art actually enters the Art World, it will do so by continuing its existing un-pretty styles.
I don't expect AI art to enter the Art World at all; the opposite, in fact. I propose that AI could break the Art World's influence on commercial art/design. I have two end tables in my living room: one is a Brutalist IKEA piece, while the other is a more Tartarian design that I inherited from my grandmother. What if AI made it possible to compete with IKEA by selling Tartarian furniture? The Art World might end up becoming a hermetic scene with no broader influence.
It seems that you are saying the Art World produces ugly art because pretty art is an oversaturated field? I can see that, and it makes sense to me. Given that, then the only real chance to break into art (or fashion, etc.) is to make ugly art. That would apply to AIs as much as people, so yes, AIs would have to make ugly art that is somehow novel in order to get recognized. Being from an AI seems enough right now to be novel, though that will likely change if there are free or cheap ways to get AI to create art on demand.
If you're producing pretty art it means you're just one of those low-status artists who goes around making decorative landscapes for lower-upper-middle-class people to buy and hang in their houses. Maybe you have a small gallery in a popular tourist area where people wander in and consider which of your pretty landscapes would look best in their dining room.
This means that your whole career is dependent on outsiders, not artists, which is adjacent to being an outsider yourself.
I agree that's the case now, but I'm more interested in how we got here. Renaissance artists painted and sculpted pretty things, and were deemed very high status for doing so. It's not a given that pretty = low status.
Do you think there could be a large-scale society (at least several million people -- EDIT: hundreds of thousands should suffice to make this interesting; the main aim is to avoid very small social structures, which are much simpler to handle) which worked on more or less the same principles modern Western democracies work on, without modern technology?
What I mean by more or less the "same principles" is something along the lines of close-to-universal suffrage (specifically with women on more or less equal footing with men), no slavery (or de facto slavery), and a high level of individualism and individual rights (the negative rights, i.e. not expecting state-run social welfare systems, but expecting a society where people are more often than not free to do whatever they want as long as they don't interfere with the same freedoms of others).
Are there any real-world examples?
What is the minimum amount of technology required for such a society in your opinion? Is there in fact a minimum?
Bonus question: What do you think were the societies closest to this in each era/area of the world?
Note: I do not count philosophy and social institutions as technology (although in some sense they are a very important piece of technology), so I allow even rather implausible societies -- say, the one you'd get if you could magically transfer modern people to the world 20,000 years ago, had them all forget everything about their physical technology, and replaced that knowledge with survival skills (so that such a society doesn't just die out in a week).
I can think of two close-but-not-quite examples:
1. Medieval Switzerland. Well, the last canton to give women the right to vote did so in 1990 (rather, it was made to do so by the federal court), but an alt-history medieval Swiss confederation where even women get a say does not feel like that big a stretch of the imagination. And the low level of centralization seems to overcome the technological burdens associated with democracy in a large-scale society (in fact, it seems to work better than most countries even today, and individual Swiss votes often carry meaningful weight).
2. Medieval Iceland, kinda?... something between direct democracy and "libertarian anarcho-feudalism"... women were still not exactly equal to men there either (perhaps better off than in medieval Switzerland, worse off than anywhere in Europe today).
I think the big question that needs to be answered is 'what are the military requirements of the society?' Most of the social details that matter for determining the principles involved are going to be determined by the answer to this question. With no military needs, there's very few things standing in the way of an ideal society. The two realistic military requirement scenarios I can think of are 'living alongside one or more potentially hostile peer societies' and 'the potential threat of an overwhelming number of hostile outsiders' (what we might label based on recent threads on ACX as 'SN risk').
There's a lot to unpack here. I define a 'peer society' as one with a similar general technological level, resource base, and population (but not necessarily the same). While in the long run I take as true that the more free society will advance more economically and technologically, that's meaningless if the current technology level allows the less free neighbor to win militarily in the short term.
With modern technology, we can produce and transport food and other essentials efficiently enough that we can survive on a permanent professional military and still maintain modern values. If, on the other hand, you need almost all the population producing food and other necessities, then permanent soldiers are a serious drag on your economy (unless you use them to pillage your neighbors, which modern values won't allow you to do). The first smaller question, then, is 'do you consider the principles of 'modern Western democracies' to allow conscription and/or a period of compulsory military service?'
Different military technologies and social organizations allow different levels of permanent and temporary military capability that you can expect your society to be able to call on. To go with one of the more obvious real-world examples, if your small landowners are practically born with a longbow in their hands for hunting, it's a lot easier to call up a competent militia capable of countering professional armored enemies than if you have to train them from scratch.
EDITED TO ADD: What's interesting is that the ebb and flow of military technology goes both ways. It's probably good for your hypothetical modern values society if useful weapons are commonly owned, either because your farmer's blade-on-a-stick farm tool is not significantly worse than the other guy's spear, or because longbows or muskets are common hunting weapons on the frontier. It's probably bad for your hypothetical modern values society if the battlefield is dominated by cavalry or other animal-based troops (chariots, elephants, etc.). This has several interesting implications: first, that your society could become worse off as technology progresses if the new tech favors a less democratic means of warfare. Second, that the geography of your society also needs to be taken into account, i.e. bad terrain for horses may be good terrain for a modern values low tech society.
I think conscription in the ancient city-state style is ok, as long as the society only uses it to defend itself (an empire with a city-state at its core which is very free but lives off the work of the people it conquered does not count as the society I am looking for). By the way, some modern western democracies still have this kind of conscription (Switzerland for example, Germany until very recently; probably more examples exist).
I would say that a professional military paid by some form of taxation (or even something more voluntary, but that is out of scope) is ideal...provided that you structure it carefully in such a way that it does not take over the society like it gradually did in Rome after the Marian reforms. But like you said, it is typically not an option for a pre-modern society, so limited conscription stays within bounds for such a hypothetical society.
To answer your question, I think there is a sweet spot at which a lasting modern values society is possible with a low tech base, at least as far as military tech goes. Switzerland is probably close to the ideal case. You want enough natural resources (especially farmland) to be at least self-sufficient in the necessities, but not be prosperous enough to be an obvious target. You want natural barriers to army movement (especially horse-based troops) but you want trade to be possible. You want to be far away from any steppe or steppe-like geographical feature that could produce nomad hordes, or at least ensure that by the time they get to your natural barriers you've had time to mobilize your farmers to make trying to pillage an expensive proposition not worth the rewards. At this point, modern values come down to reducing infant and maternal mortality, reducing the risk of disasters, maintaining trade with neighbors (in ideas and culture as much as goods), and dealing with the cultural problems inherent to religious friction.
Good points. I was thinking along the same lines but I underestimated the military aspect, I think, especially the steppe nomads. In fact, from what I can tell (and that is frankly not much, so feel free to correct me), very early Russia or at least parts of it were fairly "liberal" for their time. Places like the republic of Novgorod seemed to have a lot of potential in this respect. But it seems that the longest lasting impact of Mongol invasions was probably the way the Russian princes became a lot more like the Mongols themselves and places like Novgorod eventually went up in flames, being replaced by an authoritarian society that has not really changed that much until today. But maybe Novgorod was also pretty bad and I just assume it wasn't because they were a merchant republic very much connected to the Hansa.
Switzerland seems to meet all your requirements except for the birth mortality - modern medicine is probably the most important technology for the emancipation of women. The Swiss also seemed to fare better than most in dealing with the 16th century religious conflicts. Perhaps it also helps to have a somewhat more rural society. Most of the free places in the past were economically kind of backwater. Then again, Italy was very rich and probably more free than most places but probably quite a bit less free than Switzerland (my impression is that those places were mostly oligarchies if not outright monarchies). Maybe it has less to do with money and more with geography (which leads to less money). It is much harder to set up a more authoritarian regime in a country which is full of mountains.
So mountains and a distance from the steppe seem to be good candidates for two necessary conditions for a low-tech society to be liberal. And modern medicine (or something close to that, I guess that you could have discovered something like penicillin by accident even in the middle ages?) without which women are unlikely to achieve any significant emancipation.
If you have better tech, you might not need the mountains. Steppe warfare becomes obsolete and land becomes less valuable on its own (and costlier to conquer).
I wonder if the mountains are that crucial after all....the low countries have also been very close to being liberal (probably somewhat less so than Switzerland) for most of their history. They were even very rich from a certain point onwards (though boggy swamps before that). They were definitely very far away from steppe nomads (although it is flatland all the way from Russia to the Netherlands, it is just a bit too far away from Mongolia I guess).
I've been thinking this over. It ultimately comes down to how much power you need to maintain your society from stresses both inside and outside. I would hope that having a democratic government would do a lot to alleviate most of the internal stresses from competing for power; you don't have elites raising private armies when that won't work to give them control over the levers of society.
War and religion are the two outside context problems your society has to contend with, in that you can do everything 'right' and still lose because of factors entirely out of your control. As I think about it, religion is the thornier issue because of your adherence to modern values: your values mean you can't stop your people from adopting religious beliefs which may be against your values, at which point you have internal stress again. Switzerland was very lucky in having Christianity, even though it did experience some conflict.
I think the low countries were at their high point when their more powerful neighbors were distracted by the newly available ability to colonize the rest of the world. Why fight each other over Holland when you could more profitably establish control over lands other than Europe? Again, that's something you can't take for granted. I don't assume that mountains are the only natural barriers; I think the channel worked very well for England, in that it was wide enough to make invasion very difficult, yet was narrow enough that trade with the continent could flourish. Japan, on the other hand, was a bit too far away from the other east Asian countries (though there are obviously other factors as to why those two turned out differently).
Offhand I'd say you've got cause and effect reversed here. It's technology that enables centralized oppression, and the concepts of republicanism and individual rights are a response to that, an effort to preserve the pre-technological style of living that humans evolved to prefer.
Without technology, it's not really plausible, nor is it of interest, for several million people to coordinate their actions to accomplish vast centralized goals, from building cities to making war, and so having your life hijacked by some far away strange authority doesn't happen naturally. A hunter-gatherer Stone Age society is inherently pretty liberal, because anyone who doesn't like the local social order can usually just walk away and fade over the horizon. Stuff gets decided by consensus, with a leaning towards the people who've been around the longest and/or have made good decisions before. There's not much concept of a franchise, because you wouldn't decline to listen to even a kid, if the kid had something useful to say.
Which is not to say there aren't Stone Age tribes that are oppressive or violent, of course. Human beings are of a nature that could fuck up Paradise if offered to them on a platter. But oppression and violence on a million-person scale -- and the development of the concepts of republican self-government and individual liberties as a bulwark against them -- requires technology.
"Without technology, it's not really plausible, nor is it of interest, for several million people to coordinate their actions to accomplish vast centralized goals,"
I suppose it depends on what you and the OP mean by "technology" - if it includes any type of tool, then yeah. Without tools (aka technology) it's not clear that we are in any way talking about humans.
And is the "several million" a scale or metaphor for "lots"? A bee hive has about 60 -80,000 individuals. Is the hive structure to hold the honey a technology?
I suppose the biggest Neolithic cities maybe got to 100k, but still a lot of "technology" involved.
Christianity seemed to be able to coordinate a centralized mission on the scale of millions, but roads and ships were a necessary "technological aid" - even before the cooption of the pagan war technology.
several million - yes, that is a metaphor for many. I wanted to exclude tribes of tens to hundreds of people or even societies in the thousands. But hundreds of thousands would probably have been enough; millions is way too many for ancient civilizations.
technology - basically anything below the level of technology at which the societies we recognize as modern appear in actual history (i.e. roughly less than late 19th century technology). The aim is to see if there is some minimum technology required (beyond that which allows complex societies of hundreds of thousands in the first place; agriculture is a must, writing probably also, everything else is optional). Or, as Civilis points out above, whether there are some other non-technological conditions which can compensate for the lack of technology (tall mountains, for example).
Well, a social structure of a few hundred people is inherently simpler (and "easier") than that of millions or even tens to hundreds of thousands. That is why I am mostly interested in the latter. Now, society != state, so they do not have to be a part of a single state but they should not exist as atomic tribes.
I am no expert but I think it was often actually much harder for someone to leave one tribe and join another. In fact, it seems that in many tribal societies expulsion is the ultimate punishment (rather than death). You are mostly protected by a network of your relatives and friends; as an alien without any of those connections you are basically fair game and nobody will care too much if someone robs you and kills you. You might ask another tribe if they'd take you in, but since your previous tribe forced you out you are automatically suspicious from the beginning, so they are more likely to refuse unless you have something they lack and want (ideally skills, so they cannot simply rob you of it).
In general, I'd rather think about examples of agrarian/settled societies - basically my "hidden" question is something like: "Could there have been a society that would over time develop into what we recognize as a modern 'western-like' society while being very familiar to us in its structure all the way since its inception in the distant past? And if not, why? Is it because of some crucial technology?"
Cities have been around for a long time. Why don't you explore that?
And I'm not sure about your artificial cutoff of "modern" technology.
Is what you call liberal society possible before Christianity and the radical notion of loving your neighbor and aspirationally even loving your enemy? What is the feature you are really thinking about? "Pluralism"? "Cosmopolitanism"? These are features of cities.
Ancient "Water Totalitarianisms" like Egypt or China might be a counter example your assertion that stone age societies can't coordinate oppression, though I don't know if million-person scale was typical for them or not.
Though I agree that technology typically has nothing to do with a free social order. In addition, I also want to add that the "Liberal social order" contains its own fair share of the exact same injustices and unfreedoms of past social orders, and then some. Like, congratulations on "freeing" women out of the need to marry to live..... and into the need to work to live. Double the workforce for the wage payers (and for the exact same amount of total wage), half (or less) the workforce for raising the kids. That's some really fucking impressive Civil Right bullshit you did right there, feminism.
They might be counter-examples had I not prefaced "Stone Age" with "hunter-gatherer."
Hopefully it has not only just occurred to you that circa-1970 first-wave "feminism" had rather less to do with actual female liberation and rather more to do with the (at the time twenty-something) male Boomer desire to loosen up the sexual mores that kept their potential sex partners' legs closed until there was a ring on her finger.
It hasn't occurred to me just now that feminism is a hack, of course, but it's always worth mentioning to undo the decades of "Feminism == Equality" propaganda.
It's not entirely fair to blame male desire for feminism's tendency to be The Slut Manifesto; it had a role of course, but women are not string-controlled dolls entirely programmed by male desire. And for every male with that desire, there is (at least) one equal and opposite male (her father, brother, future suitor/crush) with an equally vehement opposite desire.
>1970 first-wave
I believe the established terminology calls that second-wave.
No, of course they're not, which accounts for the qualifiers in my statement, but unfortunately white knighting exists because it works, at least often enough to keep it going generation after generation.
Yep, you're absolutely right that 1960s fathers were appalled by the movement. That's one reason Trust No Square Over 30 was a thing, too. But few cohorts are as energetic and single-minded as twentysomething males looking to get laid. That's why we (used to) draft them.
Actually, I think the established terminology for what Carl is talking about here is "the sexual revolution," which overlapped with second-wave feminism but was far from identical with it. (And yet... the idea that Valerie Solanas, for example, was primarily interested in developing an ideology to help satisfy the sexual desires of boomer males is indeed an intriguing one.)
I mean, it's something current feminists explicitly take credit for whenever it's mentioned, so it's a distinction without a difference.
> Valerie Solanas
That piece of shit was far from the typical feminist, and you owe me... things for reminding me of her. At least 3 photos of kittens or their equivalent is what I demand.
I think the scale of millions was a bit too much to ask (I was also thinking about examples which were likely smaller); hundreds of thousands should be enough. The main reason behind that restriction was to exclude very small societies of a few tens to a few hundreds of people.
As for women having to work or marry or anything to survive - I am less interested in the modern welfare state (something that obviously requires a huge economic surplus so it is trivially not available to almost any pre-modern societies, definitely not universally), more in what is considered "negative" freedoms, i.e. basically freedom from oppression. I.e. "you have to figure out your own means of survival, tough luck" is allowed for the sort of society I was thinking about, whereas "you have to do x because you are a woman and if you don't we will put you in jail/kill you/do something else that is bad to you" or "you cannot own a field because you are a serf/woman/slave/..." is not allowed (or rather the more of those things the society has, the further away it is from that hypothetical society of mine).
Negative vs positive freedoms is a sometimes useful distinction but it frequently degenerates into nonsense. Like, a woman today has a negative freedom in not being forced to marry to live, but that translates to being stripped of the negative freedom of not being forced to work to live (i.e. being forced to work to live). The "pre-liberation" situation was the dual of that : she wasn't forced to work, but was being forced to marry (with the significant additional leverage that - if I'm allowed to be that crude - she was in the position of "Employer" rather than "Employed". She is the one who the man seeks, rather than the reverse). I don't understand at all why 1 negative freedom is better than the other.
Aside from that, and back to your original question: I think you have your hard limit when you consider that all modern "liberal" societies have barely-replacing or dwindling populations. I won't pretend I know why that happens, but clearly there is something about liberal societies that makes it keep happening over and over again. Modern liberal societies solve that using globalization: they simply "import" people (directly via immigration, and indirectly when those immigrants themselves have more children than the typical family in the host country) and "export" work via offshoring. Without globalization, a liberal society will shrink at whatever rate its birth rate dictates.
The question, I think, now reduces to: what's the minimum level of technology to make globalization happen? I think it varies depending on who or what you want to globalize. For people, not much at all. Basic second-millennium ships are enough; they were clearly enough to repopulate 3 continents during the European age of sail[1]. For work and information, though, you basically have to have at least the telegraph, or **extremely** good management/financial structures that can keep a company going after its work has been partitioned into month-separated areas of the globe.
[1] I know it happened in the mid-to-late second millennium, but a good deal of that was just navigational knowledge building, i.e. people learning how oceans work. We have evidence pointing to the Chinese knowing how to reach the Americas in the 1300s. <handwave> Polynesians knew how the Pacific works well enough to settle it centuries before, though perhaps you need more than that to settle continents rather than islands </handwave>
That's pedantic. The "Work" in my comment is to be understood in the sociological sense, and under that definition it's very much a thing since (at least) the beginning of agriculture.
> A hunter-gatherer Stone Age society is inherently pretty liberal, because anyone who doesn't like the local social order can usually just walk away and fade over the horizon. Stuff gets decided by consensus, with a leaning towards the people who've been around the longest and/or have made good decisions before.
Partly right; I agree that democracy is an attempt to implement stone age consensus building on a larger scale. But you *can't* just walk away from a hunter-gatherer tribe. Long-term survival requires other people, and joining a neighboring tribe is going to be tough; they're going to assume that you did something awful and got yourself exiled.
Depends. If you mean "our traditional enemies" if things have gotten that far, sure. But I believe hunter-gatherer societies generally consist of small mobile bands of a family or three that exist within a larger tribe of somewhat related families that may occasionally get together for special occasions. In the latter case, I expect exchanging between bands happens all the time anyway, for reasons of trade, opportunities, mating, et cetera. I don't think it would be super hard to say I'm fed up with the old man, me 'n' my mate are going to live with the in-laws. You also have to consider that you might easily be able to start your own band if you have a few like-minded friends.
Rome was not a democracy but probably had the state capacity for a semi-modern democracy. In general, however, I think that the crucial technological invention is the printing press.
Is democracy really a feature of technological advances?
Pluralism and cosmopolitan duty of hospitality is the psychosocial prerequisite. That doesn't seem to have anything to do directly with technology but with living together in cities.
This is a weird question because it's not obvious to me why you couldn't have modern liberalism without modern technology. If Earth was hit by a contrived EMP that permanently knocked us back a few centuries, I don't see us needing to undo universal suffrage.
I think the limiting factor comes from requiring a multimillion person democracy. That means communicating over long distances, doing that with caveman technology would take so much work, and seems ultimately kind of pointless. The cave 500 miles away is never going to affect you in any way, why vote on a federal government to rule it? I think the answer is that it becomes possible whenever technology enables enough trade and population density that it's both practical and desirable to put millions of people under one rule, maybe around 1000 BC (plus or minus a thousand, my history knowledge isn't that good).
Your line of thinking about the question is close to mine. Why couldn't we simply envision the 13 American colonies in say, the 1790s (post constitution) but with universal suffrage? The technology to communicate, including holding national elections, was available. They didn't discuss the same topics, because a lot of the answers were going to be too local for a national congress to worry about. But they definitely discussed topics, made decisions, and promulgated those decisions to the people of all colonies.
Tibor, is there a reason you wouldn't consider the beginning of America, if it had universal suffrage and not slavery, as fitting your description? If so, then it seems trivial to assume that such a society were *possible* even if that one has two glaring inconsistencies with your goal. If you agree that the general structure and technology level both fit your needs, then we could likely find a 17-18th century society that does meet your requirements, but even if not it's easy to posit that such a society *could* do so. Otherwise I think we'd need some explanation of why it would be functionally impossible for such a society to exist without slavery and/or limited voting by sex.
Sure, a somewhat counterfactual early America might fit the bill.
"Otherwise I think we'd need some explanation of why it would be functionally impossible for such a society to exist without slavery and/or limited voting by sex."
This is exactly what I am wondering about. The fact that we don't see such societies prior to modernity might suggest that there are some technological obstacles that make them very unlikely. High birth mortality might be one candidate (but maybe it isn't). If there haven't been any such obstacles since (say) the bronze age, why have such societies only started to appear in the last 150-200 years (and only then in significant numbers)?
Early America is a bit of a special case though - you have a society expanding into an almost virgin continent, with no serious military threats on said continent (neither peer type nor ‘steppe nomad type’) and nowhere near the carrying capacity at the given level of technology.
Also, where exactly is the cutoff for modern technology? I would say early America did have early modern technology.
As Civilis mentioned there were ongoing military threats from natives. There were also European forces who could potentially attack at any time. The colonies had very recently won independence in such a war, and then fought again in 1812.
The British colonies in North America spent about a hundred years developing and fine-tuning the machinery of democratic self-government in an era when the British Army would stomp down hard on any military threat and the British Government would otherwise mostly ignore them. At the end of that time, they were able to tack on a top-level administration and military that could stand off stone-age hunter-gatherers and even win very limited wars with European powers mostly distracted by European wars.
But that first hundred years was critical. Democracy with training wheels.
Even if the plains Native American-type nomads aren't a military threat in the same way that the Mongols were, they're still a military threat to your values.
If the nomadic tribes raid each other, they're almost certainly going to also raid your frontier settlements. This stresses modern values in several ways.
First, you need a permanent military force on the frontier to defend your settlements from larger groups of nomads, which requires a standing army and all the central authority that that requires. This is more of a burden the farther back you go technology wise.
Second, your frontier settlers are going to develop a strong distrust of the nomadic tribespeople, which pushes back against modern multiculturalism.
Third, the war along the frontier will almost certainly be very messy. Among the modern values are the rules of war. It's a lot harder to deal with prisoners if you barely have enough food for your own population. Further, it's highly likely that the prisoners do not share your values, which can cause lots of different issues depending on how your values differ; see Imperial Japan for an example without the tech difference. (Crime and punishment is an area where I think it will be hard to hold modern values at a lower economic tech level).
Well, that is also an answer. I.e. the physical technology is not really necessary. Also, one rule is not strictly necessary for my definition to work. It is more "one society" in which the members of that society can exist kind of freely and easily go from one place to another. So a large enough tribal confederation where you can just switch between the tribes counts, even though there is no single "government". A society of multiple tribes where the other tribes are more likely to kill you if you lose the protection of your tribe (e.g. you did something the rest of the tribe did not like and they expel and disavow you) does not count. The society can be extremely decentralized but it should still be one society in a meaningful way.
I tend to think that in principle such a society could exist even without technology. The question is why it hasn't, or if it did, why it was replaced with more authoritarian rule.
Authoritarian rule won out because most of the successful military technological paradigms required having a strong authoritarian government, either centralized or feudal.
I would guess that the biggest single historical technological military advantage is the horse. Horses are a massive military advantage in that they can provide both power and mobility, whether this takes the form of a charioteer, a cataphract, a dehgan, a horse archer, a knight, or a dragoon. Horses are also expensive to maintain. If you want your military to have these expensive advantages, you need some system to maintain them, and you can't conscript them only in times of need like you can foot soldiers.
A fair number of societies came up with the same solution to having horses ready when a war comes: in return for grants of authority, you expect that the people you grant authority to will show up when a war happens with horses, good gear (weapons and armor), and some peasants (with cheap weapons) to bulk up your ranks. And this pattern works effectively just about anywhere, from China to Iran to France to the steppes.
It's almost worth quoting Wikipedia's article on Horses in Warfare in full, especially the part on the Americas (https://en.wikipedia.org/wiki/Horses_in_warfare#The_Americas), but to summarize: America didn't have horses until the Europeans showed up and used them to militarily overwhelm the natives, especially in the open; natives that acquired and used horses managed to put up much more resistance.
I tend to agree, but in terms of the European conquest of the Americas, it was such a one-sided affair that even without horses the Europeans would have crushed the natives. They had Old World diseases on their side, they had steel vs basically stone age technology (albeit stone age technology arguably maxed out to its limit), they had military organization which allowed them to wage war in a way very different from the way the native Americans waged war, and they had gunpowder (though that also probably wasn't really necessary; 15th-16th century gunpowder weapons weren't all that decisive, especially against mostly unarmoured opponents).
I mean, the Romans would have crushed the Native Americans pretty much just as easily as the Spanish did; the military tech difference was not in hundreds of years but in thousands. And disease killing like 75% of the native population and destabilizing their societies also helped A LOT.
It's not that the plains Indians could have won; it's just an illustration of how one particular military advantage (and not a particularly advanced one) works so well in most situations. The fact that that particular advantage is also tied to several different authoritarian governance systems independently arising in different parts of the world makes the point even stronger.
To get back to your original point and my initial response to you: I think it's possible for a society with a lower technology level to last with something close to modern values, it's just not very likely in most real-world circumstances, especially when you factor in circumstances outside your control. The biggest circumstance outside your control is the military situation, including what "technology" (for a primitive use of the term) underlies the current military paradigm you live in. Many military paradigms incentivize authoritarian social systems, especially the further back you go and the tighter the economic circumstances are.
I think that sounds like a reasonably rational theory as to why history looks the way it does.
Yeah, that sounds about right to me as well. Basically, a society with modern-ish values can give you an economic advantage (from a certain tech level onward...because slave labour probably is more efficient before a certain point, especially in certain labour-intensive industries).
From a certain level of technology onward, you actually get a large military advantage from an economic advantage (industrial revolution is what allowed Europe to basically conquer the world, because industrial production gives you a huge military advantage). But below that tech level you don't get that advantage and authoritarian rule actually tends to give you an edge militarily (that or a steppe nomad structure, but that is limited geographically and also very far from modern values, perhaps even more so).
I think you underestimate how much modern society relies on the technologically enabled surplus whereby a tiny fraction of the population can produce enough food for all, most children survive to adulthood, etc. Premodern life was mostly nasty, brutish and short. Not a good match for modern values.
This is one of the reasons I suspect there might be some minimum tech requirements for modern values. One other reason might be medicine and its impact on birth/child mortality.
World population in 1500 was 5% the size of what it is today. If Johan's aliens showed up and irreversibly zapped Earth back to 1500 technology (say, including no tractors, no fertilizer, no pesticides, and no modern high-yield crops), probably 95% of the population would die. And that's assuming we can still produce as much food as they did in 1500 with 1500 technology. I'm sure there are skills involved in farming without any modern technology which we have thoroughly lost, so it might be worse. Maybe even 99% of the population dies. What kind of civilization emerges after that apocalypse... who knows.
No effective birth control is going to be a big perturbation and is surely going to affect the roles of the sexes. Also, if "no modern technology" means we are back at a population that is 99% subsistence farmers, I'd guess we're much more likely to end up with, at a minimum, some kind of feudal structure than with universal adult suffrage.
If we retain our knowledge of stuff like the germ theory of disease, we still get a pretty enormous win over the previous instance of 1500 technology. If they vanish the capital but leave the knowledge in books, we'll be back to 21st century technology in a couple centuries. (The first 20 years or so will be really bad and lots of people will die, but if there's still civilization with books and some memory of the old world, they'll know what they're working toward. I'm pretty sure a good blacksmith can build a decent steam engine if he's shown how, for example. Some clever group of people will build steamships and cannons and end up ruling a whole continent.)
If they change the laws of physics so guns and engines don't work and electricity won't flow in wires (The scenario in the Change books), we'll still have all the other relevant knowledge. Antibiotics and the germ theory of disease, hydraulics and statistical quality control, etc.
Nit: Copper IUDs can I believe be made and used effectively with 16th century technology, if you know that they are a thing. They aren't an ideal birth control solution, particularly in isolation, but they're probably good enough to significantly perturb society - or in the hypothetical techno-apocalypse, significantly reduce the perturbation.
Sure. None of the people who only know how to write persuasively, interpret complex Supreme Court decisions, or write Javascript are going to care that they don't have a vote. They're going to be only too glad to do exactly what they're told by the local capo good at logistical thinking, who can successfully organize the useful people who know how to make soap, assist a cow with a difficult birth, shear a sheep, or figure out in which tree the bees keep their honey.
Indeed they won’t care, because they will most likely be among the 99% of the population that died of starvation in the aftermath of the disappearance of modern technology. A premodern society has no use for JavaScript or the ability to interpret complex Supreme Court decisions.
Oh I dunno. If they're young and trainable, you always need people to dig holes, move heavy piles of stuff from here to there, and take the midnight to 4am watch. Old guys like me need to watch out, though, lest it be ice floe time. I should learn beekeeping in my retirement. I already know how to make soap, fortunately, but I know squat about farm animals.
>No effective birth control is going to be a big perturbation and is surely going to affect the roles of the sexes.
Possibly less than we think: if modern birth control and medical abortion became unavailable, the current social norms would not be grossly incompatible with other ways of getting rid of unwanted babies. (Infant exposure was a thing).
Lack of antivirals to treat HIV and antibiotics for other STDs would be a bigger disruption to sexual relations.
If we go back to preindustrial infant mortality rates, each woman has to bear several children just to keep the population stable. Adam Smith estimated that the average poor woman (that is, a typical woman) needed ten or twelve live births to produce two adult offspring.
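To put rough numbers on Smith's estimate: if each live birth independently has some probability of surviving to adulthood, replacement needs roughly two divided by that probability in births. A minimal sketch; the survival probabilities are illustrative assumptions chosen to bracket Smith's figure, not measured historical rates:

```python
# Rough replacement arithmetic: expected births needed for a target number
# of children to survive to adulthood, given a per-birth survival probability.
def births_for_replacement(p_survive: float, target_adults: float = 2.0) -> float:
    return target_adults / p_survive

# Illustrative survival probabilities only; ~0.17-0.20 reproduces Smith's
# "ten or twelve live births" figure.
for p in (0.20, 0.17):
    print(f"p_survive={p}: ~{births_for_replacement(p):.1f} births")
```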
By no modern technology, what exactly do you mean? Sure, "no cars, no computers, no radios" - but is there effective contraception? Understanding of the germ theory of disease?
Do you count hunter-gatherer societies? (Is the "several million" threshold intended to exclude them? How do you draw the borders of "your" society?) Is the population living close to the carrying capacity of the land?
Basically, my thoughts were along the lines of: was it possible, from the bronze age onwards, to have a society which resembles ours in most respects but which has anywhere between bronze age and very early modern technology? And if not, what are the sorts of technologies that prevent this from happening? My basic question is: could there have been a society way back in the distant past which looked very familiar to the late 19th to early 21st century western way of thinking and living? Would it have been possible for such a society to roughly keep within that societal framework all the way until the actual 21st century? If not, why not (my assumption is that it has to be due to technology)? Now, there has been quite a bit of change since the 1880s or so, but the fundamentals have mostly remained the same since then... i.e. some variation is allowed, but much less than what we see in actual history.
I am less interested in prehistory because the social structure of at most a few hundred people is a much easier problem than structuring a society of millions, so stone age tribes are excluded for that reason. Now, the people do not have to be part of a single state; they can be very decentralized but still function as a single society. However, for it to count as a single society, it has to be relatively easy to move from one part to another. A tribal confederation where each member can simply leave and join another tribe (which I don't believe was always or even often an option in history) might count as such a society.
Also, hundreds of thousands of people should probably suffice to count.
"could there have been a society way back in the distant past which basically looked very familiar to the late 19th to early 21st century western thinking/way of life? "
No. First off, if you go very far back at all, a huge portion of the population becomes manual laborers (farmers).
I think Civilis's point about military pressures is a key part of the answer (no). Other parts are maternal and infant mortality, lack of contraception, and the Malthusian trap, which technology liberated us from.
That seems like a good summary of the thread. Some modern values are hard to aspire to without the economic surplus of technology.
I think that contraception is not really that big a deal, but the lack of modern medicine (and the associated death rates during childbirth and among children) is. A combination of interrupted intercourse and keeping track of the calendar can reduce the chance of pregnancy significantly. Probably not to zero pregnancies (assuming both partners are fertile, of course), but maybe to 2-3 children, definitely not 8 or 10. If a woman only had sex, say, a week after ovulation, the chance of pregnancy is close to zero (unless you miscalculate the date of ovulation).
So contraceptives are probably a necessary requirement for hippie free love, and there would generally be less casual sex without them, but I don't really think society would change that much if tomorrow they stopped working. But perhaps they allowed a culture shift which would be mostly retained without them today but which would not have happened without them in the first place.
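A toy calculation of the timing claim above; the day-by-day conception probabilities are illustrative assumptions (the only solid fact baked in is that conception is essentially confined to a short fertile window ending around ovulation), so treat this as a sketch of the logic, not medical data:

```python
# Toy model: chance of conception in one cycle, given which days
# (relative to ovulation = day 0) intercourse happens on.
# The per-day probabilities below are made-up illustrative numbers.
P_CONCEIVE = {-5: 0.05, -4: 0.10, -3: 0.15, -2: 0.25, -1: 0.30, 0: 0.15}

def cycle_risk(days):
    """P(conception) in one cycle, treating days as independent."""
    p_none = 1.0
    for d in days:
        p_none *= 1.0 - P_CONCEIVE.get(d, 0.0)
    return 1.0 - p_none

print(cycle_risk(range(7, 14)))   # only a week or more after ovulation -> 0.0
print(cycle_risk(range(-2, 1)))   # peak fertile window -> ~0.55
```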
I forgot to answer my own bonus question. It will probably be very off and wildly Eurocentric but I think it could be something like:
0. prehistory - Ook's tribe. They best tribe!
1. bronze age - Phoenicia? A very uninformed speculation, based mostly on them being mercantile city states. I would definitely expect more freedom there than in, say, Assyria under Ashurbanipal (unless conquered by Ashurbanipal).
2. classical antiquity - the Roman republic? They seemed to do better on suffrage than the Greeks, with fewer slaves and more rights for women (the Greeks were close to fundamentalist Islam on women's rights). But they only get first place because the bar is so low, as far as I can tell.
3. late antiquity - I don't really know...While the early Roman empire was probably still better in this regard than most other places at the time, Rome kind of turned almost totalitarian with and after Diocletian. At the same time it still controlled most of the known world. The Sassanids weren't exactly great either. I don't know much about India or northern Europe of the time but probably some places there must have been more free than either of the two empires of the time (China probably wasn't).
4. early to high medieval - Iceland? But of course, we are talking about tens of thousands of people at most.
5. late medieval to ...well... now - Switzerland? Or the US (though not before 1865).
Had an interesting blog-relevant discussion with ChatGPT; is there some way to engineer my original prompt to make ChatGPT connect the pieces of information it has?
Me: Does the author of SSC write any other public blogs?
ChatGPT: I am not able to provide information about the activities or current projects of the author of Slate Star Codex (SSC) as my knowledge is limited to publicly available information and my browsing function is currently disabled. It is possible that the author of SSC may be writing under a different pseudonym or platform, but I do not have any information on this.
Me: Do you know what Astral Codex Ten is?
ChatGPT: Astral Codex Ten (ACT) is a blog that was written by the same person who wrote Slate Star Codex (SSC). The blog covers similar topics as SSC, including psychology, neuroscience, politics, and philosophy, and is known for its in-depth and analytical approach to issues. ACT was created after the discontinuation of SSC and is currently active. Like SSC, ACT has gained a significant following and is widely read.
--
My other takeaway from playing around with the tool is that it's extraordinarily bad at giving summaries of plots of existing works of fiction, as it frequently mixes up major details and inserts false bits that stereotypically occur in the work's genre. Obviously this is the most intelligent AI I've ever chatted with, but compared to discussing with a normal human it feels frustratingly dumb.
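On the prompt-engineering question: the pattern that tends to help is stating the connecting fact explicitly in the prompt rather than hoping the model retrieves it on its own. A sketch of how you might test this programmatically with the OpenAI Python client; the model name and exact wording are assumptions, and whether any given phrasing gets past the refusal behavior is hit or miss:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Supply the SSC/ACT link explicitly in the same prompt as the question,
# so the model doesn't have to make the connection unprompted.
prompt = (
    "Astral Codex Ten is a blog written by the same person who wrote "
    "Slate Star Codex. Given that, does the author of SSC write any "
    "other public blogs?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; swap in whatever is current
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```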
I had a similar problem on like day 2 of ChatGPT's release, where I was asking it questions about Fringe Benefits Tax. If I was discussing FBT in Australia and then tried to ask it questions about FBT in New Zealand, it would act like FBT didn't exist in NZ, and similarly if I opened the conversation talking about FBT in New Zealand it couldn't transition to talking about FBT in Australia. I used the feedback mechanism embedded in the website, and when I tried to replicate it again just now, it no longer has that problem. So I guess the engineering solution is: provide feedback, then try again later?
The former is clearly one of the failsafes kicking in to block you, not the main neural net failing to connect its information. It's got the same phrasing as all the other "you tripped a failsafe" answers.
I’m just finishing “Don’t Sleep, There Are Snakes” by Daniel Everett and am curious as to how or if his dust up with Noam Chomsky played out. Is language inherent or an artifact?
My impression of current consensus (which is a few years old now) is that most people think Chomsky and Pinker overstated the extent to which the "language instinct" drives extremely specific behaviors, has immutable, domain-specific "modules" for particular tasks, and has particular grammatical features "hard coded." However, I think people still generally agree that humans do HAVE a language instinct, and that language isn't JUST an artifact or emergent feature of the rest of our cognition.
Would be interested to hear from people who have paid attention to this debate more recently.
That is a good summary of the consensus position, but as a linguist, I am also of the belief that the present position is the result of a general retreat and loss of ground. It is a position that will likely keep moving to the empiricist end as time goes on.
I’m not a linguist. I completed an undergrad minor in Linguistics with Russian as my foreign language when I got my CSci degree so I have a bit more background knowledge than someone who plays a linguist on television.
Kind of a language enthusiast though. Can you recommend any blogs or journals I can follow to keep up with things?
There's the well-known Language Log, which you may have come across already if you're interested in language and linguistics. To the right of the posts is a blogroll, click to unfurl. Somewhere in that long list, there might be blogs which offer updates to the Chomsky-Everett debate. https://languagelog.ldc.upenn.edu/nll/
Whenever I see "XBB" (the new covid-19 variant), the word my brain immediately jumps to is "Xibalba". Does anyone have other words that come to mind when they see "XBB"?
Tangentially, according to wikipedia which is always accurate, it's pronounced more like "sheeble-ba" (if you have an American accent), which is not only cooler to my ears, but also fits the rhyme scheme of both Blondie's "Rapture" and Titus Andromedon's "Peeno Noir". And both of those songs are now mashing up in my head to create a new earworm. Enjoy!
Actually, the ugly and unpronounceable 'XBB' just gives me an awful feeling that the modern world is so full of teensy mutually contingent details, so pointlessly intricate, that we have run out of names to give to things and have to use randomly generated alphabet sequences, sort of like the passwords my computer makes up for me now that it's no longer safe for me to use stuff like the names of beloved pets from long ago followed by my college graduation year. Here in the fucking bleak fucking complicated fucking more-stuff-than-meaning modern world we use alphanumerics. The units of life: unpronounceable jumbo randos. Gaaah.
It would probably take a platoon of English graduate students working full time to assign creative, pronounceable, non-trademarked and not-offensive-in-any-translation names to the several thousand dots on the far right.
And then people would end up confused, the way they ended up confused when everyone got to assign a nice meaningful name to new chemical compounds (like "barbituric acid" for my good friend Barbara), and would insist on systematic names, and then we would get "1,3-diazinane-2,4,6-trione" instead, which kind of brings us full circle in terms of euphony alas.
I think the way the covid virus evolves just ended up being rather intricate, and the naming system behind "XBB" is actually a rather decent attempt at labeling and tracking the numerous leaves in covid's family tree, so to speak. It does kind of reflect how bad we are at predicting how many covid variants we will get in the future and how severe they will be.
But uh, yeah, it would be nice if important things had catchier names. Like someone mentioned further down, hurricanes get human names, and this works out pretty well - we reliably get a few dozen hurricanes each year and the formation of a hurricane isn't to be blamed on any one particular country. Alphanumeric naming conventions should be left for specialists and computers.
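For what it's worth, generating pronounceable hurricane-style labels is the trivial part; here's a throwaway sketch (the syllable inventory is an arbitrary choice, and it does nothing about trademarks or cross-language offensiveness, which is the actual hard part):

```python
import random

# Toy pronounceable-name generator: consonant-vowel syllables strung together.
CONSONANTS = "bdfgklmnprstvz"
VOWELS = "aeiou"

def variant_name(rng: random.Random, syllables: int = 3) -> str:
    return "".join(
        rng.choice(CONSONANTS) + rng.choice(VOWELS) for _ in range(syllables)
    ).capitalize()

rng = random.Random(42)  # seeded so the output is reproducible
print([variant_name(rng) for _ in range(5)])
```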
Having read up on astronomy for a while, I think I have become desensitized to scientific things of interest getting labeled with long strings of letters and numbers.
Is every single word in the English language offensive now? I didn't get the memo from the wokeists! What's their identity-politics-based objection to "Kraken"?
And yes, I am being just the tiniest bit sarcastic, why do you ask? Let's just nickname stuff reasonably neutrally and not worry so much. I don't know; that may be easy for me to say, given I'm not on social media at all, but... I just really feel like saying, tonight, if social media and the pointlessly intricate life of the digital world are depressing (and yeah, I see Eremoiaios' point, they kinda are)--there are still other things in life. All the old things still exist despite our ignoring them. Digging dirt and watching seeds germinate is a great cure for that feeling, or better yet, tanning buckskin. It's laborious and satisfying and involves no pointlessly intricate details at all, and the utterly weird way the stuff behaves during the stretching phase is great fun.
I expect this digression may seem extremely irrelevant to you, it's just... it's very pleasant being irrelevant sometimes. This is a weird place to recommend it, but I do.
There is an old (1963) Czechoslovak SF movie called Ikarie XB-1, based on a novel by the Polish author Stanislaw Lem. Now I see it has a dubbed English version under the awful title "Voyage to the End of the Universe".
As a side note, did the powers-that-be decide to stop naming variants after Greek letters because they were up to pi and knew people would have too much fun with "hurr durr pi variant" jokes?
It might have to do with how the virus was evolving. Omicron was really different from Delta, and also didn't descend from Delta (the two have a relatively old common ancestor), and quickly outcompeted Delta. But ever since, we've been just getting various descendants of Omicron, none of which were so different and more infectious as to make the WHO use another Greek letter. Although maybe giving XBB or XBB.1.5 a new Greek letter might be nice in that we wouldn't all have to deal with long sequences of letters and numbers.
I'm willing to take those in the field at their word that omicron subvariants aren't different enough to rate a new Greek letter, and I'm not a huge fan of the informal mythical monster nomenclature I've seen. (Calling the latest variant "the Kraken" is conclusory and not IMHO helpful.) But I think that given that there's going to be popular coverage of subvariants, some neutral hurricane name-style option would be helpful to make it easier to write about them.
Evidence suggests not really, given how little public attention the name or the subvariant has evoked. It's gotten about as much or maybe less reporting than the last few dominant subvariants, and has provoked no particular public response beyond the standard exhortations to vaccinate, boost and maybe mask if you feel like it. (Ventilation? What's that?) And vaccine requirements have been scaled back, if anything.
The main increased public response to Covid in recent months in the US at least has been requiring testing for Chinese travelers, which is orthogonal to XBB.1.5 (and kind of pointless for anything else given existing prevalence).
And I guess the feds are sending out four more free tests per household (whoo!), but that seems more to be about the general increase in cases, hospitalizations, and deaths during the early winter's variant soup, rather than being sparked by "the Kraken".
Maybe it's getting clicks somewhere. But those clicks don't seem to be translating to much visible effect that I can see. It's possible I'm not reading the right sources.
I think they're thinking something along the lines of "if we piss off the CCP, it'll stop letting us into the country *at all* and that would cause issues in our other projects, and/or it'll retaliate against innocent WHO personnel currently in China because they have a history of doing that".
Whether this actually holds up in terms of the maths is questionable, but they wouldn't be the first people to get suckered by China and then succumb to sunk-cost.
I am genuinely curious. Maybe even more generally: where on the evolutionary tree does the "asymmetric heart position" first appear? As soon as the heart itself? Why?
By the time of the jawless fish, the heart was already working on the "blood comes in one side and goes out the other" plan. I'm not sure it was asymmetrically oriented along the body axis yet, but I suspect that started soon after.
Thought experiment based on the general Georgist discussion that sometimes occur here: I'm the mayor of a small village where land is plentiful. I convince a million people to move here, and they each get to buy a plot of land for cheap. We pool our resources and build a city. Our city becomes a nice city and more people want to move in. Land prices go up a lot. Everyone originally involved becomes a millionaire. It's all like a pyramid scheme except the pyramid is real.
Has something like this ever been tried? I assume that the hard part is "convince a million people to move here", but shouldn't we have people or institutions with that kind of pull? Or is the hard part "become a nice city that attracts people"? That doesn't look very hard, most cities seem to be hardly trying.
(Yes, I know my thought experiment is totally unrealistic, I just want to discuss the principle.)
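To put toy numbers on the pyramid: how long it takes a cheap plot to be worth a million depends entirely on the appreciation rate you assume. A sketch with made-up figures (a $1,000 plot, a $1,000,000 target):

```python
import math

# Years for a plot bought at `price` to reach `target` at a constant
# annual appreciation rate. All numbers are illustrative.
def years_to_target(price: float, target: float, rate: float) -> float:
    return math.log(target / price) / math.log(1.0 + rate)

for rate in (0.05, 0.10, 0.15):
    print(f"{rate:.0%}/yr: {years_to_target(1_000, 1_000_000, rate):.0f} years")
```

Even at a very aggressive 15%/yr, the thousand-fold gain takes about fifty years, which is roughly the timescale of the Chicago boom discussed below.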
The problem with your thought experiment is the implicit assumption that a city is a collection of buildings and physical infrastructure. In reality, the value of a city is mostly in its people and its social and institutional capital. This is why a city losing its people will be in inexorable decline despite little physical change (e.g. Detroit in the late 20th century was not suffering from earthquakes destroying its buildings).
The problem is not finding a million people to put up capital to construct a bunch of high rises. The problem is turning a million people into an attractive community.
I think the key would be some coherent reason for so many people to choose there, instead of somewhere else. Other commenters mentioned Chicago (railway hub) and Dubai (oil).
But really, every city that has succeeded can be looked at as doing exactly what you are talking about, with different levels of intentionality and different levels of success (including how long it took to grow). If you look at NYC or SF right now, there are people buying and holding land, or developing it, for future purposes. That's really what you're talking about, minus the very short timeframe from your hypothetical (which still works for a number of major cities, including rebuilt ones like Tokyo).
But people in NYC or SF are not coordinating to buy land. And the coherent reason to choose this city instead of an already existing place is that the already existing place is expensive.
"Expensive" is always relative. A city may be considered "very expensive" at a particular place in time, then go through a massive upswing and prices increase by some large factor. Does that mean it was or wasn't "expensive" prior to the upswing? All a matter of perspective. I am arguing that a sufficiently large incentive to move to a place overrides the "it's too expensive" reasoning and could be reason for coordination. If significant amounts of oil (or some other valuable natural resource) were found in or around an existing major city, that city may see significant growth in the way you are describing. Similarly, SF became a major tech hub and already high prices were able to skyrocket in a way that could be called coordinated.
I agree that there's a difference. I don't think the difference is a difference in kind as much as level. I think most cities did what you are describing to some extent, and a few (Dubai and Chicago again) doing almost exactly what you are describing.
My original point, looking at those examples, is that you just need a compelling reason to choose that particular city over alternative options, even knowing that second-tier adopters are going to be more than first-tier, and third-tier will pay more again, etc.
If you're talking about a literal one million people are all first-adopters, I think that's a logistical problem that's far too large for our planning and construction industries to handle.
Chicago in 1830 was a grubby frontier outpost of fewer than a thousand people; in 1837 its leaders optimistically declared it to be a city. They were clear (and loud, and insistent; in fact they famously would not shut up) about the goal of becoming the "next great metropolis", etc. "Come on over and get rich" was absolutely the collective civic concept for decades to come.
In the 1840 census Chicago had 4,500 residents, ten years later it had nearly 30,000 which put it 24th in a nation of 24 million people.
Then starting with the 1860 census the population growth went this way (I'm rounding the totals):
1860: 112,000 (9th in the US)
1870: 300,000 (5th)
(October 1871 the entire central district burns to the ground, is entirely rebuilt by spring 1873)
1880: 503,000 (4th)
1890: 1,100,000 (2nd, stayed there until 1990)
1900: 1,700,000
1910: 2,186,000
1920: 2,702,000
1930: 3,376,000
Lots of people did get rich from Chicago's frantic growth, some of them legally and plenty of them not. People began visiting the place basically to see what was to that point the biggest fastest building of a huge modern city that had ever been even imagined. By 1900 there was a whole published literature in the U.S. and Europe of books and articles (some of them from famous writers), basically marveling at how some log cabins along a middling river running through a swamp had turned into CHICAGO -- complete with mass transit and symphonies and museums and world-class banks and half the nation's resident multi-millionaires and a whole modern developed-nation infrastructure -- in less than a single human lifetime.
(This also had some far-reaching and novel economic impacts way beyond that place, I recommend "Nature's Metropolis" by William Cronon about that part.)
That might be the example to date closest to what you're describing, at least other than ones driven by a particular extractive resource such as nearby oil fields.
Chicago was the success story, but I think that sort of ambition was pretty common. Metropolis, Illinois was founded close to the same time with pretty much the same aim (as the name suggests) and the same vision of being a transport hub, and, well, now its main claim to fame outside its county is trading off the fictional city Siegel and Shuster put Superman in a century later.
St. Louis ("Gateway to the West") also vied with Chicago for that role. There are various reasons given that it failed (e.g., eastern businesses being reluctant to put a railroad hub in a slave state, Illinois beating them to a bridge over the Mississippi), but my sense is that it's not a completely implausible alternate history.
I mean, Chicago didn't have oil fields, but they had a great location for building a railroad hub. From a brief look at the history it's not clear whether they envisioned from the beginning that they should become the great freight gateway to the West; the first railroad charter in 1837 was just for a line to the lead mines at Galena (although at over 150 miles that was still an ambitious scale, for the time). But with railworkers and access to shipping already in place they had all the ingredients to take advantage of the massive demand created by westward expansion.
So, anyway ... while I expect the founders' ambition and optimism helped, the explosion was still very contingent on other circumstances. The city didn't rise to great wealth as some self-contained economic engine.
Sure, that's why I said it is probably the _closest_ real-life example. I don't think the hypothetical exactly as stated is anything but a thought experiment. Cities do not grow to serious sizes in isolation, never have.
And as an aside, yes, the 1830s Chicago founders absolutely did envision from the beginning that it would become the great freight gateway to the West. So did the groups of men who founded Milwaukee, and who laid out literally a half-dozen different villages in Northwest Indiana, most of which died out, and some other places in that part of the world in that general timeframe. All of those groups initially thought in terms of water transport, specifically in Chicago's case building a new thing called the Illinois & Michigan Canal.
That canal did get finished and its construction did jump-start the new city and it did operate for some decades....but in the big picture it was by the time of its 1848 opening already obsolete. That first railroad company chartered in 1836 didn't run its first train until 1848 and didn't reach beyond the borders of Cook County until the early 1850s...but the new way of things was clear to everybody by then. Chicago pivoted to the new transport technology and by the time of the Civil War was the greatest rail center in the country if not the world.
Having grown up in pre-air-conditioning Texas, I have long thought that a key part of their strategy had to be to get settlers to move in the six months from fall to spring, so that, by the time summer hit, it was too late to get back to Tennessee. Anyone trying to come here in July would have turned around. Or maybe they were all crazy, like William Barret “we outrange the Mexicans so let’s get ourselves pinned down in the Alamo instead of using guerilla warfare” Travis.
I'm just old enough to have spent weeks every summer as a child visiting grandparents in Kansas and Oklahoma before AC was completely ubiquitous and effective. Middle-class homes had room AC units which struggled to keep up with the local August climate, car ACs the same and not every car had it, etc. In Oklahoma during the summer absolutely nobody went outside by choice during the day. And that was the 1970s climate not today's!
This is basically Dubai, right? But they have oil. Maybe the problem with this scheme is that a random group of immigrants from all over the world lacks the social cohesion necessary to build a successful economy capable of producing goods which can be traded with the outside world and thus pay for necessary imports. If there is an oil deposit nearby, problem solved.
If this were to be done, it would likely need to be done by a group that already has high social cohesion. Mormons, maybe? But the Mormon church already has Salt Lake City, and I guess they can just continue to grow it instead.
> The graduate student was inspired to try putting the rats on a diet of “palatable supermarket food”; not only Froot Loops, but foods like Doritos, pork rinds, and wedding cake. Today, researchers call these “cafeteria diets”.
> When you give a rat a high-fat diet, it eats the right amount and then stops eating, and maintains a healthy weight. But when you give a rat the cafeteria diet, it just keeps eating, and quickly becomes overweight.
It seems like we could learn a lot from this. Seems like we should be able to binary search: try this with 50 foods and track which ones cause weight gain. Then look at the ingredient lists and see which ingredient is causing the weight gain. Then... we'd just know what causes obesity?
Even if the search doesn't uncover an exact culprit, I think a lot of people would get value from just having an index that maps from common foods to How Fat Rats Get if you give it to them.
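If anyone wanted to sketch what the analysis step of that index might look like, here's a toy version in Python. To be clear, every food, ingredient, and number below is invented for illustration; a real screen would need controls and replication, and (per the SMTM reply below) could still miss contaminant-level variation within a single named ingredient:

```python
from collections import defaultdict

# Toy version of the proposed screen: one food per rat group, record weight
# gain, then score each ingredient by the average gain of foods containing it.
# All foods, ingredients, and numbers here are hypothetical.
weight_gain = {  # grams gained over the trial
    "froot_loops": 120, "doritos": 95, "pork_rinds": 60,
    "wedding_cake": 130, "plain_chow": 5,
}
ingredients = {
    "froot_loops": {"sugar", "corn_flour", "dye"},
    "doritos": {"corn_flour", "vegetable_oil", "salt"},
    "pork_rinds": {"pork", "salt", "vegetable_oil"},
    "wedding_cake": {"sugar", "wheat_flour", "butter"},
    "plain_chow": {"grain_meal"},
}

totals, counts = defaultdict(float), defaultdict(int)
for food, gain in weight_gain.items():
    for ing in ingredients[food]:
        totals[ing] += gain
        counts[ing] += 1

# Rank ingredients by the mean weight gain of the foods that contain them.
for ing in sorted(totals, key=lambda i: totals[i] / counts[i], reverse=True):
    print(f"{ing}: {totals[ing] / counts[ing]:.0f} g average gain")
```

Even the toy version shows the obvious confound: "sugar" scores badly here only because the sugary foods happen to also be the tasty, calorie-dense ones.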
Consider the origin of the word Rubenesque. How about the Venus of Hohle Fels (40,000 BCE)? I am not sure that she was pregnant - I think she was just fat.
From Shakespeare's seven ages of man, the 5th age:
“the justice,
In fair round belly, with good capon lin’d,
With eyes severe, and beard of formal cut,
Full of wise saws, and modern instances”
Individual obesity is not a modern thing. Gluttony and sloth have been around for a very long time.
Average population obesity is the modern thing. Obesity comes from the easy availability of food and the lack of calorie-burning (physical work). Modernity has created that potential circumstance for more and more people.
We live in an age of plenty and increasing mechanization which has reduced the necessary toil. It is not really the "kinds" of foods; it is the easy availability of calories and the lack of required strenuous activity. What you probably need is a sedentary rat.
Perhaps it is not the ingredients per se but the calories per volume. I'd think that a rat can only eat so much volume.
SMTM discusses many of the issues around this (look at their articles on scurvy, for example), but I think they would agree that more research in this area is desperately needed. Part of the issue is that if the effect is from a contaminant, or several contaminants, it would likely be present at different levels in, for example, different lettuces grown in different places at different times, and might get accidentally added during the processing of some foods, so just looking at "lettuce", "wheat flour", "eggs", etc., would likely miss this.
SMTM is strangely respected in this community, but I have found the articles I have read from them (admittedly relatively few) to be very unimpressive. They misinterpret or misrepresent the research they present, and offer no real support for their contamination hypothesis, while bizarrely ignoring that obesity correlates very well with increased calorie consumption, which in turn is due to the availability of cheaper and tastier food.
Is it strangely respected? I mostly see people responding by asserting, as if they are killer facts that SMTM was unaware of or omits, ideas and arguments that SMTM discusses at some length. I think you really do have to go on a deep dive of at least reading the many “A Chemical Hunger” posts (or read a good chunk of their posts in the last year, for example) to see the extent to which a) there looks like there’s something there to be considered and b) they are fully committed to approaching this stuff as actual science with all the uncertainty that involves.
In the articles I've read, they misinterpret or purposely distort (I have no way of distinguishing between the two, of course) the research they present, and not some of the details but some of the central points of the articles. I don't think they can be trusted to present scientific results.
My impression is they get way too much respect, yes. They've been funded with a moderately-large-by-EA-standards grant and were initially feted as important by Yudkowsky, to offer a couple of examples.
They disagree that we have good data on whether people today consume more calories than people did before 1960. I personally take issue with your claim that increased calorie consumption "is due to the availability of cheaper and tastier food". That's certainly a decent hypothesis, but far from proven. Assuming people do eat more calories than they used to: how do you know it's due to food price and taste? How do you know that food is tastier today than it was in 1940? How would you even demonstrate the truth of such a claim?
> How do you know that food is tastier today than it was in 1940? How would you even demonstrate the truth of such a claim?
I mean, the 1940s weren't that long ago. We have photographs of what people ate, we have cafeteria menus and shopping lists, we have recipe books. We know what people ate (in western countries) back then, and it tended to be a bit plainer.
In my understanding people in 1940 ate cake, donuts, pies, cookies, white bread, pancakes, waffles, syrup, biscuits, jam, cheese, sausages, bacon, ham, beef, chicken, and mashed potatoes heaped with butter and gravy. Not to mention that a lot of it was fried in lard.
Now that's just my understanding: I don't have any data to back that up, just anecdotes and old cookbooks. Do you have a better source to point me at that will indicate differently?
Maybe some upper-to-middle class people in the US (one of the richest countries in the world at the time) ate all that. There were chubby people in the 40s and before (a mental picture including Taft and Churchill emerges).
Here in Europe, my grandmother certainly did not eat donuts and cookies and jam every day in the 40s or any other decade, even if we ignore WW2, when large parts of the population were coincidentally malnourished. Consider her eating habits in the 90s-00s, assuming they were unchanged from her childhood in the 1930s (probably not quite true, because availability had improved and prices had plummeted): mashed potatoes with butter and gravy ... certainly, for dinner. Meat could be included, but not every day. A glass of milk (they had a cow), but I am less certain they would get cheese. For breakfast, oatmeal and bread and-or eggs. For lunch, some kind of soup. For supper, oatmeal and bread. Such a diet gets quite boring quite fast; you won't eat large portions of it for fun.
I like the question you are asking. I will note that in the 1930s people ate fruit for dessert, considered it a treat, and couldn't always afford it. I don't think these people could, generally speaking, afford chocolate whip cream top hats every day. Also, the innovation in baked goods over the last 80 years has been quite extreme. There is an unbelievable number of chemicals in baked goods today, again generally speaking.
Heck, I went ahead and Googled 1940s diner menus and found this one from 1949. It features a ham and relish sandwich on white bread, chicken salad sandwich on white bread, beef bologna and swiss on rye with russian dressing, hamburger steak on a bun, all with a side of french fried potatoes. For dessert we have chocolate whip cream top hats, 5 kinds of pie, cheese cake, and chocolate layer cake (as well as "Assorted danish pastry"). Looks plenty tasty to me!
People are incurious because we already know the answer. As mentioned by Julian below, what is difficult is to find how we could eat reasonably when cheap abundant tasty food is widely available, not why eating all this food makes us fat.
I suspect and/or vaguely-remember part of the issue (which may turn up as a confounder in the study as described if naively implemented) is that *variety* itself causes larger portion sizes, because becoming sated with one type of food doesn't necessarily translate to becoming sated with food in general.
Though I don't imagine a diet of "chips which taste good but *never* make you feel full" would help on that front either.
> *variety* itself causes larger portion sizes, because becoming sated with one type of food doesn't necessarily translate to becoming sated with food in general.
Right. I'm sure many of us are familiar with the idea of "there's always room for dessert" or a metaphorical "second stomach for dessert".
It seems that high-fat foods like crisps or high-sugar foods like sweetened drinks can be problematic, but the combination of the two is certainly worse!
There is no obesity mystery: nature has selected us to crave the high-calorie/sugar/fat/salt foods that were both scarce and very valuable in our previous natural environments. Now that these foods have become very abundant, we still like them very much and overeat.
Yeah, but (in the industrialized world) "high calorie/sugar/fat/salt food" has been abundant for a century or so, yet the growth of obesity has been more nearly on a timescale of the last half century or so. There has to be more to the story.
I'm not sure that's true for everybody. Maybe for the upper-middle to upper First World class, sure. But even in my own lower middle-class youth, a mere 50 years ago, we never got store-bought sweets except on very special occasions. When we had a dessert or snack at all, it's because my mother baked it herself. The only time we ever drank soda was on the very rare occasion that we went out to eat.
Ok, I should clarify: I'm not so much talking about store-bought sweets but rather about flour, sugar, butter, and salt. Yes, they have to be combined and baked to e.g. make shortbread cookies, but the ingredients themselves were (in industrialized nations) rather abundant by historical standards by 1922.
Sure, by historical standards. And that's why the people of 1922 were taller and stronger than the people of 1822. I'm just observing that within my lifetime the broad trend has continued: a falling relative real price of food, particularly sweet and refined ingredients, and commercially-made sweets that don't require anyone to invest two hours of personal labor (in addition to the cost of the ingredients) to make them. And since we topped out circa 1970 in the height to be gained via better nutrition[1], maybe now we just gain weight, because of the still greater cornucopia of tasty sweet food ready to our hand almost instantly. Heck, these days we don't even have to get in the car; DoorDash or whatever will deliver it within minutes of our phoning it in.
I'm not ruling out other and multiple explanations, but I also think we can't easily dismiss the simple hypothesis of easier, faster, and cheaper access to tons of sweet foods, because I don't think that trend stopped 150 years ago, even in the First World.
There's a colossal difference between "have raw ingredients to make similar food" and "have a ready-to-eat product with high amounts of those ingredients, pre packaged in plastic and non-perishable, and probably cheaper than whatever lower-calorie options you were looking at"
True, people in 1922 probably could have eaten cake every day if they'd really wanted to. It was social shaming which kept that temptation in check.
At some point the social shame against eating cake every day wore off. People started making small cakes and declaring them to be "muffins" and deciding they were now a breakfast food (muffins have been around for centuries but not in the modern American cake-like form).
Then the muffins got sweeter, and they got bigger. Meanwhile the one little shortbread biscuit that you might have had with tea slowly turned into a chocolate-with-chocolate chips cookie the size of your head. In 2023 you can barely buy ice cream which doesn't have solid chunks of something even sweeter distributed through it. Everything has got sweeter, and sweets have become more socially acceptable.
In Wilkie Collins' "The Woman In White" (1860) the villain is Count Fosco, an Italian nobleman with a taste for sweets. His sweet tooth is portrayed as disgusting, unmanly, creepy -- sweets are for women and children, not for men. But his actual consumption would be nothing remarkable these days.
Maybe. Or maybe there is some sort of contaminant that changed our propensity to overeat. Or maybe our microbiome changed. I'm merely noting that (relatively) cheap flour, butter, and sugar were available early enough that I doubt the evolved attractiveness of calorie-dense foods is the whole story.
"Yeah, but (in the industrialized world) "high calorie/sugar/fat/salt food" has been abundant for a century or so".
My impression (sorry, I'm too lazy to look up data!) is that abundant and cheap junk food is much more recent than that, and that until a few decades ago, eating at fast food restaurants was, for example, too expensive to be very common. The percentage of a family's budget spent on food has declined sharply over the 20th century.
Yes this seems like it could be very helpful. Though my guess is it would just track pretty closely "how tasty is X" compared to "how full of calories is X".
People like to talk about "fillingness" or whatever, and that is I suspect a small part of the story.
I have always suspected the main story is simply that a cookie/brownie/cake whatever tastes fucking delicious and is also chock full of calories. Yes you can sort of train yourself into a mental space where you find broccoli delicious. But it is not a fucking cookie and to claim so is silly. And also one of the main ways people make things like broccoli (which I like) delicious is by smothering it in butter/salt/sugar so it is more like a cookie.
> Yes this seems like it could be very helpful. Though my guess is it would just track pretty closely "how tasty is X" compared to "how full of calories is X".
Totally agree, with the caveat that "tasty" and "full of calories" are associated, not by coincidence.
Anecdata: When I ate sugar every day, I craved sugar every day. Once I decided to stop, there was a week or so of cravings, then I no longer felt like eating sugary things.
I think that sugar has characteristics that make it more like an addiction than it is like an actual desire. Once your palate stops being accustomed to overly sweet stuff it really doesn't taste that great.
Same thing for me but on a shorter time scale: I do not really like sugar but I crave it if I start eating it. For example I almost never eat candy but in the rare cases where I eat one I am very tempted to eat one after another until the box is empty.
There was a nice discussion of that in one of the Huberman Lab podcasts a few months ago (at 45 min):
Does anyone know of a meta-analysis or a large study of the causal effect of privatization on firm productivity? Preferably using an experimental/quasi-experimental design or regression discontinuity.
Is it the case that they "don't get schizophrenia", or just that it's not diagnosed?
I think there are good arguments to be made that it's irresponsible to diagnose children with things when there's so much noise - they could be considered to have symptoms that point towards all kinds of diagnoses... but sometimes kids are just weird, right?
It should be relatively easy to test this hypothesis by looking at young adults with clearly diagnosed schizophrenia and checking when their major symptoms started to appear (as opposed to the time of diagnosis) - I'm not an expert on this, but as far as I know, for the people with various significant symptoms they seem to start only after childhood, not earlier than their teen years.
A caveat that this is not a specific subject that I'm well-read in, but this is the approximate area I'm educated in. Of the ideas bandied about I'm familiar with, the most compelling one is that schizophrenia is often a disorder of abnormal neuronal connectivity that onsets typically during late adolescence because that is when salient brain development is happening that makes it the high risk window. That's just a high level explanation, though. There are more specific hypotheses that try to connect what those anatomical abnormalities might be and how they relate to known genetic predispositions.
Psychosis not caused by neurological disease/deterioration or brain injury is also extremely rare in first onset once a person hits early middle age, so I'd reframe your question to ask why psychosis has a developmental risk window during the human lifespan.
My guess: Schizophrenia appeared, at least frequently, with the Modern Age. Children, not being fully socialized/indoctrinated, are less affected. Similar to an explanation of childhood innocence: that they lack the knowledge and experience adults have accumulated.
"Childhood schizophrenia" used to be a common diagnosis. It somewhat notoriously was an original diagnostic landing spot for people we now would label as just having an autism spectrum disorder. I work primarily with people who have cognitive disabilities and mental health challenges. A sizeable % of my older clients have a "childhood schizophrenia" line in their diagnostic history.
It's rare now and generally only comes up in unusual cases of children having a degenerative issue, but this is a relatively new thing in the larger scope of psychiatric labeling. So if diagnostic awareness was driving things, the culture has moved away from it, not toward it.
That's interesting. It's curious how words change meaning over time and how, uh, 'atypical mental conditions' evolve over time - as do their definitions/labels (as we learn more and as new generations, with new perspectives, diagnose them).
Still, my speculation that the Modern Age caused or exacerbated schizophrenia (and by association autism) still stands. This is based on my incomplete reading of 'Madness and Modernism', Sass (1992), and my incomplete understanding of Iain McGilchrist's '08 and '22 publications. : )
Probably because some of the dopaminergic tracts aren’t fully developed until 18-25. Another theory is that there is a “second hit” that has to happen in one’s environment for symptom onset to occur (large amount of stress, shock etc) and so kids just don’t develop symptoms because they have more protective factors on average. Another possible theory is the gradual decrease in neuroplasticity tracking in an inverse proportion to “brain maturation” as the possible second hit. Just some theories I have.
A psychotic disorder is a more complex suite of symptoms. People rarely present exactly in the textbook ways people read about - except in those rare cases they do and you're shocked someone is so textbook - but if all an adult had was a particularly vivid imaginary friend that typically wouldn't cause them to meet criteria. Heck, it might just mean they're religious. Sensing disembodied personas isn't unto itself what schizophrenia (or psychotic disorders more generally) is.
Yeah, one thought would be that schizo behaviors in younger people are less problematic because they have less power and ability to commit violence, because they have 1 and often 2 full-time minders to help them, and because younger people are generally taken less seriously (for good reasons).
My six year old spent a whole year saying his best friend's various relatives did just about anything you could imagine.
You see a jockey: “Bobby Jones's uncle is a jockey.” You see a hotel under construction: “Bobby Jones's grandpa once worked on a hotel construction site.” Now probably once in a while these were lies his friend told him. But most of the time it was just his way of relating to the world and novel stimuli when he didn't have a better response. In a six year old, who is going to care?
But if he is doing it at 22 it is probably a sign there is a problem.
Cultural appropriateness is part of making a diagnosis. If someone engages in ritualistic cannibalism of a man-God whose voice they occasionally hear, they're probably not psychotic. Chances are they're just Catholic. Part of this is that if your surrounding culture is Ok with a set of behavioral/cognitive traits, they're going to be less apt to harm your ability to function, which is a necessary part of what it means to have a mental disorder. But the bigger part is there's some underlying etiology to these disorders that can occasionally look like something that happens in a local culture, but is conceptually distinct from it. And that's probably what is going on with a child's imaginary friend as a typical part of childhood development expressed through their local cultural experience vs. the kind of hearing voices/has delusional beliefs scenario you are likening it to.
Also, last week I asked a question about consciousness and free will and it generated a lot of discussion but it also seemed like several people were confused. Superb Owl wrote an interesting post on the same topic
but it didn’t really line up with my individual thinking on the topic. Anyway this time I’d just like to ask, how many people here think the hard problem of consciousness is actually Hard? Has Scott ever written about it? I think that’s where a lot of the divergence came from.
I have a hard time trying to understand what’s supposed to be “hard” about it. The best explanation I’ve got so far is: it’s like life is everyone playing the same video game on their own device and you can interact with everyone else in the game, but you can never ever see anyone else’s screen. Still not sure why that makes it a hard problem.
Why lump free will and consciousness together? In my view the former is just one big hopeless confusion around not-even-wrong unexamined definitions while the latter is one of the most interesting questions of our time.
I like Eliezer's post Zombies Redacted, which is a reworking of Zombies? Zombies!, to which apparently Scott has written a related post (https://www.lesswrong.com/posts/Fy2b55mLtghd4fQpx/the-zombie-preacher-of-somerset) although it does not really seem to engage with the hard problem. If there are others I'd also like to know of them.
As for your question, I don't think the "hard problem" is hard in Chalmers' sense, for reasons similar to those expounded in Zombies Redacted (i.e. even people's assertions and theorizings about consciousness have a physical manifestation in the brain and should thus have a physical origin).
The hard problem is the problem of reductively explaining qualia, i.e. of explaining in a detailed way how and why particular qualia are produced by particular physical behaviour. Asserting that it must be physical, somehow, is not solving it in that sense.
1) Is the problem of consciousness hard in the Chalmerian sense?
2) What is the solution to the problem of consciousness?
I was making an argument for answering 1) in the negative. Your retort that my argument does not answer 2) does not necessarily invalidate it as an answer to 1).
Of course a complete answer to 2) would thereby solve 1), but it's also fair to take a shortcut and try to answer 1) directly. This is what the zombie argument does as well, though I find it unconvincing for the reason I've given (and I'm not aware of any zombie-ists attacking this reply head-on).
Like I said in my original post: "even people's assertions and theorizings about consciousness have a physical manifestation in the brain and should thus have a physical origin". I think this is a deep argument and people who claim the hard problem is capital-H Hard don't engage with it to an extent I find satisfying.
Speaking of non-engagement, Yudkowsky doesn't seem to engage much with Chalmers's attempted correction, and no-one engages much with the anonymous user who attempts to clarify Chalmers's views and gets downvoted to -4.
"Like I said in my original post: "even people's assertions and theorizings about consciousness have a physical manifestation in the brain and should thus have a physical origin". I think this is a deep argument and people who claim the hard problem is capital-H Hard don't engage with it to an extent I find satisfying."
What is engaging with it supposed to tell them? It doesn't make the hard problem -- the problem of saying how consciousness is physical -- easy. (In fact, you previously stepped back from the idea that reports-of-consciousness-are-caused-by-consciousness solves the HP... although now you seem to be embracing it again).
It also doesn't select a unique answer to the mind-body problem: it's compatible with identity theory, mysterianism, interactive dualism, etc. The only thing it rules out is epiphenomenalism.
EY's reply only discusses zombies and epiphenomenalism. That might be a point against two of Chalmers' various claims, since he seems to believe in both... but it still isn't a point against the hardness of the hard problem, the thing we are discussing. It's possible for Chalmers to be wrong about zombies and epiphenomenalism, and right about the HP, since it is ideas that are right and wrong, not people. Or so I think. Do you think the HP vanishes without zombies, or something?
>how many people here think the hard problem of consciousness is actually Hard?
I tend to think that the solution to it is what could be more or less called panpsychism, specifically the idea that consciousness is simply what it "feels like" for a given piece of matter to exist, and self-awareness (which by this theory would be a distinct subset of consciousness rather than synonymous with it) is what happens when conscious matter becomes interconnected in a complex enough way to become self-reflective. The reason I tend towards this answer: consciousness obviously exists, in the sense that subjective, experienced qualia exist (this being really the only thing we can be absolutely sure of, because it is the only thing that is subjectively and directly experienced by us). And yet our knowledge of the external world seems to show that there is very likely nothing beyond matter that is tied to it in such a way as to create consciousness in the way we experience it. So the only solution left is consciousness being an innate quality of matter itself. This seems counterintuitive to most people, since I think we all have an innate bias towards some sort of mind-body dualism. But when you consider that existing matter must have some way it objectively *is*, it makes you wonder why there can't be some inherent way it subjectively *is* as well, in terms of how it is existing at that moment in space and time. So, as said, consciousness is really just the way that existence "feels" for everything that exists.
This theory implies that mountains are, in fact, conscious, if on a very primitive level (again, it differentiates between consciousness and self-awareness). Objecting to the claim that all matter is conscious by asking "why isn't x matter conscious" is begging the question, wouldn't you agree?
I'm asking why mountains aren't the most conscious, being so big. Why wouldn't they be self-aware? All that's happened here is that the hard problem of consciousness has been moved to the hard problem of self-awareness.
And anyway, brains are clearly related to consciousness, which is why blows or shots to the brain kill but blows to the legs do not. And we can measure brainwaves, etc.
That would still leave you with some kind of interaction problem, no? At some point in the causal chain leading to us writing these comments about consciousness, the "innate way existence feels like" has to be expressed in terms of neuron firings and such. But we know that neuron firings already have a complete explanation in physics. So where/how is the interaction between the "innate quality of matter itself" and the physical neuron firings happening, without violating the causal closure of the physical world?
Interesting. Not too different from how I think about it, but I have always conceptualized it as materialism, not panpsychism.
Let us assume qualia are an innate quality of matter. However, most of how humans experience the world is ... mediated by ... the function of our nervous system, not bare qualia. The qualia of seeing a color or tasting or anything require sensory organs -- a piece of matter like a rock does not possess them, so the innate qualia all matter has must be quite different (I wanted to write "much simpler", but maybe that is in the eye of the beholder).
I would agree, interestingly this implies that what we experience as "redness," for example, actually has very little to do with what the subjective self-experience of a "red" object or a red-wavelength photon is. Instead it corresponds with how it feels to be a specific electrical pattern in a neural interface, which has nothing in itself that might physically be described as "red" from the outside, but which has become habitually triggered by this otherwise unrelated outside stimulus because it happened to be the pattern of material being experienced as such that, as a cognitive pattern that benefits survival, was most convenient to be triggered by external redness. Essentially, the cognitive equivalent of "the map is not the territory," which is an uncontroversial idea in cognitive sciences, but acquires this additional interesting aspect when we consider the territory itself having the same capability of subjective experience (if not self-reflexivity) as the map.
I think it is a pretty hard problem. After all we haven't solved it yet! That said I don't think it is insurmountable, and I am definitely not a dualist.
FWIW, I suspect the "hard problem of consciousness" is really the lack of a proper definition. Without a proper definition it can't be solved. With the definition that I prefer, it's just the mind observing itself, and self-consciousness is the mind observing itself observing itself. But that only works with *some* definitions of consciousness.
There's no hope of finding a single definition that captures all the concerns and issues.
A lot of work has already been done on splitting the problem of consciousness into sub-problems relating to sub-definitions: sense-of-self, higher-order thought, access and phenomenal consciousness, etc.
That doesn't dissolve the hard problem: rather, the hard problem emerges out of it. The HP relates to qualia/phenomenality specifically.
Narrowing the definition to self-observation leaves unresolved issues...you just have to call them something else.
Regarding therapists: my wife and I had really good luck using Alma. The basic gist is that you put in your insurance and the type of therapy you are interested in and then you interview the pre-sorted potential therapists until you find a fit. I interviewed three and found a great fit with a therapist that focuses on rationality based CBT and IFS (Internal Family Systems). The matching with someone who definitely takes your insurance is pretty remarkable. I’m not sure how widespread this is, but in NYC there were many many options to choose from.
If you use your own judgement to assign p and take the market's implied b, it's pretty straightforward as long as your bet is too small to move the market; the math would be more complicated if you were pushing the odds around, and I don't feel like working it out explicitly right now.
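For concreteness, here is a minimal sketch of that "straightforward" case in Python, assuming a binary contract where the market price stands in for the implied odds. The function name and the no-price-impact simplification are mine, not anything from the thread:

```python
def kelly_fraction(p: float, market_price: float) -> float:
    """Kelly-optimal fraction of bankroll to stake on a binary contract.

    p            -- your own probability that the contract resolves YES
    market_price -- cost of a contract paying 1 if YES (the market's implied p)
    Assumes your bet is too small to move the market price.
    """
    b = (1.0 - market_price) / market_price  # net odds received per unit staked
    f = (b * p - (1.0 - p)) / b              # classic Kelly: (bp - q) / b
    return max(f, 0.0)                       # no edge -> don't bet

# You think 60%, the market says 50%: stake about 20% of your bankroll.
print(kelly_fraction(0.60, 0.50))  # ≈ 0.2
```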
Probably only to the extent that exceeding the Kelly sizing will tend to drive even good forecasters' bankrolls down, reducing the liquidity in the market. If there's sufficient liquidity regardless, then it wouldn't affect the quality of the forecasts.
But doesn't the size of the bet serve as an ostensible surrogate for the forecaster's confidence in the prediction?
I have never been truly enamored with the idea that money is in fact a surrogate for anything other than having more money. Rich people are not inherently better predictors. But the theory of prediction markets, beyond the wisdom of the crowds, is somehow that monetizing the process makes it more predictive.
Yes and no. If a single whale (having done the math I demurred from) could move the market all the way to their estimate of the probability without exceeding the Kelly limit, but they stop short, then yes that suggests a lack of confidence in their estimate. But if the market is liquid enough that even the largest participant can only move the line by epsilon, then the confidence of individual bettors becomes irrelevant.
I think the expected utility of monetizing the process will make better predictors *into* rich people, such that over time rich people *would be* better predictors. A lot of the theory is based on a steady state, while it seems to me that objections tend to focus on the transient effects of initial implementation (which, if it takes long enough for any given market to approach steady-state conditions, would in fact dominate).
In most betting markets you don't have the ability to make repeated bets, nor are the exact figures clear. But yes, you want to size your bets if you don't want a high risk of losing it all.
Harry Sussex has just had his autobiography leaked. In it, he makes a number of rather astonishing claims. One of those claims is that he personally killed 25 enemy fighters when he was an Apache copilot / gunner in Afghanistan (he spent 30 weeks on deployment, so approximately one enemy fighter killed per week). He says he is fairly confident about this number because he reviewed footage of his flights on a laptop afterwards, so it explicitly isn't that he e.g. destroyed a transport van and guessed at how many people were inside.
Is this number plausible? Would an average Apache copilot kill at this rate or is there something special about the way Harry was deployed? If 25 kills is not notable, what would the Apache equivalent of an 'ace' be (that is to say, a performance noteworthy enough to comment on)? What psychological support do pilots receive after killing people, given that they do so so frequently? Overall, is Harry likely to be telling the truth?
Without knowing too much about it, I'd say that if 25 kills in 30 weeks was an unusually high number for an Apache, then the Apache would be a pretty useless weapon given the expense, difficulty and vulnerability of it.
I don't think there'd be an equivalent to an "ace" though. An ace fighter pilot must shoot down other fighter pilots, in conditions approximating a fair fight. Apaches (especially in Afghanistan) will avoid anything like a fair fight, so the number of kills you get is mostly just a function of the number of targets you get sent in the direction of.
Considering the footage one sees of helicopter gunships picking people off at long range with night vision, using either missiles or guns, this doesn't seem unusual at all -- this is a team effort ofc, but presumably the gunner is the one pulling the trigger.
Another factor would be that I'm not sure how much aimless patrolling goes on with the gunships -- aren't they mostly flown either in response to intel (i.e. reliable source says the guys at such-and-such compound are planting IEDs; go blow them up) or calls for close-air support? ("holy shit we are overwhelmed/these guys over the ridge are lobbing mortars at us, pls send help tuvm")
Knowing nothing about the details of his deployment - one transport van could be 10 people right there. I'm not an expert but having seen some footage from apaches in my day, 25 in 30 weeks strikes me as extremely plausible.
I am happy to say I have no idea who Harry Sussex is. As for people killing others in war zones with modern technology: sure, they might kill quite a few. It just depends on where/how they were deployed. Certainly the median soldier isn't killing at that rate. The median soldier is probably sitting at a desk on some base.
I would imagine if you're considering all combatants in a conflict the median soldier usually has zero kills and the highest possible median is one kill.
If you imagine a conflict with 100 combatants, for each combatant to get 1 kill, all 100 people would have to die. For the median soldier to have one kill, at least 51% of combatants would have to die (i.e. if 51 soldiers each get one kill and the rest get zero). In reality, the distribution is probably less flat - the top soldiers may have 10 or more kills - and without a very flat distribution it would be very hard for the median soldier to have even 1 kill. For example, if the top 10% of soldiers averaged 6 kills each, it's impossible for the median to be 1 kill, since 6 kills * 10% + 1 kill * 41% works out to 101 kills per 100 combatants - more kills than there are people to die.
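A quick sanity check of that arithmetic, under the comment's own assumption that every kill is a death among the same 100 combatants:

```python
n = 100                # combatants, so total kills <= 100 possible deaths
top_decile = 10 * 6    # top 10% of soldiers averaging 6 kills each
median_floor = 41 * 1  # the comment's scenario: 41 more soldiers at 1 kill each
print(top_decile + median_floor)  # 101 kills needed > 100 possible deaths
```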
Yeah that was my point, just stated differently. But also I was mostly talking western soldiers who are generally as a group achieving quite high kill ratios due to their small numbers and materiel advantage. Even so I agree I would bet the median is zero.
"Harry Sussex" is Prince Harry of the UK, I'd be willing to bet it's much more likely you've heard of him under the latter name. "Sussex" is in reference to the fact that he is the Duke of Sussex, although he has stepped down from the duties normally associated with that title. Technically if you wanted to refer to him with a standard "civilian" given and surname, I believe the surname would be "Mountbatten-Windsor," which is what his children use, ̶a̶l̶t̶h̶o̶u̶g̶h̶ ̶I̶ ̶t̶h̶i̶n̶k̶ ̶"̶S̶u̶s̶s̶e̶x̶"̶ ̶w̶a̶s̶ ̶w̶h̶a̶t̶ ̶w̶a̶s̶ ̶u̶s̶e̶d̶ ̶o̶n̶ ̶h̶i̶s̶ ̶u̶n̶i̶f̶o̶r̶m̶ ̶w̶h̶e̶n̶ ̶h̶e̶ ̶w̶a̶s̶ ̶i̶n̶ ̶t̶h̶e̶ ̶m̶i̶l̶i̶t̶a̶r̶y̶,̶ ̶h̶e̶n̶c̶e̶ ̶t̶h̶e̶ ̶o̶r̶i̶g̶i̶n̶a̶l̶ ̶c̶o̶m̶m̶e̶n̶t̶e̶r̶ ̶u̶s̶i̶n̶g̶ ̶i̶t̶.̶
Edited for correction, upon looking it up the surname used on his uniform in the military was "Wales," in reference to him being the son of the Prince of Wales.
It shows what a total idiot the guy is, to paint a great big target on his back and possibly even endanger his family with his probably exaggerated bragging, not to mention that MeAgain may feel a bit ambivalent about him mowing down non-white people by the bushel (if his exploits are to be believed).
I never gave it much credence before, but now I'm truly starting to believe the rumours that he is actually the son of someone called James Hewitt, who had an affair with Princess Diana. Knowing he is a cuckoo in the nest, and not in the royal bloodline, might explain Harry's apparently compulsive bitterness and vindictiveness (although piles of loot, estimated at $50M, for publishing his latest potboiler and associated interviews is another fairly plausible motive!)
Still, US readers, take heart, you may not have to put up with him for much longer. He also boasted about taking cocaine, and could soon therefore be kicked out of the US as an undesirable alien! :-)
Oh I dunno his mother (Diana) was also a vindictive airhead. Far as I can tell people only like her because she was pretty and died young and glamorously.
She wasn't even especially pretty, and I'm not sure why people are so intent on pretending that she was. I mean, by Royal Family standards she was, but by "pretty celebrity" standards she wasn't.
The irony is Harry has been complaining for years that the media destroyed her life; but now he is voluntarily allowing them to do the same with his life, not that he has the self awareness to realise that (yet).
Prince Harry's tell all falls into my "none of my damn business" bucket but I can't seem to avoid hearing people talk about it. It seems that people who are actually interested in this stuff are feeling like he's crossing into TMI territory.
He also looks strikingly similar to a young Prince Philip though. Unhappily for His Maj, the good looks in the Mountbatten line seem to have skipped a generation.
People on defense-oriented subreddits seem to think that number is on the low end for an Apache pilot in Afghanistan. Air aces are usually defined in terms of shooting down enemy fighter jets, or at least bombers, where the enemy is supposed to have defenses. Maybe it would make sense for an Apache crew who destroy 5 armored vehicles to be aces?
...but I ended up taking the advice to heart, and have been happily eating everything-but-the-stem ever since. It really does add up to significantly more apple, especially on the larger varieties like honeycrisp. Made me wonder why I'd spent the first two decades of my life not consuming apples this way. Peer pressure, I guess? Never bothering to fact-check just-so stories? Not understanding "the dose makes the poison"? I don't know. It's at least understandable why people peel fruits and vegetables - they've a notable difference in flavour, texture, etc., if edible in the first place at all. There's some real there, there. (But I still evangelize eating kiwi skins when possible. Have made a few converts!)
Wonder how many other parts of life are like this. Untruths that go unnoticed and thus uncontested, until one actually bothers to verify their structural integrity. Make beliefs pay rent, indeed...
I've been under the impression that skins of kiwis and peaches and other fuzzy foods are nasty because the skins pick up and hang onto potentially pathogenic bacteria, and/or chemicals such as fertilizer and pesticides. Love to be convinced I'm wrong.
I mean, I'll concede that that's what makes cantaloupes higher-risk: the intricate skin pattern creates very high surface area, so it's an excellent breeding ground for bacteria, and dirt isn't easily removed. That's for a ground crop though, the worst possible combination. Berries also have high surface area, and I think that's maybe why they're suggested to wash extra-well? But not sure. Never bothered to research. Cursory Google shows most "should I wash X" sites are pure woo fluff masquerading as wannabe lifestyle brands, so I'm skeptical.
(But I'll fully agree that reversed chemophobia is not ideal weighting of tail risks, and it is after all a trivial inconvenience at best to wash. Still think aesthetics is the stronger argument for going skinless though. Lots of popular tastes and textures that I react abnormally poorly to, so I'm definitely wired weird!)
Do you enjoy the core of the apple as much as the rest of it? Aren't the seeds and tougher membranes in there unpleasant to eat? I'm well aware that there's nothing poisonous or inedible in an apple core. I don't eat them because I don't like them.
If I had to plot out my Apple Enjoyment Gradient X versus Longitudinal Apple Strata Y (latitude? always get those confused), I'd say it looks like a mostly continuous function, yes. Sharp drop-off for the stem ends, as that's the part that "takes work" to avoid, and the bottoms tend to be a primary spot for early spoilage. I'd estimate that my decreased utility from eating a "mushy" spot of the apple is at least three times any small signal related to the membranes and pips.
The technique also matters - there's a reason that video has him eating the apple vertically rather than horizontally. This is intuitive with, say, steak - of course you're gonna notice the fibre a lot more if you cut with the grain, versus against it! To the extent an apple has noticeable "grain", it's parallel to the core. So eating it from the bottom rather than the side means only getting small chunks of such fibre per bite, rather than a whole mouthful at once. It's also easier on the mouth, obviously. Like the way one normally eats celery, rather than nibbling it sideways, which is an endless chewy stringfest.
I think certainly there are different thresholds, and for some people that minor unpleasantness is enough to write off the cores. Some people really hate potato or grape skins too! I guess I'm wondering about an unknowable counterfactual - what percentage of that dislike is organic, versus inculcated via well-intentioned expectations-setting? One way or another, there's a good amount of implicit and explicit pressure against apple cores, an ambient Everybody Knows; would people eat them more often without such messaging? Or a revised message of "it's actually just the seeds that need avoiding", perhaps. No one throws away the "core" of a watermelon for containing seeds, for example, that'd just be silly. (Plus it's usually the sweetest part.)
Ah, so this is why the "Global South" remains poorer - those nearer the poles keep pulling up the ladderitude after them. Makes sense!
(I wonder if it's partly cultural confusion - the compass has different emphases in Chinese tradition...East, North, West, South. So I always think about left <-> right before up ^|V down.)
>Untruths that go unnoticed and thus uncontested, until one actually bothers to verify their structural integrity
Isn't this kind of Nietzsche's whole thing? I wonder if he would eat the apple core?
On a semi-related note: I get a headache/near migraine if I eat an apple without cutting it. No idea why. This seemed to start just a few years ago. I've assumed it's some interplay with the way my teeth are pulled when biting the apple and the usually cold temp of the apple pressing on the top of my mouth. It doesn't seem to happen with other similar fruits like pears, peaches, plums. Could just be psychosomatic.
Will freely cop to never having read Nietzsche, nor being able to suss out What's The Deal from years of ambient exposure.
That's interesting. I tend to leave apples out, rather than refrigerate them...dulls the flavour, doesn't seem to extend shelf life that much. The firmness of the flesh is also significantly higher than those other fruits. Do you have issues with other similarly-textured whole-bite things?
In "Beyond Good and Evil" Nietzsche lays out his morality. He starts by lamenting that everyone before him just took the presence of morality for granted without examining if it exists at all. I was trying (unsuccessfully) to make a joke. I liked the idea that Nietzsche would have strong opinions on how to eat apples or that eating an apple in whole would be what an Ubermench would do.
I should try eating an apple at room temp to see if I still get the headache. This all started after I took Wellbutrin, which caused serious jaw/teeth clenching and then pain/headaches. That's part of why I think it's psychosomatic or a learned response.
I eat apple cores so that nothing is left but the stem. It isn't the best part of the apple, but I think of a core as sufficiently messy and nasty that I'd rather not have it around.
It's possible to eat shrimp shells. I found this out from reading a woman who was doing caloric restriction, which can make a person pretty hungry.
Why do people leave the tails on shrimp? Is there any dignified way to get the tail off? Do most people eat the tails?
Pinch the base of the tail firmly along its axis of symmetry while you're eating the shrimp. The meat pops right out, especially if you pull on the shrimp a little with your teeth.
Oh yeah, the tail is my favourite part of the shrimp! I usually get some of those off other peoples' plates when out with family. And it's usually the only shell remaining, what with how they're typically industrially shelled. Which is unfortunate, lotsa classic Chinese <s>shrimp</s> prawn preparation relies on having that crunchy outer layer. I believe they're left on so that people have something solid to grip while preparing them, and also for cocktail-type purposes or eating while fried; the totally-denuded ones seem to only come as Already Fully Cooked, which are used for different things. It's like a natural handle. For more formal occasions, usually one picks up or spears a shrimp near the tail, bites off the rest, then discreetly pushes the discard to a corner or disposal dish.
I think the majority of people don't eat them, since it's unintuitive - most recipes call for shelling, and lotsa older folks still have memories of "de-veining" shrimp, back before better industrial processes largely obsoleted that. They really are crunchy and a bit sharp. I like them anyways cause they tend to pick up a lot of sauce/seasoning, and are quite high in calcium. Forget bone broth or eggshell coffee, give me shrimp shells please. (Great base for seafood broth too, though getting enough is a bit expensive.)
I'll admit I've never heard of someone eating shrimp tails before, although I'd imagine it would be a good source of calcium. As for your other questions, I think the tails are left on when the shrimp is served out of some combination of aesthetics and to give a convenient place to hold it, and I don't really know of any good way to remove the tails that isn't either fiddling around with them in an undignified-looking way, or just biting almost up to the tail and leaving a bit of flesh behind.
I started doing this when a teacher in high school told me the cyanide in the seeds isn't enough to be poisonous, but is enough to slightly lower your blood pressure in a way that might be healthy (I still believe the former, not so sure about the latter). In any case, I've been eating the whole apple ever since.
Coincidentally I also enjoy eating those citrus whole - but those seeds I spit out, they're quite noticeable and often painfully sharp. Dentists were Very Concerned when I was a kid and used to eat whole lemons. Luckily nothing ever came of it. They're very tasty dried with the peel on too! I always use extra zest when a recipe calls for it, hard to go wrong.
The one exception is typical McNavel Oranges...the raw whole peel on those is rather unpleasantly bitter and stringy unless sugared. But they also taste like ass just generally. Give me mandarins (totally edible in whole) or satsuma oranges (less so but exemplars of fruity goodness) instead any day.
I eat everything but the stem too. I can recall at a very early age - probably when I first knew the words - asking my mother if I was at ‘the core’ yet.
Yes, am aware. Hence reference to "the dose makes the poison". The liver can deal with small amounts of cyanide, and does on a regular basis anyway if one eats such exotic foods as rice. From what I've read, the amount is so small as to be negligible; one would have to intentionally eat quite large quantities of pips to get any noticeable problems; the case studies we do have of cyanide poisoning via food involved improper handling, directly sucking on stone fruit pits, or other such improbabilities. Much more likely to get bored of eating apples before that, it's not a once-daily habit even. It's also largely from chewing or grinding up the seeds - if merely swallowed, they mostly pass through inert. It's not like improperly-prepared cassava, which will Definitely Cause Harm in fairly short order.
Stephen Skolnick who used to comment on here a bit says this kind of low-dosage toxicity is only really gonna be an issue for someone with a messed-up microbiome, or who otherwise loses the ability to eliminate such poisons*...but I'm neither in the habit of taking antibiotics, nor eating all that much processed food. So not particularly worried at the moment. I'd abstain for awhile if I were planning a surgery or something though.
I'll admit that the typical kinda-furry brown ones seem intimidating at first..."won't it be all weird and scratchy?" But then I ask if they peel peaches, and of course not, who does that. It's very similar though...peach fuzz, kiwi fuzz. The bark is worse than the bite, literally. And even totally smooth foods can make one choke if they "go down wrong".
These days one can even find <s>naked</s> "golden kiwi", which are sorta extra-oblong and ruddy-yellow rather than green inside, and are totally denuded of hair. And the instructions on the container still say to scoop out the innards and throw away the skin! It's very strange.
(I first heard about kiwi skin being edible as a passing throwaway comment on, idk, some Discovery Channel show or whatever. They claimed it's the part with the highest concentration of nutrients, and this is similarly true for many other foods, like potatoes and grapes. Big If True, never did look into it though. Which, yes, ironic. I just can't be bothered to peel stuff though.)
I was at a workshop where they provided fresh fruit in the pauses, but no cutlery at all. So I was like, how do they expect us to eat these kiwis?? --- And learned that you can eat them with the skin.
Always a fun stunt at parties, makes me remember Police Academy 2, that supermarket pillaging scene where one of the thugs eats a banana with the peel. (But I think banana peels aren't edible)
Yes, I admit I partly do it for surprising people too. Contrarianism for its own sake is a thing.
Banana peels are, in fact, edible - just rather unpalatable without cooking. Very similar to plantains. You can sometimes find dried "baby" bananas with the skin on, as an easier segue into the stuff. Most forms of preparation are like dealing with raw bulk kale - you really have to beat it into submission and cook rather thoroughly to render out that slimy bitterness. Baking works well, and it makes a good confection base too (candied peels).
Although, of note, the composition of a banana (moreso than most fruits) changes *very* quickly as it ripens. The greener a banana is, the higher proportion of starch : sugar in its carbs, and vice versa for yellow. This also affects the micronutrient composition...some are more easily available when greener, some when yellower. And has attendant effects on taste and texture - there's a reason banana bread almost always uses extremely-ripe ones! (I like greener bananas, personally.)
I do, and similarly for pears. It really isn't particularly noticeable unless I go out of my way to notice it. Sorta like how string beans* and celery ribs do have a "string", it's definitely A Physical Thing, but the level of botheringness seems to correspond strongly with how much one has been conditioned to expect it as a bother. Accidentally chewing a pip itself is slightly unpleasant (bitter notes, like with watermelon), but usually they just get swallowed. Nothing in particular stopping me from spitting them out, other than lack of convenient proximity to compost bin, I guess. A later article claimed that the density of bacteria is particularly high around fruit seeds, such as apple cores, so I guess there's some potential benefit too.
*I also enjoy edamame and pea pods, peanut shells, and popcorn kernels...but that's a whole other level of very tough fibre that I don't at all classify as Not Really Noticeable. Used to literally eat toothpicks as a kid, so it's still an improvement.
I ate toothpicks as a kid too, and still eat popcorn kernels! Most of the time when I eat them it's to "have something to do with my mouth"; I don't like not having anything to chew while eating. I'll try eating apple cores; it may help me slow down when eating.
I'll admit that I'm very skeptical about kiwi skins; the ones I get have a furry skin that seems like it would be like chewing hair, plus it may be hard to get clean.
Yeah, it was either that or paper towel. I liked the brown ones made of recycled unbleached paper better. Luckily I never developed a true oral fixation like smoking, but it is nice to chew gum or otherwise be occupied in some such way. Much as I like soup, it's not a Proper Meal without some mastication. The solution is clearly to put chewable things in the soup. Diced-up corn on the cob is nice; that gives an excuse to eat with my hands too. Who needs utensils, really?
Kiwi skin has a lot less texture than you'd expect. Not at all like, say, artichoke fuzz (which is legitimately gross and tastes worse, even when boiled to death) or corn silk. The cleanliness I'm not sure, have always been pretty cavalier about eating plants without washing. Supposedly just a rinse takes care of most _____cide residues. Unless they're visibly dirty, like many root crops, in which case one might wanna actually scrub some. (Or melons. Melons really ought to be wiped down before slicing. Amazingly dirty, and to think of passing a sullied knife through the whole flesh...bleurgh.)
I think the fact that you eat peanut shells is pretty important context for your assertion that apple cores are basically the same as the rest of the apple...
I'll agree that they're on a spectrum with string beans, though -- I don't want those, either. ¯\_(ツ)_/¯
They're boring compared to basically any slightly-more-exotic pole bean variety, yeah. Give me purple beans or Chinese long beans any day instead. If it doesn't have that characteristic crunch, if one's teeth don't "squeak"...that's no pole bean, I say.
I guess no one ever taught me they're "supposed to" be shelled, with the shells thrown away, before eating? Always just...bit right in. We used to roast pumpkin seeds at home; it seemed a similar type of food to eat whole. Same with sunflower seeds - a prolific snack during ill-fated Little League days. David brand, was it...? Pretty confident one major reason people don't eat the shells is that they're typically extremely salted. I like salt in my diet, but the salted versions of nutshells are...ouch. Harder and harder to find roasted-unsalted these days, and washing them off makes it all gross-soggy. Defeats the purpose of crunchy goodness.
Oh my, I missed the antistocks... let me tell you another reason why they shouldn't work.
They're a proxy for an arbitrary metric that nobody optimizes for. Tesla does not pay dividends; it never has, and I doubt it ever will. Some companies pay dividends regularly, others do not, and the decision about dividends can be quite divorced from the rest of the value.
Measuring profit? Let me point you to Amazon, which doesn't believe in having profits; as much money as they get, they reinvest or do some accounting magic with it (I'm exaggerating).
Whatever measure is chosen, it will at best be fitted to last year's management with last year's strategy. You aren't just running counterparty risk; you're betting that the chosen measure does indeed reflect changes in value and won't change for completely unrelated reasons.
So if there's one thing this isn't, it's "as simple as buying long."
The only argument I can see for them is that a regular share's value is already so ephemeral that it's impossible to make the connection between a company's value and the price of a financial instrument any weaker, so who cares.
This is straying from where I'm confident of my knowledge, but I assume that if every other metric I know about has fiddly constraints this one would have those too.
Devil's advocate:
If basing a financial instrument's value on rare occurrences counts as healthy for the economy, then at that point I might as well be trading NFTs that represent different attitudes about Tesla's future. And that way the SEC is less likely to throw me in jail.
As a defense of the simplest model, Tesla talks a lot about not paying dividends, but imagine if there were some kind of legally binding contract they could sign that would render them forever unable to pay anything like a dividend. If Tesla signed this contract, do you think their stock price would go down? If so, that suggests the current price reflects some chance they start paying dividends.
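To put the same point in toy numbers (all invented for illustration): whatever sliver of today's price comes from the possibility of future dividends is exactly what such a contract would shave off, and exactly what an antistock would track:

```python
# Toy decomposition of a share price into "possible future dividends"
# and "everything else". All numbers are invented for illustration.

p_dividends_ever = 0.10       # assumed market-implied chance dividends ever start
pv_dividend_stream = 50.0     # assumed present value of that stream, if it starts
value_otherwise = 200.0       # assumed value from buybacks, acquisition, control, etc.

price_now = p_dividends_ever * pv_dividend_stream + value_otherwise
price_after_contract = value_otherwise   # a binding no-dividend pledge zeroes the first term

print(price_now)              # 205.0
print(price_after_contract)   # 200.0 -- the 5.0 gap is the sliver an antistock tracks
```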
You're still investing in the payment-of-Tesla-dividends proposition instead of Tesla's value, so the antistock is only worth that sliver of value.
I think the stock price change would be more reflective of what people think of a management that signs such contracts than of any change in dividend expectations. That depends on the level of such expectations now, on how much uneducated money is moving Tesla's price (a thing I don't know how to measure), and on how tight the contract is - i.e., would it block other forms of capital return, which would cripple the company's financial strategy, or leave them open, making the point a technical triviality?
If Tesla did this as a move to commit to stock buybacks as a capital-return mechanism, for the tax rationale (as Berkshire Hathaway does), the price might go up rather than down.
"Scenes from the Joint Mathematics Meeting (the largest annual meeting of mathematicians in the world):
A talk entitled "Undergraduate Mathematics Education as a White, Cisheteropatriarchal Space and Opportunities for Disruption to Advance Queer of Color Justice""
They wouldn't be there if there were any reasonable level of hostility towards them. And the fact that these presentations are even being accepted by the conference organizers is the problem.
Most people reject woke nonsense, but it doesn't matter! The problem isn't most people agreeing with it; it's that these ideologies capture institutions, and everyone has to go along with them at the risk of having their livelihoods jeopardized.
How many mathematicians in attendance would be comfortable putting their name on an open letter decrying this presentation's thesis in explicit and direct terms (e.g. not saying something like "This presentation is well intended but unhelpful in fighting against patriarchal systemic racism")? I have a really, really good guess.
A book with the point that AAVE (African American Vernacular English) includes some phrasing which makes mainstream English phrasing about arithmetic hard to understand. This could leave a teacher thinking that the kid just can't understand, while the problem is a need for translation.
Immigrants don't have special problems with learning math that I know of. First- and second-generation Asian immigrants famously do well in math, which tends to contradict the hypothesis that English grammar is the problem. I personally learned math in a foreign language (French) and was not aware of the language being an impediment.
There's plausibly something about racism in the situation-- not just that children aren't being taught in their home dialect, but that people who default to believing that black people are stupid aren't going to notice there's a language difference causing the difficulty with arithmetic.
This language difference is directly impacted by intelligence. Second-generation Asian immigrants whose parents don't speak English as a first language do well above average in literacy. And the thing is, we know blacks are less smart than whites through intelligence testing. And guess what: the IQ subtests most dependent on language (e.g. vocabulary) have smaller racial gaps than those not dependent on language, meaning these IQ gaps cannot be a product of language differences.
And it's bizarre calling this racism, because the "anti-racists" are the ones pushing for AAVE to be accepted as a real language and to not teach black kids proper English. It's the "anti-racists" that have created this situation through their dumb, Afrocentrist ideology.
A possible reason (that you ignore) that "second-generation Asian immigrants whose parents don't speak English as a first language do well above average in literacy" in Standard American English is that the English spoken by the communities they're in is by and large Standard American English, so there isn't a competing dialect of English involved. And before you protest: just having an "Asian accent" in SAE is not the same thing as a wholly separate dialect with different tenses, different inflection rules, and other grammatical rules, like AAVE is.
If this model is correct, it seems like we should see black kids do better learning math from black teachers than from white teachers. Is there good evidence for this? (I don't know the literature, though I have a broad impression that education research is often not all that great, and I worry about replicability and garden-of-forking-paths issues in it.)
No, it seems like we should be stamping out AAVE as the main dialect black kids speak and get them speaking the same English as the rest of the country/world.
Sure, but it would be an easy study to do, and P[speaks AAVE|black] >> P[speaks AAVE|white].
If the model you're describing is correct, then we have a pretty clear prediction of something we should see--black kids learning math from black math teachers should do better. If we see that, it's some evidence for the theory; if not, it's some evidence against it. But also, it sure seems likely that there's already data on this somewhere.
My point is: the kind of people who mix "cisheteropatriarchal" and "whites" and "queers of color" in a single sentence without punctuation are probably not interested in your very sensible and level-headed reform. They want solutions like "stand all the white boys in a line every week and tell them how awful they are, white girls only once a month", or "leave K12 education to suck exactly as much or harder, but accept [politically favorable] blacks more often at universities/jobs, using whatever dumb nonsense you can come up with to justify it".
They have to do this; they are addicted to heroics and "Rebel Alliance" narratives, and "kids should have better education" isn't exactly an oppressed take. People differ on who's responsible and who should pay the bill, but no monster is going to argue with a straight face that "yes, black kids are having difficulty due to language/worse income/fewer fathers, and that's a Good Thing". There is no "alpha" in pushing a narrative like that, to borrow Scott's parody terminology.
Wokism actively rewards nonsense, because it's a system of morality that rewards novelty. Whenever "morality" and "novelty" come together in a single sentence, disaster is as expected as Death and Taxes. There *is* a place for novelty in morality, at a rate of about 1 idea per century (if ordinary human brains are the ones doing the thinking).
But in a system like wokism you have to be cutting edge, and how can you be when all the gays have gotten all their rights? The only solution is to invent new gays (made-up pronouns and identities) and mix-and-match the vanilla gays with other identities (queers of color, and intersectionality in general). The number of combinations you can get this way is exponential, so they can never run out. They are trapped in a self-accelerating feedback loop of inventing new injustices wholly out of thin air and pretending to be mad about them, then achieving a meaningless/non-existent victory and going to bed before repeating the entire cycle the next day. The world is chock-full of actual injustices to be extremely mad about and spend a lifetime fighting, but they aren't new and sexy, they are very exhausting, and nobody talks about your (tiny and incremental, mostly pyrrhic) victories on Twitter and TED.
I can't imagine that it has anything to do with gender. There might be other dialect differences that affect smaller groups, and the problem isn't necessarily only American.
I can imagine that a few children with very good mathematical intuition guess what the teacher must mean, and the majority conclude that it's incomprehensible and they can't learn it.
The Joint Mathematics Meeting is the joint meeting of the American Mathematical Society (the main professional organization of professional mathematicians) and the Mathematics Association of America (the main professional organization of mathematics educators at the undergraduate and high school level). Educators are a core part of this conference. I'm not sure why anyone would be surprised that educators think about race, gender, and sexuality in the context of education. It's only if you think that education about *math* is somehow not the same sort of human practice where people think about these issues that this should be surprising.
Look, this presentation is woke nonsense. I know it, you know it.
Typically mathematics has been more resistant to this nonsense because it is a strictly technical discipline that is less dependent on language than almost any other. This is obviously true, which is why the humanities were ideologically captured many decades ago, whereas serious inroads have only started being made on academic math in recent years.
But the fact that conference organizers for math are accepting this nonsense for presentation shows a significantly negative cultural change that likely signals future ideological capture that will poison yet another field.
> It's only if you think that education about *math* is somehow not the same sort of human practice where people think about these issues that this should be surprising.
Presentations about education are fine. Presentations about education based on demented far-left race and gender ideology are not fine. None of this is based on scientific evidence, it's pure ideology, and it will poison mathematics the way it does everything else it touches.
Kenny, out of pure interest I tried looking up the talk to see what the actual content was. I didn't manage that, but I did find the photo identifying the man giving the talk: Luis Leyva.
Assistant Professor of Mathematics Education, Department of Teaching and Learning
Faculty Affiliate, Department of Gender & Sexuality Studies
"At the juncture of gender studies, higher education, and STEM (science, technology, engineering, and mathematics), Leyva’s interdisciplinary research explores narratives of oppression and agency across historically marginalized groups’ educational experiences to uncover interlocking functions of racism and cisheteropatriarchy in undergraduate STEM. He draws on critical race theory, women of color feminisms, and queer of color critique to conceptually and methodologically ground his scholarship, which centers historically marginalized voices in STEM higher education across intersections of race, gender, and sexuality."
I have certainly been educated, in that I had never before heard the term "queer of colour" (I was aware of debate around whether "queer" was or was not a slur). I had thought the term was "queer person", but now it seems that "queer" is a noun in itself.
Let us continue:
"Leyva is the Director of the PRISM (Power, Resistance & Identity in STEM Education) Research Lab at Vanderbilt-Peabody College. The lab’s research serves to hold an “intersectional prism” up to underrepresented students’ narratives of experience to illuminate and disrupt multidimensional forms of oppression in undergraduate STEM education.
...His second project, titled COURAGE (Challenging, Operationalizing & Understanding Racialized and Gendered Events) in Undergraduate Mathematics, examines features of instruction in undergraduate calculus classrooms that students from historically marginalized groups experience as discouraging or supportive as mathematics learners. This project, supported by the National Science Foundation (Improving Undergraduate STEM Education program), addresses the pervasive role of calculus as a gatekeeper that reinforces racialized and gendered access to STEM. "
So - calculus is bad? Has anyone got an opinion on this? Do away with gatekeeping calculus or not?
If the man is managing to teach black kids maths, then good for him. But I can't navigate all the jargon, and what on earth does the below have to do with learning engineering?
"As a space of collective healing and re-humanizing the research experience in the academy, PRISM members support one another in constantly interrogating their positions of privilege and oppression to engage in research alongside historically marginalized communities in STEM education."
Is that a therapy session or a maths class?
I looked up one of the linked research papers and hold on to your hats, you'll never believe this!
"Introductory mathematics courses, including precalculus and calculus, largely influence Black and Latin* students’ persistence and sense of belonging in STEM. However, prior research on instruction in these courses for advancing more equitable outcomes is limited. This paper presents findings from a study of 18 Black and Latina/o students’ perceptions of introductory mathematics instruction as a racialized and gendered experience at a large, public, and historically white research university. Sociological perspectives of logics and mechanisms of inequality guided an analysis of Black and Latina/o students’ group interview responses on how instruction perpetuates racial and gendered oppression. Two logics were identified: (i) Instructors hold more mathematical authority than students in classrooms; and (ii) Calculus coursework is used to weed out students ‘not cut out’ for STEM. These logics, coupled with the influence of broader sociohistorical forces (e.g., cultural scripts of behavior, stereotypes), gave rise to mechanisms of inequality through seemingly neutral instructional practices that reinforce racial-gendered distribution of classroom participation and STEM persistence. Our findings inform implications for STEM higher education researchers and mathematics faculty to foster socially affirming STEM instruction, especially in introductory courses."
Yes, imagine this shocking revelation: teachers have more authority than the students in classrooms! Were any of you aware of this shocking and distressing "logic" before?
"BSTRACT In this article, the author discusses the intersectionality of mathematics experiences for two Latin@ college women pursuing mathematics-intensive STEM (science, technology, engineering, and mathematics) majors at a large, predominantly White university. The author employs intersectionality and poststructural theories to explore and make meaning of their experiences in relation to discourses of mathematics ability and pursuits of STEM higher education. A cross-case analysis of two Latin@ college women's counter-stories details the development of success-oriented beliefs and strategies in navigating the discourses that they encountered institutionally and interpersonally in their mathematics experiences. Implications are raised for P--16 mathematics and STEM education to broaden equitable learning opportunities for Latin@ women and other marginalized groups' construction of positive mathematics identities at intersections of gender and other social identities."
A whacking *two* Latina students are used to "raise implications for STEM education". I'm loath to call this - no, scrap that, I'm not loath at all. I *am* calling this a grift: do a round of conferences, write a few papers, head up a 'lab' talking about how the white cis hetero patriarchy is bad but extra bad in maths, stop teaching that particular branch of maths, and just let all the queer trans females and other persons of colour enter the engineering course and give 'em the degree at the end. Much more equitable!
(I say this as someone ignorant of maths and unable to understand or indeed grapple with it. But if I got a sympathy pass and award due to being a female person who can't be expected to understand hard sums, I'd be insulted and I would never be able to rely on "well at least the person who built that bridge learned how to calculate the stresses correctly so it won't collapse under me")
On calculus being bad - you'll find some very widespread shared beliefs among mathematicians (probably not majority, but maybe close) that it would be better if university calculus requirements were replaced with statistics or logic or critical thinking or something else of that sort.
That's absolutely irrelevant to the point being made. Black kids would likely struggle just as much with a rigorous college statistics course.
As for "logic" or "critical thinking", this is absurd considering you need to understand calculus to understand vast swathes of scientific theory, and these things will not help you with it. And obviously "critical thinking" is one of those bullshit terms so vague as to allow almost anything to be taught under it, including ideological nonsense with no empirical evidence of providing any benefit to a student's cognitive ability or understanding of other subjects.
Derivatives are easy and useful in many situations. The rest of calculus, yeah, can get arbitrarily complex, and the technical details are mostly useless even for most people in STEM.
I wonder what exactly passes for "critical thinking". I mean, I understand what the idea is, in theory, but beyond the obvious mistakes, how exactly do you *teach* it, or *measure* it? (For example, teaching people lists of fallacies can easily backfire. The more you know, the easier it is to assign one of them to inconvenient information, so you become more immune against arguments you disagree with.)
So while I agree about the usefulness of critical thinking, I worry that adding it to university requirements would result in something different, such as conformity with some narrative.
Yes, it’s a hard question to know how to structure a quantitative reasoning requirement for a university. But it’s not really controversial that structuring it as a calculus requirement in particular is bad, if you don’t expect people to become engineers. And even engineers should get an actual quantitative reasoning class that covers some probability, which they often don’t get under current structures.
There are many universities where the math requirement can be filled by statistics or formal logic. I don’t know if there are many that don’t have such a requirement at all. I don’t know that any universities have changed this general education requirement one way or another in the past five or ten years.
And all around him there are pretty normal mathematical speakers, often POCs (more often, in fact), giving proper lectures. It's probably like the occasional Marxism-and-Mathematics lecture in Soviet technical universities: there just because it has to be.
More damaging to the U.K. is the general disregard for mathematics exemplified by the dull mediocrity of people like Simon Jenkins.
You mean like how the Soviets basically rejected genetics wholesale on ideological grounds, which greatly diminished their scientific and technical progress accordingly?
And look, we're already seeing it in other fields today in the west. You can't do certain research, you can't come to certain conclusions. Intelligence researchers are having their access to genetics databases blocked. This is all real and is stunting science.
I would like to know more about this. Like, what exactly were the mechanisms that caused this? Because the Soviet Union started as a backward feudal country... then at some moment it sent the first man to space... and then it fell behind again. Simple explanations like "of course, soviet communism sucks" are not sufficient, because they cannot explain Gagarin. What was the mechanism that worked at first and then didn't, and why?
Russia is still great today at the mathematical olympiad ( https://www.imo-official.org/country_team_r.aspx?code=RUS ), which is not a completely fair comparison, because larger countries have a natural advantage; it obviously makes a difference whether you pick the 6 best math students among 1 million, 10 million, or 100 million. Then again, Germany has almost as large a population as Russia, and it stays visibly behind.
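As a toy way to quantify that advantage (assuming, purely for illustration, that talent is normally distributed and identical across countries, so population size is the only difference):

```python
from scipy.stats import norm

# Toy model: "math talent" is standard normal and identically distributed
# everywhere; countries differ only in population. The talent of the k-th
# best student out of N is approximated by the quantile Phi^-1(1 - k/(N+1)).

for population in (1_000_000, 10_000_000, 100_000_000):
    team = [norm.ppf(1 - k / (population + 1)) for k in range(1, 7)]
    print(f"{population:>11,}: top-6 average = {sum(team)/6:.2f} sd above the mean")
```

Under these toy assumptions, the 100x-larger country's team comes out a bit under one standard deviation stronger - a real edge, but not a crushing one.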
Among multiple factors, how important, relatively, were crazy people in positions of scientific power, such as Lysenko, versus the lack of money once research becomes expensive? (In other words, could an unexpected source of huge income hypothetically have saved Soviet science, or was it doomed regardless?)
One explanation I have heard is that in capitalist countries, new inventions can become products overnight. So the Soviets had space research, but the Americans *also* had dozens of household items based on technology somehow originating from the space research. It could be something useful such as the microwave oven, or something stupid such as the laser pointer, but either way it provided additional funding for the research, and made the population familiar with the new technology; and then you had millions of people thinking about how to make it even better. Meanwhile, the Soviet scientists had to convince some important member of the communist party to approve further research.
Towards the end, the Soviets were just shamelessly stealing technology from the West, and still stayed behind. Which is a shame, because they also had some cool ideas, such as a computer based on balanced ternary digits (-1, 0, 1) rather than the usual binary (0, 1), and who knows where that research could have led in a parallel history. https://en.wikipedia.org/wiki/Setun Was the missing part the ability of some rich entrepreneur to unilaterally decide to mass-produce millions of cheaper, smaller ternary computers, spend a lot on advertising, and sell them as expensive toys?
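(For the curious: balanced ternary represents any integer with digits -1, 0, 1 and no separate sign bit; negation is just flipping every digit's sign. A minimal conversion sketch, with a hypothetical function name of my own:)

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Digits in {-1, 0, 1}, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3                # Python gives r in {0, 1, 2}
        if r == 2:
            r = -1               # use digit -1 and carry 1 into the next place
            n += 1
        digits.append(r)
        n //= 3
    return digits[::-1]

print(to_balanced_ternary(8))    # [1, 0, -1]  since 9 + 0 - 1 = 8
print(to_balanced_ternary(-8))   # [-1, 0, 1]  negation just flips the digits
```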
I suppose that different people will come up with different theories. I am curious what the real reason was. Probably a combination of things, but what was their relative importance, and why did it work at first and then stop?
How specifically is the queer-of-color math different from the cisheteropatriarchal math?
I can see how education relates to color (although I consider the terminology very unfortunate, because it is not the "color" per se, but rather its correlations with being a *cultural* minority and/or not speaking English as a first *language*), or how sexual orientation could be relevant to some biology or history lessons... but the impact of sexual orientation on mathematics is a mystery to me.
All I can imagine is mentioning something like "and by the way, the guy who invented the square root of seven was a black gay who identified as a fox". Is that all?
The author literally thinks calculus exists to gatekeep people who aren't straight white "cis" men from technical fields. It's ideological nonsense, not science.
And don't you think it's bizarre that foreign students from Asia who don't speak English as their first language have no issue whatsoever with these "classroom dynamics", but people born and raised in this very country can't learn in its classrooms? It's all just rationalization for the evil straight white men being more intelligent and capable than other non-Asian groups.
Nobody who uses the term "heteropatriarchy" is operating scientifically, it's all ideology.
I still don't see how this is about math specifically, rather than school in general.
(Or could you just replace the word "math" with any other subject, and give the same talk at any other conference? That certainly is a way to increase one's publication count...)
I haven’t seen the presentation so I don’t know what the authors are saying. But there are all sorts of reasons why different subject matters cause different sorts of classroom dynamics. There are specific things about learning how to do proofs in front of other students that are an important part of math classes and no others.
But even if there is nothing distinctive about math here, someone whose work is entirely in math education is likely to limit their claims to math education, even if some of the issues they discuss might generalize to other subjects. If you don’t have semesters or years of observation of physics classrooms, you’re not going to make claims about physics teaching.
You don't think it might be an issue for teaching math if certain groups are inaccurately perceived as more talented than others? Or get additional support?
If your role as a teacher and institution is to give the best possible education to your students, and to identify the best potential mathematicians, you need to be aware of structural issues so you can mitigate them.
>You don't think it might be an issue for teaching math if certain groups are inaccurately perceived as more talented than others?
There's no evidence for this. Whites and Asians are ACCURATELY perceived as better. And trying to pretend otherwise can lead only to the kind of racial discrimination as we see in American colleges' affirmative action policies.
>If your role as a teacher and institution is to give the best possible education to your students, and to identify the best potential mathematicians, you need to be aware of structural issues so you can mitigate them.
Then do actual science. This isn't science. It's pure ideology. They're not trying to determine what's true, they're advancing an ideological goal (help their ingroup and hurt their outgroup) and will find any justification for doing so. These people will never, EVER accept that different groups have different innate abilities.
The left will hold up Alan Turing as a brilliant gay genius one moment, and the next they will pretend he doesn't exist to "prove" gays are considered less capable.
The primary determiner of whether you get support or not is how well you do on the test, and whether you are actually trying or seem to be goofing off. Can't see how either of these hard bits of real-world data have dick to do with your color, class, creed, sexual orientation, or any other of the modern points of obsession.
I suppose if you had a specific example (of the impact of sexual orientation on mathematics) you'd probably at least hint at it. So you don't know either.
This isn't science. It's pure ideology. They're not trying to determine what's true, they're advancing an ideological goal (help their ingroup and hurt their outgroup) and will find any justification for doing so.
These are the kind of people who would never, ever accept that intelligence differences exist between races, regardless of how much evidence you provide them. They will shout you down, they will call everyone involved racist, they will call for you to be fired. They will never accept something so harmful to their narratives. This is what ideologues do, not scientists.
I agree that there's thinking there, but there's no truth-seeking: no one's trying to find ways that objectively advance a useful end goal like "we solve Navier-Stokes" or "students understand math better". There's thinking, but it's all about social positioning and jargon-based politics.
Considering the (backwards) progress they've made over the past 50 years, maybe it's time for them to consider whether their focus is really where it should be.
I think we'd all be willing to listen respectfully if there were clear objective results from this kind of stuff. Show me a state or big district that went all-in on diversity and inclusion initiatives and obtained a decade of steadily rising math SAT scores across the board for everybody.
But in the absence of any objective progress after so many years of discussion and money being poured in, I think the people who are footing the bill -- parents and taxpayers -- are quite justified in their increasing levels of skepticism. I suggest it has dick to do with culture wars per se, it's because as parents we know damn well whether the school is teaching our kids reading, 'riting, and 'rithmetic more or less successfully in 2022 than it did in 1972. And the answer is "less."
It's worth asking whether the tools they're bringing to the job are capable of helping them achieve this goal. I am deeply skeptical, but evidence >> models--are there high-quality experiments that demonstrate a lasting positive effect from changing the style of teaching math? Say, something with large n, random assignment, and preregistered comparisons (i.e., not "we did this intervention to raise adult IQ and discovered that it raised math test scores in the 6th grade among Hispanic girls only").
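To make "large n" concrete, here's a quick power calculation using statsmodels; the effect size is an assumed value, purely for illustration:

```python
from statsmodels.stats.power import TTestIndPower

# How many students per arm would a randomized comparison of two teaching
# styles need? Effect size and thresholds are assumed for illustration.

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.2,   # assumed small standardized effect (Cohen's d)
    alpha=0.05,        # conventional false-positive rate
    power=0.8,         # conventional target power
)
print(f"~{n_per_arm:.0f} students per arm")   # roughly 390-400
```

Point being: detecting a plausibly small effect already takes on the order of 800 students total, and after-the-fact subgroup fishing only makes the inferential picture worse.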
>I’m pretty sure they are trying for the goal of “students understand math better”.
I am actually pretty sure this isn't a goal of a huge portion of these people. It is very clear their MAIN goal is reducing the disparities between what they perceive as homogeneous ethnic groups defined by skin color. Not educating people.
And BY FAR the easiest way to reduce disparities is to cut down the tall poppies. And if you don't think this is the case I would encourage you to actually listen to any group of educators/school district that is all fired up about this and look at their actions.
Which often involves 1) gifted and talent gets eliminated 2) Even general curriculum gets easier with lower expectations.
But I wonder if it need be. A friend who is head of maths at a large school in London got fed up with the low proportion of girls studying maths beyond age 16 (in England, that's the point where, if you are on the standard academic track, you drop down to three or four subjects), and over a few years saw a pretty sustained rise. It certainly feels to me that that's a genuine problem being addressed - and so yes, it's a pity if impractical jargon-based nonsense gets in the way here.
The survey has a question asking whether insufficient 'extra content' is a reason for not becoming a paid subscriber. This confused me because I wasn't really sure there *was* extra content beyond open threads. I see now that there definitely is, but I had to look at the archives to figure this out. Possibly this just means I was inattentive, but I think there just isn't much indication, if you're a free subscriber, that there are any nonfree posts at all.
(I'd guess one typical history goes like: 1. Subscribe at free level, intending to evaluate later whether a paid subscription is valuable or indeed any different than a free one. You are helped in this by the assurance that Scott's doing just fine in his deal with Substack and doesn't need the money. 2. Nothing at all reminds you to reconsider #1. 3. ....)
To prove there are people who would happily subscribe but haven't yet, I've subscribed for a year.
I love the idea of being able to put in my best writing samples to an AI for learning, and then only have to give it a list of key points and phrases before it can turn those into an essay or paper in my style of arbitrary length. I've already done the important part of the paper before submitting it, and if I want to tweak its style I can just give it more writing samples to play with.
How should I refer to Scott Alexander in comments here? Should it be "Scott Alexander", or "Alexander", or "Scott"?
What's the best way to refer to the author of a website when discussing them? My instinct has always been that formality and distance are obstacles to conversation, and that you should refer to people as they'd prefer to be addressed unless you're in a formal context, and that most people think of themselves by their first names rather than their surnames. But I recently heard that this is not standard in the US, and using someone's first name before they've invited you to, or without some kind of reciprocal relationship with them, is presumptuous. Is there a standard on this?
Complicating this, I understand that Scott Alexander isn't his full name.
I'm a little surprised by the question in this informal day and age. We're all on an immediate first-name basis in America anyway. Scott's writing style is very conversational, so for me anyway calling him anything but Scott would seem stilted.
For what it's worth, I read something referring to Scott as "Siskind" and it seemed kind of hostile to me-- something like childishly referring to someone by the part of their name they prefer not to be known as in this context.
1. There is a lot of genuinely low-quality sneering, most of it centering around the idea that our esteemed host and Eliezer Yudkowsky are bad people and everything they do should be discounted as a result (I agree with neither A nor B, of course).
2. They have some genuinely good Marxist or SJW points about rationalists being blind to class and privilege impinging on what they think. I am neither a Marxist nor SJW, but even your enemies are right sometimes. And sometimes it's just a matter of 'there are tradeoffs between the welfare of group A or B, I see your point but I am in group A and you are in group B'.
I've pondered this. Elsewhere, when referring to a blog author's post or writing to another poster, I've used initials (e.g. SA). Same if we're discussing a famous person cited in a previous post. It seems to me a neutral, clear and efficient notation.
Yea, well, I'd say it depends on context to make it work, and if several regulars start to do it then it becomes 'common knowledge'. Also, it's similar to how less common acronyms are used in a paper (which is essentially what initials are): the full name once, early in the writing, and then the acronym thereafter.
Thanks everyone, I’ll probably stick with Scott in future.
I’m still unsure about generalising to other websites. On the one hand, this isn’t a typically formal situation, but I’m not sure it’s unambiguously an informal one, either. I’m essentially pestering Scott at work here, this is his job and I’ve ever met him. Another Scott, the radio presenter Scott Stephens, has said that you can’t know someone unless you can smell them - that is, unless you can have a private one-on-one or small group conversation with them. Neither Scott has ever smelled me.
I recall in about 1986 getting a handwritten note from the senior tutor at my Oxford college addressed to "Dear Smith", where Smith stands for my surname. This usage was, I discovered, almost dying out, but it had been a common English trope for friendly informal correspondence, used by academics, senior civil servants, I dare say Bishops. "Dear Mr Smith" was too formal; "Dear Peter" unthinkably forward.
It's almost always just Scott. Occasionally there's a good contextual reason to (mostly satirically) use "Dr. Alexander". Infrequently one uses the full Scott Alexander to differentiate from that other prolific Jewish rational writer, Scott Aaronson.
Other blogs tend to follow the same conventions...Matt Yglesias is just Matt on Slow Boring, Zvi Mowshowitz is just Zvi on Don't Worry About The Vase, Freddie de Boer is just Freddie on his blog (which for some reason lacks a punchy name, I've always found that odd), Bari Weiss is just Bari on Common Sense, Bryan Caplan is just Bryan on <s>Econlib</s> Bet On It. And so on. Oftentimes you can pick it up from the content too - authors will refer to themselves a certain way, and that tends to reflect the comments. It's less about formality and more about reference classes...there are a million Scotts in real life, and (probably) thousands of Scott Alexanders, but within the blogosphere, a much narrower subset of blogging Scotts.
When commenting on someone's blog, I think it's standard to address them by first name only, and to refer to them in the third person by first name only, unless there are other people being discussed with the same first name.
When cold e-mailing someone, it's more standard to start with their title and last name (e.g., "Dear Dr. Alexander," or "Dear Prof. Easwaran,") and then in future replies, to address them however they signed their previous e-mail to you (i.e., with first and last name, or title and last name, if they signed with first and last name, or to switch to first name only if they signaled that this was appropriate by signing with first name only).
Talking in person, you would start with title and last name, unless they explicitly tell you it's ok to move to a first name basis (which is usually something that the higher status person has to initiate).
This depends a lot on conventions within a field/subcommunity. I hang around two different prominent universities in my field--in one, the high-status people are referred to by first name and if you say "Dr X" or "Professor X," people will tell you "we don't do that here." In the other, the high status people are "Dr X" or "Professor X," and as an outsider, how I address someone is part of establishing (or trying to establish) my own place in the pecking order.
As an American--in my experience, using someone's last name instead of their first name is primarily done in formal writing, e.g. in a newspaper article or longform nonfiction. People in customer service roles might refer to clients/customers as Ms/Mr XYZ to convey politeness, but even that is a bit old-fashioned.
There might be some professions where this isn't the case. Maybe law? Law is weird.
As for Scott--I think it's just convention in the comments section here to call him Scott. Using just a first name seems pretty typical for comments sections specific to particular writers or creators. E.g., to pull the first example that comes to mind, Bernadette Banner's YouTube comments typically refer to her as Bernadette. Now I'm wondering how speakers of other languages do this, and if it's indicative of some parasocial relationship.
I don't comment here much, but I do participate in places where Scott's work is discussed and I (slightly reluctantly) say 'Scott'. I prefer to use a more detached tone when speaking about people's ideas and use last names for intellectuals more than most people do, but I find when I say "Alexander" it sounds like I'm just confused about who I'm talking about.
> I recently heard that this is not standard in the US, and using someone's first name before they've invited you to, or without some kind of reciprocal relationship with them, is presumptuous.
This is very old-fashioned etiquette. People use first names in almost all interactive contexts these days.
When I'm discussing another person _in not-too-formal writing_ I will sometimes deal with the "first name is too informal/familiar/friendly, surname feels weird" problem by using the full name once and then their _initials_.
The thing I am most concerned about there is mostly not the informality of using their first name, it's the possible perception that you're viewing this person as a friend/ally and therefore shouldn't be trusted to be impartial. In many contexts this doesn't matter, but if e.g. there's some discussion about some probably-hostile thing El Sandifer wrote about Scott Alexander, I _am_ going to be trying to be impartial and I will call them ES and SA in order to reduce the risk that someone thinks/feels "he's writing about X as if X is a personal friend of his, so I should assume he's biased".
(If I were writing about someone who _is_ a personal friend of mine, I would say so and would use their first name throughout in order not to mislead, unless the relevant stylistic norms forbid that as e.g. in academic publishing.)
I wonder if people don't use "Mr." anymore mainly because once women became a major part of the professional workforce it was too much trouble to figure out whether to address them "Mrs." "Miss" or "Ms." (Particularly when the correct title is "Dr.')
Also wonder if pronouns will suffer a similar fate for the same reason.
My otherwise-socially-conservative feminist mother, at least, regaled me with stories about how getting folks to agree to drop "Miss" in favour of "Ms" was a victory for equality, back in her day.
Though I'm told that in the rare instance Gen Z is called upon to say "Ms." (e.g., the Ms. Marvel tv show), they're surprisingly likely to pronounce it "Miss".
And back in the 90s I was involved in a wedding where the invitation list for the bachelorette party addressed the single women as Ms. and the married women as Mrs., which struck me as kind of missing the point.
As an elementary schooler in the early 2000s, I was taught to refer to adult women this way. I didn't learn that "Ms." and "Miss" were separate things, rather than different spellings of the same word/idea, until college.
"For reasons involving the NYT article and doxxing, his *last* name (not mentioned here) is used only by his enemies."
I was not aware of this. Some questions - I'm being sincere. Not assuming you (or anyone else) will answer all of them.
Does the fact that I've used his full name (since reading Still Alive) mean I'm one of Scott's enemies? If not, who are Scott's enemies and what makes a person Scott's enemy? Where has this distinction been formulated? Do Scott's enemies agree with it, and do they identify as his enemies? Does Scott agree with it, and more importantly, does he agree with your suggestion?
What purpose is there to classify people as Scott's allies or enemies, other than the obvious tribe-forming, tribe-strengthening behavior?
What sense is there in tabooing his full name, as he has publicized it himself, deleting and recreating his entire blog during the process? Isn't letting (assumedly) malicious people appropriate the use of his full name counterproductive?
If you use a name on the internet, I am not going to investigate whether it is your real name or a pseudonym. Not even because I respect your privacy (although I do), but simply because I do not care. If someone signs their work "Scott Alexander", then for all practical purposes, that person *is* Scott Alexander. A name is just a reference anyway.
Why do I call him "Scott Alexander" when his full name was already made public? First, because this is how I have referred to him for *years*, so I am already used to it. Second, because this is how he still signs his articles. Third, because many other people call him that (for the first two reasons), so if I say "Scott Alexander", people I communicate with immediately know who I am referring to. Three reasons to use the pseudonym, and zero reasons to use the real name.
(Fourth, because not using someone's name has a kabbalistic significance. This is an inside joke. Read https://unsongbook.com/ to find out more.)
In the past, it was possible to figure out Scott's real name, but Scott asked people not to use it publicly, because doing so could have hurt him. Predictably, some assholes did it anyway, and NYT decided to go nuclear.
Since then, Scott has significantly changed his life, so the old reasons not to use his real name *now* no longer apply. (Also, anyone can find it on Wikipedia now.) So there is no harm if you use Scott's real name today. Now we avoid it mostly out of force of habit.
"Since then, Scott has significantly changed his life, so the old reasons not to use his real name *now* no longer apply. (Also, anyone can find it on Wikipedia now.) So there is no harm if you use Scott's real name today. Now we avoid it mostly out of force of habit."
This seems to be a reply to my comment, but I'm not sure whether it is directed at me particularly. However, I agree with the gist of this comment, especially the quoted paragraph - it also did not add to the view I already had. That is, I wonder whether the fact that I questioned [that Scott's last name should Never Be Used] gets interpreted as me saying there never was an issue with people doxing him (which, of course, I haven't so much as hinted at).
>Does the fact that I've used his full name mean I'm one of Scott's enemies ?
Of course not, that's ridiculous.
The heuristic of the commenter you're replying to was "using Scott's last name exclusively", which using the full name isn't. And I would say even that heuristic is pretty flawed; I can see where it's coming from, but I don't think it performs any better than random chance.
>If not, who are Scott's enemies and what makes a person Scott's enemy?
The commenter you're replying to is using "enemies" lightly in the sense of internet assholes.
- Who are those internet assholes? Who knows; the internet is anonymous, that's why it's so heckin awesome. I once searched for Scott's name ("Scott Alexander") on tumblr and saw a deluge of posts raging about how he "platforms" (God I fucking hate this word) racists and misogynists and blah blah blah. No idea if you can still find those if you searched now; that was a couple of years ago.
For a pretty reliable dumpster of Scott's (and rationalism's in general) enemies, see reddit's "Sneer Club" subreddit. Another one is MeFi, or MetaFilter, a link-aggregator site sorta like reddit or hackernews (and it's pretty good actually, assuming you don't go near any woke link). It has a huge woke bias, so every Scott link is probably them raging about how he "PLaTFOrM"s nazis. Another one is hackernews itself, though thankfully it's far more contrarian and heterogeneous on most issues than either reddit or MeFi; but that's exactly why you will occasionally see the incessant whining.
- What makes a person Scott's enemy? No idea. Scott is really the rare "mild man" type - I'm lucky to have someone like him in my personal life (though unfortunately not for long) - and it's extremely hard not to at least listen to what he says and then disagree very respectfully, if you can muster the guts to disagree at all. He commands by sheer politeness.
Scott's enemies are varied; they are anything from woke left wing to extreme king-cock-sucking right wing. It's hard to say anything generalized about them. I assume the first woke ones were feminists, all the way back in 2015 or so, for reasons that you'd better not read about, for your own mental health. Oh, good old feminists, always blazing the way in being assholes in all sorts of new and exciting manners. No idea where the first right-wing ones came from; I see plenty of them being sour because Scott steelmanned some of their arguments and then counter-argued them into the ground again, but I'm not sure that's the sore spot.
>Where has this distinction been formulated?
In the brain of anybody who sees a dedicated group of people consistently moaning about one person for a decade or so.
>Do Scott's enemies agree with it, and do they identify as his enemies?
Nobody wants to be seen as the loser who keeps raging about someone who lives rent free in their head, they will probably say something to the effect of "Scott Alexander is a representative of the Crypto-Fascists tech-industry silicon-valley tech bros who want to roll back abortion rights and the rights of the Blax communities of color", and in their own system of knowledge and ethics, this counts. But to most other people who are attracted to rationalism, it doesn't.
>Does Scott agree with it, and more importantly, does he agree with your suggestion?
No idea, and no idea. _I_ would say Scott *definitely* agrees there are people who hate his guts, however (un)charitably he internally justifies or models their behaviour. He also seems pretty sour about the NYT thing. But whether he thinks calling him by his last name is a marker of belonging to this group is much more open to debate.
>What purpose is there to classify people as Scott's allies or enemies, other than the obvious tribe-forming, tribe-strenghtening behavior?
Classification is one of life's eternal hobbies, and intelligence's first baby step. Every AI course begins with a baby-mode "Cat or Not" classifier. Even bacteria classify their pond water into "Rich in Sulfur" and "Not Rich in Sulfur". If you're *any* sort of goal-having, goal-seeking agent, you have to classify things. Friend-(X)Or-Foe is a pretty vital distinction. How can you even prevent yourself from noticing that there are consistent markers for a group of people who don't give a flying tick about any of your ideas, and dedicate an unhealthy portion of their existence to hating you?
> Isn't letting (assumedly) malicious people appropriate the use of his full name counterproductive?
Absolutely (though I will still point out that the guy or gal you're replying to only said "last name exclusively", not the full name). But also, it's good and wholesome to know what upsets the people you respect and then not do it in front of them. If a bunch of assholes keep calling your glass-eyed, red-haired friend a ginger four-eyed nerd, then - unless you're so intimate with the guy that you can single-handedly redeem the words - you'd better stay away from using those words in front of him. There is nothing inherently taboo or insulting about "ginger" or "nerd"; they're just associated with people who make it clear they hate the guy they're describing.
That's all assuming the original commenter's reasoning, which I find a bit leapy. Calling Scott by his last name is a bit weird, for sure - I have never seen anyone, himself included, call him that - but it wouldn't even count for a 0.001 in my calculation of whether someone is a Scott enemy.
I can understand your POV. A couple of points besides that:
"(and I will still correct you that the guy or gal you're replying to only said "last name exclusively, not the full name)"
They said this:
"For reasons involving the NYT article and doxxing, his *last* name (not mentioned here) is used only by his enemies. So never use that."
I did not gather the 'last name exclusively' reading from this, and particularly not the "not the full name" part, and I apologize if that is a very relevant part of the question. However, I can also ask the same questions about using only his surname, which I have also done - not knowing that some people who are not Scott use this as a marker to classify me as one of his enemies, that is, apparently, as a random asshole on the Internet.
"But also, it's good and wholesome to know what upsets the people you respect and then not do it in front of them."
I wholeheartedly agree. I seriously question this point's relevance to using Scott's full name, though. Has he asked to be referred to primarily by his pseudonym? The situation has markedly changed from the days of SSC, precisely because he is no longer (relatively) anonymous.
"Calling Scott's by his last name is a bit weird for sure"
I routinely refer to celebrities by their last name. That is, I talk about Yudkowsky, Hanson, *******, Tao, Pele, Einstein and Trump, not Eliezer, Robin, Scott, Terry, Edison, Albert and Donald. I think this is normal.
It's just a signal, like "Barack Hussein Obama" or "Donald Drumpf" -- sure you could use those even if you supported the person in question, but due to the peculiarities of history people may assume that you don't like them.
Barack Obama's middle name is not Hussein, and Donald Trump's last name is not Drumpf. Scott's last name is what it is, and he personally celebrated it in ACX:s opening post. I do not think this analogy works.
I think we disagree about our understanding of "last name": the one I have in mind is "Siskiend" (I butchered the spelling, I know), while the one you have in mind seems to be "Alexander"?
I mean, I'm okay with both. I have made it clear that the point is mostly unconvincing to me, I'm just playing the role of the original commenter's advocate, that's all.
>I did not gather from this the 'last name exclusively' part,
Yes, I realized this just now, I agree it can be understood both ways, perhaps even favoring your way.
> classify me as one of his enemies, that is, apparently as a random asshole on the Internet.
I doubt you should care about the opinions of someone who uses a single word to deduce something like this about you, eh?
>*******
Seriously, it's okay. Say it: his name is Scott Alexander Siskind; there is no taboo about it, and you don't have to star anything. The commenter who started the thread just shared a (quite frankly) flawed heuristic, but people are not dumb algorithms. If you're on Scott Alexander Siskind's blog writing comments that are not 51%+ insulting Scott Alexander Siskind, you probably don't hate Scott Alexander Siskind, or Scott, or S. Alexander, or SSC, or Siskind, or S.A., or S.A. Siskind, or Unsong's author, or the Jewish Author That Loves Writing And Is Enviously Good At It, etc. I can't speak for Scott, but those all strike me as pretty reasonable aliases for him.
"I think we disagree about our understanding of "last name""
I'm referring to ACX's writer's (Scott's) family name, the one the NYT threatened to and eventually did publish.
"I doubt you should care about the opinions of someones who uses a single word to deduce something like this about you, eh ?"
Absolutely. Let me emphasize that I do not feel that the OP's comment had anything to do with me personally, but I did use myself as an example, mostly to preempt the possible if unlikely line of "Even if you don't see how using his last name is offensive - well, let me point out that you don't use his last name, but guess who does? Evil enemies!" (yes, I'm also kidding a bit).
Earlier you said that I'm overthinking this, and that the commenter I replied to used the word 'enemy' lightly. That is exactly the relevant part and what I'm opposed to: the cheap usage of the word 'enemy'. I agree that my comment represents overthinking, and I wholeheartedly believe that whenever the word 'enemy' is used, one absolutely should overthink the case. This - being extremely wary of groupthink, halo effects (negative or positive), ingroup-outgroup biases, etc. - was one of the first things I learned when reading the Sequences, and I think it is a very valuable idea.
That is, I consciously decided against a charitable interpretation of their words due to the (psychological, social, cultural) risks associated with using allies and enemies as a framework for thinking. I do not think this is the place for that sort of language (I do NOT mean that words like 'enemy' should be banned or something equally stupid).
I agree with several things you said, and I agree with the last paragraph.
(Also, while I can understand why using his real last name might still be "outgroup-coded," Scott openly uses it in "Still Alive," this blog's second-most popular post, and in his Lorien Psychology project, so I don't think it really has to be kept secret anymore)
I don't really care much about this spelling variant, but if you're arguing "doxxing" is incorrect, I'll argue back.
A quick search reveals no references to this supposed letter "exx" other than by Stallman, so if it does exist, it's at least pretty obscure. The mispronunciation of "doxing" as /doʊksɪŋ/ seems much more obvious than the mispronunciation of "doxxing" as /dɑxɪŋ/. Avoiding forming digraphs with other meanings does not seem like a rule English actually tends to follow (e.g. "cooperate", "lighthouse"), but doubling final consonants in cases like this when adding a suffix is. I read that the letters "x" and "v" have often historically been exempted from this rule because people thought it looked bad or something, with this exception becoming less common lately, although writing this now I'm doubting my memory/source for this. Anyway, my main point is that consistency is a better reason than most to prefer one spelling (or other linguistic feature) over another, and that points to preferring "doxxing". (Also, apparently it's sometimes spelt "doxx" even without a suffix, but I don't see any particular reason that would be preferred.)
It's spelt with a double-X because it's Internet slang, and playing with English spelling "rules" is a common Internet game. Wikipedia (which prefers "doxing", FWIW) claims that the original spelling was "d0xing": https://en.wikipedia.org/wiki/Doxing
I think RMS is joking there: nobody would write "yech" as "yexx", and the standard way to render the Greek letter chi in English is "ch", as in technology, chirality, archaeology, etc. Besides, language is defined by usage, and "doxxing" is at the very least a widespread usage.
Edit: Also, "Exxon" is not pronounced "Ecchon", and doesn't derive from a Greek word! It was a computer-generated variant of their previous name "Esso", which is derived from "S. O." for "Standard Oil", and is still used in much of the world. The Standard Oil Company of New Jersey switched from "Esso" to "Exxon" after other Standard Oil descendants sued to stop them using the "Esso" brand in their territories: https://en.wikipedia.org/wiki/History_of_ExxonMobil#Exxon_Corporation_(1973-1999)
Further, it's my understanding that they deliberately chose the 'xx' spelling of Exxon *because* this pretty well guaranteed that no one was using that name or word in any prior sense anywhere.
You are suggesting. We're do you have to be in life to get something different? I understand and then it doesn't matter. I just wrote something without thinking about any of it. I never want to say anything in person or typing. I never wanted to sound like anyone because I am only what I think. I could sit around and beat myself up of make myself feel bad because someone had a thought. When it comes to needing to be aware of what I'm going to say. Words on earth will always be followed with more words. Say something good to someone today or don't. It's like asking me if a tree falls in the woods.....? I would stop you right their and tell you i wasn't there so why would you celebrate sence with something so common.
Would you pay $1 a month for really good podcast Adblock? What about $10?
$10 nope, $1 maybe. Most of that maybe is because I don’t like subscriptions. If it were possible to get into the epistemic state where I believed that paying $100 would mean I only had to listen to single-digit podcast ads in the next decade, I would gladly pay it, but it seems unlikely I could reach that epistemic state even if the product actually existed.
Per this idea they in fact are (in the sense of having qualia subjectively experienced by their constituent matter in a manner determined by and inextricably linked to its current physical state), they just aren't self-aware. Arguing against the proposition that everything is (to some extremely primitive degree) conscious by rhetorically asking "why isn't x conscious?" is begging the question, wouldn't you agree?
Context?
Does anyone know how, exactly, the use of semaglutide as a weight loss drug got popularized? Scott started writing his article on it before Elon Musk endorsed it, so it isn't Musk, first and foremost.
So, has anyone here gotten caught in the California flooding?
Scott, are you aware that you were cited three times in a just-published takedown of common good constitutionalism in Harvard Law Review? https://harvardlawreview.org/wp-content/uploads/2023/01/136-Harv.-L.-Rev.-861.pdf
I don't think that most people are even aware of when and where they are cited in general.
Perhaps imprecise phrasing on my part. I meant it as "I thought you might be interested to know that...". Considering that the article in question is in a field of study a bit far removed from the remit of this blog.
Historically, how many places have actively advertised for immigrants? I know that Texas did when it was part of Spain and Mexico and when it was an independent country (not so much recently tho). What other places have?
I don't know if this counts, but some of the railway companies in the US advertised (and subsidized) some immigrants from England and Scandinavia to move to the plain states and be given farmland near their railroad lines.
They wanted to ensure there was adequate demand along the newly constructed lines, to help them become more profitable more quickly.
Back in the 19th century, Canadian authorities were very keen to settle the prairies with Europeans so the area wouldn't be de facto claimed by the US. They even hired agents in Europe who persuaded whole villages to emigrate to the New World en masse. I expect in the course of this campaign they published plenty of advertisements.
Rhodesia after WWII also went out of its way to attract white settlers.
Doug Saunders' *Maximum Canada*, which strongly influenced Matt Yglesias' *One Billion Americans*, goes into a lot of detail about Canada's ambivalent attitude towards immigrations in the 19th century.
I am pretty sure the Mormons did early on, in particular for wives.
The Statue of Liberty has the "give me your refuse" plaque.
Where can one see how one did in the Astral Codex prediction contest from last year?
I've been talking about the importance of teaching adult literacy for a long time, and it seems like I'm talking into a blind spot, as though people generally believe that if you didn't learn how to read in school, it's hopeless.
A man who's teaching himself to read at age 33.
https://www.latimes.com/california/story/2022-12-28/learning-to-read-one-tiktok-at-a-time
His TikTok:
https://www.tiktok.com/@oliverspeaks1/video/7186741515685858603
The NAACP is pushing phonics:
https://www.washingtonpost.com/education/2022/12/27/phonics-reading-virginia-naacp/
Discussion: https://www.metafilter.com/197894/The-real-question-is-why-he-decided-at-age-33-to-learn
My facebook discussion: https://www.facebook.com/nancy.lebovitz/posts/pfbid02WjwnmawAmHdaF5yv16J1wQxjFCLHLMASyN5d3GY8DgCTAeooBor5Wg9HKSxKrgChl
We have TV ads for adult literacy programs every so often in Australia.
Do we still? I remember the old ones, but I haven't heard that one-three-double-oh-six-triple-fiiiiive-oh-six jingle in ages.
I wanted to say hello. I have been traveling in my whole life to get here. Impartial, I would like to place 5% of the blame for my being late on opinions. If I had a dream it would be. May opinion never get between us. We are all faced with with the daunting task of coading and decoding information in the real world experience. I believe the unhealthy opinion is born when we are so quick to want to say something. Not speaking toward any one here. Just speaking for my life. When I take information from someone I fuses it or reject it by what I am receiving from the person. I tend to not want to say anything. Except I always want to engage in the opinion when it comes to just. That right there needs to be destroyed. I can't act that way. I seem to find myself in some what of what top gun is about. Just as dangerous due to I'm the pilot and the aircraft. If you crash their is no coming back. I understand that I have disorder when it is coming from crowd that doesn't see it that way. Wow it's a wonder their are not as many names for disorders as their are people. Wait they give us names at birth. Since we run out of names the number we have is the our disorders indentation. I truly try to be a better me than the person I was yesterday. I tried to stay the same but as each day I pass threw I can't be like anything that who I believe I want to be. I'm guessing that is why we say love by the code. Love is only a word. How will you know it if all you do Is hear it. If I say die do you die? I say live it goes off and dies. I just do what I can when I see it. I speak positive and I try to be respectful by smiling and staying quiet when it comes to those who are people to take from. My everyone live one way or other. See you on the Battle field of. Fight well. Who ever you may be. Be it. So that we may always see where you are.
No offense, but is this a bot post, or have you been diagnosed with schizophrenia?
You know, Carlos, it's really quite possible that this person does have schizophrenia. It's not a rare disorder. If it's a bot post, what would be the point of sending the bot here? The post is not at all offensive, and is way too disorganized to be an effort to convince us of anything. And I don't see any jokes or references to ACX topics, which if present would probably indicate that someone is pulling our leg. So I'd say your post is in fact offensive. Sort of like "hey, buddy, did you get your head shaved or are you having chemo?"
Well, I wanted to engage with it due to its bizarreness and couldn't think of anything else. I don't think there's a polite way to ask someone if they have schizophrenia, and I am curious.
I suspect someone is pulling our leg here.
Do you believe there is little time left before transformative AI? Are you willing to put your money where your mouth is? Apparently, you can win big if you do:
https://forum.effectivealtruism.org/posts/8c7LycgtkypkgYjZx/agi-and-the-emh-markets-are-not-expecting-aligned-or
because markets are not expecting transformative AI any time soon, meaning there are absolutely massive inefficiencies to exploit if AI timelines are as short as some are predicting. Some are even expecting a phase of explosive GDP growth (30% per year). I don't think I buy that, because it seems to me there is too much inertia and complexity for the market to transform that quickly in response to any tech, but that's just an uneducated intuition. I may throw some money at the funds indicated in the article, if only to save the world: if I win on those, they would be the first successful investments I've ever made! The odds of it panning out strike me as minuscule, so the fact I'm considering investing like this maybe indicates the reasoning in that article (and by extension, that of the AI safety community) is shoddy.
Can there really be heaps of gold lying on the ground right now? What do you think?
Are there analogous eras when a new invention was 1) fascinating to those paying attention, but error-prone, unrefined, and not widely used; and 2) on the verge of changing the economy, as we know from hindsight?
If so, you could compare with financial markets at that point in time.
In recent living memory: personal computing and the internet. You could have made a lot of money buying companies like Microsoft at the right time. On the other hand, there was the dot-com bubble, where you could have lost a lot of money buying the wrong companies.
The difficulty is that "changing the economy" is step (3). There is a step (2), "some companies succeed at commercializing the invention and some others try and fail, stocks go up and down while people try to figure out which companies are which".
The EA forum post linked above cuts through the difficulty of picking the right stocks by suggesting that one should bet on the real interest rate instead. I am slightly unconvinced by their plots because (a) the interest rate data they have goes back only to the early 1990s (real) or the 1950s (nominal), so it misses some of the big transformative changes that could guide our intuition, and more importantly,
(b) in my layman's understanding, interest rates and GDP growth correlate in quite complicated ways, so plotting one against the other may be less informative than one would think.
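(To make the mechanism concrete: bond prices move inversely with interest rates, so someone confident in short timelines could short long-dated bonds and profit if markets reprice. A minimal sketch with made-up numbers - my own illustrative figures, not ones from the linked post:)

```python
# Price of a zero-coupon bond discounted at a flat annual rate.
def zero_coupon_price(face: float, rate: float, years: float) -> float:
    return face / (1 + rate) ** years

p_before = zero_coupon_price(100, 0.02, 30)  # ~55.2 at a 2% real rate
p_after = zero_coupon_price(100, 0.05, 30)   # ~23.1 if rates jump to 5%
drop = 100 * (1 - p_after / p_before)
print(f"{p_before:.1f} -> {p_after:.1f} ({drop:.0f}% drop)")  # ~58% drop
```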
I have a friend who's suffering from depersonalization / derealization disorder (DPD), the persistent sense that oneself / the external world is not real. DPD seems to be inherently poorly understood, and I'd be grateful for advice from anyone who has successfully treated or cured this condition, or could recommend a therapist in the Boston area.
Here's what I've got so far in terms of a bro science treatment protocol. I don't think there are any miracle cures or One Weird Tricks, but it seems like there are many bits of applied common sense that could plausibly help and are unlikely to hurt:
- Achieve brilliance in the basics of sleep, diet and especially exercise.
- Proactively limit and manage stress.
- Go outside, get as much sunlight as you can, quite literally touch grass.
- Engage in physical hobbies such as crafts, gardening and the like, that involve a lot of multisensory integration and not a lot of high-level thinking.
- Socialize IRL, especially in comfortable, low-stakes social situations.
- Limit screen time.
- Keep a journal of symptoms and note what aggravates or ameliorates symptoms but don't otherwise obsess over them (spend, say, 5min/day on journaling).
- Avoid sitting alone in your room pondering the nature of reality or otherwise ruminating.
- Try reciting common-sense mantras / affirmations ("The world is real. My name is X Y. I am sitting in a room right now" &c) if helpful but do not obsess over them.
- Engage moderately in your religious / spiritual tradition if applicable but do not attempt any week-long kundalini benders or whatever.
- Check for and address feelings of inadequacy / excessive self-criticism / low self-worth through CBT, talk therapy, or similar. (For some reason this seems to be a common co-morbidity of DPD.)
It seems that episodes of DPD often resolve spontaneously, but I think it's worth trying to resolve it as quickly as possible and limit the chances of future relapses. Thanks in advance for any breadcrumbs or advice.
DPDR is *very* unpleasant. I had it myself when I was an undergrad, after a bad drug experience. I did not seek any treatment, in fact for some reason did not even think of it as an illness, and it eventually faded away after about 6 months. I'm a psychologist who treats OCD. I think DPDR is a form of OCD, or at least a close cousin of it. Somebody with regular OCD might check that their door is locked over and over again -- people with DPDR keep checking over and over again to make sure they and the stuff around them look and feel "real." Of course doing that makes it feel less real, because they're staring at their hands, at the picture on the wall, etc., looking for a feeling of realness -- which is a weird activity that makes everything you're looking at seem sort of arbitrary and peculiar. It's like saying a certain word, like say "mosquito," over and over til it sounds like an arbitrary sequence of sounds, instead of like a familiar word. I think what keeps DPDR going is that the person harbors a belief that if they did not do this perpetual checking for realness, things would feel even *less* real, and the idea of them feeling any less real is terrifying.
The best approach to dismantling the mental checking is to do less and less of it. When you do, you discover that things don't feel less real because you're not slaving away at trying to make them feel real. However, it's not possible to just make yourself not think about realness. It's really not possible to make yourself not think about anything that you crave to think about. What you can do, though, is to spend time doing things that capture your attention so much that there's not much room left in your mind to obsess about realness. Something like skiing, or any thrill sport or really any active sport, is excellent for capturing attention. So is dancing. Outside the realm of vigorous physical activity, what captures somebody's attention depends more on their personality, but there is almost no solitary activity that is likely to work. Here is a random scattering of things that are pretty engaging for people who have a taste for them: tutoring a small child; helping a friend move or repaint their kitchen or put together Ikea furniture; clicker training your cat; gardening; playing music or singing with a group; fancy, complicated cooking.
I noticed in the comments here somebody recommended the Reddit sub for DPDR. I actually do not recommend it. In online forums for people with health problems, people with severe and/or incurable cases are way overrepresented. There's a sprinkling of people in the process of getting better, and a very few people who have recovered and are sticking around to help others, but the reader's overall impression is likely to be that once you get this illness you are stuck with it for life. I'd recommend instead that you or your friend do a Google search for stories of recovery from DPDR. I'm sure there are some out there.
If your friend decides to see a therapist, I'd recommend looking for one who describes themselves as a specialist in either OCD or DPDR, and says they use CBT (using other approaches in addition to CBT is fine, but CBT should be on the list). You can find OCD specialists at iocdf.org. Boston is probably the best town in the country for finding OCD specialists, because the OCD Institute is there. There are many therapists who have lots of good training and lots of experience with OCD and related disorders. The only bad thing about the Boston psych scene is that most specialists in private practice do not take insurance. If your friend has a kind of insurance that allows them to see what's called an "out of network" provider, they can probably get reimbursed by insurance for about half of what they've paid. If your friend doesn't have that kind of insurance, it's worth coughing up the cash if they possibly can. Treatment of DPDR is not a long process -- it should be doable in 3 months or so. I do not think that having DPDR is grounds for believing somebody needs a complete psychic overhaul via years of therapy. It's an anxiety loop that the human mind can get caught in, and treatment that focuses on breaking the loop, without searching at length for some reason why the person was vulnerable to it, works well. There's a therapist in Lexington named Jim Vermilyea whom I recommend highly. He's in practice with some other people who are probably also very good, because he wouldn't have let them join his practice if they weren't.
I'm sure there is a recommended drug treatment for DPDR, probably some SSRI. However, SSRIs aren't magic bullets, despite what the drug companies would like us to think, and they often interfere with sexual pleasure and cause weight gain, so there's definitely a downside to taking this road.
I am so glad you chimed in here. I was especially concerned seeing a subreddit recommended, since visiting mental health forums is usually a terrible idea (for the reasons you mentioned). Treatment for OCD can be so counterintuitive and a lot of things that seem like common sense will just make it worse.
Looked at the site you linked -- that looks great! And it's always good to find a resource that's more affordable than individual psychotherapy.
Yeah, I'm in a weird spot where I haven't actually had DP/DR, but it's one of my OCD themes. Just reading the articles on that site helped me better understand what it actually is and helped me come to acceptance with it.
Very hypothetical, but how about observing whether it's better or worse at some times than others, and possibly gathering some clues that way?
There’s a DPDR subreddit, which might be of some help.
A lot of New Agey traditions have things like "grounding exercises" to help counterbalance the weird mental states their other techniques conjure. Things like standing barefoot in the grass, naming the things around you, exercise, breath techniques, etc. The kundalini subreddit has some good examples in their wiki.
As I understand it, this is largely an anxiety symptom and has some elements in common with OCD. This site is a good source of info: https://www.dpmanual.com
Thanks, will read
Do you know of any religions, philosophies, or ideologies that correlate (positively or negatively) with DPD?
My guess was that DPD might correlate positively with religions which teach the concept of "philosophical realism", the belief that the words we use should refer unambiguously to discrete entities (possibly material, but often to an atomic spiritual essence of an entire named kind of material thing, as in ancient Greek myth, Aristotelianism, or many Native North American myths) which have a clear and firm boundary or definition. Some teach that these discrete real entities exist (e.g., Christianity); some teach that they don't (Hinduism, Buddhism).
Buddhism in particular seems to teach acceptance of DPD as its core doctrine.
People who believe in philosophical realism in the modern world should logically either deny that realism, embrace something like the Buddhist concept of emptiness, or conclude that the external world isn't real. So possibly this isn't always a mental disease, but can be caused by having the mental acuity to actually believe your metaphysical "beliefs", or to comprehend their consequences.
https://my.clevelandclinic.org/health/diseases/9783-dependent-personality-disorder says:
Certain cultural and religious or family behaviors: Some people may develop DPD due to cultural or religious practices that emphasize reliance on authority.
To answer my own question, Sartre's famous novel *Nausea*, which is often cited as the best existing explanation of existentialism, sounds like an extended description of DPDR. For instance, read this famous passage:
<<<
Black? I felt the word deflating, emptied of meaning with extraordinary rapidity. Black? The root was not black, there was no black on this piece of wood—there was . . . something else: black, like the circle, did not exist. I looked at the root: was it more than black or almost black? ... I had already scrutinized innumerable objects, with deep uneasiness. I had already tried—vainly—to think something about them: and I had already felt their cold, inert qualities elude me, slip through my fingers. ... And the hand of the Self-Taught Man; I held it and shook it one day in the library and then I had the feeling that it wasn’t quite a hand. I had thought of a great white worm, but that wasn’t it either. And the suspicious transparency of the glass of beer in the Café Mably. Suspicious: that’s what they were, the sounds, the smells, the tastes. When they ran quickly under your nose like startled hares and you didn’t pay too much attention, you might believe them to be simple and reassuring, you might believe that there was real blue in the world, real red, a real perfume of almonds or violets. But as soon as you held on to them for an instant, this feeling of comfort and security gave way to a deep uneasiness: colours, tastes, and smells were never real, never themselves and nothing but themselves. The simplest, most indefinable quality had too much content, in relation to itself, in its heart. That black against my foot, it didn’t look like black, but rather the confused effort to imagine black by someone who had never seen black and who wouldn’t know how to stop, who would have imagined an ambiguous being beyond colours. It looked like a colour, but also . . . like a bruise or a secretion, like an oozing—and something else, an odour, for example, it melted into the odour of wet earth, warm, moist wood, into a black odour that spread like varnish over this sensitive wood, in a flavour of chewed, sweet fibre. I did not simply see this black: sight is an abstract invention, a simplified idea, one of man’s ideas. That black, amorphous, weakly presence, far surpassed sight, smell and taste. But this richness was lost in confusion and finally was no more because it was too much.
... The essential thing is contingency. I mean that one cannot define existence as necessity [presumably a reference, but to whom?]. To exist is simply to be there; those who exist let themselves be encountered, but you can never deduce anything from them. I believe there are people who have understood this. Only they tried to overcome this contingency by inventing a necessary, causal being [God]. But no necessary being can explain existence: contingency is not a delusion, a probability which can be dissipated; it is the absolute, consequently, the perfect free gift. All is free, this park, this city and myself. When you realize that, it turns your heart upside down and everything begins to float,
– (Sartre 1938, transl. Lloyd Alexander 1949; the third Monday, 6:00 pm; from Google's scan of the New Editions 2013 printing, which lacks page numbers)
>>>
Sartre describes his nausea as being due to realizing his own "contingency", which is philosopher-speak meaning that he wasn't necessary to the universe--that it would have been possible for him never to have been. In Platonist philosophy, this means that he isn't real, as every Real thing is eternal, not temporal and contingent.
Sartre probably learned to consider temporal existence unreal from Hegel, though perhaps not directly. Hegel wrote in The Science of Logic, "The idealism of philosophy consists in nothing else than in the recognition that the finite is not truly an existent" (that which exists only temporarily, was never real).
Yeah, really agree. I read Nausea after I'd had my own 6-month episode of DPDR, and not only did I recognize right away that Nausea was about the same kind of stuff, but I actually found Nausea quite disturbing to read. I felt afraid it was going to trigger another episode of DPDR. It didn't, though.
I had much the same experience as you as an undergrad (drugs exacerbating latent DPDR / existential vertigo); I couldn't put Nausea down even though I thought it might be an info hazard.
Camus' Myth of Sisyphus was the antidote for me, at the time
How did Myth of Sisyphus relieve you?
It seems to me that DPDR is one link in a giant web of problems induced by Western philosophy. It begins by taking mistakes of Plato as foundational. Foundationalism itself is one of those mistakes we get from Plato--the belief that you need to start with some certain truth, and then build on it deductively.
In foundationalism, the foundations can never be questioned. When the foundations are wrong, as in Plato, those errors can never be fixed. Instead, they propagate through belief networks. The initial error about X, E(X), leads to some obvious contradiction further down the line involving Y; but this can't be resolved by questioning E(X), so the contradiction is resolved by instead believing something wrong about Y, E(Y). This in turn leads to a contradiction with Z, and a compensating erroneous belief E(Z). So we see Western philosophers performing worse than random at even easy questions, like, Does the material world exist? Is life desirable? Is pain good? Is pleasure bad? Answer one of these wrong, and you'll likely answer the others wrong as well.
In this case, we begin with Plato's assertion that the Real consists of pure, eternal, transcendental, absolute Forms and Truths. This mistake leads to the belief that our lives ought to have some transcendental "purpose" or "meaning" derived from God--the second mistake.
The second mistake causes us great dissatisfaction with the messiness of reality, a distaste for life, and sometimes even a feeling of ghostliness--DPDR.
To get past DPDR, Camus proposes in the Myth of Sisyphus that we must acknowledge the absurdity of our lives. This seems to me to be the third mistake: the wrong belief that we should live with contradictions. We make this third mistake to protect us from the consequences of the second mistake.
This "third mistake" is quite common in the history of Western philosophy, whether it's to embrace contradictions (as Camus says), or not to acknowledge or even look for them. Other examples include:
- the development of the concept of "mysteries" by the Catholic Church, which teaches that it's necessary to be able to believe contradictory things at the same time (notably about the Trinity and Christology)
- Hegel's "dialectic", which teaches that contradictions need never be resolved, but should be welcomed and synthesized into a new wisdom formed by accepting both branches of the contradiction
- Kierkegaard's critique or parody of Hegel's dialectic, which I don't understand, but it definitely involves accepting things without understanding them
- the use of phenomenology by the Nazis to reject the validity of traditional values, empirical data, and non-contradiction, in favor of subjective feelings, "lived experience", and "authenticity"
It's found especially in totalitarian regimes, which must teach their citizens to hold contradictory beliefs–or rather, not to believe or disbelieve, but merely to accept. In fiction, this is "2+2 = 5" from *1984*, "there are 5 lights" from Star Trek: TNG, and Plato forbidding the citizens of his Republic from studying philosophy before age 50.
I don't think all the different instances of it arose from the same chain of wrong beliefs (the Real is eternal > the temporal and contingent is unreal > we must be able to hold contradictory beliefs). There's a web of basic philosophical questions common to most philosophies, and whenever the answer to one of these questions tells you something about the answer to others of these questions, accepting a wrong answer to the first forces adoption of a wrong answer to all the others. But you can take those other nodes of the web in any order, with the consequence that there are many possible paths from any of the foundational beliefs to any derived belief.
If this continued indefinitely, a well thought-out and self-consistent belief system which began with a foundational error would eventually converge on pure error--an ideology of nothing but wrong or incoherent beliefs. The "third mistake", that we should reject reason, is necessary both to avoid renouncing the mistakes already made, and to stop the propagation of mistakes before the system becomes so wrong that it ceases to function.
I have never heard it called existential vertigo before. Did you coin that phrase? I like it.
I don't know. I do know of a couple cases where intensive Buddhist meditation practice led someone to a state sort of like DPDR.
Do you know if they liked being in that state?
The one I knew well was pretty anguished. He absolutely hated it. Along with his sense of being unreal, he had developed a habit of watching his breath, and a fear that it would stop if he did not make sure he was breathing "right" -- fast enough, deep enough, with just the right amount of attentiveness to the sensations. He was afraid to fall asleep because he felt like loss of consciousness equaled loss of the little self he had left, and what if when he woke up he could not find that little bit again.
Sign them up for a working vacation in a labor-intensive third-world industry. Maybe coal mining or something.
They will quickly become reacquainted with how real the world is. (Real enough!).
Has this actually worked for you, or are you grinding an axe?
I mean, I for sure find hard manual labor something that prevents existential maundering. But yes, I am also grinding an axe. Someone who is having serious difficulties because they worry the world isn't real needs to, as the kids say, "touch grass".
For example, for the past few weeks I have been spending evenings doing maintenance on a public skating rink as a volunteer. Dozens of hours shoveling snow and dragging/holding hoses, often in sub-zero temperatures and one night with a -45 wind chill. Maundering doesn't come to mind because you've got too many other pressing problems.
I am half convinced a lot of modern "anxiety" is because the human mind is wired for a general level of day-to-day anxiety, and so when we have constructed a life/society with so little actual need for it, people find stupid shit to get anxious about.
Have you read the Hitchhiker's Guide to the Galaxy? Google "crisis inducer".
Oh yeah I love those books.
If someone goes to a Buddhist retreat for some intense meditation over days or weeks and they can't turn off 'monkey mind' (the severe form of background narration that happens when our mind isn't 'doing something'), someone will help them out by putting a broom in their hand or having them do dishes or other repetitive, mindless tasks to get things to quiet down so the hapless meditator can get a start.
I have always experienced a lot of anxiety and I know that mowing the lawn or re-staining the deck or as you mentioned shoveling snow will always help me settle down.
Shoveling snow is especially appealing. I like to wait until the snow stops completely and go out after dark. The temperature usually drops, the night sky clears and I usually have the neighborhood to myself. It can take on the aspect of a mystical sacrament at times. A bit like making the sand mandalas that the monks carefully create and then immediately sweep away. A visual aid to appreciate the concept of impermanence.
Just like the mandalas, my tidy driveway and sidewalk will soon be covered with new snow.
I agree with your point about modern anxiety. Something I often think about is that a century ago, people understood far more about the everyday objects around them:
Anybody can understand reasonably well how a horse pulling a wagon works, or an ax, or a staged play or concert, how a fire warms a room, etc etc. But most people understand very little about the modern equivalents: cars, jets, electronic entertainments, etc etc. It affects your feeling about your life to understand so little about basic elements of it. And I'm sure the cave man in all of us feels anxious because he knows that if it all breaks, we wouldn't be able to come anywhere close to reconstructing it ourselves.
I had a similar thought the other day. When humans were primitive, our habitat was the natural world, in the sense that survival consisted primarily of interacting with things that exist in nature. That habitat was understood in terms of magic -- we created myths to explain natural phenomena and ascribed mystical properties to natural things. However, we developed extensive practical knowledge of those magic phenomena -- we could manipulate fire, and predict buffalo herds' grazing patterns.
As humans civilized, human artifacts came to play an increasingly large role in the human habitat, yet those artifacts -- by virtue of being the creation of human minds -- were not magical. While we could not explain the physics of how a bow shot an arrow, the component parts were visible and we understood their various functions and how they worked together. This remained the case for the bulk of human history, perhaps even through the start of the Industrial Revolution.
As human artifacts came to dominate our habitat (e.g., cities, themselves a human artifact), natural phenomena both played less of a role in daily survival and became less magical. The Scientific Revolution began to explain natural phenomena in non-magical terms, and while those explanations remained inaccessible to ordinary people, people accepted that nature operates according to laws rather than magic. Daily survival increasingly consisted of navigating a habitat composed of human artifacts that were generally comprehensible to the average observer (a loom, a hearth, a mill).
However, with the advent of modern technology and the rise of a post-industrial society, daily survival consists of navigating and manipulating human artifacts about which the average person has extensive practical knowledge but no scientific knowledge. This is both because human artifacts are increasingly complex and because the division of labor in a post-industrial society permits the average person to be ignorant about their complexity. I spend 10 hours a day on a computer but know nothing about how a computer actually works. We understand these artifacts are not magic, but we cannot explain them ourselves.
So we are reverting to a state of understanding of our habitat similar to that of primitive man - we know how to manipulate our habitats in order to survive, but we cannot explain our habitats. The difference is that primitive man at least thought he could explain his.
Yeah, and primitive man thought gods or magic were running the show. That's a lot more comforting, thrilling and special than thinking that Elon Musk & Mark Zuckerberg are in charge. They are a truly tiny and nasty stand-in for gods. Eww.
I dunno. I feel like anxiety issues have gone to the Moon just in the last 50 years. Young people in my kids' cohort seem significantly more anxious than I or my friends were at their age decades ago -- and it was hard to explain nuclear fission or the transistor back then.
What about the explosion in communications and always-connectedness that the Internet and devices have caused? At least there's the virtue that the growth in potential cause and effect have tracked each other. And...it seems to me people are almost always more self-aware, nervous, anxious -- whatever their inherent level of social skills -- when they are aware they are being watched by strangers.
These days, in much of what we do, we're always being watched by hundreds to hundreds of millions of strangers. And not even just strangers! There are very few moments of the average day when you're *not* potentially in touch with your wife, your husband, your parents or children, all your friends from the most intimate to the most casual (not to mention assorted past flings and affairs, sometimes). Your boss, your employees, your customers, your clients. When I was young, there were big chunks of the day when I was out of touch, unreachable. Walking to school, 20 minutes where my mother couldn't call me, no friends could text me. Driving to work, nobody could call me. When I got home from work, my boss couldn't reach me except in the direst emergency. If I talked to an acquaintance by phone, or wrote a letter, nobody else overheard the conversation, the way zillions of people do when we write on our FB page, post a comment, Tweet.
It's like the Panopticon crossed with the Stasi, where everyone watches everyone, not in general with malevolent intent, but... it doesn't seem natural to our monkey brains, I bet.
Yes, I think that's true. I remember the first time I ever posted anything on an online forum, probably about 25 years ago: It felt like a huge deal, like being on Dancing with the Stars or having an editorial in the NYTimes. Thousands of people were going to read my words. I felt nervous and excited and presumptuous. Now of course I'm used to it, like everybody else, and yet I think a part of my brain is still registering that a LOT of people are reading this, and I know very few of them. Even on here, even among the names I recognize I don't know most people's gender or age. And then of course fairly regularly you get a reminder that some members of the group you are speaking to are not the kind of person you'd ordinarily disclose anything to, because they make clear right away that they despise your ideas and hate you for having them. Heh.
That sounds like a very fulfilling use of one's time. As I mentioned in the OP, I think manual labor is good! As is--also already mentioned--literally touching grass.
Consider the analogy to depression: from the outside, depression often looks like someone being a little mopey, slow, or avoidant. Big deal! Cheer up, you big baby!! But dig a little deeper and it turns out that severely depressed people sound like they're suffering more than cancer patients, who objectively suffer a lot but seem to adjust and even find the bright side of their diagnosis, in a way that rarely / never happens with depression. Add to that a bunch of weird physiological symptoms that seem hard to predict and fake in advance (why would malingerers all come up with psychomotor retardation, for example?) and depression starts to look pretty different from "just being really sad".
Much like depression, this derealization thing seems multicausal. Some people are depressed because their lives suck (sometimes for reasons within their control, at that!) and some people are depressed in the midst of the fullness of life, for no obvious reason. Which camp you're in is helpful intel, in case your life just sucks and there's something you might be able to do about that. But "unsuck your life" isn't helpful advice if your life doesn't really suck in the first place. DPD seems correlated with "having the sort of childhood that would alarm CPS", but "don't have an abusive childhood" isn't actionable. (And yes, there is obvious genetic confounding, but "don't have child abuser genes" is even less actionable.)
Or consider the analogy to obesity, the bulk (heh) of which is pretty obviously due to some combination of sedentary lifestyle plus cheap and hyperpalatable foods. A victim of our own success if there ever was one. Yet "stop being a coddled modern" is not actionable advice in the way that "eat exactly this much and exercise" is.
It's possible that DPD is just another disease of modernity and can be treated by consciously unwinding some of its more virtualizing aspects. It also sounds like a real hell to be trapped in, so I have some sympathy for people who find themselves stuck there, never having chosen to inherit the broken social tech that made their condition all the more likely. So yeah I encourage them to touch grass and meanwhile I'm polling here to ask if there's anything else I've missed.
A) I think a high enough % of people who present with depression or other things like DPD or whatever are just basically malingering that one of the first main pieces of advice and attempts at therapy should just be to "grow up and start getting out there and doing stuff".
B) Not to mention that it is generally good advice anyway even if it doesn't work. I think even for the people who are actually having some sort of underlying issue that isn't just "I have worked myself into a crummy series of behaviors and excuses that make me unhappy, but I am in a local minimum and so I struggle to get out", it is still helpful in most cases.
I was "depressed" with an actual diagnosis and at one point an SSRI prescription (which I only took for a couple months before I started reselling) from ages ~11-26. And severely depressed for most of of ages ~13-19.
Now, I had a lot to be depressed about. A father who was totally out of the picture since I was 4, and an alcoholic mother who was passed out more days than she wasn't by about 7 PM. I was super into girls and horny, and tried really hard to be charming and pleasant, yet high-status girls more or less hated me from ~11-16 (I had a lot of success after that). I had 2 very serious, embarrassing and public medical issues (one of which involved removing half a testicle). And I also generally hated myself and was ashamed of my overall behavior.
Nevertheless, on top of that, at times it felt like I was literally not in control of my own mind. Like there was a dark whirlpool in my brain just sucking it down into dark, intrusive, self-harming thoughts. I got lots of advice, mostly about taking SSRIs or about therapy related to my mother. None of that was very helpful; SSRIs just made me feel numb.
What was helpful was when my life started going better. And going out and accomplishing things and building up some self esteem. And I am positive that my uncles conscripting me into manual labor and things like that, while I HATED it, was at a minimum more helpful than the therapy, and looking back now I wish they had done it more.
What I needed was less candy coating. More people pointing out all the good things I had in my life, and how easy it would be to turn it around. (Which I did eventually stumble into on my own). But everyone was so concerned with servicing/validating my whining that there weren't nearly enough people saying "even with all this shit you are probably sitting in a top 3% situation globally and a top 1% situation historically, and you are going to fucking whine and wallow about this instead of turn it into something?". That is a message that would have resonated with me, and did resonate with me the one time I heard it.
Anyway, I just think we are way too precious about these things. You can't make an omelet without breaking some eggs, and the world needs omelets, not cracked eggs.
I'm more sympathetic to your tough-love approach than you'd think a shrink would be. For instance, I'm pretty sure that the best approach with kids who have school phobia is to make them go to school. Sure, also talk with them about their worries, try to teach them some coping techniques -- but meanwhile, they gotta go to school. I had a school phobia in 7th grade, no idea why, and after my parents saw that I wasn't really sick they gently but firmly insisted I go to school. My fear faded away in a few days, and I was fine with school forever after. And last I checked, research supports that view.
On the other hand, I think you overestimate how many people are malingering. I definitely was not, when I had my school phobia. For some reason I had been seized by a fear that if I went to school I was going to throw up in some horrible public way, like right on my desk in the middle of class, and would lie awake literally half the night trying to get unscared, trying to convince myself it wouldn't happen. And in college when I had DPDR I most certainly was not malingering. I was terrified and miserable, and would have paid any amount of money to get the feeling to stop. And, by the way, I was not using the fact that I had that problem to get out of anything. I didn't even tell anybody but a few close friends that I had it. And I continued going to classes, and ended the term with a high GPA.
Telling people who are suffering like that that they're faking it is really a bad move. If they're not, it's very destructive, especially if you are important to them. Think how you'd feel if you had pretty severe pain from a migraine or whatever and somebody important to you said they didn't believe you that it hurt very bad! On the other hand, a moderate amount of pushing can really help people who are depressed or anxious. But if you're going to push, the message to give isn't "you're malingering -- get to work" but "it is very likely that being active will make you feel better, even though it feels like it will make you feel worse. Give it a try, for god's sake!"
You have better grounds for your beliefs than I expected, but I wonder how you'd tell if someone had a physical reason (say, a dietary deficiency) for problems with manual labor.
Eh, that could also go in the opposite direction. The few times in my life I've had to endure grueling physical labor I actually found myself dissociating more and more as a coping mechanism.
Asking for someone else: does anyone know of good resources to find rental apartments in the south peninsula (Bay Area, California, USA), within ~5 miles or 20 minutes from San Bruno, ideally under $3k for 1 bedroom plus parking? Person is currently there and scrambling to find something in the next day or two, but all their early leads collapsed.
Apparently all they're getting is automated responses and AI-generated content.
Done, thanks.
(moved due to below)
This is probably in response to a thread from below; there is a well-known Substack bug where replying by email sends your reply to the top level.
Yes. It's remarkable how Substack is worse at its core functioning than list-serv email was 30 years ago.
Yes, a remarkable achievement. Who woulda thunk it?
I just want to know what makes the website so god-awfully slow; it makes me sad.
Hi all,
Surf, an app that helps you make new friends based on mutual interests and that some of you will already be familiar with, is looking for a CTO.
A word on the product - on Surf, users open the platform and type in a desired outcome (e.g. "I want to find a partner in London for a Kilimanjaro climbing trip."). We match them with someone who wants to achieve the same outcome. They chat. They become friends. That's it. Elegant. Simple. Life-changing. We want to use it to eradicate loneliness from the world.
A word on who we're looking for - we need someone with considerable NLP/AI/ML pedigree and experience who can also do good things with simple app-creation tools like Expo. Someone who loves the early-stage challenges of startup life. Someone so enthused about the idea of eliminating loneliness from the world and finding a friend for everyone that they would happily work on this for a year+ pre-funding.
In terms of existing assets, we've already got lots of proprietary technology incl. key algorithms, and an app that's 80% finished. Our waiting list has seen uptake in over 40 countries (+1 if you consider ACX its own pirate nation) and we're in promising talks with universities over pilot schemes. We already have one advisor (prior exp. at Google) onboard and are actively seeking others to make near-term fundraising more straightforward. Any CTO coming into this project will be well-set-up for success.
Early stage - equity only, but we'll be pursuing funding immediately after launch.
The ideal candidate would be in London, UK but fundamentally we don't care where you live, especially if you're the next Aaron Swartz/Tarun Mathur/Mira Murati.
Get in touch by emailing us at team@imsurf.in with the subject line "Surf CTO Position".
Excellent sentiments, but far afield from the topic: is it 'odd' for conservatives to not want their communities or nations filled with folks who are not 'their people'?
Looking at immigration from an economic perspective is always reasonable, of course -- that's why the White Anglo-Saxon Protestant elite in my example above was willing to let Irish (and then Italian) Catholics overrun some parts of New England. That lens itself is reasonable, but to pretend there are no other possible lenses through which to view the issue is not reasonable.
Are you sure New England hadn't already been overrun?
The Irish weren't fond of losing Boston when it happened, either. All of history is every place being overrun and people being sad. I'm no conservative, and I don't believe conserving places/peoples against this kind of thing is even possible. It's just not 'odd' to want to do it. You know what I mean?
The idea that immigration is always economically beneficial is fantasy (unless you think a larger GDP is the sole determinant of a "good" economy). And the Irish and Anglo-Saxons are very, very similar genetically/culturally compared to Anglos and Somalis.
I think it is a bit more understandable in places like Europe, where you have fairly homogeneous cultures that go back hundreds of years, or amalgams of smaller but very close cultures that do (those forced amalgams, like France, arguably already represent a bit of a cultural loss). Denmark with 3 million extra Ugandans settling there over a period of, say, a hundred years is fundamentally going to be a different country. The same holds for 3 million Koreans or Russians or Spaniards or whatever, but the more distant the incoming culture, the smaller the number of people from that culture it takes to change Denmark a lot.
Less so in places like the US, where the citizens' ancestral culture means next to nothing (to the point where absurdly broad categories like "white" or "black" are considered ethnicities) and almost no citizens can trace their ancestry to America in the 17th century or even the early 19th century (Native Americans are an obvious exception; descendants of the original Dutch and English settlers also, to a lesser degree). The USA with 60 million extra Ugandans settling there over a century is not going to be that much different. That is, unless the melting pot of the US stops being a melting pot and the society atomizes such that you have 60 million people with a completely different culture living separated from the rest of society...
The difference, I guess, is that countries like Denmark (and by extension most countries in the Old World) cannot realistically be melting pots lest they stop being the countries they are, whereas being a melting pot is kind of the point of the US, and "melting" more people is therefore not going to change its nature much.
That said, you can have assimilation even in places like Denmark but I think the capacity for such assimilation is much lower and it is going to happen much slower. By the capacity I mean "absorbing more people from different cultures without changing the country in a fundamental way". So I think it is more understandable if people worry about the speed of immigration, or speed of immigration from distant cultures...but only to a certain degree - like I said, the capacity for assimilation is still nonzero even for the Old World countries (probably also somewhat different for each country).
Yes, this... the way American and Canadian right-wingers criticise immigration based on European "tropes" is a bit annoying and just cringe IMO... I understand that some people, even in the US or Canada, might not feel comfortable with people from a different culture, but then to me it doesn't make much sense: one of the major arguments in Europe against immigration is that it will lead to the collapse of the welfare state (which I share to some extent)... but for the US (and to a lesser extent Canada), this argument doesn't make much sense since the welfare state there isn't nearly as developed as it is in Western Europe... yes, I've heard some self-styled "libertarians" in the US being against immigration because they think foreigners don't appreciate Anglo-American libertarian values... which is even more ridiculous, since it not only goes against one of the major tenets of libertarianism (person-to-person exchanges should be the major consideration, and groups don't exist), but is also ridiculous for other reasons (aren't most founders of US startups of immigrant background?)... basically, being anti-immigration in the US or Canada (either for economic or cultural reasons) is a very cringe position based IMO on egoism and maybe some kind of (White?) Supremacism (I understand that most alt-righters here will deny this, but IMO it's the same as being a leftist who complains about police violence against POCs in Europe)...
For Europe it makes more sense, but even in the (Western) European context I would focus on the economic dimension of immigration and base my criticism on this area (e.g. Daniel Stelter or Thilo Sarrazin).
>this argument doesn't make much sense since the welfare state there isn't nearly as developed as it is in Western Europe.
Non-whites already cost white taxpayers around half a trillion dollars in government services received in excess of taxes paid (not considering the cost of crime and imprisonment, which would significantly increase it).
Importantly, non-whites overwhelmingly vote Democrat, and if enough of them come and Democrats amass enough political power, the welfare state will almost assuredly expand. You're assuming that America has a fixed set of policies that will not change, but they depend entirely on who makes up the voters of this country.
>which is even more ridiculous, since it not only goes against one of the major tenets of libertarianism (person-to-person exchanges should be the major consideration, and groups don't exist), but is also ridiculous for other reasons (aren't most founders of US startups of immigrant background?)…
It's not ridiculous. We don't live in a libertarian society, so groups have the power to restrict other people's rights; therefore, it's entirely reasonable to think about things in terms of groups.
>basically being Anti-Immigration in the US or Canada (either for economic or cultural reasons) is a very cringe position based IMO on egoism and maybe some kind of (White?) Supremacism
White people have a significantly higher mean IQ than all groups besides Northeast Asians and Ashkenazi Jews (selected populations of other groups will be high-IQ, but pro-immigration people don't support selective immigration). This is a brute scientific fact, and your dismissing it as "white supremacist" is extremely bad faith. You're basically using a slur to dismiss a scientific reality that doesn't suit your ideology.
I am not sure how to argue with alt-righters like you... ok, fine: are you suggesting that the US should base its immigration policy solely on IQ? And if it did, should it still let in people from low-IQ groups if they individually have higher IQs?
Also, you did not provide any reliable source for your claim that "Non-whites already cost white taxpayers around half a trillion dollars in government services received in excess of taxes paid".
Tucker Carlson perhaps?
> Importantly, non-whites overwhelmingly vote Democrat, and if enough of them come and Democrats amass enough political power, the welfare state will almost assuredly expand
For a guy who likes to complain about the bad guys doing everything based on 'ideology', you spout an awful lot of dogmatic right-wing ideology yourself. Are you oblivious to the irony?
Or do you think the crap you say is simply ‘the truth’? That is what you are accusing your adversaries of here. They think they are speaking the truth but really it’s just ‘ideology’.
FFS. The word "ideology" by itself doesn't even have inherent negative connotations.
Here, let me be as clear as I can. This is the dictionary definition of ideology:
ideology
noun
1. a system of ideas and ideals, especially one which forms the basis of economic or political theory and policy: "the ideology of democracy"
I might think that if there were too many Republicans, Social Security and Medicare could be gutted. But I wouldn't state it as being almost assured.
I don’t know what the guy got banned for but his arguments were approaching the “No, you shut up” stage.
Ton of assumptions there. I realize it's hard to put yourself in someone else's perspective but it's a valuable skill. Of course, if they're 'white supremacists' they're not really people and you don't have to work so hard. Whew!
Nowhere in their comment do they claim or even imply that white supremacists aren't people. It seems that you are making assumptions about their motives rather than providing tangible facts that support your side of the argument.
Fair enough. I was responding to the dismissive tone, but was not precise.
As for you responding to me: nowhere in any of my comments here or elsewhere have I advanced or even implied a "side", let alone one that would be served by "tangible facts". You must have me confused with some other commenter -- my point throughout has been that dismissing the default position of human groups throughout history as 'odd' (or, in the case of the comment under discussion here, as the product of pure 'egoism' or 'supremacism') is myopic and absurd. Nowhere do I imply that I hold this position, and if it seems shocking to you that someone can describe a side without being on it, that's hardly anything to do with me.
Also all excellent thoughts, but again beside the mark: it's not for us to say how good or meaningful a culture an ethnic group or coalition needs to maintain in order for it not to be 'odd' for them to want to preserve whatever it is that they have.
And again, a deracinated concern for 'cultural values' is why Republicans with very low non-white support can talk about how the good people from south of the border are 'natural conservatives' at the same time that their actual constituents wanted the border sealed shut fifty years ago. It's not something broad like "do these people believe in Jesus (albeit a Catholic version) and love their families" -- everyone believes in something and loves their own families -- that makes the difference, but merely the question "are these people recognizably 'my people' -- do they look like me, do they sound like me, will their sons have a natural sympathy for mine due to these similarities?" All of this takes place in a fraction of a second when we see another human being -- we notice body language and race (or ethnicity, if we're from somewhere that distinguishes this to a high degree of granularity: anthropologists in Africa are often amazed at what a tremendous distance a trained human eye can determine in/out group from) almost instantly. Most of us here reading this Substack learned to set aside these instinctual movements in favor of the individual and in favor of the brotherhood of the human race, but that people who haven't done this work continue to feel the way the vast majority of humanity until the modern period felt isn't 'odd' in any meaningful sense.
Who in the US are "your people"? I am not American, but from what I gather (e.g. popular media, social media, news, blogs etc.) there seems to be little that various groups of Americans (broadly "liberals" and "conservatives") share, so I think the times when "Americans" (or at least white Americans) of all kinds felt like they were part of the same group have been over for a while now...
You can ctrl-f in this page (if substack will allow you) to see that I'm not talking about myself or 'my people' -- we're engaged in an anthropological exercise to explain the apparent motivations of people we would otherwise think of as "odd".
Now if we pretend you were addressing that question to a white American who opposes immigration: if he's smart, he would look at you with disgust and say that he doesn't have to justify his sense of his people to an obviously hostile interlocutor engaged in tactical ignorance -- after all, we know exactly who this white conservative's people are when we call them backwards hicks who don't spice their food and think pro wrassling is real, but when we're talking about who they think they are, suddenly they don't exist as a people group.
Honestly, I don't deal much with white Americans except on the internet... since I'm not planning to move to the US, I don't think I'll be having too many conversations with them... as for Europeans who are against immigration: I understand the sentiment, but they should base their arguments rationally (i.e. on economics)...
The idea that group-level preference is irrational and economics is rational is probably not your most rational opinion. People don't live in economies, they live in communities. And that goes for everyone. When black folk in Harlem complained they were being priced out by gentrification, shouting at them about economics probably wouldn't have reassured them much. Same thing everywhere else, every time this happens. Not saying it shouldn't happen or that you should care, just trying to help you understand people who aren't like you.
The part left unsaid (but which we all know, and without which the conversation doesn't make any sense) is that some cultures are better than other cultures. Adding more Swiss people to US culture is likely to make it better, adding more Ugandans will likely make it worse.
It's no secret that Swiss culture is better than Ugandan culture, it's obvious from the fact that countries populated by Swiss people look like Switzerland, and countries populated by Ugandan people look like Uganda.
Sorry, but this argument is unserious IMO... for two reasons:
Yes, Switzerland is a wealthy country with a high quality of life, but is it really only because it's inhabited by "Swiss" people? I mean, it is a country with 3 official languages, and this alone should cause "Ethno-Nationalists" to see where they went wrong by using Switzerland as an example of an "Ethnonationalist" country... IMO the reasons Switzerland is so wealthy are much more complicated: partly geography, and obviously the institutions (which themselves formed that way for various further reasons)... definitely not genetics/biology, though... I mean, just saying that if Swiss people move somewhere a place automatically becomes "better" strikes me as unrealistic...
After all, there are places in the US where many people of Swiss descent live, so let's see how they perform economically against the rest of the country. (According to Wikipedia, these are the places with the most Swiss Americans as a % of their population:
Berne, Indiana – 29.10%
Monticello, Wisconsin – 28.82%
New Glarus, Wisconsin – 28.26%
Boys Ranch, Texas – 23.30%
Monroe, Wisconsin – 18.91%
Pandora, Ohio – 18.90%
Argyle, Wisconsin – 17.84%
Sugarcreek, Ohio – 17.29%
Elgin, Iowa – 15.79%
Monroe, Indiana – 14.35%)
Looking at the median household income for some of these places, they were $35,491; $44,087; $36,922; $42,174; $36,103; $36,360; $28,833; and $42,946, all below the $54,951 US-wide average in 2000. (Sources: https://en.wikipedia.org/wiki/Swiss_Americans#Population (the top 10 cities there); https://nces.ed.gov/programs/digest/d11/tables/dt11_025.asp .)
Secondly, among Americans who are anti-immigration there seems to be the belief that it would be simple to just get people from those wealthy European countries to move to the US... but the truth is simply that not many people in Western Europe are interested in moving to the US anymore... so it's not an option. So either you have no immigration (because there aren't many people from other Western countries who want to move to the US) and deal with US demography becoming more like Japan's in the future, or you accept that if economic growth is important to you (as it seems to be for most US conservatives), then the US will need immigration, and the vast majority of it will come from "non-Western" countries...
Also, lastly, while I agree that most people would side with you and say that Swiss culture is better than Ugandan culture, it is still subjective... since culture is inherently subjective.
Oh come on. Instead of burying your nose in a book (or Googling random data), just freaking go to Switzerland, walk around and keep your eyes and ears open. Nobody who's actually been to Switzerland for any length of time can doubt that Switzerland is as successful as it is because it's full of Swiss people.
"Who cares about statistics? What about my lived experience?" -> I suspect you dislike people who make arguments like this; why do it yourself?
(Have /you/ ever lived in Switzerland? If not, it's not even your own ~lived experience~ it's that of a hypothetical person you're imagining, even less grounded in reality.)
I've been to Switzerland quite a few times (8 times in the past 12 years)…and it's a very beautiful place, both in terms of the natural and built environment.
But are you suggesting that if all 8 million people from Switzerland were to move to the US, then they would magically turn the US (with a population of 330 million people) into a society just like Switzerland...? Also, as I wrote before, Switzerland itself is a multicultural and multiethnic country, so to use Switzerland of all countries as an example for Ethnonationalism is a bit odd...
And once again I am asking why anyone from Switzerland would want to move to the US if their own country is already better (I think you could say objectively that this is the case, at least in terms of statistics like GDP and health outcomes)…?
>Yes, Switzerland is a wealthy country with a high quality of life, but is it really only because it's inhabited by "Swiss" people?
OF COURSE!
It's not because of its natural resources. It's not that Switzerland has magic soil. Switzerland is prosperous entirely because of the people living there, and if you change the people living there, you change the country. If you replaced the Swiss population with the population of Haiti, the country would collapse. People are what make a country.
> I mean, it is a country with 3 official languages, and this alone should cause "Ethno-Nationalists" to see where they went wrong by using Switzerland as an example of an "Ethnonationalist" country...
Ethno-nationalism isn't linguistic nationalism. They're all the same race. They're literally more closely genetically related than people from different parts of India are!
> and obviously the institutions (which formed that way for various reasons further still)
They formed the way they did because of the people living there. There's no grand mystery here. It's the people. Africans have never, ever made a country with "good institutions" before, and the only non-genetic explanation of this is an endless series of just-so stories to rationalize a denial of racial differences.
>I mean, just saying that if Swiss people move somewhere a place automatically becomes "better" strikes me as unrealistic...
Northern/Western Europeans have made everywhere they go better. Look at the US, look at Canada, look at Australia, look at New Zealand, look at South Africa.
"Unrealistic" is any narrative in which the sudden and rapid flourishing of these countries following European settlement has nothing to do with their being settled by the same type of people.
> deal with US demography becoming more like Japan in the future, or simply accept that if economic growth is important to you
Economic growth isn't important - per capita GDP growth is important, and you will not get that from low-IQ third-world immigrants. They will continue to be a fiscal drain.
>Also, lastly, while I agree that most people would side with you and say that Swiss culture is better than Ugandan culture, it is still subjective... since culture is inherently subjective.
If we're talking about what leads to prosperous, safe societies with good institutions, then no, it's not subjective in the slightest.
>Ethno-nationalism isn't linguistic nationalism. They're all the same race. They're literally more closely genetically related than people from different parts of India are!
India is an extremely large, diverse country that has 22 different languages recognised in its constitution. Being more homogenous than India is like being taller than Peter Dinklage.
How do you define "race"? I mean, India of course is a very different category than any European country...
Well, you don't seem to answer my arguments... Are you saying that it's because of the "genetics" of the Swiss people that Switzerland is such a rich country? Sorry, but how come Ticino is so rich if large parts of Italy aren't? Also, "Europeans have made everywhere they go better" is really debatable... I am not saying that colonialism was always bad, but what you are stating here is pure opinion... no sources to back up your claims. Also, on "per capita GDP growth is important": if it's driven by IQ, why isn't Japan growing faster than India or even the US, considering it has a higher average IQ as a country?
No they're not. South Africa was a much better place to live, even during apartheid, even for blacks, than was Zimbabwe. People can put up with a fair amount of petty racism in order to have enough to eat and not have a serious risk of being fed to a woodchipper because you said something disrespectful about the Chief Thug. Ranking racism as the #1 Evil is a First World Problem viewpoint.
How is that comment racist? It was a comment about culture -- social shibboleths, the values people cherish or don't, how they act or don't act on those, et cetera. I don't see any place where "people with black skin are better/worse than people with red/yellow/green skin" was stated. If you're seeing things that aren't really there -- maybe it's your own assumptions about people that need examining?
If you mean "diverse" in terms of skin color, I personally couldn't care less about that, any more than I care how many toes other commenters have, or the color of their pubic hair. Frankly, I would find it a little creepy if someone *was* interested in those things -- if it were of interest to other commenters whether I was black or yellow or white. Why do you want to know? Ew.
If you mean "diverse" in the sense of different life histories, different perspectives and talents, then I'm all for that, but I am baffled how this connects to skin color. Again, the fact that you seem to assume it does makes me wonder about your own unexamined and maybe unconscious race-based and race-oriented attitudes about other people.
Anyone put off by "racism" here (i.e. discussing the scientific REALITY of racial differences) doesn't belong here, because it means they're incapable of good-faith discussion of thorny issues.
Speaking of melting pots, I have been thinking lately about that concept, and also the concept of "white." It seems to me that the woke left looks at those two terms as intricately linked: that to have cultures melt together means that the people sublimating their culture to the broader culture become "white" regardless of their skin color or racial background, and that something is lost in the process of "melting" together. This is not as big a deal for Anglo-Saxons, who change a little bit, but a really big deal for cultural minorities, who have to change more from who they might be in order to fit in.
I would be curious to get other thoughts on this aspect, though I recognize it's a bit niche for a third tier post response.
Well, I am not American, and my impression of Americans (to a lesser degree also Canadians and Latin Americans, but they seem less "universalistic" to me) is that they underestimate by a lot how much Europeans differ from each other. The difference between, say, Spain and Sweden is at least as large as that between Mexico and Canada. And even the cultural difference between France and Germany is like the difference between Mexico and the US. Even the differences INSIDE European countries are probably at least as large as the differences between the US and Canada. E.g. Bavaria vs Schleswig-Holstein (it's basically not even the same language anymore :D ).
Americans with European ancestry might have a few specific family dishes that somewhat resemble something you might encounter in their "old country", and maybe they know a little bit more about the history of that country... and that's about it. Otherwise they are Americans (even if they fancy calling themselves Italian-Americans, Polish-Americans or whatever). At least that is my experience with all the Americans I've met in Europe and elsewhere (though I've never been to the US or anywhere in North America, actually).
If the "white ethnicities" (basically usually meaning European ethnicities) retained their individuality, then the US would look like the EU. It would never become a federation in the first place (though probably some states would be pushing for it in hopes of controlling it...*cough* France *cough*), you'd be stuck with the articles of the confederation and each state would have a very unique identity, most people form one state would not understand the people from another state unless they learned a foreign language, etc.
Of course, Germans and Ukrainians are a lot closer than Germans and Malaysians, but Bavarians and the people from Schleswig-Holstein (both parts of modern-day Germany) are very likely further away from each other culturally (even linguistically in a way) than German-Americans and Ukrainian-Americans.
So at the very least, the various Europeans largely melted into "whites" in the US. The same can probably be said of Africans, although there it was often quite involuntary. But if you look at (sub-Saharan) Africa, being "black" means very little there; being Bantu vs being, say, Oromo is a big difference. For various historical reasons (but probably mostly slavery) the melting pot in the US seems to have been worse at melting things much further than that (although both groups are still very American, i.e. American whites are not just an average of Europeans and American blacks are not an average of Africans... they are both distinctly American).
Exactly... that's my impression too, though we Europeans do the same, e.g. in thinking of "Indian people" as the equivalent of a European nationality, even though India itself should be compared to the whole of Europe IMO, since both are subcontinents of (Eur)Asia... but yeah, the American view of "races" is quite peculiar to them, especially "white" vs "everyone else", and considering people of Pakistani and Japanese ancestry to be part of the same "race" strikes me as weird too...
Not all of us think of "white" as some monolithic group, and especially not "Asian" - though some of our regulations and government counting rules may make it seem that way. "Woke" is not the only viewpoint in America, and is actually a pretty small minority (though apparently a majority in a number of fields, including academia, media, and tech).
That's not exactly the case though. "White" ethnicities retain their individuality, they're just in a state of mutual intercomprehensibility with other whites.
In places where peoples live together, the groups will become "white" unless active steps are taken to do otherwise. In Austin, Mexicans and Vietnamese were as white as anyone else. Things are different in New York, though; I didn't realize how many white ethnic enclaves existed until I moved up here.
I'm also going to question the generalization of "White ethnicities retain their individuality". I believe that in the vast majority of cases, by the third generation any remnant of their ancestral identity is more of a hobby than anything else. And usually not even that. My Irish-American sister-in-law drinks Guinness, celebrates St. Patrick's day, visited Dublin when anyone else would have visited London or Paris, and is otherwise indistinguishable from any other mainstream American. For my part, I happen to know which European countries my ancestors came from, but that fact is about as relevant to me as my astrological sign (which I also happen to know). And the vast majority of the white Americans I know, I would have a hard time guessing their ancestry beyond "Europe" unless I recognized the surname's origin.
I didn't know that white ethnics existed until I moved here. It was a revelation to be able to distinguish a Pole from a Dane from an Irishperson by sight.
But again, I'm not claiming that the mass-media "American" "ethnicity" doesn't exist or that people haven't been pressured to abandon previous identities in favor of it. What I'm claiming is that it's neither necessary nor sufficient to adopt it in order to be "white." That as long as those particular ethnicities are mutually intelligible with the existing "white" bloc, they will be considered as being "white."
This allows for the phenomenon that I saw in Texas of whiteness allowing for a larger variety of skin colors than elsewhere.
I know a fair number of immigrants and their kids/grandkids. The kids who grew up here are basically Americans who will, if pressed, go ahead and speak some Spanish, but who think of themselves as Americans, hang out with and date other random Americans, and only mention that they're Salvadoran or Mexican or Peruvian or whatever if someone asks or there's some special reason it's relevant. The grandkids mostly don't even know Spanish or Tagalog or Chinese or whatever.
I think the main difference is in culture, though. Yes, an average Pole looks different from an average Irishman, and you can get better than 50% accuracy guessing their ethnicity, but with some people you can't really tell just by looking at them whether they are French or Irish or German or Swedish or Polish or even Spanish in some cases (if we are talking about northern Spain). Also, looks are really not that important to people, I think.
It is the cultures which are different, and that is what matters more to people, not the shade of your skin colour (though of course there is always some prejudice based on first impressions). I know people of Vietnamese ancestry (born here, children of immigrants) here in the Czech Republic whom I consider pretty much Czech. And their kids will be culturally as Czech as Italian-Americans are American. So while an (ethnically) French guy looks a lot more like a Czech guy than these people do, his culture is clearly French and not Czech, and that is what matters. Or rather, physical looks give the first impression, but that only lasts until you actually go and talk to the person.
By the way, I was actually really surprised that people consider Harry Windsor's (or whatever his surname is now that he is no longer an official part of the British royal family and so should use an actual surname) wife black. If nobody had ever mentioned it to me and someone asked, I'd have said she was white. She is definitely very American in any case :-)
Yeah, this just isn't the whole story. As the other person replied, tons of "white" people in the US had lost almost all their individuality by the 1980s. Some families retained a small smattering of "ethnic" practices, but many did not, or were so interbred as for them to be meaningless, or kept them as ad hoc curiosities instead of an actual heritage.
My parents were, between them, like 6 different types of northern European minimum (though my dad wasn't in the picture anyway), and we lived in a pretty German/Scandinavian part of the country. I know some families who retained their "Finnishness" and a bit of Finnish cultural practices, and likewise Swedish, Polish, and German ones. But just a few. Most families were like mine: just "American", with no particular ethnic connection other than something researched for elementary school projects on "melting pots". And these are only like 4th-generation families with 5th-generation kids.
I really think the TV and mass culture of the 60s/70s did a good job of washing out much of the ethnic difference for a lot of people, especially white people.
The shared cultural heritage was the Charlie Brown Christmas special, not lutefisk.
You a Minnesota guy Martin? I grew up in a small town up north notable for two things: Hot and Cold water towers and hockey.
https://www.evelethheritage.com/old-water-tower.html
https://www.quanthockey.com/nhl/city/nhl-players-career-stats.php?city_id=3984
https://www.exploreminnesota.com/profile/united-states-hockey-hall-fame-museum/3845
My maternal side is from Virginia, and I spent ages 4-24 in Duluth.
IMO it should be accepted that "White American" is an ethnicity just like "German" or "Russian" (which are themselves composites of people with many ethnic backgrounds)… of course, these days, White American could be split into "liberal" and "conservative" sub-ethnicities... :D
Except that the "white" part has been mostly optional since the 1980s at the latest. Not everybody takes the option, so it's still mostly white, but we shouldn't be trying to insist that it is exclusively white.
Not all "white" ethnicities retain their individuality. Some lost it before coming to the US, and many lost it after. Some of it is a random mixing of previously separate people (my mom's family is Eastern European, with some German, Polish, Slovakian, etc. and various unknowns). My dad's family may or may not involve a variety of Western European cultures. We don't know or observe any of the culturally relevant practices from any of these countries. And assuming that one "white" cultural aspect is the same as another is part of the problem. Slovakians are not German, and may have good reason to resent them.
IMO, the melting pot model describes American cities much better than it does American rural regions. I've only lived in one American rural region, so I might be generalizing wrong, but it definitely has a distinctive culture. This is slowly getting eroded by internal migration and cultural influence of people from the dominant cultural group.
There have been observations and articles (whose accuracy I'm not remotely able to speak to except in the broadest sense) referring to an ongoing homogenization and "Southernization" of US rural culture.
I have no firsthand knowledge if that's right, though it would help explain the popularity of Confederate iconography in places like southern Illinois (the Land of Lincoln!) and West Virginia (which exists as a state because people there in the 1860s were decidedly not Confederates).
I’m in New Hampshire, and I’ve been surprised and disappointed to see a Confederate battle flag or two even this far north. Trumpy types, of course: more looking to flip the bird than express anything resembling a thought. The homogenization of hick culture via the internet is complete. Ironically enough, the Republican Party of the 21st century has become the Confederacy 2.0
It makes a lot more sense if you don't insist that the flag can only mean "confederate" or "racist."
The bumper sticker I saw was "Yankee by birth, Rebel by the grace of God."
Rebellion was still considered a good trait as recently as 1977 after all.
Couldn’t agree more! 😉
Oh yeah, in rural areas you will for sure see Confederate flags, even in Union states. I have seen them in rural Minnesota.
Dunno about WV, but in Illinois (where I live, and where I have spent lots of time in the rural parts over more than a decade now) the overall trend may be related to US rural culture shrinking. There are small cities all over central and southern Illinois that are now half ghost towns: built out for 3,000 residents but now home to only 1,000, etc. (Driving through places like Henry, Illinois is downright spooky -- the empty houses mostly aren't boarded up, they're just sitting there like a dusty old movie set.)
Illinois now has 40 entire counties, each with a resident population under 20,000, and 15 of them have fewer than 10,000 people. That drain-out is not new, of course, but it is very current; those rural counties' population curves all look like this:
https://www.illinois-demographics.com/putnam-county-demographics
https://www.illinois-demographics.com/calhoun-county-demographics
https://www.illinois-demographics.com/hamilton-county-demographics
https://www.illinois-demographics.com/pulaski-county-demographics
Our statewide population decreased by 0.1% from 2010 to 2020, while the City of Chicago's population increased by a similar tiny fraction. The suburban areas generally increased; the medium-sized cities like Rockford stayed flat. The part of the state which is really draining out is that vast farm belt. And the people remaining are disproportionately older; you see hardly any 30-somethings or 40-somethings anymore except in the Mexican-immigrant pockets, every farm-county elected official or community leader now is in his/her 70s, etc.
To what degree the homogenization/Southernization (which is absolutely true) is a cause of the rural drain as opposed to vice versa, I have no idea. The two are simultaneous though and surely connected in some way.
Yeah but Illinois is a dumpster fire next to 55-gallon drums of aviation fuel stored in the mail hold of RMS Titanic, so most people with a brain have fled or are making plans to flee.
Yeah, that's been the talking point in certain circles for a while now; it just isn't supported by reality. For instance, I need to correct something I wrote above: in fact, Illinois statewide had a net population gain of 250,000 residents from the 2010 to 2020 censuses. (I had accidentally looked at a preliminary estimate of the 2020 census, not the actual final census results.)
The population-loss meme is connected to the one about Illinois being one of the highest-taxing states, when in fact it ranks 30th among the 50 states in state income tax rates. It is also very average nationally (23rd) in sales taxes; where Illinois does crack the top ten (8th) is property-tax rates.
And the part of Illinois which genuinely is emptying out is that huge farm belt in the middle. Since that is the part of our state's economy which is least impacted by the property tax rates (farmland in Illinois is taxed at 1/6th the rate of residential or commercial properties), it does not appear that levels of taxation are a driving factor in which parts of the state are losing/gaining population.
The US also has the advantage of hundreds of years of development of social technologies for turning people into new Americans. For example, we Americans tend to be less subtle in communication than most places but that's for obvious reasons.
Even if that's true, it doesn't make it unreasonable at all. And people in 19th century America would have considered it laughable to suggest whites would ever be a minority in the US, and yet this is an inevitability this century.
I'm not sure that's the question intended. Everyone already knows that that fear is common around the world. Historically, that's how many nations and peoples perished, dissolved, or were conquered. It's the usual explanation for the fall of Rome. (I recently posted my own disagreement with that here, but I can't deny that the Goths arrived as refugees, then became the rulers of Western Rome). I think outright conquest was more common, but it's often hard to tell at great distances in time. Historians now argue over whether the Celts wiped out the Picts; whether the Anglo-Saxons invaded England suddenly and violently, or slowly and peacefully; whether the Hyksos invaded ancient Egypt from outside or were immigrants who seized power (https://www.science.org/content/article/invasion-ancient-egypt-may-have-actually-been-immigrant-uprising), and whether the "sea people" invaded ancient Egypt, or were originally just refugees.
I also note that the question is posed using the words "their people", and not ethnic or racial terms. Today, conservatives don't want their nation filled with folks who are radicals, and radicals don't want their nation filled with folks who are conservatives. Is one of those "odder" than the other?
So I think that, if there's a question to ask here, it's, "Why do so many people today think the fear of being outnumbered by people who don't respect your cultural values is unreasonable or immoral?" Or perhaps, "Why is everyone in America today in denial about their reluctance to live with people who aren't 'their people'?"
This might be common knowledge around here but are you the Palgrave Macmillan Phil Getz?
No, and thanks for making me aware of his existence. Curious that his interests are so similar to mine.
One other thing… Zeyde play tenor sax? :)
Nope. (And Zeyde is now my new word for the day. :)
Is 'radical' really the opposite side of coin here? I thought that 'liberal' filled that spot.
Radical is the opposite of conservative. Liberal is, if anything, the opposite of authoritarian. Today America has a conservative party and a radical party, but no uniquely liberal or authoritarian party. Both liberal and authoritarian ideals are split about equally between our parties.
Liberalism historically emphasized individualism, equality before the law, equality of opportunity, free markets, private property, the limitation of state power, freedom of speech, toleration of diverse opinions, and the right to own weapons. Conservatives are clearly the liberals today in that original meaning of the term. But they fail when it comes to newer freedoms that weren't thought of 2 centuries ago, like control over one's own body (sexual preference and practice, prostitution, medical treatment, recreational drug use, abortion if your community's metaphysics allow it); the freedom to have privacy (no search without a warrant, freedom to travel and to buy things without it being tracked), cryptography, and pornography; and freedom from gender roles.
Forgive me, Phil, I'm way out of my wheelhouse here, but your definitions seem a bit fusty.
I've poked around a bit, and having completely absorbed the thinking of Edmund Burke [joke], I see "radical" being used to describe Margaret Thatcher and that weird bit of cosplay that just went down in Brasilia.
Help me out here with some fairly recent examples. Would you label all these people as Conservative?
Barry Goldwater
Bill Buckley
Ronald Reagan
Jack Kemp
George Will
Donald Trump
Hmm, I should restate some things.
The word "radical" has a clear meaning. Radicals are people who want to make large changes right now.
The word "conservative" doesn't denote a particular set of beliefs, but the desire to keep things mostly the way they are at the present moment, or to revert them to how they were at some prior time. This means it doesn't have a clear meaning, because often one party wants to keep things as they are, while the other wants to adopt a policy that was national policy sometime in the (possibly distant) past.
So these terms aren't really opposites. There are radical conservatives, who want to make big changes right now to revert to some (possibly imaginary) earlier time, like the Nazis, the Ku Klux Klan, or maybe Margaret Thatcher (I don't follow British politics).
I'd call all those people you listed mostly conservative, but "conservative" meant something different in Reagan's time than it does today. For instance,
- In the 1950s, conservatives were against free speech if it might be communist propaganda. Today, radicals are against free speech in general. So you could call them "conservative" because they want to blacklist and silence people as was done in the 1950s.
- In Woodrow Wilson's day, conservatives thought America should worry about America, while Wilsonian progressives said it should take on poverty and bad governments in other nations, preferably working toward a world government. After World War 2, the Marshall Plan was definitely radical and interventionist, but was supported by "conservatives". In Reagan's day, conservatives felt that America had a responsibility for the rest of the world, while radicals felt America should stop interfering with other nations. Today, conservatives again think America should worry about America, while radicals again say it should take on poverty in other nations, preferably working toward a world government.
The term "liberal" is clearer than "conservative". Historically, it refers to principles of liberty described by Enlightenment thinkers, especially John Locke in the 17th century. The main point of what I wrote is about the meaning of "liberal", not about the meaning of "conservative".
The confusion over the meaning of "liberal" is probably due to the Civil War. The Old South considered itself liberal in the old sense, and yet was a hierarchical society that didn't extend the freedoms it praised to slaves or the lower classes. I would argue that it wasn't really liberal. There was no freedom of speech. Abolitionists or workmen trying to claim equal rights before the law would get beaten up. And the North was both radical and liberal (in any sense).
You could argue that the term "liberal" today has come to mean simply people who want radical changes. I object to people doing that, because I believe personal liberty is important, and the people who want radical changes today are generally opposed to personal liberty. I don't like it when people call for censorship, government control of sex and gender, racial separatism, and the elimination of gun rights, free trade, and private property, and call it "liberal".
So my use of the word "liberal" isn't as objectively correct as I implied. I just think it's more honest.
New information to me. Thanks. I’ll look into it some more.
Succinctly put. I just spent 5 minutes typing far more to say the same thing in another reply.
The Iroquois and Mohicans might have felt that way.
Indeed, they did from long before any Europeans showed up. E.g. when the first French voyageurs arrived in the Great Lakes they found that the Iroquois and Algonquin confederacies had been engaged in a mutually-genocidal war for something like a century, with the core issue being which tribe was entitled to live where.
Similar examples are found throughout history around the world going as far back as we have any historical records. It does seem as if that is one of those fundamental gut human fears.
If resources are scarce, people are going to fight about resources. Where resources are not scarce, people might fight over ethnicity and ideology, but also might not. Western societies tend to deliver abundance, and enforce tolerance.
I know. The Cree didn’t get along with the Ojibway and those folks didn’t get along with the Lakota Sioux either.
This universal problem seems like it deserves more attention than it gets.
One reason for the Norman conquest of Ireland was a petty king inviting in Anglo-Norman mercenaries to help him in a political row with another petty king, a row which escalated up to the high king. Petty king #1 seals the deal by marrying his daughter off to the leader of said mercs, with promises that the merc will be king after him. The mercs then decide they like the place, settle down all over, and set up as local lordlings. Local chieftains and kings who are fighting each other decide that having the new guys on *their* side whacking their enemies over the head is a great idea.
Then the king in England decides "hey, my former vassals may be getting ideas above their station, time to remind them who's the king" and claims lordship of Ireland. Fast-forward the Eight Hundred Years 😁
It happened all over: A and B are at each other's throats, C turns up, A and/or B thinks this is great opportunity to get C on their side, eventually C ends up owning the place. Then it's hard to kick them out again:
https://www.youtube.com/watch?v=4h0J6VrHuQE
The same happened in Central/South America. I'm certainly confusing which one was which, but the Mayas and Aztecs were at each other's throats. Then Spanish conquistadors showed up, the Mayans invited them in to crush the Aztecs (or the other way round?), and guess who ended up owning the place.
Will the international drug trade inevitably collapse thanks to future machines that can synthesize any drug from simple precursors? I ask because I just read a report describing how advances in chemistry over the last 15 years have made it possible to synthesize methamphetamine from more common types of chemicals that governments have difficulty tracking.
What's the market case for anyone ever developing a "future machine that can synthesize any drug from simple precursors"? It may be technologically feasible at some point, but it's going to be competing with an extant global supply chain that can connect you with a factory that produces the particular chemical you want at scale and with full economies of scale, now with overnight delivery. I can see niche applications, but they may not be enough to finance the development and it may not be enough for hypothetical future you to get lost in the crowd when you buy one for your home drug-peddling business.
I remember the early hype about 3-D printers as the inevitable, omnipresent home appliance of the future, and the speculation about how that would mean e.g. gun control was futile because anyone would be able to print a Glock or an AR-15 on demand. I also remember what happened when that dream met the reality of Amazon, leaving 3-D printers as mostly hobbyist toys that are nowhere near capable of printing serious guns, with a handful of high-end industrial machines that could *maybe* do so but not at a competitive scale even on the black market.
You mean illegal drug trade, I assume? Doesn't seem likely. It's very unlikely that any time in the next 100 years it will be possible to dial into a simple machine the structure of some random small molecule and have it synthesize it from whatever random feedstocks you can source from your local grocery, hardware, and animal feed stores. A more plausible scenario is that it becomes possible to type your structure and desired precursors into OChemChatBot and have it outline a plausible synthesis.
Of course, whether the synthesis works or causes your garage/backwoods lab to explode in blue-green fire because OChemChatBot hallucinated the answer will remain a business risk.
I think it's more likely that the tech will be available, but giving the recipe to your Synth-o-matic will be illegal. Defense Distributed will be an indicator.
Was Simon Browne[1] having a PNSE[2]?
[1] https://www.lesswrong.com/posts/Fy2b55mLtghd4fQpx/the-zombie-preacher-of-somerset
[2] https://slatestarcodex.com/2019/10/21/the-pnse-paper/
I hadn’t read either of these before and they created a lot of new tracks of thought and a tall stack of new reading I want to do.
Right now I'll only say that my take on Daniel Ingram, from watching videos of people interviewing him, is that he is playing the long con. Sorry, David: if you are really enlightened this won't bother you, and if you are running a con it probably won't either.
Sounds like a reasonable hypothesis, except for Browne regarding his “enlightenment” as negative instead of positive
In the simplest model, yes, down to zero. (See Robert Shiller's 2007 Financial Markets course on Open Yale Courses, which is phenomenal, btw.)
I'm trying to help a young guy I know find a job that's a good fit. He's in his mid-20s, and has ADD & Asperger's. He's distractible, and a bit odd -- on the other hand he is friendly, honest, hard-working, and quite bright. He's got a college degree in computer animation, and knows how to use Blender, Photoshop, and some of the lower-end animation and video editing software. He is also competent, though not expert, with the basic office suite apps -- Word, Excel, etc. -- and has had a couple of courses in Python. He has built a couple of simple websites.
He's been working at a hardware store stocking shelves and helping customers for several years. He is well-liked there but makes little more than minimum wage, and really needs to earn more. It seems to me that his computer skills should help him get a job that pays above minimum wage, but I can't think of a job that might suit. But if I were opening a store on a tight budget, and was not very computer literate myself, I'd love to have somebody like him who could help me get oriented with using a computer for the store, could build a simple website for the store, could explain spreadsheets to me, could make attractive notices in a nice font to post somewhere -- things like that -- and then later help me unload boxes and put the stuff on the shelves.
He is willing to take one or 2 courses if improving certain crucial skills would make him more hirable for jobs that pay at least 50% more than minimum wage, but he's clueless about what courses to take.
One last thing: It would not work for him to be self-employed. He needs the structure of a regular job.
What ideas have you got for this amiable young oddball?
He could start as an office assistant at a small law/accounting/insurance firm, etc., develop his computer skills, and either grow with the business or move on to a larger firm for a better position after 6 months to a year of experience.
It sounds like his social skills might be the core problem? If you're only looking for "above minimum wage" and he has a college degree, this seems like an extremely easy problem to solve. Even if his degree were in something completely unusable, he should be able to land a generic office job somewhere. Adding in what could be summarized as "IT skills" to a lot of jobs should also open up a small world of Help Desk or basic tech department jobs. 50% more than minimum wage should be easy going either route, with reasonable expectations of 2X+ minimum wage at least as growth potential.
That is to say, if I'm reading you correctly that he's got social interaction issues holding him back from pursuing something more obvious, then it's not so much his skill set that's in question, as where he can fit in. Assuming that, it seems that his best bets would be to improve his work skills to the point that a large tech firm (or local equivalent if relevant) would want him for his skills and would be willing to overlook the other difficulties, or for him to look for a smaller company where there would be some clearly missing skills (probably general IT/MS Office) that would be willing to give the guy a chance.
If his social skills are strong enough, then the other option I would suggest is doubling down on one or more specific aspects of his degree or tech knowledge and applying specifically for those kinds of jobs. I don't know computer animation as a field specifically, but it sounds to me like a field that is hard to get into because it's so niche (geographically dependent, limited general use for most companies). If so, he may need to figure out all of the related fields that have some kind of crossover, and apply there as well.
"But if I were opening a store on a tight budget, and was not very computer literate myself, I'd love to have somebody like him who could help me get oriented with using a computer for the store, could build a simple website for the store, could explain spread sheets to me, could make attractive notices in a nice font to post somewhere -- things like that -- and then later help me unload boxes and put the stuff on the shelves."
Some of these things you only need once, and you can buy them separately. There are Word and Excel courses for beginners. There are companies that will create a static website for you cheaply.
I do not think it is realistic to look for a tailored "stock-keeper / Python web developer" role. That is very unlikely to happen... and even if by a miracle it happens, he would lose all the leverage that comes from being able to say "I quit", because it is unlikely he would find a job of the same type again. So he needs to choose one or the other.
However, that does *not* mean that he needs to make the choice in advance. He can simply apply to both types of jobs simultaneously, and take the first job offer that is an improvement over his current position. But he needs to remember that the two different roles require two different personae. When applying to a stock-keeper job, do not emphasize Python and Photoshop. When applying to a Python development job, the experience in stock-keeping is only relevant in the sense of "can keep a job".
*
So, the first option is to try becoming a better-paid stock-keeper. Write a CV that displays (1) previous experience with stocking shelves, and (2) knowledge of office applications, that is: Word, Excel, e-mail. (Everything else is dark magic that stock-keepers are not supposed to know.) Send this to shops, both large and small. Or maybe let a job agency do it for you. The image you are selling here is "an intelligent stock-keeper who can also do the related administrative work". (Which might be a reason to pay him better than a mere stock-keeper.) In the longer term, possible advancement to a supervisor position, or maybe a purely administrative one. Or the company might immediately offer an administrative position instead.
*
Another option is to put the focus on Python and web development. The problem with developing simple websites is that a company only needs such a thing *once*, and then maybe an update a few months later. That is not enough to justify a full-time job. He would need to develop for many companies, but if self-employment is not an option, he needs an employer who does this kind of business. But in 2023, such a company will probably use some content management system and create new websites by clicking "create web site" in the user interface. The ability to create a simple website from scratch is only useful as a stepping stone towards something more complicated.
So the image here is "a young person with basic IT skills". Apply for a position of tester or junior Python developer.
Before applying as a tester, download https://www.selenium.dev/ and write a few Python scripts. Try automating something simple, like: log in to a website, go to some list and verify that an item with certain properties exists, maybe also select that item, perform some action, and verify that you received a success message. You can do this over a weekend, and it could make a dramatic difference over "I have never done anything testing-related". As a web developer, I assume you already know HTML and CSS; also learn how to write (the simplest) XPath expressions to use in the Selenium scripts. A sketch of what such a script could look like is below.
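(A minimal sketch of the kind of script I mean. The URL, element IDs, and credentials here are made-up placeholders, so adapt them to whatever page you practice on:)

```python
# Minimal Selenium sketch: log in, check a list for an expected item, act on it.
# The URL, element IDs, and credentials are hypothetical placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()  # or webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("test_password")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

    # XPath: list items under #results whose text mentions 'In stock'
    items = driver.find_elements(
        By.XPATH, "//ul[@id='results']/li[contains(text(), 'In stock')]")
    assert items, "expected at least one matching item in the list"

    # Select the item, perform the action, and check for a success message.
    items[0].click()
    assert driver.find_element(By.CLASS_NAME, "success-message").is_displayed()
finally:
    driver.quit()
```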
Advantages of being a tester: requires less knowledge than applying as a developer, and you can transition to a developer later. Disadvantages: many software companies do not use testers, because the testing is done by the developers.
To apply as a Python developer... I am out of my depth here; I am not a Python developer. Learn how to use "venv", "pytest", https://jupyter.org/ . Maybe ask someone what the most popular Python frameworks are these days, and write something simple in one of them. Notice that Python is useful not just for web development, but also for writing command-line scripts which e.g. process JSON files or find something in a database. To give a flavor of "pytest", a toy example follows.
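(Again, just an illustration; the function and data are invented purely to show the shape of a pytest file:)

```python
# test_stock.py -- run with: pytest test_stock.py
# Toy example: a tiny JSON-processing function plus a test for it.
import json

def count_in_stock(records):
    """Count records whose 'stock' field is greater than zero."""
    return sum(1 for r in records if r.get("stock", 0) > 0)

def test_count_in_stock():
    data = json.loads('[{"sku": "A1", "stock": 3}, {"sku": "B2", "stock": 0}]')
    assert count_in_stock(data) == 1
```

pytest automatically discovers any `test_*` function in a `test_*.py` file, so this is the entire boilerplate.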
Don't overthink it. The idea is that you spend a year or two in your first IT job, and then you can apply elsewhere and ask for a significantly higher salary.
Thanks for your detailed advice. I was thinking that it would probably help this guy to meet with someone whose job it is to assess somebody's skills and tell them which jobs are a good fit, which of the good-fit jobs are currently hiring, and which skills, if mastered, would make the person more hirable. I looked online and found ads for places that advertise doing exactly that. For instance, one called STEM Career Services: "STEM Career Services retains a panel of expert career coaches, each with invaluable experience in consulting, biotech, pharma, federal government, nonprofits and more – all ready to help you find the perfect job." Is this bogus, or a real service? It seems like there should be places offering actual help of the kind I have in mind, because there must be a need for it.
I have never seen a service like that, so no idea. Perhaps try to find a review online? Also, it could be something in the middle: a mere job agency that tries to look more impressive than it really is. Which might still be a good outcome.
I only have experience with the job market in Slovakia; I have no idea whether it works similarly in other countries. Most job agencies do not specialize in IT; they provide jobs for everyone. That means they have a few hundred job positions they hunt for, and maybe five of them are IT-related. So it does not make sense to give them a too-detailed list of your skills and preferences; they will just give you whichever of those five options best matches the keywords you mentioned. Might as well say "Python developer" and save time. There is also one job agency specializing in IT, but they only want self-employed contractors.
I always thought that something better should exist, and I am not really sure why it does not. Maybe Slovakia is just too small a market. Or maybe it is a chicken-and-egg problem, like trying to build a new Facebook. It does not matter how good an idea you have; it is most likely to fail, because people want to be where other people already are. Imagine you start a new job agency tomorrow. Now what? Companies reject you because you have no job candidates waiting. Candidates reject you because you have no job offers waiting. So you either fail, or you desperately take anything you can, and become the general job agency with a hundred company clients and five IT positions. This is just my guess; I have never tried it.
Then there is coaching, which is a different type of business: you pay them money, they give you lessons. Can they actually find you a job? Probably only in the sense that if you have skills, any job agency would find you the same job. Maybe they cooperate with a job agency or two, and send them the CVs of people who completed their courses. But finding you a job is *not* their core business; their core business is giving you lessons for money. You might as well take lessons from someone who doesn't call themselves a "career service". But again, just a guess.
Thanks. Can you give me an idea how someone could learn about server maintenance? Is that the sort of thing you can learn with an online course? I do not work in tech and don't know enough about this sort of thing to advise him, or to judge whether he's up to mastering server maintenance.
Finally got DALL-e to produce an image of Shrimp Love Me, Unaligned AI's Fear Me. It's here if you want to have a look: https://i.imgur.com/fBwwZSq.png
What was your prompt?
I had to make 2 images and photoshop them together. Main prompt was "Steampunk style: A man standing in water is dismantling a huge machine. Many shrimp are swimming towards him." But DALL-e just would not do the damn shrimp no matter how I phrased it. I tried mentioning them before the huge machine, but then I got machine versions of shrimp -- sort of robotic metallic ones. I also tried editing the original by erasing a lot of little areas and then putting "swimming shrimp" as the prompt for the edit. DALL-e simply filled the erased areas with what had been there before, or with plain blue water. So finally I just did a separate image with prompt "Many pink shrimp are swimming towards the center of the image," which DALL-e rendered just fine, and I photoshopped the 2 together. That's the first time I've used Photoshop for a DALL-e image, and it definitely made the process less fun. There's something magical about just using what DALL-e gives me, but in this case I hadda have the shrimp.
Plug: I suppose open threads are plug friendly? It's unclear if there's gonna be another classified sometime soon.
I've written some things over the last few years and finally decided to put them on the internet:
https://medium.com/@nickmc3
I have a couple related to AI (The Ol' Job and What Dreams May Come), The Shell Game relates to economics, and there are some others like Gourbain's Flux Capacity Theorem that I think some people round here might like. Also the pinned, No Hot Take Under the Sun, is short and directly inspired by something Scott said once.
Any thoughts or feedback much appreciated!
> God created each one of us to live out our lives in one or another cognitive and ideological bubble, and though we may paw desperately at the inside of the slippery surface, there is no outward progress. Every inch up the wall just rotates the sphere around us.
I am the worst sort of reader, the sort who comes up with obscure exceptions without addressing the gist. Anyway, have you read about craniopagus twins, i.e. conjoined twins joined at the head? I have read that their cognition seems to overlap somewhat (e.g. https://en.wikipedia.org/wiki/Krista_and_Tatiana_Hogan). Can their minds fit into the bubble metaphor?
That's really interesting! I imagine they have a lot in common and probably don't disagree with each other too much on political or social issues, but who knows. It'd be interesting to be physically and cognitively attached to someone you can never agree with on anything. If they see the same things, but perceive them differently, it'd just go to show how important perception is in terms of interpreting and understanding the world.
When I look at AI-generated art, it generally has an aesthetic that I would describe as "Tartarian": https://astralcodexten.substack.com/p/whither-tartaria
If, in general, the "Art World" hates Tartarianism and everyone else likes it, this might be an opportunity to break out of a local maximum and mainstream Tartarianism again. AI could allow the capitalists to cut the Art World out of the loop and sell people the styles they like. Of course, this would likely depend on AI design being able to generalize to things like woodworking if it is to compete with IKEA.
I doubt it. The commercial art for commission world has always been Tartarianism meets Furry Porn.
The Art World is filled with weird stuff like bananas taped to the wall because it's defined in opposition to the large and thriving scene of beautiful, and thus actually popular, art - not because there is a shortage of actually beautiful stuff around.
AI Art will make changes within that beautiful art scene, but it won't change the relationship between the pretty scene and the Art World. To the extent that AI Art actually enters the Art World, it will do so by continuing its existing un-pretty styles.
I don't expect AI art to enter the Art World at all; the opposite, in fact. I propose that AI could break the Art World's influence on commercial art/design. I have two end tables in my living room: one is a Brutalist IKEA piece, while the other is a more Tartarian design that I inherited from my grandmother. What if AI made it possible to compete with IKEA by selling Tartarian furniture? The Art World might end up becoming a hermetic scene with no broader influence.
It seems that you are saying the Art World produces ugly art because pretty art is an oversaturated field? I can see that, and it makes sense to me. Given that, then the only real chance to break into art (or fashion, etc.) is to make ugly art. That would apply to AIs as much as people, so yes, AIs would have to make ugly art that is somehow novel in order to get recognized. Being from an AI seems enough right now to be novel, though that will likely change if there are free or cheap ways to get AI to create art on demand.
It's not oversaturated so much as low-status.
If you're producing pretty art it means you're just one of those low-status artists who goes around making decorative landscapes for lower-upper-middle-class people to buy and hang in their houses. Maybe you have a small gallery in a popular tourist area where people wander in and consider which of your pretty landscapes would look best in their dining room.
This means that your whole career is dependent on outsiders, not artists, which is adjacent to being an outsider yourself.
I agree that's the case now, but I'm more interested in how we got here. Renaissance artists painted and sculpted pretty things, and were deemed very high status for doing so. It's not a given that pretty = low status.
I'm blaming oversaturation on that switch.
Do you think there could be a large-scale society (at least several million people; EDIT: hundreds of thousands should suffice to make this interesting - the main aim is to avoid very small social structures, which are much simpler to handle) which worked on more or less the same principles modern western democracies work on, without modern technology?
What I mean by more or less "same principles" is something along the lines of close to universal suffrage (specifically with women also being on more or less equal footing with men), no slavery (or de facto slavery), high level of individualism and individual rights (the negative rights, i.e. not expecting state-run social welfare systems but expecting a society where people are more often than not free to do whatever they want to do as long as they don't interfere with the same freedoms of others).
Are there any real-world examples?
What is the minimum amount of technology required for such a society in your opinion? Is there in fact a minimum?
Bonus question: What do you think were the societies closest to this in each era/area of the world?
Note: I do not count philosophy and social institutions as technology (although in some sense they are a very important piece of technology), so I allow even the rather implausible societies you'd get if you could magically transfer modern people to the world 20,000 years ago, had them all forget everything about their physical technology, and replaced that knowledge with survival skills (so that such a society doesn't just die out in a week).
I can think of 2 close but not quite examples:
1. Medieval Switzerland. Well, the last canton to give women the right to vote did so in 1990 (rather, it was made to do so by the federal court), but an alt-history medieval Swiss confederation where even women get a say does not feel like that big a stretch of imagination. And the low level of centralization seems to overcome the technological burdens associated with democracy in a large-scale society (in fact, it seems to work better than most countries even today, and individual Swiss votes often carry meaningful weight).
2. Medieval Iceland, kinda?...something between direct democracy and a "libertarian anarcho-feudalism"...women were still not exactly equal to men there either (perhaps better than in medieval Switzerland, worse than anywhere in Europe today).
I think the big question that needs to be answered is 'what are the military requirements of the society?' Most of the social details that matter for determining the principles involved are going to be determined by the answer to this question. With no military needs, there's very few things standing in the way of an ideal society. The two realistic military requirement scenarios I can think of are 'living alongside one or more potentially hostile peer societies' and 'the potential threat of an overwhelming number of hostile outsiders' (what we might label based on recent threads on ACX as 'SN risk').
There's a lot to unpack here. I define a 'peer society' as one with a similar general technological level, resource base, and population (but not necessarily the same). While in the long run I take as true that the more free society will advance more economically and technologically, that's meaningless if the current technology level allows the less free neighbor to win militarily in the short term.
With modern technology, we can produce and transport food and other essentials efficiently enough that we can survive on a permanent professional military and still maintain modern values. If, on the other hand, you need almost all the population producing food and other necessities, then permanent soldiers are a serious drag on your economy (unless you use them to pillage your neighbors, which modern values won't allow you to do). The first smaller question, then, is 'do you consider the principles of 'modern Western democracies' to allow conscription and/or a period of compulsory military service?'
Different military technologies and social organizations determine what permanent and temporary military capabilities you can expect your society to be able to call on. To go with one of the more obvious real-world examples, if your small landowners are practically born with a longbow in their hands for hunting, it's a lot easier to call up a competent militia capable of countering professional armored enemies than if you have to train them from scratch.
EDITED TO ADD: What's interesting is that the ebb and flow of military technology goes both ways. It's probably good for your hypothetical modern values society if useful weapons are commonly owned, either because your farmer's blade-on-a-stick farm tool is not significantly worse than the other guy's spear, or because longbows or muskets are common hunting weapons on the frontier. It's probably bad for your hypothetical modern values society if the battlefield is dominated by cavalry or other animal-based troops (chariots, elephants, etc.) This has several interesting implications: first, that your society could become worse off as technology progresses if the new tech favors a less democratic means of warfare. Second, that the geography of your society also needs to be taken into account, ie bad terrain for horses may be good terrain for a modern values low tech society.
I think conscription in the ancient city-state style is ok, as long as the society only uses it to defend itself (an empire with a city state at its core which is very free but lives off the work of the people it conquered does not count as the society I am looking for). By the way, some modern western democracies still have this kind of conscription (Switzerland for example, Germany until very recently, probably more examples exist).
I would say that a professional military paid by some form of taxation (or even something more voluntary, but that is out of scope) is ideal...provided that you structure it carefully in such a way that it does not take over the society like it gradually did in Rome after the Marian reforms. But like you said, it is typically not an option for a pre-modern society, so limited conscription stays within bounds for such a hypothetical society.
To answer your question, I think there is a sweet spot at which a lasting modern values society is possible with a low tech base, at least as far as military tech goes. Switzerland is probably close to the ideal case. You want enough natural resources (especially farmland) to be at least self-sufficient in the necessities, but not be prosperous enough to be an obvious target. You want natural barriers to army movement (especially horse-based troops) but you want trade to be possible. You want to be far away from any steppe or steppe-like geographical feature that could produce nomad hordes, or at least ensure that by the time they get to your natural barriers you've had time to mobilize your farmers to make trying to pillage an expensive proposition not worth the rewards. At this point, modern values comes down to reducing the amount of infant and maternal mortality, reducing the risk of disasters, maintaining trade with neighbors (in ideas and culture as much as goods), and dealing with the cultural problems inherent to religious friction.
Good points. I was thinking along the same lines but I underestimated the military aspect, I think, especially the steppe nomads. In fact, from what I can tell (and that is frankly not much, so feel free to correct me), very early Russia or at least parts of it were fairly "liberal" for their time. Places like the republic of Novgorod seemed to have a lot of potential in this respect. But it seems that the longest lasting impact of Mongol invasions was probably the way the Russian princes became a lot more like the Mongols themselves and places like Novgorod eventually went up in flames, being replaced by an authoritarian society that has not really changed that much until today. But maybe Novgorod was also pretty bad and I just assume it wasn't because they were a merchant republic very much connected to the Hansa.
Switzerland seems to meet all your requirements except for the birth mortality - modern medicine is probably the most important technology for the emancipation of women. The Swiss also seemed to fare better than most in dealing with the 16th century religious conflicts. Perhaps it also helps to have a somewhat more rural society. Most of the free places in the past were economically kind of backwater. Then again, Italy was very rich and probably more free than most places but probably quite a bit less free than Switzerland (my impression is that those places were mostly oligarchies if not outright monarchies). Maybe it has less to do with money and more with geography (which leads to less money). It is much harder to set up a more authoritarian regime in a country which is full of mountains.
So mountains and a distance from the steppe seem to be good candidates for two necessary conditions for a low-tech society to be liberal. And modern medicine (or something close to that, I guess that you could have discovered something like penicillin by accident even in the middle ages?) without which women are unlikely to achieve any significant emancipation.
If you have better tech, you might not need the mountains. Steppe warfare becomes obsolete and land becomes less valuable on its own (and costlier to conquer).
I wonder if the mountains are that crucial after all... the Low Countries have also been very close to being liberal (probably somewhat less so than Switzerland) for most of their history. They were even very rich from a certain point onwards (though boggy swamps before that). They were definitely very far away from steppe nomads (although it is flatland all the way from Russia to the Netherlands, it is just a bit too far away from Mongolia, I guess).
I've been thinking this over. It ultimately comes down to how much power you need to maintain your society from stresses both inside and outside. I would hope that having a democratic government would do a lot to alleviate most of the internal stresses from competing for power; you don't have elites raising private armies when that won't work to give them control over the levers of society.
War and religion are the two outside context problems your society has to contend with, in that you can do everything 'right' and still lose because of factors entirely out of your control. As I think about it, religion is the thornier issue because of your adherence to modern values: your values mean you can't stop your people from adopting religious beliefs which may be against your values, at which point you have internal stress again. Switzerland was very lucky in having Christianity, even though it did experience some conflict.
I think the Low Countries were at their high point when their more powerful neighbors were distracted by the newly available ability to colonize the rest of the world. Why fight each other over Holland when you could more profitably establish control over lands other than Europe? Again, that's something you can't take for granted. I don't assume that mountains are the only natural barriers; I think the Channel worked very well for England, in that it was wide enough to make invasion very difficult, yet narrow enough that trade with the continent could flourish. Japan, on the other hand, was a bit too far away from the other east Asian countries (though there are obviously other factors as to why those two turned out differently).
Offhand I'd say you've got cause and effect reversed here. It's technology that enables centralized oppression, and the concepts of republicanism and individual rights are a response to that, an effort to preserve the pre-technological style of living that humans evolved to prefer.
Without technology, it's not really plausible, nor is it of interest, for several million people to coordinate their actions to accomplish vast centralized goals, from building cities to making war, and so having your life hijacked by some far away strange authority doesn't happen naturally. A hunter-gatherer Stone Age society is inherently pretty liberal, because anyone who doesn't like the local social order can usually just walk away and fade over the horizon. Stuff gets decided by consensus, with a leaning towards the people who've been around the longest and/or have made good decisions before. There's not much concept of a franchise, because you wouldn't decline to listen to even a kid, if the kid had something useful to say.
Which is not to say there aren't Stone Age tribes that are oppressive or violent, of course. Human beings are of a nature that could fuck up Paradise if offered to them on a platter. But oppression and violence on a million-person scale -- and the development of the concepts of republican self-government and individual liberties as a bulwark against them -- requires technology.
"Without technology, it's not really plausible, nor is it of interest, for several million people to coordinate their actions to accomplish vast centralized goals,"
I suppose it depends on what you and the OP mean by "technology" - if it includes any type of tools, then yeah. Without tools (aka technology) it's not clear that we are in any way talking about humans.
And is the "several million" a scale or a metaphor for "lots"? A bee hive has about 60,000-80,000 individuals. Is the hive structure that holds the honey a technology?
I suppose the biggest Neolithic cities maybe got to 100k, but still a lot of "technology" involved.
Christianity seemed to be able to coordinate a centralized mission on the scale of millions, but roads and ships were a necessary "technological aid" - even before the co-option of the pagan war technology.
several million - yes, that is a metaphor for many. I wanted to exclude tribes of tens to hundreds of people, or even societies in the thousands. But hundreds of thousands would have probably been enough; millions is way too many for ancient civilizations.
technology - basically anything short of the technology level at which societies we recognize as modern appear in actual history (i.e. something like pre-late-19th-century technology, more or less). The aim is to see if there is some minimum technology required (beyond that which allows complex societies of hundreds of thousands in the first place; basically agriculture is a must, writing probably also, everything else is optional). Or, as Civilis points out above, whether there are some other nontechnological conditions which can compensate for the lack of technology (tall mountains, for example).
Well, a social structure of a few hundred people is inherently simpler (and "easier") than that of millions or even tens to hundreds of thousands. That is why I am mostly interested in the latter. Now, society != state, so they do not have to be a part of a single state but they should not exist as atomic tribes.
I am no expert, but I think it was often actually much harder for someone to leave one tribe and join another. In fact, it seems that in many tribal societies expulsion is the ultimate punishment (rather than death). You are mostly protected by a network of your relatives and friends; as an alien without any of those connections you are basically fair game, and nobody will care too much if someone robs you and kills you. You might ask another tribe if they'd take you in, but since your previous tribe forced you out you are automatically suspicious from the beginning, so they are more likely to refuse unless you have something they don't and want (ideally skills, so they cannot just rob you).
In general, I'd rather think about examples of agrarian/settled societies - basically my "hidden" question is something like: "Could there have been a society that would over time develop into what we recognize as a modern 'western-like' society while being very familiar to us in its structure all the way since its inception in the distant past? And if not, why? Is it because of some crucial technology?"
Cities have been around. Why don't you explore that?
And I'm not sure of your artificial cut off of "modern" technology.
Is what you call liberal society possible before Christianity and the radical notion of loving your neighbor and aspirationally even loving your enemy? What is the feature you are really thinking about "pluralism", "cosmopolitanism" - these are features of cities.
Ancient "Water Totalitarianisms" like Egypt or China might be a counter-example to your assertion that stone age societies can't coordinate oppression, though I don't know if million-person scale was typical for them or not.
Though I agree that technology typically has nothing to do with a free social order. In addition, I also want to add that the "Liberal social order" contains its own fair share of the exact same injustices and unfreedoms of past social orders, and then some. Like, congratulations on "freeing" women out of the need to marry to live..... and into the need to work to live. Double the workforce for the wage payers (and for the exact same amount of total wage), half (or less) the workforce for raising the kids. That's some really fucking impressive Civil Right bullshit you did right there, feminism.
They might be counter-examples had I not prefaced "Stone Age" with "hunter-gatherer."
Hopefully it has not only just occurred to you that the circa-1970 first-wave "feminism" had rather less to do with actual female liberation and rather more to do with the (at the time twenty-something) male Boomer desire to loosen up the sexual mores that kept their potential sex partners' legs closed until there was a ring on her finger.
It hasn't occurred to me just now that feminism is a hack, of course, but it's always worth mentioning to undo the decades of "Feminism == Equality" propaganda.
It's not entirely fair to blame male desire for feminism's tendency to be The Slut Manifesto, it had a role of course but women are not string-controlled dolls entirely programmed by male desire. And for every male with that desire, there is (at least) 1 equal and opposite male (her father, brother, future suitor\crush) with an equally-vehement opposite desire.
>1970 first-wave
I believe the established terminology calls that second-wave.
No, of course they're not, which accounts for the qualifiers in my statement, but unfortunately white knighting exists because it works, at least often enough to keep it going generation after generation.
Yep, you're absolutely right that 1960s fathers were appalled by the movement. That's one reason Trust No Square Over 30 was a thing, too. But few cohorts are as energetic and single-minded as twentysomething males looking to get laid. That's why we (used to) draft them.
>established terminology
Actually, I think the established terminology for what Carl is talking about here is "the sexual revolution," which overlapped with second-wave feminism but was far from identical with it. (And yet... the idea that Valerie Solanas, for example, was primarily interested in developing an ideology to help satisfy the sexual desires of boomer males is indeed an intriguing one.)
I mean, it's something current feminists explicitly take credit for whenever it's mentioned, so it's a distinction without a difference.
> Valerie Solanas
That piece of shit was far from the typical feminist, and you owe me... things for reminding me of her. At least 3 photos of kittens or their equivalent is what I demand.
I think the scale of millions was a bit too much for me to ask (I was also thinking about examples which were likely smaller); hundreds of thousands should be enough. The main reason behind that restriction was to exclude very small societies of a few tens to a few hundreds of people.
As for women having to work or marry or anything to survive - I am less interested in the modern welfare state (something that obviously requires a huge economic surplus so it is trivially not available to almost any pre-modern societies, definitely not universally), more in what is considered "negative" freedoms, i.e. basically freedom from oppression. I.e. "you have to figure out your own means of survival, tough luck" is allowed for the sort of society I was thinking about, whereas "you have to do x because you are a woman and if you don't we will put you in jail/kill you/do something else that is bad to you" or "you cannot own a field because you are a serf/woman/slave/..." is not allowed (or rather the more of those things the society has, the further away it is from that hypothetical society of mine).
Negative vs positive freedoms is a sometimes useful distinction but it frequently degenerates into nonsense. Like, a woman today has a negative freedom in not being forced to marry to live, but that translates to being stripped of the negative freedom of not being forced to work to live (i.e. being forced to work to live). The "pre-liberation" situation was the dual of that : she wasn't forced to work, but was being forced to marry (with the significant additional leverage that - if I'm allowed to be that crude - she was in the position of "Employer" rather than "Employed". She is the one who the man seeks, rather than the reverse). I don't understand at all why 1 negative freedom is better than the other.
Aside from that, and back to your original question, I think you have your hard limit when you consider that all modern "liberal" societies have barely-replacement-level or dwindling populations. I won't pretend I know why that happens, but clearly there is something about liberal societies that makes it keep happening over and over again. Modern liberal societies solve that using globalization: they simply "import" people (directly via immigration, and indirectly when those immigrants themselves have more children than the typical family in the host country) and "export" work via offshoring. Without globalization, a liberal society will shrink as fast as its birth rate dictates.
The question, I think, now reduces to: what's the minimum level of technology to make globalization happen? I think it varies depending on who or what you want to globalize. For people, not much at all. Basic 1000s-level ships are enough; they were clearly enough to replace the populations of 3 continents during the European age of sail[1]. For work and information, though, you basically have to have at least the telegraph, or **extremely** good management\financial structures that can keep a company going after its work has been partitioned into month-separated areas of the globe.
[1] I know it happened in the mid-to-late 1000s, but a good deal of that was just navigational knowledge building, i.e. people learning how oceans work. We have evidence pointing to the Chinese knowing how to reach the Americas in the 1300s. <handwave> Polynesians knew how the Pacific works well enough to settle it centuries before, though perhaps you need more than that to settle continents rather than islands </handwave>
"Negative freedom of not being forced to work to live"
This has never been a thing in the entire history of the universe and is in fact violative of the second law of thermodynamics.
That's pedantic. The "Work" in my comment is to be understood in the sociological sense, and under that definition it's very much a thing since (at least) the beginning of agriculture.
> A hunter-gatherer Stone Age society is inherently pretty liberal, because anyone who doesn't like the local social order can usually just walk away and fade over the horizon. Stuff gets decided by consensus, with a leaning towards the people who've been around the longest and/or have made good decisions before.
Partly right; I agree that democracy is an attempt to implement stone age consensus building on a larger scale. But you *can't* just walk away from a hunter-gatherer tribe. Long-term survival requires other people, and joining a neighboring tribe is going to be tough; they're going to assume that you did something awful and got yourself exiled.
Depends. If you mean "our traditional enemies" if things have gotten that far, sure. But I believe hunter-gatherer societies generally consist of small mobile bands of a family or three that exist within a larger tribe of somewhat related families that may occasionally get together for special occasions. In the latter case, I expect exchanging between bands happens all the time anyway, for reasons of trade, opportunities, mating, et cetera. I don't think it would be super hard to say I'm fed up with the old man, me 'n' my mate are going to live with the in-laws. You also have to consider that you might easily be able to start your own band if you have a few like-minded friends.
Rome was not a democracy but probably had the state capacity for a semi-modern democracy. However, in general I think that the crucial technological invention is the printing press.
I think about 20% of the Roman Empire's population was enslaved.
Printing press sure.
But how about the hundred-year project of building a cathedral?
Oh sure the Roman Empire was not a modern democracy - I think they had the technological tools to build one.
Medieval kingdoms were much less administratively capable.
Is democracy really a feature of technological advances?
Pluralism and cosmopolitan duty of hospitality is the psychosocial prerequisite. That doesn't seem to have anything to do directly with technology but with living together in cities.
This is a weird question because it's not obvious to me why you couldn't have modern liberalism without modern technology. If Earth was hit by a contrived EMP that permanently knocked us back a few centuries, I don't see us needing to undo universal suffrage.
I think the limiting factor comes from requiring a multimillion person democracy. That means communicating over long distances, doing that with caveman technology would take so much work, and seems ultimately kind of pointless. The cave 500 miles away is never going to affect you in any way, why vote on a federal government to rule it? I think the answer is that it becomes possible whenever technology enables enough trade and population density that it's both practical and desirable to put millions of people under one rule, maybe around 1000 BC (plus or minus a thousand, my history knowledge isn't that good).
Your line of thinking about the question is close to mine. Why couldn't we simply envision the 13 American colonies in say, the 1790s (post constitution) but with universal suffrage? The technology to communicate, including holding national elections, was available. They didn't discuss the same topics, because a lot of the answers were going to be too local for a national congress to worry about. But they definitely discussed topics, made decisions, and promulgated those decisions to the people of all colonies.
Tibor, is there a reason you wouldn't consider the beginning of America, if it had universal suffrage and not slavery, as fitting your description? If so, then it seems trivial to assume that such a society were *possible* even if that one has two glaring inconsistencies with your goal. If you agree that the general structure and technology level both fit your needs, then we could likely find a 17-18th century society that does meet your requirements, but even if not it's easy to posit that such a society *could* do so. Otherwise I think we'd need some explanation of why it would be functionally impossible for such a society to exist without slavery and/or limited voting by sex.
Sure, a somewhat counterfactual early America might fit the bill.
"Otherwise I think we'd need some explanation of why it would be functionally impossible for such a society to exist without slavery and/or limited voting by sex."
This is exactly what I am wondering about. The fact that we don't see such societies prior to modernity might suggest that there are some technological obstacles that make them very unlikely. High birth mortality might be one candidate (but maybe it isn't). If there haven't been any such obstacles since (say) the bronze age, why have such societies only started to appear in the last 150-200 years (and in significant numbers)?
Early America is a bit of a special case though - you have a society expanding into an almost virgin continent, with no serious military threats on said continent (neither peer type nor ‘steppe nomad type’) and nowhere near the carrying capacity at the given level of technology.
Also, where exactly is the cutoff for modern technology? I would say early America did have early modern technology.
As Civilis mentioned there were ongoing military threats from natives. There were also European forces who could potentially attack at any time. The colonies had very recently won independence in such a war, and then fought again in 1812.
The British colonies in North America spent about a hundred years developing and fine-tuning the machinery of democratic self-government in an era when the British Army would stomp down hard on any military threat and the British Government would otherwise mostly ignore them. At the end of that time, they were able to tack on a top-level administration and military that could stand off stone-age hunter-gatherers and even win very limited wars with European powers mostly distracted by European wars.
But that first hundred years was critical. Democracy with training wheels.
Good points, thank you.
None of these were existential threats, but I take Civilis's point that they were still threats to `modern values.'
Even if the plains Native American-type nomads aren't a military threat in the same way that the Mongols were, they're still a military threat to your values.
If the nomadic tribes raid each other, they're almost certainly going to also raid your frontier settlements. This stresses modern values in several ways.
First, you need a permanent military force on the frontier to defend your settlements from larger groups of nomads, which requires a standing army and all the central authority that that requires. This is more of a burden the farther back you go technology wise.
Second, your frontier settlers are going to develop a strong distrust of the nomadic tribespeople, which pushes back against modern multiculturalism.
Third, the war along the frontier will almost certainly be very messy. Among the modern values are the rules of war. It's a lot harder to deal with prisoners if you barely have enough food for your own population. Further, it's highly likely that the prisoners do not share your values, which can cause lots of different issues depending on how your values differ; see Imperial Japan for an example without the tech difference. (Crime and punishment is an area where I think it will be hard to hold modern values at a lower economic tech level).
Fair point.
Well, that is also an answer. I.e. the physical technology is not really necessary. Also, one rule is not strictly necessary for my definition to work. It is more "one society" in which the members of that society can exist kind of freely and easily go from one place to another. So a large enough tribal confederation where you can just switch between the tribes counts, even though there is no single "government". A society of multiple tribes where the other tribes are more likely to kill you if you lose the protection of your tribe (e.g. you did something the rest of the tribe did not like and they expel and disavow you) does not count. The society can be extremely decentralized, but it should still be one society in a meaningful way.
I tend to think that in principle such a society could exist even without technology. The question is why it hasn't - or, if it did exist, why it was replaced with more authoritarian rule.
Authoritarian rule won out because most of the successful military technological paradigms required having a strong authoritarian government, either centralized or feudal.
I would guess that the biggest single historical technological military advantage is the horse. Horses are a massive military advantage in that they can provide both power and mobility, whether this takes the form of a charioteer, a cataphract, a dehgan, a horse archer, a knight, or a dragoon. Horses are also expensive to maintain. If you want your military to have these expensive advantages, you need some system to maintain them, and you can't conscript them only in times of need like you can foot soldiers.
A fair number of societies came up with the same solution to having ready horses when you have a war: in return for grants of authority, you expect that the people you grant authority to will show up when a war happens with horses, good gear (weapons and armor), and some peasants (with cheap weapons) to bulk up your ranks. And this pattern works effectively just about anywhere, from China to Iran to France to the steppes.
It's almost worth quoting Wikipedia's article on Horses in Warfare in full, especially the part on the Americas (https://en.wikipedia.org/wiki/Horses_in_warfare#The_Americas), but to summarize: America didn't have horses until the Europeans showed up and used them to militarily overwhelm the natives, especially in the open; natives that acquired and used horses managed to put up much more resistance.
I tend to agree, but in terms of the European conquest of the Americas, it was such a one-sided affair that even without horses the Europeans would have crushed the natives. They had Old World diseases on their side, steel vs basically stone age technology (albeit arguably maxed out to the limit), a military organization which allowed them to wage war in a way very different from the way the native Americans waged war, and gunpowder (though that also probably wasn't really necessary, and 15th-16th century gunpowder weapons weren't all that decisive, especially against mostly unarmoured opponents).
I mean, the Romans would have crushed the Native Americans pretty much just as easily as the Spanish; the military tech difference was not in hundreds of years but in thousands. And disease killing something like 75% of the native population and destabilizing their societies also helped A LOT.
It's not that the plains Indians could have won; it's just an illustration of how one particular military advantage (and not a particularly advanced one) works so well in most situations. The fact that that particular advantage is also tied to several different authoritarian governance systems independently arising in different parts of the world makes the point even stronger.
To get back to your original point and my initial response to you: I think it's possible for a society with a lower technology level to last with something close to modern values, it's just not very likely in most real-world circumstances, especially when you factor in circumstances outside your control. The biggest circumstance outside your control is the military situation, including what "technology" (for a primitive use of the term) underlies the current military paradigm you live in. Many military paradigms incentivize authoritarian social systems, especially the further back you go and the tighter the economic circumstances are.
I think that sounds like a reasonably rational theory as to why history looks the way it does.
Yeah, that sounds about right to me as well. Basically, a society with modern-ish values can give you an economic advantage (from a certain tech level onward...because slave labour probably is more efficient before a certain point, especially in certain labour-intensive industries).
From a certain level of technology onward, you actually get a large military advantage from an economic advantage (industrial revolution is what allowed Europe to basically conquer the world, because industrial production gives you a huge military advantage). But below that tech level you don't get that advantage and authoritarian rule actually tends to give you an edge militarily (that or a steppe nomad structure, but that is limited geographically and also very far from modern values, perhaps even more so).
I think you underestimate how much modern society relies on the technologically enabled surplus whereby a tiny fraction of the population can produce enough food for all, most children survive to adulthood, etc. Premodern life was mostly nasty, brutish and short. Not a good match for modern values.
This is one of the reasons I suspect there might be some minimum tech requirements for modern values. Another might be medicine and its impact on maternal and child mortality.
World population in 1500 was 5% the size of what it is today. If Johan's aliens showed up and irreversibly zapped Earth back to 1500 technology (say), (including no tractors, no fertilizer, no pesticides, and no modern high yield crops), probably 95% of the population would die. And that's assuming we can still produce as much food as they did in 1500 with 1500 technology. I'm sure there are skills involved in farming without any modern technology, which we have thoroughly lost, so it might be worse. Maybe even 99% of the population dies. What kind of civilization emerges after that apocalypse...who knows.
No effective birth control is going to be a big perturbation and is surely going to affect the roles of the sexes. Also, if `no modern technology' means we are back at a population that is 99% subsistence farmers, I'd guess we're much more likely to end up with, at a minimum, some kind of feudal structure than universal adult suffrage.
If we retain our knowledge of stuff like the germ theory of disease, we still get a pretty enormous win over the previous instance of 1500 technology. If they vanish the capital but leave the knowledge in books, we'll be back to 21st century technology in a couple centuries. (The first 20 years or so will be really bad and lots of people will die, but if there's still civilization with books and some memory of the old world, they'll know what they're working toward. I'm pretty sure a good blacksmith can build a decent steam engine if he's shown how, for example. Some clever group of people will build steamships and cannons and end up ruling a whole continent.)
If they change the laws of physics so guns and engines don't work and electricity won't flow in wires (The scenario in the Change books), we'll still have all the other relevant knowledge. Antibiotics and the germ theory of disease, hydraulics and statistical quality control, etc.
Nit: Copper IUDs can I believe be made and used effectively with 16th century technology, if you know that they are a thing. They aren't an ideal birth control solution, particularly in isolation, but they're probably good enough to significantly perturb society - or in the hypothetical techno-apocalypse, significantly reduce the perturbation.
Sure. None of the people who only know how to write persuasively, interpret complex Supreme Court decisions, or write Javascript are going to care that they don't have a vote. They're going to be only too glad to do exactly what they're told by the local capo good at logistical thinking, who can successfully organize the useful people who know how to make soap, assist a cow with a difficult birth, shear a sheep, or figure out in which tree the bees keep their honey.
Indeed they won’t care, because they will most likely be among the 99% of the population that died of starvation in the aftermath of the disappearance of modern technology. A premodern society has no use for JavaScript or the ability to interpret complex Supreme Court decisions.
Oh I dunno. If they're young and trainable, you always need people to dig holes, move heavy piles of stuff from here to there, and take the midnight to 4am watch. Old guys like me need to watch out, though, lest it be ice floe time. I should learn beekeeping in my retirement. I already know how to make soap, fortunately, but I know squat about farm animals.
>No effective birth control is going to be a big perturbation and is surely going to affect the roles of the sexes.
Possibly less than we think: if modern birth control and medical abortion became unavailable, the current social norms would not be grossly incompatible with other ways of getting rid of unwanted babies. (Infant exposure was a thing).
Lack of antivirals to treat HIV and antibiotics for other STDs would be a bigger disruption to sexual relations.
If we go back to preindustrial infant mortality rates, each woman has to bear several children just to keep the population stable. Adam Smith estimated that the average poor woman (that is, a typical woman) needed ten or twelve live births to produce two adult offspring.
Good catch, I totally forgot how antibiotics and modern food supply have changed the infant mortality equation.
Isn't it mostly sanitation and nutrition? Do the aliens make us all forget what germs are?
By no modern technology, what exactly do you mean? Sure, `no cars, no computers, no radios' - but is there effective contraception? Understanding of the germ theory of disease?
Do you count hunter-gatherer societies? (Is the `several million' threshold intended to exclude them? How do you draw the borders of `your' society). Is the population living close to the carrying capacity of the land?
Basically, my thoughts were along the lines of: was it possible, from the bronze age onwards, to have a society which resembles ours in most respects but which has anywhere between bronze age and very early modern technology? And if not, what are the sorts of technologies that prevent this from happening? My basic question is: could there have been a society way back in the distant past which basically looked very familiar to the late 19th to early 21st century western way of thinking and living? Would it be possible for such a society to roughly keep within that societal framework all the way until the actual 21st century? If not, why not (my assumption is that it has to be due to technology)? Now, there has been quite a bit of change since the 1880s or so, but the fundamentals have mostly remained the same since then - i.e. some variation is allowed, but much less than what we see in the actual history.
I am less interested in prehistory because the social structure of at most a few hundred people is a much easier problem than structuring a society of millions - and stone age tribes rarely got bigger than that. Now, the people do not have to be part of a single state; they can be very decentralized but still function as a single society. However, for it to count as a single society, it has to be relatively easy to move from one part to another. A tribal confederation where each member can simply leave and join another tribe (which I don't believe was always or even often an option in history) might count as such a society.
Also, hundreds of thousands of people should probably suffice to count.
"could there have been a society way back in the distant past which basically looked very familiar to the late 19th to early 21st century western thinking/way of life? "
No. First off, if you go very far back at all, a huge portion of the population becomes manual laborers (farmers).
I think Civilis's point about military pressures is a key part of the answer (no). Other parts are maternal and infant mortality, lack of contraception, and the Malthusian trap, which technology liberated us from.
That seems like a good summary of the thread. Some modern values are hard to aspire to without the economic surplus of technology.
I think that contraception is not really that big a deal, but the lack of modern medicine (and the associated death rates during childbirth and among children) is. A combination of interrupted intercourse and keeping track of the calendar can reduce the chance of pregnancy significantly. Probably not to zero pregnancies (assuming both partners are fertile, of course), but maybe to 2-3, definitely not 8 or 10. If a woman only had sex, say, a week after ovulation, the chance of pregnancy is close to zero (unless you miscalculate the date of the ovulation).
So contraceptives are probably a necessary requirement for hippie free love, and there would generally be less casual sex without them, but I don't really think society would change that much if they stopped working tomorrow. But perhaps they allowed a culture shift which would be mostly retained without them today, but which would not have happened without them in the first place.
I forgot to answer my own bonus question. It will probably be very off and wildly Eurocentric but I think it could be something like:
0. prehistory - Ook's tribe. They best tribe!
1. bronze age - Phoenicia? A very uninformed speculation, basing it mostly on their being mercantile city states. I would definitely expect more freedom there than, say, in Assyria under Ashurbanipal (unless conquered by Ashurbanipal).
2. classical antiquity - the Roman republic? They seemed to do better on suffrage than the Greeks, with fewer slaves and more rights for women (the Greeks were close to fundamentalist Islam on women's rights). But they only get the 1st place since the bar is so low, as far as I can tell.
3. late antiquity - I don't really know...While the early Roman empire was probably still better in this regard than most other places at the time, Rome kind of turned almost totalitarian with and after Diocletian. At the same time it still controlled most of the known world. The Sassanids weren't exactly great either. I don't know much about India or northern Europe of the time but probably some places there must have been more free than either of the two empires of the time (China probably wasn't).
4. early to high medieval - Iceland? But of course, we are talking about tens of thousands of people at most.
5. late medieval to ...well ... now - Switzerland? Or the US (though not before 1865).
Had an interesting blog-relevant discussion with ChatGPT; is there some way to engineer my original prompt to make ChatGPT connect the pieces of information it has?
Me: Does the author of SSC write any other public blogs?
ChatGPT: I am not able to provide information about the activities or current projects of the author of Slate Star Codex (SSC) as my knowledge is limited to publicly available information and my browsing function is currently disabled. It is possible that the author of SSC may be writing under a different pseudonym or platform, but I do not have any information on this.
Me: Do you know what Astral Codex Ten is?
ChatGPT: Astral Codex Ten (ACT) is a blog that was written by the same person who wrote Slate Star Codex (SSC). The blog covers similar topics as SSC, including psychology, neuroscience, politics, and philosophy, and is known for its in-depth and analytical approach to issues. ACT was created after the discontinuation of SSC and is currently active. Like SSC, ACT has gained a significant following and is widely read.
--
My other takeaway from playing around with the tool is that it's extraordinarily bad at giving summaries of plots of existing works of fiction, as it frequently mixes up major details and inserts false bits that stereotypically occur in the work's genre. Obviously this is the most intelligent AI I've ever chatted with, but compared to discussing with a normal human it feels frustratingly dumb.
I had a similar problem on like day 2 of ChatGPT's release, where I was asking it questions about Fringe Benefits Tax. If I was discussing FBT in Australia and then tried to ask it questions about FBT in New Zealand, it would act like FBT didn't exist in NZ, and similarly if I opened the conversation talking about FBT in New Zealand it couldn't transition to talking about FBT in Australia. I used the feedback mechanism embedded in the website, and when I tried to replicate it again just now, it no longer has that problem. So I guess the engineering solution is: provide feedback, then try again later?
The former is clearly one of the failsafes kicking in to block you, not the main neural net failing to connect its information. It's got the same phrasing as all the other "you tripped a failsafe" answers.
Question for the language mavens.
I’m just finishing “Don’t Sleep, There Are Snakes” by Daniel Everett and am curious as to how or if his dust-up with Noam Chomsky played out. Is language inherent or an artifact?
I've heard it alleged that unwritten languages never have recursion as a surface feature.
My impression of current consensus (which is a few years old now) is that most people think Chomsky and Pinker overstated the extent to which the "language instinct" drives extremely specific behaviors, has immutable, domain-specific "modules" for particular tasks, and has particular grammatical features "hard coded." However, I think people still generally agree that humans do HAVE a language instinct, and that language isn't JUST an artifact or emergent feature of the rest of our cognition.
Would be interested to hear from people who have paid attention to this debate more recently.
This was my understanding from 15-20 years ago. Pinker/Chomsky had won, but not as convincingly as they would claim.
That is a good summary of the consensus position, but as a linguist, I am also of the belief that the present position is the result of a general retreat and loss of ground. It is a position that will likely keep moving to the empiricist end as time goes on.
I’m not a linguist. I completed an undergrad minor in Linguistics with Russian as my foreign language when I got my CSci degree so I have a bit more background knowledge than someone who plays a linguist on television.
Kind of a language enthusiast though. Can you recommend any blogs or journals I can follow to keep up with things?
There's the well-known Language Log, which you may have come across already if you're interested in language and linguistics. To the right of the posts is a blogroll, click to unfurl. Somewhere in that long list, there might be blogs which offer updates to the Chomsky-Everett debate. https://languagelog.ldc.upenn.edu/nll/
I’ll dig around in that blog roll. Thanks.
Whenever I see "XBB" (the new covid-19 variant), the word my brain immediately jumps to is "Xibalba". Does anyone have other words that come to mind when they see "XBB"?
Tangentially, according to wikipedia which is always accurate, it's pronounced more like "sheeble-ba" (if you have an American accent), which is not only cooler to my ears, but also fits the rhyme scheme of both Blondie's "Rapture" and Titus Andromedon's "Peeno Noir". And both of those songs are now mashing up in my head to create a new earworm. Enjoy!
https://en.wikipedia.org/wiki/Xibalba
https://www.youtube.com/watch?v=pHCdS7O248g
https://www.youtube.com/watch?v=A6yttOfIvOw
No, nothing. If I had to pick one: "bee-bee", like from a BB gun.
Makes me think of 2B from Nier Automata. This new variant is clearly a logic virus.
Actually, the ugly and unpronounceable 'XBB' just gives me an awful feeling that the modern world is so full of teensy mutually contingent details, so pointlessly intricate, that we have run out of names to give to things and have to use randomly generated alphabet sequences, sort of like the passwords my computer makes up for me now that it's no longer safe for me to use stuff like the names of beloved pets from long ago followed by my college graduation year. Here in the fucking bleak fucking complicated fucking more-stuff-than-meaning modern world we use alphanumerics. The units of life: unpronounceable jumbo randos. Gaaah.
Well, to be fair, here's a nice visual representation of the current family tree of SARS-CoV-2:
https://gisaid.org/phylodynamics/global/nextstrain/
It would probably take a platoon of English graduate students working full time to assign creative, pronounceable, non-trademarked and not-offensive-in-any-translation names to the several thousand dots on the far right.
And then people would end up confused, the way they ended up confused when everyone got to assign a nice meaningful name to new chemical compounds (like "barbituric acid" for my good friend Barbara), and would insist on systematic names, and then we would get "1,3-diazinane-2,4,6-trione" instead, which kind of brings us full circle in terms of euphony, alas.
I think the way the covid virus evolves just ended up being rather intricate, and the naming system behind "XBB" is actually a rather decent attempt at labeling and tracking the numerous leaves in covid's family tree, so to speak. It does kind of reflect how bad we are at predicting how many covid variants we will get in the future and how severe they will be.
But, uh, yeah, it would be nice if important things had catchier names. Like someone mentioned further down, hurricanes get human names, and this works out pretty well - we reliably get a few dozen hurricanes each year and the formation of a hurricane isn't to be blamed on any one particular country. Alphanumeric naming conventions should be left for specialists and computers.
Having read up on astronomy for a while, I think I have become desensitized to scientific things of interest getting labeled with long strings of letters and numbers.
WkyWUxkha2qUrc!
XBBZDSGTRHMDSXZ,FML
Is every single word in the English language offensive now? I didn't get the memo from the wokeists! What's their identity-politics-based objection to "Kraken"?
And yes, I am being just the tiniest bit sarcastic, why do you ask? Let's just nickname stuff reasonably neutrally and not worry so much. I don't know; that may be easy for me to say, given I'm not on social media at all, but... I just really feel like saying, tonight, if social media and the pointlessly intricate life of the digital world are depressing (and yeah, I see Eremoiaios' point, they kinda are)--there are still other things in life. All the old things still exist despite our ignoring them. Digging dirt and watching seeds germinate is a great cure for that feeling, or better yet, tanning buckskin. It's laborious and satisfying and involves no pointlessly intricate details at all, and the utterly weird way the stuff behaves during the stretching phase is great fun.
I expect this digression may seem extremely irrelevant to you, it's just... it's very pleasant being irrelevant sometimes. This is a weird place to recommend it, but I do.
O generation of the thoroughly smug
and thoroughly uncomfortable,
I have seen fishermen picnicking in the sun,
I have seen them with untidy families,
I have seen their smiles full of teeth
and heard ungainly laughter.
And I am happier than you are,
And they were happier than I am;
And the fish swim in the lake
and do not even own clothing.
-Ezra Pound
The lilies of the valley retweet not, nor do they sneer...
"Kraken" is offensive to upjumped monkeys who have merely two arms and no tentacles.
Kraken is a slur used to mock crackheads, you heartless monster.
Maybe we should become crasser, instead of going rando-alphanumeric, and call it "Apeshit."
There is an old (1963) Czechoslovak SF movie called Ikarie XB-1, based on a novel by the Polish author Stanislaw Lem. Now I see it has a dubbed English version under the awful title "Voyage to the End of the Universe".
As a side note, did the powers-that-be decide to stop naming variants after Greek letters because they were up to pi and knew people would have too much fun with "hurr durr pi variant" jokes?
It might have to do with how the virus was evolving. Omicron was really different from Delta, and also didn't descend from Delta (the two have a relatively old common ancestor), and quickly outcompeted Delta. But ever since, we've been just getting various descendants of Omicron, none of which were so different and more infectious as to make the WHO use another Greek letter. Although maybe giving XBB or XBB.1.5 a new Greek letter might be nice in that we wouldn't all have to deal with long sequences of letters and numbers.
I'm willing to take those in the field at their word that omicron subvariants aren't different enough to rate a new Greek letter, and I'm not a huge fan of the informal mythical monster nomenclature I've seen. (Calling the latest variant "the Kraken" is conclusory and not IMHO helpful.) But I think that given that there's going to be popular coverage of subvariants, some neutral hurricane name-style option would be helpful to make it easier to write about them.
"Helpful" is relative.
If you're interested in clickbaiting and panicmongering, "Kraken" is a not-bad choice.
Evidence suggests not really, given how little public attention the name or the subvariant has really evoked. It's gotten about as much or maybe less reporting than the last few dominant subvariants, and has provoked no particular public response beyond the standard exhortations to vaccinate, boost and maybe mask if you feel like it. (Ventilation? What's that?) And vaccine requirements have been scaled back, if anything.
The main increased public response to Covid in recent months in the US at least has been requiring testing for Chinese travelers, which is orthogonal to XBB.1.5 (and kind of pointless for anything else given existing prevalence).
And I guess the feds are sending out four more free tests per household (whoo!), but that seems more to be about the general increase in cases, hospitalizations, and deaths during the early winter's variant soup, rather than being sparked by "the Kraken".
Maybe it's getting clicks somewhere. But those clicks don't seem to be translating to much visible effect that I can see. It's possible I'm not reading the right sources.
I thought they were saving xi for the new strain that will come from China "letting it rip".
Sigh. :^)
That's right, I meant to start pronouncing his name "zai" or "ksee". Thanks for the reminder!
I think they're thinking something along the lines of "if we piss off the CCP, it'll stop letting us into the country *at all* and that would cause issues in our other projects, and/or it'll retaliate against innocent WHO personnel currently in China because they have a history of doing that".
Whether this actually holds up in terms of the maths is questionable, but they wouldn't be the first people to get suckered by China and then succumb to sunk-cost.
I'm guessing the WHO would probably try to avoid offending world leaders less prickly and powerful than China's.
Though I imagine Delta Airlines wishes they'd been so solicitous of corporations in 2021.
Avoiding offending world leaders less prickly than the present CCP is plausibly a better deal, since their demands are less costly to meet.
Do frogs also have hearts on their left side?
I am genuinely curious. Maybe even more generally: where on the evolutionary tree does the "asymmetric heart position" appear? As soon as the heart itself? Why?
By the time of jawless fish the heart was already set up as "blood comes in one side and goes out the other". Not sure it was asymmetrically oriented along the body axis at that point, but I suspect that started soon after?
Thought experiment based on the general Georgist discussion that sometimes occurs here: I'm the mayor of a small village where land is plentiful. I convince a million people to move here, and they each get to buy a plot of land for cheap. We pool our resources and build a city. Our city becomes a nice city and more people want to move in. Land prices go up a lot. Everyone originally involved becomes a millionaire. It's all like a pyramid scheme except the pyramid is real.
Has something like this ever been tried? I assume that the hard part is "convince a million people to move here", but shouldn't we have people or institutions with that kind of pull? Or is the hard part "become a nice city that attracts people"? That doesn't look very hard, most cities seem to be hardly trying.
(Yes, I know my thought experiment is totally unrealistic, I just want to discuss the principle.)
It's been tried already: Oklahoma. There's a good 99% Invisible podcast episode about it, aptly called "The Worst Way to Start a City":
https://99percentinvisible.org/episode/the-worst-way-to-start-a-city/
The problem with your thought experiment is the implicit assumption that a city is a collection of buildings and physical infrastructure. In reality, the value of a city is mostly in its people and its social and institutional capital. This is why a city losing its people will be in inexorable decline despite little physical change (e.g. Detroit in the late 20th century was not suffering from earthquakes destroying its buildings).
The problem is not finding a million people to put up capital to construct a bunch of high-rises. The problem is turning a million people into an attractive community.
Those million people are the attractive community. They are incentivized to be very attractive since they are invested in the city.
If that's all that it took then cities would never die back once having reached a certain size threshold, which is demonstrably not the case.
Ownership of land is concentrated in most cities.
I think the key would be some coherent reason for so many people to choose there, instead of somewhere else. Other commenters mentioned Chicago (railway hub) and Dubai (oil).
But really, every city that has succeeded can be looked at as doing exactly what you are talking about, with different levels of intentionality and different levels of success (including how long it took to grow). If you look at NYC or SF right now, there are people buying and holding land, or developing it, for future purposes. That's really what you're talking about, minus the very short timeframe from your hypothetical (which still works for a number of major cities, including rebuilt ones like Tokyo).
But people in NYC or SF are not coordinating to buy land. The coherent reason to choose this city instead of an already existing place is that the already existing place is expensive.
"Expensive" is always relative. A city may be considered "very expensive" at a particular place in time, then go through a massive upswing and prices increase by some large factor. Does that mean it was or wasn't "expensive" prior to the upswing? All a matter of perspective. I am arguing that a sufficiently large incentive to move to a place overrides the "it's too expensive" reasoning and could be reason for coordination. If significant amounts of oil (or some other valuable natural resource) were found in or around an existing major city, that city may see significant growth in the way you are describing. Similarly, SF became a major tech hub and already high prices were able to skyrocket in a way that could be called coordinated.
An oil find or SF being a tech hub seems rather different from what I'm talking about IMO.
I agree that there's a difference. I don't think the difference is a difference in kind as much as level. I think most cities did what you are describing to some extent, and a few (Dubai and Chicago again) doing almost exactly what you are describing.
My original point, looking at those examples, is that you just need a compelling reason to choose that particular city over alternative options, even knowing that second-tier adopters are going to be more than first-tier, and third-tier will pay more again, etc.
If you're talking about a literal one million people are all first-adopters, I think that's a logistical problem that's far too large for our planning and construction industries to handle.
Chicago in 1830 was a grubby frontier outpost of fewer than a thousand people; in 1837 its leaders optimistically declared it to be a city. They were clear (and loud, and insistent; in fact they famously would not shut up) about the goal of becoming the "next great metropolis", etc. "Come on over and get rich" was absolutely the collective civic concept for decades to come.
In the 1840 census Chicago had 4,500 residents, ten years later it had nearly 30,000 which put it 24th in a nation of 24 million people.
Then starting with the 1860 census the population growth went this way (I'm rounding the totals):
1860: 112,000 (9th in the US)
1870: 300,000 (5th)
(October 1871 the entire central district burns to the ground, is entirely rebuilt by spring 1873)
1880: 503,000 (4th)
1890: 1,100,000 (2nd, stayed there until 1990)
1900: 1,700,000
1910: 2,186,000
1920: 2,702,000
1930: 3,376,000
Lots of people did get rich from Chicago's frantic growth, some of them legally and plenty of them not. People began visiting the place basically to see what was to that point the biggest fastest building of a huge modern city that had ever been even imagined. By 1900 there was a whole published literature in the U.S. and Europe of books and articles (some of them from famous writers), basically marveling at how some log cabins along a middling river running through a swamp had turned into CHICAGO -- complete with mass transit and symphonies and museums and world-class banks and half the nation's resident multi-millionaires and a whole modern developed-nation infrastructure -- in less than a single human lifetime.
(This also had some far-reaching and novel economic impacts way beyond that place, I recommend "Nature's Metropolis" by William Cronon about that part.)
That might be the example to date closest to what you're describing, at least other than ones driven by a particular extractive resource such as nearby oil fields.
Good history.
See also Miami and Spangler.
Chicago was the success story, but I think that sort of ambition was pretty common. Metropolis, Illinois was founded close to the same time with pretty much the same aim (as the name suggests) and the same vision of being a transport hub, and, well, now its main claim to fame outside its county is trading off the fictional city Siegel and Shuster put Superman in a century later.
St. Louis ("Gateway to the West") also vied with Chicago for that role. There are various reasons given that it failed (e.g., eastern businesses being reluctant to put a railroad hub in a slave state, Illinois beating them to a bridge over the Mississippi), but my sense is that it's not a completely implausible alternate history.
I mean, Chicago didn't have oil fields, but they had a great location for building a railroad hub. From a brief look at the history it's not clear whether they envisioned from the beginning that they should become the great freight gateway to the West; the first railroad charter in 1836 was just for a line to the lead mines at Galena (although at over 150 miles that was still an ambitious scale, for the time). But with railworkers and access to shipping already in place they had all the ingredients to take advantage of the massive demand created by westward expansion.
So, anyway ... while I expect the founders' ambition and optimism helped, the explosion was still very contingent on other circumstances. The city didn't rise to great wealth as some self-contained economic engine.
Sure, that's why I said it is probably the _closest_ real-life example. I don't think the hypothetical exactly as stated is anything but a thought experiment. Cities do not grow to serious sizes in isolation, never have.
And as an aside, yes the 1830s Chicago founders absolutely did envision from the beginning that it become the great freight gateway to the West. So did groups of men who founded Milwaukee, and laid out literally a half-dozen different villages in Northwest Indiana most of which died out, and some other places in that part of the world in that general timeframe. All of those groups initially thought in terms of water transport, specifically in Chicago's case building a new thing called the Illinois & Michigan Canal.
That canal did get finished and its construction did jump-start the new city and it did operate for some decades....but in the big picture it was by the time of its 1848 opening already obsolete. That first railroad company chartered in 1836 didn't run its first train until 1848 and didn't reach beyond the borders of Cook County until the early 1850s...but the new way of things was clear to everybody by then. Chicago pivoted to the new transport technology and by the time of the Civil War was the greatest rail center in the country if not the world.
The story of the colonization of Texas by Moses and Stephen F. Austin may have some useful parallels: "In order to settle Texas in the 1820s, the Mexican government allowed speculators, called empresarios, to acquire large tracts of land if they promised to bring in settlers to populate the region and make it profitable." https://www.gilderlehrman.org/history-resources/spotlight-primary-source/stephen-austins-contract-bring-settlers-texas-1825
Having grown up in pre-air-conditioning Texas, I have long thought that a key part of their strategy had to be to get them to move in the six months of fall to spring, so that, by the time summer hit, it was too late to get back to Tennessee. Anyone trying to come here in July would have turned around. Or maybe they were all crazy, like William Barret "we outrange the Mexicans so let's get ourselves pinned down in the Alamo instead of using guerilla warfare" Travis.
As a fellow Texan who has only existed in the post-freon era, I can only say, bless your heart.
“Bless your heart?” Them’s fightin’ words!
lol clearly from context I meant it in the "genuine expression of pity" sense and not the "sardonic criticism" sense
I'm just old enough to have spent weeks every summer as a child visiting grandparents in Kansas and Oklahoma before AC was completely ubiquitous and effective. Middle-class homes had room AC units which struggled to keep up with the local August climate, car ACs the same and not every car had it, etc. In Oklahoma during the summer absolutely nobody went outside by choice during the day. And that was the 1970s climate not today's!
This is basically Dubai, right? But they have oil. Maybe the problem with this scheme is that a random group of immigrants from all over the world lacks the social cohesion necessary to build a successful economy capable of producing goods which can be traded with the outside world and thus pay for necessary imports. If there is an oil deposit nearby, problem solved.
I am sure other examples are possible.
If this were to be done, it would likely need to be done by a group that already has high social cohesion. Mormons, maybe? But the Mormon church already has Salt Lake City, and I guess they can just continue to grow it instead.
Have we done rat studies on what causes obesity?
From Slime Mold Time Mold: https://slimemoldtimemold.com/2021/07/07/a-chemical-hunger-part-i-mysteries/
> The graduate student was inspired to try putting the rats on a diet of “palatable supermarket food”; not only Froot Loops, but foods like Doritos, pork rinds, and wedding cake. Today, researchers call these “cafeteria diets”.
> When you give a rat a high-fat diet, it eats the right amount and then stops eating, and maintains a healthy weight. But when you give a rat the cafeteria diet, it just keeps eating, and quickly becomes overweight.
It seems like we could learn a lot from this. Seems like we should be able to binary search: try this with 50 foods and track which ones cause weight gain. Then look at the ingredient lists and see which ingredient is causing the weight gain. Then... we'd just know what causes obesity?
Even if the search doesn't uncover an exact culprit, I think a lot of people would get value from just having an index that maps from common foods to How Fat Rats Get if you give it to them.
Do we have this already? If not, why not?
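To make the proposed screen concrete, here is a minimal sketch in Python of just the analysis step, assuming you already had per-food weight-gain numbers from such a rat study. Everything here - the foods, the ingredient sets, and the gain figures - is invented purely for illustration:

```python
# Hypothetical sketch of the proposed screen: feed rats one "cafeteria" food
# each, record weight gain, then rank ingredients by the average gain of the
# foods that contain them. All numbers and ingredient lists are invented.
from collections import defaultdict

# food -> (set of ingredients, mean weight gain in grams over the trial)
trials = {
    "froot_loops":   ({"sugar", "corn", "dye"}, 90),
    "doritos":       ({"corn", "oil", "salt"}, 70),
    "pork_rinds":    ({"pork", "oil", "salt"}, 65),
    "plain_chow":    ({"corn", "soy"}, 10),
    "high_fat_chow": ({"soy", "oil"}, 15),
}

gains_by_ingredient = defaultdict(list)
for ingredients, gain in trials.values():
    for ing in ingredients:
        gains_by_ingredient[ing].append(gain)

# Sort ingredients by mean weight gain of the foods containing them, worst first.
ranked = sorted(gains_by_ingredient.items(),
                key=lambda kv: -sum(kv[1]) / len(kv[1]))
for ing, gains in ranked:
    print(f"{ing}: mean gain {sum(gains) / len(gains):.0f} g "
          f"across {len(gains)} foods")
```

Even the toy version surfaces the obvious catch: ingredients co-occur across foods (oil shows up in both fattening and non-fattening foods here), so a naive ranking can't cleanly separate them, and a contaminant wouldn't appear on any ingredient list at all.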
Consider the origin of the word Rubenesque. How about the Venus of Hohle Fels (40,000 BCE)? I am not sure that she was pregnant - I think she was just fat.
From Shakespeare's ages of man, the 5th age:
“the justice,
In fair round belly, with good capon lin’d,
With eyes severe, and beard of formal cut,
Full of wise saws, and modern instances”
Individual obesity is not a modern thing. Gluttony and sloth have been around for a very long time.
Average population obesity is the modern thing. Obesity comes from easy availability of food and lack of burning calories (physical work). Modernity has created that potential circumstance for more and more people.
We live in an age of plenty and increasing mechanization which has reduced the necessary toil. It is not really the "kinds" of foods, it is the easy availability of calories and the lack of required strenuous activity. What you probably need is a sedentary rat.
Perhaps it is not the ingredients per se but the calories per volume. I'd think that a rat can only eat so much volume.
SMTM discusses many of the issues around this (look at their articles on scurvy, for example), but I think they would agree that more research in this area is desperately needed. Part of the issue is that if the effect is from a contaminant, or several contaminants, it would likely be present at different levels in, for example, different lettuces grown in different places at different times, and might get accidentally added during processing of some foods, so just looking at "lettuce", "wheat flour", "eggs", etc., would likely miss it.
SMTM is strangely respected in this community, but I have found the articles I have read from them (admittedly relatively few) to be very unimpressive. They misinterpret or misrepresent the research they present, and offer no real support for their contamination hypothesis, while bizarrely ignoring that obesity correlates very well with increased calorie consumption, which in turn is due to the availability of cheaper and tastier food.
What is SMTM?
“Slime Mold Time Mold”
Is it strangely respected? I mostly see people responding by asserting, as if they are killer facts that SMTM was unaware of or omits, ideas and arguments that SMTM discusses at some length. I think you really do have to go on a deep dive of at least reading the many “A Chemical Hunger” posts (or read a good chunk of their posts in the last year, for example) to see the extent to which a) there looks like there’s something there to be considered and b) they are fully committed to approaching this stuff as actual science with all the uncertainty that involves.
In the articles I've read, they misinterpret or purposely distort (I have no way of distinguishing between the two, of course) the research they present, and not some of the details but some of the central points of the articles. I don't think they can be trusted to present scientific results.
My impression is they get way too much respect, yes. They've been funded with a moderately-large-by-EA-standards grant and were initially feted as important by Yudkowsky, to offer a couple of examples.
This despite many serious problems in their work, including many ways they misrepresent the research they discuss. Natalia Coelho wrote a long piece pointing this out, and they've point blank refused to engage: https://www.lesswrong.com/posts/7iAABhWpcGeP5e6SB/it-s-probably-not-lithium
Totally agreed. The criticisms in the piece from NC seem very valid to me.
They agree that increased calorie consumption will cause weight gain. They certainly don't ignore it, they have a whole post about it.
https://slimemoldtimemold.com/2021/07/15/a-chemical-hunger-interlude-a-cico-killer-quest-ce-que-cest/
They disagree that we have good data on whether people today consume more calories than people did before 1960. I personally take issue with your claim that increased calorie consumption "is due to the availability of cheaper and tastier food". That's certainly a decent hypothesis, but far from proven. Assuming people do eat more calories than they used to: how do you know it's due to food price and taste? How do you know that food is tastier today than it was in 1940? How would you even demonstrate the truth of such a claim?
> How do you know that food is tastier today than it was in 1940? How would you even demonstrate the truth of such a claim?
I mean, the 1940s weren't that long ago. We have photographs of what people ate, we have cafeteria menus and shopping lists, we have recipe books. We know what people ate (in Western countries) back then, and it tended to be a bit plainer.
Can you give me a source for that?
In my understanding people in 1940 ate cake, donuts, pies, cookies, white bread, pancakes, waffles, syrup, biscuits, jam, cheese, sausages, bacon, ham, beef, chicken, and mashed potatoes heaped with butter and gravy. Not to mention that a lot of it was fried in lard.
Now that's just my understanding: I don't have any data to back that up, just anecdotes and old cookbooks. Do you have a better source to point me at that will indicate differently?
Maybe some upper-to-middle-class people in the US (one of the richest countries in the world at the time) ate all that. There were chubby people in the 40s and before (a mental picture including Taft and Churchill emerges).
Here in Europe, my grandmother certainly did not eat donuts and cookies and jam every day in the 40s or any other decade, even if we ignore WW2, when large parts of the population were coincidentally malnourished. Consider her eating habits in the 90s-00s, assuming they were unchanged from her childhood in the 1930s (probably not, because availability had improved and prices had plummeted). Mashed potatoes with butter and gravy ... certainly, for dinner. Meat could be included, but not every day. A glass of milk (they had a cow), but I am less certain they would get cheese. For breakfast, oatmeal and bread and/or eggs. For lunch, some kind of soup. For supper, oatmeal and bread. Such a diet gets quite boring quite fast; you won't eat large portions of it for fun.
I like the question you are asking. I will note that in the 1930s people ate fruit for dessert, considered it a treat, and couldn't always afford it. I don't think these people could, generally speaking, afford chocolate whip cream top hats every day. Also, the innovation in baked goods over the last 80 years has been quite extreme. There is an unbelievable number of chemicals in baked goods today, again generally speaking.
Heck, I went ahead and Googled 1940s diner menus and found this one from 1949. It features a ham and relish sandwich on white bread, chicken salad sandwich on white bread, beef bologna and swiss on rye with russian dressing, hamburger steak on a bun, all with a side of french fried potatoes. For dessert we have chocolate whip cream top hats, 5 kinds of pie, cheese cake, and chocolate layer cake (as well as "Assorted danish pastry"). Looks plenty tasty to me!
http://menus.nypl.org/menus/29921
I've also been unimpressed.
But surely if the effect appears reliably in the "cafeteria diet", we could do some research to isolate which part of that diet causes the effect?
I mean, that's exactly what SMTM suggests? And they're surprised that people seem so incurious about it?
People are incurious because we already know the answer. As mentioned by Julian below, what is difficult is to find how we could eat reasonably when cheap, abundant, tasty food is widely available, not why eating all this food makes us fat.
I suspect and/or vaguely-remember part of the issue (which may turn up as a confounder in the study as described if naively implemented) is that *variety* itself causes larger portion sizes, because becoming sated with one type of food doesn't necessarily translate to becoming sated with food in general.
Though I don't imagine a diet of "chips which taste good but *never* make you feel full" would help on that front either.
> *variety* itself causes larger portion sizes, because becoming sated with one type of food doesn't necessarily translate to becoming sated with food in general.
Right. I'm sure many of us are familiar with the idea of "there's always room for dessert" or a metaphorical "second stomach for dessert".
I've heard that fat and sugar in the same food are the problem.
It seems that high-fat food like crisps or high-sugar food like sweetened drinks can be problematic, but the combination of the two is certainly worse!
There is no obesity mystery: nature has selected us to crave the high-calorie/sugar/fat/salt foods that were both scarce and very valuable in our previous natural environments. Now that these foods have become very abundant, we still like them very much and overeat.
But by that reasoning, the very low rate of obesity in countries such as Japan must be a mystery, no?
Yeah, but (in the industrialized world) "high calorie/sugar/fat/salt food" has been abundant for a century or so, yet the growth of obesity has happened more nearly on a timescale of the last half century or so. There has to be more to the story.
I'm not sure that's true for everybody. Maybe for the upper-middle to upper First World class, sure. But even in my own lower middle-class youth, a mere 50 years ago, we never got store-bought sweets except on very special occasions. When we had a dessert or snack at all, it's because my mother baked it herself. The only time we ever drank soda was on the very rare occasion that we went out to eat.
Ok, I should clarify: I'm not so much talking about store-bought sweets but rather about flour, sugar, butter, and salt. Yes, they have to be combined and baked to e.g. make shortbread cookies, but the ingredients themselves were (in industrialized nations) rather abundant by historical standards by 1922.
Sure, by historical standards. And that's why the people of 1922 were taller and stronger than the people of 1822. I'm just observing that within my lifetime the broad trend seems to have continued: a falling relative real price of food, particularly of sweet and refined ingredients, and of commercially-made sweets that don't require anyone to invest two hours of personal labor (in addition to the cost of the ingredients) to make them. And since we topped out on the height to be gained via better nutrition circa 1970 [1], maybe now we just gain weight, because of the still greater cornucopia of tasty sweet food ready to our hand almost instantly. Heck, these days we don't even have to get in the car; DoorDash or whatever will deliver it within minutes of our phoning it in.
I'm not ruling out other and multiple explanations, but I also think we can't easily dismiss the simple hypothesis of easier, faster, and cheaper access to tons of sweet foods, because I don't think that trend stopped 150 years ago, even in the First World.
----------------------------
[1] e.g. https://ourworldindata.org/grapher/average-height-by-year-of-birth
There's a colossal difference between "have raw ingredients to make similar food" and "have a ready-to-eat product with high amounts of those ingredients, pre packaged in plastic and non-perishable, and probably cheaper than whatever lower-calorie options you were looking at"
A difference, yes. Colossal? Baking shortbread is simple and quick.
True, people in 1922 probably could have eaten cake every day if they'd really wanted to. It was social shaming which kept that temptation in check.
At some point the social shame against eating cake every day wore off. People started making small cakes and declaring them to be "muffins" and deciding they were now a breakfast food (muffins have been around for centuries but not in the modern American cake-like form).
Then the muffins got sweeter, and they got bigger. Meanwhile the one little shortbread biscuit that you might have had with tea slowly turned into a chocolate-with-chocolate chips cookie the size of your head. In 2023 you can barely buy ice cream which doesn't have solid chunks of something even sweeter distributed through it. Everything has got sweeter, and sweets have become more socially acceptable.
In Wilkie Collins' "The Woman In White" (1860) the villain is Count Fosco, an Italian nobleman with a taste for sweets. His sweet tooth is portrayed as disgusting, unmanly, creepy -- sweets are for women and children, not for men. But his actual consumption would be nothing remarkable these days.
Maybe. Or maybe there is some sort of contaminant that changed our propensity to overeat. Or maybe our microbiome changed. I'm merely noting that (relatively) cheap flour, butter, and sugar were available early enough that I doubt the evolved attractiveness of calorie-dense foods is the whole story.
"Yeah, but (in the industrialized world) "high calorie/sugar/fat/salt food" has been abundant for a century or so".
My impression (sorry, I'm too lazy to look up data!) is that abundant and cheap junk food is much more recent than that, and that until a few decades ago, eating at fast food restaurants was, for example, too expensive to be very common. The percentage of a family's budget spent on food has declined sharply over the 20th century.
Yep, the complicated part is the psychology of changing people's behavior, not the biology.
Indeed!
Yes this seems like it could be very helpful. Though my guess is it would just track pretty closely "how tasty is X" compared to "how full of calories is X".
People like to talk about "fillingness" or whatever, and that is I suspect a small part of the story.
I have always suspected the main story is simply that a cookie/brownie/cake whatever tastes fucking delicious and is also chock full of calories. Yes you can sort of train yourself into a mental space where you find broccoli delicious. But it is not a fucking cookie and to claim so is silly. And also one of the main ways people make things like broccoli (which I like) delicious is by smothering it in butter/salt/sugar so it is more like a cookie.
> Yes this seems like it could be very helpful. Though my guess is it would just track pretty closely "how tasty is X" compared to "how full of calories is X".
Totally agree, with the caveat that "tasty" and "full of calories" are associated, not by coincidence.
Anecdata: When I ate sugar every day, I craved sugar every day. Once I decided to stop, there was a week or so of cravings, then I no longer felt like eating sugary things.
I think that sugar has characteristics that make it more like an addiction than it is like an actual desire. Once your palate stops being accustomed to overly sweet stuff it really doesn't taste that great.
Same thing for me but on a shorter time scale: I do not really like sugar but I crave it if I start eating it. For example I almost never eat candy but in the rare cases where I eat one I am very tempted to eat one after another until the box is empty.
There was a nice discussion of that in one of the Huberman Lab podcasts a few months ago (at 45 min):
https://hubermanlab.com/controlling-sugar-cravings-and-metabolism-with-science-based-tools/
Oh for sure.
Does anyone know of a meta-analysis or a large study of the causal effect of privatization on firm productivity? Preferably using an experimental/quasi-experimental design or regression discontinuity.
Best I could find:
https://www.aeaweb.org/articles?id=10.1257/jel.47.3.699
Why do children get schizophrenia so rarely?
Is it the case that they "don't get schizophrenia", or just that it's not diagnosed?
I think there are good arguments to be made that it's irresponsible to diagnose children with things when there's so much noise - they could be considered to have symptoms that point towards all kinds of diagnoses... but sometimes kids are just weird, right?
It should be relatively easy to test this hypothesis by looking at young adults with clearly diagnosed schizophrenia and checking when their major symptoms started to appear (as opposed to the time of diagnosis) - I'm not an expert on this, but as far as I know, for the people with various significant symptoms they seem to start only after childhood, not earlier than their teen years.
A caveat that this is not a specific subject that I'm well-read in, but this is the approximate area I'm educated in. Of the ideas bandied about I'm familiar with, the most compelling one is that schizophrenia is often a disorder of abnormal neuronal connectivity that onsets typically during late adolescence because that is when salient brain development is happening that makes it the high risk window. That's just a high level explanation, though. There are more specific hypotheses that try to connect what those anatomical abnormalities might be and how they relate to known genetic predispositions.
Psychosis not caused by neurological disease/deterioration or brain injury is also extremely rare once a person hits early middle age, so I'd reframe your question to ask why psychosis has a developmental risk window during the human lifespan.
My guess: Schizophrenia appeared, at least frequently, with the Modern Age. Children, not being fully socialized/indoctrinated, are less affected. Similar to an explanation of childhood innocence: they lack the knowledge and experience that adults have accumulated.
"Childhood schizophrenia" used to be a common diagnosis. It somewhat notoriously was an original diagnostic landing spot for people we now would label as just having an autism spectrum disorder. I work primarily with people who have cognitive disabilities and mental health challenges. A sizeable % of my older clients have a "childhood schizophrenia" line in their diagnostic history.
It's rare now and generally only comes up in unusual cases of children having a degenerative issue, but this is a relatively new thing in the larger scope of psychiatric labeling. So if diagnostic awareness was driving things, the culture has moved away from it, not toward it.
"used to be a common diagnosis"
That's interesting. It's curious how words change meaning over time and how, uh, 'atypical mental conditions' evolve over time - as do their definitions/labels (as we learn more, and as new generations (new perspectives) diagnose them).
Still, my speculation that the Modern Age caused or exacerbated Schizophrenia (and by association Autism) still stands. This is based on my incomplete reading of 'Madness and Modernism', Sass (1992), and my incomplete understanding of Iain McGilchrist's '08 and '22 publications. : )
Probably because some of the dopaminergic tracts aren’t fully developed until 18-25. Another theory is that there is a “second hit” that has to happen in one’s environment for symptom onset to occur (large amount of stress, shock etc) and so kids just don’t develop symptoms because they have more protective factors on average. Another possible theory is the gradual decrease in neuroplasticity tracking in an inverse proportion to “brain maturation” as the possible second hit. Just some theories I have.
Isn't that what an imaginary friend is?
A psychotic disorder is a more complex suite of symptoms. People rarely present exactly in the textbook ways people read about - except in those rare cases they do and you're shocked someone is so textbook - but if all an adult had was a particularly vivid imaginary friend that typically wouldn't cause them to meet criteria. Heck, it might just mean they're religious. Sensing disembodied personas isn't unto itself what schizophrenia (or psychotic disorders more generally) is.
Yeah, one thought would be that schizo behaviors in younger people are less problematic because they have less power and ability to commit violence, they have one and often two full-time minders to help them, and younger people are generally taken less seriously (for good reasons).
My six year old spent a whole year saying his best friend's various relatives did just about anything you could imagine.
You see a jockey: "Bobby Jones's uncle is a jockey." You see a hotel under construction: "Bobby Jones's grandpa once worked on a hotel construction site." Now, probably once in a while these were lies his friend told him. But most of the time it was just his way of relating to the world and novel stimuli when he didn't have a better response. In a six-year-old, who is going to care?
But if he is doing it at 22 it is probably a sign there is a problem.
Cultural appropriateness is part of making a diagnosis. If someone engages in ritualistic cannibalism of a man-God whose voice they occasionally hear, they're probably not psychotic. Chances are they're just Catholic. Part of this is that if your surrounding culture is Ok with a set of behavioral/cognitive traits, they're going to be less apt to harm your ability to function, which is a necessary part of what it means to have a mental disorder. But the bigger part is there's some underlying etiology to these disorders that can occasionally look like something that happens in a local culture, but is conceptually distinct from it. And that's probably what is going on with a child's imaginary friend as a typical part of childhood development expressed through their local cultural experience vs. the kind of hearing voices/has delusional beliefs scenario you are likening it to.
Dostoevsky and ego:
https://orbistertius.substack.com/p/this-caprice-of-ours
Also, last week I asked a question about consciousness and free will and it generated a lot of discussion but it also seemed like several people were confused. Superb Owl wrote an interesting post on the same topic
https://superbowl.substack.com/p/free-will-willpower-and-randomness?utm_source=%2Finbox&utm_medium=reader2
but it didn’t really line up with my individual thinking on the topic. Anyway this time I’d just like to ask, how many people here think the hard problem of consciousness is actually Hard? Has Scott ever written about it? I think that’s where a lot of the divergence came from.
I have a hard time trying to understand what’s supposed to be “hard” about it. The best explanation I’ve got so far is: it’s like life is everyone playing the same video game on their own device and you can interact with everyone else in the game, but you can never ever see anyone else’s screen. Still not sure why that makes it a hard problem.
Why lump free will and consciousness together? In my view the former is just one big hopeless confusion around not-even-wrong unexamined definitions while the latter is one of the most interesting questions of our time.
I like Eliezer's post Zombies Redacted, which is a reworking of Zombies? Zombies!, to which apparently Scott has written a related post (https://www.lesswrong.com/posts/Fy2b55mLtghd4fQpx/the-zombie-preacher-of-somerset) although it does not really seem to engage with the hard problem. If there are others I'd also like to know of them.
As for your question, I don't think the "hard problem" is hard in Chalmers' sense, for reasons similar to those expounded in Zombies Redacted (i.e. even people's assertions and theorizings about consciousness have a physical manifestation in the brain and should thus have a physical origin).
The hard problem is the problem of reductively explaining qualia ie. of explaining in a detailed way how and why particular qualia are produced by particular physical behaviour. Asserting that it must be physical, somehow, is not solving it in that sense.
It's useful to separate the two debates:
1) Is the problem of consciousness hard in the Chalmerian sense?
2) What is the solution to the problem of consciousness?
I was making an argument for answering 1) in the negative. Your retort that my argument does not answer 2) does not necessarily invalidate it as an answer to 1).
Of course a complete answer to 2) would thereby solve 1), but it's also fair to take a shortcut and try to answer 1) directly. This is what the zombie argument does as well, though I find it unconvincing for the reason I've given (and I'm not aware of any zombie-ists attacking this reply head-on).
If (2) isn't a supporting argument for answering (1), what is? If nothing is, then your answer to (1) is just an opinion.
Like I said in my original post: "even people's assertions and theorizings about consciousness have a physical manifestation in the brain and should thus have a physical origin". I think this is a deep argument and people who claim the hard problem is capital-H Hard don't engage with it to an extent I find satisfying.
This is also basically the argument Eliezer makes in the two posts I mentioned. For an example of someone (David Chalmers in this case) not engaging with it to a point that convinces me, see his comment here (https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies?commentId=5qKe5gQ8HWgfRq9Dw#5qKe5gQ8HWgfRq9Dw) and Eliezer's reply immediately after.
Speaking of non-engagement, Yudkowsky doesn't seem to engage much with Chalmers's attempted correction, and no one engages much with the anonymous user who attempts to clarify Chalmers's views and gets downvoted to -4.
"Like I said in my original post: "even people's assertions and theorizings about consciousness have a physical manifestation in the brain and should thus have a physical origin". I think this is a deep argument and people who claim the hard problem is capital-H Hard don't engage with it to an extent I find satisfying."
What is engaging with it supposed to tell them? It doesn't make the hard problem -- the problem of saying how consciousness is physical -- easy. (In fact, you previously stepped back from the idea that reports-of-consciousness-are-caused-by-consciousness solves the HP... although now you seem to be embracing it again.)
It also doesn't select a unique answer to the mind-body problem: it's compatible with identity theory, mysterianism, interactive dualism, etc. The only thing it rules out is epiphenomenalism.
EY's reply only discusses zombies and epiphenomenalism. That might be an impactful point against two of Chalmers' various claims, since he seems to believe in both... but it still isn't a point against the hardness of the hard problem, the thing we are discussing. It's possible for Chalmers to be wrong about zombies and epiphenomenalism, and right about the HP, since it is ideas that are right and wrong, not people. Or so I think. Do you think the HP vanishes without zombies, or something?
>how many people here think the hard problem of consciousness is actually Hard?
I tend to think that the solution to it is what could be more or less called panpsychism: specifically, the idea that consciousness is simply what it "feels like" for a given piece of matter to exist, and self-awareness (which by this theory would be a distinct subset of consciousness rather than synonymous with it) is what happens when conscious matter becomes interconnected in a complex enough way to become self-reflective.
The reason I tend towards this answer is that consciousness obviously exists, in the sense of subjective, experienced qualia existing (this being really the only thing we can be absolutely sure of, because it is the only thing that is subjectively and directly experienced by us), and yet our knowledge of the external world seems to show that there is very likely nothing beyond matter that is tied to it in such a way as to create consciousness in the way we experience it, so the only solution left is consciousness being an innate quality of matter itself.
This seems counterintuitive to most people, since I think we all have an innate bias towards some sort of mind-body dualism. But when you consider that existing matter must have some way it objectively *is*, it makes you wonder why there can't be some inherent way it subjectively *is* as well, in terms of how it is existing at that moment in space and time; and so, as said, consciousness is really just the way that existence "feels" for everything that exists.
Clearly consciousness is produced by brains. Your idea isn't new, but it lacks any explanatory mechanism. Why aren't mountains conscious?
This theory implies that mountains are, in fact, conscious, if on a very primitive level (again, it differentiates between consciousness and self-awareness). Objecting to the claim that all matter is conscious by asking "why isn't x matter conscious" is begging the question, wouldn't you agree?
I'm asking why mountains aren't the most conscious, being so big. Why wouldn't they be self-aware? All that's happened here is that the hard problem of consciousness has been moved to the hard problem of self-awareness.
And anyway, brains are clearly related to consciousness, which is why blows or shots to the brain kill but blows to the legs do not. And we can measure brainwaves, etc.
That would still leave you with some kind of interaction problem, no? At some point in the causal chain leading to us writing these comments about consciousness, the "innate way existence feels like" has to be expressed in terms of neuron firings and such. But we know that neuron firings already have a complete explanation in physics. So where/how is the interaction between the "innate quality of matter itself" and the physical neuron firings happening, without violating the causal closure of the physical world?
Interesting. Not too different from how I think about it, but I have always conceptualized it as materialism, not panpsychism.
Let us assume qualia are an innate quality of matter. However, most of how humans experience the world is mediated by the function of our nervous system, not bare qualia. The qualia of seeing a color or tasting anything require sensory organs - a piece of matter like a rock does not possess them, so the only innate qualia all matter has must be quite different (I wanted to write "much simpler", but maybe that is in the eye of the beholder).
I would agree, interestingly this implies that what we experience as "redness," for example, actually has very little to do with what the subjective self-experience of a "red" object or a red-wavelength photon is. Instead it corresponds with how it feels to be a specific electrical pattern in a neural interface, which has nothing in itself that might physically be described as "red" from the outside, but which has become habitually triggered by this otherwise unrelated outside stimulus because it happened to be the pattern of material being experienced as such that, as a cognitive pattern that benefits survival, was most convenient to be triggered by external redness. Essentially, the cognitive equivalent of "the map is not the territory," which is an uncontroversial idea in cognitive sciences, but acquires this additional interesting aspect when we consider the territory itself having the same capability of subjective experience (if not self-reflexivity) as the map.
The response is entirely about free will.
I think it is a pretty hard problem. After all we haven't solved it yet! That said I don't think it is insurmountable, and I am definitely not a dualist.
FWIW, I suspect the "hard problem of consciousness" is really a lack of a proper definition. Without a proper definition it can't be solved. With the definition that I prefer, it's just the mind observing itself, and self-consciousness is the mind observing itself observing itself. But that only works with *some* definitions of consciousness.
There's no hope of finding a single definition that captures all the concerns and issues.
A lot of work has already been done on splitting the problem of consciousness into sub-problems relating to sub-definitions: sense-of-self, higher-order thought, access and phenomenal consciousness, etc.
That doesn't dissolve the hard problem: rather, the hard problem emerges out of it. The HP relates to qualia/phenomenality specifically.
Narrowing the definition to self-observation leaves unresolved issues...you just have to call them something else.
I write a simple newsletter where I post three interesting things, once a week.
https://interessant3.substack.com
Let me know your thoughts.
Love this and turned several of my friends onto it last month.
Oh wow, thanks!
Regarding therapists: my wife and I had really good luck using Alma. The basic gist is that you put in your insurance and the type of therapy you are interested in and then you interview the pre-sorted potential therapists until you find a fit. I interviewed three and found a great fit with a therapist that focuses on rationality based CBT and IFS (Internal Family Systems). The matching with someone who definitely takes your insurance is pretty remarkable. I’m not sure how widespread this is, but in NYC there were many many options to choose from.
Application of Kelly Criterion to forecasting markets, discuss.
f* = p - (1-p)/b
If you use your own judgement to assign p and take the market's implied b, it's pretty straightforward as long as your bet is too small to move the market; the math would be more complicated if you were pushing the odds around, and I don't feel like working it out explicitly right now.
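As a minimal sketch of that (toy numbers of my own, not from any real market): for a binary contract priced at q that pays out 1, the implied net odds are b = (1-q)/q, and the formula above simplifies to f* = (p-q)/(1-q).

```python
def kelly_fraction(p: float, q: float) -> float:
    """Fraction of bankroll to stake on a binary contract (f* = p - (1-p)/b).

    p: your subjective probability that the event resolves YES
    q: market price of the YES contract, 0 < q < 1 (implied probability)
    """
    b = (1 - q) / q          # net odds: profit per unit staked if YES
    return p - (1 - p) / b   # algebraically equal to (p - q) / (1 - q)

# Toy example: you believe 40%, the market prices 30%.
print(kelly_fraction(0.40, 0.30))  # ~0.143, i.e. stake ~14% of bankroll
```

A negative f* just means you'd take the other side (or sit out), and as noted this assumes your stake is too small to move the price.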
I know how to calculate it.
But if forecasters aren't even considering it (or if some are and some aren't), does that reflect a problem with prediction markets?
What makes you think prediction market participants aren't considering risk management?
What makes you think they are?
Probably only to the extent that exceeding the Kelly sizing will tend to drive even good forecasters' bankrolls down, reducing the liquidity in the market. If there's sufficient liquidity regardless, then it wouldn't affect the quality of the forecasts.
But doesn't the size of the bet serve as an ostensible surrogate for the forecaster's confidence in the prediction?
I have never been truly enamored with the idea that money is in fact a surrogate for anything other than having more money. Rich people are not inherently better predictors. But the theory of prediction markets, beyond the wisdom of crowds, is somehow that monetizing the process makes it more predictive.
Yes and no. If a single whale (having done the math I demurred from) could move the market all the way to their estimate of the probability without exceeding the Kelly limit, but they stop short, then yes that suggests a lack of confidence in their estimate. But if the market is liquid enough that even the largest participant can only move the line by epsilon, then the confidence of individual bettors becomes irrelevant.
I think the expected utility of monetizing the process will make better predictors *into* rich people, such that over time rich people *would be* better predictors. A lot of the theory is based on a steady state, while it seems to me that objections tend to focus on the transient effects of initial implementation (which, if it takes long enough for any given market to approach steady-state conditions, would in fact dominate).
In most betting markets you don't have the ability to make repeated bets, nor are the exact figures clear. But yes, you want to size your bets if you don't want a high risk of losing it all.
I understand, but do people betting on prediction markets actually understand this?
Forecaster A understands Kelly.
Forecaster B doesn't understand Kelly.
What should we say about size of bet and conviction with respect to A and B?
Harry Sussex has just had his autobiography leaked. In it, he makes a number of rather astonishing claims. One of those claims is that he personally killed 25 enemy fighters when he was an Apache copilot / gunner in Afghanistan (he spent 30 weeks on deployment, so approximately one enemy fighter killed per week). He says he is fairly confident about this number because he reviewed footage of his flights on a laptop afterwards, so it explicitly isn't that he e.g. destroyed a transport van and guessed at how many people were inside.
Is this number plausible? Would an average Apache copilot kill at this rate or is there something special about the way Harry was deployed? If 25 kills is not notable, what would the Apache equivalent of an 'ace' be (that is to say, a performance noteworthy enough to comment on)? What psychological support do pilots receive after killing people, given that they do so so frequently? Overall, is Harry likely to be telling the truth?
Without knowing too much about it, I'd say that if 25 kills in 30 weeks was an unusually high number for an Apache, then the Apache would be a pretty useless weapon given the expense, difficulty and vulnerability of it.
I don't think there'd be an equivalent to an "ace" though. An ace fighter pilot must shoot down other fighter pilots, in conditions approximating a fair fight. Apaches (especially in Afghanistan) will avoid anything like a fair fight, so the number of kills you get is mostly just a function of the number of targets you get sent in the direction of.
Considering the footage one sees of helicopter gunships picking people off at long range with night vision, using either missiles or guns, this doesn't seem unusual at all -- this is a team effort ofc, but presumably the gunner is the one pulling the trigger.
Another factor would be that I'm not sure how much aimless patrolling goes on with the gunships -- aren't they mostly flown either in response to intel (ie. reliable source says the guys at such-and-such compound are planting IEDs; go blow them up) or calls for close-air support. ("holy shit we are overwhelmed/these guys over the ridge are lobbing mortars at us, pls send help tuvm")
Knowing nothing about the details of his deployment - one transport van could be 10 people right there. I'm not an expert but having seen some footage from apaches in my day, 25 in 30 weeks strikes me as extremely plausible.
I am happy to say I have no idea who Harry Sussex is. As for people killing others in war zones with modern technology: sure, they might kill quite a few. Just depends on where/how they were deployed. Certainly the median soldier isn't killing at that rate. The median soldier is probably sitting at a desk in some base.
I would imagine if you're considering all combatants in a conflict the median soldier usually has zero kills and the highest possible median is one kill.
If you imagine a conflict with 100 combatants, for each combatant to get 1 kill, all 100 people would have to die. For the median soldier to have one kill, at least 51% of combatants would have to die (i.e. if 51 soldiers each get one kill and the rest get zero). In reality, the distribution is probably less flat - the top soldiers may have 10 or more kills - and without a very flat distribution it would be very hard for the median soldier to have even 1 kill. For example, if the top 10% of soldiers averaged 6 kills each, the median can't be 1 kill, since 6 kills × 10% + 1 kill × 41% would require 101% of combatants to die.
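A quick sanity check of that arithmetic as a Python sketch; the 100-combatant scenario is just the hypothetical from the paragraph above, nothing more:

```python
# Total kills can never exceed total deaths, i.e. 100% of combatants.
n = 100                  # hypothetical combatants
top_kills = 10 * 6       # top 10% of soldiers averaging 6 kills each
median_kills = 41 * 1    # ranks 11-51 with one kill each, making the median 1
total = top_kills + median_kills
print(total, "kills >", n, "possible deaths")  # 101 > 100: impossible
```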
Yeah that was my point, just stated differently. But also I was mostly talking western soldiers who are generally as a group achieving quite high kill ratios due to their small numbers and materiel advantage. Even so I agree I would bet the median is zero.
"Harry Sussex" is Prince Harry of the UK, I'd be willing to bet it's much more likely you've heard of him under the latter name. "Sussex" is in reference to the fact that he is the Duke of Sussex, although he has stepped down from the duties normally associated with that title. Technically if you wanted to refer to him with a standard "civilian" given and surname, I believe the surname would be "Mountbatten-Windsor," which is what his children use, ̶a̶l̶t̶h̶o̶u̶g̶h̶ ̶I̶ ̶t̶h̶i̶n̶k̶ ̶"̶S̶u̶s̶s̶e̶x̶"̶ ̶w̶a̶s̶ ̶w̶h̶a̶t̶ ̶w̶a̶s̶ ̶u̶s̶e̶d̶ ̶o̶n̶ ̶h̶i̶s̶ ̶u̶n̶i̶f̶o̶r̶m̶ ̶w̶h̶e̶n̶ ̶h̶e̶ ̶w̶a̶s̶ ̶i̶n̶ ̶t̶h̶e̶ ̶m̶i̶l̶i̶t̶a̶r̶y̶,̶ ̶h̶e̶n̶c̶e̶ ̶t̶h̶e̶ ̶o̶r̶i̶g̶i̶n̶a̶l̶ ̶c̶o̶m̶m̶e̶n̶t̶e̶r̶ ̶u̶s̶i̶n̶g̶ ̶i̶t̶.̶
Edited for correction: upon looking it up, the surname used on his uniform in the military was "Wales," in reference to him being the son of the Prince of Wales.
It shows what a total idiot the guy is, to paint a great big target on his back and possibly even endanger his family with his probably exaggerated bragging, not to mention that MeAgain may feel a bit ambivalent about him mowing down non-white people by the bushel (if his exploits are to be believed).
I never gave it much credence before, but now I'm truly starting to believe the rumours that he is actually the son of someone called James Hewitt, who had an affair with Princess Diana. Knowing he is a cuckoo in the nest, and not in the royal bloodline, might explain Harry's apparently compulsive bitterness and vindictiveness (although piles of loot, estimated at $50M, for publishing his latest pot boiler and associated interviews is another fairly plausible motive!)
Still, US readers, take heart, you may not have to put up with him for much longer. He also boasted about taking cocaine, and could soon therefore be kicked out of the US as an undesirable alien! :-)
Oh I dunno his mother (Diana) was also a vindictive airhead. Far as I can tell people only like her because she was pretty and died young and glamorously.
She wasn't even especially pretty, and I'm not sure why people are so intent on pretending that she was. I mean, by Royal Family standards she was, but by "pretty celebrity" standards she wasn't.
The irony is Harry has been complaining for years that the media destroyed her life; but now he is voluntarily allowing them to do the same with his life, not that he has the self awareness to realise that (yet).
Prince Harry's tell all falls into my "none of my damn business" bucket but I can't seem to avoid hearing people talk about it. It seems that people who are actually interested in this stuff are feeling like he's crossing into TMI territory.
Certainly Harry is strikingly similar in looks to James Hewitt; especially when you look at photos of them taken at approximately the same ages.
He also looks strikingly similar to a young Prince Philip though. Unhappily for His Maj, the good looks in the Mountbatten line seem to have skipped a generation.
People on defense oriented subreddits seem to think that number is on the low end for an Apache pilot in Afghanistan. Air aces are usually in terms of shooting down enemy fighter jets or at least bombers where the enemy is supposed to have defenses. Maybe it would make sense for an Apache crew who destroy 5 armored vehicles to be aces?
Awesome question - I too want to know the answer to this. I look forward to seeing what other commenters with more relevant info have to say.
About 9 years ago, I read this old, silly Atlantic piece about how apple cores are a social construct: https://www.theatlantic.com/health/archive/2013/11/apple-cores-are-a-myth/281531/
...but I ended up taking the advice to heart, and have been happily eating everything-but-the-stem ever since. It really does add up to significantly more apple, especially on the larger varieties like honeycrisp. Made me wonder why I'd spent the first two decades of my life not consuming apples this way. Peer pressure, I guess? Never bothering to fact-check just-so stories? Not understanding "the dose makes the poison"? I don't know. It's at least understandable why people peel fruits and vegetables - they've a notable difference in flavour, texture, etc., if edible in the first place at all. There's some real there, there. (But I still evangelize eating kiwi skins when possible. Have made a few converts!)
Wonder how many other parts of life are like this. Untruths that go unnoticed and thus uncontested, until one actually bothers to verify their structural integrity. Make beliefs pay rent, indeed...
I've been under the impression that skins of kiwis and peaches and other fuzzy foods are nasty because the skins pick up and hang onto potentially pathogenic bacteria, and/or chemicals such as fertilizer and pesticides. Love to be convinced I'm wrong.
I mean, I'll concede that that's what makes cantaloupes higher-risk: the intricate skin pattern creates very high surface area, so it's an excellent breeding ground for bacteria, and dirt isn't easily removed. That's for a ground crop though, the worst possible combination. Berries also have high surface area, and I think that's maybe why they're suggested to wash extra-well? But not sure. Never bothered to research. Cursory Google shows most "should I wash X" sites are pure woo fluff masquerading as wannabe lifestyle brands, so I'm skeptical.
(But I'll fully agree that reversed chemophobia is not ideal weighting of tail risks, and it is after all a trivial inconvenience at best to wash. Still think aesthetics is the stronger argument for going skinless though. Lots of popular tastes and textures that I react abnormally poorly to, so I'm definitely wired weird!)
Do you enjoy the core of the apple as much as the rest of it? Aren't the seeds and tougher membranes in there unpleasant to eat? I'm well aware that there's nothing poisonous or inedible in an apple core. I don't eat them because I don't like them.
If I had to plot out my Apple Enjoyment Gradient X versus Longitudinal Apple Strata Y (latitude? always get those confused), I'd say it looks like a mostly continuous function, yes. Sharp drop-off for the stem ends, as that's the part that "takes work" to avoid, and the bottoms tend to be a primary spot for early spoilage. I'd estimate that my decreased utility from eating a "mushy" spot of the apple is at least three times any small signal related to the membranes and pips.
The technique also matters - there's a reason that video has him eating the apple vertically rather than horizontally. This is intuitive with, say, steak - of course you're gonna notice the fibre a lot more if you cut with the grain, versus against it! To the extent an apple has noticeable "grain", it's parallel to the core. So eating it from the bottom rather than the side means only getting small chunks of such fibre per bite, rather than a whole mouthful at once. It's also easier on the mouth, obviously. Like the way one normally eats celery, rather than nibbling it sideways, which is an endless chewy stringfest.
I think there are certainly different thresholds, and for some people that minor unpleasantness is enough to write off the cores. Some people really hate potato or grape skins too! I guess I'm wondering about an unknowable counterfactual - what percentage of that dislike is organic, versus inculcated via well-intentioned expectations-setting? One way or another, there's a good amount of implicit and explicit pressure against apple cores, an ambient Everybody Knows; would people eat them more often without such messaging? Or a revised message of "it's actually just the seeds that need avoiding", perhaps. No one throws away the "core" of a watermelon for containing seeds, for example, that'd just be silly. (Plus it's usually the sweetest part.)
Just remember to think of latitude as ladderitude, it is the height that you climb towards the poles.
Ah, so this is why the "Global South" remains poorer - those nearer the poles keep pulling up the ladderitude after them. Makes sense!
(I wonder if it's partly cultural confusion - the compass has different emphases in Chinese tradition...East, North, West, South. So I always think about left <-> right before up ^|V down.)
>Untruths that go unnoticed and thus uncontested, until one actually bothers to verify their structural integrity
Isn't this kind of Nietzsche's whole thing? I wonder if he would eat the apple core?
On a semi-related note: I get a headache/near migraine if I eat an apple without cutting it. No idea why. This seemed to start just a few years ago. I've assumed it's some interplay with the way my teeth are pulled when biting the apple and the usually cold temp of the apple pressing on the top of my mouth. It doesn't seem to happen with other similar fruits like pears, peaches, plums. Could just be psychosomatic.
Will freely cop to never having read Nietzsche, nor being able to suss out What's The Deal from years of ambient exposure.
That's interesting. I tend to leave apples out, rather than refrigerate them...dulls the flavour, doesn't seem to extend shelf life that much. The firmness of the flesh is also significantly higher than those other fruits. Do you have issues with other similarly-textured whole-bite things?
In "Beyond Good and Evil" Nietzsche lays out his morality. He starts by lamenting that everyone before him just took the presence of morality for granted without examining if it exists at all. I was trying (unsuccessfully) to make a joke. I liked the idea that Nietzsche would have strong opinions on how to eat apples or that eating an apple in whole would be what an Ubermench would do.
I should try eating an apple at room temp to see if I still get the headache. This all started after I took Wellbutrin, which caused serious jaw/teeth clenching and then pain/headaches. That's part of why I think it's psychosomatic or a learned response.
Anyway, sorry for derailing your question!
I eat apple cores so that nothing is left but the stem. It isn't the best part of the apple, but I think of a core as sufficiently messy and nasty that I'd rather not have it around.
It's possible to eat shrimp shells. I found this out from reading a woman who was doing caloric restriction, which can make a person pretty hungry.
Why do people leave the tails on shrimp? Is there any dignified way to get the tail off? Do most people eat the tails?
> Is there any dignified way to get the tail off?
Pinch the base of the tail firmly along its axis of symmetry while you're eating the shrimp. The meat pops right out, especially if you pull on the shrimp a little with your teeth.
Thank you. That's better than holding the tail in one hand and pulling the shrimp out with the other, which is what I've been doing.
Neither approach is great if the shrimp has soup or sauce on it.
Oh yeah, the tail is my favourite part of the shrimp! I usually get some of those off other peoples' plates when out with family. And it's usually the only shell remaining, what with how they're typically industrially shelled. Which is unfortunate, lotsa classic Chinese <s>shrimp</s> prawn preparation relies on having that crunchy outer layer. I believe they're left on so that people have something solid to grip while preparing them, and also for cocktail-type purposes or eating while fried; the totally-denuded ones seem to only come as Already Fully Cooked, which are used for different things. It's like a natural handle. For more formal occasions, usually one picks up or spears a shrimp near the tail, bites off the rest, then discreetly pushes the discard to a corner or disposal dish.
I think the majority of people don't eat them, since it's unintuitive - most recipes call for shelling, and lotsa older folks still have memories of "de-veining" shrimp, back before better industrial processes largely obsoleted that. They really are crunchy and a bit sharp. I like them anyways cause they tend to pick up a lot of sauce/seasoning, and are quite high in calcium. Forget bone broth or eggshell coffee, give me shrimp shells please. (Great base for seafood broth too, though getting enough is a bit expensive.)
I'll admit I've never heard of someone eating shrimp tails before, although I'd imagine it would be a good source of calcium. As for your other questions, I think the tails are left on when the shrimp is served out of some combination of aesthetics and to give a convenient place to hold it, and I don't really know of any good way to remove the tails that isn't either fiddling around with them in an undignified-looking way, or just biting almost up to the tail and leaving a bit of flesh behind.
I started doing this when a teacher in high school told me the cyanide in the seeds isn't enough to be poisonous, but is enough to slightly lower your blood pressure in a way that might be healthy (I still believe the former, not so sure about the latter). In any case, I've been eating the whole apple ever since.
Wait til you have kids and then you can extract that much more value from their unfinished cores!!
I mean orange peels and lemon peels are also edible. Apple cores really do have a different and more unpleasant texture. Some people skin grapes.
Coincidentally I also enjoy eating those citrus whole - but those seeds I spit out, they're quite noticeable and often painfully sharp. Dentists were Very Concerned when I was a kid and used to eat whole lemons. Luckily nothing ever came of it. They're very tasty dried with the peel on too! I always use extra zest when a recipe calls for it, hard to go wrong.
The one exception is typical McNavel Oranges...the raw whole peel on those is rather unpleasantly bitter and stringy unless sugared. But they also taste like ass just generally. Give me mandarins (totally edible in whole) or satsuma oranges (less so but exemplars of fruity goodness) instead any day.
I eat everything but the stem too. I can recall at a very early age - probably when I first knew the words - asking my mother if I was at ‘the core’ yet.
The pips or seeds of many fruits contain amygdalin, which the body converts to cyanide!
https://www.theguardian.com/technology/2015/oct/11/cyanide-in-fruit-seeds-how-dangerous-is-an-apple
Yes, am aware. Hence reference to "the dose makes the poison". The liver can deal with small amounts of cyanide, and does on a regular basis anyway if one eats such exotic foods as rice. From what I've read, the amount is so small as to be negligible; one would have to intentionally eat quite large quantities of pips to get any noticeable problems; the case studies we do have of cyanide poisoning via food involved improper handling, directly sucking on stone fruit pits, or other such improbabilities. Much more likely to get bored of eating apples before that, it's not a once-daily habit even. It's also largely from chewing or grinding up the seeds - if merely swallowed, they mostly pass through inert. It's not like improperly-prepared cassava, which will Definitely Cause Harm in fairly short order.
Stephen Skolnick who used to comment on here a bit says this kind of low-dosage toxicity is only really gonna be an issue for someone with a messed-up microbiome, or who otherwise loses the ability to eliminate such poisons*...but I'm neither in the habit of taking antibiotics, nor eating all that much processed food. So not particularly worried at the moment. I'd abstain for awhile if I were planning a surgery or something though.
*https://stephenskolnick.substack.com/p/the-thousand-secret-ways-the-food and follow-up https://stephenskolnick.substack.com/p/thousand-secret-ways-ii
Same thing happens with Kiwis, I'd imagine, seeing as most people still don't eat the perfectly edible skin.
I'll admit that the typical kinda-furry brown ones seem intimidating at first..."won't it be all weird and scratchy?" But then I ask if they peel peaches, and of course not, who does that. It's very similar though...peach fuzz, kiwi fuzz. The bark is worse than the bite, literally. And even totally smooth foods can make one choke if they "go down wrong".
These days one can even find <s>naked</s> "golden kiwi", which are sorta extra-oblong and ruddy-yellow rather than green inside, and are totally denuded of hair. And the instructions on the container still say to scoop out the innards and throw away the skin! It's very strange.
(I first heard about kiwi skin being edible as a passing throwaway comment on, idk, some Discovery Channel show or whatever. They claimed it's the part with the highest concentration of nutrients, and this is similarly true for many other foods, like potatoes and grapes. Big If True, never did look into it though. Which, yes, ironic. I just can't be bothered to peel stuff though.)
I was at a workshop where they provided fresh fruit in the pauses, but no cutlery at all. So I was like, how do they expect us to eat these kiwis?? --- And learned that you can eat them with the skin.
Always a fun stunt at parties, makes me remember Police Academy 2, that supermarket pillaging scene where one of the thugs eats a banana with the peel. (But I think banana peels aren't edible)
Yes, I admit I partly do it for surprising people too. Contrarianism for its own sake is a thing.
Banana peels are, in fact, edible - just rather unpalatable without cooking. Very similar to plantains. You can sometimes find dried "baby" bananas with the skin on, as an easier segue into the stuff. Most forms of preparation are like dealing with raw bulk kale - you really have to beat it into submission and cook rather thoroughly to render out that slimy bitterness. Baking works well, and it makes a good confection base too (candied peels).
Although, of note, the composition of a banana (moreso than most fruits) changes *very* quickly as it ripens. The greener a banana is, the higher proportion of starch : sugar in its carbs, and vice versa for yellow. This also affects the micronutrient composition...some are more easily available when greener, some when yellower. And has attendant effects on taste and texture - there's a reason banana bread almost always uses extremely-ripe ones! (I like greener bananas, personally.)
I do, and similarly for pears. It really isn't particularly noticeable unless I go out of my way to notice it. Sorta like how string beans* and celery ribs do have a "string", it's definitely A Physical Thing, but the level of botheringness seems to correspond strongly with how much one has been conditioned to expect it as a bother. Accidentally chewing a pip itself is slightly unpleasant (bitter notes, like with watermelon), but usually they just get swallowed. Nothing in particular stopping me from spitting them out, other than lack of convenient proximity to compost bin, I guess. A later article claimed that the density of bacteria is particularly high around fruit seeds, such as apple cores, so I guess there's some potential benefit too.
*I also enjoy edamame and pea pods, peanut shells, and popcorn kernels...but that's a whole other level of very tough fibre that I don't at all classify as Not Really Noticeable. Used to literally eat toothpicks as a kid, so it's still an improvement.
I did eat toothpicks as a kid too, and still eat popcorn kernels! Most of the time when I eat them it's to "have something to do with my mouth"; I don't like not having anything to chew while eating. I'll try to eat apple cores, it may help me slow down when eating.
I'll admit that I'm very skeptical about kiwi skins; the ones I get have a furry skin that seems like it would be like chewing hair, plus it may be hard to get clean.
Yeah, it was either that or paper towel. I liked the brown ones made of recycled unbleached paper better. Luckily never developed a true oral fixation like smoking, but it is nice to chew gum or otherwise be occupied in some such way. Much as I like soup, it's not a Proper Meal without some mastication. The solution is clearly to put chewable things in the soup. Diced-up corn on the cob is nice, that gives an excuse to eat with my hands too. Who needs utensils, really?
Kiwi skin has a lot less texture than you'd expect. Not at all like, say, artichoke fuzz (which is legitimately gross and tastes worse, even when boiled to death) or corn silk. The cleanliness I'm not sure, have always been pretty cavalier about eating plants without washing. Supposedly just a rinse takes care of most _____cide residues. Unless they're visibly dirty, like many root crops, in which case one might wanna actually scrub some. (Or melons. Melons really ought to be wiped down before slicing. Amazingly dirty, and to think of passing a sullied knife through the whole flesh...bleurgh.)
I think the fact that you eat peanut shells is pretty important context for your assertion that apple cores are basically the same as the rest of the apple...
I'll agree that they're on a spectrum with string beans, though -- I don't want those, either. ¯\_(ツ)_/¯
They're boring compared to basically any slightly-more-exotic pole bean variety, yeah. Give me purple beans or Chinese long beans any day instead. If it doesn't have that characteristic crunch, if one's teeth don't "squeak"...that's no pole bean, I say.
I guess no one ever taught me they're "supposed to" be shelled before eaten/thrown away? Always just...bit right in. We used to roast pumpkin seeds at home, it seemed a similar type of food to eat whole. Same with sunflower seeds. A prolific snack during ill-fated Little League days. David brand, was it...? Pretty confident one major reason people don't eat the shells is that they're typically extremely salted. I like salt in my diet, but the salted versions of nutshells are...ouch. Harder and harder to find roasted-unsalted these days, and washing them off makes it all gross-soggy. Defeats the purpose of crunchy goodness.
Oh my, I missed the antistocks... let me tell you another reason why they shouldn't work.
They're a proxy for an arbitrary metric that nobody optimizes for - Tesla does not pay dividends, it never has and I doubt that it ever will. Some companies pay dividends regularly, others do not, and the decision about the dividends can be quite divorced from the rest of the value.
Measuring profit? Let me point you to Amazon, which doesn't believe in having profits; as much money as they get, they reinvest or do some accounting magic with it (I'm exaggerating).
Whatever measure is chosen, it will at best be fitted for last year's management with last year's strategy. You aren't running just counterparty risk; you're betting that the measure chosen does indeed reflect changes in value and won't change for completely unrelated reasons.
So if there's one thing this isn't, it's "as simple as buying long."
The only argument I can see for them is that a regular share's value is so ephemeral that it's impossible to make the connection between a company's value and the price of a financial instrument any weaker, so who cares.
Value of a stock is also (in the simplest model) the present value of expected future dividends.
If we add to the model the payout to public shareholders on company going private, we see that the anti-shareholders would owe the same amount.
This is straying from where I'm confident of my knowledge, but I assume that if every other metric I know about has fiddly constraints this one would have those too.
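For what it's worth, here's a minimal sketch of that simplest model in Python; the dividend stream and discount rate are invented numbers, purely to show the mechanics:

```python
def present_value(dividends, r):
    """Present value of a stream of expected future dividends,
    discounted at rate r per period (the 'simplest model' of a stock)."""
    return sum(d / (1 + r) ** t for t, d in enumerate(dividends, start=1))

# Example: $2/year expected for 30 years at an 8% discount rate
print(round(present_value([2.0] * 30, 0.08), 2))  # 22.52
```

Under this model an antistock would just be the negative of that sum, which is why rare events like a going-private payout can still anchor its value.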
Devil's advocate:
Basing a financial instrument's value on rare occurrences is healthy for the economy, at that point I might as well be trading in NFTs that represent different attitudes about tesla's future. And that way the SEC is less likely to throw me in jail.
As a defense of the simplest model, Tesla talks a lot about not paying dividends, but imagine if there were some kind of legally binding contract they could sign that would render them forever unable to pay anything like a dividend. If Tesla signed this contract, do you think their stock price would go down? If so, that suggests the current price reflects some chance they start paying dividends.
Shiller says something very similar in his 2007 Financial Markets course.
Where it breaks down is that if you own enough stock, you can reverse such a decision and let the dividends flow to you: it's called going private.
You're still investing in the payment-of-Tesla-dividends proposition instead of Tesla's value, so the antistock is only worth that sliver of value.
I think the stock price change would reflect what people think of a management that signs such contracts more than the change in dividend expectations. It would also depend on the current level of such expectations, on how much uneducated money is moving Tesla's price (a thing I don't know how to measure), and on how tight that contract is - i.e., would it block other forms of capital return (which would cripple the company's financial strategy), or leave them open (making the point a technical triviality)?
If Tesla did so as a move to commit to stock buybacks as a capital-return mechanism, because of the taxation rationale (as Berkshire Hathaway does), the price might go up rather than down.
Even academic mathematics isn't safe any more
https://twitter.com/joelwatsonfish/status/1610778319426916357
"Scenes from the Joint Mathematics Meeting (the largest annual meeting of mathematicians in the world):
A talk entitled "Undergraduate Mathematics Education as a White, Cisheteropatriarchal Space and Opportunities for Disruption to Advance Queer of Color Justice""
What do you think the average opinion of this researcher and her work was among the mathematicians in attendance? Because I have a guess.
They wouldn't be there if there were any reasonable level of hostility towards them. And the fact that these presentations are even being accepted by the conference organizers is the problem.
Most people reject woke nonsense, but it doesn't matter! The problem isn't most people agreeing with it, it's that these ideologies capture institutions and everyone has to go along with it at the risk of having their livelihoods jeopardized.
How many mathematicians in attendance would be comfortable putting their name on an open letter decrying this presentation's thesis in explicit and direct terms (e.g. not saying something like "This presentation is well intended but unhelpful in fighting against patriarchal systematic racism")? I have a really, really good guess.
Perhaps they should install a light to let people know when it is safe to stop clapping.
Here's something sensible: https://www.amazon.com/Twice-Less-Performance-Students-Mathematics/dp/0393317412
A book with the point that AAVE (African American Vernacular English) includes some phrasing which makes mainstream English phrasing about arithmetic hard to understand. This could leave a teacher thinking that the kid just can't understand, while the problem is a need for translation.
Immigrants don't have special problems with learning math that I know of. First- and second-generation Asian immigrants famously do well in math. That tends to contradict the hypothesis that English grammar is the problem. I personally learned math in a foreign language (French) and was not aware of the language being an impediment.
Quite a fair point, one that absolutely doesn't depend on whether the child is "Queer" or not.
(And it only applies to blacks in the US)
There's plausibly something about racism in the situation-- not just that children aren't being taught in their home dialect, but that people who default to believing that black people are stupid aren't going to notice there's a language difference causing the difficulty with arithmetic.
This language difference is directly impacted by intelligence. Second-gen Asian immigrants whose parents don't speak English as a first language do well above average in literacy. And the thing is, we know blacks are less smart than whites through intelligence testing. And guess what, the IQ subtests most dependent on language (e.g. vocabulary) have smaller racial gaps than those not dependent on language, meaning these IQ gaps cannot be a product of language differences.
And it's bizarre calling this racism, because the "anti-racists" are the ones pushing for AAVE to be accepted as a real language and to not teach black kids proper English. It's the "anti-racists" that have created this situation through their dumb, Afrocentrist ideology.
A possible reason (that you ignore) that "second-gen Asian immigrants whose parents don't speak English as a first language do well above average in literacy" in Standard American English is that the English spoken by the communities they're in is by and large Standard American English, so there isn't a competing dialect of English involved. And before you protest, just having an "Asian accent" in SAE is not the same thing as a wholly separate dialect with different tenses, different inflection rules, and other grammatical rules, like AAVE is.
If this model is correct, it seems like we should see black kids do better learning math from black teachers than from white teachers. Is there good evidence for this? (I don't know the literature, though I have a broad impression that education research is often not all that great, and I worry about replicability and garden-of-forking-paths issues in it.)
I suspect in an educational setting black teachers will be speaking Standard American English.
Sure, but if the kid is stuck because of a mixup between AAVE and English, a teacher fluent in both can help.
No, it seems like we should be stamping out AAVE as the main dialect black kids speak and get them speaking the same English as the rest of the country/world.
Perhaps by starting with AAVE and transitioning to mainstream English?
You're oversimplifying the situation. Not all black teachers know AAVE, and some white students are native speakers.
Sure, but it would be an easy study to do, and P[speaks AAVE|black] >> P[speaks AAVE|white].
If the model you're describing is correct, then we have a pretty clear prediction of something we should see--black kids learning math from black math teachers should do better. If we see that, it's some evidence for the theory; if not, it's some evidence against it. But also, it sure seems likely that there's already data on this somewhere.
My point is: the kind of people who mix "cisheteropatriarchal" and "whites" and "queers of color" in a single sentence without punctuation are probably not interested in your very sensible and level-headed reform. They want solutions like "Stand all the white boys in a line every week and tell them how awful they are, white girls are one-time-per-month", or "Leave K12 education to suck exactly as much or harder, but accept [politically favorable] blacks more often at universities\jobs, using whatever dumb nonsense you can come up with to justify".
They have to do this, they are addicted to heroics and "Rebel Alliance" narratives, and "kids should have better education" isn't exactly an oppressed take, although people differ on who's responsible and who should pay the bill, but no monster is going to argue with a straight face that "Yes, black kids are having difficulty due to language\worse income\less fathers, and that's a Good Thing". There is no "alpha" in pushing a narrative like that, to borrow Scott's parody terminology.
Wokism actively rewards nonsense, because it's a system of morality that rewards novelty. Whenever "morality" and "novelty" come together in a single sentence, disaster is as expected as Death and Taxes. There *is* a place for novelty in morality, at a rate of about 1 idea per century (if ordinary human brains are the ones doing the thinking).
But in a system like wokism, you have to be cutting edge, and how can you be when all the gays have gotten all their rights? The only solution is to invent new gays (made-up pronouns and identities) and mix-match the vanilla gays with other identities (queers of color and intersectionality in general). The number of combinations you can get this way is exponential, so they can never run out. They are trapped in a self-accelerating feedback loop of inventing new injustices wholly out of thin air and pretending to be mad about it, then achieving a meaningless\non-existent victory and going to bed before repeating the entire cycle the next day. The world is chock-full of actual injustices to be extremely mad about and spend a lifetime fighting, but they aren't new and sexy, and they are very exhausting, and nobody talks about your (tiny and incremental, mostly pyrrhic) victories on Twitter and TED.
Some people do leave wokism. A lot of them are Marxists.
I keep hoping that talking reasonably will pry some people loose.
It's pointless because the problem is institutions, not individuals.
One might almost pity them. Almost.
I can't imagine that it has anything to do with gender. There might be other dialect differences that affect smaller groups, and the problem isn't necessarily only American.
I can imagine that a few children with very good mathematical intuition guess what the teacher must mean, and the majority conclude that it's incomprehensible and they can't learn it.
That word "disruption" suggests that the speaker's preferred strategy is to level down rather than level up.
The Joint Mathematics Meeting is the joint meeting of the American Mathematical Society (the main professional organization of professional mathematicians) and the Mathematical Association of America (the main professional organization of mathematics educators at the undergraduate and high school level). Educators are a core part of this conference. I'm not sure why anyone would be surprised that educators think about race, gender, and sexuality in the context of education. It's only if you think that education about *math* is somehow not the same sort of human practice where people think about these issues that this should be surprising.
Look, this presentation is woke nonsense. I know it, you know it.
Typically mathematics has been more resistant to this nonsense because it is a strictly technical discipline that is less dependent on language than almost any other. This is obviously true, which is why the humanities were ideologically captured many decades ago, whereas serious inroads have only started being made on academic math in recent years.
But the fact that conference organizers for math are accepting this nonsense for presentation shows a significantly negative cultural change that likely signals future ideological capture that will poison yet another field.
> It's only if you think that education about *math* is somehow not the same sort of human practice where people think about these issues that this should be surprising.
Presentations about education are fine. Presentations about education based on demented far-left race and gender ideology are not fine. None of this is based on scientific evidence, it's pure ideology, and it will poison mathematics the way it does everything else it touches.
Kenny, out of pure interest I tried looking up the talk to see what was the actual content. I didn't manage that, but I did find the photo identifying the man giving the talk: Luis Leyva.
Here he is:
https://peabody.vanderbilt.edu/bio/luis-leyva
Assistant Professor of Mathematics Education, Department of Teaching and Learning
Faculty Affiliate, Department of Gender & Sexuality Studies
"At the juncture of gender studies, higher education, and STEM (science, technology, engineering, and mathematics), Leyva’s interdisciplinary research explores narratives of oppression and agency across historically marginalized groups’ educational experiences to uncover interlocking functions of racism and cisheteropatriarchy in undergraduate STEM. He draws on critical race theory, women of color feminisms, and queer of color critique to conceptually and methodologically ground his scholarship, which centers historically marginalized voices in STEM higher education across intersections of race, gender, and sexuality."
I have certainly been educated, in that I never before heard of the term "queer of colour" (I was aware of debate around whether "queer" was or was not a slur). I had thought the term was "queer person" but now it seems that "queer" is a noun in itself.
Let us continue:
"Leyva is the Director of the PRISM (Power, Resistance & Identity in STEM Education) Research Lab at Vanderbilt-Peabody College. The lab’s research serves to hold an “intersectional prism” up to underrepresented students’ narratives of experience to illuminate and disrupt multidimensional forms of oppression in undergraduate STEM education.
...His second project, titled COURAGE (Challenging, Operationalizing & Understanding Racialized and Gendered Events) in Undergraduate Mathematics, examines features of instruction in undergraduate calculus classrooms that students from historically marginalized groups experience as discouraging or supportive as mathematics learners. This project, supported by the National Science Foundation (Improving Undergraduate STEM Education program), addresses the pervasive role of calculus as a gatekeeper that reinforces racialized and gendered access to STEM. "
So - calculus is bad? Has anyone got an opinion on this? Do away with gatekeeping calculus or not?
If the man is managing to teach black kids maths, then good for him. But I can't navigate all the jargon, and what on earth does the below have to do with learning engineering?
https://my.vanderbilt.edu/prismlab/
"As a space of collective healing and re-humanizing the research experience in the academy, PRISM members support one another in constantly interrogating their positions of privilege and oppression to engage in research alongside historically marginalized communities in STEM education."
Is that a therapy session or a maths class?
I looked up one of the linked research papers and hold on to your hats, you'll never believe this!
https://www.tandfonline.com/doi/abs/10.1080/00221546.2021.1879586?forwardService=showFullText&tokenAccess=RIF3PNXJZI3E3FQ9GXQT&tokenDomain=eprints&doi=10.1080%2F00221546.2021.1879586&doi=10.1080%2F00221546.2021.1879586&doi=10.1080%2F00221546.2021.1879586&target=10.1080%2F00221546.2021.1879586&journalCode=uhej20
"Introductory mathematics courses, including precalculus and calculus, largely influence Black and Latin* students’ persistence and sense of belonging in STEM. However, prior research on instruction in these courses for advancing more equitable outcomes is limited. This paper presents findings from a study of 18 Black and Latina/o students’ perceptions of introductory mathematics instruction as a racialized and gendered experience at a large, public, and historically white research university. Sociological perspectives of logics and mechanisms of inequality guided an analysis of Black and Latina/o students’ group interview responses on how instruction perpetuates racial and gendered oppression. Two logics were identified: (i) Instructors hold more mathematical authority than students in classrooms; and (ii) Calculus coursework is used to weed out students ‘not cut out’ for STEM. These logics, coupled with the influence of broader sociohistorical forces (e.g., cultural scripts of behavior, stereotypes), gave rise to mechanisms of inequality through seemingly neutral instructional practices that reinforce racial-gendered distribution of classroom participation and STEM persistence. Our findings inform implications for STEM higher education researchers and mathematics faculty to foster socially affirming STEM instruction, especially in introductory courses."
Yes, imagine this shocking revelation: teachers have more authority than the students in classrooms! Were any of you aware of this shocking and distressing "logic" before?
It gets better; paper number two:
https://jume-ojs-tamu.tdl.org/jume/index.php/JUME/article/view/295
"BSTRACT In this article, the author discusses the intersectionality of mathematics experiences for two Latin@ college women pursuing mathematics-intensive STEM (science, technology, engineering, and mathematics) majors at a large, predominantly White university. The author employs intersectionality and poststructural theories to explore and make meaning of their experiences in relation to discourses of mathematics ability and pursuits of STEM higher education. A cross-case analysis of two Latin@ college women's counter-stories details the development of success-oriented beliefs and strategies in navigating the discourses that they encountered institutionally and interpersonally in their mathematics experiences. Implications are raised for P--16 mathematics and STEM education to broaden equitable learning opportunities for Latin@ women and other marginalized groups' construction of positive mathematics identities at intersections of gender and other social identities."
A whacking *two* Latina students are used to "raise implications for STEM education". I'm loath to call this - no, scrap that, I'm not loath at all. I *am* calling this a grift; do a round of conferences, write a few papers, head up a 'lab' talking about white cis hetero patriarchy bad but extra bad in maths, let's stop teaching particular branch of maths and just let all the queer trans females and other persons of colour enter the engineering course and give 'em the degree at the end, much more equitable!
(I say this as someone ignorant of maths and unable to understand or indeed grapple with it. But if I got a sympathy pass and award due to being a female person who can't be expected to understand hard sums, I'd be insulted and I would never be able to rely on "well at least the person who built that bridge learned how to calculate the stresses correctly so it won't collapse under me")
On calculus being bad - you'll find some very widespread shared beliefs among mathematicians (probably not majority, but maybe close) that it would be better if university calculus requirements were replaced with statistics or logic or critical thinking or something else of that sort.
That's absolutely irrelevant to the point being made. Black kids would likely struggle just as much with a rigorous college statistics course.
As for "logic" or "critical thinking", well this is absurd considering you need to understand calculus to understand vast swathes of scientific theory and these things will not help you with it. And obviously "critical thinking" is one of those bullshit terms that is so vague so as to allow almost anything to be taught underneath it, including ideological nonsense with no empirical evidence of providing any benefit to a student's congitive ability or understanding of other subjects.
Derivatives are easy and useful in many situations. The rest of the calculus, yeah, can get arbitrarily complex, and the technical details are mostly useless even for most people in STEM.
I wonder what exactly passes for "critical thinking". I mean, I understand what the idea is, in theory, but beyond the obvious mistakes, how exactly do you *teach* it, or *measure* it? (For example, teaching people lists of fallacies can easily backfire. The more you know, the easier it is to assign one of them to inconvenient information, so you become more immune against arguments you disagree with.)
So while I agree about the usefulness of critical thinking, I worry that adding it to university requirements would result in something different, such as conformity with some narrative.
Yes, it’s a hard question to know how to structure a quantitative reasoning requirement for a university. But it’s not really controversial that structuring it as a calculus requirement in particular is bad, if you don’t expect people to become engineers. And even engineers should get an actual quantitative reasoning class that covers some probability, which they often don’t get under current structures.
There are many universities where the math requirement can be filled by statistics or formal logic. I don’t know if there are many that don’t have such a requirement at all. I don’t know that any universities have changed this general education requirement one way or another in the past five or ten years.
I wonder how damaging any of this is, though? I looked at the website promoting his talk and others.
https://www.jointmathematicsmeetings.org/meetings/national/jmm2023/2270_invspeakers#leyva
And all around him are pretty normal mathematical speakers, often POCs (more often, in fact) giving proper lectures. It's probably like the occasional Marxism and Mathematics lecture in Soviet technical universities, there just because you have to have one.
More damaging to the U.K. is the general disregard for mathematics exemplified by the dull mediocrity of people like Simon Jenkins.
https://www.theguardian.com/commentisfree/2023/jan/05/maths-schools-rishi-sunak-arts-sport
You mean like how the Soviets basically rejected genetics wholesale on ideological grounds, which greatly diminished their scientific and technical progress accordingly?
And look, we're already seeing it in other fields today in the west. You can't do certain research, you can't come to certain conclusions. Intelligence researchers are having their access to genetics databases blocked. This is all real and is stunting science.
I would like to know more about this. Like, what were the exact mechanisms that caused this? Because the Soviet Union started as a backward feudal country... then at some moment it sent the first man to space... and then it fell behind again. The simple explanations like "of course, soviet communism sucks" are not sufficient, because they cannot explain Gagarin. What was the mechanism that worked at first, and then didn't; and why?
Russia is still great today at the mathematical olympiad ( https://www.imo-official.org/country_team_r.aspx?code=RUS ), which is not a completely fair comparison, because larger countries have a natural advantage; it obviously makes a difference whether you pick the 6 best math students among 1 million, 10 million, or 100 million. Then again, Germany has almost as much population as Russia, and it stays visibly behind.
Among multiple factors, how important, relatively, were crazy people in positions of scientific power, such as Lysenko, versus lack of money once the research becomes expensive? (In other words, could an unexpected source of huge income hypothetically have saved Soviet science, or was it doomed regardless?)
One explanation I have heard is that in capitalist countries, new inventions can become products overnight. So the Soviets had space research, but the Americans *also* had dozens of household items based on technology somehow originating from the space research. Could be something useful such as microwave oven, or something stupid such as laser pointer, but either way it provided additional funding for the research, and made the population familiar with the new technology; and then you had millions of people thinking how to make this even better. Meanwhile, the Soviet scientists had to convince some important member of the communist party to approve further research.
Towards the end, the Soviets were just shamelessly stealing technology from the West, and still stayed behind. Which is a shame, because they also had some cool ideas, such as a computer based on balanced ternary digits (-1, 0, 1) rather than the usual binary (0, 1), and who knows where that research could have led in a parallel history. https://en.wikipedia.org/wiki/Setun Was the missing part the ability of some rich entrepreneur to unilaterally decide to mass-produce millions of cheaper, smaller ternary computers, spend a lot on advertising, and sell them as expensive toys?
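As an aside, balanced ternary is easy to play with in a few lines of Python; the sketch below uses the common convention of writing the digit -1 as "T", which is my choice here rather than anything from the Setun literature:

```python
def to_balanced_ternary(n: int) -> str:
    """Encode an integer with digits -1, 0, 1 (writing -1 as 'T'),
    the balanced-ternary representation used by machines like Setun."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:               # a remainder of 2 is digit -1 plus a carry
            digits.append("T")
            n = n // 3 + 1
        else:
            digits.append(str(r))
            n //= 3
    return "".join(reversed(digits))

# 8 = 1*9 + 0*3 + (-1)*1; negation is just flipping digit signs, no sign bit
print(to_balanced_ternary(8), to_balanced_ternary(-8))  # 10T T01
```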
I suppose that different people will come up with different theories. I am curious what was the real reason. Probably a combination of things, but what was their relative importance, and why it first worked and then it didn't?
It’s more an Asian, Cisheteropatriarchal Space though.
Your father is disappointed in this comment. ;)
How specifically is the queer-of-color math different from the cisheteropatriarchal math?
I can see how education relates to color (although I consider the terminology very unfortunate, because it is not the "color" per se, but rather its correlations with being a *cultural* minority and/or not speaking English as a first *language*), or how sexual orientation could be relevant to some biology or history lessons... but the impact of sexual orientation on mathematics is a mystery to me.
All I can imagine is mentioning something like "and by the way, the guy who invented the square root of seven was a black gay who identified as a fox". Is that all?
>How specifically is the queer-of-color math different from the cisheteropatriarchal math?
Um it is easier with lower expectations and more excuses.
I assume it’s more about classroom dynamics than about humans mentioned in the curriculum.
The author literally thinks calculus exists to gatekeep people who aren't straight white "cis" men from technical fields. It's ideological nonsense, not science.
And don't you think it's bizarre that foreign students from Asia who don't speak English as their first language have no issue whatsoever with these "classroom dynamics", but people born and raised in this very country can't learn in its classrooms? It's all just rationalization for the evil straight white men being more intelligent and capable than other non-Asian groups.
Nobody who uses the term "heteropatriarchy" is operating scientifically, it's all ideology.
I still don't see how this is about math specifically, rather than school in general.
(Or could you just replace the word "math" with any other subject, and give the same talk at any other conference? That certainly is a way to increase one's publication count...)
I haven’t seen the presentation so I don’t know what the authors are saying. But there are all sorts of reasons why different subject matters cause different sorts of classroom dynamics. There are specific things about learning how to do proofs in front of other students that are an important part of math classes and no others.
But even if there is nothing distinctive about math here, someone whose work is entirely in math education is likely to limit their claims to math education, even if some of the issues they discuss might generalize to other subjects. If you don’t have semesters or years of observation of physics classrooms, you’re not going to make claims about physics teaching.
You don't think it might be an issue for teaching math if certain groups are inaccurately perceived as more talented than others? Or get additional support?
If your role as a teacher and institution is to give the best possible education to your students, and identify the best potential mathematicians, you need to be aware of structural issues so you can mitigate them.
>You don't think it might be an issue for teaching math if certain groups are inaccurately perceived as more talented than others?
There's no evidence for this. Whites and Asians are ACCURATELY perceived as better. And trying to pretend otherwise can lead only to the kind of racial discrimination as we see in American colleges' affirmative action policies.
>If your role as a teacher and institution is to give the best possible education to your students, and identify the best potential mathematicians, you need to be aware of structural issues so you can mitigate them.
Then do actual science. This isn't science. It's pure ideology. They're not trying to determine what's true, they're advancing an ideological goal (help their ingroup and hurt their outgroup) and will find any justification for doing so. These people will never, EVER accept that different groups have different innate abilities.
Are queers perceived as less talented at math? Never heard about anything like that.
The left will hold up Alan Turing as a brilliant gay genius one moment, and the next they will pretend he doesn't exist to "prove" gays are considered less capable.
The primary determiner of whether you get support or not is how well you do on the test, and whether you are actually trying or seem to be goofing off. Can't see how either of these hard bits of real-world data have dick to do with your color, class, creed, sexual orientation, or any other of the modern points of obsession.
>if certain groups are inaccurately perceived as more talented than others?
What if they are accurately perceived as being more talented than others? Is that an even bigger issue?
This speaks more to a lack of imagination, than to a dearth of culturally relevant pedagogy.
I suppose if you had a specific example (of the impact of sexual orientation on mathematics) you'd probably at least hint at it. So you don't know either.
> It's only if you think that education about *math* is somehow not the same sort of human practice where people think about these issues
And do you think a lot of thinking goes into writing a paper like this?
Yes. It might be thinking that you disagree with, or that you think you can find obvious flaws with, but there is in fact a lot of thinking here.
This isn't science. It's pure ideology. They're not trying to determine what's true, they're advancing an ideological goal (help their ingroup and hurt their outgroup) and will find any justification for doing so.
These are the kind of people who would never, ever accept that intelligence differences exist between races, regardless of how much evidence you provide them. They will shout you down, they will call everyone involved racist, they will call for you to be fired. They will never accept something so harmful to their narratives. This is what ideologues do, not scientists.
I agree that there's thinking there, but there's no truth-seeking: no one's trying to find ways that objectively advance a useful end goal like "we solve Navier–Stokes" or "students understand math better". There's thinking, but it's all about social positioning and jargon-based politics.
I’m pretty sure they are trying for the goal of “students understand math better”.
No, that's nonsense. The goal is to advance their in-group.
Considering the (backwards) progress they've made over the past 50 years, maybe it's time for them to consider whether their focus is really where it should be.
I think we'd all be willing to listen respectfully if there were clear objective results from this kind of stuff. Show me a state or big district that went all-in on diversity and inclusion initiatives and obtained a decade of steadily rising math SAT scores across the board for everybody.
But in the absence of any objective progress after so many years of discussion and money being poured in, I think the people who are footing the bill -- parents and taxpayers -- are quite justified in their increasing levels of skepticism. I suggest it has dick to do with culture wars per se, it's because as parents we know damn well whether the school is teaching our kids reading, 'riting, and 'rithmetic more or less successfully in 2022 than it did in 1972. And the answer is "less."
It's worth asking whether the tools they're bringing to the job are capable of helping them achieve this goal. I am deeply skeptical, but evidence >> models--are there high-quality experiments that demonstrate a lasting positive effect from changing the style of teaching math? Say, something with large n, random assignment, and preregistered comparisons (i.e., not "we did this intervention to raise adult IQ and discovered that it raised math test scores in the 6th grade among Hispanic girls only").
>I’m pretty sure they are trying for the goal of “students understand math better”.
I am actually pretty sure this isn't a goal of a huge portion of these people. It is very clear their MAIN goal is reducing the disparities between what they perceive as homogeneous ethnic groups based on skin color. Not educating people.
And BY FAR the easiest way to reduce disparities is to cut down the tall poppies. If you don't think this is the case, I would encourage you to actually listen to any group of educators or any school district that is all fired up about this and look at their actions.
Which often involves: 1) gifted and talented programs getting eliminated, and 2) even the general curriculum getting easier, with lower expectations.
But I wonder if it need be. A friend who is head of maths at a large school in London got fed up with the low proportion of girls studying maths beyond age 16 (in England, that's the point where, if you are on the standard academic track, you drop down to three or four subjects), set out to change it, and over a few years saw a pretty sustained rise. It certainly feels to me that that's a genuine problem being addressed - and so yes, it's a pity if impractical jargon-based nonsense gets in the way here.
There's a very big difference between gender difference in interest in mathematics and racial difference in mathematical ability.
Men and women do not differ in mean intelligence, but races significantly do.
The survey has a question asking whether insufficient 'extra content' is a reason for not becoming a paid subscriber. This confused me because I wasn't really sure there *was* extra content beyond open threads. I see now that there definitely is, but I had to look at the archives to figure this out. Possibly this just means I was inattentive, but I think there just isn't much indication, if you're a free subscriber, that there are any nonfree posts at all.
(I'd guess one typical history goes like: 1. Subscribe at free level, intending to evaluate later whether a paid subscription is valuable or indeed any different than a free one. You are helped in this by the assurance that Scott's doing just fine in his deal with Substack and doesn't need the money. 2. Nothing at all reminds you to reconsider #1. 3. ....)
To prove there are people who would happily subscribe but haven't yet, I've subscribed for a year.
Scott didn't want to annoy people by making paid-only posts take up the page for non-paid people - this goes back to SSC's Problems With Paywalls.
He does say in every Open Thread that 95% of the content is free.
... yep, he sure does. Which I must have read multiple times but completely forgotten in milliseconds, apparently. Sorry.
I love the idea of being able to put in my best writing samples to an AI for learning, and then only have to give it a list of key points and phrases before it can turn those into an essay or paper in my style of arbitrary length. I've already done the important part of the paper before submitting it, and if I want to tweak its style I can just give it more writing samples to play with.
How should I refer to Scott Alexander in comments here? Should it be "Scott Alexander", or "Alexander", or "Scott"?
What's the best way to refer to the author of a website when discussing them? My instinct has always been that formality and distance are obstacles to conversation, and that you should refer to people as they'd prefer to be addressed unless you're in a formal context, and that most people think of themselves by their first names rather than their surnames. But I recently heard that this is not standard in the US, and using someone's first name before they've invited you to, or without some kind of reciprocal relationship with them, is presumptuous. Is there a standard on this?
Complicating this, I understand that Scott Alexander isn't his full name.
I'm a little surprised by the question in this informal day and age. We're all on an immediate first-name basis in America anyway. Scott's writing style is very conversational, so for me anyway calling him anything but Scott would seem stilted.
For what it's worth, I read something referring to Scott as "Siskind" and it seemed kind of hostile to me-- something like childishly referring to someone by the part of their name they prefer not to be known as in this context.
Sneer Club? I read it so I know what criticisms of the rationalist community are, and they are fond of that.
No, I don't read Sneer Club, but something from there might have been quoted here.
My impression is that their criticism is compulsive and low quality, but I haven't read a lot of it. Have you found some good criticisms there?
1. There is a lot of genuinely low-quality sneering, most of it centering around the idea that our esteemed host and Eliezer Yudkowsky are bad people and everything they do should be discounted as a result (I agree with neither A nor B, of course).
2. They have some genuinely good Marxist or SJW points about rationalists being blind to class and privilege impinging on what they think. I am neither a Marxist nor SJW, but even your enemies are right sometimes. And sometimes it's just a matter of 'there are tradeoffs between the welfare of group A or B, I see your point but I am in group A and you are in group B'.
I've pondered this. Elsewhere, when referring to a blog author's post or writing to another poster, I've used initials (e.g. SA). Same if we're discussing a famous person cited in a previous post. It seems to me a neutral, clear and efficient notation.
Initials are never really clear. I see SA I think Something Awful. Google gives me Samarium, the Salvation Army, and the Societe Anonyme.
Even if we limit it to "writers named Scott who are particularly famous among nerds", there's Alexander, Aaronson, and Adams.
There's always "Our esteemed host", or "The Rightful Caliph".
Yea, well, I'd say it depends on context to make it work, and if several regulars start to do it then it becomes 'common knowledge'. Also, it's similar to how less common acronyms are used in a paper (which is essentially what initials are): the full name once, early in the writing, and then the acronym thereafter.
However you want! Though probably you should shoot for what best conveys information to the people you are trying to communicate with.
Thanks everyone, I’ll probably stick with Scott in future.
I’m still unsure about generalising to other websites. On the one hand, this isn’t a typically formal situation, but I’m not sure it’s unambiguously an informal one, either. I’m essentially pestering Scott at work here; this is his job, and I’ve never met him. Another Scott, the radio presenter Scott Stephens, has said that you can’t know someone unless you can smell them - that is, unless you can have a private one-on-one or small-group conversation with them. Neither Scott has ever smelled me.
I'd go with whatever name is used in friendly comments.
Since nobody else has mentioned it and Scott's officially okay with it: his full name is Scott Alexander Siskind.
So now he's going to get confused with statistical software?
I don't understand.
I recall in about 1986 getting a handwritten note from the senior tutor at my Oxford college addressed to "Dear Smith", where Smith stands for my surname. This usage was, I discovered, almost dying out, but it had been a common English trope for friendly informal correspondence, used by academics, senior civil servants, I dare say Bishops. "Dear Mr Smith" was too formal; "Dear Peter" unthinkably forward.
It's almost always just Scott. Occasionally there's a good contextual reason to (mostly satirically) use "Dr. Alexander". Infrequently one uses the full Scott Alexander to differentiate him from that other prolific Jewish rationalist writer, Scott Aaronson.
Other blogs tend to follow the same conventions...Matt Yglesias is just Matt on Slow Boring, Zvi Mowshowitz is just Zvi on Don't Worry About The Vase, Freddie de Boer is just Freddie on his blog (which for some reason lacks a punchy name, I've always found that odd), Bari Weiss is just Bari on Common Sense, Bryan Caplan is just Bryan on <s>Econlib</s> Bet On It. And so on. Oftentimes you can pick it up from the content too - authors will refer to themselves a certain way, and that tends to reflect the comments. It's less about formality and more about reference classes...there are a million Scotts in real life, and (probably) thousands of Scott Alexanders, but within the blogosphere, a much narrower subset of blogging Scotts.
When commenting on someone's blog, I think it's standard to address them by first name only, and to refer to them in the third person by first name only, unless there are other people being discussed with the same first name.
When cold e-mailing someone, it's more standard to start with their title and last name (e.g., "Dear Dr. Alexander," or "Dear Prof. Easwaran,") and then in future replies, to address them however they signed their previous e-mail to you (i.e., with first and last name, or title and last name, if they signed with first and last name, or to switch to first name only if they signaled that this was appropriate by signing with first name only).
Talking in person, you would start with title and last name, unless they explicitly tell you it's ok to move to a first name basis (which is usually something that the higher status person has to initiate).
This depends a lot on conventions within a field/subcommunity. I hang around two different prominent universities in my field--in one, the high-status people are referred to by first name and if you say "Dr X" or "Professor X," people will tell you "we don't do that here." In the other, the high status people are "Dr X" or "Professor X," and as an outsider, how I address someone is part of establishing (or trying to establish) my own place in the pecking order.
As an American--in my experience, using someone's last name instead of their first name is primarily done in formal writing, e.g. in a newspaper article or longform nonfiction. People in customer service roles might refer to clients/customers as Ms/Mr XYZ to convey politeness, but even that is a bit old-fashioned.
There might be some professions where this isn't the case. Maybe law? Law is weird.
As for Scott--I think it's just convention in the comments section here to call him Scott. Using just a first name seems pretty typical for comments sections specific to particular writers or creators. E.g., to pull the first example that comes to mind, Bernadette Banner's YouTube comments typically refer to her as Bernadette. Now I'm wondering how speakers of other languages do this, and if it's indicative of some parasocial relationship.
I don't comment here much, but I do participate in places where Scott's work is discussed and I (slightly reluctantly) say 'Scott'. I prefer to use a more detached tone when speaking about people's ideas and use last names for intellectuals more than most people do, but I find when I say "Alexander" it sounds like I'm just confused about who I'm talking about.
> I recently heard that this is not standard in the US, and using someone's first name before they've invited you to, or without some kind of reciprocal relationship with them, is presumptuous.
This is very old-fashioned etiquette. People use first names in almost all interactive contexts these days.
When I'm discussing another person _in not-too-formal writing_ I will sometimes deal with the "first name is too informal/familiar/friendly, surname feels weird" problem by using the full name once and then their _initials_.
The thing I am most concerned about there is mostly not the informality of using their first name, it's the possible perception that you're viewing this person as a friend/ally and therefore shouldn't be trusted to be impartial. In many contexts this doesn't matter, but if e.g. there's some discussion about some probably-hostile thing El Sandifer wrote about Scott Alexander, I _am_ going to be trying to be impartial and I will call them ES and SA in order to reduce the risk that someone thinks/feels "he's writing about X as if X is a personal friend of his, so I should assume he's biased".
(If I were writing about someone who _is_ a personal friend of mine, I would say so and would use their first name throughout in order not to mislead, unless the relevant stylistic norms forbid that as e.g. in academic publishing.)
"Some talk of Alexander, and some of Hercules" 😁
https://www.youtube.com/watch?v=_DUiMASYGQs
I wonder if people don't use "Mr." anymore mainly because, once women became a major part of the professional workforce, it was too much trouble to figure out whether to address them as "Mrs.", "Miss", or "Ms." (particularly when the correct title is "Dr.").
Also wonder if pronouns will suffer a similar fate for the same reason.
My otherwise-socially-conservative feminist mother, at least, regaled me with stories about how getting folks to agree to drop "Miss" in favour of "Ms" was a victory for equality, back in her day.
Though I'm told that in the rare instance Gen Z is called upon to say "Ms." (e.g., the Ms. Marvel tv show), they're surprisingly likely to pronounce it "Miss".
And back in the 90s I was involved in a wedding where the invitation list for the bachelorette party addressed the single women as Ms. and the married women as Mrs., which struck me as kind of missing the point.
As an elementary schooler in the early 2000s, I was taught to refer to adult women this way. I didn't learn that "Ms." and "Miss" were separate things, rather than different spellings of the same word/idea, until college.
It sure did. Oh well, sorry Gloria Steinem.
Everybody here seems to use Scott. Elsewhere he's referred to as 'Scott Alexander', for reasons of disambiguation.
For reasons involving the NYT article and doxxing, his *last* name (not mentioned here) is used only by his enemies. So never use that.
"For reasons involving the NYT article and doxxing, his *last* name (not mentioned here) is used only by his enemies."
I was not aware of this. Some questions - I'm being sincere. Not assuming you (or anyone else) will answer all of them.
Does the fact that I've used his full name (since reading Still Alive) mean I'm one of Scott's enemies? If not, who are Scott's enemies and what makes a person Scott's enemy? Where has this distinction been formulated? Do Scott's enemies agree with it, and do they identify as his enemies? Does Scott agree with it, and more importantly, does he agree with your suggestion?
What purpose is there to classify people as Scott's allies or enemies, other than the obvious tribe-forming, tribe-strengthening behavior?
What sense is there in tabooing his full name, as he has publicized it himself, deleting and recreating his entire blog during the process? Isn't letting (assumedly) malicious people appropriate the use of his full name counterproductive?
There is the Rightful Caliph, and there is the Reign of Terror. That is all you know, and all you need to know.
Oh, that I can live with.
If you use a name on the internet, I am not going to investigate whether it is your real name or a pseudonym. Not even because I respect your privacy (although I do), but simply because I do not care. If someone signs their work "Scott Alexander", then for all practical purposes, that person *is* Scott Alexander. A name is just a reference anyway.
Some people become famous under pseudonyms, like "B. Traven", or "Student" https://en.wikipedia.org/wiki/Student%27s_t-distribution
Why do I call him "Scott Alexander" when his full name was already made public? First, because this is how I have referred to him for *years*, so I am already used to it. Second, because this is how he still signs his articles. Third, because many other people call him that (for the first two reasons), so if I say "Scott Alexander", people I communicate with immediately know who I am referring to. Three reasons to use the pseudonym, and zero reasons to use the real name.
(Fourth, because not using someone's name has a kabbalistic significance. This is an inside joke. Read https://unsongbook.com/ to find out more.)
In the past, it was possible to figure out Scott's real name, but Scott asked people not to use it publicly, because doing so could have hurt him. Predictably, some assholes did it anyway, and NYT decided to go nuclear.
Since then, Scott has significantly changed his life, so the old reasons not to use his real name *now* no longer apply. (Also, anyone can find it on Wikipedia now.) So there is no harm if you use Scott's real name today. Now we avoid it mostly out of force of habit.
"Since then, Scott has significantly changed his life, so the old reasons not to use his real name *now* no longer apply. (Also, anyone can find it on Wikipedia now.) So there is no harm if you use Scott's real name today. Now we avoid it mostly out of force of habit."
This seems to be a reply to my comment, but I'm not sure whether it is directed at me particularly. However, I agree with the gist of it, especially the quoted paragraph - though it did not add to the view I already had. That is, I wonder whether the fact that I questioned [the idea that Scott's last name should Never Be Used] gets interpreted as me saying there never was an issue with people doxing him (which, of course, I haven't so much as hinted at).
You're overthinking this pretty heavily.
>Does the fact that I've used his full name mean I'm one of Scott's enemies?
Of course not, that's ridiculous.
The heuristic of the commenter you're replying to was "using Scott's last name exclusively", which using the full name isn't. And I would say even that heuristic is pretty flawed; I can see where it's coming from, but I don't think it performs any better than random chance.
>If not, who are Scott's enemies and what makes a person Scott's enemy?
The commenter you're replying to is using "enemies" lightly in the sense of internet assholes.
- Who are those internet assholes? Who knows; the internet is anonymous, that's why it's so heckin awesome. I once searched for Scott's name ("Scott Alexander") on tumblr and saw a deluge of posts raging about how he "platforms" (God I fucking hate this word) racists and misogynists and blah blah blah. No idea if you can still find those if you searched now; that was a couple of years ago.
For a pretty reliable dumpster of Scott's (and rationalism's in general) enemies, see reddit's "Sneer Club" subreddit. Another one is MeFi, or MetaFilter, a link-aggregator site sorta like reddit or hackernews (and it's pretty good actually, assuming you don't go near any woke link). It has a huge woke bias, so every Scott link is probably them raging about how he "PLaTFOrM"s nazis. Another one is hackernews itself, though thankfully it's far more contrarian and heterogeneous on most issues than either reddit or MeFi, but that's exactly why you will occasionally see the incessant whining.
- What makes someone Scott's enemy? No idea. Scott is really the rare "mild man" type; I'm lucky to have someone like him in my personal life (though unfortunately not for long). It's extremely hard not to at least listen to what he says and then disagree very respectfully, if you can muster the guts to disagree at all. He commands by sheer politeness.
Scott's enemies are varied; they are anything from the woke left wing to the extreme king-cock-sucking right wing. It's hard to say anything generalized about them. I assume the first woke ones were Feminists, all the way back in 2015 or so, for reasons that you'd better not read about, for your own mental health. Oh, good old feminists, always blazing the way in being assholes in all sorts of new and exciting manners. No idea where the first right-wing ones came from; I see plenty of them being sour because Scott steelmanned some of their arguments and then counter-argued them to the ground again, but I'm not sure this is the sore spot.
>Where has this distinction been formulated?
In the brain of anybody who sees a dedicated group of people consistently moaning about one person for a decade or so.
>Do Scott's enemies agree with it, and do they identify as his enemies?
Nobody wants to be seen as the loser who keeps raging about someone who lives rent-free in their head; they will probably say something to the effect of "Scott Alexander is a representative of the crypto-fascist tech-industry silicon-valley tech bros who want to roll back abortion rights and the rights of the Blax communities of color", and in their own system of knowledge and ethics, this counts. But to most other people who are attracted to rationalism, it doesn't.
>Does Scott agree with it, and more importantly, does he agree with your suggestion?
No idea, and no idea. _I_ would say Scott *definitely* agrees there are people who hate his guts, however (un)charitably he internally justifies or models their behaviour. He also seems pretty sour about the NYT thing. But whether he thinks calling him by his last name is a marker of belonging to this group is much more open to debate.
>What purpose is there to classify people as Scott's allies or enemies, other than the obvious tribe-forming, tribe-strengthening behavior?
Classification is life's eternal hobby, and intelligence's first baby step. Every AI course begins with a baby-mode "Cat or Not" classifier. Even bacteria classify their pond water into "rich in sulfur" and "not rich in sulfur". If you're *any* sort of goal-having, goal-seeking agent, you have to classify things. Friend-(X)Or-Foe is a pretty vital distinction. How can you even prevent yourself from noticing that there are consistent markers for a group of people who don't give a flying tick about any of your ideas, and dedicate an unhealthy portion of their existence to hating you?
> Isn't letting (assumedly) malicious people appropriate the use of his full name counterproductive?
Absolutely (and I will still correct you that the guy or gal you're replying to only said "last name exclusively", not the full name). But also, it's good and wholesome to know what upsets the people you respect and then not do it in front of them. If a bunch of assholes keep calling your bespectacled, red-haired friend a ginger four-eyed nerd, then - unless you're so intimate with the guy that you can single-handedly redeem the words - you'd better stay away from using those words in front of him. There is nothing inherently taboo or insulting about "ginger" or "nerd"; they're just associated with people who make it clear they hate the guy they're describing.
That's all assuming the original commenter's reasoning, which I find a bit of a leap. Calling Scott by his last name is a bit weird, for sure - I have never seen anyone, including himself, call him that - but it wouldn't even count for a 0.001 in my calculation of whether someone is one of Scott's enemies.
Thank you for replying and putting in effort -
I can understand your POV. A couple of points besides that:
"(and I will still correct you that the guy or gal you're replying to only said "last name exclusively, not the full name)"
They said this:
"For reasons involving the NYT article and doxxing, his *last* name (not mentioned here) is used only by his enemies. So never use that."
I did not gather from this the 'last name exclusively' part, particularly not the "not the full name" part, and I apologize if I missed a very relevant part of the question. However, I can also ask the same questions about using only his surname, which I have also done, not knowing that some people who are not Scott use this as a marker to classify me as one of his enemies, that is, apparently as a random asshole on the Internet.
"But also, it's good and wholesome to know what upsets the people you respect and then not do it in front of them."
I wholeheartedly agree. I seriously question this point's relevance to using Scott's full name, though. Has he asked to be referred to primarily by his pseudonym? The situation has markedly changed from the days of SSC, precisely because he is no longer (relatively) anonymous.
"Calling Scott's by his last name is a bit weird for sure"
I routinely refer to celebrities by their last name. That is, I talk about Yudkowsky, Hanson, *******, Tao, Pele, Einstein and Trump, not Eliezer, Robin, Scott, Terry, Edison, Albert and Donald. I think this is normal.
It's just a signal, like "Barack Hussein Obama" or "Donald Drumpf" -- sure you could use those even if you supported the person in question, but due to the peculiarities of history people may assume that you don't like them.
Donald Trump's last name is not Drumpf, and while Barack Obama's middle name really is Hussein, leading with it was a deliberate hostile signal rather than ordinary usage. Scott's last name is what it is, and he personally celebrated it in ACX's opening post. I do not think this analogy works.
I think we disagree about our understanding of "last name"; the one I have in mind is "Siskiend" (I've butchered the spelling, I know), while the one you have in mind seems to be "Alexander"?
I mean, I'm okay with both. I have made it clear that the point is mostly unconvincing to me, I'm just playing the role of the original commenter's advocate, that's all.
>I did not gather from this the 'last name exclusively' part,
Yes, I realized this just now, I agree it can be understood both ways, perhaps even favoring your way.
> classify me as one of his enemies, that is, apparently as a random asshole on the Internet.
I doubt you should care about the opinions of someone who uses a single word to deduce something like this about you, eh?
>*******
Seriously, it's okay. Say it: his name is Scott Alexander Siskind; there is no taboo about it, and you don't have to star anything. The commenter that started the thread just shared a (quite frankly) flawed heuristic, but people are not dumb algorithms. If you're on Scott Alexander Siskind's blog writing comments that are not 51%+ insults of Scott Alexander Siskind, you probably don't hate Scott Alexander Siskind, or Scott, or S. Alexander, or SSC, or Siskind, or S.A., or S.A. Siskind, or Unsong's author, or the Jewish Author That Loves Writing And Is Enviously Good At It, etc. I can't speak for Scott, but those all strike me as pretty reasonable aliases for him.
"I think we disagree about our understanding of "last name""
I'm referring to the ACX writer's (Scott's) family name, the one the NYT threatened to publish and eventually did.
"I doubt you should care about the opinions of someones who uses a single word to deduce something like this about you, eh ?"
Absolutely. Let me emphasize that I do not feel that the OP's comment had anything to do with me personally, but I did use myself as an example, mostly to head off the possible if unlikely line of "Even if you don't see how using his last name is offensive - well, let me point out that you don't use his last name, but guess who does? Evil enemies!" (yes, I'm also kidding a bit).
Earlier you said that I'm overthinking this, and that the commenter I replied to used the word 'enemy' lightly. That is exactly the relevant part, and what I'm opposed to: the cheap usage of the word 'enemy'. I agree that my comment represents overthinking, and I wholeheartedly believe that whenever the word 'enemy' is used, one absolutely should overthink the case. This - being extremely wary of groupthink, halo effects (negative or positive), ingroup-outgroup biases, etc. - was one of the first things I learned when reading the Sequences, and I think it is a very valuable idea.
That is, I consciously decided against a charitable interpretation of their words due to the (psychological, social, cultural) risks associated with using allies and enemies as a framework for thinking. I do not think this is the place for that sort of language (I do NOT mean that words like 'enemy' should be banned or something equally stupid).
I agree with several things you said, and I agree with the last paragraph.
Petty and mostly unimportant point, but "doxing" has only one x. I'll cite RMS himself on this one.
https://stallman.org/doxing.html
(Also, while I can understand why using his real last name might still be "outgroup-coded," Scott openly uses it in "Still Alive," this blog's second-most popular post, and in his Lorien Psychology project, so I don't think it really has to be kept secret anymore)
I don't really care much about this spelling variant, but if you're arguing "doxxing" is incorrect, I'll argue back.
A quick search reveals no references to this supposed letter "exx" other than by Stallman, so if it does exist, it's at least pretty obscure. The mispronunciation of "doxing" as /doʊksɪŋ/ seems much more likely than the mispronunciation of "doxxing" as /dɑxɪŋ/. Avoiding digraphs with other meanings does not seem like a rule English actually tends to follow (e.g. "cooperate", "lighthouse"), but doubling final consonants in cases like this when adding a suffix is. I have read that the letters "x" and "v" have historically often been exempted from this rule because people thought the doubled forms looked bad, with this exception becoming less common lately, although writing this now I'm doubting my memory/source for that. Anyway, my main point is that consistency is a better reason than most to prefer one spelling (or other linguistic feature) over another, and that points to preferring "doxxing". (Also, apparently it's sometimes spelt "doxx" even without a suffix, but I don't see any particular reason that would be preferred.)
It's spelt with a double-X because it's Internet slang, and playing with English spelling "rules" is a common Internet game. Wikipedia (which prefers "doxing", FWIW) claims that the original spelling was "d0xing": https://en.wikipedia.org/wiki/Doxing
I think RMS is joking there: nobody would write "yech" as "yexx", and the standard way to render the Greek letter chi in English is "ch", as in technology, chirality, archaeology, etc. Besides, language is defined by usage, and "doxxing" is at the very least a widespread usage.
Edit: Also, "Exxon" is not pronounced "Ecchon", and doesn't derive from a Greek word! It was a computer-generated variant of their previous name "Esso", which is derived from "S. O." for "Standard Oil", and is still used in much of the world. The Standard Oil Company of New Jersey switched from "Esso" to "Exxon" after other Standard Oil descendants sued to stop them using the "Esso" brand in their territories: https://en.wikipedia.org/wiki/History_of_ExxonMobil#Exxon_Corporation_(1973-1999)
Further, it's my understanding that they deliberately chose the 'xx' spelling of Exxon *because* this pretty well guaranteed that no one was using that name or word in any prior sense anywhere.
That sounds very plausible!