How often do Antarctic tourists experience cardiovascular symptoms? How much time does it take for them to manifest? What changes occur in the circulatory systems of the winterers (who stay there for a year), compared to people who only make a short visit? Is there at least some difference between the Maritime Antarctic and East Antarctic research bases (it's colder and drier in the EA)? What about the U.S. Amundsen-Scott Station at the South Pole?
1. Cold exposure increases cardiovascular risk in the short term.
2. But I also know that cold exposure increases basal metabolic rate. This *might* prevent obesity and have a longer-term protective effect against cardiovascular diseases, if people don't fully compensate by eating more.
Which effect is bigger? It doesn't look like 2 has been researched very much.
"Homeotherms maintain an optimal body temperature that is most often above their environment or ambient temperature. As ambient temperature decreases, energy expenditure (and energy intake) must increase to maintain thermal homeostasis. With the widespread adoption of climate control, humans in modern society are buffered from temperature extremes and spend an increasing amount of time in a thermally comfortable state where energetic demands are minimized. This is hypothesized to contribute to the contemporary increase in obesity rates. Studies reporting exposures of animals and humans to different ambient temperatures are discussed. Additional consideration is given to the potentially altered metabolic and physiologic responses in obese versus lean subjects at a given temperature. The data suggest that ambient temperature is a significant contributor to both energy intake and energy expenditure, and that this variable should be more thoroughly explored in future studies as a potential contributor to obesity susceptibility."
>> "How can we get food to them?" Chomsky told YouTube's Primo Radical Sunday. "Well, that's actually their problem."
I would have gone with "Doordash" there, but I share the general sentiment with regard to mandates. If you're trying to portray yourself as "taking a stand", you shouldn't turn around and play the victim if you lose your job.
At both a practical level and a meta level, this kind of discussion bothers me.
At the practical level: I don't think the Federal Agencies that are publishing vaccine mandates have the Constitutional authority to do so. It is possible that State Governors have this authority for those people employed by or attending State-mandated/State-run/State-funded institutions. (Here is where compulsory education gives room for the State to support public health via vaccination of schoolchildren: the State has the authority to put strong demands on the health status of students in State-supported schools, and has the authority to compel attendance at some school. At least, this is how it works in the United States.)
At the meta-level, let's do a thought experiment: if your favorite political boogeyman (a rightist who hates organized labor, or a centrist who gives scientific reasons for her desire to limit abortion, a leftist who wants to ban guns, or a weirdo who hates a particular ethnic group) gets into power at the Federal level, can he use this authority in a way you would find abusive?
Imagine the Federal Government using anti-terrorist mandates to severely restrict jobs for people who have ever given money to a foreign terrorist organization.
The supporters of this cause would point to children who died when the last terrorist exploded an IED at a public event, and say that terrorism can cause a public-health crisis. People who support terrorism should "remove themselves from the community".
Are you comfortable with this kind of thing? It's the government saying that certain people are unemployable, and forcing businesses or non-profits to not employ/associate-with certain Bad People.
If you are not comfortable with that, why are you comfortable with the current push to use vaccination to limit access to jobs?
>Are you comfortable with this kind of thing? It's the government saying that certain people are unemployable, and forcing businesses or non-profits to not employ/associate-with certain Bad People.
I'm definitely not comfortable with that, but I also don't think that's what's happening here. Being non-vaxed isn't a thing you *are*, it's a thing you're choosing. My job has had a flu shot mandate for a decade. I knew one guy who grumbled about it but no one who histrionically quit over it. (Or, technically, histrionically complained about oppression while refusing to get their shot until they were canned.)
If the gov't wants to mandate that a class of people, say, middle-aged white men, are BAD, and therefore shouldn't be hired, then sure. I'm all in against that. But this isn't that. This is mandatory flu shots. I've been fine with that from my employer, and I'd have been fine with it even if the mandate was coming from one of the levels of government over my employer. I don't get the objection here.
No, but for a different reason. Political identity (like religious identity) is specially protected in our society. In the hypothetical above, it was asked if it would be ok to ban someone who had given money to terrorists from working. The answer is clearly no. We let people who belong to terrorist organizations run for office. You're correct that it's chosen, but it's a choice we've decided to protect very strongly.
Vaccines are different. Again, we have a history of how we do this, and we've always been fine with mandating this shot or that shot for school or work.
>> At the practical level: I don't think the Federal Agencies that are publishing vaccine mandates have the Constitutional authority to do so.
Unless/until there's some judicial intervention, I'll just assume they do. SCOTUS hasn't gotten involved yet.
>> Imagine the Federal Government using anti-terrorist mandates to severely restrict jobs for people who have ever given money to a foreign terrorist organization.
Isn't there already a law against that? I think they'd have much bigger problems than losing a job.
> Are you comfortable with this kind of thing? It's the government saying that certain people are unemployable, and forcing businesses or non-profits to not employ/associate-with certain Bad People.
This mandate only applies to companies of a certain size, I believe. So the nonvax'd who wanted employment can look for remote jobs, or ones with smaller work forces, or start their own business/whatever.
I also look at the flipside - essentially what they're fighting for is the right to have a greater probability of transmitting COVID to their fellow employees if they have it. I don't see that as selfless or brave, to put it mildly. If you're going to "take a stand", don't be afraid of the consequences.
A friend has to write a Bachelor thesis which is kind of a literature summary on semantic text understanding through AI. I know almost nothing about that field. (The description was also very vague about what kind of things have to be understood.) Are there any generic helpful pointers for approaching this?
Probably wants to research NLP (Natural Language Processing). This is a book I read, though it's for R, and most people use Python (it's survey-ish, though):
It's complicated, but there are a ton of 'simplifying' tutorials. Essentially what the algos do is first convert a corpus of words into vectors, then impose some type of metric relating them (these are the embeddings or representations), then train for a given task like Q&A, completion, sentiment analysis, whatever. So search terms might be like "NLP", "transformers nlp", "word embeddings", "nlp words to vectors". I guess one could describe the general workflow pipeline, then point out different approaches at the different steps, idk.
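If it helps, here's a minimal sketch of that pipeline in Python with scikit-learn. It uses TF-IDF vectors as a simple stand-in for learned embeddings, and the example texts and labels are invented for illustration:

```python
# Minimal sketch of the workflow described above:
# corpus -> vectors -> train for a task (here, sentiment analysis).
# TF-IDF stands in for learned word embeddings; texts/labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I loved this movie, great acting",
    "Terrible plot and worse dialogue",
    "A delightful, funny experience",
    "Boring, I walked out halfway",
]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# Step 1: convert words to vectors; step 2: train for the given task.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Likely predicts [1], since this shares words with the positive examples.
print(model.predict(["great acting, loved it"]))
```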
One thing I found interesting in the discussion of hypothermia (and deaths due to extreme temperature) was the quick turn to one of two explanations: wealth/poverty of a region, or genetic adaptation.
I find this pair of explanations lacking something important, something mentioned in Scott's review of the book "The Secret of Our Success."
The skills needed to survive a cold night (without suffering hypothermia) are the type of skills that people learn from the culture they live in.
In the parts of the world that don't deal with extreme cold on a regular basis, the local culture may not remember the tricks used to survive a cold night. Even if that cold night is only 10 degrees C, in a climate that usually has overnight temperatures around 20 degrees C.
In cold regions of the world, the local culture may not remember how to deal with the occasional bout of extreme heat. Thus, members of that culture may be at risk of heat-stroke in scenarios that other cultures think of as a hot-but-survivable day.
The wealth of the industrialized world gives us a different cultural answer to the problem of keeping warm (or cool) when weather is extreme. That cultural answer is less than a century old: buy a better heater/air-conditioning-unit for your house. Find a building/car with AC. Buy a cold drink, or a cup of hot chocolate. Find a better jacket.
Within the past few centuries, we've seen many teams of explorers attempt to map poorly-known regions of the planet. In most cases, those explorers were interacting with economically poor natives in the area of exploration. (Think of Livingstone in the heart of Africa, or explorers trying to find the Northwest Passage.)
Generally, the natives were poorer than the explorers. But they were better at living off the land, and probably equally good at surviving extreme weather typical in that area of the world.
It is definitely true that a wealthy culture provides easy-to-use ways for all members of that culture to survive harsh weather. But it is also true that wealth, by itself, isn't the only factor in helping people survive harsh weather. Cultural knowledge is a huge factor.
As we see with the discussion of hypothermia: a wealthy culture can lose common-knowledge-level of information about dealing with cold temperatures. Even if that knowledge was common in that culture a century or two ago.
I realize I say this as someone who lives in a cold climate, but it doesn't seem like it takes evolved cultural expertise to know that on a 50 degree F (10 C) day you need a jacket and a sweater, and at night you just need a couple of good blankets.
Maybe these places simply don't have those things, or the money to buy/produce those things on short notice; and maybe if you're hit with 50 degrees after months of hot weather, your body isn't ready to survive 50 in light clothing. That's possible, and tragic. But it doesn't seem like it requires evolved wisdom or long-term preparation or even mechanical help (heaters) to survive 50 F/10 C.
Per the discussion of hypothermia: at night, you need a couple of good blankets and an insulating layer between yourself and the ground. The blankets may help provide that layer, but going without it is a really foolish choice.
This is the kind of cultural knowledge I'm focusing on.
On the broader front: I was thinking about whether "Richer cultures survive temperature extremes better than poorer ones" applies in all scenarios.
When the richer culture is sending explorers into the heart of Africa, or sending explorers to find the Northwest Passage, there isn't much apparent difference in survival of temperature extremes between the rich explorers and the poor natives.
Per the article posted by Scott, there is a noticeable difference between rich explorers and poor natives in acquiring food from the local environment.
What I'm noticing is that Industrial-age people (or Western, Educated, Industrialized, Rich, Democratic people) tend not to notice how much cultural knowledge about extreme temperature has been washed away by "that is no longer necessary to remember".
I notice that lots of people touched on this, but didn't seem to realize that this kind of cultural knowledge is something Scott blogged about before.
I just wanted to say "thank you", to everyone who came to the Cambridge meetup. Scott, thank you also for being there. It was great to meet you all. I am a longtime reader of the ACX and SSC blog, and it has had a big impact on my life, especially over the past few years. The biggest effect I have seen is that it has raised my own expectations of myself. I try (much of the time I fail, but I am always trying) to reason my way through problems in a rational way. I have found this a really valuable approach in my life when making decisions/ deciding how to think about new ideas, as well as when interacting with other people (and particularly those I disagree with). I want to express my gratitude that this blog and this community exists. Thank you all.
I know this corner of the internet has a certain affinity to "optimized internet food". I'm in need of a state-of-the-art recommendation. I'm looking for something that can take me through the day with as little preparation, timing and other ceremony as possible (and can be ordered on the East Coast, but that shouldn't be a big problem?). If it helps keep me awake, that's a bonus, but all is fine as long as it doesn't actively make me sleepy.
I don't mean to change my general eating habits; I need a temporary fix for the remaining 2 months of crunch time on my job.
Well, I assume you're familiar with them given the phrase in quotes, but in case you're not, mealsquares aren't bad and they're legit unwrap and eat.
For a slightly more complex solution, beans and rice is pretty good. You can do up a big batch of beans and rice at the beginning of the week and eat it for lunch and dinner. It doesn't have to be heated up, and can be dressed up for variety by sauces and mix-ins. Assuming fridge access, you can go from hungry to eating in a couple of minutes, and in combination beans and rice is a complete protein, so it's not bad for you. It's also very cheap, if that's wanted.
There's a song I like a lot in Russian whose lyrics purport to be a translation of a W.J. Smith poem. (Depending on the website I look at, sometimes the claim is that it's an original invention, but it being a translation would make better sense to me.) I'm looking for the original poem.
It's about a dragon who lives in a tower (as dragons do), and, being bored there, plays the violin. He is visited by a princess, who scolds him but then they're reconciled and get married. He explains that he's fed up living in a tower like a dragon, and instead would like normal domestic life. He also says that the princess shouldn't be afraid of the dragon who lives beyond the marshes, because if he ever gets rude with her, he (the dragon) will tell him to leave and he'll go. (Yes, it's supposed to be ambiguous whether there's really two dragons involved.)
Anyone have any idea what the original poem might be?
I'm looking for a learning resource on how to test hypotheses using (frequentist or Bayesian) statistics. Basically: given a hypothesis, how do I then use data (e.g. samples) to refute that hypothesis?
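To make the question concrete, here's the flavor of thing I mean, sketched in Python with scipy on simulated data (purely illustrative):

```python
# Frequentist example: can we reject the hypothesis that two groups
# share the same mean? Data is simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=50)  # true mean 10
treated = rng.normal(loc=11.0, scale=2.0, size=50)  # true mean 11

# Null hypothesis: both samples come from populations with equal means.
t_stat, p_value = stats.ttest_ind(control, treated)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (say < 0.05) means data this extreme would be
# surprising under the null, so we reject it at that level.
```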
We just had a realspace South Bay meetup (Sunday, October 24th). Thirty to forty people attended. If infection rates continue to fall, we plan to do another one in a month or two.
Suppose that a group of whimsical aliens decide they want to make Saturn's rings prettier so they teleport a new shepherd moon into orbit in the middle of a ring. How long will they have to wait before a new gap appears in the rings: are we talking years, millennia, or astronomical time scales?
The rings are probably moon(s) that disintegrated from tidal forces. There's a distance called the Roche radius that depends on the dimensions and densities of the planet and the moon. Inside this limit, a moon that is held together by its own gravity will break up. Outside it, pieces of floating stuff will aggregate into a moon. Saturn's rings are inside the Roche limit, except for the weak F ring, which is just outside and may not be stable.
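For the curious, here's a back-of-the-envelope sketch of where those limits fall, using the standard rigid- and fluid-body formulas; the ~900 kg/m³ icy-moon density is an assumption:

```python
# Back-of-the-envelope Roche limits for Saturn and an icy moon.
# Rigid-body: d = R * (2 * rho_planet / rho_moon) ** (1/3)
# Fluid-body: d = 2.44 * R * (rho_planet / rho_moon) ** (1/3)
R_saturn = 58_232e3   # equatorial radius, m
rho_saturn = 687      # mean density, kg/m^3
rho_moon = 900        # assumed icy-moon density, kg/m^3

rigid = R_saturn * (2 * rho_saturn / rho_moon) ** (1 / 3)
fluid = 2.44 * R_saturn * (rho_saturn / rho_moon) ** (1 / 3)
print(f"rigid: {rigid / 1e3:,.0f} km, fluid: {fluid / 1e3:,.0f} km")
# Prints roughly 67,000 km and 130,000 km; the main rings (out to
# ~137,000 km from Saturn's center) sit near or inside these limits,
# depending on the density you assume for the moon.
```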
If the moon is solid it may not break up, though objects on the surface can lift off. I would also expect that even a solid moon will eventually start to crack from flexing. It would then break into pieces down to a size that is structurally sound and actually manages to hold itself together.
We can expect the aliens capable of creating or moving moons to know about this.
So maybe your question should read: How long till the moon becomes a ring? The rings themselves are unstable as gravitational interactions will fling stuff up and down but the moons are thought to mitigate this process. The age of the rings is uncertain.
That sounds true about real moons, but we’re talking about aliens teleporting things. What if the moon is a (Saturn-moon-weight) solid piece of tungsten, or maybe a nanomanufactured composite with ridiculous elasticity and toughness, or some kind of magically stable shard of degenerate matter, or maybe even an itsy-bitsy tiny black hole?
It has to cause a disturbance in the rings via gravity, doesn’t it?
OK, here's a ballpark calculation. A small object placed 27,000 km from a 10-metre-diameter ball of tungsten in an empty universe will take about 7.5 million years to make contact under gravity. Most of the stuff in the rings is around 10 cm in size. 27,000 km is the diameter of Saturn's main rings. So that's a ballpark coalescence time.
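For anyone who wants to redo the estimate with their own numbers, here's a sketch of the idealized two-body free-fall version of this calculation. With these inputs it lands around 10^5 years rather than millions; the answer is sensitive to the formula and assumptions used, but the qualitative ballpark (far less than astronomical timescales) holds either way:

```python
# Ballpark: time for a small object to fall from rest onto a 10 m
# tungsten ball under mutual gravity, starting 27,000 km away.
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
rho_w = 19_300  # density of tungsten, kg/m^3
radius = 5.0    # m (10 m diameter)
r0 = 2.7e7      # starting separation, m

mass = rho_w * (4 / 3) * math.pi * radius**3  # ~1e7 kg

# Free-fall time from rest at separation r0 down to contact:
# t = (pi / 2) * sqrt(r0**3 / (2 * G * M))
t = (math.pi / 2) * math.sqrt(r0**3 / (2 * G * mass))
print(f"{t / 3.156e7:.1e} years")  # ~1.9e5 years with these inputs
```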
Black holes may not be a good idea; please size your black hole carefully. Black holes evaporate by Hawking radiation, converting mass into radiant energy. For large black holes this is incredibly slow - like protons of mass/energy per zillion years, taking until the end of time. It gets crazy fast for small black holes. The final million kilograms evaporate as energy in 46 seconds, ending with a flash brighter than the Sun.
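To sanity-check the sizing, here's a sketch using the standard photon-only Hawking lifetime formula, t = 5120πG²M³/(ħc⁴). Real evaporation runs somewhat faster near the end as more particle species get emitted, so treat the outputs as order-of-magnitude figures:

```python
# Order-of-magnitude Hawking evaporation time, photon-only formula:
# t = 5120 * pi * G**2 * M**3 / (hbar * c**4)
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
hbar = 1.0546e-34  # J s
c = 2.998e8        # m / s

def evaporation_time(mass_kg: float) -> float:
    """Seconds for a black hole of this mass to radiate away entirely."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(f"{evaporation_time(1e6):.0f} s")   # last 1e6 kg: ~84 s this way
print(f"{evaporation_time(1e12):.1e} s")  # 1e12 kg: ~8.4e19 s, trillions of years
```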
There's a lot of focus (both in public AI-risk activism and within the community itself) on the failure mode where an AGI gets created with a buggy utility function and turns the universe into paperclips or whatever. There's another really bad failure mode, however, where the AGI gets created by the CCP or whatever with some version of the utility function "make all humans obey Xi Jinping/Putin/<some evil group of people>'s diktats forever".
If making an AGI requires a huge Manhattan project style research effort (rather than being a hacker-in-a-room thing) then IMO having a country/world with liberal politics is a necessary (but not sufficient) condition for creating a good AGI. If you buy this argument, a crucial part of dealing with AGI x-risk is (a) making sure your favorite liberal countries have the leading AI programs and (b) pushing back on totalitarianism world wide.
When we add this to the fact that surveillance AI could lead to a bunch of other x-risk scenarios (e.g., impossible to remove world-wide totalitarianism, counter-value nuclear war with AI guided missiles), IDK... It might be that the summed x-risk associated with all the failure modes widely used surveillance AI creates (including leading to bad AGI) outweighs the x-risk of bad AGI directly.
Well, so what? Articles of this sort don't ever seem to contain any actionable proposals, just one more dunk on the already low status "sci-fi" weirdos who actually try to do something, however misguided they may be. If Thiel is so worried, then why wouldn't he announce a $10 million grant to combat this horrible communist menace?
If telepathy was possible, wouldn't its enormous reproductive advantage have made it a universal trait?
Being able to read the minds of your prey, predators, and conspecifics seems like it would confer vast reproductive advantages over non-telepathic rivals.
Did any of those parapsychology researchers look into the heritability of such mental powers? Wouldn't such an advantage have made these kinds of abilities universal among humans?
If reading minds could be evolved, then you would evolve the ability to read before evolving the ability to filter precisely such that you could distinguish friend from foe, or predator from prey. Seems like it would be very confusing and not necessarily a clear advantage, even if the ability itself cost nothing extra in terms of neural resources to have.
If visual sight could be evolved, then you would evolve the ability to distinguish light and dark and different colours before evolving the ability to filter precisely such that you could distinguish friend from foe, or predator from prey. Seems like it would be very confusing and not necessarily a clear advantage, even if the ability itself cost nothing extra in terms of neural resources to have. :)
Light/dark is far less subtle than "intent to kill". Even humans with evolved intelligences and an understanding of other minds have trouble identifying and understanding intent.
The state of a lot of parapsychology research isn't so much movie-grade telepathy as "This dude guesses the right card 18% of the time which is a statistically significant improvement on the 1/6 odds of random guessing". The effects are really small (perhaps small enough that motivated researchers can consistently p-hack them into existence) and for all we know ESP burns a lot of calories so it's not obviously adaptive to the ancestral environment.
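As a rough illustration of just how small such an effect is (toy numbers of my own, not from any study), here's how many guesses it takes before an 18% hit rate separates from 1/6 chance in a binomial test:

```python
# Toy numbers (not from any study): how many card guesses before an
# 18% hit rate is statistically distinguishable from 1/6 chance?
from scipy.stats import binomtest

p_chance = 1 / 6
for n in (100, 500, 1000, 2000, 5000):
    hits = round(0.18 * n)  # suppose the subject hits exactly 18%
    p = binomtest(hits, n, p_chance, alternative="greater").pvalue
    print(f"n={n:5d}  hits={hits:4d}  p={p:.3f}")
# Effects this small need thousands of trials to reach p < 0.05,
# which is part of why worries about p-hacking loom large here.
```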
If anyone could demonstrate the ability to guess right 18% of the time, that would be a revolution in itself. I would stop everything else I am doing and work on that.
Given the wide variety of issues we see in other fields, it would be surprising if they were *all* due to that one factor. Motivated stopping, file drawer, bad experiment design, outright fraud... there's probably a lot going wrong.
You are right. I was thinking of one study in particular where they found a hugely significant (N was very large) but very small effect, and it seemed to me that they miscalculated the expected probabilities under the null hypothesis, but yes, this does not have to be the only explanation.
I've been thinking a lot about different types of reasoning styles, and I figure there are a few axes along which people can differ. Here are those axes, with examples of people at different ends of each.
First, we have the individuals/institutions axis. Here we consider whether we take the reasoning done by an individual to be trustworthy, or consider individuals fallible and in need of institutions to come in and correct them. On the extreme individual end, we have people like Eliezer Yudkowsky arguing against epistemic humility; on the extreme institution end, we have the catchphrase "trust the experts", or Naomi Oreskes' method of consensus.
Second, we have the facts/paradigms axis. This axis asks whether or not you think the fundamental object of analysis is individual facts, or larger narratives and stories. On the paradigm side you would have people like Thomas Kuhn and most leftists, while on the facts side you would have reductionists like Descartes, or neoclassical economists.
Finally, we have the reason/intuition axis. On the reason side, we have people who think that the best way to find out if something is true is by carefully going through the justifications for every statement, and making sure that everything fits into nice syllogisms, or some other form of formal reasoning. On the intuition side, we can say that methods like that are potentially misleading, and instead trust "human" aspects of judgement. For reason, you might have someone like Richard Dawkins or Peter Singer, while on the intuition side you might have someone like James Scott or Joseph Henrich, or the entire idea of "lived experiences".
I think that these three axes provide a nice categorization of different types of reasoning that you can see nowadays. Most rationalist types probably fall into the individuals/facts/reason corner, while mainstream progressivism probably falls into the institutions/paradigms/reason corner.
I think that this provides a nice explanatory model for why so many rationalists tend to be libertarian oriented, as well. Libertarian analysis is extremely individualist, places skepticism in what they believe are fallible institutions, and tends to place high emphasis on mathematical/game theoretical models. This fits quite nicely with the individuals/facts/reason corner, which as mentioned before seems to be where rationalists typically fall.
Do you think this categorization is useful? Is there anything in particular that you would add/change about it? Where do you fit on it?
I don't agree with your basic assumptions. I don't believe that human reasoning can be mapped out on such a simple (simplistic?) heuristic coordinate axis. And even if you could map human reasoning "styles" along two- or three-dimensional axes, I doubt humans would necessarily reason according to these pre-mapped categories all the time, some of the time, or any of the time. Moreover, I think it's a cognitive mistake to assume that people can't adopt modes of reasoning that contradict the modes they'd use under different circumstances.
Furthermore, I would argue that reasoning is only one of the sub-components of consciousness, and much of the problem solving we do throughout the day in waking consciousness (and probably in sleeping and dreaming consciousness) is curiously resistant to self-analysis; although it can sometimes be logical and methodical, it mostly isn't. For instance, when you're trying to catch a ball, you're solving a calculus problem in micro-seconds without any deductive or inductive reasoning. And part and parcel of our consciousness is the way we process our qualia, which can be amazingly efficient, but which can also deceive us by seeing patterns where there are none. Underneath it all we've got some emotional/instinctive components that we may have little control over but that lead us to make non-rational problem-solving decisions (which we may sugar-coat with the patina of rationality).
I am *not* saying that we don't on occasion reason in the modes you're implying. We do. But reasoning (i.e., using logic to solve problems) is only a small part of how we problem-solve and arrive at answers.
My first impression is that the axes are not adequately fundamental. For example, I might be happy to concede that a lot of institutions that exist around me are broken/corrupt, which might nudge me toward individualism, but that's not the same as a belief that institutions are inherently less trustworthy than individuals. And reason/intuition, the way you frame it, seems like just another name for rider/elephant, Type 2/Type 1 thinking, etc. In my observation, rationalists are quite open to the idea of mining formless, subverbal intuition for useful info - if memory serves, CFAR even included Gendlin's focusing technique in its coursework.
Facts/paradigms appears to me to be the strongest of the three, and the most interesting. I think it does say a lot about an individual if they are committed to overarching theories and firmly held heuristics, whatever their content might be. It probably tracks well with the good, old 'is/ought' distinction, which is one of my own main rules of thumb for classifying people. Some try their hardest to develop a clear view of how things are, others are far more concerned with how things ought to be.
You might be right about the institutions axis not being very fundamental, perhaps I should think about that a bit more.
I think that you misunderstand the reason/intuition axis. It's not a reference to type 1/type 2 thinking; rather, it asks whether intuitive judgements are valid at all, or whether you need to ground everything and reason from first principles. For an example of what I am talking about, suppose I ask you if consensual cannibalism is okay. You might try to figure out exactly what the consequences are, whether or not we should live in a society that allows such a thing, what laws restricting it would do, etc. You also might just say that it is clearly obscene and wrong, and take that to be sufficient evidence. This isn't really system 1 versus system 2 so much as the "wisdom of disgust", which is intuitive in the sense I am trying to get at. Another example is the idea of lived experiences: you have certain experiences, nobody can tell you that you don't have them, and they provide reliable information about the world. The fact that you haven't done a meta-analysis of randomized controlled trials doesn't really matter from this perspective, and in fact might even be misleading. Is that type of reasoning legitimate? Well, that depends on who you ask, so I think it makes for an interesting distinction.
The surveys do indeed show more center left people, but Scott has called himself a Libertarian, Julia Galef has defended Libertarian ideas, many people here are into things like prediction markets and cryptocurrency, etc. From my own experience hanging out in the discord and these comment sections, Libertarian ideas seem to be very overrepresented compared to the mainstream.
I think this is distorted by self-categorization. Being socially liberal seems to be a simple moral decision for rationalists. Placing yourself on the fiscal Conservative-Liberal axis requires a deep understanding of numerous economic principles.
Choosing to define yourself as Conservative in the absence of a complete understanding of both the rules of the game (economists aren't even sure) and the values of all the state variables seems to be a Rational decision. Except that conservative in this case should be lower-case [c]onservative. Centrist or Moderate seems to be a more accurate label.
A bit tongue in cheek, but in most cases deferring your judgement to an institution is intellectual laziness (or just prioritizing).
Driven individuals will almost always reach better conclusions than committees and organizations, as they have motivation to get it right without getting mired in perverse incentives and politics.
Once you see how the sausage is made several times you lose the taste for sausage.
You've dismissed a main argument against by putting it in parentheses, but it is still valid. I know it is trendy in contrarian circles to be anti-institution, but this isn't thought out... you can't go in with the sledgehammer and always reach your own conclusion. You need to use the scalpel to know when and where you need to be skeptical of institutions. Without infinite time, a driven individual will *not* reach better conclusions.
"In practice you can never completely eliminate reliance on authority. Good authorities are more likely to know about any counterevidence that exists and should be taken into account; a lesser authority is less likely to know this, which makes their arguments less reliable. This is not a factor you can eliminate merely by hearing the evidence they did take into account."
Part of the issue is determining which authorities are "good" and which are "lesser" or even "not an authority in this field." There are plenty of scientists or other experts who speak out on topics that interest them, often playing on their name recognition, that are outside of the areas they understand better than a layman. Paul Krugman comes to mind.
On another note, a lot of what's come out of the CDC over the last 18 months has not been the most reliable information, especially in how they interpreted and summarized the information.
I disagree, at least for a reasonable definition of "driven" that doesn't include spending years of training to become an expert in the topic. Who will give me a better answer on any astronomy question: NASA, or a random person on the street? Who will give me a better answer on how to fill out my tax form: the IRS, or your gardener? A "driven" gardener might spend 3 hours researching the topic rather than 3 minutes, but that's still not enough to understand the foundations of modern astrophysics.
Do you think NASA is the single best source for "any astronomy question"? Surely there are better sources for specific information you might want to know, even if they might make a good general source of information. Not all astronomers work for NASA or share information with them. If you want to know something that happened last night in a particular section of the sky, local amateurs may actually be the best source of information.
> Who will give me a better answer on any astronomy question: NASA, or a random person on the street?
That's a false choice that ignores the many failure modes of blind trust in institutions.
A slightly better example might be: who will give you better and more prompt advice on viewing conditions near the arctic circle, NASA or the local Inuit?
Clearly institutions simply cannot capture all pertinent local information, and they also suffer from various structural problems. They're useful, but they are not sources of unvarnished truth.
I think what's implied here is that the driven gardener is someone who spends 10-15 hours a week on their garden for years on end, and that such a person would have better conclusions than the local "ask an expert" hour at the botanical garden.
I don't agree with this, btw, as I think the dedicated amateur probably puts in a lot of time interfacing with those local committees of experts, but it's a plausible argument.
I was saving this till we passed the Horary Astrologer part of the thread because it isn’t very important.
I’ve always had trouble with the 1-10 rating of pain at my doctor’s office. It’s a problem of calibration. What would qualify as a 10?
I’m pretty sure I’ve never felt a 10 level of pain, so how do I scale my radius compression fracture? I mean, it’s not nearly as bad as passing kidney stones. (Go through that a couple of times and you become a fiend about hydration.)
So yeah, it hurts a bit. Do you want to know if I need codeine or something? No, it’s not that bad. But now that I think about it, that stuff does produce a rather pleasant mood; but no, I don’t *need* it. Ibuprofen can handle this one.
Now, the time I had my four wisdom teeth extracted and had 4 dry sockets, that was painful. Even Percodan didn’t completely dull that. I’d have to give it a solid 8, maybe even a 9, but you know, I want to leave room at the top end in case something worse comes along.
When I had reconstructive surgery on my face to pop my cheekbone back out after that sandlot football mishap, I woke my surgeon up with a call to his home after the coagulated blood re-liquified and started to find some way out of my head. Sumbitch, that probably *was* a 9. But who knows? Things can always get worse.
So the upshot is: most of the time I just shrug and say 2, maybe a 3.
As other people have mentioned, I think it makes the most sense to think of the impact the pain has on you. For example: https://i.redd.it/xoh2o5y09ed21.png
I recall reading a story where a patient for some reason couldn't have anaesthetic during a surgery; in recovery afterwards, the patient asked for more painkillers, the nurse asked them to rate their pain on the 1-10 scale, and they responded, after some thought, with a "2". The nurse was about to dismiss their request for painkillers, then read their patient chart and realised what their "10" was calibrated to. Classic internet just-so story, but it highlights the issue well enough.
Personally, I think you're expected to work backwards from "how much do I need painkillers": 2-3 is an ache, 4-5 is "need painkillers to be able to concentrate or sleep", 7+ is "actively screaming in agony". (On that scale, the worst I've ever experienced personally was a 6, but I'm quite capable of imagining worse.)
I've found the scale to be mostly meaningless as well.
In general, I'm rarely in pain. When I was having contractions, and the phone nurse was trying to figure out if they were actual labor contractions or not, she asked me to rate them on the 1-10 pain scale. I told her probably 4 (I assume there are way more painful things), and she seemed to think that meant I should wait an indefinite period of time to do anything further, possibly days? My husband looked at my behavior, and thought I should have rated it higher. Eventually they induced for high blood pressure anyway. But I think in retrospect the behavior approach probably made more sense. If I can't think about anything at all, or talk coherently, and am compulsively pacing through the whole night (despite having slept soundly every single night prior), those are the relevant symptoms, not whether I can imagine someone in worse pain or not. I suppose if it came up again I would check my blood pressure before calling.
A little while ago, there was a lot of discussion about schooling. I decided to take a dive into Wikipedia to look at the history of schooling in the US. The result is now on my blog:
One possible explanation for the question 'Why don't we have Puritans anymore?' is that we replaced the Puritan education system with the Prussian education system.
It seems to me that ideology drives the choice of subject matter, but the size of school is a matter of population density and transportation. But maybe size and subject matter got conflated somehow?
That's not true. Puritans worked on a parish model and if parishes got too large they'd have tiny urban parishes that could take up a block. The Prussians found this economically inefficient and centralized into large districts. Likewise, these small school houses were subject to much less central control than the Prussian system.
I don't know about that, but the one room schoolhouse probably does more to let students learn at their own pace, while conventional schools imply that people need to be subordinated to organization which is convenient for someone else.
That's true, but is there a reason to not have schools structured that way (either a lot of separate single rooms, or one-room structures in a larger building) in more densely populated areas?
Montessori schools are fairly popular, and they work on a different model. I know they let kids learn at their own pace with guidance from the teachers, and I believe they do a degree of age mixing. OTOH, my kids only went to a Montessori preschool, so I'm not sure how they work in practice at the elementary or higher levels. Anyone who has kids in one of these feel like commenting?
Any suggestions for good Discord discussion servers? The one linked to on the blogroll of this substack is surprisingly disappointing, and substack's comment system is nigh-unusable.
Horary astrologer in the traditional method here. Performing analysis of various questions is how I get better at horary astrology, so I would be pleased to use what I know to answer any inquiry you have. My email address is FlexOnMaterialists@protonmail.com.
Well, I'd rather not blogspam our gracious host (yet) and in the interim I don't have a good grasp of prediction markets (cursory perusal of Data Secrets Lox didn't reveal the real-money-betting part of the forum). Could you recommend a site?
I apologize if this is not permitted, as it is technically an ad. I'm still looking for participants for a small study. (LW link (includes an email, you don't need an account): https://www.lesswrong.com/posts/HjFkEcw26GGHrjMXu/?commentId=EbH9i4o5ExeDfjojL). I require one university course (a course, not a degree) in computer science or statistics, but no knowledge of image processing. And the compensation is quite generous ($60 for ~60 min). I'm still hoping to get by without using Mechanical Turk.
Edit: The maximum number of participants has now been exceeded, so unless several people drop out (in which case I'll post an update), further applications won't have a chance.
Can someone point me toward a decent study or report showing that the COVID-19 vaccines slowed the pandemic? I’m most curious to see their effect in comparison to other flu or similar pandemics (or just epidemics). So far all I’ve found are articles that say, “duh, stupid…scientists said so!”, and others that describe reasons not enough vaccines were given, or reasons why they might not be as useful, or arguments for/against them causing variants. I just want to know if it worked, and from what I can tell this pandemic followed about the same path as other pandemics, making me unsure the vaccines really had much of an impact. I haven’t seen anything that doesn’t appear totally spoiled by ideology, and I’m sincerely not trying to be a troll.
I'm not sure this is the same question as what you're asking, but here's a related question.
Were vaccines, as they were actually deployed, a cost-effective intervention? The obvious costs are money + side effects, the benefits were at some point hailed as "stop the spread and return to normal immediately" but subsequently turned out to be the more modest "reduce the spread (but not stop it), stop most of the severe cases, hopefully maybe advance the return to normal". What does this work out to in $ per QALY? Were there realistic paths towards making that ratio smaller, which would presumably count as a more effective way to deploy the vaccines?
Nut graf for you is probably this one in the summary:
"Averaged weekly, age-standardized incidence rate ratios (IRRs) for cases among persons who were not fully vaccinated compared with those among fully vaccinated persons decreased from 11.1 (95% confidence interval [CI] = 7.8–15.8) to 4.6 (95% CI = 2.5–8.5) between two periods when prevalence of the Delta variant was lower (<50% of sequenced isolates; April 4–June 19) and higher (≥50%; June 20–July 17), and IRRs for hospitalizations and deaths decreased between the same two periods, from 13.3 (95% CI = 11.3–15.6) to 10.4 (95% CI = 8.1–13.3) and from 16.6 (95% CI = 13.5–20.4) to 11.3 (95% CI = 9.1–13.9). Findings were consistent with a potential decline in vaccine protection against confirmed SARS-CoV-2 infection and continued strong protection against COVID-19–associated hospitalization and death."
An IRR of 11 means you are 11 times more likely to get the disease if you are unvaccinated than if you are vaccinated, or to put it the other way around, you are 1/11 as likely to get the disease if you are vaccinated versus unvaccinated.
So that's crystal clear in terms of the numbers of cases, hospitalizations, and deaths and how vaccination affected them (it decreased them all, by a lot).
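As a quick arithmetic footnote (my own illustration, not from the report), an IRR converts to an approximate vaccine effectiveness via VE = 1 - 1/IRR:

```python
# Converting the quoted incidence rate ratios into rough vaccine
# effectiveness: VE = 1 - 1/IRR, where IRR is the unvaccinated rate
# divided by the vaccinated rate.
for label, irr in [("cases, pre-Delta", 11.1), ("cases, Delta", 4.6),
                   ("deaths, pre-Delta", 16.6), ("deaths, Delta", 11.3)]:
    print(f"{label:17s} IRR = {irr:4.1f} -> VE ~ {1 - 1 / irr:.0%}")
# Prints roughly 91%, 78%, 94%, and 91% respectively.
```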
If you're asking a more subtle question about the rates of change of those quantities - e.g., irrespective of how *many* cases we had in July 2021, say, what was the *speed* with which those cases were diagnosed and proceeded to some conclusion? - that is a much more subtle question. One assumes that if the highly vulnerable are in sufficiently close contact, then the *speed* with which the disease spreads will not be especially affected by the presence or absence of less vulnerable (e.g. vaccinated) individuals. That is, the disease will spread just as "fast" - it will just poop out sooner because it will run out of highly vulnerable individuals faster.
Thank you, that was a pretty good start, but I don't think it's quite getting at what I'm asking, which may be the more subtle question you mention.
An age-adjusted IRR looking at case rates during the Delta wave does indicate effectiveness of the vaccine in preventing bad outcomes and, to some extent, transmissibility of the Delta variant (though the numbers also indicate this difference is narrowing, demonstrating a decrease in vaccine effectiveness). We know Delta is more transmissible; the data seems clear on this. Yet there's precious little data about the danger of this variant in terms of mortality and long-term health risks.
The best reporting I can find via Google says that hospitalizations are up in the US (presumably from the lows of the spring and early summer), and they all cite the same two studies from Canada and Scotland using hospitalization rates to determine the overall danger of the variant. (Worth noting: DuckDuckGo produces a greater variety of articles when searching for "is delta variant more fatal", another chit in my "don't trust Google" mental Bozo bucket.) Aside from the monolithic nature of the reporting, I have an issue with this as a useful data point. Imagining ICUs as buckets, the initial Covid surges of 2020 were like dumping a swimming pool of water into every bucket at once. Watching the infection rate data from Google's search results (using NYT data, presumably from the CDC), I saw that the Delta variant, at its worst, was like pouring a bathtub of water into random buckets for a much shorter span of time. In both instances ICUs are overloaded, but the difference looks like almost an order of magnitude. The major by-line of the past months has been "hospital ICUs over-taxed!" but I don't see how it's a useful comparison to the original surge, when the by-line was about bodies in the street and finding bodies weeks later.
I hope I'm making it clear why it's difficult for me to formulate a current threat level when it doesn't appear we're comparing apples to apples. It seems unlikely to me that, sans vaccines, Delta would have been more dangerous and deadly than the initial surge; absolute numbers of deaths might have been higher and ICUs might have been more taxed, but it's pretty clear that even at its worst Delta's impact was never going to match the primary attack. And I think this points at the question I'm actually interested in.
Having glanced over the infection rate data for other airborne, respiratory, flu-like infections, the course Covid-19 has taken appears roughly the same: initial attack, huge secondary surge, big decrease, then a third smaller surge, then smaller and smaller, more seasonal surges perpetually or until the strain is replaced by something we consider different. This is where I lose confidence, as I'm not sure how to compare this to previous similar(ish) pandemics, nor how to find relevant past data. When I look at this (https://www.cdc.gov/flu/pandemic-resources/1918-commemoration/three-waves.htm) or this (https://www.researchgate.net/figure/Three-waves-of-the-2009-H1N1-influenza-pandemic-in-Thailand-Source-Bureau-of_fig1_228506946) the charts look almost exactly like the chart for Covid infection rates, and as far as I know there were no vaccines in 1918, while 2009 vaccination rates peaked in the US at 60%. So, sure, vaccines may have saved more lives in absolute terms, but did they really affect the course of the pandemic?
Some of this depends on what you're looking for. What you would ideally want is several dozen populations, each with similar behavior, weather, genetics, age distribution, etc., largely isolated from each other, and with differing vaccine status (hopefully half of the populations highly vaccinated and half not). That would give us the most powerful and significant test, if we could then notice that the epidemic trends in different ways in the different sets of populations.
The problem is, it's hard to even get *two* populations that are matched on these features, whether or not they differ in vaccination.
Right now, I think there are several lines of evidence pointing towards some sort of population-scale effectiveness of vaccines.
First, we have population-scale evidence that the alpha variant was a lot more infectious than the classic variant, and the delta variant was a lot more infectious than alpha - this is because many different locations were on a general downward trend in infections while one variant dominated, while the next variant was increasing from a tiny rate, and overall infections went up once that next variant became dominant. The fact that the alpha variant caused only a small wave in the United States (I believe not even as large as the summer wave in 2020) is suggestive that vaccinations have been highly helpful (though it's hard to separate out the effect of post-infection immunity from vaccine-related immunity). Similarly, the fact that the delta wave has been mostly infecting unvaccinated people, even though vaccinated people are a majority of the US population, is further suggestive that vaccination has made this wave much smaller than it could have been.
Another line of evidence is all the studies showing reduced risk of symptomatic infection or positive test among vaccinated people compared to unvaccinated people. The initial vaccine trials were the only double-blind ones, and they tended to find the largest effect sizes. It's theoretically possible that just as many people are getting infections and transmitting them to others while vaccinated, despite not getting symptoms and not testing positive, but this seems highly unlikely.
It's hard to say what it means that "this pandemic followed about the same path as other pandemics". I believe it's followed quite a different path from plague and HIV, but you might be restricting to respiratory pandemics with similar infection time, like influenza. But again, we know that different influenza pandemics have had structurally similar, but observationally still quite different, patterns - 1918 was worst, 1956 and 1968 and 1977 seem to have been smaller, and 2009 looked quite different as well. It's hard to know if covid would have been very different from these if it hadn't been for the various distancing measures and vaccination, or if these did nothing to change the overall shape, or what, since we don't have a big enough sample of respiratory pandemics to know what fraction of them have how many waves of what relative sizes, lasting how long.
If you have something more specific about the "about the same path", that would be interesting to see, to figure out whether there are any statistical tests that would or wouldn't show differences between 1918, 2009, 2020, and things like annual flu.
Maybe someone has a direct study for you (I'm not particularly literate in these areas) - but I guess my question is: there are a lot of individual studies on the effectiveness of each vaccine - e.g. "Vaccine A is X% effective at preventing Y strain of COVID-19, and (X+Something)% effective at preventing serious cases" - do you disbelieve these studies as a whole? If so, why?
If not, how can a large chunk of the population being given these vaccines with fairly high effectiveness *not* slow the pandemic? Certainly the greatly diminished death rate in vaccinated individuals should count as "slowing the pandemic" by some regard, right?
There may still be a study for it (... though identifying a control case may be tricky - since there's no country saying "you know what, let's skip the vaccines" AFAIK)... but this feels a lot like the "must I believe this?" approach to evidence gathering (vs. "can I believe this"), which speaks to a biased view coming in.
I’m open to the possibility that there’s soldier mindset going on, but really, is it true? Did the vaccines stop or slow the pandemic? How much?
I can only find information about the effectiveness of specific vaccines, but I’m not convinced that’s enough to draw the conclusion that it *must* have worked. Did enough people get it soon enough to make a difference? Did the imbalance of distribution in some countries negate the impact in others? Did the virus already kill most of the people it was going to kill before the vaccines came out? I think we should expect some kind of post-mortem coming soon, no?
Anyway not trying to start a fight so better to let this thread die than let it get out of hand.
How much did it slow the pandemic? Well, the cheeky answer is "it slowed the pandemic more than if nobody was vaccinated, but less than if everybody were vaccinated"... but the actual question is "compared to what?"
If you're comparing to a hypothetical situation without any vaccination at all... well, I'm not sure the follow-up questions make sense.
> Did the imbalance of distribution in some countries negate the impact in others?
If the vaccine saved X people across all the countries where it was widely available... surely it didn't "negate the impact" by *killing* X people in countries where it was less available compared to a hypothetical situation where there's no vaccine at all. Being concerned about imbalanced distribution seems to presuppose that the vaccine is effective.
> Did the virus already kill most of the people it was going to kill before the vaccines came out?
Clearly not, as the pandemic is still going with hundreds of thousands of new cases each day. A bit too soon for a post mortem as we aren't yet post mortes.
> Did enough people get it soon enough to make a difference? [Did it work?]
Again the glib answer is "if it saved one person, it made a difference to them", but again I'm not sure what you're comparing against. Surely it made a difference compared to the hypothetical where there are no vaccines at all, and continues to make a difference.
Yes, there's a cost to all things, and I think there's merit to a cost/benefit analysis re: things like lockdowns... but with the per-vaccine efficacy rates and the ongoing number of cases, it's very hard for me to imagine a reasonable threshold of cost/benefit that the vaccines don't clear.
>Clearly not, as the pandemic is still going with hundreds of thousands of new cases each day. A bit too soon for a post mortem as we aren't yet post mortes.
I'm not sure I agree with this point. A fair number of health services have already declared Covid-19 endemic, either because they achieved a target vaccination rate or simply had enough of the population infected that herd immunity (for lack of a better term) was achieved. Two that immediately come to mind are Denmark and Iowa, but I think there are a fair number of others. Declaring the disease endemic is subjective and will happen in phases as populations differ, but in this, as with all things Covid-19 related, I expect the CDC and the WHO to be behind the curve.
If the disease is endemic, we'll still see infections and deaths, just not everywhere all at once in huge numbers. I was under-confident in this prediction on Metaculus by about 1 week; regardless, the trend is definitely down and has been for over a month (https://www.metaculus.com/questions/7627/date-of-va-covid-deaths-peak-before-1-oct/). The start-to-date graphs of the disease, across geographical areas, map almost exactly to the 1918 and 2009 flus. While surely fewer people died, particularly the elderly, thanks to vaccinations, I don't see how the vaccines have really changed the overall trajectory of the disease, and it seems weird to simply say that vaccines must be the reason when there could be a confluence of factors.
I'm very sensitive to the possibility that I may be in 'soldier mindset' here, but there's an accumulation of cruft making it very hard for me to distinguish "must I believe this" from "is this true," when I feel like I'm asking "is it true" of just about everything I encounter from the press and online media. The idea that vaccines are the only way this pandemic ends seems ridiculously simple and not in line with history or the data, yet it is the only message I see. I'm not debating the merits of the vaccine for any individual (children excepted, but that's a different discussion), simply the idea that everyone has to be vaccinated for this to all be over. I don't get that, and I'm reasonably confident this is already over, if not very, very close.
Sort of a tangent, but I'd also be careful about putting up the "Mission Accomplished" banner about this pandemic being over too soon.
I can't help think of Slovenia which declared the pandemic "over" in May of 2020, had basically no cases up to October of last year, but now has as many deaths per capita as the US.
Saying the virus is now endemic rather than pandemic seems like a semantic distinction with no relevance. Remember that the statement I was replying to with "Clearly not" was:
> > Did the virus already kill most of the people it was going to kill before the vaccines came out?
It doesn't matter if you say the virus is now "endemic" and technically it's no longer a "pandemic". That may be true, but the statement that "the virus already killed most of the people before the vaccines came out" is still clearly not true.
---
And I don't think there's much doubt that the pandemic would end eventually, even without vaccines - the two possibilities have *always* been herd immunity or total elimination of the disease, and I don't think anyone expected the second.
Vaccination 'just' speeds up the process of getting herd immunity and minimizes the sickness and death required to do it.
So, yes, vaccination doesn't change anything, in the sense that the overall trajectory of the virus is still "it runs until enough people are immune for herd immunity to kick in"... but the time and number of deaths required between the two scenarios is hardly a mere semantic difference.
It seems like you're arguing that since vaccination wasn't the only factor, it must not have "worked".
My go-to demonstration that vaccines alter the epidemiology of COVID-19 in a particular context comes from long-term care homes in Ontario, Canada. Using the epidemiologic summaries from the link at the end of this comment as a source, there are currently only a very few (<1 per day on average) cases among residents of long-term care homes, whereas at a comparable point (similar total number of daily cases in the province) before vaccination was available - I'll pick October 5, 2020 - there were around 10 cases per day. What I take away from this is that cases in long-term care homes aren't turning into outbreaks among residents. I'll admit I don't have direct evidence for that last statement, but it seems to be the easiest way to explain the data. At least 92% of long-term care residents are fully vaccinated (according to my second source, from February; I seem to remember something like 95% during the summer, but I can't find a source for that easily). This also seems to explain why we aren't seeing total suppression of transmission in areas with high vaccination rates: nowhere has really achieved such a high vaccination rate.
As to whether this could be attributable to other measures used to control transmission: to my knowledge, there haven't really been any new measures put in place recently (since October 2020 at least, which is when my first reference point is) to try to control transmission in long-term care homes. Indeed, if there were any new measures introduced that had an impact, it would be quite a damning indictment of the initial response to COVID-19 in long-term care homes. I tend to think we would've heard about it if such a measure was implemented, though this is a bit of a weak argument.
I've also included a link to an analysis from Ontario's COVID-19 advisory board in March 2021 seeming to come to similar conclusions as me (though I haven't read it since I just found it after already doing most of the research).
This does raise the question of what vaccination rate is actually necessary to control transmission in the community more broadly (as opposed to long-term care homes) without other public health measures. My guess is that it's somewhere around 95%, at least with the vaccine mix we're using here in Canada - but that's just a guess. It probably wouldn't work as well if there were reservoirs with lower vaccination rates.
Here in Ireland 90% of people are vaccinated with the proportion unvaccinated higher amongst the young. Yet hospitalisations are 50% unvaccinated and the “vast majority” in ICU are unvaccinated. That indicates a potent reduction in danger when vaccinated.
I have a student flat that's like 3 minutes walk from the Edinburgh meetup. And plenty of teabags. (A few vaccinated people in the communal kitchen should be OK if sheltering from rain.)
Does anyone have a back-of-the-envelope number for the percentage of de novo germline mutations attributable to ionising radiation (as opposed to endogenous mutagenic processes)? I suppose it would be species-specific, so I guess I could narrow the scope of the question to Homo sapiens in particular, but my interest in the question is broadly along the lines of: approximately how much of the mutative "raw material" supplied to natural selection in the evolution of organisms in general is due to the sun?
A lot of the literature I went through was focused on narrow-scope quantification of medical risk due to exposure to mutagens etc, and was therefore irrelevant to the objective of my search. Can someone point me in the right direction?
Unfortunately I don't have any numbers to put on it better than <<50%, and I don't have any sources I can link to in English. But from my limited understanding, at least for complex organisms (eukaryotes, and especially sexually reproducing eukaryotes), ionising radiation and other "external" sources are not a significant source of the de novo mutations used for evolution - most of those come from errors during replication and damage factors internal to the cell, and there seems to be some evidence that organisms have a degree of control over those mutation rates in different regions. E.g. Homo and IIRC other apes have highly elevated mutation rates in the regions of DNA associated with brain functioning. And there's some complicated semi-random-ID-generation thing going on with the genes coding for pheromones in the species that use them to recognize their kin. And generally the Boring Billion can be taken as evidence that evolution on purely random external mutations tends to be extremely slow. So AFAIU, if you took away all external radiation, it would slow the evolution of complex life a bit, but not too much, and even that bit would perhaps eventually be rectified to whatever is optimal for a given species in a given environment.
Does it matter to you whether the source of the ionising radiation is solar? I don't know if any studies have been done on humans (pesky ethics concerns with irradiating people), but there's fascinating data to be had from eg. the bacteria found living in nuclear power plant coolant loops. There's also the process of deliberately inducing mutations in crops with radiation, that the commenter before me mentioned.
It does matter to me in this instance, yes, given that what I'm looking for is something like "in the course of the evolution of the entire biosphere across evolutionary time, what approximate percentage of de novo germline mutations in everything from archaea to mammals was due to exogenous factors as opposed to endogenous factors like replication errors and oxidation? Like, is it closer to 2%, or 50%, or 98%?" I assumed at first this would actually be a Googleable number, and then it turned out I couldn't crack it even with an hour's shallow-diving.
Along the lines of what you and KieferO suggest, there are a lot of quite precise data for metrics like CNVs-per-gray of ionizing radiation exposure deriving from studies of irradiated rodents, and lower resolution data deriving from the "natural experiments" of the twentieth century. But this doesn't really answer my question.
Well, just a rough exogenous vs endogenous split would be great at this point, but yeah, if you know the specific mutative contributions of those two then that would be great too.
Sorry, that's as much as I've got, I don't even have a ranking for various sources of radiation. And of course some of them vary locally and some of them vary over time.
You might look at atomic gardening: https://en.wikipedia.org/wiki/Atomic_gardening . If anyone was very carefully recording the useful (to humans) mutation rate per gamma ray, it would have been the people who used cobalt-60 to make the modern grapefruit.
This seems to be an AI-optimistic blog. I'm more of a skeptic. I'll believe we are on the way to the singularity (maybe) when speech technology is able to not just translate letters and words into speech but to modify the output based on context. A good example is irregular verbs whose past tense is spelt the same but pronounced differently. I spoke this to my phone today.
“ I went to the park yesterday, while I was there I read the paper. I read the paper every day, I like to keep ahead of the news.”
I pronounced the first "read" as red (/rɛd/) and the second as reed (/riːd/). When I asked the phone to play it back, it pronounced both as reed (/riːd/). Humans get it right.
I’ve never seen a text to speech AI do this. Has anybody? It’s a hard problem, not just learning to map symbols to sound but understanding the context of entire sentence, or even paragraph.
Honestly, that strikes me as an *easy* AI problem, on an objective scale in which a manual typewriter defines the lower end and your average Target checkout clerk the upper end. Children can manage that task when they are 1 or 2, without any benefit of organized training -- just listening to people. You have a well-defined algorithm which will tell you the pronunciation of words and how it changes in context. It's not a *simple* algorithm, but it's neither ambiguous nor contingent and can be readily deduced from the actual practice of speech. If this is defined as a "hard" problem in AI then the goal of independent creative thinking that can even rival, let alone exceed, that of humans is ridiculously far off.
Even full-scale natural language parsing and understanding, to the point where your iPhone understands you as well as the Target checkout clerk, doesn't qualify as a really hard AI problem on that big scale, because, again, children manage that around the time they learn to walk, and it gets you not one inch closer to the problem of solving general problems the way genuine intelligence can. Even very, very stupid human beings are capable of a fluent grasp of speech. Apes have been taught elements of human language (e.g. ASL), and can understand directions and make requests. My dog knows what "go to the kitchen!" and "want to go for a walk?" mean, even if I don't speak the words clearly, my voice is muffled by distance or changed by a cold, or I cough in the middle, et cetera. Brains much more primitive than ours can manage this task.
I would say if you think fully understanding and speaking a human language is a very hard problem (and I actually do), then what this tells you is that duplicating human intelligence is by contrast a surpassingly hard problem, one where you add a dozen zeros to the hardness factor. It's like observing that playing a nice clean G chord on the guitar is not easy, and correctly inferring that being able to compose and play like Mark Knopfler is going to be really, really hard.
You seem to be confusing me with someone who said this was a hard problem to fix. I didn't. All I said was that I was an AI skeptic, and look, here's something that hasn't been solved yet. As it happens, it looks like Google has solved it, sorta. I can break it, but it's mostly there.
It did. I've been looking for that to work for a while now. Apple haven't solved it.
I played around with tenses and it got this wrong:
“I went to the park often then, and I read the paper. I read the paper every day, I like to keep ahead of the news.”
But when I replaced "and I read" with "where I read", which is better English, it got it right. So that line of defense against my AI scepticism has been breached.
However my theory that AI isn’t a threat yet is still confirmed by the fact that I had to prove I wasn’t a robot. When the robots can recognise sidewalks we are done for.
(Robots - or self-driving cars - learning to recognise captchas is probably why we are asked about sidewalks, bridges and taxis so much. There's a system designed to end itself.)
Captchas are weird. I'm pretty sure current AI can beat those captchas. Waymo has commercial self-driving cars on the street, and yet pictures of traffic lights and crosswalks are supposed to stop AI? If there's one object AI can identify better than humans at this point, it's traffic lights.
A quick search finds articles saying bots can solve these captchas better than humans. Which leaves the question: why are we still using captchas?
My guess is the majority of spammers are looking for easy targets. The compute resources required for breaking captchas increase the cost, and setting up the captcha-breaking AI is an extra thing to do. If spammers were willing to pay even a few cents to break a captcha, employing humans would also be an option. A human should easily be able to do over 100 captchas an hour (36 seconds each), and in some countries you could pay less than $1 an hour for labor.
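For concreteness, the arithmetic behind those numbers:

```python
# Back-of-the-envelope cost of human captcha-solving, using the
# figures above (36 s per captcha, ~$1/hour labor). Illustrative only.
seconds_per_captcha = 36
wage_per_hour = 1.00                              # USD, low-wage market
captchas_per_hour = 3600 / seconds_per_captcha    # = 100
cost_per_captcha = wage_per_hour / captchas_per_hour
print(f"~{cost_per_captcha * 100:.0f} cent(s) per captcha")  # ~1 cent
```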
I think the reCaptcha also uses other data apart from the images to block spammers, such as your browsing history and IP address.
Another trick spammers use to bypass Captchas is to put up a porn site and then forward the captchas they want to solve to people trying to access their porn site.
ReCaptcha definitely does this - if you go to a site in Incognito mode you'll see captchas more frequently and they won't be the "check this box to prove you're human" type.
I read that it also looks at your mouse movements to see if there's a human behind the clicks.
It's not just that captchas cost money for spammers to bypass - they also make money for the company that serves them. Where do you think Waymo got all its training data for recognising those things? It bought it off ReCAPTCHA. Anything that extracts value from users is going to persist, even if it stops being useful for its original purpose.
It's important to recognize that there's a big difference between some kind of AI takeoff scenario and AI being a "threat." The USSR apparently had a Dead Hand system to launch their nukes without human intervention (I guess a type of AI) in case of an attack. That's certainly a "threat" even if the computing behind it was simple and clearly not an AI capable of taking off. A very simple AI put in charge of important systems can always be a threat, even if it's pretty dumb - or maybe more so if it's dumb.
Note that Dead Hand (aka Perimeter) was subordinate to human intervention. It was not constantly lurking in the background of the Soviet military, ready to launch a nuclear strike if triggered; it was an option the Soviet leadership could choose to engage for a time if they thought they might already be under attack. Said leadership could then head for their bunkers or escape trains without wholly abdicating their responsibility to defend the Motherland while they moved to more secure command posts.
AFAIK, it was never engaged. The US equivalent was to trust the judgement and professionalism of select human officers who were always present in dispersed and/or hardened command posts; the Soviets apparently didn't have that level of trust in their officer corps.
Thanks, I didn't know much detail about it. For the purposes of "AI is a threat" it worked as a thought experiment either way. If you put even a very simple algorithm in charge of when to launch your nukes (or other very important functions), that is easily a threat. You don't need to worry about your AI going Skynet or I, Robot in order to be threatened by it.
I've seen a suggestion that AI can't even start to be a threat until it can organize a sock drawer-- or even find a sock drawer.
My "don't even start to worry yet" line is that AI can't custom-make comfortable shoes. I grant that this skill isn't necessary for taking over the world, but it does seem a good bit easier.
One of these days I'll invest in the right kind of extruder so I can 3d print flexible plastics, and I'll try making some custom insoles. Though I worry about breathability.
I sometimes ask people who believe that unskilled jobs are going away soon this question - why do we have baristas? After all we have coffee machines.
Today I actually had coffee in a coffee shop with no barista. It was in a golf club, and the cafe had vending machines and one coffee machine. You bought the cup for €3. The line at the machine was slow, as people were slow - older people confused by buttons. The machine ran out of something while I waited, and the person at the counter had to try to fix it. When I eventually got there, the latte button was unlit because, I assume, they were out of milk. I had a black coffee rather than harass the untrained teenager who was hired to man a counter, not pour coffee. No real coffee shop is going to replace a barista or go out of business. The same rule applies to bar staff, sales staff and so on.
Don't worry about your house burning down yet. It's just the curtains that are on fire. When the sofa catches fire, then you're allowed to start worrying.
I think the "Don't worry, AI can't do X yet" is stupid. We are in a situation where AI is predictably likely to cause lots of problems in the future. What it can do now is irrelevant.
(I mean, half the time X is already done, or is something that AI experts haven't put much effort into trying to do. But even in the best case it's a bad argument.)
When I was a kid, flash paper was hard to come by in my little town. I found a magic book that suggested mixing rubbing alcohol and water 1:1 and using it to set a handkerchief ablaze without actually burning it.
I tried it - of course, I was a 9 year old boy - and it worked!
I think that was about the time my mother started to show gray hair.
The reason I asked was because I've studied and implemented NLP algorithms, and was curious which one was being used. The most recent information about Apple's I could find (https://machinelearning.apple.com/research/hey-siri) was from 2017, and used a Deep Neural Net. That has likely been replaced by a newer architecture with a Transformer.
As to reaching AGI - in the NLP domain, I don't think it will happen strictly through the machine learning approach. IEEE recently published a series of articles detailing some of the challenges, and a big one is the cost of training the hugely parametrized new models.
Only if from a very low base. However, it's not like Siri was totally useless a decade ago. It's slightly better than useless now, up from almost useless. But then Apple might not be the avant-garde here.
I have seen exponential growth, since tapered off, in smartphone capability though. Particularly in the first few iterations of the iPhone (and no doubt Android). This isn't it.
> When a third red box is added to the experiment, the results change quite a lot. Researchers have found preschoolers are ultimately torn over which box Maxi will choose, splitting their choice 50/50 between red and blue.
> "When there are only two locations, 4- and 5-year-old children can answer correctly without truly understanding that Maxi has a false belief about the location of the chocolate bar," explains psychologist William Fabricius from Arizona State University.
> "Adding a third location results in them guessing at chance between the two empty locations."
> Preschoolers appear to understand that Maxi does not know the chocolate is in the green box, because he did not see his mother put it there. As far as they are concerned, that leaves the red box or the blue box as Maxi's inevitable 'wrong choice'.
> The authors call this thinking perceptual access reasoning, or PAR. While a child understands that seeing leads to knowing, they do not incorporate the memory of Maxi putting the chocolate in the blue box into their answer.
This seems so bizarre... Partially because I remember (vaguely) being 4 or 5, and some of the things I used to think about, and I can't imagine I would have made that mistake.
Might be a memory of a memory, but I shoved a bobby pin into an electrical outlet from my crib and burned my right hand at whatever age a person is when they're in a crib - presumably pre-puberty at any rate. I have some sort of recollection that my left hand did not work as well spooning my Malt-O-Meal into my mouth as the bandaged right.
It seems like my mom and dad told me about the idea of handedness by way of explanation for my having trouble with the scooping and shoving, but this was the pre-language period for me, so again, I might be interpreting a memory of a memory. I suppose I should add that I am now fully ambidextrous in the spooning-Malt-O-Meal department.
I'm pretty certain I have memories of pre-language dreams - the shapes of toy parts and some colors I wouldn't see until the Day-Glo poster period of the hippie days. And that goofy Dumbo the Baby Elephant rubber-band wind-up toy that flapped around pathetically trying to fly in my bedroom - yeah, he was a visitor in my dreams then too.
I know memory can be unreliable. Events can be externally verified, but I can't prove my memories of my thoughts are accurate.
But with that disclaimer, here's what I remember. From pre-K, not that much. I remember a mean girl (who in my mind was tall and intimidating) scratched another kid across the face, hard enough to draw blood. I remember feeling sympathy for him.
Because my birthday falls near the end of the school year, I was 5 for most of kindergarten, and I remember that better. I could have told you what my classmates were like. Who was smart, who was friendly. A group of popular kids (feels funny to say that about kindergarteners) used to monopolize the building blocks. I mostly avoided their clique.
I had a fair number of self-reflective thoughts. Sometimes at night I would think about what attributes I wanted to have, and why these attributes would serve me well. I liked to imagine my classmates were doing the same and they might choose differently, and why my choices would be better.
I used to always want to know how everything worked. Luckily my father was always willing to explain stuff.
I have a clear memory of not being able to hear the difference between "white" and "light" and thinking it was one word with two different (but conceptually related) meanings. I think this must have been before I learned to read, or at least before I learned the spellings of those words.
I looked it up just now on Wiktionary. Looks like they're not related. But "white" is related to the Sanskrit word for light, and "light" is related to the Ancient Greek word for white.
Does anybody recall Scott saying something about how the worst thing you can do when encountering a viewpoint you don't understand is round it off to "they're just evil and hiding it"? I know the Seventh Meditation addresses something much like this, but I recall the quote (or something like that quote).
(It might be from somewhere other than Scott, but I'd still like to find it.)
In Europe, we have the Digital Green Certificate, issued after 1) full COVID vaccination or 2) 14 days after a positive PCR test (maybe there is something similar in the US). In case 1) the certificate is valid indefinitely AFAIK, and in case 2) for 6 months.
Currently the ECDC recommends against issuing this certificate after any sort of antibody test proving past infection. Their stated reasons, as I understand them, are: the relationship between antibodies and immunity is yet to be determined, and we do not know what level of detected antibodies is sufficient or how long any such immunity lasts [0].
However, the ECDC seems to agree with these two facts:
1. (At least some of the available) antibody tests provide a reliable enough identification of past infection and
2. Past infection is sufficient to issue a green certificate (one is available shortly after a positive PCR)
In the US, the CDC seems to hold a similar position [1]. The evidence they cite points in the direction of strong and lasting natural protection (fact 2 above), and from my reading of their document they do not seem to worry about the test not being able to indicate past infection (fact 1). However, the CDC is also against using antibody tests to assess immunity [2], citing concerns similar to the ECDC's, namely that 'serologic correlates of protection have not been established'.
However I find it hard to reconcile this stance with the two facts above. If the evidence for natural protection is not strong enough, shouldn't PCR tests also be insufficient for getting the certificate? If the issue is that antibodies may indicate an infection too long ago in the past, shouldn't vaccine-related certificates also come with a time restriction (since there we also have no good evidence of the duration of protection)?
Can somebody try to give me a better explanation of the ECDC reasoning? Also, I would love to hear your stances on whether the risks outweigh the benefits of allowing such certificates to be issued after a positive antibody test or not.
I think there's also some reasoning going on that you don't want to treat immunity due to natural infection *quite* as well as immunity due to vaccination, because it creates perverse incentives to get infected.
If we had expected vaccination to take longer, I think there would have been a case for setting up "infection hotels" for healthy young people to visit and remain quarantined while sick - but incentivizing people to get infected while living in mixed society with old and immunocompromised people seems dangerous.
I think you are right. But then you have the other perverse effect of loss of credibility: if some official information is provided not because it is fact (i.e. the truth as far as we can tell using cutting-edge scientific knowledge) but because it convinces you to adopt a desired behavior, why should the public trust information tagged "official" more than some random source? There is a strong long-term loss in trust for some short-term gain, which seems quite common in modern science popularization, and which no "fact checking" or "more popularization" approach will solve. Without clearly separating fact from policy, it just erodes trust even more without convincing people who oppose the policy...
We had a similar situation in Switzerland. Our certificates are compatible with the EU ones, but with a 1-year limit for vaccinations. The government just announced that the time limit for PCR tests will be extended to be a year as well, and antibody tests will be valid for 90 days. This will be implemented as a separate type of certificate, since it won't be compatible with current EU rules anymore.
I am strongly in favor of this kind of relaxation, because of logical consistency as you pointed out, but also because the expected utility of a vaccine for a person who already went through covid is typically going to be negative (both for themselves and society). In fact, this change considerably reduced my opposition to covid certificates.
Not accepting antibody tests is also hugely unfair to people who for some reason didn't get tested in the officially approved way, e.g., doing an at-home test and then not following up at a test center for whatever reason. We are going to have to reintegrate vaccine skeptics (of all types) into our society sooner rather than later. Demonizing people and imposing pedantic standards for certificates is not going to help with that.
A positive antibody test provides a significant reduction in harm on a statistical basis, even if we can't guarantee that in every individual case. This will be hard to accept for some people. I think this whole pandemic brought out a lot of neuroticism, and I now see many folks being irrationally concerned with covid specifically (compared to other daily risks they take without a second thought). Overcoming these attitudes will be one of the biggest hurdles to actually get out of the pandemic in a timely manner.
During my master's thesis I developed a new method for metal printing, potentially cutting costs by a factor of ten. I want to turn this technology into a business and make metal printing accessible to SMEs, focusing on creative industries (designers, architects, artists, ...).
There is a crude proof of principle, and by the end of the year a proof of concept should exist.
I'm still looking for a cofounder. The recent ACX meetup in Vienna showed me what a good proxy ACX readership is for "compatible ways of thinking". There's no particular profile I'm looking for, just a lot of motivation regarding metal printing.
About myself: Physicist at heart. Finished my master's degree in 2019. Earned money as a programmer while at university, and have been researching/working on metal printing ever since. Mostly extrovert.
I'm open to founding somewhere other than Vienna, but not outside Europe.
About the company: There's a business plan. While it will be a profitable business, I believe its strongest asset is its potential social impact, by providing a lot of people with access to new manufacturing methods.
If you think this is great and want to help without becoming a founder: I'm also looking for showcase projects (stuff that only works with metal printing / would be too expensive without it) and bucketloads of money. I have absolutely no idea how angel investors or VCs can be found, so this is my try at it.
I'll reach out. I know some Austrian startup founders (and more European ones). It depends on the industry, but I'd suggest being willing to move. If you're looking to do metal fab, then move either to the US or to South/East Asia. If you absolutely must stay in Europe, then Germany or Italy, though they're relatively minor players internationally. Europe just doesn't do much heavy industry (excluding Russia).
Reading this, it looks like Central Europe really needs venture capitalism. And your university seems to lack channels too. This isn't a criticism of you, because I think you would have suitors around the block in the U.K. or the US.
Why not, using your university email, send an email to large VCs across the world? Even to rich people.
So to be clear, this is licensed use of intellectual property. How does your agreement work in regards to developments of the intellectual property, which is probably one of the key potential reasons to invest in you?
It's more versatile (range of materials, possible geometries) and less labor-intensive, since there is no second step, i.e. it forms structures directly from metal. For multiple parts, it is more reproducible and involves fewer (basically no) manual steps. However, for a single piece, investment casting is probably less expensive.
Looking over the sixth IPCC report, a question occurred to me that I don't think they discuss and I wondered if anyone here was aware of published work on it. One of the more confident predictions about the effect of climate change on tropical cyclones (aka hurricanes, typhoons) is that their tracks will shift poleward, but I don't think the report says by how much although I could have missed it. Looking at a map of past cyclone tracks and a map of population density, it looks as though the intense cyclones over land are largely over densely populated regions — Mexico, Central America, Southern China, India. If so, a shift poleward might, if large enough, move them to less densely populated regions, decreasing total human damage.
One result would be to move them more over the U.S., which would be unfortunate for us and might increase material damage, since there is more expensive stuff to be damaged in the U.S. than in Mexico — but fewer people per square mile.
Has anyone looked at the question? The report mentions that changing the cyclone tracks could change their effect on humans, but not how.
Although many tropical countries are densely populated, the populations are often not concentrated in coastal low-lying areas. In temperate countries, they often are. The historical reasons for such population patterns are often associated with diseases rather than hurricanes (there is also the fact that the climate is just more pleasant at high altitude if the latitude is tropical).
Taking the two main countries you discuss - yes Mexico is more dense than the US overall, but all of the densest parts of Mexico are more than a mile above sea level - where people worry about earthquakes and volcanoes, not hurricanes. All of the non-landlocked states of Mexico are much less dense than Florida's ~380 residents per square mile. Though I guess since we're talking about hurricanes turning to higher latitudes than before, probably instead of Florida we should talk about places like New Jersey (~1200 residents per square mile)
The report argues that there's good observational evidence and consistent model projections for a northward shift in tracks and peak intensity for typhoons in the western Pacific. Elsewhere, it's not so clear. Much of the observed global increase in the latitude of peak tropical cyclone intensity is due to changes in the overall number in different ocean basins, with an increase in number in basins where storms are found at higher latitudes anyway. In the Atlantic basin, models tend to project hurricanes farther north but there's no observed trend in that.
I haven't done a comprehensive look for studies that may have investigated changes in population exposure as a function of projected track changes, but an all-changes study published last month (limitations: four GCMs, one hurricane generating algorithm that seems to favor increases more than other algorithms) finds increases in population exposure in all basins even while holding population constant: https://doi.org/10.1038/s41558-021-01157-9 .
Is it that all hurricanes will shift northward, or that warmer seas will expand the zone in which they can form northward? Other climate effects on hurricanes: more rainfall, more intensity, slower movement (lingering longer), storm surge added on to higher seas, and more rapid intensification making evacuation warnings less timely.
It's not the warmer seas that matter (the sea surface temperature threshold for development depends on the average temperature of the tropical atmosphere, which also goes up), but rather the likely expansion of the tropical wind pattern known as the Hadley Cell, whose low-shear environment is one necessary ingredient for tropical cyclone intensification.
>(the sea surface temperature threshold for development depends on the average temperature of the tropical atmosphere, which also goes up)
It depends on the average temperature of the tropical *upper* troposphere. That doesn't go up under warming driven by tropospheric greenhouse gases (since there's nothing to break its correlation with outgoing longwave and outgoing longwave is fixed).
The basic way greenhouse gas warming works is: for a given tropospheric temperature profile, the outgoing longwave (OLR) decreases as greenhouse gases increase. Since outgoing longwave is fixed (at equilibrium), the temperature increases to restore the outgoing longwave intensity. So yes, it does go up.
If you want to think about it mathematically, to a first approximation OLR = A + BT (embodying the correlation between OLR and temperature T). Increase greenhouse gases, and A decreases. OLR is restored/maintained by an increase of T.
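To make that concrete, here's a toy version in code. The numbers are illustrative choices of mine (a slope B near 2 W/m² per K and a ~4 W/m² drop in A for doubled CO2 are in the commonly quoted range), not outputs of any real model:

```python
# Toy zero-dimensional energy balance: OLR = A + B*T.
# At equilibrium, OLR must match absorbed solar, so if adding
# greenhouse gases lowers A by dA, temperature rises by dA/B.
B = 2.0        # W/m^2 per K, illustrative slope of OLR vs. temperature
dA = 4.0       # W/m^2, illustrative drop in A from doubling CO2
dT = dA / B    # warming needed to restore the original OLR
print(f"Equilibrium warming: {dT:.1f} K")   # 2.0 K with these numbers
```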
No. OLR is proportional to T^4 of the effective surface emitting to space. Greenhouse gases do not affect the proportionality.
What greenhouse gases do is reduce the effective value of T, by providing layers in the air - *colder* than the ground - that absorb and re-radiate longwave (both up and down). Having twice as much greenhouse gas means the layers are half as thick, there are therefore twice as many layers, and therefore - as each pair of layers needs a temperature differential to transport heat outward - the surface is warmer.
Adding greenhouse gases to the troposphere increases the number of layers in the troposphere, but not the number of layers above it - ergo, the temperature of the tropopause doesn't change (rather, the temperature gradient in the troposphere gets steeper). Adding greenhouse gases *above* a given height does increase the temperature at that height, but AIUI anthropogenic GHGs haven't yet reached the stratosphere in quantity (and even then, it doesn't raise the temperature X km above ground as much as it does at ground level).
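For what it's worth, the idealized textbook version of that layer picture - each layer fully absorbing, with radiation as the only transfer mechanism, which is exactly the simplification the reply below pushes back on - gives a closed-form answer:

```python
# Idealized N-layer greenhouse model (textbook toy): N fully absorbing
# atmospheric layers over a surface, radiative transfer only.
# Standard result: surface temperature T_s = T_e * (N + 1)**0.25,
# where T_e is the effective emission temperature seen from space.
T_e = 255.0  # K, Earth's approximate effective emission temperature
for n_layers in range(4):
    T_s = T_e * (n_layers + 1) ** 0.25
    print(f"{n_layers} layers -> surface {T_s:.0f} K")
# 1 layer gives ~303 K; the real surface (~288 K) sits between 0 and 1
# layers, because the atmosphere is not a perfect absorber and
# convection/evaporation carry much of the heat.
```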
OLR is proportional to T^4, but over the range of temperatures relevant here it can be approximated by a linear function, with A != 0. The first folks who created energy balance models used that approximation; then people came along and said "That's stupid, it's the early 70s, we have powerful computers now", plugged in the T^4 equation, and found it didn't make much difference. The logic and physics work the same either way, and most people's brains do better with A + BT.
"Layers and energy differentials" is a great way to think about it. And you've *sort of* described the greenhouse gas part of the energy transfer accurately. But most of the global average net energy transfer from the ground to the atmosphere is via evaporation (i.e. latent heat). Neither it nor heat diffusion via conduction or turbulence cares directly about what greenhouse gases and radiation are doing, except to the the extent that the greenhouse gases are modifying the energy differential a bit. So we get more evaporation to make up for the greater difficulty of net upward IR energy transfer.
This is *exactly* what a lot of the Nobel Prize for Physics in 2021 was awarded for. If you just consider radiation as your energy transfer mechanism, your equilibrium temperature profile cools off way too rapidly with altitude. Instead, it's the evaporation and subsequent thunderstorm activity (in the tropics anyway, and something qualitatively similar in mid-latitudes) that constrains the temperature profile in the real world. Too much cooling with height, and the thunderstorms crank up. Too little cooling with height, and the thunderstorms (and latent heat transfer) shut down, because the atmosphere's not unstable anymore.
Adding greenhouse gases to the troposphere increases the altitude above which longwave is able to escape to space (for those wavelength bands in which the greenhouse gas is active) and hence decreases the amount of energy escaping to space (since emissions increase with temperature and higher altitudes have colder temperatures). So the atmosphere warms, and the surface warms because net flux from the ground to the atmosphere depends on the ground-atmosphere energy differential.
BTW, the well-mixed greenhouse gases (CO2, CH4, N2O, etc.) have mixed through the troposphere and stratosphere (it only takes a few years). ["Mixed" means roughly uniform mixing ratio.] Presently, only CO2 has a large enough concentration that this matters: for a narrow set of wavelengths, emissions to space come from CO2 in the stratosphere. And for those wavelengths, increasing CO2 doesn't matter much because temperature doesn't change much with height in the lower stratosphere (so changing the emission altitude doesn't change the intensity of the escaping radiation). Most CO2 emissions come from the troposphere though, on the broad wings of that band. This is why the greenhouse effect of CO2 is logarithmic in CO2 concentration rather than linear like it is for the other greenhouse gases. It's also why skeptics can try to argue "Hey, it's logarithmic, so don't worry about it".
I should add that "increasing greenhouse gases adds another layer to the atmosphere" is how it's commonly taught, because it's easy to understand and easy to do the math. I don't like it because it leads to incorrect conclusions about the details and about how things like clouds affect energy balance. I teach it only to students who are not expected to think through the implications beyond "if I understand this, I'm more likely to survive the next test". You're clearly not that person!
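To illustrate the logarithmic point: the commonly cited simplified forcing expression from Myhre et al. (1998) - an empirical fit, and only an approximation - in a few lines:

```python
import math

# Commonly cited simplified CO2 forcing (Myhre et al., 1998):
# dF = 5.35 * ln(C / C0) W/m^2 -- logarithmic, as described above.
def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    return 5.35 * math.log(c_ppm / c0_ppm)

for c in (280, 420, 560, 1120):
    print(f"{c:>5} ppm: {co2_forcing(c):5.2f} W/m^2")
# Each doubling adds the same ~3.7 W/m^2. "Logarithmic" means a
# diminishing per-ppm effect, not a negligible one.
```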
It’s international stuttering awareness day. We still have only a poor understanding of the disorder, but for a while now studies have been exploring the use of dopamine antagonists as a possible pharmacological treatment. (https://www.frontiersin.org/articles/10.3389/fnins.2020.00158/full) Would be curious to hear people’s thoughts on the quality of evidence here, and on the idea of stuttering as a dopamine issue more broadly
Hey, we're running a Discord server about achieving financial independence through side businesses, career advancement and investments in decentralized finance. Would love to have ACX members join http://BowTiedDiscord.com
Our community has lately been wondering how to look for arbitrage opportunities through bridging (e.g. on Arbitrum), as well as any airdrop opportunities. Happy to hear any suggestions or thoughts from you all!
Decentralized finance is a great way to lose your savings. I implore anyone reading this to check out the skeptic case before they get involved (David Gerard's blog is a good place to start).
I'm currently in the process of training to become a secondary school teacher in the UK (ages roughly 12 to 18). I'll need to do some research on theories/models of how children learn, and I was wondering if people here might have useful suggestions of sources I should check out - particularly if they cover Rationality-adjacent concerns, but generally anything beyond the usual Piaget and Vygotsky references is greatly appreciated.
So I was introduced to the concept of CBT (Cognitive Behavioral Therapy) a few years ago while seeing a therapist for various issues. I didn't end up getting much value out of the therapy process itself, but I did purchase a CBT workbook and study up on the topic quite a bit, like the nerd I am (including some variations like Acceptance and Commitment Therapy), and so my question is: am I actually doing it successfully now?
I have some actual life issues that most outsiders would objectively consider to be relatively serious problems (i.e. they are not just thought distortions as described by CBT). My... I guess cognitive update over the last couple of years is that I simply don't dwell on them - I don't really think about them at all. Is this CBT? For example, let's pretend that you have a homeless alcoholic who was previously depressed about his life status, but now he simply doesn't think about his various issues whatsoever, and his personal happiness is greatly increased even though he remains homeless, addicted to alcohol, etc. Is this CBT? (I am not a homeless alcoholic, just using this as an example.)
TL;DR: if I have serious, chronic problems in my life and I simply ignore them and think about something pleasant instead, am I practicing Cognitive Behavioral Therapy? I would say that doing so has reduced my depression a ton - but I'm also not doing anything active to fix ongoing issues. I'm closer to what Pink Floyd described as 'comfortably numb'. It's worth noting that I started practicing meditation around the same time that I took this up, which has probably enhanced my ability to not dwell on the negative / direct my thoughts in general. I would be interested to hear people's general thoughts on CBT, ACT, etc.
> let's pretend that you have a homeless alcoholic who was previously depressed about his life status, but now he simply doesn't think about his various issues whatsoever, and his personal happiness is greatly increased even though he remains homeless, addicted to alcohol etc.
That sounds like meditation to me. The reality did not improve, but the suffering is reduced. Get rid of the addiction, and he is ready to be a Buddhist monk.
I'm interested in how you did this. The common pattern in CBT is to recognize negative root assumptions that are making you unhappy and replace them with more realistic alternatives. Did you identify negative root thoughts and practice ignoring them?
I've no idea if what you did is CBT or not. I remember a post by Scott where he says that he ignores troubling thoughts all the time, despite it being looked down on by the psychological community. But if it works it strikes me as a very useful skill for people with problems beyond their control. Imagine if it worked for people with chronic pain!
“ I remember a post by Scott where he says that he ignores troubling thoughts all the time, despite it being looked down on by the psychological community.”
This sounds like a wonderful super power to a chronic worrier.
'I'm interested in how you did this. The common pattern in CBT is to recognize negative root assumptions that are making you unhappy and replace them with more realistic alternatives. Did you identify negative root thoughts and practice ignoring them?'
I just... did it. Honestly, I didn't really find it that challenging, which is ironic because I think I'm terrible at meditation and am constantly interrupted by completely random (not negative, just random) thoughts.
I think I'm overly analytical / ruminate too much, which for most of my life led to an influx of negative thinking (hard to not see myself quite clearly), but it also made identifying the negative root thoughts easy. I didn't need a CBT workbook for that; it just sort of kick-started me in the right direction.
"The common pattern in CBT is to recognize negative root assumptions that are making you unhappy and replace them with more realistic alternatives. Did you identify negative root thoughts and practice ignoring them?"
This is my problem with CBT. It explicitly addresses symptoms, not causes, which is fair enough - it's a quick-fix approach, like taking a painkiller when you have a toothache.
But the assumptions it operates on are that there really aren't severe underlying causes - your toothache is just a temporary twinge, not an abscess that you need to have treated.
That's where the 'negative root assumption' comes in. You think you're a failure in life and worry about losing your job and how are you going to get a new one if you're fired from this one? And the exercise/therapy is "well, look at the facts: you have a career, you have achieved promotions and awards, whenever you went looking for a new job you quickly found one" and thus cutting the negative assumption off.
It doesn't work when it's "let's look at the facts - okay, you have a crappy job, you go through long periods of unemployment when you are trying to find work and it's not easy for you to find a new job, and you don't have skills that are in high demand". In that case, the root assumption may be negative, but it's not an assumption, it's fact. CBT can't go "stop thinking bad thoughts, think happy thoughts!" there.
Yeah, it's definitely not always the right tool for the job. And the good resources I've read have always said that changing your circumstances is the front-line defense.
I think the modern model for self-help probably has a lot in common with the prayer
"God, grant me the serenity to accept the things I cannot change,
courage to change the things I can,
and wisdom to know the difference."
But if we prayed to a different deity
"God grant me the ability to straight up ignore the shit I don't like,
And give me a set of powerful delusions to perpetuate my happiness."
If the serious, chronic problems in your life are things you probably cannot do anything about (for example, a beloved relative who is drinking himself to death and ignoring your attempts to help), then using a technique that reduces your preoccupation with the matter and eases your pain seems like a good idea. CBT is one such technique. But if the problems are things you have a reasonable chance of addressing -- things like hating your job -- you should be trying to get yourself to address them. CBT might help: it would involve capturing and challenging the thoughts that keep you from addressing the problem. If what's keeping you from addressing the job problem is thoughts like "I'll never find anything better" or "I can't endure the stress and hassle of job-hunting" -- well, those thoughts are almost certainly distortions that could be challenged. But CBT is just one tool in the toolbox.
CBT alone doesn’t fix major life issues (like homelessness as you mentioned, etc). CBT is a technique for managing unnecessary intrusive thoughts and anxiety. It’s just a tool in the toolbox. You have to put in a different kind of work to change your life circumstances. And when you can’t change the circumstances, CBT comes in handy to not drown in your own negativity.
"I would say that doing so has reduced my depression a ton" - I would say this is good and whether or not you're doing CBT "right" is not important.
"I'm also not doing anything active to fix ongoing issues." - Do you *want* to be moving toward fixing ongoing issues? If not, then the improvements you've experienced, whether coming from CBT, meditation, or something else, are an unmitigated good. Calming the nervous system can sometimes feel like numbness at first. My experience is needing to move from activation into a more regulated state first. Then you can start practicing feeling into what you want, what matters to you, what your values are. Then you can start making decisions and taking actions that are in alignment with those wants and values.
I am wondering if your concern about whether you've been successful with CBT is stemming from dissatisfaction with your ongoing troubles, and from judging your current comfortably-numb status as wrong/bad/insufficient because it's not moving you forward. I am also wondering if you may be having some difficulty connecting with your visceral, emotional, feeling self, which you would need to do in order to truly explore your wants and values. We cannot simply think our way out of complex & possibly traumatizing shit (and it sounds like you're going through some heavy stuff). There are other aspects of our minds that need attending to besides our cognition. Wishing you well <3
I'm no expert with CBT, but to me it seems like you're drawing an arbitrary line between 'real issues' and 'silly thought distortions'. Is there much difference between a problem like, say, stubbing your toe, and a more serious problem, other than the scale? If your techniques stop you from dwelling on your issues unnecessarily, that seems helpful.
Perhaps if you're not motivated to actually fix the problems, your techniques aren't helping enough. But then again, perhaps it's only a matter of time.
I'm really glad you posted this and I hope you receive some high caliber responses.
Until then, I'll offer my own thoroughly amateur thoughts...
I would say that your hypothetical friend is not necessarily practicing CBT, per se, because CBT is a formal and deliberate form of therapy that one undertakes in response to a psychological dilemma. That's not to say that what your friend is doing isn't helpful (or is helpful), but whatever it is, it's not CBT. Call it willful neglect, disinterest, or open acceptance, but I don't see it as qualifying as CBT.
Maybe the more interesting question is related to the outcome associated with this kind of behavior. Is your friend generally better off with this approach? I suppose that depends on who you talk to. A psychologist would say one thing (yes?), while a hematologist would probably say another. I think at the end of the day, what really matters is your friend's experience.
Shooting in the dark: it's hard to see how one could benefit in the long term from overlooking the inevitable consequences of consistently physiologically harmful behavior, but in the short term, maybe it's a different story (psychologists hate him!).
I wonder if it would have an effect on new reader intake if the thing people saw when going to acx.s was *not* half open threads and lynxes. Like, if there were a distinct meta tab that you weren't viewing by default. Because the way it is now, a passerby might think 'I can't start this, there's more community than content', but the other way, it might look a bit emptier / updated more rarely than other substacks.
My thought is that once he's done with the travel, his previous posting frequency was sufficient to avoid this as an issue. These weeks though, it does look a bit threadbare on the content, if someone just looks at the recent posts.
I'm looking for suggestions of great short stories with a business/finance element or theme. One example: The Accountant by Ethan Canin. I'm assembling material for a series of Zoom sessions that I'm co-moderating with an English professor this winter. If you haven't read The Accountant, I recommend it. Beautifully written, with a pitch-perfect voice, at times very funny, at times quite philosophical.
Another golden age story (possibly by Kornbluth) about a depression which is set off by a man who is reluctant to buy a refrigerator(?). The knock-on effects take the economy down.
The problem is traced back to him, and the president takes the last little bit of money out of treasury to give to the man so he can buy the refrigerator and get the economy started again.
Obviously satire, or something akin to satire, but people might have fun analyzing what's wrong with the premise.
It's economics rather than business/finance, but you might be interested in a page I have up linking to and commenting on short works of literature with interesting economic ideas in them:
Ruined City by Nevil Shute fits the bill. It's hilarious, moving, and overall just brilliant, and captures something interesting of the Depression in industrial England. Without spoiling too much, it's about an absolute caricature of an evil capitalist using his skills for something else.
For a virus most people haven't heard of, CMV is surprisingly bad. This is because it causes birth defects and contributes to aging. My rough estimate of its DALY cost places it in the range of HIV's.
And yet I didn't even know CMV was a thing until my final year of undergrad studies!
I'm one of those freaks who is still CMV-negative in middle age. (My wife has never infected me either, so maybe she is also negative. I found out through blood donations, which she doesn't do. It's also possible I ended up getting it and the Red Cross never told me.)
I'd love to get vaccinated to stay that way. How is Moderna's vaccine trial coming along?
It's disturbing to see that there's something big you're mentioning that is redacted!
Also, do you have any thoughts on what good strategy might be, either for individuals or populations? Given the high prevalence and lifetime infectiousness, is there anything we can do to reduce prevalence? (I suppose the fact that prevalence is lower in North America than in Europe suggests that something could be done.) Given that first infection during pregnancy is associated with far worse outcomes, would reducing prevalence be expected to increase the number of people who get first infection during pregnancy? (This is related to worries I've had about reducing prevalence of common cold - if some common colds are minor only because nearly everyone has had them multiple times, then attempts to reduce the burden of annual infections might accidentally make first infections much worse, by making them happen to older people.)
Humans are incredibly resistant to cold. As someone who grew up in Manitoba, Canada, and now lives in Edmonton, Canada, I find it so strange to learn that people can die of hypothermia at 50 F (equivalent to 10 degrees C). Dying from exposure at such a high temperature is just completely outside my experience. If you are soaked to the bone, the wind is really blowing, the average temp is 10 degrees, and you are in the shade, making the local temp lower - I guess I can see it happening somehow. But you could probably just walk or jog briskly indefinitely to keep warm in this situation if you have the open space and enough energy.
In my hometown, the temperature routinely goes down to minus 30 degrees for an entire month in the winter, with temperatures including wind chill during that time at around minus 40 degrees, down to minus 50 on the worst days. School only shut down maybe once or twice a year due to cold or snow - and the reason was that the buses couldn't start in the extremely low temps, even though they were all plugged in to keep the oil warm. It would need to be -40 C before wind chill for things to be so bad that the buses couldn't start. I walked to school with friends all winter during high school, 30 minutes each way. The cold wasn't an issue.
Same deal with cold and school closures in N Minnesota. Occasionally the buses wouldn’t start. The kids that rode the buses from the hinterlands were off the hook. ‘Walkers’ were expected in class.
I took a flyer and simply googled "Kenya hypothermia". The first hit was Nyandiko et al 2021 https://doi.org/10.1371/journal.pone.0248838 Neonatal hypothermia and adherence to World Health Organisation thermal care guidelines among newborns at Moi Teaching and Referral Hospital, Kenya
From the abstract: "Admission hypothermia was noted among 73.7% (274) and 13% (49) died on the first day of admission. Only 7.8% (29) newborns accessed optimal thermal care. " So I suppose a lot of the hypothermia mortality in the tropics may be among newborns. If you've just popped out, 18C is damn chilly.
Actually, we are not. You're talking about a situation where you're not actually exposed to the cold - it's held at bay by insulated clothing, et cetera. Under those circumstances, if you are protected from heat loss, you can walk on the Moon at -300 F. But if you are genuinely exposed to even moderately low temperatures, meaning heat can freely leave your body, such as when you are immersed in cold water, bad things happen rather quickly. From here:
Even at 45 F or so, one can expect to be able to perform normally for only 5-10 minutes, to become unconscious within 30-60 minutes, and to die within 1-3 hours. At the freezing point, death usually comes within 15-45 minutes. Clearly, if your heat sink is the surrounding air instead of surrounding water, heat loss will be far slower, and you can readily make it all night by being sensible (e.g. finding shelter and avoiding air flow), but if you don't have adequate clothing, can't find shelter, and the conditions favor copious air flow -- there's a stiff breeze, say, and the air is humid -- I can readily believe a night at 45 could kill someone.
There's a "Rule of 3s" that is sometimes used by people who do a lot of outdoor stuff to remind one of the priorities: "You can survive 3 minutes without breathing, 3 hours without adequate warmth, 3 days without water, and 3 weeks without food." This is useful to point out to newbies, who will often prioritize emergency food over emergency protection from the elements (and sometimes even drinking water), a serious but surprisingly common mistake that has definitely led to unnecessary loss of life.
In my teens, I capsized and swamped a small sailing dinghy. It was November in Nova Scotia. Was probably only in the water 10-15 minutes, but was shivering uncontrollably when rescued by a passing keelboat.
I’ve also surfed on Long Island, NY in the winter, with a summer wetsuit augmented with a couple of neoprene vests. I could last about an hour before shivering would set in.
I've heard that sleeping directly on the ground can be very dangerous, so maybe that's how people die? The context was survival stuff in forests, and how you should make a bed of leaves to insulate yourself from the floor.
Same climatic background here. I think short-term adaptation of some sort must be a factor. A -35C week in late January is tolerable from a -25 baseline, but it feels much worse if it's unseasonably cold in November after a warm fall. (Higher humidity in that case though.) If prepared psychologically, though, -40 with no wind and zero humidity is kind of nice, and it's quite possible to overheat exercising in winter clothing at that temperature.
True. Walking on snow-packed streets at -40 you get that cool crunching sound. With no wind it's kinda fun. If I went for a run in that kind of cold, the sweat around my eyes would freeze a bit between blinks, and I'd get home with hoar frost on the legs of my sweats.
Doesn’t get down that low in the Twin Cities but my home town north of Duluth could get down to -50 F.
You were probably well wrapped up. As someone who got drenched on a sodden Irish mountain at 10 C, wearing few layers and a useless summer coat, I can see why it could kill somebody. Luckily I didn't stay very long in it.
+1. I got soaked on a bike trip in Iceland at around 10 C ambient. I needed to fix the bike, but I had so much panic and brain fog setting in from the hypothermia that I had to run along with the bike to warm myself up, then continue fixing it for a few minutes, and repeat.
Agreed. Is the winter pretty dry in Minnesota? I have heard from friends in Toronto and the East coast that the winters there aren't as cold as Manitoba or Alberta, but in the East it's a damp cold that gets down to the bones, and it takes way longer to warm up when the air is cold and humid.
What you describe is true at very low temperatures, where air has so little moisture-carrying capacity that it is both dry (by absolute measure) AND effectively saturated (100% RH). But at 40-50F, which is the "wet cold" people experience in humid climates, air is capable of holding a significant amount of moisture. Air at 45F / 95% RH feels different to the human body than 45F / 10% RH. The nice thing about wet cold is that it's easier to breathe than dry cold. But yeah… I would much rather sit outside for several hours in the dry cold. After a while the cold invades your winter layers in a wet cold.
Will there be "Learn French with ACX" and (most excitingly) "Learn English with ACX" posts after your trips to France/the UK? (apologies for referring to subscriber-only posts in public, feel free to delete if you don't like that)
" The English, German and Italians called it "the French disease", while the French referred to it as the "Neapolitan disease". The Dutch called it the "Spanish pocks" during the Dutch Revolt. To the Turks it was known as the "Christian disease", whilst in India, the Hindus and Muslims named the disease after each other."
It'd suggest the instrument wasn't well liked, either.
Unfortunately, I believe the "cor anglais" is the French name for the instrument called "English horn" in English. However, it turns out that the name "cor anglais" is actually a folk etymology, and "cor anglé" is likely the original name, since it is an angled horn.
Gnostics have long seen the Bible or the canon as an instance of esoteric writing, but rarely are characters within the Bible submitted to Straussian readings.
Didn't call it Straussian in the essay itself, but did in the tweet:
I just watched Dune, and having neither read the book nor seen the Lynch version beforehand, I had one serious issue with it - I strongly agree with your first point: I do not understand why the Atreides are our protagonists.
The first scene of the movie is Zendaya's character (I don't know her name) narrating about how, while the Harkonnens are evil, whoever comes next will continue to oppress her people, since all the outsiders care about is spice. The first introduction of Duke Leto, the scene where House Atreides receives stewardship of Dune, is a blatant Triumph of the Will homage, which is used in every sci-fi movie ever to convey evil. Visually, it's like if the main character of The Force Awakens was Hux, and we're supposed to feel bad for him when Snoke is murdered. I don't think this is "complex" or "challenging"; the visual language of Triumph of the Will in science fiction movies is too on the nose for that. It is very possible that these characterizations of the Atreides will pay off in the sequels, but it made it hard for me to root for the characters.
So, do you consider the rebels to be the bad guys in the original Star Wars because they were the ones with the blatant "Triumph" homage?
There are a lot of useful and/or aesthetically pleasing things that happen to have been first invented by Nazis. It does not serve humanity to make those things forever off-limits to good people putting them to good uses.
Sure - I agree with this. I found the visual shorthand for "these are bad people" made it challenging for me to get emotionally involved with the characters.
Movies have to be judicious with their choices, even 2-hour-long ones, and I thought those two scenes failed to accomplish what the movie needed. If the Atreides are supposed to have some darkness, I think you could convey it through the Duke's discussions with his military advisors, and if the scene is supposed to show them in full-throated martial glory, I think you could do that without the clichéd allusion that in current media is almost always shorthand for "these are the bad guys". IIRC, this is the first scene the Duke is introduced in! It's confusing and challenging (probably intentionally), and made it hard for me to emotionally relate to him.
I agree with you that these things should not be forever off limits - however it is important to understand that scenes, sounds, and visual metaphors all have meanings that are encoded by society, and that can impact how a work of fiction uses it. When Harry Styles talks about watermelon sugar, he's obviously talking about sex (independent of the other lyrics) in a way that Genesis 3 isn't.
Honestly, I think your mention of the rebels in Star Wars is an example of a good use! (assuming one recognizes the allusion, and doesn't just think it looks cool) Firstly, it wasn't nearly as clichéd at that point, and secondly, it's at the end, after we've already established emotional investment in the alliance - if people start to think the rebels might have shades of grey, that's a great hook for a second movie. TFA, being a much worse movie, uses the allusion the way it's always used nowadays, to quickly indicate "these are bad people".
Useful context, thanks! Reading the first half of the novel has made me appreciate some of the tightrope Villeneuve was walking. [Some spoilers for the first part of the first book]
I think even adding the part of the spice-harvester scene where the Duke gives his worm-spotting bounty to the crew might've tipped the score for me. It felt (to me) like the movie over-indexed on "not anything more than another oppressor" versus "the better masters".
It's interesting because some groups, specifically the Bene Gesserit, seem much more devious in the novels, and the fact that they've specifically seeded messiah stories for them to exploit is a much more explicit criticism (unless I missed this in the movie from the mumbled dialogue).
How often do Antarctic tourists experience cardiovascular symptoms? How much time does it take for them to manifest? What changes occur in the blood circulation systems of the winterers (who stay there for a year), compared to the people who only make a short visit? Is there at least some difference between the Maritime Antarctica and the East Antarctica research bases (it's colder and drier in the EA)? What about the USA Scott-Amundsen Station (South Pole)?
I've reviewed a puzzle game on Less Wrong which I'd consider to be a rationalist game, i.e. one where playing it requires practice of rationalist skills like forming hypotheses, noticing confusion, etc. Link to the review: https://www.lesswrong.com/posts/39Ae9JEoGCEkfiegr/recommending-understand-a-game-about-discerning-the-rules
The arts science divide started around the time the Nobel Prizes did, and since then the PMs have been rather more lopsided.
In the last century Cambridge have had one PM (Stanley Baldwin, first elected 1923). Oxford have had 11.
1. Cold exposure increases cardiovascular risk in the short term.
2. But I also know that cold exposure increases basal metabolic rate. This *might* prevent obesity and have a longer term protective effect against cardiovascular diseases, if people don't fully compensate by eating more.
Which effect is bigger? It doesn't look like 2 has been researched very much.
https://link.springer.com/article/10.1007/s13679-011-0002-7
"Homeotherms maintain an optimal body temperature that is most often above their environment or ambient temperature. As ambient temperature decreases, energy expenditure (and energy intake) must increase to maintain thermal homeostasis. With the widespread adoption of climate control, humans in modern society are buffered from temperature extremes and spend an increasing amount of time in a thermally comfortable state where energetic demands are minimized. This is hypothesized to contribute to the contemporary increase in obesity rates. Studies reporting exposures of animals and humans to different ambient temperatures are discussed. Additional consideration is given to the potentially altered metabolic and physiologic responses in obese versus lean subjects at a given temperature. The data suggest that ambient temperature is a significant contributor to both energy intake and energy expenditure, and that this variable should be more thoroughly explored in future studies as a potential contributor to obesity susceptibility."
https://www.researchgate.net/profile/Jameson-Voss/publication/235379448_Association_of_elevation_urbanization_and_ambient_temperature_with_obesity_prevalence_in_the_United_States/links/00b4953cdca7a14139000000/Association-of-elevation-urbanization-and-ambient-temperature-with-obesity-prevalence-in-the-United-States.pdf
"In the fully adjusted GEE model controlling for elevation, urbanization, demographics, and lifestyle, all
temperature categories (5 °C increments) were not statistically significantly different than the highest temperature
category, but extremes of temperature category trended to the lowest odds (Table 1). Median BMI by quantile
regression was similar across temperature categories with suggestion of lower median BMIs at the extremes of
temperature category (Table 3)."
If there is any effect of ambient temperature on obesity rates, it was too small to reach statistical significance here.
> if people don't fully compensate by eating more.
I'm reasonably certain they do.
Noam Chomsky on the unvaccinated, stating they should "remove themselves from the community".
https://www.msn.com/en-us/news/us/noam-chomsky-unvaccinated-should-remove-themselves-from-the-community-access-to-food-their-problem/ar-AAPYJFH?ocid=msedgntp
>> "How can we get food to them?" Chomsky told YouTube's Primo Radical Sunday. "Well, that's actually their problem."
I would have gone with "Doordash" there, but share the general sentiment, in regard to mandates. If you're trying to portray yourself as "taking a stand", you shouldn't turn around and try to play the victim either, if you lose your job.
At both a practical-level, and a meta-level, this kind of discussion bothers me.
At the practical level: I don't think the Federal Agencies that are publishing vaccine mandates have the Constitutional authority to do so. It is possible that State Governors have this authority for those people employed by or attending State-mandated/State-run/State-funded institutions. (Here is where compulsory education gives room for the State to support public health via vaccination of schoolchildren: the State has the authority to put strong demands on the health status of students in State-supported schools, and has the authority to compel attendance at some school. At least, this is how it works in the United States.)
At the meta-level, let's do a thought experiment: if your favorite political boogeyman (a rightist who hates organized labor, or a centrist who gives scientific reasons for her desire to limit abortion, a leftist who wants to ban guns, or a weirdo who hates a particular ethnic group) gets into power at the Federal level, can he use this authority in a way you would find abusive?
Imagine the Federal Government using anti-terrorist mandates to severely restrict jobs for people who have ever given money to a foreign terrorist organization.
The supporters of this cause would point to children who died when the last terrorist exploded an IED at a public event, and say that terrorism can cause a public-health crisis. People who support terrorism should "remove themselves from the community".
Are you comfortable with this kind of thing? It's the government saying that certain people are unemployable, and forcing businesses or non-profits to not employ/associate-with certain Bad People.
If you are not comfortable with that, why are you comfortable with the current push to use vaccination to limit access to jobs?
>Are you comfortable with this kind of thing? It's the government saying that certain people are unemployable, and forcing businesses or non-profits to not employ/associate-with certain Bad People.
I'm definitely not comfortable with that, but I also don't think that's what's happening here. Being non-vaxed isn't a thing you *are*, it's a thing you're choosing. My job has had a flu shot mandate for a decade. I knew one guy who grumbled about it but no one who histrionically quit over it. (Or, technically, histrionically complained about oppression while refusing to get their shot until they were canned.)
If the gov't wants to mandate that a class of people, say, middle-aged white men, are BAD, and therefore shouldn't be hired, then sure. I'm all in against that. But this isn't that. This is mandatory flu shots. I've been fine with that from my employer, and I'd have been fine with it even if the mandate was coming from one of the levels of government over my employer. I don't get the objection here.
Would it be okay for a Democratic president to ban Republicans from working? Being a Republican is a thing you're choosing, after all.
No, but for a different reason. Political identity (like religious identity) is specially protected in our society. In the hypothetical above, it was asked if it would be ok to ban someone who had given money to terrorists from working. The answer is clearly no. We let people who belong to terrorist organizations run for office. You're correct that it's chosen, but it's a choice we've decided to protect very strongly.
Vaccines are different. Again, we have a history of how we do this, and we've always been fine with mandating this shot or that shot for school or work.
>> At the practical level: I don't think the Federal Agencies that are publishing vaccine mandates have the Constitutional authority to do so.
Unless/until there's some judicial intervention, I'll just assume they do. SCOTUS hasn't gotten involved yet.
>> Imagine the Federal Government using anti-terrorist mandates to severely restrict jobs for people who have ever given money to a foreign terrorist organization.
Isn't there already a law against that? I think they'd have much bigger problems than losing a job.
https://www.law.cornell.edu/uscode/text/18/2339B#:~:text=Whoever%20knowingly%20provides%20material%20support,of%20years%20or%20for%20life.
> Are you comfortable with this kind of thing? It's the government saying that certain people are unemployable, and forcing businesses or non-profits to not employ/associate-with certain Bad People.
This mandate only applies to companies of a certain size, I believe. So the nonvax'd who wanted employment can look for remote jobs, or ones with smaller work forces, or start their own business/whatever.
I also look at the flipside - essentially what they're fighting for is the right to have a greater probability of transmitting COVID to their fellow employees if they have it. I don't see that as selfless or brave, to put it mildly. If you're going to "take a stand", don't be afraid of the consequences.
Israel is iodine deficient. Unlike other developed countries, it does not iodize salt. Correcting iodine deficiency is known to raise population IQ. New EA cause area?
https://www.haaretz.com/israel-news/.premium-israeli-kids-low-in-iodine-desalinated-water-use-blamed-1.9969026
A friend has to write a Bachelor's thesis, which is kind of a literature summary on semantic text understanding through AI. I know almost nothing about that field. (The description was also very vague about what kinds of things have to be understood.) Are there any generic helpful pointers for approaching this?
Would the focus be on papers/books, or algorithms/approaches?
I believe it would be the latter.
Probably wants to research NLP (Natural Language Processing). This is a book I read - it's for R, and most people use Python, but it's a good survey:
https://www.tidytextmining.com/
This seems to be similar, a survey-type course for Python:
https://www.nltk.org/book/
As for the state of the art, most of the high-end models are now based on the transformer architecture. This is the seminal paper:
https://arxiv.org/pdf/1706.03762.pdf
It's complicated, but there are a ton of 'simplifying' tutorials. Essentially what the algorithms do is first convert a corpus of words into vectors, then apply some type of metric relating them (these are the embeddings or representations), then train for a given task like Q&A, completion, sentiment analysis, whatever. So search terms might be "NLP", "transformers nlp", "word embeddings", "nlp words to vectors". I guess one could describe the general workflow pipeline, then point out different approaches at the different steps, idk.
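If it helps to see the shape of that pipeline, here's a minimal toy sketch in Python using scikit-learn. It's not a transformer, just the simplest instance of the same vectorize-then-train pattern; TF-IDF stands in for learned embeddings, and the texts and labels are made up:

# Toy vectorize-then-train sketch (scikit-learn assumed installed).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great movie, loved it", "terrible plot, boring",
         "wonderful acting throughout", "awful and dull"]
labels = [1, 0, 1, 0]  # 1 = positive sentiment, 0 = negative (invented)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)            # step 1: words -> vectors; step 2: train
print(clf.predict(["what a wonderful, great film"]))  # likely [1]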
One thing I found interesting in the discussion of hypothermia (and deaths due to extreme temperature) was the quick turn to one of two explanations: wealth/poverty of a region, or genetic adaptation.
I find this pair of explanations lacking something important - something mentioned in a book review by Scott, of a book titled "The Secret of Our Success."
https://slatestarcodex.com/2019/06/04/book-review-the-secret-of-our-success/
The skills needed to survive a cold night (without suffering hypothermia) are the type of skills that people learn from the culture they live in.
In the parts of the world that don't deal with extreme cold on a regular basis, the local culture may not remember the tricks used to survive a cold night. Even if that cold night is only 10 degrees C, in a climate that usually has overnight temperatures around 20 degrees C.
In cold regions of the world, the local culture may not remember how to deal with the occasional bout of extreme heat. Thus, members of that culture may be at risk of heat-stroke in scenarios that other cultures think of as a hot-but-survivable day.
The wealth of the industrialized world gives us a different cultural answer to the problem of keeping warm (or cool) when weather is extreme. That cultural answer is less than a century old: buy a better heater/air-conditioning-unit for your house. Find a building/car with AC. Buy a cold drink, or a cup of hot chocolate. Find a better jacket.
Within the past few centuries, we've seen many teams of explorers attempt to map poorly-known regions of the planet. In most cases, those explorers were interacting with economically-poor natives in the area of exploration. (Think of Livingstone in the heart of Africa, or explorers trying to find the Northwest Passage.)
Generally, the natives were poorer than the explorers. But they were better at living off the land, and probably equally good at surviving extreme weather typical in that area of the world.
It is definitely true that a wealthy culture provides easy-to-use ways for all members of that culture to survive harsh weather. But it is also true that wealth, by itself, isn't the only factor in helping people survive harsh weather. Cultural knowledge is a huge factor.
As we see with the discussion of hypothermia: a wealthy culture can lose common-knowledge-level information about dealing with cold temperatures, even if that knowledge was common in that culture a century or two ago.
I realize I say this as someone who lives in a cold climate but it doesn't seem like it takes evolved cultural expertise to know that on a 50 degree F day (10 C) you need a jacket and a sweater, and at night you just need a couple good blankets.
Maybe these places simply don't have those things or money to buy/produce those things on short notice, and maybe if you are then in 50 degrees after months of hot weather, your body isn't ready to survive 50 in light clothing. That's possible, and tragic. But it doesn't seem like it requires evolved wisdom or long-term preparation or even mechanical help (heaters) to survive 50 F/10 C.
>>At night you just need a couple good blankets.
Per the discussion of hypothermia: at night, you need a couple of good blankets and an insulated layer between yourself and the ground. The blankets may help provide that insulated layer, but not providing that insulating layer is a really foolish choice.
This is the kind of cultural knowledge I'm focusing on.
On the broader front: I was thinking about whether "Richer cultures survive temperature extremes better than poorer ones" applies in all scenarios.
When the richer culture is sending explorers into the heart of Africa, or sending explorers to find the Northwest Passage, there isn't much apparent difference in survival of temperature extremes between the rich explorers and the poor natives.
Per the article posted by Scott, there is a noticeable difference between rich explorers and poor natives in acquiring food from the local environment.
What I'm noticing is that Industrial-age people (or Western, Educated, Industrialized, Rich, Democratic people) tend not to notice how much cultural knowledge about extreme temperature has been washed away by "that is no longer necessary to remember".
I notice that lots of people touched on this, but didn't seem to realize that this kind of cultural knowledge is something Scott blogged about before.
I just wanted to say "thank you", to everyone who came to the Cambridge meetup. Scott, thank you also for being there. It was great to meet you all. I am a longtime reader of the ACX and SSC blog, and it has had a big impact on my life, especially over the past few years. The biggest effect I have seen is that it has raised my own expectations of myself. I try (much of the time I fail, but I am always trying) to reason my way through problems in a rational way. I have found this a really valuable approach in my life when making decisions/ deciding how to think about new ideas, as well as when interacting with other people (and particularly those I disagree with). I want to express my gratitude that this blog and this community exists. Thank you all.
I know this corner of the internet has a certain affinity to "optimized internet food". I'm in need of a state-of-the-art recommendation. I'm looking for something that can take me through the day with as little preparation, timing and other ceremony as possible (and can be ordered on the East Coast, but that shouldn't be a big problem?). If it helps keep me awake, that's a bonus, but all is fine as long as it doesn't actively make me sleepy.
I don't mean to change my general eating habits; I need a temporary fix for the remaining 2 months of crunch time on my job.
Thanks for any tips!
Well, I assume you're familiar with them given the phrase in quotes, but in case you're not, mealsquares aren't bad and they're legit unwrap and eat.
For a slightly more complex solution, beans and rice is pretty good. You can do up a big batch of beans and rice at the beginning of the week and eat it for lunch and dinner. It doesn't have to be heated up, and can be dressed up for variety by sauces and mix-ins. Assuming fridge access, you can go from hungry to eating in a couple of minutes, and in combination beans and rice is a complete protein, so it's not bad for you. It's also very cheap, if that's wanted.
First, I had forgotten the name, and second I wasn't sure whether they are still widely considered a good idea. Thanks for reminding and confirming!
Beans and rice feels like it wouldn't help my hunger and also might be hard to do at the office, but I'll try mealsquares for a week or two.
There's a song I like a lot in Russian whose lyrics purport to be a translation of a W.J. Smith poem. (Depending on the website I look at, sometimes the claim is that it's an original invention, but it being a translation would make better sense to me.) I'm looking for the original poem.
It's about a dragon who lives in a tower (as dragons do), and, being bored there, plays the violin. He is visited by a princess, who scolds him but then they're reconciled and get married. He explains that he's fed up living in a tower like a dragon, and instead would like normal domestic life. He also says that the princess shouldn't be afraid of the dragon who lives beyond the marshes, because if he ever gets rude with her, he (the dragon) will tell him to leave and he'll go. (Yes, it's supposed to be ambiguous whether there's really two dragons involved.)
Anyone have any idea what the original poem might be?
This the W J Smith you have in mind?
https://www.amazon.com/Around-Room-William-Jay-Smith/dp/0374304068
This looks like the right poet, whether or not it contains the exact poem -- thank you!
I'm looking for a learning resource on how to test hypotheses using (frequentist or Bayesian) statistics. Basically: given a hypothesis, how do I then use data (e.g. samples) to refute that hypothesis?
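For a concrete starting point, the frequentist recipe in miniature looks like the sketch below (Python, scipy assumed installed; the data are invented): state a null hypothesis, collect samples, and compute how surprising the samples would be if the null were true.

from scipy.stats import ttest_ind

# H0: the two groups share the same mean. A small p-value means data
# this extreme would be rare under H0, so we treat H0 as refuted.
group_a = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8]   # invented samples
group_b = [5.6, 5.8, 5.5, 5.9, 5.7, 5.6]

stat, p = ttest_ind(group_a, group_b)
print(p)   # well below 0.05 for these particular samples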
We just had a realspace South Bay meetup (Sunday, October 24th). Thirty to forty people attended. If infection rates continue to fall, we plan to do another one in a month or two.
Scott reviewed Turchin on Secular Cycles some time ago on SSC https://slatestarcodex.com/2019/08/12/book-review-secular-cycles/ Now Bret Devereaux aka The Pedant went over this and other psychohistory-style theories in his Fireside Friday: https://acoup.blog/2021/10/15/fireside-friday-october-15-2021/#easy-footnote-3-9666 He is not impressed.
There was a brief discussion of this on the subreddit, Scott added a bit of a rebuttal concerning the main thrust of the article: https://teddit.net/r/slatestarcodex/comments/q9hy6t/historian_bret_devereaux_against_peter_turchin_of/
Suppose that a group of whimsical aliens decide they want to make Saturn's rings prettier so they teleport a new shepherd moon into orbit in the middle of a ring. How long will they have to wait before a new gap appears in the rings: are we talking years, millennia, or astronomical time scales?
The rings are probably moon(s) that have disintegrated from tidal forces. There's a distance called the Roche radius that depends on the dimensions and densities of the planet and the moon. Inside this limit, a moon that is held together by its own gravity will break up. Outside it, pieces of floating stuff will aggregate into a moon. Saturn's rings are inside the Roche limit, except for the weak F ring, which is just outside and may not be stable.
If the moon is solid it may not break up, though objects on its surface can lift off. I would also expect that even a solid moon will eventually start to crack from flexing. It would then break into pieces down to a size that is structurally sound and actually manages to hold itself together.
We can expect the aliens capable of creating or moving moons to know about this.
So maybe your question should read: How long till the moon becomes a ring? The rings themselves are unstable as gravitational interactions will fling stuff up and down but the moons are thought to mitigate this process. The age of the rings is uncertain.
That sounds true about real moons, but we're talking about aliens teleporting things. What if the moon is a (Saturn-moon-weight) solid piece of tungsten, or maybe nanomanufactured composite with ridiculous elasticity and toughness, or some kind of magically-stable shard of degenerate matter, or maybe even an itsy-bitsy tiny black hole?
It has to cause a disturbance in the rings via gravity, doesn’t it?
OK, here's a ballpark calculation. A small object placed 27,000 km from a 10-metre-diameter ball of tungsten in an empty universe will take about 7.5 million years to make contact under gravity. Most of the stuff in the rings is like 10 cm in size, and 27,000 km is the diameter of Saturn's main rings. So that's a ballpark coalescence time.
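If anyone wants to redo the ballpark, the textbook free-fall-from-rest time for a test particle is t = (pi/2) * sqrt(r^3 / (2GM)). Here's a quick sketch with my own assumed numbers; the answer is hypersensitive to the assumed mass and starting conditions (and a different starting assumption, like slow orbital drift rather than free fall from rest, gives a different order of magnitude), so treat any such figure as rough:

import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
rho_tungsten = 19_250              # kg/m^3
mass = rho_tungsten * (4/3) * math.pi * 5.0**3   # 10 m diameter ball, ~1e7 kg
r0 = 2.7e7                         # 27,000 km, in metres

t = (math.pi / 2) * math.sqrt(r0**3 / (2 * G * mass))
print(t / 3.156e7, "years")        # order 1e5 years under these assumptions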
Black holes may not be a good idea; please size your black hole carefully. Black holes evaporate by Hawking radiation, converting mass into radiant energy. For large black holes this is incredibly slow - like a proton's worth of mass-energy per zillion years - and takes until the end of time. It gets crazy fast for small black holes. The final million kilograms evaporate as energy in 46 seconds, ending with a flash brighter than the Sun.
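For sizing purposes, the textbook photons-only Hawking lifetime is t = 5120*pi*G^2*M^3/(hbar*c^4); counting more particle species shortens it, which is presumably where figures like "46 seconds" come from, so treat this sketch as order-of-magnitude only:

import math

G, hbar, c = 6.674e-11, 1.0546e-34, 2.998e8

def hawking_lifetime(mass_kg):
    # Photons-only estimate; real evaporation (more species) is faster.
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(hawking_lifetime(1e6))             # ~80 s for the final million kg
print(hawking_lifetime(1e12) / 3.156e7)  # a billion-tonne hole: ~3e12 years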
Has there been any recent progress (dis)proving the variable speed of light theory?
It's simple:
In order for the concept of a "variable speed of light" to be coherent, you must not use the metric system (since 1983 the metre has been defined in terms of the speed of light, so in SI units c is constant by definition).
No serious scientist will ever use anything other than the metric system.
Therefore, no progress on this concept will be made. QED
Steel-manning the Surveillance AI X-Risk > AGI X-Risk Case
Example: https://www.cnbc.com/2021/10/22/palantirs-peter-thiel-surveillance-ai-is-more-concerning-than-agi.html
There's a lot of focus (both in public AI-risk activism and within the community itself) on the failure mode where an AGI gets created with a buggy utility function and turns the universe into paperclips or whatever. There's another really bad failure mode, however, where the AGI gets created by the CCCP or whatever with some version of the utility function "make all humans obey Xi Jinping/Putin/<some evil group of people>'s diktats forever".
If making an AGI requires a huge Manhattan project style research effort (rather than being a hacker-in-a-room thing) then IMO having a country/world with liberal politics is a necessary (but not sufficient) condition for creating a good AGI. If you buy this argument, a crucial part of dealing with AGI x-risk is (a) making sure your favorite liberal countries have the leading AI programs and (b) pushing back on totalitarianism world wide.
When we add this to the fact that surveillance AI could lead to a bunch of other x-risk scenarios (e.g., impossible-to-remove worldwide totalitarianism, counter-value nuclear war with AI-guided missiles), IDK... It might be that the summed x-risk associated with all the failure modes widely used surveillance AI creates (including leading to bad AGI) outweighs the x-risk of bad AGI directly.
Well, so what? Articles of this sort don't ever seem to contain any actionable proposals, just one more dunk on the already low status "sci-fi" weirdos who actually try to do something, however misguided they may be. If Thiel is so worried, then why wouldn't he announce a $10 million grant to combat this horrible communist menace?
If telepathy was possible, wouldn't its enormous reproductive advantage have made it a universal trait?
Being able to read the minds of your prey, predators, and conspecifics seems like it would confer vast reproductive advantages over non-telepathic rivals.
Did any of those parapsychology researchers look into the heritability of such mental powers? Wouldn't such an advantage have made these kind of abilities universal among humans?
If reading minds could be evolved, then you would evolve the ability to read before evolving the ability to filter precisely such that you could distinguish friend from foe, or predator and prey. Seems like it would be very confusing and not necessarily a clear advantage, even if the ability itself cost nothing extra in terms of neural resources to have.
If visual sight could be evolved, then you would evolve the ability to distinguish light and dark and different colours before evolving the ability to filter precisely such that you could distinguish friend from foe, or predator and prey. Seems like it would be very confusing and not necessarily a clear advantage, even if the ability itself cost nothing extra in terms of neural resources to have. :)
Light/dark is far less subtle than "intent to kill". Even humans with evolved intelligences and an understanding of other minds have trouble identifying and understanding intent.
No, because evolution seeks out local maxima not global maxima.
The state of a lot of parapsychology research isn't so much movie-grade telepathy as "This dude guesses the right card 18% of the time which is a statistically significant improvement on the 1/6 odds of random guessing". The effects are really small (perhaps small enough that motivated researchers can consistently p-hack them into existence) and for all we know ESP burns a lot of calories so it's not obviously adaptive to the ancestral environment.
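To make "statistically significant but tiny" concrete, here's the arithmetic with invented counts (Python, scipy assumed installed):

from scipy.stats import binomtest

# Invented run: 1,800 hits in 10,000 guesses (18%) vs. a 1/6 chance rate.
result = binomtest(k=1800, n=10_000, p=1/6, alternative='greater')
print(result.pvalue)   # on the order of 1e-4: "significant", yet the
                       # absolute edge is only ~1.3 percentage points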
I thought these small effects were due to bad specification of the null hypothesis. Do you have reason to think otherwise?
If anyone could demonstrate the ability to guess right 18% of the time, that would be a revolution in itself. I would stop everything else I am doing and work on that.
Given the wide variety of issues we see in other fields, it would be surprising if they were *all* due to that one factor. Motivated stopping, file drawer, bad experiment design, outright fraud... there's probably a lot going wrong.
You are right. I was thinking of one study in particular where they found a hugely significant (N was very large) but very small effect, and it seemed to me that they miscalculated the expected probabilities under the null hypothesis, but yes, this does not have to be the only explanation.
Sorry to have missed today's meetup. I hope you enjoy your visit to Edinburgh
I've been thinking a lot about different types of reasoning styles, and I figure there are a few axes along which people can differ. Here they are, with examples of people at different ends of each axis.
First, we have the individuals/institutions axis. Here we consider whether we take the reasoning done by an individual to be trustworthy, or whether we consider individuals fallible and in need of institutions to come in and correct them. On the extreme individual end, we have people like Eliezer Yudkowsky arguing against epistemic humility, and on the extreme institution end we have the catchphrase "trust the experts", or Naomi Oreskes' method of consensus.
Second, we have the facts/paradigms axis. This axis asks whether or not you think the fundamental object of analysis is individual facts, or larger narratives and stories. On the paradigm side you would have people like Thomas Kuhn and most leftists, while on the facts side you would have reductionists like Descartes, or neoclassical economists.
Finally, we have the reason/intuition axis. On the reason side, we have people who think that the best way to find out if something is true is by carefully going through the justifications for every statement, and making sure that everything fits into nice syllogisms, or some other form of formal reasoning. On the intuition side, we can say that methods like that are potentially misleading, and instead trust "human" aspects of judgement. For reason, you might have someone like Richard Dawkins or Peter Singer, while on the intuition side you might have someone like James Scott or Joseph Henrich, or the entire idea of "lived experiences".
I think that these three axes provide a nice categorization of different types of reasoning that you can see nowadays. Most rationalist types probably fall into the individuals/facts/reason corner, while mainstream progressivism probably falls into the institutions/paradigms/reason corner.
I think that this provides a nice explanatory model for why so many rationalists tend to be libertarian oriented, as well. Libertarian analysis is extremely individualist, places skepticism in what they believe are fallible institutions, and tends to place high emphasis on mathematical/game theoretical models. This fits quite nicely with the individuals/facts/reason corner, which as mentioned before seems to be where rationalists typically fall.
Do you think this categorization is useful? Is there anything in particular that you would add/change about it? Where do you fit on it?
I don't agree with your basic assumptions. I don't believe that human reasoning can be mapped out on such simple (simplistic?) heuristic coordinate axes. And even if you could map human reasoning "styles" along two- or three-dimensional axes, I doubt that humans would necessarily reason according to these pre-mapped categories all the time, some of the time, or any of the time. Moreover, I think it's a cognitive mistake to assume that people can't adopt modes of reasoning that contradict the modes they'd use under different circumstances.
Furthermore, I would argue that reasoning is only one of the sub-components of consciousness, and much of the problem solving we do throughout the day in waking consciousness (and probably in sleeping and dreaming consciousness) is curiously resistant to self-analysis; although it can sometimes be logical and methodical, it mostly isn't. For instance, when you're trying to catch a ball, you're solving a calculus problem in micro-seconds without any deductive or inductive reasoning. And part and parcel of our consciousness is the way we process our qualia, which can be amazingly efficient, but which can also deceive us by seeing patterns where there are none. Underneath it all we've got some emotional/instinctive components that we may have little control over but that lead us to make non-rational problem-solving decisions (which we may sugar-coat with a patina of rationality).
I am *not* saying that we don't on occasion reason in the modes you're implying. We do. But reasoning (i.e. using logic to solve problems) is only a small part of how we problem-solve and arrive at answers.
My first impression is that the axes are not adequately fundamental. For example, I might be happy to concede that a lot of institutions that exist around me are broken/corrupt, which might nudge me toward individualism, but that's not the same as a belief that institutions are inherently less trustworthy than individuals. And reason/intuition, the way you frame it, seems like just another name for rider/elephant, Type 2/Type 1 thinking, etc. In my observation, rationalists are quite open to the idea of mining formless, subverbal intuition for useful info - if memory serves, CFAR even included Gendlin's focusing technique in its coursework.
Facts/paradigms appears to me to be the strongest of the three, and the most interesting. I think it does say a lot about an individual if they are committed to overarching theories and firmly held heuristics, whatever their content might be. It probably tracks well with the good, old 'is/ought' distinction, which is one of my own main rules of thumb for classifying people. Some try their hardest to develop a clear view of how things are, others are far more concerned with how things ought to be.
You might be right about the institutions axis not being very fundamental, perhaps I should think about that a bit more.
I think that you misunderstand the reason/intuition axis. It's not a reference to type 1/type 2 thinking, but rather asks if intuitive judgements are valid at all, or if you need to ground everything and reason from first principles. For an example of what I am talking about, suppose I ask you if consensual cannibalism is okay. You might try to figure out exactly what the consequences are and whether or not we should live in a society that allows such a thing, what laws restricting it are doing, etc. You also might just say that it is clearly obscene and wrong, and take that to be sufficient evidence. This isn't really system 1 and system 2 so much as the "wisdom of disgust", which is an intuitive thing in the sense that I am trying to get at. Another example is the idea of lived experiences; you have certain experiences, nobody can tell you that you don't have them, and they provide reliable information about the world. The fact that you haven't done a meta-analysis of randomized control trials doesn't really matter from this perspective, and in fact might even be misleading. Is that type of reasoning legitimate? Well, that depends on who you ask, so I think it makes for an interesting distinction.
"so many rationalists tend to be libertarian oriented"
Do the SSC/ACX surveys show more libertarian readers than centre-left ones?
The surveys do indeed show more center left people, but Scott has called himself a Libertarian, Julia Galef has defended Libertarian ideas, many people here are into things like prediction markets and cryptocurrency, etc. From my own experience hanging out in the discord and these comment sections, Libertarian ideas seem to be very overrepresented compared to the mainstream.
I think this is distorted by self-categorization. Being socially liberal seems to be a simple moral decision for rationalists. Placing yourself on the fiscal Conservative-Liberal axis requires a deep understanding of numerous economic principles.
Choosing to define yourself as Conservative in the absence of a complete understanding of both the rules of the game (economists aren't even sure) and the values of all the state variables seems to be a Rational decision. Except that conservative in this case should be lower-case [c]onservative. Centrist or Moderate seems to be a more accurate label.
I wouldn't call trusting institutions "reasoning".
can you elaborate?
A bit tongue in cheek, but in most cases deferring your judgement to an institution is intellectual laziness (or just prioritizing).
Driven individuals will almost always reach better conclusions than committees and organizations, as they have motivation to get it right without getting mired in perverse incentives and politics.
Once you see how the sausage is made several times you lose the taste for sausage.
You've dismissed a main counterargument by putting it in parentheses, but it is still valid. I know it is trendy in contrarian circles to be anti-institution, but this isn't thought out. You can't go at it with a sledgehammer and always reach your own conclusion; you need to use a scalpel to know when and where to be skeptical of institutions. Without infinite time, a driven individual will *not* reach better conclusions.
"In practice you can never completely eliminate reliance on authority. Good authorities are more likely to know about any counterevidence that exists and should be taken into account; a lesser authority is less likely to know this, which makes their arguments less reliable. This is not a factor you can eliminate merely by hearing the evidence they did take into account."
Part of the issue is determining which authorities are "good" and which are "lesser" or even "not an authority in this field." There are plenty of scientists or other experts who speak out on topics that interest them, often playing on their name recognition, that are outside of the areas they understand better than a layman. Paul Krugman comes to mind.
On another note, a lot of what's come out of the CDC over the last 18 months has not been the most reliable information, especially in how they interpreted and summarized the information.
I disagree, at least for a reasonable definition of "driven" that doesn't include spending years of training to become an expert in the topic. Who will give me a better answer on any astronomy question: NASA, or a random person on the street? Who will give me a better answer on how to fill out my tax form: the IRS, or your gardener? A "driven" gardener might spend 3 hours researching the topic rather than 3 minutes, but that's still not enough to understand the foundations of modern astrophysics.
Do you think NASA is the single best source for "any astronomy question"? Surely there are better sources for specific information you might want to know, even if they might make a good general source of information. Not all astronomers work for NASA or share information with them. If you want to know something that happened last night in a particular section of the sky, local amateurs may actually be the best source of information.
> Who will give me a better answer on any astronomy question: NASA, or a random person on the street?
That's a false choice that ignores the many failure modes of blind trust in institutions.
A slightly better example might be: who will give you better and more prompt advice on viewing conditions near the arctic circle, NASA or the local Inuit?
Clearly institutions simply cannot capture all pertinent local information, and they also suffer from various structural problems. They're useful, but they are not sources of unvarnished truth.
I think what's implied here is that the driven gardener is someone who spends 10-15 hours a week on their garden for years on end, and that such a person would have better conclusions than the local "ask an expert" hour at the botanical garden.
I don't agree with this, btw, as I think the dedicated amateur probably puts in a lot of time interfacing with those local committees of experts, but it's a plausible argument.
I was saving this till we passed the Hoary Astrologer part of the thread because it isn’t very important.
I’ve always had trouble with the 1-10 rating of pain at my doctor’s office. It’s a problem of calibration. What would qualify as a 10?
I’m pretty sure I’ve never felt a 10 level of pain so how do I scale my radius compression fracture? I mean it’s not nearly as bad as passing kidney stones. - Go through that a couple times and you become a fiend about hydration -
So yeah, it hurts a bit. Do you want to know if I need codeine or something? No, it's not that bad. But now that I think about it, that stuff does produce a rather pleasant mood; still, no, I don't *need* it. Ibuprofen can handle this one.
Now the time I had my four wisdom teeth extracted and had 4 dry sockets, that was painful. Even Percodan didn't completely dull it. I'd have to give it a solid 8, maybe even a 9, but you know, I want to leave room at the top end in case something worse comes along.
When I had reconstructive surgery on my face to pop my cheekbone back out after that sandlot football mishap, I woke my surgeon up with a call to his home after the coagulated blood re-liquified and started to find some way out of my head. Sumbitch, that probably *was* a 9. But who knows? Things can always get worse.
So the upshot is: most of the time I just shrug and say 2, maybe a 3.
I’m curious how other people approach this.
As other people have mentioned, I think it makes the most sense to think of the impact the pain has on you. For example: https://i.redd.it/xoh2o5y09ed21.png
Such a scale is really dumb. I have had the misfortune of experiencing 10-rated pain.
Each time, it reclassified previously experienced 10-rated pain as "5 at most, that was not so bad compared to this".
Is the 10 incident the origin of your username?
Two classics:
http://hyperboleandahalf.blogspot.com/2010/02/boyfriend-doesnt-have-ebola-probably.html
https://www.explainxkcd.com/wiki/index.php/883:_Pain_Rating
I recall reading a story where a patient for some reason couldn't have anaesthetic during a surgery; in recovery afterwards, the patient asked for more painkillers, the nurse asked them to rate their pain on the 1-10 scale, and they responded, after some thought, with a "2". The nurse was about to dismiss their request for painkillers, then read their patient chart and realised what their "10" was calibrated to. Classic internet just-so story, but it highlights the issue well enough.
Personally, I think you're expected to work backwards from "how much do I need painkillers" - 2-3 is an ache, 4 -5 is "need painkillers to be able to concentrate or sleep", 7+ is "actively screaming in agony". (on that scale, the worst I've ever experienced personally was a 6, but I'm quite capable of imagining worse)
I've found the scale to be mostly meaningless as well.
In general, I'm rarely in pain. When I was having contractions, and the phone nurse was trying to figure out if they were actual labor contractions or not, she asked me to rate them on the 1-10 pain scale. I told her probably 4 (I assume there are way more painful things), and she seemed to think that meant I should wait an indefinite period of time to do anything further, possibly days? My husband looked at my behavior, and thought I should have rated it higher. Eventually they induced for high blood pressure anyway. But I think in retrospect the behavior approach probably made more sense. If I can't think about anything at all, or talk coherently, and am compulsively pacing through the whole night (despite having slept soundly every single night prior), those are the relevant symptoms, not whether I can imagine someone in worse pain or not. I suppose if it came up again I would check my blood pressure before calling.
A little while ago, there was a lot of discussion about schooling. I decided to take a dive into Wikipedia to look at the history of schooling in the US. The result is now on my blog:
http://thechaostician.com/a-brief-history-of-schooling-in-america/
One possible explanation for the question 'Why don't we have Puritans anymore?' is that we replaced the Puritan education system with the Prussian education system.
I didn't realize the "one room schoolhouse" had a different religious history than current conventional schooling.
I've seen discussion of what's wrong with the Prussian model-- John Taylor Gatto is good-- but much less about how the Puritan model worked.
It seems to me that ideology drives the choice of subject matter, but the size of school is a matter of population density and transportation. But maybe size and subject matter got conflated somehow?
That's not true. Puritans worked on a parish model, and if parishes got too large they'd split them; they had tiny urban parishes that could take up a block. The Prussians found this economically inefficient and centralized schooling into large districts. Likewise, these small schoolhouses were subject to much less central control than the Prussian system.
I don't know about that, but the one room schoolhouse probably does more to let students learn at their own pace, while conventional schools imply that people need to be subordinated to organization which is convenient for someone else.
In a small community, a one room schoolhouse was the only option. They didn't have enough kids for a bigger school.
That's true, but is there a reason to not have schools structured that way (either a lot of separate single rooms, or one-room structures in a larger building) in more densely populated areas?
Montessori schools are fairly popular, and they work on a different model. I know they let kids learn at their own pace with guidance from the teachers, and I believe they do a degree of age mixing. OTOH, my kids only went to a montessori preschool, so I'm not sure how they work in practice at the elementary or higher levels. Anyone who has kids in one of these feel like commenting?
Any suggestions for good Discord discussion servers? The one linked to on the blogroll of this substack is surprisingly disappointing, and substack's comment system is nigh-unusable.
Horary astrologer in the traditional method here. Performing analysis of various questions is how I get better at horary astrology, so I would be pleased to use what I know to answer any inquiry you have. My email address is FlexOnMaterialists@protonmail.com.
Did you end up making any registered predictions yet?
Well, I'd rather not blogspam our gracious host (yet) and in the interim I don't have a good grasp of prediction markets (cursory perusal of Data Secrets Lox didn't reveal the real-money-betting part of the forum). Could you recommend a site?
Is betting money essential? If you'll settle for boasting rights, you can register predictions at predictionbook.com.
Fair enough; if Urania deigns to grant me success, this site seems as good as any.
I apologize if this is not permitted, as it is technically an ad. I'm still looking for participants for a small study. (LW link (includes an email, you don't need an account): https://www.lesswrong.com/posts/HjFkEcw26GGHrjMXu/?commentId=EbH9i4o5ExeDfjojL). I require one university course (course, not degree) in computer science or statistics, but no knowledge of image processing. And the compensation is quite generous ($60 for ~60 min). I'm still hoping to get by without using Mechanical Turk.
Edit: The maximum number of participants has now been exceeded, so unless several people drop out (in which case I'll post an update), further applications won't have a chance.
Can someone point me toward a decent study or report showing that the Covid-19 vaccines slowed the pandemic? I'm most curious to see their action in comparison to other flu or similar pandemics (or just epidemics). So far all I've found are articles that say, "duh, stupid…scientists said so!", others that describe reasons not enough vaccines were given, reasons why they might not be as useful, or arguments for/against them causing variants. I just want to know if it worked, and from what I can tell this pandemic followed about the same path as other pandemics, which makes me unsure the vaccines really had much of an impact. I haven't seen anything that doesn't appear totally spoiled by ideology, and I'm sincerely not trying to be a troll.
I'm not sure this is the same question as what you're asking, but here's a related question.
Were vaccines, as they were actually deployed, a cost-effective intervention? The obvious costs are money + side effects, the benefits were at some point hailed as "stop the spread and return to normal immediately" but subsequently turned out to be the more modest "reduce the spread (but not stop it), stop most of the severe cases, hopefully maybe advance the return to normal". What does this work out to in $ per QALY? Were there realistic paths towards making that ratio smaller, which would presumably count as a more effective way to deploy the vaccines?
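As a back-of-the-envelope shape for that $/QALY calculation (every number below is a made-up placeholder, which is exactly the problem with answering this):

# $/QALY sketch; all inputs are invented placeholders, not real estimates.
population = 100_000
cost_per_dose, doses = 20, 2             # $ per dose, doses per person
deaths_averted = 50                      # per 100k vaccinated (assumed)
qalys_per_death_averted = 10             # assumed

total_cost = population * doses * cost_per_dose
total_qalys = deaths_averted * qalys_per_death_averted
print(total_cost / total_qalys, "$/QALY")   # 8000.0 under these inputs

Swap in whichever inputs you find credible; the ratio moves linearly with each of them.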
Sure. There are tons of these at the CDC's website, derived straight from the data:
https://www.cdc.gov/mmwr/volumes/70/wr/mm7037e1.htm
Nut graf for you is probably this one in the summary:
"Averaged weekly, age-standardized incidence rate ratios (IRRs) for cases among persons who were not fully vaccinated compared with those among fully vaccinated persons decreased from 11.1 (95% confidence interval [CI] = 7.8–15.8) to 4.6 (95% CI = 2.5–8.5) between two periods when prevalence of the Delta variant was lower (<50% of sequenced isolates; April 4–June 19) and higher (≥50%; June 20–July 17), and IRRs for hospitalizations and deaths decreased between the same two periods, from 13.3 (95% CI = 11.3–15.6) to 10.4 (95% CI = 8.1–13.3) and from 16.6 (95% CI = 13.5–20.4) to 11.3 (95% CI = 9.1–13.9). Findings were consistent with a potential decline in vaccine protection against confirmed SARS-CoV-2 infection and continued strong protection against COVID-19–associated hospitalization and death."
An IRR of 11 means you are 11 times more likely to get the disease if you are unvaccinated than if you are vaccinated, or, to put it the other way around, you are 1/11 as likely to get the disease if you are vaccinated versus unvaccinated.
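To pin down the arithmetic, a toy computation with made-up numbers:

# Toy incidence rate ratio; counts invented to mirror an IRR of 11.1.
cases_unvax, person_weeks_unvax = 111, 10_000
cases_vax, person_weeks_vax = 10, 10_000

irr = (cases_unvax / person_weeks_unvax) / (cases_vax / person_weeks_vax)
print(irr)   # 11.1: unvaccinated incidence is ~11x the vaccinated incidence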
So that's crystal clear in terms of the numbers of cases, hospitalizations, and deaths and how vaccination affected them (it decreased them all, by a lot).
If you're asking a more subtle question about the rates of change of those quantities (e.g. irrespective of how *many* cases we had in July 2021, what was the *speed* with which those cases were diagnosed and proceeded to some conclusion?), that is much harder to answer. One assumes that if the highly vulnerable are in sufficiently close contact, then the *speed* with which the disease spreads will not be especially affected by the presence or absence of less vulnerable (e.g. vaccinated) individuals. That is, the disease will spread just as "fast" -- it will just poop out sooner because it will run out of highly vulnerable individuals faster.
Thank you, that was a pretty good start, but I don't think it's quite getting at what I'm asking, which may be the more subtle question you mention.
An age-adjusted IRR looking at case rates during the Delta wave does indicate the vaccine's effectiveness in preventing bad outcomes and, to some extent, in reducing transmissibility of the Delta variant (though the data also indicate this difference is narrowing, demonstrating a decrease in vaccine effectiveness). We know Delta is more transmissible; the data seem clear on this. Yet there's precious little data about the danger of this variant in terms of mortality and long-term health risks.
The best reporting I can find via Google says that hospitalizations are up in the US (presumably from the lows of the spring and early summer), and they all cite the same two studies from Canada and Scotland using hospitalization rates to determine the overall danger of the variant. (Worth noting, Duck-Duck-Go produces a greater variety of articles when searching for "is delta variant more fatal", another chit in my "don't trust Google" mental Bozo bucket.) Aside from the monolithic nature of the reporting, I have an issue with this as a useful data point. Imagining ICUs as buckets, the initial Covid surges of 2020 were like dumping a swimming pool of water into every bucket at once. Watching the infection rate data from Google's search results (using NYT data, presumably from the CDC), what I saw with the Delta variant, at its worst, was like pouring a bathtub of water into random buckets for a much shorter span of time. In both instances, ICUs are overloaded, but the difference looks like almost an order of magnitude. The major by-line of the past months has been "hospital ICUs over-taxed!" but I don't see how it's a useful comparison to the original surge, when the by-line was about bodies in the street and finding bodies weeks later.
I hope I'm making it clear why it's difficult for me to formulate a current threat level when it doesn't appear we're comparing apples to apples. It seems unlikely to me that sans vaccines Delta would have been more dangerous and deadly than the initial surge; absolute numbers of deaths might have been higher, ICUs might have been more taxed, but it's pretty clear that even at its worst Delta's impact was never going to match the primary attack. And I think this points at the question I'm actually interested in.
Having glanced over the infection rate data for other air-borne, respiratory flu-like infections, the course Covid-19 has taken appears roughly the same: initial attack, huge secondary surge, big decrease, then a third smaller surge, then smaller and smaller, more seasonal surges perpetually or until the strain is replaced by something we consider different. This is where I lose confidence, as I'm not sure how to compare this to previous similar(ish) pandemics, nor how to find relevant past data. When I look at this (https://www.cdc.gov/flu/pandemic-resources/1918-commemoration/three-waves.htm) or this (https://www.researchgate.net/figure/Three-waves-of-the-2009-H1N1-influenza-pandemic-in-Thailand-Source-Bureau-of_fig1_228506946), the charts look almost exactly like the chart for Covid infection rates, and as far as I know there were no vaccines in 1918, and 2009 vaccine rates peaked in the US at 60%. So, sure, vaccines may have saved more lives in absolute terms, but did they really affect the course of the pandemic?
Some of this depends on what you're looking for. What you would ideally want is several dozen populations, each with similar behavior, weather, genetics, age distribution, etc., largely isolated from each other, and with differing vaccine status (hopefully half of the populations highly vaccinated and half not). That would give us the most powerful and significant test, if we could then notice that the epidemic trends in different ways in the different sets of populations.
The problem is, it's hard to even get *two* populations that are matched on these features, whether or not they differ in vaccination.
Right now, I think there are several lines of evidence pointing towards some sort of population-scale effectiveness of vaccines.
First, we have population-scale evidence that the alpha variant was a lot more infectious than the classic variant, and the delta variant was a lot more infectious than alpha - this is because many different locations were on a general downward trend in infections while one variant dominated, while the next variant was increasing from a tiny rate, and overall infections went up once that next variant became dominant. The fact that the alpha variant caused only a small wave in the United States (I believe not even as large as the summer wave in 2020) is suggestive that vaccinations have been highly helpful (though it's hard to separate out the effect of post-infection immunity from vaccine-related immunity). Similarly, the fact that the delta wave has been mostly infecting unvaccinated people, even though vaccinated people are a majority of the US population, is further suggestive that vaccination has made this wave much smaller than it could have been.
Another line of evidence is all the studies showing reduced risk of symptomatic infection or positive test among vaccinated people compared to unvaccinated people. The initial vaccine trials were the only double-blind ones, and they tended to find the largest effect sizes. It's theoretically possible that just as many people are getting infections and transmitting them to others while vaccinated, despite not getting symptoms and not testing positive, but this seems highly unlikely.
It's hard to say what it means that "this pandemic followed about the same path as other pandemics". I believe it's followed quite a different path from plague and HIV, but you might be restricting to respiratory pandemics with similar infection time, like influenza. But again, we know that different influenza pandemics have had structurally similar, but observationally still quite different, patterns - 1918 was worst, 1956 and 1968 and 1977 seem to have been smaller, and 2009 looked quite different as well. It's hard to know if covid would have been very different from these if it hadn't been for the various distancing measures and vaccination, or if these did nothing to change the overall shape, or what, since we don't have a big enough sample of respiratory pandemics to know what fraction of them have how many waves of what relative sizes, lasting how long.
If you have something more specific about the "about the same path", that would be interesting to see, to figure out whether there are any statistical tests that would or wouldn't show differences between 1918, 2009, 2020, and things like annual flu.
Maybe someone has a direct study for you (I'm not particularly literate in these areas) - but I guess my question is: there are a lot of individual studies on the effectiveness of each vaccine - e.g. "Vaccine A is X% effective against preventing Y strain of COVID-19, and (X+Something)% effective at preventing serious cases" - do you disbelieve these studies as a whole? If so, why?
If not, how can a large chunk of the population being given these vaccines with fairly high effectiveness *not* slow the pandemic? Certainly the greatly diminished death rate in vaccinated individuals should count as "slowing the pandemic" by some regard, right?
There may still be a study for it (... though identifying a control case may be tricky - since there's no country saying "you know what, let's skip the vaccines" AFAIK)... but this feels a lot like the "must I believe this?" approach to evidence gathering (vs. "can I believe this"), which speaks to a biased view coming in.
I’m open to the possible soldier mindset going on, but really is it true? Did the vaccines stop or slow the pandemic? How much?
I can only find information about the effectiveness of specific vaccines, but I'm not convinced that's enough to draw the conclusion that it *must* have worked. Did enough people get it soon enough to make a difference? Did the imbalance of distribution in some countries negate the impact in others? Did the virus already kill most of the people it was going to kill before the vaccines came out? I think we should expect some kind of post mortem coming soon, no?
Anyway not trying to start a fight so better to let this thread die than let it get out of hand.
How much did it slow the pandemic? Well, the cheeky answer is "it slowed the pandemic more than if nobody was vaccinated, but less than if everybody were vaccinated"... but the actual question is "compared to what?"
If you're comparing to a hypothetical situation without any vaccination at all... well, I'm not sure the follow-up questions make sense.
> Did the imbalance of distribution in some countries negate the impact in others?
If the vaccine saved X people across all the countries where it was widely available... surely it didn't "negate the impact" by *killing* X people in countries where it was less available compared to a hypothetical situation where there's no vaccine at all. Being concerned about imbalanced distribution seems to presuppose that the vaccine is effective.
> Did the virus already kill most of the people it was going to kill before the vaccines came out?
Clearly not, as the pandemic is still going with hundreds of thousands of new cases each day. A bit too soon for a post mortem as we aren't yet post mortes.
> Did enough people get it soon enough to make a difference? [Did it work?]
Again the glib answer is "if it saved one person, it made a difference to them", but again I'm not sure what you're comparing against. Surely it made a difference compared to the hypothetical where there are no vaccines at all, and continues to make a difference.
Yes, there's a cost to all things, and I think there's merit in a cost/benefit analysis re: things like lockdowns... but with the per-vaccine efficiency rates, and the ongoing number of cases, it's very hard for me to imagine a reasonable threshold of cost/benefit that the vaccines don't clear.
>Clearly not, as the pandemic is still going with hundreds of thousands of new cases each day. A bit too soon for a post mortem as we aren't yet post mortes.
I'm not sure I agree with this point. A fair number of health services have already declared Covid-19 endemic, either because they achieved a high vaccination rate or simply had enough of the population infected that herd immunity (for lack of a better term) was achieved. Two that immediately come to mind are Denmark and Iowa, but I think there are a fair number of others. Declaring the disease endemic is subjective and will happen in phases as populations differ, but in this, as with all things Covid-19 related, I expect the CDC and the WHO to be behind the curve.
If the disease is endemic, we'll still see infections and deaths, just not everywhere all at once in huge numbers. I was under-confident in this prediction on Metaculus by about 1 week; regardless, the trend is definitely down and has been for over a month (https://www.metaculus.com/questions/7627/date-of-va-covid-deaths-peak-before-1-oct/). The start-to-date graphs of the disease, across geographical areas, map almost exactly to the 1918 and 2009 flus. While surely fewer people died, particularly the elderly, due to vaccinations, I don't see how the vaccines have really impacted the overall trajectory of the disease, and it seems weird to simply say that vaccines must be the reason when there could be a confluence of factors.
I'm very sensitive to the possibility I may be in 'Soldier mindset' here, but there's an accumulation of cruft making it very hard for me to distinguish "must I believe this" from "is this true", when I feel like I'm asking "is it true" of just about everything I encounter from the press and online media. The idea that vaccines are the only way this pandemic ends seems ridiculously simple and not in line with history or the data, yet it is the only message I see. I'm not debating the merits of the vaccine for any individual (children excepted, but that's a different discussion), simply the idea that everyone has to be vaccinated for this to all be over. I don't get that, and I'm reasonably confident this is already over, if not very very close.
Sort of a tangent, but I'd also be careful about putting up the "Mission Accomplished" banner about this pandemic being over too soon.
I can't help think of Slovenia which declared the pandemic "over" in May of 2020, had basically no cases up to October of last year, but now has as many deaths per capita as the US.
https://www.cnbc.com/2020/05/15/slovenia-becomes-first-eu-nation-to-declare-end-of-covid-19-epidemic.html
https://covid19.who.int/region/euro/country/si
Saying the virus is now endemic rather than pandemic seems like a semantic distinction with no relevance. Remember that the statement I was replying to with "Clearly not" was:
> > Did the virus already kill most of the people it was going to kill before the vaccines came out?
It doesn't matter if you say the virus is now "endemic" and technically it's no longer a "pandemic". That may be true, but the statement that "the virus already killed most of the people before the vaccines came out" is still clearly not true.
---
And I don't think there's much doubt that the pandemic would end eventually, even without vaccines - the two possibilities have *always* been herd immunity or total elimination of the disease, and I don't think anyone expected the second.
Vaccination 'just' speeds up the process of getting herd immunity and minimizes the sickness and death required to do it.
So, yes, vaccination doesn't change anything, in the sense that the overall trajectory of the virus is still "it runs until enough people are immune for herd immunity to kick in"... but the time and number of deaths required between the two scenarios is hardly a mere semantic difference.
It seems like you're arguing that since vaccination wasn't the only factor, it must not have "worked".
My go-to demonstration that vaccines alter the epidemiology of COVID-19 in a particular context comes from long-term care homes in Ontario, Canada. Using the epidemiologic summaries from the link at the end of this comment as a source, there are only very few (<1 per day on average) cases in residents of long-term care homes at the moment - whereas at a comparable point (similar total number of daily cases in the province) before vaccination was available - I'll pick October 5, 2020 - there were around 10 cases per day. What I take away from this is that cases in long-term care homes aren't turning into outbreaks among residents. I'll admit I don't have direct evidence for that last statement, but it seems to be the easiest way to explain that data. At least 92% of long-term care residents are fully vaccinated (according to my second source, from February; I seem to remember something like 95% during the summer but I can't find a source for that easily). This also seems to explain why we aren't seeing total suppression of transmission in areas with high vaccination rates: nowhere has really achieved such a high vaccination rate.
As to whether this could be attributable to other measures used to control transmission: to my knowledge, there haven't really been any new measures put in place recently (since October 2020 at least, which is when my first reference point is) to try to control transmission in long-term care homes. Indeed, if there were any new measures introduced that had an impact, it would be quite a damning indictment of the initial response to COVID-19 in long-term care homes. I tend to think we would've heard about it if such a measure was implemented, though this is a bit of a weak argument.
I've also included a link to an analysis from Ontario's COVID-19 advisory board in March 2021 seeming to come to similar conclusions as me (though I haven't read it since I just found it after already doing most of the research).
This does raise the question of what vaccination rate is actually necessary to control transmission in the community more broadly (as opposed to long-term care homes) without other public health measures. My guess is that it's somewhere around 95%, at least with the vaccine mix we're using here in Canada - but that's just a guess (a rough version of the textbook threshold calculation is sketched below, after the links). It probably wouldn't work as well if there were reservoirs with lower vaccination rates.
https://covid-19.ontario.ca/covid-19-epidemiologic-summaries-public-health-ontario
https://covid19-sciencetable.ca/sciencebrief/early-impact-of-ontarios-covid-19-vaccine-rollout-on-long-term-care-home-residents-and-health-care-workers/
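On the threshold question above, the textbook starting point is the well-mixed herd-immunity formula; a rough sketch (R0 and vaccine effectiveness are free parameters chosen only for illustration, and real populations are not well mixed, so treat this as orientation, not prediction):

```python
# Classic well-mixed SIR result: transmission declines once the susceptible
# fraction drops below 1/R0. With a vaccine that blocks transmission with
# effectiveness ve, the required coverage is (1 - 1/R0) / ve.
def required_coverage(r0: float, ve: float) -> float:
    return (1 - 1 / r0) / ve

print(required_coverage(r0=6.0, ve=0.85))  # ~0.98 for Delta-ish guesses (assumed)
```

With Delta-ish guesses the formula lands in the same ~95%+ territory as the guess above, and it degrades quickly if either parameter is worse.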
I don’t think a discussion is a fight?
Here in Ireland 90% of people are vaccinated with the proportion unvaccinated higher amongst the young. Yet hospitalisations are 50% unvaccinated and the “vast majority” in ICU are unvaccinated. That indicates a potent reduction in danger when vaccinated.
https://m.independent.ie/irish-news/politics/vast-majority-of-those-in-intensive-care-with-covid-are-not-vaccinated-leo-varadkar-tells-fine-gael-meeting-40969152.html
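To spell out why those headline figures imply a potent effect: if roughly 10% of people are unvaccinated but they supply about 50% of hospitalisations, the per-person risk ratio is about 9. A quick sketch (the 10% is rounded from the 90% coverage above, and this ignores age structure; since the unvaccinated skew young, the age-adjusted ratio would likely be even larger):

```python
# Risk ratio from population share vs share of hospitalisations.
frac_unvax_pop, frac_unvax_hosp = 0.10, 0.50

risk_ratio = (frac_unvax_hosp / frac_unvax_pop) / (
    (1 - frac_unvax_hosp) / (1 - frac_unvax_pop)
)
print(round(risk_ratio, 1))  # 9.0 -- ~9x the per-person hospitalisation risk
```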
I have a student flat that's like 3 minutes walk from the Edinburgh meetup. And plenty of teabags. (A few vaccinated people in the communal kitchen should be OK if sheltering from rain.)
Hi all. Long-time lurker, first-time commenter.
Does anyone have a back-of-the-envelope number for the percentage of de novo germline mutations attributable to ionising radiation (as opposed to endogenous mutagenic processes)? I suppose it would be species-specific, so I guess I could narrow the scope of the question to Homo sapiens in particular, but my interest in the question is broadly along the lines of: approximately how much of the mutative "raw material" supplied to natural selection in the evolution of organisms in general is due to the sun?
A lot of the literature I went through was focused on narrow-scope quantification of medical risk due to exposure to mutagens etc, and was therefore irrelevant to the objective of my search. Can someone point me in the right direction?
Unfortunately I don't have any numbers to put on it better than <<50%, and I don't have any sources I can link to in English. But from my limited understanding, at least for complex organisms (eukaryotes, and especially sexually reproducing eukaryotes), ionising radiation and other "external" sources are not a significant source of the de novo mutations used for evolution - most of those come from errors during replication and damage factors internal to the cell, and there seems to be some evidence that organisms have a degree of control over those mutation rates in different regions. E.g. Homo and IIRC other apes have highly elevated mutation rates in the regions of DNA associated with brain functioning. And there's some complicated semi-random-id-generation thing going on with the genes coding for pheromones in the species that use them to recognize their kin. And generally the Boring Billion can be taken as evidence that evolution on purely random external mutations tends to be extremely slow. So AFAIU if you took away all external radiation, it would slow the evolution of complex life a bit but not too much, and even that bit would perhaps be rectified eventually to whatever is optimal for the species in its environment.
Does it matter to you whether the source of the ionising radiation is solar? I don't know if any studies have been done on humans (pesky ethics concerns with irradiating people), but there's fascinating data to be had from eg. the bacteria found living in nuclear power plant coolant loops. There's also the process of deliberately inducing mutations in crops with radiation, that the commenter before me mentioned.
It does matter to me in this instance, yes, given that what I'm looking for is something like "in the course of the evolution of the entire biosphere across evolutionary time, what approximate percentage of de novo germline mutations in everything from archaea to mammals was due to exogenous factors as opposed to endogenous factors like replication errors and oxidation? Like, is it closer to 2%, or 50%, or 98%?" I assumed at first this would actually be a Googleable number, and then it turned out I couldn't crack it even with an hour's shallow-diving.
Along the lines of what you and KieferO suggest, there are a lot of quite precise data for metrics like CNVs-per-gray of ionizing radiation exposure deriving from studies of irradiated rodents, and lower resolution data deriving from the "natural experiments" of the twentieth century. But this doesn't really answer my question.
In the case of Homo sapiens, I'd guess way less than 2%. Sunlight doesn't really reach germline cells much.
Do other sources of radiation like radon and carbon 14 count?
Well, just a rough exogenous vs endogenous split would be great at this point, but yeah, if you know the specific mutative contribution of those two then that would be great too.....
Sorry, that's as much as I've got, I don't even have a ranking for various sources of radiation. And of course some of them vary locally and some of them vary over time.
You might look at atomic gardening: https://en.wikipedia.org/wiki/Atomic_gardening . If anyone was very carefully recording the useful (to humans) mutation rate per gamma ray, it would have been the people who used cobalt-60 to make the modern grapefruit.
For reasons given above, this doesn't really answer my question. But it was an interesting read, so thank you!
This seems to be an AI-optimistic blog. I'm more of a skeptic. I'll believe we are on the way to the singularity (maybe) when speech technology is able to not just translate letters and words into speech but to modify the output based on context. A good example is irregular verbs where the past tense is spelt the same but pronounced differently. I spoke this to my phone today.
“ I went to the park yesterday, while I was there I read the paper. I read the paper every day, I like to keep ahead of the news.”
I pronounced the first read as red (/rɛd/) and the second as reed (/riːd/). When I asked the phone to play it back, it pronounced both as reed (/riːd/). Humans get it right.
I've never seen a text-to-speech AI do this. Has anybody? It's a hard problem: not just learning to map symbols to sound but understanding the context of the entire sentence, or even paragraph.
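For what it's worth, a large part of the context problem is part-of-speech disambiguation. Here's a minimal sketch of that sub-step using NLTK's off-the-shelf tagger; whether the tagger resolves any particular sentence correctly depends on its model, and real TTS frontends use more elaborate homograph-disambiguation machinery, so this illustrates the approach rather than a solution:

```python
# pip install nltk; on first run also:
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
import nltk

sentence = "Yesterday I read the paper, and I read the paper every day."
tags = nltk.pos_tag(nltk.word_tokenize(sentence))
print([t for t in tags if t[0].lower() == "read"])
# A tagger that gets this right emits VBD (past tense, /rɛd/) for the first
# "read" and VBP (present tense, /riːd/) for the second; a TTS frontend can
# then choose the pronunciation from the tag.
```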
Honestly, that strikes me as an *easy* AI problem, on an objective scale in which a manual typewriter defines the lower end and your average Target checkout clerk the upper end. Children can manage that task when they are 1 or 2, without any benefit of organized training -- just listening to people. You have a well-defined algorithm which will tell you the pronunciation of words and how it changes in context. It's not a *simple* algorithm, but it's neither ambiguous nor contingent and can be readily deduced from the actual practice of speech. If this is defined as a "hard" problem in AI then the goal of independent creative thinking that can even rival, let alone exceed, that of humans is ridiculously far off.
Even full-scale natural language parsing and understanding, to the point where your iPhone understands you as well as the Target checkout clerk, doesn't qualify as a really hard AI problem on that big scale, because, again, children manage that around the time they learn to walk, and it gets you not one inch closer to the problem of solving general problems the way genuine intelligence can. Even very, very stupid human beings are capable of a fluent grasp of speech. Apes have been taught elements of human language (e.g. ASL), and can understand directions and make requests. My dog knows what "go to the kitchen!" and "want to go for a walk?" mean, even if I don't speak the words clearly, my voice is muffled by distance or changed by a cold, or I cough in the middle, et cetera. Brains much more primitive than ours can manage this task.
I would say if you think fully understanding and speaking a human language is a very hard problem (and I actually do), then what this tells you is that duplicating human intelligence is by contrast a surpassingly hard problem, one where you add a dozen zeros to the hardness factor. It's like observing that playing a nice clean G chord on the guitar is not easy, and correctly inferring that being able to compose and play like Mark Knopfler is going to be really, really hard.
You seem to be confusing me with someone who said this was a hard problem to fix. I didn't. All I said was that I was an AI skeptic, and look, "here's something that hasn't been solved yet". As it happens, it looks like Google has solved it, sorta. I can break it though, but it's mostly there.
Actually, it has been solved, even to the point of understanding vagueness and ambiguity. https://openai.com/blog/dall-e/
That’s not related.
I searched "text to speech" in Google and put your example into the first result (https://www.nuance.com/omni-channel-customer-engagement/voice-and-ivr/text-to-speech.html). It got it right.
It did. I've been looking for that to work for a while now. Apple haven't solved it.
I played around with tenses and it got this wrong:
“I went to the park often then, and I read the paper. I read the paper every day, I like to keep ahead of the news.”
But when I replaced "and I read" with "where I read", which is better English, it got it right. So that line of defense against my AI scepticism has been breached.
However, my theory that AI isn't a threat yet is still confirmed by the fact that I had to prove I wasn't a robot. When the robots can recognise sidewalks, we are done for.
(Robots - or self-driving cars - learning to recognise captchas is probably why we are asked about sidewalks, bridges and taxis so much. It's a system designed to end itself.)
Captchas are weird. I'm pretty sure current AI can beat those captchas. Waymo has commercial self-driving cars on the street, and yet pictures of traffic lights and cross walks are supposed to stop AI? If there's one object AI can identify better than humans at this point it's traffic lights.
A quick search finds articles saying bots can solve these captchas better than humans. Which leaves the question: why are we still using captchas?
My guess is the majority of spammers are looking for easy targets. The compute resources required for breaking captchas increase the cost, and setting up the captcha-breaking AI is an extra thing to do. If spammers were willing to pay even a few cents to break a captcha, employing humans would also be an option. A human should easily be able to do over 100 captchas an hour (36 seconds each), and in some countries you could pay less than $1 an hour for labor.
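The implied cost floor from those numbers is easy to spell out:

```python
# Cost per captcha if a spammer hires humans instead of building a solver.
wage_per_hour = 1.00       # $ per hour, low-wage labor (per the estimate above)
captchas_per_hour = 100    # 36 seconds each

print(wage_per_hour / captchas_per_hour)  # 0.01 -> about a cent per captcha
```

A cent per solve is nothing for a targeted attack but adds up fast at spam volumes, which fits the "captchas filter out the lazy, not the determined" picture.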
I think the reCaptcha also uses other data apart from the images to block spammers, such as your browsing history and IP address.
Another trick spammers use to bypass Captchas is to put up a porn site and then forward the captchas they want to solve to people trying to access their porn site.
I suspect the real purpose of captchas these days is to annoy you into being more trackable by Google.
ReCaptcha definitely does this - if you go to a site in Incognito mode you'll see captchas more frequently and they won't be the "check this box to prove you're human" type.
I read that it also looks at your mouse movements to see if there's a human behind the clicks.
It's not just that captchas cost money for spammers to bypass - they also make money for the company that serves them - where do you think Waymo got all its training data for recognising those things? It bought it off of ReCAPTCHA. Anything that extracts value from users is going to persist, even if it stops being useful for its original purpose.
Of course most current AI isn't a threat. The question is how long (if ever) it will take for future AI to become a threat.
It's important to recognize that there's a big difference between some kind of AI takeoff scenario and AI being a "threat." The USSR apparently had a Dead Hand system to launch their nukes without human intervention (I guess a type of AI) in case of an attack. That's certainly a "threat" even if the computing behind it was simple and clearly not an AI capable of taking off. A very simple AI put in charge of important systems can always be a threat, even if it's pretty dumb - or maybe more so if it's dumb.
Note that Dead Hand (aka Perimeter) was subordinate to human intervention. It was not constantly lurking in the background of the Soviet military ready to launch a nuclear strike if triggered; it was an option the Soviet leadership could choose to engage for a time if they thought they might already be under attack. Said leadership could then head for their bunkers or escape trains without wholly abdicating their responsibility to defend the Motherland while they moved to more secure command posts.
AFAIK, it was never engaged. The US equivalent was to trust the judgement and professionalism of select human officers who were always present in dispersed and/or hardened command posts; the Soviets apparently didn't have that level of trust in their officer corps.
Thanks, I didn't know much detail about it. For the purposes of "AI is a threat" it worked as a thought experiment either way. If you put even a very simple algorithm in charge of when to launch your nukes (or other very important functions), that is easily a threat. You don't need to worry about your AI going Skynet or I, Robot in order to be threatened by it.
I've seen a suggestion that AI can't even start to be a threat until it can organize a sock drawer -- or even find a sock drawer.
My "don't even start to worry yet" line is that AI can't custom-make comfortable shoes. I grant that this skill isn't necessary for taking over the world, but it does seem a good bit easier.
I regret to inform you about recent developments in auxetic material research.
https://lgg.epfl.ch/publications/2016/BeyondDevelopable/index.php
One of these days I'll invest in the right kind of extruder so I can 3d print flexible plastics, and I'll try making some custom insoles. Though I worry about breathability.
Designing the insole is a more important challenge than printing it.
Wishing you well with the project, though.
I sometimes ask people who believe that unskilled jobs are going away soon this question - why do we have baristas? After all we have coffee machines.
Today I actually had coffee in a coffee shop with no barista. It was in a golf club, and the cafe had vending machines and one coffee machine. You bought the cup for €3. The line at the machine was slow, as people were slow - older people confused by buttons. The machine ran out of something while I waited, and the person at the counter had to try to fix it. When I eventually got there the latte button was unlit, because I assume they were out of milk. I had a black coffee rather than harass the untrained teenager who was hired to man a counter, not pour coffee. No real coffee shop is going to replace a barista or go out of business. The same rule applies to bar staff, sales staff and so on.
Don't worry about your house burning down yet. It's just the curtains that are on fire. When the sofa catches fire, then you're allowed to start worrying.
I think the "Don't worry, AI can't do X yet" is stupid. We are in a situation where AI is predictably likely to cause lots of problems in the future. What it can do now is irrelevant.
(I mean, half the time X is already done, or is something that AI experts haven't put much effort into trying to do. But even in the best case it's a bad argument.)
That’s totally begging the question.
The interesting question is whether the curtains are on fire, or if there's a burning candle is a securely placed candle holder.
Or if it's flash paper, which I think doesn't burn hot enough to be dangerous.
When I was a kid, flash paper was hard to come by in my little town. I found a magic book that suggested mixing rubbing alcohol and water 1:1 and using it to set a handkerchief ablaze without actually burning it.
I tried it - of course, I was a 9 year old boy - and it worked!
I think that was about the time my mother started to show gray hair.
I meant "a burning candle *in* a securely placed candle holder".
Moravec's Paradox
Never heard that paradox, but I came to the same conclusion myself - a barman is going to be much harder to replace than an accountant.
And there's the idea that the last employee in a factory will be the janitor.
I suppose the next question is whether an AI needs that perception and movement stuff to be dangerous.
Was this Siri or Google Assistant?
Turns out it was an iOS issue. (Maybe that's why I am sceptical about AI; there's no exponential growth in Siri, for sure.)
The reason I asked was because I've studied and implemented NLP algorithms, and was curious which one was being used. The most recent information about Apple's I could find (https://machinelearning.apple.com/research/hey-siri) was from 2017, and used a Deep Neural Net. That has likely been replaced by a newer architecture with a Transformer.
As to reaching AGI - in the NLP domain, I don't think it will happen strictly through the machine learning approach. IEEE recently published a series of articles detailing some of the challenges, and a big one is the cost of training the hugely parametrized new models.
But that’s the thing about exponential growth. It may not look like much growth at all…until it does
Only if it's from a very low base. However, it's not like Siri was totally useless a decade ago. It's slightly better than useless now, from almost useless. But then Apple might not be the avant-garde here.
I have seen exponential growth, which has since tapered off, in smartphone capacity though - particularly in the first few iterations of the iPhone (and no doubt Android). This isn't it.
Wow these default avatars are hard to differentiate. Bring back Gravatar, or those funky monsters ThingofThings used?
And did the bloke in London with the doctors without borders jacket get a headcount?
I forget now if I had to re-upload or just pulled my Gravatar, but it's not hard to set a personal avatar
Interesting article saying that we may have overestimated how much theory of mind children have at a young age, as when given a variant on the normal hidden object test they don't do so well: https://www.sciencealert.com/new-evidence-suggests-children-may-not-have-theory-of-mind-until-age-6-or-7 (original paper linked in the article)
> When a third red box is added to the experiment, the results change quite a lot. Researchers have found preschoolers are ultimately torn over which box Maxi will choose, splitting their choice 50/50 between red and blue.
> "When there are only two locations, 4- and 5-year-old children can answer correctly without truly understanding that Maxi has a false belief about the location of the chocolate bar," explains psychologist William Fabricius from Arizona State University.
> "Adding a third location results in them guessing at chance between the two empty locations."
> Preschoolers appear to understand that Maxi does not know the chocolate is in the green box, because he did not see his mother put it there. As far as they are concerned, that leaves the red box or the blue box as Maxi's inevitable 'wrong choice'.
> The authors call this thinking perceptual access reasoning, or PAR. While a child understands that seeing leads to knowing, they do not incorporate the memory of Maxi putting the chocolate in the blue box into their answer.
This seems so bizarre... Partially because I remember (vaguely) being 4 or 5, and some of the things I used to think about, and I can't imagine I would have made that mistake.
Just out of curiosity what do you remember thinking from that time?
I remember some things that I did or happened, but I don't have any recollection of what I thought.
Might be a memory of a memory but I shoved a bobby pin into an electrical outlet from my crib and burned my right hand at whatever age a person is when they are in a crib - presumably pre-puberty at any rate. I have some sort of recollection that my left hand did not work as well spooning my Malt O Meal into my mouth as the bandaged right.
It seems like my mom and dad told me about the idea of handedness by way of explanation for my having trouble with the scooping and shoving, but this was the pre-language period for me, so again, I might be interpreting a memory of a memory. I suppose I should add that I am now fully ambidextrous in the spooning-Malt-O-Meal department.
I'm pretty certain I have memories of pre-language dreams - the shapes of toy parts and some colors I wouldn't see until the Day-Glo poster period of the hippie days. And that goofy Dumbo the Baby Elephant rubber-band wind-up toy that flapped around pathetically trying to fly in my bedroom - yeah, he was a visitor in my dreams then too.
All of this is very hazy of course.
I know memory can be unreliable. Events can be externally verified, but I can't prove my memories of my thoughts are accurate.
But with that disclaimer, here's what I remember. From pre-K, not that much. I remember a mean girl (who in my mind was tall and intimidating) scratched another kid across the face, hard enough to draw blood. I remember feeling sympathy for him.
Because my birthday falls near the end of the school year, I was 5 for most of kindergarten, and I remember that better. I could have told you what my classmates were like. Who was smart, who was friendly. A group of popular kids (feels funny to say that about kindergarteners) used to monopolize the building blocks. I mostly avoided their clique.
I had a fair number of self-reflective thoughts. Sometimes at night I would think about what attributes I wanted to have, and why these attributes would serve me well. I liked to imagine my classmates were doing the same and they might choose differently, and why my choices would be better.
I used to always want to know how everything worked. Luckily my father was always willing to explain stuff.
I have a clear memory of not being able to hear the difference between "white" and "light" and thinking it was one word with two different (but conceptually related) meanings. I think this must have been before I learned to read, or at least before I learned the spellings of those words.
I'd believe trivia that'd say the etymology for both is the same.
Is this some kind of bouba/kiki thing again?
I looked it up just now on Wiktionary. Looks like they're not related. But "white" is related to the Sanskrit word for light, and "light" is related to the Ancient Greek word for white.
Does anybody recall Scott saying something about how the worst thing you can do when encountering a viewpoint you don't understand is round it off to "they're just evil and hiding it"? I know the Seventh Meditation addresses something much like this, but I recall the quote (or something like that quote).
(It might be from somewhere other than Scott, but I'd still like to find it.)
Perhaps you're thinking of this?
https://www.lesswrong.com/posts/28bAMAxhoX3bwbAKC/are-your-enemies-innately-evil
Scott's post on abortion has similar themes.
https://slatestarcodex.com/2013/05/30/fetal-attraction-abortion-and-the-principle-of-charity/
I think I've read the second but not the first before; again it seems similar but doesn't have the quote that stuck in my head.
I don't remember that, but it sounds like something he would say.
In Europe, we have the Digital Green Certificate, issued after 1) full COVID vaccination or 2) 14 days after a positive PCR test (maybe there is something similar in the US). In case 1) the certificate is valid indefinitely AFAIK, and in case 2) for 6 months.
Currently the ECDC recommends against issuing this certificate after any sort of antibody test proving past infection. Their stated reasons, as I understand them, are: the relationship between antibodies and immunity is yet to be determined, and we do not know what level of detected antibodies is sufficient or how long any such immunity lasts [0].
However, the ECDC seems to agree with these two facts:
1. (At least some of the available) antibody tests provide a reliable enough identification of past infection, and
2. Past infection is sufficient to issue a green certificate (one is available shortly after a positive PCR)
In the US, the CDC seems to hold a similar position [1]. The evidence they cite points in the direction of strong and lasting natural protection (fact 2 above), and from my reading of their document they do not seem to worry about the tests not being able to indicate past infection (fact 1). However, the CDC is also against using antibody tests to assess immunity [2], citing concerns similar to the ECDC's, namely that 'serologic correlates of protection have not been established'.
However I find it hard to reconcile this stance with the two facts above. If the evidence for natural protection is not strong enough, shouldn't PCR tests also be insufficient for getting the certificate? If the issue is that antibodies may indicate an infection too long ago in the past, shouldn't vaccine-related certificates also come with a time restriction (since there we also have no good evidence of the duration of protection)?
Can somebody try to give me a better explanation of the ECDC reasoning? Also, I would love to hear your stances on whether the risks outweigh the benefits of allowing such certificates to be issued after a positive antibody test or not.
I think there's also some reasoning going on that you don't want to treat immunity due to natural infection *quite* as well as immunity due to vaccination, because it creates perverse incentives to get infected.
If we had expected vaccination to take longer, I think there would have been a case for setting up "infection hotels" for healthy young people to visit and remain quarantined while sick - but incentivizing people to get infected while living in mixed society with old and immunocompromised people seems dangerous.
I think you are right. But then you have the other perverse effect of loss of credibility: if some official information is provided not because it is fact (i.e. the truth as far as we can tell using cutting-edge scientific knowledge) but because it convinces you to adopt a desired behavior, why should the public trust information tagged "official" more than some random source? There is a strong long-term loss in trust for some short-term gain, which seems quite common in modern science communication, and which no "fact checking" or "more communication" approach will solve. Without clearly separating fact from policy, it just erodes trust even more without convincing the people who oppose the policy...
We had a similar situation in Switzerland. Our certificates are compatible with the EU ones, but with a 1-year limit for vaccinations. The government just announced that the time limit for PCR tests will be extended to be a year as well, and antibody tests will be valid for 90 days. This will be implemented as a separate type of certificate, since it won't be compatible with current EU rules anymore.
I am strongly in favor of this kind of relaxation, because of logical consistency as you pointed out, but also because the expected utility of a vaccine for a person who already went through covid is typically going to be negative (both for themselves and society). In fact, this change considerably reduced my opposition to covid certificates.
Not accepting antibody tests is also hugely unfair to people who for some reason didn't get tested in the officially approved way, e.g., doing an at-home test and then not following up at a test center for whatever reason. We are going to have to reintegrate vaccine skeptics (of all types) into our society sooner rather than later. Demonizing people and imposing pedantic standards for certificates is not going to help with that.
A positive antibody test provides a significant reduction in harm on a statistical basis, even if we can't guarantee that in every individual case. This will be hard to accept for some people. I think this whole pandemic brought out a lot of neuroticism, and I now see many folks being irrationally concerned with covid specifically (compared to other daily risks they take without a second thought). Overcoming these attitudes will be one of the biggest hurdles to actually get out of the pandemic in a timely manner.
forgot the sources:
[0] https://www.ecdc.europa.eu/en/publications-data/use-antibody-tests-sars-cov-2-context-digital-green-certificates
[1] https://www.cdc.gov/coronavirus/2019-ncov/lab/resources/antibody-tests-guidelines.html
[2] https://www.cdc.gov/vaccines/covid-19/clinical-considerations/covid-19-vaccines-us.html?CDC_AA_refVal=https%3A%2F%2Fwww.cdc.gov%2Fvaccines%2Fcovid-19%2Finfo-by-product%2Fclinical-considerations.html
During my master's thesis I developed a new method for metal printing, potentially cutting costs by a factor of ten. I want to turn this technology into a business and make metal printing accessible to SMEs, focusing on creative industries (designers, architects, artists, ...).
There is a crude proof-of-principle, and by the end of the year a proof-of-concept should exist.
I'm still looking for a cofounder. The recent ACX meetup in Vienna showed me what a good proxy ACX readership is for "compatible ways of thinking". There's no particular profile I'm looking for, just a lot of motivation regarding metal printing.
About myself: Physicist at heart. Finished my master's degree in 2019. Earned money as a programmer while at university; have been researching/working on metal printing ever since. Mostly extroverted.
I'm open to founding somewhere other than Vienna, but not outside Europe.
About the company: There's a business plan. While it will be a profitable business, I believe its strongest asset is its potential social impact: providing a lot of people with access to new manufacturing methods.
If you think this is great and want to help without becoming a founder: I'm also looking for showcase projects (stuff that only works with metal printing / would be too expensive without it) and bucketloads of money. I have absolutely no idea how angel investors or VCs can be found, so this is my try at it.
Here is a very crude homepage: http://budgetmetalprinting.com/
I'll reach out. I know some Austrian startup founders (and more European ones). It depends on the industry but I'd suggest being willing to move. If you're looking to do metal fab then either to the US or South/East Asia. If you absolutely must stay in Europe then Germany or Italy. Though they're relatively minor players internationally. Europe just doesn't do much heavy industry (excluding Russia).
Reading this, it looks like Central Europe really needs venture capitalism. And your university seems to lack channels too. This isn't a criticism of you, because I think you would have suitors around the block in the U.K. or the US.
Why not, using your university email, send an email to large VCs across the world? Even rich people.
Be prepared to fly to them.
Isn't this almost the type example of the sort of thing the Classifieds Threads were made for?
Does your university not claim ownership of this invention? Most universities I know of will do that.
Yes, but we have a preliminary agreement that the company gets an exclusive license.
So to be clear, this is licensed use of intellectual property. How does your agreement work with regard to further developments of the intellectual property, which is probably one of the key potential reasons to invest in you?
How does this compare to printing in plastic or resin and then investment casting it into a metal part?
It's more versatile (range of materials, possible geometries) and less labor-intensive, since there is no second step, i.e. it forms structures directly from metal. For multiple parts, it is more reproducible and involves fewer (basically no) manual steps. However, for a single piece, investment casting is probably less expensive.
http://budgetmetalprinting.com/ does not make clear that you are looking for funding/VC.
I see no way to contact you (it is hidden behind a weird button that does not look like a button).
Thanks for the heads up! You can use acxcallout@budgetmetalprinting.com
Looking over the sixth IPCC report, a question occurred to me that I don't think they discuss and I wondered if anyone here was aware of published work on it. One of the more confident predictions about the effect of climate change on tropical cyclones (aka hurricanes, typhoons) is that their tracks will shift poleward, but I don't think the report says by how much although I could have missed it. Looking at a map of past cyclone tracks and a map of population density, it looks as though the intense cyclones over land are largely over densely populated regions — Mexico, Central America, Southern China, India. If so, a shift poleward might, if large enough, move them to less densely populated regions, decreasing total human damage.
One result would be to move them more over the U.S., which would be unfortunate for us and might increase material damage, since there is more expensive stuff to be damaged in the U.S. than in Mexico — but fewer people per square mile.
Has anyone looked at the question? The report mentions that changing the cyclone tracks could change their effect on humans, but not how.
Although many tropical countries are densely populated, the populations are often not concentrated in coastal low-lying areas. In temperate countries, they often are. The historical reasons for such population patterns are often associated with diseases rather than hurricanes (there is also the fact that the climate is just more pleasant at high altitude if the latitude is tropical).
Taking the two main countries you discuss - yes Mexico is more dense than the US overall, but all of the densest parts of Mexico are more than a mile above sea level - where people worry about earthquakes and volcanoes, not hurricanes. All of the non-landlocked states of Mexico are much less dense than Florida's ~380 residents per square mile. Though I guess since we're talking about hurricanes turning to higher latitudes than before, probably instead of Florida we should talk about places like New Jersey (~1200 residents per square mile)
The report argues that there's good observational evidence and consistent model projections for a northward shift in tracks and peak intensity for typhoons in the western Pacific. Elsewhere, it's not so clear. Much of the observed global increase in the latitude of peak tropical cyclone intensity is due to changes in the overall number in different ocean basins, with an increase in number in basins where storms are found at higher latitudes anyway. In the Atlantic basin, models tend to project hurricanes farther north but there's no observed trend in that.
I haven't done a comprehensive look for studies that may have investigated changes in population exposure as a function of projected track changes, but an all-changes study published last month (limitations: four GCMs, one hurricane generating algorithm that seems to favor increases more than other algorithms) finds increases in population exposure in all basins even while holding population constant: https://doi.org/10.1038/s41558-021-01157-9 .
Thanks. That looks interesting. Downloaded.
Is it that all hurricanes will shift northward, or that warmer seas will expand northward the zone in which they can form? Other climate effects on hurricanes: more rainfall, more intensity, slower movement (lingering longer), storm surge added on to higher seas, and more rapid intensification making evacuation warnings less timely.
It's not the warmer seas that matter (the sea surface temperature threshold for development depends on the average temperature of the tropical atmosphere, which also goes up), but rather the likely expansion of the tropical wind pattern known as the Hadley Cell, whose low-shear environment is one necessary ingredient for tropical cyclone intensification.
>(the sea surface temperature threshold for development depends on the average temperature of the tropical atmosphere, which also goes up)
It depends on the average temperature of the tropical *upper* troposphere. That doesn't go up under warming driven by tropospheric greenhouse gases (since there's nothing to break its correlation with outgoing longwave and outgoing longwave is fixed).
The basic way greenhouse gas warming works is: for a given tropospheric temperature profile, the outgoing longwave (OLR) decreases as greenhouse gases increase. Since outgoing longwave is fixed (at equilibrium), the temperature increases to restore the outgoing longwave intensity. So yes, it does go up.
If you want to think about it mathematically, to a first approximation OLR =A + BT (embodying the correlation between OLR and temperature T). Increase greenhouse gases, and A decreases. OLR is restored/maintained by an increase of T.
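To put illustrative numbers on that (both values below are ballpark choices, not fitted constants; B in particular varies between models):

```python
# Linearized energy balance: OLR = A + B*T must stay equal to absorbed solar
# at equilibrium. More greenhouse gas lowers A by dA, so T rises by -dA/B.
B = 2.0      # W m^-2 K^-1, a typical simple energy-balance-model slope (assumed)
dA = -3.7    # W m^-2, roughly the forcing from doubling CO2

dT = -dA / B
print(dT)    # 1.85 K of equilibrium warming with these illustrative numbers
```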
>OLR =A + BT
No. OLR is proportional to T^4 of the effective surface emitting to space. Greenhouse gases do not affect the proportionality.
What greenhouse gases do is reduce the effective value of T, by providing layers in the air - *colder* than the ground - that absorb and re-radiate longwave (both up and down). Having twice as much greenhouse gas means the layers are half as thick; there are therefore twice as many layers, and therefore - as each pair of layers needs a temperature differential to transport heat outward - the surface is warmer.
Adding greenhouse gases to the troposphere increases the number of layers in the troposphere, but not the number of layers above it - ergo, the temperature of the tropopause doesn't change (rather, the temperature gradient in the troposphere gets steeper). Adding greenhouse gases *above* a given height does increase the temperature at that height, but AIUI anthropogenic GHG haven't yet reached the stratosphere in quantity (also, even then it doesn't raise the temperature at X km above ground as much as it does at ground level).
OLR is proportional to T^4, but over the range of temperatures relevant here it can be approximated by the linear function, with A != 0. The first folks who created energy balance models used that approximation, then people came along and said "That's stupid, it's the early 70s, we have powerful computers now" and plugged in the T^4 equation and found it didn't make much difference. The logic and physics work the same either way, and most people's brains do better with A + BT.
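For anyone who wants to check the linearization numerically, a quick sketch (the temperature range is a rough guess at effective emission temperatures):

```python
import numpy as np

# Stefan-Boltzmann OLR vs a least-squares straight line over a climatically
# relevant temperature range.
sigma = 5.67e-8                       # W m^-2 K^-4
T = np.linspace(230.0, 280.0, 200)    # K (rough range, assumed)
olr = sigma * T**4

B, A = np.polyfit(T, olr, 1)          # line OLR ~ A + B*T; slope comes first
print(A, B)                           # A large and negative, B ~ 3.8 W m^-2 K^-1
print(np.abs(olr - (A + B * T)).max())  # worst-case error ~9 W m^-2, vs ~160-350 total
```

Note the pure-blackbody slope here (~3.8) is steeper than the B values fitted to observed OLR in classic energy-balance models (~2), because feedbacks like water vapor flatten the observed OLR-temperature relationship.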
"Layers and energy differentials" is a great way to think about it. And you've *sort of* described the greenhouse gas part of the energy transfer accurately. But most of the global average net energy transfer from the ground to the atmosphere is via evaporation (i.e. latent heat). Neither it nor heat diffusion via conduction or turbulence cares directly about what greenhouse gases and radiation are doing, except to the the extent that the greenhouse gases are modifying the energy differential a bit. So we get more evaporation to make up for the greater difficulty of net upward IR energy transfer.
This is *exactly* what a lot of the Nobel Prize for Physics in 2021 was awarded for. If you just consider radiation as your energy transfer mechanism, your equilibrium temperature profile cools off way too rapidly with altitude. Instead, it's the evaporation and subsequent thunderstorm activity (in the tropics anyway, and something qualitatively similar in mid-latitudes) that constrains the temperature profile in the real world. Too much cooling with height, and the thunderstorms crank up. Too little cooling with height, and the thunderstorms (and latent heat transfer) shut down because the atmosphere's not unstable anymore.
Adding greenhouse gases to the troposphere increases the altitude above which longwave is able to escape to space (for those wavelength bands in which the greenhouse gas is active) and hence decreases the amount of energy escaping to space (since emissions increase with temperature and higher altitudes have colder temperatures). So the atmosphere warms, and the surface warms because net flux from the ground to the atmosphere depends on the ground-atmosphere energy differential.
BTW, the well-mixed greenhouse gases (CO2, CH4, N2O, etc.) have mixed through the troposphere and stratosphere (it only takes a few years). ["Mixed" means roughly uniform mixing ratio.] Presently, only CO2 has a large enough concentration that this matters: for a narrow set of wavelengths, emissions to space come from CO2 in the stratosphere. And for those wavelengths, increasing CO2 doesn't matter much because temperature doesn't change much with height in the lower stratosphere (so changing the emission altitude doesn't change the intensity of the escaping radiation). Most CO2 emissions come from the troposphere though, on the broad wings of that band. This is why the greenhouse effect of CO2 is logarithmic in CO2 concentration rather than linear like it is for the other greenhouse gases. It's also why skeptics can try to argue "Hey, it's logarithmic, so don't worry about it".
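For reference, the usual rule-of-thumb fit for that logarithmic behavior is the Myhre et al. (1998) expression, dF ≈ 5.35 ln(C/C0) W/m^2 (an empirical fit, not something derived above; the concentrations below are round numbers of my choosing):

import math

C0 = 280.0  # ppm, preindustrial CO2
for C in (280, 420, 560, 1120):
    dF = 5.35 * math.log(C / C0)   # natural log
    print(f"{C:>4} ppm -> {dF:.2f} W/m^2")
# Each doubling adds the same ~3.7 W/m^2 - logarithmic, yes, but each
# doubling still adds just as much forcing as the last one.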
I should add that "increasing greenhouse gases adds another layer to the atmosphere" is how it's commonly taught, because it's easy to understand and easy to do the math. I don't like it because it leads to incorrect conclusions about the details and about how things like clouds affect energy balance. I teach it only to students who are not expected to think through the implications beyond "if I understand this, I'm more likely to survive the next test". You're clearly not that person!
It’s International Stuttering Awareness Day. We still have only a poor understanding of the disorder, but for a while now studies have been exploring the use of dopamine antagonists as a possible pharmacological treatment. (https://www.frontiersin.org/articles/10.3389/fnins.2020.00158/full) Would be curious to hear people’s thoughts on the quality of evidence here, and on the idea of stuttering as a dopamine issue more broadly.
Hey, we're running a Discord server about achieving financial independence through side businesses, career advancement and investments in decentralized finance. Would love to have ACX members join http://BowTiedDiscord.com
Our community has lately been wondering how to look for arbitrage opportunities through bridging (e.g. on Arbitrum), as well as any airdrop opportunities. Happy to hear any suggestions or thoughts from you all!
Decentralized finance is a great way to lose your savings. I implore anyone reading this to check out the skeptic case before they get involved (David Gerard's blog is a good place to start).
I'm getting an "unable to accept invite" error when I click the link. Do you have a fixed one?
http://discord.gg/K6WfHphzBj
"as well as any airdrop opportunities" is it code for "cryptocurrency scams"?
Yes, scams exist, which is why our community is discussing how to thoroughly vet them first.
I'm currently in the process of training to become a secondary school teacher in the UK (ages roughly 12 to 18). I'll need to do some research on theories/models of how children learn, and I was wondering if people here might have useful suggestions of sources I should check out - particularly if they cover Rationality-adjacent concerns, but generally anything beyond the usual Piaget and Vygotsky references is greatly appreciated.
Far from an expert, and this might not be what you're looking for, but are you familiar with Alison Gopnik's work in developmental psychology?
I'm not, and I'll check it out (looks potentially useful depending on how much she has done on older children). Thank you for the suggestion!
So I was introduced to the concept of CBT (Cognitive Behavioral Therapy) a few years ago while seeing a therapist for various issues. I didn't end up getting much value out of the therapy process itself, but I did purchase a CBT workbook and studied up on the topic quite a bit like the nerd I am (including some variants like Acceptance and Commitment Therapy). So my question is - am I actually doing it successfully now?
I have some actual life issues that most outsiders would objectively consider to be relatively serious problems (i.e. they are not just thought distortions as described by CBT). My... I guess my cognitive update over the last couple of years is that I simply don't dwell on them - I don't really think about them at all. Is this CBT? For example, let's pretend that you have a homeless alcoholic who was previously depressed about his life status, but now he simply doesn't think about his various issues whatsoever, and his personal happiness is greatly increased even though he remains homeless, addicted to alcohol, etc. Is this CBT? (I am not a homeless alcoholic, just using this as an example.)
TLDR if I have serious, chronic problems in my life and I simply ignore them and think about something pleasant instead- am I practicing Cognitive Behavioral Therapy? I would say that doing so has reduced my depression a ton- but I'm also not doing anything active to fix ongoing issues. I'm closer to what Pink Floyd described as 'comfortably numb'. It's worth noting that I started practicing meditation around the same time that I took this up, which has probably enhanced my ability to not dwell on the negative/direct my thoughts in general. Would be interested to hear people's general thoughts on CBT, ACT etc. etc.
> let's pretend that you have a homeless alcoholic who was previously depressed about his life status, but now he simply doesn't think about his various issues whatsoever, and his personal happiness is greatly increased even though he remains homeless, addicted to alcohol etc.
That sounds like meditation to me. The reality did not improve, but the suffering is reduced. Get rid of the addiction, and he is ready to be a Buddhist monk.
I'm interested in how you did this. The common pattern in CBT is to recognize negative root assumptions that are making you unhappy and replace them with more realistic alternatives. Did you identify negative root thoughts and practice ignoring them?
I've no idea if what you did is CBT or not. I remember a post by Scott where he says that he ignores troubling thoughts all the time, despite it being looked down on by the psychological community. But if it works it strikes me as a very useful skill for people with problems beyond their control. Imagine if it worked for people with chronic pain!
“ I remember a post by Scott where he says that he ignores troubling thoughts all the time, despite it being looked down on by the psychological community.”
This sounds like a wonderful super power to a chronic worrier.
'I'm interested in how you did this. The common pattern in CBT is to recognize negative root assumptions that are making you unhappy and replace them with more realistic alternatives. Did you identify negative root thoughts and practice ignoring them?'
I just.... did it. Honestly I didn't really find it that challenging, which is ironic because I think I'm terrible at meditation and am constantly interrupted by completely random (not negative, just random) thoughts.
I think I'm overly analytical/ruminate too much, which for most of my life led to an influx of negative thinking (hard not to see myself quite clearly), but it also made identifying the negative root thoughts easy. I didn't need a CBT workbook for that; it just sort of kick-started me in the right direction.
Grand, certainly seems like a life improvement, hope it keeps working out for you.
"The common pattern in CBT is to recognize negative root assumptions that are making you unhappy and replace them with more realistic alternatives. Did you identify negative root thoughts and practice ignoring them?"
This is my problem with CBT. It explicitly addresses symptoms, not causes, which is fair enough - it's a quick-fix approach, like taking a painkiller when you have a toothache.
But the assumptions it operates on are that there really aren't severe underlying causes - your toothache is just a temporary twinge, not an abscess that you need to have treated.
That's where the 'negative root assumption' comes in. You think you're a failure in life and worry about losing your job and how are you going to get a new one if you're fired from this one? And the exercise/therapy is "well, look at the facts: you have a career, you have achieved promotions and awards, whenever you went looking for a new job you quickly found one" and thus cutting the negative assumption off.
It doesn't work when it's "let's look at the facts - okay, you have a crappy job, you go through long periods of unemployment when you are trying to find work and it's not easy for you to find a new job, and you don't have skills that are in high demand". In that case, the root assumption may be negative, but it's not an assumption, it's fact. CBT can't go "stop thinking bad thoughts, think happy thoughts!" there.
Yeah, it's definitely not always the right tool for the job. And the good resources I've read have always said that changing your circumstances is the front-line defense.
I think the modern model for self-help probably has a lot in common with the prayer
"God, grant me the serenity to accept the things I cannot change,
courage to change the things I can,
and wisdom to know the difference."
But if we prayed to a different deity
"God grant me the ability to straight up ignore the shit I don't like,
And give me a set of powerful delusions to perpetuate my happiness."
Would we be any less happy?!
If the serious, chronic problems in your life are things you probably cannot do anything about (for example, a beloved relative who is drinking himself to death and ignoring your attempts to help), then using a technique that reduces your preoccupation with the matter and eases your pain seems like a good idea. CBT is one such technique. But if the problems are things you have a reasonable chance of addressing -- things like hating your job -- you should be trying to get yourself to address them. CBT might help: It would involve capturing and challenging the thoughts that keep you from addressing the problem. If what's keeping you from addressing the job problem is thoughts like "I'll never find anything better" or "I can't endure the stress and hassle of job-hunting" -- well, those thoughts are almost certainly distortions that could be challenged. But CBT is just one tool in the toolbox.
CBT alone doesn’t fix major life issues (like homelessness as you mentioned, etc). CBT is a technique for managing unnecessary intrusive thoughts and anxiety. It’s just a tool in the toolbox. You have to put in a different kind of work to change your life circumstances. And when you can’t change the circumstances, CBT comes in handy to not drown in your own negativity.
"I would say that doing so has reduced my depression a ton" - I would say this is good and whether or not you're doing CBT "right" is not important.
"I'm also not doing anything active to fix ongoing issues." - Do you *want* to be moving toward fixing ongoing issues? If not, then the improvements you've experienced, whether coming from CBT, meditation, or something else, are an unmitigated good. Calming the nervous system can sometimes feel like numbness at first. My experience is needing to move from activation into a more regulated state first. Then you can start practicing feeling into what you want, what matters to you, what your values are. Then you can start making decisions and taking actions that are in alignment with those wants and values.
I am wondering if your concern about whether you've been successful with CBT is stemming from a dissatisfaction with your ongoing troubles and judging your current comfortably numb status as wrong/bad/insufficient because it's not moving you forward. I am also wondering if you may be having some difficulty connecting with your visceral, emotional, feeling self, which you would need to do in order to truly explore your wants and values. We cannot simply think our way out of complex & possibly traumatizing shit (and it sounds like you're going through some heavy stuff). There are other aspects of our minds that need attending to besides our cognition. Wishing you well <3
I'm no expert with CBT, but to me it seems like you're drawing an arbitrary line between 'real issues' and 'silly thought distortions'. Is there much difference between a problem like, say, stubbing your toe, and a more serious problem, other than the scale? If your techniques stop you from dwelling on your issues unnecessarily, they seem helpful.
Perhaps if you're not motivated to actually fix the problems, your techniques aren't helping enough. But then again, perhaps it's only a matter of time.
I'm really glad you posted this and I hope you receive some high caliber responses.
Until then, I'll offer my own thoroughly amateur thoughts...
I would say that your hypothetical friend is not necessarily practicing CBT, per se, because CBT is a formal and deliberate form of therapy that one undertakes in response to psychological dilemma. That's not to say that what your friend is doing isn't helpful, or is helpful, but that whatever it is, it's not CBT. Call it willful neglect, disinterest, or open acceptance, but I don't see it as qualifying as CBT.
Maybe the more interesting question is related to the outcome associated with this kind of behavior. Is your friend generally better off with this approach? I suppose that depends on who you talk to? A psychologist would say one thing (yes?), while a hematologist would probably say another. I think at the end of the day, what really matters is your friend's experience.
Shooting in the dark, it's hard to see how one could benefit in the long term from overlooking the inevitable consequences of consistent physiologically harmful behavior, but in the short term, maybe a different story (psychologists hate him!)
I wonder if it would have an effect on new reader intake if the thing people saw when going to acx.s was *not* half open threads and lynxes. Like if there was a distinct meta tab, and you weren't viewing it by default. Because the way it is now, a passerby might think 'I can't start this, there's more community than content', but the other way, it might look a bit emptier/updated more rarely than other substacks.
Substack allows creating "sections", but I have no idea what happens as a result, visually.
I agree, I should probably ask Substack.
My thought is that once he's done with the travel, his previous posting frequency was sufficient to avoid this as an issue. These weeks though, it does look a bit threadbare on the content, if someone just looks at the recent posts.
I'm looking for suggestions of great short stories with a business/finance element or theme. One example: The Accountant by Ethan Canin. I'm assembling material for a series of Zoom sessions that I'm co-moderating with an English professor this winter. If you haven't read The Accountant, I recommend it. Beautifully written with a pitch-perfect voice, at times very funny, at times quite philosophical.
If door-to-door-salesmanship qualifies, there's The Man Who Sold Rope to the Gnoles:
https://d-infinity.net/posts/fiction/man-who-sold-rope-gnoles
If it doesn't qualify, read it anyway - it's very short. Also nasty and brutish...
Thanks so much! Loved the Hobbesian reference.
"Compound Interest" by Mack Reynolds
Another golden age story (possibly by Kornbluth) about a depression which is set off by a man who is reluctant to buy a refrigerator(?). The knock-on effects take the economy down.
The problem is traced back to him, and the president takes the last little bit of money out of treasury to give to the man so he can buy the refrigerator and get the economy started again.
Obviously satire, or something akin to satire, but people might have fun analyzing what's wrong with the premise.
Contents Of The Dead Man's Pockets by Jack Finney
The Fatal Equilibrium, by Marshall Jevons - an economic murder mystery novel.
Thanks!
Glengarry Glen Ross (the play by David Mamet)
Iconic, but I think too long for my purposes. Thanks.
It's economics rather than business/finance, but you might be interested in a page I have up linking to and commenting on short works of literature with interesting economic ideas in them:
http://www.daviddfriedman.com/Academic/Fictional%20Economics/Embedded%20Economics.html
Thanks so much. I'll check out your site.
Not a short story, but the Orconomics series is great.
I read the first book, and I have mixed opinions about it. I would have panned it utterly, but the twist at the end made me reconsider.
Appreciate the suggestion
"The Cambist and Lord Iron: A Fairy Tale of Economics" by Daniel Abraham
https://www.lightspeedmagazine.com/fiction/the-cambist-and-lord-iron-a-fairy-tale-of-economics/
+1
Thank You.
"Bartleby, the Scrivener: A Story of Wall Street" by Herman Melville
I would say "i prefer not" but that would be pretty weak humor!
Big fan of this one - I love how it kind of feels like a 19th century version of the Office before it turns more serious.
Ruined City by Nevil Shute fits the bill. It's hilarious, moving, and overall just brilliant, and captures something interesting of the Depression in industrial England. Without spoiling too much, it's about an absolute caricature of an evil capitalist using his skills for something else.
I should add that this was published as a stand-alone novel but is very short.
Thanks so much.
I've recently published a piece on cytomegalovirus as part of my series, "The human Herpesviruses: much more than you wanted to know". https://denovo.substack.com/p/cytomegalovirus-the-worst-herpesvirus
For a virus most people haven't heard of, CMV is surprisingly bad. This is because it causes birth defects and contributes to aging. My rough estimate of its DALY cost places it in the range of HIV.
And yet I didn't even know CMV was a thing until my final year of undergrad studies!
I'm one of those freaks who is CMV-negative into middle-age. (My wife has never infected me, either, so maybe she is also negative. I found out through blood donations, which she doesn't do. It's also possible I ended up getting it and the Red Cross never told me.)
I'd love to get vaccinated to stay that way. How is Moderna's vaccine trial coming along?
It's disturbing to see that there's something big you're mentioning that is redacted!
Also, do you have any thoughts on what good strategy might be, either for individuals or populations? Given the high prevalence and lifetime infectiousness, is there anything we can do to reduce prevalence? (I suppose the fact that prevalence is lower in North America than in Europe suggests that something could be done.) Given that first infection during pregnancy is associated with far worse outcomes, would reducing prevalence be expected to increase the number of people who get first infection during pregnancy? (This is related to worries I've had about reducing prevalence of common cold - if some common colds are minor only because nearly everyone has had them multiple times, then attempts to reduce the burden of annual infections might accidentally make first infections much worse, by making them happen to older people.)
Humans are incredibly resistant to cold. As someone who grew up in Manitoba, Canada, and now lives in Edmonton, Canada - it is so strange to learn that people can die of hypothermia at 50 F (equivalent to 10 degrees C). The idea of dying from exposure at such a high temperature is just completely outside my experience. If you are soaked to the bone, and the wind is really blowing, and the average temp is 10 degrees, but you are in the shade, making the local temp lower - I guess I can see it happening somehow. But you could probably just walk or jog briskly indefinitely to keep warm in this situation if you have the open space and enough energy.
In my hometown, the temperature routinely goes down to minus 30 degrees for an entire month in the winter, with temperatures including wind-chill during that time at around minus 40 degrees, down to - 50 on the worst days. School only shut down maybe once or twice a year due to cold or snow - and the reason was that the busses couldn't start in the extremely low temps, even though they were all plugged in to keep the oil warm. It would need to be -40C before wind chill effects in order to have it so bad that the busses couldn't start. I walked to school with friends all winter during high school, 30 minutes one way. The cold wasn't an issue.
Same deal with cold and school closures in N Minnesota. Occasionally the buses wouldn’t start. The kids that rode the buses from the hinterlands were off the hook. ‘Walkers’ were expected in class.
I took a flyer and simply googled "Kenya hypothermia". The first hit was Nyandiko et al 2021 https://doi.org/10.1371/journal.pone.0248838 Neonatal hypothermia and adherence to World Health Organisation thermal care guidelines among newborns at Moi Teaching and Referral Hospital, Kenya
From the abstract: "Admission hypothermia was noted among 73.7% (274) and 13% (49) died on the first day of admission. Only 7.8% (29) newborns accessed optimal thermal care. " So I suppose a lot of the hypothermia mortality in the tropics may be among newborns. If you've just popped out, 18C is damn chilly.
Actually, we are not. You're talking about a situation where you're not actually exposed to the cold, it's held at bay by insulated clothing, et cetera. Under those circumstances, if you are protected from heat loss, you can walk on the Moon at -300F. But if you are genuinely exposed to even moderately low temperatures, meaning heat can freely leave your body, such as when you are immersed in cold water, bad things happen rather quickly. From here:
https://www.useakayak.org/references/hypothermia_table.html
Even at 45F or so one can expect to be able to perform normally for only 5-10 minutes, to become unconscious within 30-60 min, and to die within 1-3 hours. At the freezing point death usually comes within 15-45 minutes. Clearly if your heat sink is the surrounding air instead of surrounding water, heat loss will be far slower, and you can readily make it all night by being sensible (e.g. finding shelter and avoiding air flow), but if you don't have adequate clothing, can't find shelter, and the conditions favor copious air flow -- there's a stiff breeze, say, and the air is humid -- I can readily believe a night at 45 could kill someone.
There's a "Rule of 3s" that is sometimes used by people who do a lot of outdoor stuff to remind one of the priorities: "You can survive 3 minutes without breathing, 3 hours without adequate warmth, 3 days without water, and 3 weeks without food." This is useful to point out to newbies, who will often prioritize emergency food over emergency protection from the elements (and sometimes even drinking water), a serious but surprisingly common mistake that has definitely led to unnecessary loss of life.
I was surprised to see recently how quickly hypothermia kicks in at what I thought were fairly high water temperatures
https://www.hofmannlawfirm.com/faqs/how-long-does-it-take-to-get-hypothermia-in-cold-water.cfm
At a water temperature of 60 - 70 degrees, death may occur in 2 - 40 hours.
I’ve definitely swum in water < 60f. Not for more than two hours though.
Can vouch for this.
In my teens, I capsized and swamped a small sailing dinghy. It was November in Nova Scotia. Was probably only in the water 10-15 minutes, but was shivering uncontrollably when rescued by a passing keelboat.
I’ve also surfed on Long Island, NY in the winter, with a summer wetsuit augmented with a couple of neoprene vests. I could last about an hour before shivering would set in.
I've heard that sleeping directly on the ground can be very dangerous, so maybe that's how people die? The context was survival stuff in forests, and how you should make a bed of leaves to insulate yourself from the floor.
I think the issue only arises if you are sleeping, outside, on the ground, without adequate clothing or blanketing, and probably wet.
Same climatic background here. I think short-term adaptation of some sort must be a factor. A -35C week in late January is tolerable from a -25 baseline, but it feels much worse if it's unseasonably cold in November after a warm fall. (Higher humidity in that case though.) If prepared psychologically, though, -40 with no wind and zero humidity is kind of nice, and it's quite possible to overheat exercising in winter clothing at that temperature.
True. Walking on snow packed streets at -40 you get that cool crunching sound. With no wind it’s kinda fun. If I went for run in that kind of cold the sweat around my eyes freezes a bit between blinks. Get home with hoar frost on the legs of my sweats.
Doesn’t get down that low in the Twin Cities but my home town north of Duluth could get down to -50 F.
You were probably well wrapped up. As someone who got drenched on a sodden Irish mountain, temperature 10 C, wearing few layers and a useless summer coat, I can see why it could kill somebody. Luckily I didn't stay very long in it.
+1. I got soaked on a bike trip in Iceland at around 10C ambient. I needed to fix the bike, but I had so much panic and brain fog setting in from the hypothermia, I had to run along with the bike to warm myself up, then continue fixing it for a few minutes, repeat.
True! I wore several layers, and winter air is pretty dry in Canada, which helps.
Yeah, I grew up in Minnesota, and 50F seems like no big deal. But if you're malnourished and poorly insulated it could definitely kill you.
Agreed. Is the winter pretty dry in Minnesota? I have heard from friends in Toronto and the East coast that the winters there aren't as cold as Manitoba or Alberta, but in the East it's a damp cold that gets down to the bones, and it takes way longer to warm up when the air is cold and humid.
Nova Scotia here… Your friends are correct!
Just like a “dry heat” is more tolerable, same for a “dry cold”, it seems.
I'll believe in a wet cold when someone tells me how the relative humidity can be hovering near 100%, but still be "dry".
What you describe is true at very low temperatures, where air has so little moisture-carrying capacity that it is both dry (by absolute measure) AND effectively saturated (100% RH). But at 40-50F, which is the “wet cold” people experience in humid climates, air is capable of holding significant amounts of moisture. Air at 45F / 95% RH feels different to the human body than 45F / 10% RH. The nice thing about wet cold is that it’s easier to breathe than dry cold. But yeah… I would much rather sit outside for several hours in the dry cold. After a while the cold invades your winter layers in a wet cold.
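A back-of-the-envelope calculation shows both points at once (using the Magnus approximation for saturation vapor pressure; the specific temperatures and RH values are just illustrative):

import math

def e_sat_hpa(t_c):
    # Magnus approximation: saturation vapor pressure over water, in hPa
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

def abs_humidity_g_m3(t_c, rh):
    # vapor density from the ideal gas law: rho = e / (R_v * T)
    e_pa = e_sat_hpa(t_c) * rh * 100.0   # actual vapor pressure, Pa
    return e_pa / (461.5 * (t_c + 273.15)) * 1000.0

print(f"{abs_humidity_g_m3(-40, 1.00):.2f} g/m^3")  # ~0.18: saturated, yet bone dry
print(f"{abs_humidity_g_m3(7, 0.95):.2f} g/m^3")    # ~7.4: 45F/95% RH holds ~40x more water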
I’m not sure I understand your point, but the general tone indicates that you have never experienced the phenomenon personally.
Speaking as someone who has, I can report that it is indeed a real thing…at least subjectively.
Will there be "Learn French with ACX" and (most excitingly) "Learn English with ACX" posts after your trips to France/the UK? (apologies for referring to subscriber-only posts in public, feel free to delete if you don't like that)
French no, I was only in France ~2 days. English hopefully.
You will discover the English letter (no connection to French letters).
I am amused how we say cor anglais in French and French horn in English for the musical instruments.
Is this something like the names for syphilis?
" The English, German and Italians called it "the French disease", while the French referred to it as the "Neapolitan disease". The Dutch called it the "Spanish pocks" during the Dutch Revolt. To the Turks it was known as the "Christian disease", whilst in India, the Hindus and Muslims named the disease after each other."
It'd suggest the instrument wasn't well liked, either.
Similarly, in English, leaving without saying goodbye is "a French leave"; in French, it's "filer à l'anglaise".
I've always aspired to pull that off but I usually spoil the effect by announcing to people my intent to leave without saying goodbye
Huh, I'd always known that as an 'Irish goodbye'
I believe the "french letter" was known in France as an "English device."
Pairs well with "English vice".
Yes, the exact phrase is "capote anglaise"
Unfortunately, I believe the "cor anglais" is the French name for the instrument called "English horn" in English. However, it turns out that the name "cor anglais" is actually a folk etymology, and "cor anglé" is likely the original name, since it is an angled horn.
Gnostics have long seen the Bible or the canon as an instance of esoteric writing, but rarely are characters within the Bible submitted to Straussian readings.
Didn't call it Straussian in the essay itself, but did in the tweet:
https://twitter.com/ZoharAtkins/status/1451575808800313352
https://etzhasadeh.substack.com/p/the-conspiracy
Second link is broken.
https://etzhasadeh.substack.com/p/the-conspiracy-of-heaven-and-earth
I just watched Dune, and not having read the book or seen the Lynch version beforehand, I had one serious issue with it - I strongly agree with your first point: I do not understand why the Atreides are our protagonists.
The first scene of the movie is Zendaya (I don't know the character's name) narrating about how, while the Harkonnens are evil, whoever comes next will continue to oppress her people, since all anyone cares about is spice. The first introduction of Duke Leto - the scene where House Atreides receives stewardship of Dune - is a blatant Triumph of the Will homage, which is also used in every sci-fi movie ever to convey evil. Visually, it's like if the main character of The Force Awakens was Hux, and we're supposed to feel bad for him when Snoke is murdered. I don't think this is "complex" or "challenging"; the visual language of Triumph of the Will in science fiction movies is too on the nose for that. It is very possible that these characterizations of the Atreides will pay off in the sequels, but it made it hard for me to root for the characters.
So, do you consider the rebels to be the bad guys in the original Star Wars because they were the ones with the blatant "Triumph" homage?
There are a lot of useful and/or aesthetically pleasing things that happen to have been first invented by Nazis. It does not serve humanity to make those things forever off-limits to good people putting them to good uses.
Sure - I agree with this. I found the visual shorthand for "these are bad people" made it challenging for me to get emotionally involved with the characters.
Movies have to be judicious with their choices, even 2-hour-long ones, and I thought those two scenes failed to accomplish what the movie needed. If the Atreides are supposed to have some darkness, I think you could do it through the Duke's discussions with his military advisors, and if the scene is supposed to be them in full-throated martial glory, I think you could do that without the cliched allusion that in current media is almost always shorthand for "these are the bad guys". IIRC, this is the first scene the Duke is introduced in! It's confusing and challenging (probably intentionally), and it made it hard for me to emotionally relate to him.
I agree with you that these things should not be forever off limits - however it is important to understand that scenes, sounds, and visual metaphors all have meanings that are encoded by society, and that can impact how a work of fiction uses it. When Harry Styles talks about watermelon sugar, he's obviously talking about sex (independent of the other lyrics) in a way that Genesis 3 isn't.
Honestly, I think your mention of the rebels in Star Wars is an example of a good use! (assuming one recognizes the allusion and doesn't just think it looks cool) Firstly, it wasn't nearly as cliched at that point, and second, it comes at the end, after we've already established emotional investment in the alliance - if people start to think the rebels might have shades of grey, that's a great hook for a second movie. TFA, being a much worse movie, uses the allusion the way it's always used nowadays: to quickly indicate "these are bad people".
Useful context, thanks! Reading the first half of the novel has made me appreciate some of the tightrope Villeneuve was walking. [Some spoilers for the first part of the first book]
I think even adding the part of the spice harvester scene where the Duke gives his worm-spotting bounty to the crew might've tipped the scales for me. It felt (to me) like the movie over-indexed on "they're nothing more than another set of oppressors" versus "they're better masters".
It's interesting because some groups, specifically the Bene Gesserit, seem much more devious in the novels, and the fact that they've specifically seeded messiah stories for them to exploit is a much more explicit criticism (unless I missed this in the movie from the mumbled dialogue).